CN113405555B - Automatic driving positioning sensing method, system and device


Info

Publication number
CN113405555B
CN113405555B (application CN202110951607.8A)
Authority
CN
China
Prior art keywords
positioning
vehicle
information
module
image
Prior art date
Legal status (assumed, not a legal conclusion)
Active
Application number
CN202110951607.8A
Other languages
Chinese (zh)
Other versions
CN113405555A (en)
Inventor
应子阳
贺锦鹏
Current Assignee
Zhiji Automobile Technology Co Ltd
Original Assignee
Zhiji Automobile Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhiji Automobile Technology Co Ltd filed Critical Zhiji Automobile Technology Co Ltd
Priority to CN202110951607.8A priority Critical patent/CN113405555B/en
Publication of CN113405555A publication Critical patent/CN113405555A/en
Application granted granted Critical
Publication of CN113405555B publication Critical patent/CN113405555B/en
Legal status: Active

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30 Map- or contour-matching
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00 Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42 Determining position
    • G01S19/45 Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
    • G01S19/47 Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement the supplementary measurement being an inertial measurement, e.g. tightly coupled inertial

Abstract

The invention discloses an automatic driving positioning sensing method, system and device. The system comprises a positioning fusion module and is characterized in that: the positioning fusion module is connected with a road area camera module, receives information transmitted from the road area camera module, generates first image information and extracts first feature points; the positioning fusion module is connected with a vehicle-mounted camera module, receives information transmitted from the vehicle-mounted camera module, generates second image information and extracts second feature points; the positioning fusion module is connected with an inertial navigation module and receives relative positioning information from the inertial navigation module; and the positioning fusion module compares the first feature points and the second feature points respectively with the map data in the high-precision map database, and then uses the comparison result to calibrate the error of the relative positioning information, obtaining the actual position, direction and travel track of the vehicle. The invention speeds up positioning initialization while reducing the positioning error, making positioning more accurate.

Description

Automatic driving positioning sensing method, system and device
Technical Field
The present invention relates to a positioning method, system and device, and more particularly, to a positioning sensing method, system and device for automatic driving.
Background
The existing technical approaches to high-precision vehicle positioning are: Global Navigation Satellite Systems (GNSS), Inertial Navigation Systems (INS), visual simultaneous localization and mapping (VSLAM), vehicle models built from on-board vehicle sensors, navigation/high-precision maps (SD/HD Map), millimeter-wave radar, laser radar (LiDAR), and the like.
However, each of the above techniques has certain shortcomings and defects in positioning, for example:
GNSS: relying on GNSS alone cannot provide highly accurate positioning results; the error is typically between 7.5 m and 10 m. With the development of GNSS, various augmentation systems and services have emerged to improve its performance, such as differential systems, ground-based augmentation systems and satellite-based augmentation systems. Real-time kinematic (RTK) technology uses carrier-phase differential techniques and a reference station to improve positioning precision, reaching centimeter level at best. However, even with the aid of RTK, if too few satellites are observable, or none at all, the positioning result provided by GNSS still carries a large error. In short, the positioning performance of GNSS (RTK) is limited by the number of satellites observable for the solution and by satellite signal quality.
Inertial Navigation System (INS): the Inertial Measurement Unit (IMU) inside an INS has zero-bias errors, so the error of the navigation result computed by the INS grows over time; an INS can therefore provide a high-precision navigation result only over a short period.
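To make the drift concrete: integrating a constant accelerometer bias twice yields a position error of 0.5*b*t^2, which grows quadratically with time. A minimal sketch in Python (the bias value is an assumption chosen for illustration, not a figure from the patent):

```python
# Position error from a constant accelerometer bias, integrated twice:
# e(t) = 0.5 * b * t^2 (quadratic growth, which is why INS-only
# positioning is reliable only over short periods).
bias = 0.005  # m/s^2, an assumed zero-bias for a consumer-grade IMU

for t in (1.0, 10.0, 60.0, 600.0):  # seconds
    error = 0.5 * bias * t ** 2
    print(f"t = {t:6.0f} s  ->  position error ~ {error:8.2f} m")
# t = 1 s: 0.0025 m;  t = 60 s: 9 m;  t = 600 s: 900 m
```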
Visual simultaneous localization and mapping (VSLAM): based on visual perception, i.e. data provided by a camera, VSLAM extracts feature points from each image frame and matches the feature points of adjacent frames, thereby completing a local position estimate. However, the motion estimated between two adjacent frames carries an error; as frames accumulate, these errors propagate and the drift of the trajectory becomes more and more severe. Even with the many methods for optimizing VSLAM errors, cameras remain vulnerable to external influences, such as extreme weather (fog or rain) and scenes of alternating light and dark (entering and leaving tunnels or basements).
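A minimal sketch of the adjacent-frame feature matching that VSLAM rests on, using OpenCV ORB features (the frame file names and parameter values are assumptions for illustration):

```python
import cv2

# Load two adjacent grayscale frames (file paths are assumed for illustration).
frame_prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
frame_curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Extract ORB feature points and descriptors in each frame.
orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(frame_prev, None)
kp2, des2 = orb.detectAndCompute(frame_curr, None)

# Match descriptors between the adjacent frames (Hamming distance for ORB).
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# The matched point pairs feed the inter-frame motion estimate; each
# estimate carries a small error that accumulates into trajectory drift.
print(f"{len(matches)} matches between adjacent frames")
```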
Vehicle sensors: similar to an Inertial Navigation System (INS), vehicle sensors obtain a vehicle position by dead reckoning from data such as the wheel odometer and the steering-wheel angle sensor. Positioning errors arise both from the sensors themselves and from external factors during driving, such as tire slip and road surface flatness, and these errors grow over time.
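A minimal dead-reckoning sketch under an assumed kinematic bicycle model (the wheelbase and the measurement stream are illustrative assumptions; the patent does not prescribe a particular vehicle model here):

```python
import math

WHEELBASE = 2.9  # m, assumed wheelbase

def dead_reckon(x, y, yaw, speed, steer, dt):
    """Propagate the pose from wheel speed and steering angle
    (kinematic bicycle model); errors accumulate over time."""
    x += speed * math.cos(yaw) * dt
    y += speed * math.sin(yaw) * dt
    yaw += speed / WHEELBASE * math.tan(steer) * dt
    return x, y, yaw

# Assumed measurement stream: (wheel speed m/s, steering angle rad) at 10 Hz.
pose = (0.0, 0.0, 0.0)
for speed, steer in [(10.0, 0.0), (10.0, 0.02), (10.0, 0.02)]:
    pose = dead_reckon(*pose, speed, steer, dt=0.1)
print(pose)  # x, y, yaw after three steps
```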
Existing high-precision positioning schemes for automatic driving fuse different sensors in the positioning solution so that their respective errors compensate one another. For example, GNSS/INS fused positioning uses an Inertial Measurement Unit (IMU) to cover scenarios where GNSS fails, but because of the characteristics of an INS, positioning precision can only be maintained for a short period. In addition, hardware may be unexpectedly degraded by external factors such as dirt, vibration and temperature, and in order to cover more areas, a multi-sensor fusion scheme should also incorporate more driving/road information.
Disclosure of Invention
Addressing the shortcomings of the various positioning technologies in the prior art, the invention provides an automatic driving positioning sensing method, system and device that at least reduce the influence of external factors on the sensors and improve positioning accuracy.
In order to achieve the purpose, the invention adopts the following technical scheme:
the utility model provides an automatic location sensing system of driving, fuses the module including the location, its characterized in that: the positioning fusion module is connected with the road area camera module, receives information transmitted from the road area camera module, generates first image information and extracts first characteristic points; the positioning fusion module is connected with the vehicle-mounted camera module, receives information transmitted from the vehicle-mounted camera module, generates second image information and extracts second characteristic points; the positioning fusion module is connected with the vehicle-mounted positioning module and receives absolute positioning information of the vehicle-mounted positioning module; the positioning fusion module is connected with the inertial navigation module and receives relative positioning information of the inertial navigation module; the positioning fusion module is connected with the high-precision map and receives map data of the high-precision map; and the positioning fusion module performs temporary map building according to the absolute positioning information, the relative positioning information, the second image information and the second characteristic points, performs matching positioning on the temporary map building and map data to obtain vehicle position information, and performs closed-loop verification on the vehicle position information, the first image information and the first characteristic points to obtain the actual position, direction and running track of the vehicle.
As an embodiment of the present invention, the first image information includes one of image information, position information, and own vehicle information of the vehicle, or a combination thereof, and the first feature point includes a feature image about the position information of the vehicle in the first image; the second image information includes environmental information around the vehicle, and the second feature point includes a feature image regarding vehicle position information in the second image.
As an embodiment of the present invention, the system further includes a vehicle sensor that provides the wheel speed and steering angle of the vehicle.
As an embodiment of the present invention, the relative positioning information includes an angular velocity and an acceleration of the vehicle.
As an implementation mode of the invention, the positioning fusion module constructs a vehicle dynamics and kinematics model from the wheel speed and steering angle supplied by the vehicle sensor; the positioning fusion module takes the difference between the absolute positioning information and the relative positioning information as the observation model, uses a Kalman filter as the positioning basis to obtain positioning results for the vehicle at the current and next moments, and further compensates the Kalman filter's positioning result based on the vehicle dynamics and kinematics model.
In order to achieve the purpose, the invention also adopts the following technical scheme:
an autopilot position sensing method, comprising: acquiring road area camera information, generating first image information and extracting first feature points; acquiring vehicle-mounted camera information, generating second image information and extracting second feature points; acquiring absolute positioning information of vehicle-mounted positioning; collecting relative positioning information of inertial navigation; receiving map data of a high-precision map; and performing temporary map building according to the absolute positioning information, the relative positioning information, the second image information and the second characteristic points, matching and positioning the temporary map building and map data to obtain vehicle position information, and performing closed-loop verification on the vehicle position information, the first image information and the first characteristic points to obtain the actual position, direction and running track of the vehicle.
As an embodiment of the present invention, the first image information includes one of image information, position information, and own vehicle information of the vehicle, or a combination thereof, and the first feature point includes a feature image about the position information of the vehicle in the first image; the second image information includes environmental information around the vehicle, and the second feature point includes a feature image regarding vehicle position information in the second image.
As an embodiment of the present invention, the relative positioning information includes an angular velocity and an acceleration of the vehicle.
As an embodiment of the invention, a vehicle dynamics and kinematics model is constructed from the wheel speed and steering angle of the vehicle; the difference between the absolute positioning information and the relative positioning information is taken as the observation model, a Kalman filter is used as the positioning basis to obtain the positioning results of the vehicle at the current and next moments, and the Kalman filter's positioning result is further compensated based on the vehicle dynamics and kinematics model.
In order to achieve the purpose, the invention also adopts the following technical scheme:
an autopilot position sensing apparatus for performing the method of the present invention.
With the above technical solutions, the method and device of the invention speed up positioning initialization and reduce the positioning error, making positioning more accurate.
Drawings
FIG. 1 is an architectural diagram of a system according to a first aspect of the invention;
FIG. 2 is a flow chart of one embodiment of the method of the present invention;
FIG. 3 is an expanded view of a detailed flow of the embodiment of FIG. 2;
FIG. 4 is an expanded view of a detailed flow of the embodiment of FIG. 2;
FIG. 5 is an expanded view of a detailed flow of the embodiment of FIG. 2;
FIG. 6 is an expanded view of a detailed flow of the embodiment of FIG. 2;
FIG. 7 is a system architecture diagram of a second aspect of the present invention;
FIG. 8 is a flow chart of one embodiment of the method of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention is further described clearly and completely below with reference to the accompanying drawings and embodiments. The described embodiments serve to explain the technical solution of the invention and do not exhaust all embodiments of the invention.
Examples are illustrated in the accompanying drawings, wherein like reference numerals refer throughout to the same or similar elements or to elements having the same or similar functions. The embodiments described below with reference to the drawings are illustrative, intended to explain the invention, and are not to be construed as limiting it.
Referring to fig. 1, according to a first aspect of the present invention, the present invention discloses an automatic driving positioning sensing system, which is mainly composed of a positioning fusion module 11, a vehicle-mounted positioning module 12, a road area camera module 13, an inertial navigation module 14, a vehicle-mounted camera module 15, a high-precision map database 16, and the like.
As shown in fig. 1, in terms of module connections, the positioning fusion module 11 is connected to the vehicle-mounted positioning module 12, the road area camera module 13, the inertial navigation module 14, the vehicle-mounted camera module 15 and the high-precision map database 16, respectively. In terms of data transmission, the positioning fusion module 11 receives the information transmitted from the road area camera module 13, the information transmitted from the vehicle-mounted camera module 15, the absolute positioning information of the vehicle-mounted positioning module 12, the relative positioning information of the inertial navigation module 14, and the map data in the high-precision map database 16.
As an embodiment of the present invention, the inertial navigation module 14 is an Inertial Navigation System (INS), an autonomous navigation system that neither depends on external information nor radiates energy outward. The inertial navigation module 14 contains an Inertial Measurement Unit (IMU), and its basic working principle follows Newton's laws of mechanics: by measuring the carrier's acceleration in an inertial reference frame, integrating it over time and transforming it into the navigation coordinate frame, information such as velocity, yaw angle and position in the navigation frame can be obtained. The data provided by the inertial navigation module 14 in the present invention is therefore relative positioning information.
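A minimal planar sketch of this integration principle (body-frame acceleration rotated into the navigation frame, then integrated twice); the 2D simplification and all sample values are assumptions for illustration:

```python
import math

def ins_step(px, py, vx, vy, yaw, ax_body, ay_body, yaw_rate, dt):
    """One strapdown integration step: rotate body-frame acceleration
    into the navigation frame, then integrate to velocity and position."""
    ax = ax_body * math.cos(yaw) - ay_body * math.sin(yaw)
    ay = ax_body * math.sin(yaw) + ay_body * math.cos(yaw)
    vx += ax * dt
    vy += ay * dt
    px += vx * dt
    py += vy * dt
    yaw += yaw_rate * dt
    return px, py, vx, vy, yaw

# Assumed IMU samples at 100 Hz: gentle acceleration with a slight turn.
state = (0.0, 0.0, 0.0, 0.0, 0.0)
for _ in range(100):
    state = ins_step(*state, ax_body=0.5, ay_body=0.0, yaw_rate=0.01, dt=0.01)
print(state)  # position, velocity and yaw after 1 s
```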
In contrast, the vehicle-mounted positioning module 12 is a Global Navigation Satellite System (GNSS) receiver; as other embodiments of the present invention it may be GPS, BeiDou (Compass) or another navigation system. It provides accurate positioning information, so the data provided by the vehicle-mounted positioning module 12 in the present invention is absolute positioning information.
With reference to fig. 2-6, the invention further discloses how the method of the first aspect of the invention is performed using the system of the first aspect of the invention.
Fig. 2 illustrates the general idea of the method of the invention. The invention first executes step S1, the step of processing the information collected by the road area camera module 13. As shown in fig. 2, the positioning fusion module 11 receives and processes the information transmitted from the road area camera module 13, generates the first image information and extracts the first feature points.
Meanwhile, step S2 is executed, the step of processing the information collected by the vehicle-mounted camera module 15. As shown in fig. 2, the positioning fusion module 11 receives and processes the information transmitted from the vehicle-mounted camera module 15, generates the second image information and extracts the second feature points.
Meanwhile, step S3 is executed, the step of receiving the data collected by the inertial navigation module 14. As shown in fig. 2, the positioning fusion module 11 receives the relative positioning information of the inertial navigation module 14.
As a preferred embodiment of the present invention, the positioning fusion module 11 is synchronously connected to the road area camera module 13, the vehicle-mounted camera module 15 and the inertial navigation module 14, so that the positioning fusion module 11 can synchronously process the first feature point of the first image information and the second feature point of the second image information, and then perform error calibration on the processed first feature point and the processed second feature point and the relative positioning information.
Those skilled in the art can understand that the positioning fusion module 11 may execute the steps S1, S2, and S3 synchronously, or selectively execute the steps S1, S2, and S3 according to a certain sequence, so as to achieve the technical purpose and the technical effect of the present invention. The present invention preferably performs steps S1, S2, and S3 synchronously, so that the first image information, the first feature point, the second image information, the second feature point, and the relative positioning information can be obtained simultaneously in parallel, and the system architecture of the present invention can support such a parallel flow.
After the positioning fusion module 11 has executed steps S1, S2 and S3, the first image information, the first feature points, the second image information, the second feature points and the relative positioning information have been obtained. At this point step S4 is executed, the step in which the positioning fusion module 11 processes the comparison data together with the data collected by the inertial navigation module 14. As shown in fig. 2, the positioning fusion module 11 compares the first feature points and the second feature points respectively with the map data in the high-precision map database, and then uses the comparison result to calibrate the error of the relative positioning information, obtaining the actual position, direction and travel track of the vehicle.
With further reference to fig. 3, fig. 3 illustrates a specific implementation method of step S1.
Firstly, step S1.1 is executed, the positioning fusion module 11 is connected to the vehicle-mounted positioning module 12, and receives absolute positioning information of the vehicle-mounted positioning module 12, for example, positioning that the vehicle is located in a certain area by using GPS.
After obtaining the absolute positioning information, step S1.2 is performed, and the positioning fusion module 11 connects the road area camera module 13 in the positioning area.
After the connection is established, step S1.3 is executed, and the positioning fusion module 11 receives the initial image information of the road area camera module 13.
After the initial image information is obtained, step S1.4 is executed: the positioning fusion module 11 locates the image information relating to the host vehicle within the initial image information.
Then step S1.5 is executed: the positioning fusion module 11 compares the absolute positioning information with the information transmitted by the road area camera module 13, and identifies and extracts the image information about the host vehicle from the initial image information; the information meeting this condition is the first image information.
After the first image information is obtained, step S1.6 is executed, and the positioning fusion module 11 extracts the first feature point in the first image information.
As an embodiment of the present invention, the first image information includes one of, or a combination of, image information of the vehicle, position information and own-vehicle information, such as a license plate, a specific area and a position. Step S1.5 can therefore identify the host vehicle's image information by, for example, the license plate number. The first feature points, in turn, include feature images relating to the vehicle position information in the first image, such as lane lines, road signs and the capture time point.
It will be understood by those skilled in the art that the foregoing list is only illustrative of the technical aspects of the present invention, and is not intended to limit the present invention. In other embodiments of the present invention, the first image information may include other data that can be used to identify the vehicle information, and the first feature point may also include other data that can identify the vehicle position, which are all within the scope of the present invention.
The positioning fusion module 11 completes the generation of the first image information and the extraction of the first feature point through steps S1.1-S1.6.
Referring to fig. 4, fig. 4 illustrates a specific implementation method of step S2.
Step S2.1 is executed first, and the vehicle-mounted camera module 15 acquires image information around the vehicle, where the image information is the second image information.
After the second image information is obtained, step S2.2 is executed: the positioning fusion module 11 receives the second image information.
After receiving the second image information, step S2.3 is executed: the positioning fusion module 11 extracts, from the second image information, the second feature points of the environmental information around the host vehicle.
As one embodiment of the present invention, the second image information includes environmental information around the vehicle, and the second feature point includes a feature image on the vehicle position information in the second image. However, it will be understood by those skilled in the art that the foregoing list is only for the purpose of illustrating the technical aspects of the present invention, and is not intended to limit the present invention.
Referring to fig. 5, fig. 5 illustrates a specific implementation method of step S3.
Step S3.1 is first performed, and the inertial navigation module 14 collects angular velocity and acceleration data of the vehicle.
Thereafter, step S3.2 is executed: the positioning fusion module 11 receives the angular velocity and acceleration data.
In steps S3.1 and S3.2, the inertial navigation module 14 collects the angular velocity and acceleration data of the vehicle and the positioning fusion module 11 receives them; the positioning fusion module 11 then computes, from two adjacent frames of IMU data, the IMU error data, i.e. the error of the relative positioning information.
Referring to fig. 6, fig. 6 illustrates a specific implementation method of step S4.
First, step S4.1 is executed, and the positioning fusion module 11 compares the first feature point of the first image information and the second feature point of the second image information with the feature points of different road section positions in the map data of the high-precision map database 16, respectively.
Following the comparison, step S4.2 is executed: the positioning fusion module 11 processes the comparison data together with the data collected by the inertial navigation module 14, calibrates the error of the inertial navigation module 14 (i.e. the error of the relative positioning information), and finally obtains the actual position and direction of the vehicle and draws the vehicle's travel track.
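A minimal sketch of what the comparison and calibration in steps S4.1-S4.2 could look like: observed feature points are associated with HD-map feature points by a nearest-neighbour search, and the mean offset serves as the estimate of the relative positioning error (the data layout and the simple averaging rule are assumptions for illustration):

```python
import numpy as np
from scipy.spatial import cKDTree

# Assumed feature-point positions (x, y) in map coordinates.
map_features = np.array([[10.0, 5.0], [12.0, 7.5], [15.0, 5.5]])
observed = np.array([[10.4, 5.3], [12.5, 7.9], [15.3, 5.8]])  # from images + INS pose

# Nearest-neighbour association between observed and map feature points.
tree = cKDTree(map_features)
dist, idx = tree.query(observed)

# Use the mean observed-vs-map offset as the estimate of the INS position error.
offset = (observed - map_features[idx]).mean(axis=0)
ins_position = np.array([100.0, 200.0])   # assumed INS position estimate
corrected = ins_position - offset         # calibrated position
print(f"estimated INS error {offset}, corrected position {corrected}")
```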
As can be seen from fig. 2 to 6, step S4 summarizes the data collected and previously processed in steps S1-S3, and finally obtains the actual position, direction and trajectory of the vehicle through data comparison and processing.
From the above solutions it can be seen that, unlike the prior art in which positioning information is simply fused (for example, INS positioning data fused on a GNSS positioning basis, or vehicle sensor data fused on a GNSS positioning basis), the system and method of the first aspect of the invention draw on the strengths of each positioning system and use them together. They apply the most common absolute positioning information (e.g. GNSS positioning) to the generation of the first image information and the first feature points, rather than directly fusing other auxiliary positioning information with the GNSS positioning information.
On the other hand, the invention does not apply the relative positioning information (for example, inertial navigation INS data) directly to the error correction of GNSS positioning; instead, on the basis of a comprehensive comparison of the first image information, the first feature points, the second image information and the second feature points, it uses the comparison data to correct the error of the relative positioning information.
Through these improvements to the two flows, the system and method of the first aspect of the invention change how the conventional technique processes the same data: they speed up positioning, especially positioning initialization when the vehicle starts up, and they also reduce the positioning error, making positioning more accurate.
Referring to fig. 7, according to a second aspect of the present invention, the present invention further discloses an automatic driving positioning sensing system, which mainly comprises modules similar to the first aspect of the present invention, such as a positioning fusion module 21, an on-vehicle positioning module 22, a road area camera module 23, an inertial navigation module 24, an on-vehicle camera module 25, a high-precision map database 26, and a vehicle sensor 27.
As shown in fig. 7, in terms of module connections, the positioning fusion module 21 is connected to the vehicle-mounted positioning module 22, the road area camera module 23, the inertial navigation module 24, the vehicle-mounted camera module 25, the high-precision map database 26 and the vehicle sensor 27. In terms of data transmission, the positioning fusion module 21 receives the information transmitted from the road area camera module 23, the information transmitted from the vehicle-mounted camera module 25, the absolute positioning information of the vehicle-mounted positioning module 22, the relative positioning information of the inertial navigation module 24, the map data in the high-precision map database 26, and the wheel speed and steering angle information from the vehicle sensor 27.
The selection of the inertial navigation module 24 and the onboard positioning module 22 is the same as the first aspect of the present invention, and relative positioning information and absolute positioning information are provided, respectively, and will not be described herein.
With reference to fig. 8, the invention further discloses how to perform the method of the second aspect of the invention with the system of the second aspect of the invention.
As shown in fig. 8, step S5 is executed first: the positioning fusion module 21 receives the information transmitted from the vehicle-mounted camera module 25, generates the second image information and extracts the second feature points. As an embodiment of the present invention, from the second image information of the vehicle-mounted camera module 25 the positioning fusion module 21 extracts second feature points in the image, such as lane line markings, road information and the distance of the lane lines on both sides from the host vehicle, and performs inter-frame relative pose estimation and pose-increment constraints.
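A minimal sketch of the inter-frame relative pose estimation, recovering the rotation and translation direction between two adjacent frames from matched feature points via the essential matrix (the camera intrinsics and the synthetic correspondences are assumptions for illustration):

```python
import cv2
import numpy as np

# Assumed pinhole intrinsics (fx, fy, cx, cy) for the on-board camera.
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])

def project(points, R, t):
    """Project world points into pixel coordinates for a camera at (R, t)."""
    cam = points @ R.T + t
    uv = cam @ K.T
    return uv[:, :2] / uv[:, 2:]

# Synthetic scene standing in for matched feature points from two adjacent
# frames (in practice these come from feature matching, e.g. ORB).
rng = np.random.default_rng(0)
pts3d = rng.uniform([-5.0, -2.0, 8.0], [5.0, 2.0, 20.0], size=(60, 3))
R_true, _ = cv2.Rodrigues(np.array([0.0, 0.05, 0.0]))  # slight yaw between frames
t_true = np.array([0.2, 0.0, 1.0])                     # forward motion
pts_prev = project(pts3d, np.eye(3), np.zeros(3))
pts_curr = project(pts3d, R_true, t_true)

# Essential matrix with RANSAC, then recover the inter-frame pose increment:
# rotation R and translation direction t (scale is unobservable from images alone).
E, inliers = cv2.findEssentialMat(pts_prev, pts_curr, K,
                                  method=cv2.RANSAC, prob=0.999, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts_prev, pts_curr, K, mask=inliers)
print("relative rotation:\n", R)
print("translation direction:", t.ravel())
```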
Meanwhile, step S6 is executed, and the positioning fusion module 21 receives the absolute positioning information of the vehicle-mounted positioning module 22.
Meanwhile, step S7 is executed, and the positioning fusion module 21 receives the relative positioning information of the inertial navigation module 24.
As an embodiment of the present invention, the inertial navigation module 24 provides data such as the position, velocity and attitude of the vehicle, and the INS positioning result serves as the state model. While executing steps S6 and S7, the positioning fusion module 21 designs the corresponding state-transition matrix based on the INS error model, the corresponding input control matrix based on the acceleration and angular velocity measured by the Inertial Measurement Unit (IMU), and the corresponding process-noise covariance matrix based on the IMU's noise errors. The INS model thus makes its prediction while accounting for sensor errors; the difference between the Global Navigation Satellite System (GNSS) positioning result and the INS positioning result is taken as the observation model, and the corresponding measurement matrix and measurement-noise covariance matrix are constructed based on the GNSS errors.
Meanwhile, step S8 is executed: the positioning fusion module 21 receives the map data of the high-precision map database 26.
In one embodiment of the present invention, after the models corresponding to steps S6 and S7 have been constructed (i.e. the difference between the absolute positioning information and the relative positioning information is used as the observation model), the positioning fusion module 21 uses a Kalman filter as the basis for fused positioning to obtain the positioning results of the host vehicle at the current and next moments. At the same time, based on the vehicle dynamics and kinematics model constructed from the vehicle sensor 27, the Kalman filter's positioning result is compensated, making the positioning result more accurate.
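A minimal linear Kalman-filter sketch of this fusion, with the GNSS-INS position difference as the observation (all matrices, noise levels and the measurement are illustrative assumptions; the patent's full design also includes attitude states, the IMU-driven input control matrix and the vehicle-model compensation):

```python
import numpy as np

dt = 0.1
# State: INS position and velocity error in 2D [dpx, dpy, dvx, dvy].
F = np.block([[np.eye(2), dt * np.eye(2)],
              [np.zeros((2, 2)), np.eye(2)]])   # state transition (INS error model)
H = np.hstack([np.eye(2), np.zeros((2, 2))])    # observe the position error only
Q = 1e-3 * np.eye(4)                            # process noise (IMU noise, assumed)
R = 2.0 * np.eye(2)                             # measurement noise (GNSS, assumed)

x = np.zeros(4)                                 # estimated INS error
P = np.eye(4)                                   # error covariance

def kf_step(x, P, z):
    """One predict/update cycle; z = GNSS position - INS position."""
    x = F @ x                                   # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                         # update
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

z = np.array([1.2, -0.8])                       # assumed GNSS - INS difference (m)
x, P = kf_step(x, P, z)
print("estimated INS position error:", x[:2])   # subtracted from the INS output
```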
Those skilled in the art can understand that the positioning fusion module 21 may synchronously execute steps S5, S6, S7, and S8, and may selectively execute steps S5, S6, S7, and S8 according to a certain sequence, so as to achieve the technical purpose of the present invention and achieve the technical effect of the present invention.
After the positioning fusion module 21 receives the data of each module, step S9 is executed to perform temporary mapping according to the absolute positioning information, the relative positioning information, the second image information, and the second feature point, and perform matching positioning between the temporary mapping and the map data, thereby obtaining the vehicle position information.
After the vehicle position information is obtained, step S10 is executed: the positioning fusion module 21 receives the information transmitted from the road area camera module 23, generates the first image information and extracts the first feature points.
As shown in fig. 7 and 8, the positioning fusion module 21 may repeatedly request information from the road area camera module 23, and the road area camera module 23 may repeatedly feed results back to the positioning fusion module 21. On receiving a request, the road area camera module 23 extracts images relating to the vehicle in its area according to the received positioning information and vehicle information, such as the license plate and the area in which the vehicle is locked on. By extracting feature points from those images, such as lane lines, road markings and the image capture time points, the location, speed, historical track and travel direction of the vehicle can be determined. The road area camera module 23 feeds the extracted feature points and time points, together with the vehicle's positioning information, speed, historical track and travel direction, back to the vehicle-mounted positioning module, helping the vehicle update and correct its own positioning, for example checking whether its judgment of the current lane is accurate; and when the vehicle is in a cold start or has lost positioning, the information provided by this module supports fast initial positioning.
Then, step S11 is executed, and the positioning fusion module 21 performs closed-loop verification on the vehicle position information, the first image information, and the first feature point to obtain an actual position, an actual direction, and an actual movement trajectory of the vehicle.
As a preferred embodiment of the present invention, the positioning fusion module 21 may execute step S10 synchronously with steps S5, S6, S7, and S8, or execute step S10 sequentially after step S9 is executed, so as to achieve the technical objects and effects of the present invention.
The specific meanings of the first image information, the first feature point, the second image information, and the second feature point are the same as described above. In addition, the positioning fusion module 21, the vehicle-mounted positioning module 22, the road region camera module 23, the inertial navigation module 24, the vehicle-mounted camera module 25, the high-precision map database 26 and the vehicle sensor 27 may perform the same steps as those of the first aspect of the present invention, and are not described herein again.
Taking steps S5-S11 together: after the positioning fusion module 21 receives the GNSS/INS/vehicle-sensor/visual positioning information, it holds the absolute and relative positioning information of the vehicle as well as information about the vehicle and its surroundings while driving. The positioning fusion module 21 builds a temporary map from this information and matches it against the high-precision map data, so that it knows the vehicle's full historical track, the vehicle information (speed, heading, etc.) and the information fused with the high-precision map. At this point the positioning fusion module 21 knows the area the vehicle is in, so it can send a receiving request to the road area camera module 23 of that area and, after receiving that module's data, perform closed-loop verification against the vehicle's fused and map-matched data. This enables fast initial positioning in scenarios such as loss of positioning lock, high-precision map failure and vehicle cold start, and the closed-loop verification of the area camera module also assists the positioning performance of the on-board positioning fusion module.
From the above solutions it can be seen that, unlike the prior art in which positioning information is simply fused (for example, INS positioning data fused on a GNSS positioning basis, or vehicle sensor data fused on a GNSS positioning basis), the system and method of the second aspect of the invention draw on the strengths of each positioning system and use them together.
Although the system and method of the second aspect of the invention also correct between the absolute positioning information (e.g. GNSS positioning) and the relative positioning information (e.g. INS positioning), the difference between the two is used as the observation model and as the basis of the Kalman filter, and the vehicle sensor data are further combined to compensate the Kalman filter. This is the first level of data.
In addition, the system and method of the second aspect of the invention generate the second image information and second feature points from the environmental information and then combine them with the map data of the high-precision map as the second level of data. Fusing the first-level data with the second-level data yields the third level of data. Finally, the first image information and first feature points serve as the fourth level of data for closed-loop verification against the third-level data.
The system and method of the second aspect of the invention thus construct a four-level data processing flow, changing how the conventional technique processes the same data: the specific way the data are combined speeds up positioning, especially positioning initialization when the vehicle starts up, and also reduces the positioning error, making positioning more accurate.
According to another aspect of the invention, the invention also discloses an automatic driving positioning sensing device.
The apparatus of the present invention may be a computer-readable storage medium storing a computer program, wherein the computer program causes a computer to execute any one of the autonomous driving location sensing method of fig. 2 to 6, or the autonomous driving location sensing method of fig. 8.
The apparatus of the present invention may be a computer program product, wherein the computer program product, when run on a computer, causes the computer to perform some or all of the steps of the method as in the above method embodiments.
The apparatus of the present invention may be an application publishing platform for publishing a computer program product, wherein the computer program product, when run on a computer, causes the computer to perform some or all of the steps of a method as in the above method embodiments.
In various embodiments of the present invention, it should be understood that the sequence numbers of the above-mentioned processes do not imply an inevitable order of execution, and the execution order of the processes should be determined by their functions and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
Those skilled in the art will appreciate that all or part of the steps of the various illustrated embodiments of the invention may be performed by related hardware as instructed by a computer program, which may be stored centrally or distributed on one or more computer devices, such as on a readable storage medium. The computer devices include read-only memory (ROM), random access memory (RAM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), one-time programmable read-only memory (OTPROM), electrically erasable programmable read-only memory (EEPROM), compact disc read-only memory (CD-ROM) or other optical disc storage, magnetic disk storage, tape storage, or any other computer-readable medium capable of carrying or storing data.
It should be understood by those skilled in the art that the above embodiments are only for illustrating the present invention and are not to be used as a limitation of the present invention, and that changes and modifications to the above described embodiments are within the scope of the claims of the present invention as long as they are within the spirit and scope of the present invention.

Claims (10)

1. An automatic driving positioning sensing system comprising a positioning fusion module, characterized in that:
the positioning fusion module is connected with the road area camera module, receives information transmitted from the road area camera module, generates first image information and extracts first feature points;
the positioning fusion module is connected with the vehicle-mounted camera module, receives information transmitted from the vehicle-mounted camera module, generates second image information and extracts second feature points;
the positioning fusion module is connected with the vehicle-mounted positioning module and receives absolute positioning information of the vehicle-mounted positioning module;
the positioning fusion module is connected with the inertial navigation module and receives relative positioning information of the inertial navigation module;
the positioning fusion module is connected with the high-precision map and receives map data of the high-precision map;
the positioning fusion module builds a temporary map from the absolute positioning information, the relative positioning information, the second image information and the second feature points, matches the temporary map against the map data to obtain vehicle position information, and performs closed-loop verification on the vehicle position information together with the first image information and the first feature points to obtain the actual position, direction and travel track of the vehicle.
2. The automatic driving positioning sensing system of claim 1, wherein:
the first image information comprises one of, or a combination of, image information of the vehicle, position information and own-vehicle information, and the first feature points comprise feature images relating to the vehicle position information in the first image;
the second image information comprises environmental information around the vehicle, and the second feature points comprise feature images relating to the vehicle position information in the second image.
3. The automatic driving positioning sensing system of claim 1, further comprising:
a vehicle sensor providing the wheel speed and steering angle of the vehicle.
4. The automatic driving positioning sensing system of claim 3, wherein:
the relative positioning information includes an angular velocity and an acceleration of the vehicle.
5. The automatic driving positioning sensing system of claim 4, wherein:
the positioning fusion module constructs a vehicle dynamics and kinematics model from the wheel speed and steering angle of the vehicle sensor;
the positioning fusion module takes the difference between the absolute positioning information and the relative positioning information as the observation model, uses a Kalman filter as the positioning basis to obtain the positioning results of the vehicle at the current and next moments, and further compensates the Kalman filter's positioning result based on the vehicle dynamics and kinematics model.
6. An automatic driving positioning sensing method, comprising:
acquiring road area camera information, generating first image information and extracting first feature points;
acquiring vehicle-mounted camera information, generating second image information and extracting second feature points;
acquiring absolute positioning information of vehicle-mounted positioning;
collecting relative positioning information of inertial navigation;
receiving map data of a high-precision map;
and building a temporary map from the absolute positioning information, the relative positioning information, the second image information and the second feature points, matching the temporary map against the map data to obtain vehicle position information, and performing closed-loop verification on the vehicle position information together with the first image information and the first feature points to obtain the actual position, direction and travel track of the vehicle.
7. The automatic driving positioning sensing method of claim 6, wherein:
the first image information comprises one of, or a combination of, image information of the vehicle, position information and own-vehicle information, and the first feature points comprise feature images relating to the vehicle position information in the first image;
the second image information comprises environmental information around the vehicle, and the second feature points comprise feature images relating to the vehicle position information in the second image.
8. The automatic driving positioning sensing method of claim 6, wherein:
the relative positioning information includes an angular velocity and an acceleration of the vehicle.
9. The automatic driving positioning sensing method of claim 8, wherein:
constructing a vehicle dynamics and kinematics model from the wheel speed and steering angle of the vehicle;
and taking the difference between the absolute positioning information and the relative positioning information as the observation model, using a Kalman filter as the positioning basis to obtain the positioning results of the vehicle at the current and next moments, and further compensating the Kalman filter's positioning result based on the vehicle dynamics and kinematics model.
10. An automatic driving positioning sensing device, characterized in that the device performs the method of any one of claims 6-9.
CN202110951607.8A 2021-08-19 2021-08-19 Automatic driving positioning sensing method, system and device Active CN113405555B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110951607.8A CN113405555B (en) 2021-08-19 2021-08-19 Automatic driving positioning sensing method, system and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110951607.8A CN113405555B (en) 2021-08-19 2021-08-19 Automatic driving positioning sensing method, system and device

Publications (2)

Publication Number Publication Date
CN113405555A (en) 2021-09-17
CN113405555B (en) 2021-11-23

Family

ID=77688643

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110951607.8A Active CN113405555B (en) 2021-08-19 2021-08-19 Automatic driving positioning sensing method, system and device

Country Status (1)

Country Link
CN (1) CN113405555B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115277286B (en) * 2022-06-10 2023-12-12 智己汽车科技有限公司 CAN bus communication method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101872070A (en) * 2009-04-02 2010-10-27 通用汽车环球科技运作公司 Traffic infrastructure indicator on the head-up display
CN109405824A (en) * 2018-09-05 2019-03-01 武汉契友科技股份有限公司 A kind of multi-source perceptual positioning system suitable for intelligent network connection automobile
CN109798872A (en) * 2017-11-16 2019-05-24 北京凌云智能科技有限公司 Vehicle positioning method, device and system
CN111540237A (en) * 2020-05-19 2020-08-14 河北德冠隆电子科技有限公司 Method for automatically generating vehicle safety driving guarantee scheme based on multi-data fusion
CN112595331A (en) * 2020-12-14 2021-04-02 上海市政工程设计研究总院(集团)有限公司 Motor vehicle dynamic positioning and navigation system with computer video and map method integrated
CN113093254A (en) * 2021-04-12 2021-07-09 南京速度软件技术有限公司 Multi-sensor fusion based vehicle positioning method in viaduct with map features

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9405972B2 (en) * 2013-09-27 2016-08-02 Qualcomm Incorporated Exterior hybrid photo mapping

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101872070A (en) * 2009-04-02 2010-10-27 通用汽车环球科技运作公司 Traffic infrastructure indicator on the head-up display
CN109798872A (en) * 2017-11-16 2019-05-24 北京凌云智能科技有限公司 Vehicle positioning method, device and system
CN109405824A (en) * 2018-09-05 2019-03-01 武汉契友科技股份有限公司 A kind of multi-source perceptual positioning system suitable for intelligent network connection automobile
CN111540237A (en) * 2020-05-19 2020-08-14 河北德冠隆电子科技有限公司 Method for automatically generating vehicle safety driving guarantee scheme based on multi-data fusion
CN112595331A (en) * 2020-12-14 2021-04-02 上海市政工程设计研究总院(集团)有限公司 Motor vehicle dynamic positioning and navigation system with computer video and map method integrated
CN113093254A (en) * 2021-04-12 2021-07-09 南京速度软件技术有限公司 Multi-sensor fusion based vehicle positioning method in viaduct with map features

Also Published As

Publication number Publication date
CN113405555A (en) 2021-09-17

Similar Documents

Publication Publication Date Title
CN110160542B (en) Method and device for positioning lane line, storage medium and electronic device
CN109946732B (en) Unmanned vehicle positioning method based on multi-sensor data fusion
CN106289275B (en) Unit and method for improving positioning accuracy
US10620317B1 (en) Lidar-based high definition map generation
US11004224B2 (en) Generation of structured map data from vehicle sensors and camera arrays
US8301374B2 (en) Position estimation for ground vehicle navigation based on landmark identification/yaw rate and perception of landmarks
JP4897542B2 (en) Self-positioning device, self-positioning method, and self-positioning program
CN110307836B (en) Accurate positioning method for welt cleaning of unmanned cleaning vehicle
CN104729506A (en) Unmanned aerial vehicle autonomous navigation positioning method with assistance of visual information
JP6975513B2 (en) Camera-based automated high-precision road map generation system and method
US11754415B2 (en) Sensor localization from external source data
US20200265245A1 (en) Method and system for automatic generation of lane centerline
CN113252022A (en) Map data processing method and device
CN115135963A (en) Method for generating 3D reference point in scene map
CN113405555B (en) Automatic driving positioning sensing method, system and device
US10921137B2 (en) Data generation method for generating and updating a topological map for at least one room of at least one building
JP7203805B2 (en) Analysis of localization errors of moving objects
US11846520B2 (en) Method and device for determining a vehicle position
EP4113063A1 (en) Localization of autonomous vehicles using camera, gps, and imu
CN112113580A (en) Vehicle positioning method and device and automobile
CN114111811A (en) Navigation control system and method for automatically driving public bus
Noureldin et al. A Framework for Multi-Sensor Positioning and Mapping for Autonomous Vehicles
CN113390422B (en) Automobile positioning method and device and computer storage medium
RU2772620C1 (en) Creation of structured map data with vehicle sensors and camera arrays
WO2023139935A1 (en) Computing device, own-position estimating device, and map information generating method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant