CN115031755A - Automatic driving vehicle positioning method and device, electronic equipment and storage medium - Google Patents

Automatic driving vehicle positioning method and device, electronic equipment and storage medium

Info

Publication number
CN115031755A
CN115031755A CN202210720180.5A CN202210720180A
Authority
CN
China
Prior art keywords
data
positioning
vehicle
tunnel
semantic map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210720180.5A
Other languages
Chinese (zh)
Inventor
李岩
费再慧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhidao Network Technology Beijing Co Ltd
Original Assignee
Zhidao Network Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhidao Network Technology Beijing Co Ltd filed Critical Zhidao Network Technology Beijing Co Ltd
Priority to CN202210720180.5A priority Critical patent/CN115031755A/en
Publication of CN115031755A publication Critical patent/CN115031755A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G01C21/3407 Route searching; Route guidance specially adapted for specific applications
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G01C21/36 Input/output arrangements for on-board computers
    • G01C21/3626 Details of the output of route guidance instructions
    • G01C21/3629 Guidance using speech or audio output, e.g. text-to-speech
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G01C21/36 Input/output arrangements for on-board computers
    • G01C21/3667 Display of a road map

Abstract

The application discloses a method and an apparatus for positioning an automatic driving vehicle, an electronic device, and a storage medium. The method comprises: matching an original image and a road surface element binary image in a tunnel scene against prior parameters, respectively, to obtain positioning data; projecting the laser point cloud data captured at the same moment as the images to obtain orientation data; performing global optimization on the data in the tunnel scene according to the positioning data and the orientation data, and establishing a preset semantic map from the result of the global optimization, where the preset semantic map includes at least one of the following data: visual positioning data and laser positioning data; and positioning the vehicle in the tunnel scene according to the visual positioning data and/or the laser positioning data in the preset semantic map. By this method and apparatus, a semantic map is established and the vehicle can be positioned accurately in a tunnel scene.

Description

Automatic driving vehicle positioning method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of automatic driving technologies, and in particular, to a method and an apparatus for positioning an automatic driving vehicle, an electronic device, and a storage medium.
Background
With the development of automatic driving technology, automatic driving vehicles are increasingly deployed in urban scenes. Unlike open scenes such as highways, urban canyons, urban tunnels, viaducts, and the like interfere with GNSS signals and can severely affect vehicle positioning; in serious cases the absolute position can deviate by several meters or even tens of meters, which reduces the safety of the automatic driving vehicle and increases the frequency of manual takeover.
Two approaches are mainly used for such difficult scenes. One is to use vehicle body chassis information, such as the vehicle speed and the heading-angle change rate (yaw rate), to extend the duration and accuracy of dead reckoning while the GNSS signal is disturbed. The other is to use additional sensors, such as cameras and lidar, combined with a high-precision map or a pre-built point cloud map for localization.
In a tunnel scene, however, both methods have shortcomings, and vehicle positioning accuracy still suffers.
Disclosure of Invention
The embodiments of the present application provide an automatic driving vehicle positioning method and apparatus, an electronic device, and a storage medium, which fuse multiple sensors and build a semantic map based on road surface information, thereby ensuring positioning accuracy in a long-tunnel scene.
The embodiment of the application adopts the following technical scheme:
in a first aspect, an embodiment of the present application provides an automatic driving vehicle positioning method for a tunnel scene, the method including: matching an original image and a road surface element binary image in the tunnel scene against prior parameters, respectively, to obtain positioning data; projecting the laser point cloud data captured at the same moment as the original image to obtain orientation data; performing global optimization on the data in the tunnel scene according to the positioning data and the orientation data, and establishing a preset semantic map from the result of the global optimization, where the preset semantic map includes at least one of the following data: visual positioning data and laser positioning data; and positioning the vehicle in the tunnel scene according to the visual positioning data and/or the laser positioning data in the preset semantic map.
In a second aspect, an embodiment of the present application further provides an automatic driving vehicle positioning device for a tunnel scene, the device including: a first acquisition module, configured to match an original image and a road surface element binary image in the tunnel scene against prior parameters, respectively, to obtain positioning data; a second acquisition module, configured to project the laser point cloud data captured at the same moment as the original image to obtain orientation data; a semantic map module, configured to perform global optimization on the data in the tunnel scene according to the positioning data and the orientation data and establish a preset semantic map from the result of the global optimization, where the preset semantic map includes at least one of the following data: visual positioning data and laser positioning data; and a positioning module, configured to position the vehicle in the tunnel scene according to the visual positioning data and/or the laser positioning data in the preset semantic map.
In a third aspect, an embodiment of the present application further provides an electronic device, including: a processor; and a memory arranged to store computer executable instructions that, when executed, cause the processor to perform the above method.
In a fourth aspect, embodiments of the present application also provide a computer-readable storage medium storing one or more programs that, when executed by an electronic device including a plurality of application programs, cause the electronic device to perform the above-described method.
The embodiment of the application adopts at least one technical scheme which can achieve the following beneficial effects:
positioning data is obtained through the prior parameters, the tunnel scene data is globally optimized in combination with the orientation data to obtain a semantic map, and the vehicle is positioned accurately in the tunnel scene through this semantic map.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a schematic flow chart of an autonomous vehicle positioning method according to an embodiment of the present application;
FIG. 2 is a schematic structural diagram of an autonomous vehicle positioning apparatus according to an embodiment of the present application;
FIG. 3 is a schematic road surface semantic diagram (numbers) in the automatic driving vehicle positioning method in the embodiment of the present application;
FIG. 4 is a schematic road surface semantic diagram (arrow) in the automatic driving vehicle positioning method in the embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The inventors found that vehicle body data is not an observable quantity: even after error coefficients and other parameters are calibrated, which extends the usable dead-reckoning time of the IMU, positioning accuracy cannot be guaranteed if no absolute observation is available for a long time or if wheel slip or bumps occur.
Meanwhile, lidar-based SLAM can build a high-precision map in advance when the scene offers rich feature points and loop closures, and can then provide observations in place of GNSS in urban canyons, under viaducts, and the like. However, in long scenes with repetitive features, such as tunnels, lidar SLAM degenerates and cannot provide effective positioning.
In addition, like lidar SLAM, visual SLAM also depends heavily on the quality of the feature points in the scene and is difficult to use in a tunnel.
Based on the above problems, an embodiment of the present application provides an automatic driving vehicle positioning method, a 2D semantic SLAM method based on multi-sensor fusion: a 2D semantic map based on road surface information is built by fusing a camera, a lidar, vehicle body data, and an IMU, and positioning accuracy in a long-tunnel scene is ensured on the basis of this semantic map.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
The embodiment of the present application provides an automatic driving vehicle positioning method, and as shown in fig. 1, provides a schematic flow chart of the automatic driving vehicle positioning method in the embodiment of the present application, where the method at least includes the following steps S110 to S140:
and step S110, respectively matching the original image and the road surface element binary image in the tunnel scene according to the prior parameters to obtain positioning data.
Tunnel data in the tunnel scene is processed according to the prior parameters, which mainly concern the heading-angle change rate and the pixel scale.
Further, the original image in the tunnel scene is matched according to the prior parameters, and the moving direction and the distance of the vehicle can be determined through the feature points obtained through matching.
Further, the road surface element binary image in the tunnel scene is matched according to the prior parameters, and the moving direction and the distance of the vehicle can be determined in a template matching mode.
It should be noted that when the original image and the road surface element binary image are matched against the prior parameters, which of the two routes supplies the positioning data can be decided from the feature points actually obtained by matching, or from the preset range obtained by template matching.
It should be understood that the original image or the road surface element binary image is matched after IPM (inverse perspective mapping) transformation has been applied.
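To illustrate how the prior parameters can gate the image matching described above, the following plain-Python sketch keeps only matched-feature displacements on the IPM image that agree with the motion predicted from the body speed and yaw rate, then averages the survivors into a moving direction and distance. The function name, thresholds, and pixel-scale argument are hypothetical; the patent does not specify the screening rule.

```python
import math

def filter_and_average_motion(displacements, v, yaw_rate, dt, scale,
                              dist_tol=0.5, ang_tol=0.2):
    """Screen matched-feature displacements (dx, dy) in IPM pixels
    against the motion predicted from body speed v (m/s) and heading
    change rate yaw_rate (rad/s) over dt seconds, then average the
    survivors. `scale` is the assumed meters-per-pixel of the IPM
    image. Returns (distance_m, heading_change_rad), or None if no
    pair passes the gate."""
    pred_dist = v * dt                 # distance predicted by the prior
    pred_heading = yaw_rate * dt       # heading change predicted by the prior
    kept = []
    for dx, dy in displacements:
        d = math.hypot(dx, dy) * scale
        ang = math.atan2(dx, dy)       # lateral deviation from straight ahead
        if abs(d - pred_dist) < dist_tol and abs(ang - pred_heading) < ang_tol:
            kept.append((d, ang))
    if not kept:
        return None
    n = len(kept)
    return (sum(d for d, _ in kept) / n, sum(a for _, a in kept) / n)
```

With the body data as a prior, mismatched feature pairs are rejected cheaply before any motion is computed, which is the stability and speed benefit the description attributes to the prior information.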
And step S120, projecting the laser point cloud data of the image at the same moment to obtain orientation data.
Here the image comprises the original image and/or the road surface element binary image; the laser point cloud data captured at the same moment as the image is determined and then projected to obtain the orientation data.
It can be understood that in the tunnel scene, the laser point cloud data mainly refers to the lateral SLAM positioning data on both sides of the tunnel. Meanwhile, the heading angle of the vehicle, i.e., the orientation/direction factor, is also mainly considered.
Step S130, performing global optimization on the data in the tunnel scene according to the positioning data and the orientation data, and establishing a preset semantic map from the result of the global optimization, where the preset semantic map includes at least one of the following data: visual positioning data and laser positioning data.
According to the positioning data and the orientation data, the data in the tunnel scene can be globally optimized, and a preset 2D semantic map is established from the result of the global optimization.
It should be understood that the preset 2D semantic map contains visual positioning data and laser positioning data; in practice it generally contains both visual SLAM data and laser SLAM data, and only in special cases a single type of SLAM positioning data.
And step S140, positioning the vehicle in the tunnel scene according to the visual positioning data and/or the laser positioning data in the preset semantic map.
Based on the visual SLAM positioning data and/or laser SLAM positioning data in the preset semantic map, the vehicle is positioned as it passes through the tunnel scene. Using the lidar data and the image data together avoids the instability of a single sensor, for example image recognition failing because of the lighting in the tunnel, or the tunnel edge distance being computed inaccurately because surrounding vehicles occlude the laser returns.
Preferably, it can be applied to long tunnel scenarios.
In an embodiment of the present application, matching the original image and the road surface element binary image in the tunnel scene against the prior parameters, respectively, to obtain positioning data includes: according to a first prior parameter, matching feature points on the IPM (inverse perspective mapping) transformed original image in the tunnel scene to obtain target feature-point pairs, and calculating the moving direction and distance of the vehicle from the target feature points to obtain first positioning data; and/or, according to a second prior parameter, matching the road surface element binary image in the tunnel scene by template matching to obtain a preset range, and calculating the moving direction and distance of the vehicle within the preset range to obtain second positioning data.
In a specific implementation, the tunnel data is processed as follows: according to the first prior parameter, target feature points are obtained by feature-point matching on the original image I_src in the tunnel scene, for example with a SIFT or KAZE feature-point matcher. Using the vehicle body speed and the heading-angle change rate (yaw rate) as the first prior parameter, feature-point pairs that conform to the motion rule are screened out, and the moving direction and distance are calculated. Using the vehicle body data as prior information improves the stability of image matching and reduces the matching time.
Tunnel data processing continues with the road surface element binary image I_bw output by the depth model, which is IPM-transformed and then matched. According to the second prior parameter, a preset range is obtained by template matching on the binary image I_bw: a sub-region rich in road surface elements is cut from the previous frame of data, the approximate template-matching range is obtained from the prior, and matching then yields the moving direction and distance. The moving direction and distance of the vehicle are calculated within the preset range to obtain the second positioning data.
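A minimal sketch of this template-matching step, assuming 0/1 binary images stored as nested lists. The SSD criterion mirrors the cost_bw term used in the global objective, and the search window bounded by `center` and `radius` (both hypothetical parameters) plays the role of the prior-predicted range:

```python
def match_template_ssd(image, template, center, radius):
    """Exhaustive SSD template matching of a road-element template
    over a small search window around `center` (row, col), the range
    predicted from the motion prior. `image` and `template` are 2D
    lists of 0/1 pixels; returns the best-matching top-left offset."""
    th, tw = len(template), len(template[0])
    best, best_pos = None, None
    r0, c0 = center
    for r in range(max(0, r0 - radius), r0 + radius + 1):
        for c in range(max(0, c0 - radius), c0 + radius + 1):
            if r + th > len(image) or c + tw > len(image[0]):
                continue  # template would fall off the image
            ssd = sum((image[r + i][c + j] - template[i][j]) ** 2
                      for i in range(th) for j in range(tw))
            if best is None or ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos
```

Restricting the search to the prior-predicted window is what keeps template matching cheap enough for the per-frame processing the description requires.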
In one embodiment of the present application, obtaining the orientation data by projecting the laser point cloud data captured at the same moment as the original image includes: performing a two-dimensional projection of that laser point cloud data to obtain the tunnel edge lines in the tunnel scene as the orientation data, where the tunnel edge lines comprise two parallel straight lines or curves and the original image comprises an image of the scene in front of the vehicle.
In a specific implementation, the laser point cloud data captured at the same moment as the image is determined and projected in 2D; the tunnel edge lines in the tunnel scene are then extracted as the orientation data, the two parallel straight lines or curves of the tunnel edges serving as the direction factor in the subsequent optimization.
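As a sketch of how a heading factor might be extracted from the projected wall points, the helper below drops the z coordinate and fits a least-squares line to each wall. The patent does not disclose its fitting method; straight, roughly parallel walls and the function name are assumptions:

```python
import math

def tunnel_heading_from_walls(left_pts, right_pts):
    """Project lidar wall returns to 2D (x forward, y left; z dropped)
    and fit a straight line to each tunnel wall by least squares.
    Points are (x, y, z) tuples; only p[0] and p[1] are used.
    Returns the mean wall direction in radians, usable as the
    direction factor of the subsequent optimization."""
    def fit_direction(pts):
        # least-squares slope of y over x, expressed as an angle
        n = len(pts)
        mx = sum(p[0] for p in pts) / n
        my = sum(p[1] for p in pts) / n
        sxy = sum((p[0] - mx) * (p[1] - my) for p in pts)
        sxx = sum((p[0] - mx) ** 2 for p in pts)
        return math.atan2(sxy, sxx)
    # average the two (roughly parallel) wall directions
    return 0.5 * (fit_direction(left_pts) + fit_direction(right_pts))
```

Because the two walls are treated as parallel, averaging their fitted directions cancels part of the per-wall noise before the heading enters the objective function.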
In an embodiment of the present application, globally optimizing the tunnel scene according to the positioning data and the orientation data, and establishing the preset semantic map (which includes at least one of visual positioning data and laser positioning data) from the result of the global optimization, includes: globally optimizing the positioning data in the tunnel scene according to the positioning data and the orientation data, using the GNSS absolute positioning information available before entering and after exiting the tunnel as datum points; and establishing a 2D semantic map from the positioning data in the tunnel scene.
In specific implementation, according to the positioning data and the orientation data, absolute positioning information with GNSS before entering the tunnel and after exiting the tunnel is taken as a datum point, the positioning data in the tunnel is globally optimized, and a 2D semantic map is established according to the positioning data.
It should be noted that the accuracy of the vehicle's longitudinal positioning is ensured by the GNSS data from before entering and after exiting the tunnel, together with road surface semantic information other than lane lines.
In one embodiment of the present application, the objective function of the global optimization is defined as follows:

min(cost) = min(ω_src · cost_src + ω_bw · cost_bw + ω_lidar · cost_lidar)

where ω_src, ω_bw, and ω_lidar are the weights of the original-image matching, the binary-image matching, and the laser matching, respectively; cost_src is the reprojection error of the original-image feature matching, cost_bw is the SSD error of the binary-image template matching, and cost_lidar is the angle error computed from the laser point cloud. Optimizing the heading-angle information in the tunnel scene through this global objective function ensures positioning accuracy.
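The weighted objective can be sketched as follows. The residual callables and the 1-D grid search over heading are illustrative stand-ins; the patent does not disclose the optimizer it uses:

```python
def total_cost(pose, w_src, w_bw, w_lidar, cost_src, cost_bw, cost_lidar):
    """Weighted objective from the description:
    cost = w_src*cost_src + w_bw*cost_bw + w_lidar*cost_lidar,
    where each cost_* is a callable returning its residual at `pose`."""
    return (w_src * cost_src(pose)
            + w_bw * cost_bw(pose)
            + w_lidar * cost_lidar(pose))

def minimize_heading(weights, costs, lo=-0.1, hi=0.1, steps=201):
    """1-D grid search over the heading correction (a stand-in for a
    real nonlinear optimizer). `weights` and `costs` are 3-tuples in
    the order (src, bw, lidar)."""
    best_pose, best_val = lo, float("inf")
    for i in range(steps):
        pose = lo + (hi - lo) * i / (steps - 1)
        val = total_cost(pose, *weights, *costs)
        if val < best_val:
            best_val, best_pose = val, pose
    return best_pose
```

With quadratic residuals centered at slightly different headings, the minimizer lands at their weighted mean, which illustrates how the three sensor terms trade off against each other.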
In an embodiment of the present application, positioning the vehicle in the tunnel scene according to the visual positioning data and/or the laser positioning data in the preset semantic map includes: when the vehicle's visual matching data contains lane line data, matching it against the preset semantic map to position the vehicle laterally in the tunnel scene; and/or, when only laser matching data is available, matching it against the preset semantic map to position the vehicle laterally in the tunnel scene; and/or, when the vehicle's visual matching data contains longitudinal semantic positioning information, matching it against the preset semantic map to correct the vehicle's accumulated longitudinal positioning error in the tunnel scene.
It should be understood that the visual positioning data and the laser positioning data reside in the pre-built 2D semantic map; during positioning, the vehicle matches the visual matching data and/or laser matching data it acquires against this pre-built 2D semantic map.
In a specific implementation, positioning inside the tunnel is performed against the established semantic map. For real-time performance, positioning does not use the original image information and matches only the binary image and the lidar data; using the original image information would slow the computation and increase the processing time.
For example, when only lane line data or only lidar data is available, the vehicle can still be positioned laterally.
For example, as shown in fig. 3 and fig. 4, when longitudinal semantic positioning information is available, the vehicle's accumulated longitudinal positioning error is corrected (the road surface auxiliary positioning factors, i.e. the numbers in fig. 3 and the arrows in fig. 4, can correct the accumulated longitudinal error), so that the positioning information does not jump sharply before and after the vehicle exits the tunnel.
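How a road surface auxiliary factor could correct accumulated longitudinal error can be sketched as follows. The snap-with-gate rule and its tolerance are assumptions for illustration, not the patent's disclosed logic:

```python
def correct_longitudinal(s_est, landmark_s_map, tol=5.0):
    """When a road-surface auxiliary factor (a painted number or
    arrow, as in figs. 3-4) is detected, snap the accumulated
    longitudinal estimate s_est (meters along the tunnel) to the
    landmark's mapped longitudinal coordinate, provided the two
    agree within `tol` meters."""
    if abs(s_est - landmark_s_map) <= tol:
        return landmark_s_map
    return s_est  # too far apart: treat the detection as unreliable
```

Each such correction resets the drift accumulated since the last landmark, which is why the positioning does not jump at the tunnel exit.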
In one embodiment of the present application, the prior parameters include: taking the positioning information obtained while the GNSS signal is normal as ground truth, calculating the vehicle's speed and heading-angle change rate (yaw rate), comparing them with the body chassis speed and yaw rate, and calculating error proportional coefficients for the body speed and yaw rate; and/or, taking the positioning information obtained while the GNSS signal is normal as ground truth, calculating the scale coefficient of an image pixel and the extrinsic calibration error of the camera.
In a specific implementation, the prior parameters are obtained by, for example, collecting multiple sets of data offline, covering both good-GNSS-signal conditions and the tunnel to be processed, including IMU information, GNSS positioning information, vehicle body speed, images, lidar point cloud data, and the like.
Then, the data in the case of good GNSS signals are processed:
according to the fact that the positioning information when the GNSS signal is normal is true, the speed and the course angle change rate Yawrate of the vehicle are calculated, the speed and the course angle change rate Yawrate are compared with the speed of the chassis of the vehicle body and the Yawrate, the error proportion coefficient of the speed and the Yawrate of the vehicle body is calculated, namely the positioning information of the GNSS is true, the speed and the course angle change rate of the automatic driving vehicle are calculated through differentiation, the speed and the course angle change rate are compared with the speed of the chassis of the vehicle body and the Yawrate, and the error proportion coefficient of the speed and the Yawrate of the vehicle body is calculated.
Likewise, taking the GNSS positioning information as ground truth, the scale coefficient of an image pixel is calculated, i.e. how many meters one pixel in the image corresponds to.
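For example, the meters-per-pixel scale could be estimated like this, given GNSS positions in a local metric frame and the matched pixel positions of the same feature on the IPM image. The helper and its arguments are illustrative, not from the patent:

```python
import math

def pixel_scale(gnss_a, gnss_b, pix_a, pix_b):
    """Estimate the meters-per-pixel scale of the IPM image by
    treating GNSS positions (while the signal is good) as ground
    truth: scale = metric displacement / pixel displacement.
    All arguments are (x, y) pairs."""
    meters = math.hypot(gnss_b[0] - gnss_a[0], gnss_b[1] - gnss_a[1])
    pixels = math.hypot(pix_b[0] - pix_a[0], pix_b[1] - pix_a[1])
    return meters / pixels
```

The resulting coefficient is exactly the `scale` prior that converts image-matching displacements into metric motion once the vehicle is inside the tunnel.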
The embodiment of the present application further provides an autonomous vehicle positioning apparatus 200, as shown in fig. 2, which provides a schematic structural diagram of the autonomous vehicle positioning apparatus in the embodiment of the present application, where the autonomous vehicle positioning apparatus 200 at least includes: a first obtaining module 210, a second obtaining module 220, a semantic map module 230, and a positioning module 240, wherein:
in an embodiment of the present application, the first obtaining module 210 is specifically configured to: and respectively matching the original image in the tunnel scene with the road surface element binary image according to the prior parameters to obtain positioning data.
And processing tunnel data in the tunnel scene according to the prior parameters. For the prior parameter, the prior parameter is mainly related to the course angle change rate and the pixel scale.
Further, the original image in the tunnel scene is matched according to the prior parameters, and the moving direction and the distance of the vehicle can be determined through the feature points obtained through matching.
Further, the road surface element binary image in the tunnel scene is matched according to the prior parameters, and the moving direction and the distance of the vehicle can be determined in a template matching mode.
It should be noted that when the original image and the road surface element binary image are matched against the prior parameters, which of the two routes supplies the positioning data can be decided from the feature points actually obtained by matching, or from the preset range obtained by template matching.
It should be understood that the original image or the road surface element binary image is matched after IPM (inverse perspective mapping) transformation has been applied.
In an embodiment of the present application, the second obtaining module 220 is specifically configured to: and projecting the laser point cloud data of the image at the same moment to obtain the orientation data.
The image comprises an original image and/or a binary image of a road surface element, laser point cloud data at the same time with the image needs to be determined, and then the laser point cloud data at the same time is projected to obtain orientation data.
It can be understood that in the tunnel scenario, the laser point cloud data mainly refers to the lateral SLAM positioning data on both sides of the tunnel. Meanwhile, the heading angle of the vehicle, i.e., the orientation/direction factor, is also mainly considered.
In an embodiment of the present application, the semantic map module 230 is specifically configured to: perform global optimization on the data in the tunnel scene according to the positioning data and the orientation data, and establish a preset semantic map from the result of the global optimization, where the preset semantic map includes at least one of the following data: visual positioning data and laser positioning data.
According to the positioning data and the orientation data, the data in the tunnel scene can be globally optimized, and a preset 2D semantic map is established from the result of the global optimization.
It should be understood that the preset 2D semantic map contains visual positioning data and laser positioning data; in practice it generally contains both, and only in special cases a single type of SLAM positioning data.
In an embodiment of the present application, the positioning module 240 is specifically configured to: and positioning the vehicle in the tunnel scene according to the visual positioning data and/or the laser positioning data in the preset semantic map.
Based on the visual SLAM positioning data and/or laser SLAM positioning data in the preset semantic map, the vehicle is positioned in the tunnel scene. Using the lidar data and the image data together avoids the instability of a single sensor, for example image recognition failing because of the lighting in the tunnel, or the tunnel edge distance being computed inaccurately because surrounding vehicles occlude the laser returns.
Preferably, it can be applied to long tunnel scenarios.
It can be understood that the above-mentioned positioning apparatus for an autonomous vehicle can implement each step of the positioning method for an autonomous vehicle provided in the foregoing embodiments, and the relevant explanations regarding the positioning method for an autonomous vehicle are applicable to the positioning apparatus for an autonomous vehicle, and are not described herein again.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application. Referring to fig. 5, at a hardware level, the electronic device includes a processor, and optionally further includes an internal bus, a network interface, and a memory. The Memory may include a Memory, such as a Random-Access Memory (RAM), and may further include a non-volatile Memory, such as at least 1 disk Memory. Of course, the electronic device may also include hardware required for other services.
The processor, the network interface, and the memory may be connected to each other via an internal bus, which may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in FIG. 5, but this does not indicate only one bus or one type of bus.
The memory is used for storing the program. Specifically, the program may include program code, and the program code includes computer operating instructions. The memory may include both internal memory and non-volatile storage, and provides instructions and data to the processor.
The processor reads the corresponding computer program from the non-volatile memory into the memory and then runs the computer program to form the automatic driving vehicle positioning device on a logic level. The processor is used for executing the program stored in the memory and is specifically used for executing the following operations:
respectively matching an original image and a road surface element binary image in a tunnel scene according to prior parameters to obtain positioning data;
projecting the laser point cloud data of the original image at the same moment to obtain orientation data;
performing global optimization on data in the tunnel scene according to the positioning data and the orientation data, and establishing a preset semantic map according to a result after the global optimization, wherein the preset semantic map at least comprises one of the following data: visual positioning data and laser positioning data;
and positioning the vehicle in the tunnel scene according to the visual positioning data and/or the laser positioning data in the preset semantic map.
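The four operations above can be sketched as a pipeline skeleton. The stand-in function bodies below are trivial placeholders so the control flow can be exercised end to end; all names are assumptions, and the real matching and optimization are far richer:

```python
# Skeleton of the four operations listed above. Function bodies are
# placeholders; names are illustrative, not from the patent text.

def match_images(frame):
    # Step 1: match original image / binary image using prior parameters.
    return frame["position_guess"]

def project_cloud(frame):
    # Step 2: project lidar points to 2D to obtain an orientation estimate.
    return frame["edge_heading"]

def globally_optimize(positions, headings):
    # Step 3: globally optimize and store the result as a semantic map.
    return {"poses": positions, "headings": headings}

def localize(semantic_map, index):
    # Step 4: position the vehicle by querying the preset semantic map.
    return semantic_map["poses"][index]

def build_and_query(frames, index):
    positions = [match_images(f) for f in frames]
    headings = [project_cloud(f) for f in frames]
    semantic_map = globally_optimize(positions, headings)
    return localize(semantic_map, index)
```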
The method performed by the autonomous vehicle positioning apparatus disclosed in the embodiment of fig. 1 may be implemented in or by a processor. The processor may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in RAM, flash memory, ROM, PROM or EEPROM, registers, or other storage media well known in the art. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the method in combination with its hardware.
The electronic device may further execute the method executed by the positioning apparatus for an autonomous vehicle in fig. 1, and implement the functions of the positioning apparatus for an autonomous vehicle in the embodiment shown in fig. 1, which are not described herein again in this application.
Embodiments of the present application further provide a computer-readable storage medium storing one or more programs, where the one or more programs include instructions, which when executed by an electronic device including a plurality of application programs, enable the electronic device to perform the method performed by the automatic driving vehicle positioning apparatus in the embodiment shown in fig. 1, and are specifically configured to perform:
respectively matching an original image and a road surface element binary image in a tunnel scene according to prior parameters to obtain positioning data;
projecting the laser point cloud data of the original image at the same moment to obtain orientation data;
performing global optimization on data in the tunnel scene according to the positioning data and the orientation data, and establishing a preset semantic map according to a result after the global optimization, wherein the preset semantic map at least comprises one of the following data: visual positioning data and laser positioning data;
and positioning the vehicle in the tunnel scene according to the visual positioning data and/or the laser positioning data in the preset semantic map.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, computer readable media do not include transitory computer readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. An autonomous vehicle positioning method, applied to a tunnel scenario, the method comprising:
respectively matching an original image and a road surface element binary image in a tunnel scene according to prior parameters to obtain positioning data;
projecting the laser point cloud data of the original image at the same moment to obtain orientation data;
performing global optimization on data in the tunnel scene according to the positioning data and the orientation data, and establishing a preset semantic map according to a result after the global optimization, wherein the preset semantic map at least comprises one of the following data: visual positioning data and laser positioning data;
and positioning the vehicle in the tunnel scene according to the visual positioning data and/or the laser positioning data in the preset semantic map.
2. The method of claim 1, wherein the matching of the original image and the binary image of the road surface element in the tunnel scene according to the prior parameters to obtain the positioning data comprises:
matching feature points of an original image in the tunnel scene according to a first prior parameter, obtaining target feature-point pairs after IPM (inverse perspective mapping) transformation, and calculating the moving direction and distance of the vehicle according to the matched target feature-point pairs to obtain first positioning data;
and/or,
matching the road surface element binary image in the tunnel scene by template matching according to a second prior parameter to obtain a preset range, and calculating the moving direction and distance of the vehicle according to the preset range to obtain second positioning data.
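The step of computing the vehicle's moving direction and distance from matched feature-point pairs can be sketched as follows, assuming the matched points are already in metric ground-plane (bird's-eye) coordinates after the IPM transformation. The function name and the averaging scheme are illustrative assumptions:

```python
import math

# Sketch of the claim-2 step: given feature-point pairs matched between
# two consecutive IPM (bird's-eye) images, estimate the vehicle's moving
# direction and distance from the average point displacement.

def motion_from_pairs(pairs):
    """pairs: list of ((x0, y0), (x1, y1)) matched ground-plane points.

    Returns (heading_rad, distance_m) of the mean displacement.
    """
    if not pairs:
        raise ValueError("need at least one matched pair")
    dx = sum(p1[0] - p0[0] for p0, p1 in pairs) / len(pairs)
    dy = sum(p1[1] - p0[1] for p0, p1 in pairs) / len(pairs)
    return math.atan2(dy, dx), math.hypot(dx, dy)
```

A production version would reject outlier pairs (e.g. with RANSAC) before averaging, since a few bad matches can dominate the mean.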
3. The method of claim 2, wherein projecting the laser point cloud data of the original image at the same moment to obtain the orientation data comprises:
performing two-dimensional projection on the laser point cloud data of the original image at the same moment to obtain a tunnel edge line in the tunnel scene as the orientation data, wherein the tunnel edge line comprises two parallel straight lines or curves, and the original image comprises an image in front of the vehicle.
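A minimal sketch of extracting the two tunnel edge lines from the 2D-projected point cloud: here the points are split by lateral sign relative to the vehicle and each side is fit with an ordinary least-squares line. The splitting rule and function names are illustrative assumptions, not the disclosed algorithm:

```python
# Sketch: fit the two tunnel edge lines (y = a*x + b on each side) to a
# lidar point cloud already projected to the 2D ground plane. Splitting by
# lateral sign and plain least squares are illustrative simplifications.

def fit_line(points):
    """Least-squares fit y = a*x + b to a list of (x, y) points."""
    n = len(points)
    sx = sum(p[0] for p in points)
    sy = sum(p[1] for p in points)
    sxx = sum(p[0] * p[0] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def tunnel_edges(points_2d):
    """Return ((a, b) left wall, (a, b) right wall) line coefficients."""
    left = [p for p in points_2d if p[1] > 0]   # points left of the vehicle
    right = [p for p in points_2d if p[1] < 0]  # points right of the vehicle
    return fit_line(left), fit_line(right)
```

If the two fitted slopes agree, their common direction gives the heading of the tunnel axis, which is the orientation information the claim refers to.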
4. The method according to claim 1, wherein performing global optimization on the data in the tunnel scene according to the positioning data and the orientation data, and establishing a preset semantic map according to the result after the global optimization, the preset semantic map at least comprising one of visual positioning data and laser positioning data, comprises:
performing, according to the positioning data and the orientation data, global optimization on the positioning data in the tunnel scene by taking the GNSS absolute positioning information before entering the tunnel and after exiting the tunnel as datum points;
and establishing a 2D semantic map according to the positioning data in the tunnel scene.
5. The method of claim 4, wherein the globally optimized objective function is defined as follows:
min(cost) = min(ω_src · cost_src + ω_bw · cost_bw + ω_lidar · cost_lidar)
wherein ω_src, ω_bw, and ω_lidar are respectively the weight of the original-image matching, the weight of the binary-image matching, and the weight of the laser matching; cost_src is the reprojection error of the original-image feature matching, cost_bw is the SSD error of the binary-image template matching, and cost_lidar is the angle error calculated from the laser point cloud.
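The weighted objective can be evaluated numerically as follows. Selecting the candidate pose with the lowest total cost stands in for the actual global optimizer, and all weights, cost values, and names below are illustrative:

```python
# Sketch of the claim-5 objective: the total cost is a weighted sum of the
# three matching errors. Picking the minimum over discrete candidates is a
# stand-in for the real continuous optimization; values are illustrative.

def total_cost(w_src, w_bw, w_lidar, cost_src, cost_bw, cost_lidar):
    """Weighted sum of image, binary-image, and lidar matching errors."""
    return w_src * cost_src + w_bw * cost_bw + w_lidar * cost_lidar

def best_pose(candidates, weights):
    """candidates: list of (pose, cost_src, cost_bw, cost_lidar) tuples."""
    w_src, w_bw, w_lidar = weights
    return min(candidates,
               key=lambda c: total_cost(w_src, w_bw, w_lidar, *c[1:]))[0]
```

In practice the weights would be tuned so that no single error term dominates when one sensor degrades, which mirrors the multi-sensor robustness argument made earlier in the description.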
6. The method according to claim 1, wherein the positioning the vehicle in the tunnel scene according to the visual positioning data and/or the laser positioning data in the preset semantic map comprises:
when the visual matching data of the ego vehicle includes lane line data, matching against the preset semantic map to perform transverse positioning of the ego vehicle in the tunnel scene;
and/or, when only laser matching data of the ego vehicle is available, matching against the preset semantic map to perform transverse positioning of the ego vehicle in the tunnel scene;
and/or, when the visual matching data of the ego vehicle includes longitudinal semantic positioning information, matching against the preset semantic map to correct the accumulated longitudinal positioning error of the ego vehicle in the tunnel scene.
7. The method of claim 1, wherein the prior parameters comprise:
taking the positioning information when the GNSS signal is normal as a true value, calculating the speed and the heading angle change rate (YawRate) of the vehicle, comparing them with the vehicle body chassis speed and YawRate, and calculating error scale coefficients of the vehicle body speed and YawRate;
and/or, taking the positioning information when the GNSS signal is normal as a true value, calculating the scale coefficient of image pixels and the extrinsic parameter error of the camera.
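The error scale coefficient idea can be sketched as a ratio between GNSS-derived values (taken as ground truth while the signal is good) and the corresponding chassis-reported values. The averaging and the function name are illustrative assumptions:

```python
# Sketch of the claim-7 calibration: while GNSS is healthy, compare
# GNSS-derived speed (or yaw rate) against the chassis-reported value and
# keep the average ratio as an error scale coefficient. A coefficient of
# 1.0 means the chassis sensor agrees with GNSS exactly; inside the tunnel
# this coefficient corrects the chassis readings. Names are illustrative.

def error_scale(gnss_values, chassis_values):
    """Average ratio of GNSS-derived samples to chassis-reported samples."""
    ratios = [g / c for g, c in zip(gnss_values, chassis_values) if c != 0]
    if not ratios:
        raise ValueError("no usable sample pairs")
    return sum(ratios) / len(ratios)
```

During tunnel driving the corrected value would then be `chassis_reading * coefficient`, which is what makes the dead-reckoning priors usable once GNSS is lost.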
8. An autonomous vehicle positioning apparatus, for use in a tunnel scenario, the apparatus comprising:
the first acquisition module is used for respectively matching an original image and a road surface element binary image in a tunnel scene according to the prior parameters to acquire positioning data;
the second acquisition module is used for projecting the laser point cloud data of the original image at the same moment to acquire orientation data;
the semantic map module is used for performing global optimization on data in the tunnel scene according to the positioning data and the orientation data, and establishing a preset semantic map according to a result after the global optimization, wherein the preset semantic map at least comprises one of the following data: visual positioning data and laser positioning data;
and the positioning module is used for positioning the vehicle in the tunnel scene according to the visual positioning data and/or the laser positioning data in the preset semantic map.
9. An electronic device, comprising:
a processor; and
a memory arranged to store computer executable instructions which, when executed, cause the processor to perform the method of any of claims 1 to 7.
10. A computer readable storage medium storing one or more programs which, when executed by an electronic device comprising a plurality of application programs, cause the electronic device to perform the method of any of claims 1-7.
CN202210720180.5A 2022-06-13 2022-06-13 Automatic driving vehicle positioning method and device, electronic equipment and storage medium Pending CN115031755A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210720180.5A CN115031755A (en) 2022-06-13 2022-06-13 Automatic driving vehicle positioning method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN115031755A 2022-09-09

Family

ID=83127126

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210720180.5A Pending CN115031755A (en) 2022-06-13 2022-06-13 Automatic driving vehicle positioning method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115031755A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115390086A (en) * 2022-10-31 2022-11-25 智道网联科技(北京)有限公司 Fusion positioning method and device for automatic driving, electronic equipment and storage medium


Similar Documents

Publication Publication Date Title
CN114279453B (en) Automatic driving vehicle positioning method and device based on vehicle-road cooperation and electronic equipment
CN115143952A (en) Automatic driving vehicle positioning method and device based on visual assistance
CN115493602A (en) Semantic map construction method and device, electronic equipment and storage medium
CN114894214A (en) Fusion positioning method and device for automatic driving vehicle and electronic equipment
CN115376090A (en) High-precision map construction method and device, electronic equipment and storage medium
CN115077541A (en) Positioning method and device for automatic driving vehicle, electronic equipment and storage medium
CN114547222A (en) Semantic map construction method and device and electronic equipment
CN114973198A (en) Course angle prediction method and device of target vehicle, electronic equipment and storage medium
CN114966632A (en) Laser radar calibration method and device, electronic equipment and storage medium
CN115031755A (en) Automatic driving vehicle positioning method and device, electronic equipment and storage medium
CN114877900A (en) Automatic driving vehicle fusion positioning method for tunnel and related device
CN114993333A (en) Fusion positioning method and device for automatic driving vehicle and electronic equipment
CN115950441B (en) Fusion positioning method and device for automatic driving vehicle and electronic equipment
CN115856979B (en) Positioning method and device for automatic driving vehicle, electronic equipment and storage medium
CN116740680A (en) Vehicle positioning method and device and electronic equipment
CN114739416A (en) Automatic driving vehicle positioning method and device, electronic equipment and storage medium
CN114037977B (en) Road vanishing point detection method, device, equipment and storage medium
CN116164763A (en) Target course angle determining method and device, electronic equipment and storage medium
CN115620277A (en) Monocular 3D environment sensing method and device, electronic equipment and storage medium
CN115014332A (en) Laser SLAM mapping method and device, electronic equipment and computer readable storage medium
CN115128655B (en) Positioning method and device for automatic driving vehicle, electronic equipment and storage medium
CN114705121B (en) Vehicle pose measurement method and device, electronic equipment and storage medium
CN116559899B (en) Fusion positioning method and device for automatic driving vehicle and electronic equipment
CN114648576B (en) Target vehicle positioning method, device and system
CN116168087A (en) Verification method and device for road side camera calibration result and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination