CN110057373B - Method, apparatus and computer storage medium for generating high-definition semantic map - Google Patents


Info

Publication number
CN110057373B
Authority
CN
China
Prior art keywords: map, point cloud, semantic, image, data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910323449.4A
Other languages
Chinese (zh)
Other versions
CN110057373A (en)
Inventor
Cao Mingwei (曹明玮)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NIO Co Ltd
Original Assignee
NIO Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NIO Co Ltd
Priority to CN201910323449.4A
Publication of CN110057373A
Application granted
Publication of CN110057373B

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26: Navigation specially adapted for navigation in a road network
    • G01C21/28: Navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30: Map- or contour-matching
    • G01C21/32: Structuring or formatting of map data
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20: Information retrieval of structured data, e.g. relational data
    • G06F16/29: Geographical information databases

Abstract

The present invention relates to vehicle navigation technology and autonomous driving technology, and more particularly to a method, an apparatus, a computer storage medium, and a navigation mapping vehicle including the apparatus, for generating a high-definition semantic map. A method for generating a high-definition semantic map according to one aspect of the present invention comprises the steps of: acquiring a point cloud map and a plurality of images of the surrounding environment; determining semantic recognition objects from the targets identified in the images and the corresponding targets in the point cloud map, each semantic recognition object being associated with an object located in a plane intersecting the ground; extracting static environment semantic features from the point cloud map; storing the semantic recognition objects and descriptors of the static environment semantic features in a semantic map localization layer; generating, from the images and the point cloud map, a fine geographic category probability map for extracting ground markings; and generating a high-definition semantic map planning layer using the semantic recognition objects and the fine geographic category probability map, the planning layer comprising road topological relations, lane topological relations, and basic road indication information.

Description

Method, apparatus and computer storage medium for generating high-definition semantic map
Technical Field
The present invention relates to vehicle navigation technology and autonomous driving technology, and in particular to a method, an apparatus, a computer storage medium, and a navigation mapping vehicle incorporating the apparatus, for generating a high-definition semantic map.
Background
Traditional navigation maps cannot meet the needs of autonomous driving because their precision is insufficient. High-definition maps, with advantages such as high accuracy and rich dimensionality, have become an industry consensus as an essential link in unmanned driving. A high-definition map can provide the driving system with more forward-looking information and information redundancy, and enables matching-based localization of the vehicle, so that the driving system can perceive the traffic situation over a larger range and the safety of autonomous driving is ensured.
Among high-definition maps, point cloud maps are favored by the autonomous driving industry because they are unaffected by ambient illumination and model the environment accurately. However, the data volume of a point cloud map is enormous, which poses a major obstacle to map storage and to online matching and localization. Although the industry has in recent years researched sensor fusion, the internal semantic structure of point cloud maps, and related topics, no high-definition map generation method or apparatus has yet been provided that can effectively reduce the data volume while maintaining high accuracy.
Disclosure of Invention
An object of the present invention is to provide a method, an apparatus, and a computer storage medium for generating a high-definition semantic map that reduce the data volume of the map while preserving its high-definition level.
A method for generating a high-definition semantic map according to one aspect of the present invention comprises the steps of:
acquiring a point cloud map and a plurality of images related to the surrounding environment;
determining a semantic recognition object from the targets identified in the image and the corresponding targets in the point cloud map, the semantic recognition object being associated with an object located in a plane intersecting the ground;
extracting static environment semantic features from the point cloud map;
storing the semantic recognition object and the descriptor about the static environment semantic feature in a semantic map localization layer;
generating, from the image and the point cloud map, a fine geographic category probability map for extracting ground markings; and
generating a high-definition semantic map planning layer using the semantic recognition object and the fine geographic category probability map, wherein the high-definition semantic map planning layer comprises road topological relations, lane topological relations, and basic road indication information.
Optionally, in the above method, each of the images is acquired in the following manner:
performing distortion correction on the monocular images captured by a plurality of monocular cameras using internal calibration parameters; and
stitching the distortion-corrected monocular images into one image using external calibration parameters.
Optionally, in the above method, the point cloud map is acquired in the following manner:
performing reflectivity correction and motion error compensation on the point cloud data frames; and
stitching the corrected and compensated point cloud data frames together according to positioning information to obtain the point cloud map.
Optionally, in the above method, the positioning information is obtained by performing fusion filtering and state estimation on two or more of: GNSS/INS integrated navigation data, wheel speed data, simultaneous localization and mapping (SLAM) positioning data based on lidar point cloud data, and multi-camera SLAM positioning data.
Optionally, in the above method, the target is at least one of a traffic sign, a traffic light, and a static obstacle.
Optionally, in the above method, the step of determining the semantic recognition object includes:
identifying the target in the image;
identifying a corresponding target in the point cloud map; and
performing fusion matching between the targets identified in the image and the corresponding targets in the point cloud map, by means of the joint calibration transformation relationship between the image and the point cloud map, to determine the semantic recognition objects.
Optionally, in the above method, the static environmental semantic features include edge features and plane features in a point cloud data frame.
Optionally, in the above method, the semantic recognition object is stored in a vectorized form within the semantic map localization layer.
Optionally, in the above method, the fine geographic category probability map for extracting ground markings is generated as follows:
generating from the image a first geographic category probability map for extracting ground markings;
generating from the point cloud data frames a second geographic category probability map for extracting ground markings; and
generating the fine geographic category probability map by fusing the first and second geographic category probability maps.
Optionally, in the above method, the fine geographic category probability map is saved in vectorized form within the semantic map planning layer.
Optionally, in the above method, the step of generating a high-definition semantic map planning layer includes:
generating, according to the semantic recognition object, a lane topology graph and a road topology graph using the lane separation lines, road edges, and stop lines extracted from the fine geographic category probability map.
An in-vehicle system for generating a high-definition semantic map according to another aspect of the present invention includes:
an image acquisition unit configured to acquire a plurality of images related to a surrounding environment;
the point cloud data acquisition unit is configured to acquire a point cloud map related to the surrounding environment;
a processing unit configured to perform the steps of:
determining a semantic recognition object from the targets identified in the image and the corresponding targets in the point cloud map, the semantic recognition object being associated with an object located in a plane intersecting the ground;
extracting static environment semantic features from the point cloud map;
storing the semantic recognition object and the descriptor about the static environment semantic feature in a semantic map localization layer;
generating, from the image and the point cloud map, a fine geographic category probability map for extracting ground markings; and
generating a high-definition semantic map planning layer using the semantic recognition object and the fine geographic category probability map, wherein the high-definition semantic map planning layer comprises a lane topology graph, a road topology graph, and basic road indication information.
Optionally, in the above vehicle-mounted system, the image acquisition unit includes:
a plurality of monocular cameras;
an image processor configured to perform the steps of:
performing distortion correction on the monocular images captured by the monocular cameras using internal calibration parameters, the internal calibration parameters being determined from image data of the same calibration object captured by one monocular camera at different angles and distances; and
stitching the distortion-corrected monocular images into one image using external calibration parameters, the external calibration parameters being determined from images of the same calibration object captured by the plurality of monocular cameras.
Optionally, in the above vehicle-mounted system, the point cloud data acquisition unit includes:
a laser radar configured to emit a first laser beam and to receive a reflected second laser beam;
a processor configured to perform the steps of:
performing reflectivity correction and motion error compensation on the point cloud data frames; and
stitching the corrected and compensated point cloud data frames together according to positioning information to obtain the point cloud map.
Optionally, the above vehicle-mounted system further comprises:
a GNSS/INS unit configured to provide GNSS/INS integrated navigation data;
a vehicle wheel speed meter unit configured to provide wheel speed data;
a multi-sensor data time synchronization unit configured to synchronize data of the image acquisition unit, the point cloud data acquisition unit, the GNSS/INS unit, and the vehicle wheel speed meter unit,
wherein the processing unit is configured to obtain the positioning information by performing fusion filtering and state estimation on two or more of: GNSS/INS integrated navigation data, wheel speed data, simultaneous localization and mapping (SLAM) positioning data based on lidar point cloud data, and multi-camera SLAM positioning data.
An apparatus for generating a high-definition semantic map according to another aspect of the present invention comprises a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor executes the program to implement the method described above.
A computer-readable storage medium according to still another aspect of the present invention stores a computer program thereon, wherein the program, when executed by a processor, implements the method as described above.
A navigational mapping vehicle according to yet another aspect of the invention comprises means for generating a high-definition semantic map having one or more of the features described above.
In one or more embodiments according to the invention, the semantic recognition objects and the descriptors of static environment semantic features are stored in a semantic map localization layer, and a high-definition semantic map planning layer is generated based on the semantic recognition objects. This gives the generated high-definition semantic map a flexible, extensible layer structure that not only facilitates interfacing with the matching-localization and path-planning functions required for autonomous driving, but also facilitates maintenance and updating. Furthermore, saving the semantic recognition objects, descriptors, and fine geographic category probability map in vectorized form can effectively reduce the data volume.
Drawings
The foregoing and/or other aspects and advantages of the present invention will become more apparent and more readily appreciated from the following description of the various aspects taken in conjunction with the accompanying drawings in which like or similar elements are designated with the same reference numerals. The drawings include:
FIG. 1 is a block diagram of an in-vehicle system for generating a high-definition semantic map according to one embodiment of the present invention.
Fig. 2A and 2B are top and side views, respectively, of a navigational mapping vehicle in accordance with another embodiment of the present invention.
Fig. 3 is a flow chart of a method for generating a high-definition semantic map according to yet another embodiment of the present invention.
Fig. 4 is a block diagram of an apparatus for generating a high-definition semantic map according to still another embodiment of the present invention.
Detailed Description
The present invention will now be described more fully hereinafter with reference to the accompanying drawings, in which illustrative embodiments of the invention are shown. This invention may, however, be embodied in different forms and should not be construed as limited to the embodiments set forth herein. These embodiments are provided so that the disclosure herein is thorough and fully conveys the scope of the invention to those skilled in the art.
In this specification, terms such as "comprising" and "including" mean that, in addition to the elements and steps directly and explicitly recited in the description and claims, the technical solution of the invention does not exclude the presence of other elements and steps.
Terms such as "first" and "second" do not denote an order of units in time, space, size, etc.; they are merely used to distinguish one unit from another.
FIG. 1 is a block diagram of an in-vehicle system for generating a high-definition semantic map according to one embodiment of the present invention.
The in-vehicle system 10 for generating a high-definition semantic map shown in fig. 1 includes an image acquisition unit 110, a point cloud data acquisition unit 120, a processing unit 130, a global navigation satellite system / inertial navigation system (GNSS/INS) unit 140, a vehicle wheel speed meter unit 150, and a multi-sensor data time synchronization unit 160. It should be noted that the GNSS/INS unit 140, the vehicle wheel speed meter unit 150, and the multi-sensor data time synchronization unit 160 may alternatively be regarded as units external to the in-vehicle system 10.
In the in-vehicle system 10 shown in fig. 1, the image acquisition unit 110 is configured to acquire a plurality of images related to the surrounding environment, and the point cloud data acquisition unit 120 is configured to acquire a point cloud map related to the surrounding environment. The GNSS/INS unit 140 is configured to provide GNSS/INS integrated navigation data and the vehicle wheel speed meter unit 150 is configured to provide wheel speed data.
Optionally, the image acquisition unit 110 comprises a plurality of monocular cameras 111 and an image processor 112 coupled to them. The image processor 112 is configured to generate an image in the following manner: it receives a plurality of monocular images captured by the monocular cameras 111 at the same moment, performs distortion correction on the image from each monocular camera according to that camera's internal calibration parameters, and then stitches the corrected monocular images together using the external calibration parameters to generate one image. In this embodiment, optionally, the internal calibration parameters may be determined from image data of the same calibration object captured by one monocular camera at different angles and distances, and the external calibration parameters may be determined from images of the same calibration object captured by the plurality of monocular cameras.
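As a concrete illustration of this step, below is a minimal sketch of per-camera distortion correction followed by stitching onto a common canvas. It assumes OpenCV-style intrinsics (camera matrix K plus distortion coefficients) and per-camera homographies standing in for the external calibration parameters; the function names, the homography formulation, and the naive overwrite blending are illustrative assumptions, not the patent's specified implementation.

```python
import cv2
import numpy as np

def undistort_and_stitch(frames, intrinsics, homographies, canvas_size):
    """Undistort each monocular frame, then warp all frames onto one canvas.

    frames       : list of BGR images captured at the same moment
    intrinsics   : list of (K, dist) pairs per camera (internal calibration)
    homographies : list of 3x3 canvas-mapping matrices (external calibration)
    canvas_size  : (width, height) of the stitched output
    """
    canvas = np.zeros((canvas_size[1], canvas_size[0], 3), dtype=np.uint8)
    for frame, (K, dist), H in zip(frames, intrinsics, homographies):
        rectified = cv2.undistort(frame, K, dist)        # distortion correction
        warped = cv2.warpPerspective(rectified, H, canvas_size)
        mask = warped.sum(axis=2) > 0                    # naive overwrite blending
        canvas[mask] = warped[mask]
    return canvas
```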
Optionally, the point cloud data acquisition unit 120 comprises a lidar 121 (e.g., a tilted multi-line lidar) and a processor 122. The lidar 121 is configured to emit a first laser beam toward the surrounding environment and to receive a second laser beam reflected by objects in the environment (e.g., buildings, traffic lights, traffic signs, vehicles, pedestrians, road dividers, and the road itself). The processor 122 is configured to generate the point cloud map in the following manner: it performs reflectivity correction and motion error compensation on each point cloud data frame, and then stitches the corrected and compensated frames together according to the positioning information to obtain the point cloud map.
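The frame-level processing of the processor 122 might look like the sketch below. The lookup-table reflectivity correction, the translation-only motion compensation, and the 4x4 pose inputs are illustrative assumptions; a production implementation would also interpolate rotation (e.g., by slerp) when deskewing.

```python
import numpy as np

def deskew(points, t_rel, T_start, T_end):
    """Motion error compensation: shift each point by the ego-translation
    linearly interpolated over the sweep (rotation interpolation omitted).

    points  : (N, 3) points of one frame, t_rel : per-point sweep fraction in [0, 1]
    """
    shift = t_rel[:, None] * (T_end[:3, 3] - T_start[:3, 3])
    return points + shift

def build_point_cloud_map(frames, poses, reflectivity_lut):
    """Assemble corrected point cloud frames into one map.

    frames           : list of (N, 4) arrays [x, y, z, raw_reflectivity_index]
    poses            : list of 4x4 sensor-to-world transforms from the positioning info
    reflectivity_lut : 1-D correction table indexed by raw reflectivity
    """
    world_points = []
    for pts, T in zip(frames, poses):
        refl = reflectivity_lut[pts[:, 3].astype(int)]   # reflectivity correction
        xyz1 = np.hstack([pts[:, :3], np.ones((len(pts), 1))])
        world = (T @ xyz1.T).T[:, :3]                    # stitch into the map frame
        world_points.append(np.column_stack([world, refl]))
    return np.vstack(world_points)
```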
The positioning information may be derived from positioning data associated with one or more of the following: GNSS/INS integrated navigation data, wheel speed data, simultaneous localization and mapping (SLAM) positioning data based on lidar point cloud data, and multi-camera SLAM positioning data. When several types of positioning data are employed, the processing unit 130 is configured to obtain the positioning information by applying fusion filtering to two or more of them.
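The patent does not fix a particular filter; one minimal reading of the fusion step, sketched below, is an information-form (covariance-weighted) combination of position estimates from the available sources, which is the stationary special case of a Kalman update.

```python
import numpy as np

def fuse_position_estimates(estimates):
    """Covariance-weighted fusion of position fixes from several sources.

    estimates : list of (mean, covariance) pairs, e.g. from GNSS/INS, wheel
                odometry, lidar SLAM and visual SLAM, all in one common frame.
    """
    info = sum(np.linalg.inv(P) for _, P in estimates)     # total information
    fused_cov = np.linalg.inv(info)
    fused_mean = fused_cov @ sum(np.linalg.inv(P) @ x for x, P in estimates)
    return fused_mean, fused_cov
```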
Optionally, the point cloud data frames are corrected using a reflectivity correction table derived from a reflectivity correction algorithm, and motion error compensation is applied to the frames based on the integrated navigation data (e.g., provided by the GNSS/INS unit 140) and the wheel speed data (e.g., provided by the vehicle wheel speed meter unit 150).
As shown in fig. 1, the multi-sensor data time synchronization unit 160 is coupled with the image acquisition unit 110, the point cloud data acquisition unit 120, the processing unit 130, the GNSS/INS unit 140, and the vehicle wheel speed meter unit 150, and is configured to synchronize data acquired by the image acquisition unit, the point cloud data acquisition unit, the GNSS/INS unit, and the vehicle wheel speed meter unit.
The processing unit 130 is coupled with the image acquisition unit 110, the point cloud data acquisition unit 120, the GNSS/INS unit 140, and the vehicle wheel speed meter unit 150.
In this embodiment, the processing unit 130 is configured to perform sensor data calibration operations, including but not limited to joint calibration of the lidar and integrated navigation data, calibration of the monocular cameras' internal parameters, calibration of the multi-camera external parameters, and joint calibration of the image acquisition unit's data with the lidar's data. For example, the processing unit 130 may jointly calibrate the point cloud data collected by the lidar, the GNSS/INS integrated navigation data, and the wheel speed data collected by the vehicle wheel speed meter, so as to obtain the transformation between the lidar sensor coordinate system and the integrated navigation sensor coordinate system. As another example, the processing unit 130 may determine a monocular camera's internal calibration parameters from image data of the same calibration object captured by that camera at different angles and distances, and determine the external calibration parameters from images of the same calibration object captured by the plurality of monocular cameras. As a further example, the processing unit 130 may determine the joint calibration transformation relationship between the image and the point cloud map from a calibrated image (e.g., an image stitched using the internal and external calibration parameters as described above) and calibrated lidar point cloud data (e.g., jointly calibrated with the GNSS/INS integrated navigation data and wheel speed data as described above).
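Once determined, the joint calibration transformation relationship is typically used by projecting lidar points into the image plane. The sketch below assumes a pinhole model with an extrinsic transform T_cam_lidar and camera matrix K; both names are illustrative.

```python
import numpy as np

def project_lidar_to_image(points_xyz, T_cam_lidar, K):
    """Project lidar points into the (undistorted, stitched) image.

    points_xyz  : (N, 3) lidar points
    T_cam_lidar : 4x4 transform from the lidar frame to the camera frame
    K           : 3x3 camera matrix of the stitched image
    """
    xyz1 = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
    cam = (T_cam_lidar @ xyz1.T).T[:, :3]
    in_front = cam[:, 2] > 0                   # keep points in front of the camera
    uvw = (K @ cam[in_front].T).T
    uv = uvw[:, :2] / uvw[:, 2:3]              # perspective divide to pixel coords
    return uv, in_front
```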
In the present embodiment, the processing unit 130 is further configured to generate a high-definition semantic map including a semantic map localization layer and a semantic map planning layer.
Optionally, the semantic map localization layer contains the semantic recognition objects and descriptors of the static environment semantic features. The semantic recognition objects described herein may be associated with objects lying in a plane intersecting the ground, including, for example but not limited to, traffic signs, traffic lights, and static obstacles. In this embodiment, a semantic recognition object may be determined in the following manner: targets (e.g., traffic signs, traffic lights, and static obstacles such as traffic bollards and street lamps) are identified in the image, the corresponding targets are identified in the point cloud map, and the targets identified in the image are then fusion-matched with the corresponding targets in the point cloud map, by means of the joint calibration transformation relationship between the image and the point cloud map, to determine the semantic recognition objects.
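One plausible form of the fusion matching processing is greedy IoU association between the 2-D detection boxes found in the image and the image-plane bounding boxes of point cloud clusters projected via the joint calibration; the sketch below makes that assumption explicit, and the box format and threshold are illustrative.

```python
def match_detections(image_boxes, cluster_boxes, iou_thresh=0.5):
    """Greedily pair image detections with projected point cloud clusters;
    each box is (x1, y1, x2, y2) in pixels, and matched pairs become
    candidate semantic recognition objects."""
    def iou(a, b):
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, x2 - x1) * max(0, y2 - y1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union > 0 else 0.0

    matches, used = [], set()
    for i, box in enumerate(image_boxes):
        j_best, iou_best = None, 0.0
        for j, cb in enumerate(cluster_boxes):
            if j not in used and iou(box, cb) > iou_best:
                j_best, iou_best = j, iou(box, cb)
        if j_best is not None and iou_best >= iou_thresh:
            used.add(j_best)
            matches.append((i, j_best))        # (image target, point cloud target)
    return matches
```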
In this embodiment, the static environment semantic features are extracted from the point cloud map, and descriptors of these features are saved in the semantic map localization layer. Optionally, the static environment semantic features include edge features and plane features in the point cloud data frames.
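Edge and plane features are commonly separated by local curvature, in the spirit of LOAM-style lidar feature extraction; the patent does not fix a particular extractor, so the approach and thresholds below are assumptions.

```python
import numpy as np

def classify_scan_features(scan_line, k=5, edge_thresh=1.0, plane_thresh=0.1):
    """Split one (N, 3) lidar scan line into edge and plane point indices by
    local curvature; thresholds are illustrative and sensor-dependent."""
    n = len(scan_line)
    curvature = np.full(n, np.nan)
    for i in range(k, n - k):
        diff = scan_line[i - k:i + k + 1].sum(axis=0) - (2 * k + 1) * scan_line[i]
        curvature[i] = np.linalg.norm(diff) / np.linalg.norm(scan_line[i])
    edges = np.where(curvature > edge_thresh)[0]    # sharp structure: poles, corners
    planes = np.where(curvature < plane_thresh)[0]  # flat structure: walls, road
    return edges, planes
```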
In this embodiment, the high-definition semantic map planning layer contains a lane topology map, a road topology map, and basic road indication information. For example, a fine geographic category probability map for extracting ground markings may be generated from the image and the point cloud map and stored in the semantic map planning layer; a lane topology map and a road topology map are then generated, according to the semantic recognition objects, from the lane separation lines, road edges, and stop lines extracted from the fine geographic category probability map.
Optionally, the fine geographic category probability map may be generated in the following manner: a first geographic category probability map for extracting ground markings is generated from the image, a second geographic category probability map for extracting ground markings is generated from the point cloud map, and the fine geographic category probability map is then generated by fusing the two. Generally, the first geographic category probability map is finer than the second.
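A minimal sketch of the fusion itself, assuming both probability maps have been rasterized onto a common grid of shape (H, W, num_categories) and are blended with a fixed weight favoring the finer image-derived map (the weighting scheme is an assumption):

```python
import numpy as np

def fuse_probability_maps(p_image, p_cloud, w_image=0.7):
    """Blend the image-derived and point-cloud-derived geographic category
    probability maps cell by cell, then renormalize each cell's categories."""
    assert p_image.shape == p_cloud.shape
    fused = w_image * p_image + (1.0 - w_image) * p_cloud
    return fused / fused.sum(axis=2, keepdims=True)
```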
Fig. 2A and 2B are top and side views, respectively, of a navigational mapping vehicle in accordance with another embodiment of the present invention.
As shown in fig. 2A and 2B, a plurality of monocular cameras 210 are provided on the outside of the vehicle 20 (e.g., at the front of its upper surface). The mounting height of the monocular cameras 210 is chosen, for example, to ensure that the combination of the monocular images they provide covers at least three lanes while maintaining adequate field-of-view overlap between adjacent cameras. The GNSS module 220 (e.g., a dual-antenna GNSS signal receiving module) is mounted, for example, in the middle of the upper surface of the vehicle 20, with the two antennas separated by at least 1 m, optionally 1.2 to 1.5 m. The lidar 230 (e.g., a tilted multi-line lidar module) is mounted at the rear of the upper surface of the vehicle 20, at a height chosen, for example, so that its field of view is not blocked by the rear of the vehicle, and at a tilt angle in the range of 30 to 60 degrees so that it covers as much as possible of the area within a safe following distance behind the vehicle. The vehicle wheel speed meter unit 240 is mounted near the wheels (e.g., near the rear wheels). The integrated navigation positioning unit 250 is mounted on the vehicle's center axis near the center of the rear axle, aligned as closely as possible with the direction of the center axis; it includes an accelerometer, a gyroscope, and a processing unit configured to fuse the satellite positioning signals received by the GNSS module 220 with the inertial navigation signals from the accelerometer and gyroscope to generate the GNSS/INS integrated navigation data. As shown, the vehicle 20 also includes a multi-sensor data time synchronization module 260 arranged on one side of the vehicle.
Fig. 3 is a flow chart of a method for generating a high-definition semantic map according to yet another embodiment of the present invention. Illustratively, the system shown in fig. 1 is used here as the entity implementing the method of this embodiment, but it should be noted that this embodiment and its modifications, adaptations, and variations are applicable not only to the system shown in fig. 1 but also to apparatuses having other structures.
As shown in fig. 3, in step 310, the processing unit 130 receives a plurality of images and a point cloud map of the surrounding environment from the image acquisition unit 110 and the point cloud data acquisition unit 120, respectively. Each image may be acquired, for example, by a plurality of monocular cameras: as described above, the monocular images captured by the cameras are distortion-corrected using the internal calibration parameters and then stitched into one image using the external calibration parameters. The lidar point cloud map may be obtained by stitching together, based on the positioning information, the point cloud data frames that have undergone reflectivity correction and motion error compensation.
As above, the positioning information may come from a single type of positioning data or be obtained by fusing several types. For example, two or more of the GNSS/INS integrated navigation data, wheel speed data, SLAM positioning data based on lidar point cloud data, and multi-camera SLAM positioning data may be combined by fusion filtering to obtain the positioning information.
Then, in step 320, the processing unit 130 determines the semantic recognition objects from the targets identified in the image and the corresponding targets in the point cloud map. In this step, the processing unit 130 may fusion-match the targets identified in the image with the corresponding targets in the point cloud map, by means of the joint calibration transformation relationship between the image and the point cloud map, to determine the semantic recognition objects.
Next, in step 330, the processing unit 130 extracts static environment semantic features from the point cloud map. It should be noted that the order of steps 320 and 330 may be interchanged.
Then, in step 340, the processing unit 130 saves the semantic recognition objects and the descriptors of the static environment semantic features in the semantic map localization layer. Optionally, the semantic recognition objects are saved in vectorized form within the semantic map localization layer.
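A vectorized localization-layer record might look like the sketch below, where each object is reduced to a category, a position, an extent, and a compact descriptor instead of its raw points; the field names and the JSON serialization are illustrative assumptions.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class SemanticObject:
    """Vectorized localization-layer record (illustrative fields)."""
    category: str        # e.g. "traffic_sign", "traffic_light"
    center: tuple        # (x, y, z) in map coordinates
    extent: tuple        # bounding box (dx, dy, dz)
    descriptor: list     # compact feature descriptor

sign = SemanticObject("traffic_sign", (312.4, 88.1, 5.2), (0.8, 0.1, 0.8), [0.12, 0.55])
record = json.dumps(asdict(sign))   # a few bytes per object instead of raw points
```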
After step 340, the method flow shown in fig. 3 proceeds to step 350. In this step, the processing unit 130 generates, from the image and the point cloud map, a fine geographic category probability map for extracting ground markings and saves it in the semantic map planning layer. Optionally, in step 350, this map is generated as follows: a first geographic category probability map for extracting ground markings is generated from the image, a second geographic category probability map for extracting ground markings is generated from the point cloud data frames, and the fine geographic category probability map is then generated by fusing the two. Optionally, the fine geographic category probability map is saved in vectorized form within the semantic map planning layer.
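Saving the planning-layer content in vectorized form could, for example, mean thresholding each category channel of the probability map and storing simplified marking outlines as polygons; the sketch below assumes that approach (the thresholds and the OpenCV contour pipeline are illustrative).

```python
import cv2
import numpy as np

def vectorize_markings(prob_map, category, thresh=0.6, eps=0.5):
    """Extract ground-marking outlines of one category from an (H, W, C)
    probability grid and return them as simplified polygons."""
    mask = (prob_map[:, :, category] > thresh).astype(np.uint8)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.approxPolyDP(c, eps, True).reshape(-1, 2) for c in contours]
```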
Next, in step 360, the processing unit 130 generates the high-definition semantic map planning layer based on the semantic recognition objects. Illustratively, the processing unit 130 generates a lane topology graph and a road topology graph, according to the semantic recognition objects, from the lane separation lines, road edges, and stop lines extracted from the fine geographic category probability map.
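A lane topology graph can be represented minimally as an adjacency map over lane segments, with edges for the transitions implied by the extracted separation lines, road edges, and stop lines; the input format below is an assumption for illustration.

```python
def build_lane_topology(lane_ids, transitions):
    """Build an adjacency map: nodes are lane segments, edges are labeled
    transitions ("successor" for along-road, "neighbor" for lane changes)."""
    graph = {lane: {"successors": [], "neighbors": []} for lane in lane_ids}
    for src, dst, relation in transitions:      # e.g. ("lane_1", "lane_2", "successor")
        graph[src][relation + "s"].append(dst)
    return graph

topo = build_lane_topology(["lane_1", "lane_2"],
                           [("lane_1", "lane_2", "successor")])
```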
Fig. 4 is a block diagram of an apparatus for generating a high-definition semantic map according to still another embodiment of the present invention.
The apparatus 40 shown in fig. 4 comprises a memory 410, a processor 420 and a computer program 430 stored on the memory 410 and executable on the processor 420, wherein execution of the computer program 430 can implement the method described above with reference to fig. 3.
According to another aspect of the invention there is also provided a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the method described above with reference to figure 3.
The embodiments and examples set forth herein are presented to best explain the present technology and its particular application, and thereby to enable those skilled in the art to make and use the invention. Those skilled in the art will, however, recognize that the foregoing description and examples are presented for illustration only. The description is not intended to be exhaustive or to limit the invention to the precise form disclosed.
In view of the foregoing, the scope of the present disclosure is determined by the following claims.

Claims (17)

1. A method for generating a high-definition semantic map, comprising the steps of:
acquiring a point cloud map and a plurality of images related to the surrounding environment;
determining a semantic recognition object from the target identified in the image and the corresponding target in the point cloud map, by performing fusion matching processing on the target identified in the image and the corresponding target in the point cloud map by means of the joint calibration transformation relationship between the image and the point cloud map, wherein the semantic recognition object is associated with an object located in a plane intersecting the ground;
extracting static environment semantic features from the point cloud map;
storing the semantic recognition object and the descriptor about the static environment semantic feature in a semantic map localization layer;
generating a fine geographic category probability map for extracting ground markings by fusing a first geographic category probability map for extracting ground markings and a second geographic category probability map for extracting ground markings, wherein the first and second geographic category probability maps are generated from the image and from the point cloud data frames, respectively; and
generating a high-definition semantic map planning layer using the semantic recognition object and the fine geographic category probability map, wherein the high-definition semantic map planning layer comprises road topological relations, lane topological relations, and basic road indication information.
2. The method of claim 1, wherein each of the images is acquired in the following manner:
performing distortion correction on the monocular images captured by a plurality of monocular cameras using internal calibration parameters; and
stitching the distortion-corrected monocular images into one image using external calibration parameters.
3. The method of claim 1, wherein the point cloud map is obtained as follows:
performing reflectivity correction and motion error compensation on the point cloud data frames; and
stitching the corrected and compensated point cloud data frames together according to positioning information to obtain the point cloud map.
4. The method of claim 3, wherein the positioning information is obtained by fusion filtering of two or more of: navigation positioning data, wheel speed data, simultaneous localization and mapping positioning data based on lidar point cloud data, and multi-camera simultaneous localization and mapping positioning data.
5. The method of claim 1, wherein the target is at least one of a traffic sign, a traffic light, and a static obstacle.
6. The method of claim 1, wherein the static environmental semantic features comprise edge features and plane features in a point cloud data frame.
7. The method of claim 1, wherein the semantic recognition object is stored in vectorized form within the semantic map localization layer.
8. The method of claim 1, wherein the fine geographic category probability map is stored in vectorized form within the semantic map planning layer.
9. The method of claim 1, wherein the step of generating a high-definition semantic map planning layer comprises:
generating, according to the semantic recognition object, a lane topology graph and a road topology graph using the lane separation lines, road edges, and stop lines extracted from the fine geographic category probability map.
10. An in-vehicle system for generating a high-definition semantic map, comprising:
an image acquisition unit configured to acquire a plurality of images related to a surrounding environment;
the point cloud data acquisition unit is configured to acquire a point cloud map related to the surrounding environment;
a processing unit configured to perform the steps of:
determining a semantic recognition object from the target identified in the image and the corresponding target in the point cloud map, by performing fusion matching processing on the target identified in the image and the corresponding target in the point cloud map by means of the joint calibration transformation relationship between the image and the point cloud map, wherein the semantic recognition object is associated with an object located in a plane intersecting the ground;
extracting static environment semantic features from the point cloud map;
storing the semantic recognition object and the descriptor about the static environment semantic feature in a semantic map localization layer;
generating a fine geographic category probability map for extracting ground markings by fusing a first geographic category probability map for extracting ground markings and a second geographic category probability map for extracting ground markings, wherein the first and second geographic category probability maps are generated from the image and from the point cloud data frames, respectively; and
generating a high-definition semantic map planning layer using the semantic recognition object and the fine geographic category probability map, wherein the high-definition semantic map planning layer comprises a lane topology graph, a road topology graph, and basic road indication information.
11. The in-vehicle system of claim 10, wherein the image acquisition unit comprises:
a plurality of monocular cameras;
an image processor configured to perform the steps of:
performing distortion correction on the monocular images captured by the monocular cameras using internal calibration parameters, the internal calibration parameters being determined from image data of the same calibration object captured by one monocular camera at different angles and distances; and
stitching the distortion-corrected monocular images into one image using external calibration parameters, the external calibration parameters being determined from images of the same calibration object captured by the plurality of monocular cameras.
12. The in-vehicle system of claim 10, wherein the point cloud data acquisition unit comprises:
a laser radar configured to emit a first laser beam and to receive a reflected second laser beam;
a processor configured to perform the steps of:
performing reflectivity correction and motion error compensation on the point cloud data frames; and
stitching the corrected and compensated point cloud data frames together according to positioning information to obtain the point cloud map.
13. The in-vehicle system of claim 12, further comprising:
a GNSS/INS unit configured to provide GNSS/INS integrated navigation data;
a vehicle wheel speed meter unit configured to provide wheel speed data;
a multi-sensor data time synchronization unit configured to synchronize data of the image acquisition unit, the point cloud data acquisition unit, the GNSS/INS unit, and the vehicle wheel speed meter unit,
wherein the processing unit is configured to obtain the positioning information by performing fusion filtering on two or more of: GNSS/INS integrated navigation data, wheel speed data, simultaneous localization and mapping positioning data based on lidar point cloud data, and multi-camera simultaneous localization and mapping positioning data.
14. The in-vehicle system of claim 10, wherein the processing unit generates the high-definition semantic map planning layer as follows:
generating, according to the semantic recognition object, a lane topology graph and a road topology graph using the lane separation lines, road edges, and stop lines extracted from the fine geographic category probability map.
15. An apparatus for generating a high-definition semantic map, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor executes the program to implement the method of any one of claims 1-9.
16. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-9.
17. A navigation mapping vehicle comprising the apparatus for generating a high-definition semantic map according to any one of claims 10-15.
CN201910323449.4A 2019-04-22 2019-04-22 Method, apparatus and computer storage medium for generating high-definition semantic map Active CN110057373B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910323449.4A CN110057373B (en) 2019-04-22 2019-04-22 Method, apparatus and computer storage medium for generating high-definition semantic map

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910323449.4A CN110057373B (en) 2019-04-22 2019-04-22 Method, apparatus and computer storage medium for generating high-definition semantic map

Publications (2)

Publication Number Publication Date
CN110057373A (en) 2019-07-26
CN110057373B (en) 2023-11-03

Family

ID=67320043

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910323449.4A Active CN110057373B (en) 2019-04-22 2019-04-22 Method, apparatus and computer storage medium for generating high-definition semantic map

Country Status (1)

Country Link
CN (1) CN110057373B (en)

Families Citing this family (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112419404A (en) * 2019-08-21 2021-02-26 北京初速度科技有限公司 Map data acquisition method and device
CN112802095B (en) * 2019-11-14 2024-04-16 北京四维图新科技股份有限公司 Positioning method, device and equipment, and automatic driving positioning system
CN110986945B (en) * 2019-11-14 2023-06-27 上海交通大学 Local navigation method and system based on semantic altitude map
CN110825093B (en) * 2019-11-28 2021-04-16 安徽江淮汽车集团股份有限公司 Automatic driving strategy generation method, device, equipment and storage medium
CN111008660A (en) * 2019-12-03 2020-04-14 北京京东乾石科技有限公司 Semantic map generation method, device and system, storage medium and electronic equipment
US11940804B2 (en) * 2019-12-17 2024-03-26 Motional Ad Llc Automated object annotation using fused camera/LiDAR data points
CN111025331B (en) * 2019-12-25 2023-05-23 湖北省空间规划研究院 Laser radar mapping method based on rotating structure and scanning system thereof
CN111176279B (en) * 2019-12-31 2023-09-26 北京四维图新科技股份有限公司 Determination method, device, equipment and storage medium for vulnerable crowd area
CN111192341A (en) * 2019-12-31 2020-05-22 北京三快在线科技有限公司 Method and device for generating high-precision map, automatic driving equipment and storage medium
CN111207762B (en) * 2019-12-31 2021-12-07 深圳一清创新科技有限公司 Map generation method and device, computer equipment and storage medium
CN113128303A (en) * 2019-12-31 2021-07-16 华为技术有限公司 Automatic driving method, related equipment and computer readable storage medium
JP7111118B2 (en) * 2020-01-29 2022-08-02 トヨタ自動車株式会社 Map generation data collection device and map generation data collection method
CN111311709B (en) * 2020-02-05 2023-06-20 北京三快在线科技有限公司 Method and device for generating high-precision map
WO2021223116A1 (en) * 2020-05-06 2021-11-11 上海欧菲智能车联科技有限公司 Perceptual map generation method and apparatus, computer device and storage medium
CN111595357B (en) * 2020-05-14 2022-05-20 广州文远知行科技有限公司 Visual interface display method and device, electronic equipment and storage medium
CN111561923B (en) * 2020-05-19 2022-04-15 北京数字绿土科技股份有限公司 SLAM (simultaneous localization and mapping) mapping method and system based on multi-sensor fusion
CN113721599B (en) * 2020-05-25 2023-10-20 华为技术有限公司 Positioning method and positioning device
CN111710040B (en) * 2020-06-03 2024-04-09 纵目科技(上海)股份有限公司 High-precision map construction method, system, terminal and storage medium
CN111958592B (en) * 2020-07-30 2021-08-20 国网智能科技股份有限公司 Image semantic analysis system and method for transformer substation inspection robot
CN112254737A (en) * 2020-10-27 2021-01-22 北京晶众智慧交通科技股份有限公司 Map data conversion method
CN112556654A (en) * 2020-12-17 2021-03-26 武汉中海庭数据技术有限公司 High-precision map data acquisition device and method
CN113418528A (en) * 2021-05-31 2021-09-21 江苏大学 Intelligent automobile-oriented traffic scene semantic modeling device, modeling method and positioning method
CN113419245B (en) * 2021-06-23 2022-05-31 北京易航远智科技有限公司 Real-time mapping system and mapping method based on V2X
CN113656525B (en) * 2021-08-19 2024-04-16 广州小鹏自动驾驶科技有限公司 Map processing method and device
CN113978484A (en) * 2021-09-30 2022-01-28 东风汽车集团股份有限公司 Vehicle control method, device, electronic device and storage medium
CN116206278A (en) * 2021-10-14 2023-06-02 华为技术有限公司 Road information identification method and device, electronic equipment, vehicle and medium
CN115100631A (en) * 2022-07-18 2022-09-23 浙江省交通运输科学研究院 Road map acquisition system and method for multi-source information composite feature extraction
CN114973910B (en) * 2022-07-27 2022-11-11 禾多科技(北京)有限公司 Map generation method and device, electronic equipment and computer readable medium
CN115164918B (en) * 2022-09-06 2023-02-03 联友智连科技有限公司 Semantic point cloud map construction method and device and electronic equipment


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10078790B2 (en) * 2017-02-16 2018-09-18 Honda Motor Co., Ltd. Systems for generating parking maps and methods thereof

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107063258A (en) * 2017-03-07 2017-08-18 重庆邮电大学 A kind of mobile robot indoor navigation method based on semantic information
CN107451526A (en) * 2017-06-09 2017-12-08 蔚来汽车有限公司 The structure of map and its application
CN109117718A (en) * 2018-07-02 2019-01-01 东南大学 A kind of semantic map structuring of three-dimensional towards road scene and storage method
CN109556615A (en) * 2018-10-10 2019-04-02 吉林大学 The driving map generation method of Multi-sensor Fusion cognition based on automatic Pilot
CN109410301A (en) * 2018-10-16 2019-03-01 张亮 High-precision semanteme map production method towards pilotless automobile
CN109556617A (en) * 2018-11-09 2019-04-02 同济大学 A kind of map elements extracting method of automatic Jian Tu robot
CN109461211A (en) * 2018-11-12 2019-03-12 南京人工智能高等研究院有限公司 Semantic vector map constructing method, device and the electronic equipment of view-based access control model point cloud

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Yang Yurong; Li Feng. Research on key technologies of high-precision navigation maps based on laser point cloud scanning. Modern Computer (Professional Edition), 2018, (09). *

Also Published As

Publication number Publication date
CN110057373A (en) 2019-07-26

Similar Documents

Publication Publication Date Title
CN110057373B (en) Method, apparatus and computer storage medium for generating high-definition semantic map
US10684372B2 (en) Systems, devices, and methods for autonomous vehicle localization
Ghallabi et al. LIDAR-Based road signs detection For Vehicle Localization in an HD Map
Brenner Extraction of features from mobile laser scanning data for future driver assistance systems
CN110945379A (en) Determining yaw error from map data, laser, and camera
US20210199437A1 (en) Vehicular component control using maps
JP5227065B2 (en) 3D machine map, 3D machine map generation device, navigation device and automatic driving device
JP5157067B2 (en) Automatic travel map creation device and automatic travel device.
US11512975B2 (en) Method of navigating an unmanned vehicle and system thereof
JP2001331787A (en) Road shape estimating device
CN113673282A (en) Target detection method and device
KR102425735B1 (en) Autonomous Driving Method and System Using a Road View or a Aerial View from a Map Server
US20130293716A1 (en) Mobile mapping system for road inventory
CN112189225A (en) Lane line information detection apparatus, method, and computer-readable recording medium storing computer program programmed to execute the method
Shunsuke et al. GNSS/INS/on-board camera integration for vehicle self-localization in urban canyon
KR101925366B1 (en) electronic mapping system and method using drones
Moras et al. Drivable space characterization using automotive lidar and georeferenced map information
JP7114165B2 (en) Position calculation device and position calculation program
Ghallabi et al. LIDAR-based high reflective landmarks (HRL) s for vehicle localization in an HD map
JP7418196B2 (en) Travel trajectory estimation method and travel trajectory estimation device
CN116997771A (en) Vehicle, positioning method, device, equipment and computer readable storage medium thereof
CN112622893A (en) Multi-sensor fusion target vehicle automatic driving obstacle avoidance method and system
CN115540889A (en) Locating autonomous vehicles using cameras, GPS and IMU
Belaroussi et al. Vehicle attitude estimation in adverse weather conditions using a camera, a GPS and a 3D road map
CN109991984B (en) Method, apparatus and computer storage medium for generating high-definition map

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant