CN114186007A - High-precision map generation method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN114186007A
Authority
CN
China
Prior art keywords: map element, target, map, candidate, point cloud
Legal status: Pending (an assumption, not a legal conclusion; Google has not performed a legal analysis)
Application number: CN202111328425.1A
Other languages: Chinese (zh)
Inventors: 彭玮琳 (Peng Weilin), 黄杰 (Huang Jie), 李春晓 (Li Chunxiao), 彭亮 (Peng Liang), 白宇 (Bai Yu)
Current and Original Assignee: Beijing Baidu Netcom Science and Technology Co Ltd (listed assignees may be inaccurate)
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority application: CN202111328425.1A
Publication: CN114186007A


Classifications

    • G06F16/29 — Geographical information databases
    • G06T17/05 — Geographic models (3D modelling)
    • G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T7/70 — Determining position or orientation of objects or cameras
    • G06T2207/10004 — Still image; photographic image
    • G06T2207/10028 — Range image; depth image; 3D point clouds
    • G06T2207/10032 — Satellite or aerial image; remote sensing
    • G06T2207/10044 — Radar image
    • G06T2207/30241 — Trajectory


Abstract

The disclosure provides a high-precision map generation method and apparatus, an electronic device, and a storage medium, relating to the fields of automatic driving and intelligent transportation. The scheme is as follows: acquire at least one frame of source image captured by a vehicle-mounted camera and point cloud data captured by a vehicle-mounted radar, and perform map element detection on the source image to determine a first position of a target map element in the source image; determine, from the first position, a second position at which the target map element projects into the point cloud data; match the second position against the third position of each candidate map element in the point cloud data to determine, from those third positions, the target position that matches the second position; and generate an electronic map from the target map element and the target position. By combining image data with point cloud data to determine the world coordinates of map elements in the real physical world, the difficulty of determining each map element's coordinates is reduced and the efficiency of that determination is improved.

Description

High-precision map generation method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of artificial intelligence, in particular to the technical fields of automatic driving and intelligent transportation, and more particularly to a high-precision map generation method, apparatus, electronic device, and storage medium.
Background
A high-precision map, also known as an HD map, is used by autonomous vehicles. It carries accurate vehicle position information and rich road element data, helps a vehicle anticipate complex road-surface information such as gradient, curvature, and heading, and thereby better avoid potential risks. High-precision map production describes map elements in the real physical world with a minimal amount of data, where each map element can consist of geometric coordinates and attribute information. It is therefore important to acquire the coordinates of map elements and present them faithfully in an electronic map, so that the map can serve people's daily travel navigation.
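The map-element structure described above, geometric coordinates plus attribute information, can be sketched as a small data type. This is an illustrative assumption about the layout; the field names below are not taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class MapElement:
    """A map element: geometry (world coordinates) plus attributes."""
    category: str                    # e.g. "lane_line", "traffic_light"
    geometry: list                   # list of (x, y, z) world coordinates
    attributes: dict = field(default_factory=dict)

# A traffic light whose attributes also record a relationship to another
# element (the intersection it belongs to), as the text mentions.
light = MapElement(
    category="traffic_light",
    geometry=[(431.2, 98.7, 5.3)],
    attributes={"intersection_id": "J-42"},  # hypothetical identifier
)
print(light.category)  # traffic_light
```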
Disclosure of Invention
The disclosure provides a method and a device for generating a high-precision map, an electronic device and a storage medium.
According to an aspect of the present disclosure, there is provided a high-precision map generation method, including:
acquiring at least one frame of source image acquired by a vehicle-mounted camera and point cloud data acquired by a vehicle-mounted radar;
performing map element detection on the source image to determine a first position of a target map element in the source image;
determining a second position of the target map element projected into the point cloud data according to the first position;
matching the second position with a third position of each candidate map element in the point cloud data to determine a target position matched with the second position from the third position of each candidate map element;
and generating an electronic map according to the target map elements and the target position.
According to another aspect of the present disclosure, there is provided a high-precision map generating apparatus including:
the acquisition module is used for acquiring at least one frame of source image acquired by the vehicle-mounted camera and point cloud data acquired by the vehicle-mounted radar;
the detection module is used for carrying out map element detection on the source image so as to determine a first position of a target map element in the source image;
the determining module is used for determining a second position of the target map element projected to the point cloud data according to the first position;
the matching module is used for matching the second position with the third position of each candidate map element in the point cloud data so as to determine a target position matched with the second position from the third position of each candidate map element;
and the generating module is used for generating an electronic map according to the target map elements and the target position.
According to still another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a high-precision map generation method set forth in the above-described aspect of the disclosure.
According to yet another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium of computer instructions for causing a computer to perform the high precision map generation method set forth in the above-described aspect of the present disclosure.
According to yet another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the high-precision map generation method set forth in the above-mentioned aspect of the present disclosure.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 is a schematic flow chart of a high-precision map generation method according to a first embodiment of the present disclosure;
fig. 2 is a schematic flow chart of a high-precision map generation method according to a second embodiment of the present disclosure;
fig. 3 is a schematic flow chart of a high-precision map generation method according to a third embodiment of the present disclosure;
fig. 4 is a schematic flow chart of a high-precision map generation method according to a fourth embodiment of the present disclosure;
fig. 5 is a schematic flow chart of a high-precision map generation method according to a fifth embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of a high-precision map generating apparatus according to a sixth embodiment of the present disclosure;
fig. 7 is a schematic block diagram of an example electronic device that can be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
High-precision map production describes map elements in the real physical world with a minimal amount of data. Each map element can consist of geometric coordinates and attribute information, and may also carry relationships to other map elements (for example, the intersection to which a certain traffic light belongs). One of the core requirements of high-precision map production is therefore to obtain the world coordinates of each map element in the physical world (such as lane lines, traffic lights, and crosswalks).
At present, in the acquisition stage of map production, a collection vehicle acquires a GPS (Global Positioning System) signal in real time to generate a collection trajectory while gathering related data through devices such as a laser radar and a camera; the collected data then flows into the production stage, where the world coordinates of map elements are obtained from it.
In the related art, a common scheme for obtaining the real-world coordinates of map elements in high-precision map production is as follows: the collection vehicle acquires GPS signals while the laser radar scans, and the point clouds collected over multiple radar passes are then stitched offline, yielding the world coordinates of each point-cloud frame and, in turn, of each point. In production, the required map elements are found in the point cloud, and their precise coordinates can be expressed by points of the point cloud.
However, because the point cloud collected by the laser radar carries only reflection-intensity information, lacks color, and contains noise, labeling map elements directly in the point cloud becomes very difficult when the GPS signal is poor or the point cloud stitching is imperfect.
Therefore, in order to solve the existing problems, the present disclosure provides a high-precision map generation method, device, electronic device and storage medium.
A high-precision map generation method, apparatus, electronic device, and storage medium according to embodiments of the present disclosure are described below with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of a high-precision map generation method according to a first embodiment of the present disclosure.
The disclosed embodiments are exemplified in that the high-precision map generation method is configured in a high-precision map generation apparatus that can be applied to any electronic device so that the electronic device can perform a high-precision map generation function.
The electronic device may be any device having a computing capability, for example, a Personal Computer (PC), a mobile terminal, a server, and the like, and the mobile terminal may be a hardware device having various operating systems, touch screens, and/or display screens, such as a mobile phone, a tablet Computer, a Personal digital assistant, and a wearable device.
As shown in fig. 1, the high-precision map generation method may include the steps of:
step 101, acquiring at least one frame of source image acquired by a vehicle-mounted camera and point cloud data acquired by a vehicle-mounted radar.
In the embodiment of the disclosure, the vehicle-mounted camera can be used for carrying out image acquisition on the map elements in the real world to obtain the source image containing the map elements, so that at least one frame of source image acquired by the vehicle-mounted camera can be obtained. And moreover, point cloud data acquired by the vehicle-mounted radar can be acquired.
Step 102, map element detection is carried out on the source image so as to determine a first position of a target map element in the source image.
In the embodiment of the present disclosure, based on an image recognition technology, map element detection may be performed on a source image to determine a map element present in the source image, which is denoted as a target map element in the present disclosure, and after the target map element is detected, a position of the target map element in the source image may be determined, which is denoted as a first position in the present disclosure.
As an example, map element detection may be performed on the source image based on an object detection technique to determine a position of the detection box and a category to which the object map element belongs within the detection box, such that a first position of the object map element in the source image may be determined according to the position of the detection box. For example, the position of the detection frame may be set as the first position.
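The step above can be sketched in code: given a detector's output box (the detector itself is assumed, e.g. any 2D object detector), the box, or here its center, serves as the element's first position. Names and values below are illustrative, not from the patent.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """Hypothetical detector output for one map element."""
    box: tuple          # (x_min, y_min, x_max, y_max) in image pixels
    category: str       # map-element class, e.g. "traffic_light"
    score: float        # detector confidence

def first_position(det: Detection) -> tuple:
    """Derive the first position from the detection box (here: its center)."""
    x_min, y_min, x_max, y_max = det.box
    return ((x_min + x_max) / 2.0, (y_min + y_max) / 2.0)

det = Detection(box=(100, 40, 140, 120), category="traffic_light", score=0.92)
print(first_position(det))  # (120.0, 80.0)
```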
And 103, determining a second position of the target map element projected to the point cloud data according to the first position.
In the embodiment of the present disclosure, a target map element in a source image may be projected into point cloud data, and a position of the target map element projected into the point cloud data, which is denoted as a second position in the present disclosure, is determined. That is, a second location of the projection of the target map element into the point cloud data may be determined from a first location of the target map element in the source image.
And 104, matching the second position with the third position of each candidate map element in the point cloud data so as to determine a target position matched with the second position from the third position of each candidate map element.
In the embodiment of the present disclosure, each candidate map element in the point cloud data may be obtained by detecting through a deep learning technique, for example, a convolutional neural network may be adopted to perform target detection on the point cloud data to obtain each candidate map element.
In the embodiment of the present disclosure, the second position of the target map element in the point cloud data may be matched with the third position of each candidate map element in the point cloud data, so as to determine the target position matched with the second position from the third position of each candidate map element.
For example, assuming that the number of candidate map elements is 3, which are candidate map elements 1, 2 and 3, respectively, and assuming that the degree of matching between the third position and the second position of the candidate map element 2 is higher than that of other candidate map elements, the third position of the candidate map element 2 may be used as the target position matching with the second position.
And 105, generating an electronic map according to the target map elements and the target position.
In the embodiment of the present disclosure, an electronic map may be generated according to the target map element and the target position, for example, the target position of the target map element may be labeled in the electronic map.
It should be understood that the electronic map may also be updated according to the target map element and the target position. For example, when the position of the target map element has changed, its position in the electronic map may be updated to the target position; when the target map element has not previously appeared in the electronic map, it may be added to the electronic map at the target position.
The number of the target map elements present in the source image may be one or more. When a plurality of target map elements exist in the source image, the target position of each target map element can be obtained through the steps, which are not repeated herein.
As an application scene, at least one camera can be installed on the acquisition vehicle to acquire images of a real physical environment and record physical world information. Because the image has RGB information, map elements can be accurately extracted from the image based on mature image recognition technology. However, since the image is a two-dimensional coordinate system, it is difficult to convert the map elements extracted from the image into a world coordinate system, and the point cloud data has world coordinates, the coordinates of the map elements can be labeled in the electronic map by combining the image data and the point cloud data. Therefore, the labeling difficulty of map elements can be reduced, and the labeling efficiency is improved.
According to the high-precision map generation method, at least one frame of source image acquired by a vehicle-mounted camera and point cloud data acquired by a vehicle-mounted radar are acquired, map element detection is carried out on the source image to determine a first position of a target map element in the source image, then, a second position of the target map element projected to the point cloud data can be determined according to the first position, the second position is matched with a third position of each candidate map element in the point cloud data, a target position matched with the second position is determined from the third position of each candidate map element, and therefore an electronic map can be generated according to the target map element and the target position. Therefore, the world coordinates of the map elements in the real physical world are determined by combining the image data and the point cloud data, the determination difficulty of the coordinates of each map element can be reduced, and the determination efficiency of the coordinates of each map element can be improved.
In order to clearly illustrate how the target position matching the second position is determined in the above embodiments of the present disclosure, the present disclosure also provides a high-precision map generation method.
Fig. 2 is a schematic flow chart of a high-precision map generation method according to a second embodiment of the present disclosure.
As shown in fig. 2, the high-precision map generation method may include the steps of:
step 201, acquiring at least one frame of source image acquired by a vehicle-mounted camera and point cloud data acquired by a vehicle-mounted radar.
Step 202, map element detection is performed on the source image to determine a first position of the target map element in the source image.
Step 203, determining a second position of the target map element projected to the point cloud data according to the first position.
The execution process of steps 201 to 203 may refer to the execution process of any embodiment of the present disclosure, and is not described herein again.
And 204, matching the second position with a third position of each candidate map element in the point cloud data to determine the matching degree of each candidate map element.
In the embodiment of the present disclosure, the second position may be matched with a third position of each candidate map element in the point cloud data to determine a matching degree of each candidate map element.
As an example, for each candidate map element, a distance between the second position and the third position of the candidate map element may be calculated, and the degree of matching of the candidate map element may be determined based on the distance. The distance is in an inverse relationship with the matching degree, that is, the smaller the distance between the second position and the third position, the higher the matching degree of the candidate map element is, and conversely, the larger the distance between the second position and the third position, the lower the matching degree of the candidate map element is.
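The distance-based matching just described can be sketched as follows. The patent states only that matching degree is inversely related to distance; the concrete formula `1 / (1 + d)` below is one hedged choice that satisfies that relationship, not the patent's formula.

```python
import math

def matching_degree(second_pos, third_pos):
    """Matching degree inversely related to Euclidean distance:
    the smaller the distance, the higher the degree."""
    d = math.dist(second_pos, third_pos)
    return 1.0 / (1.0 + d)

def best_match(second_pos, candidates):
    """Return (index, position) of the candidate with the maximum degree."""
    degrees = [matching_degree(second_pos, p) for p in candidates]
    i = max(range(len(degrees)), key=degrees.__getitem__)
    return i, candidates[i]

# Three candidate elements; the second lies closest to the projected
# (second) position, so its third position becomes the target position.
second = (10.0, 5.0, 1.2)
thirds = [(40.0, 5.0, 1.0), (10.5, 5.2, 1.1), (-3.0, 8.0, 0.9)]
idx, target_pos = best_match(second, thirds)
print(idx)  # 1
```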
Step 205, according to the matching degree of each candidate map element, determining a matching map element corresponding to the maximum matching degree from each candidate map element.
In the embodiment of the present disclosure, the candidate map element corresponding to the maximum matching degree may be determined from the matching degrees of the candidate elements, and is referred to as a matching map element in the present disclosure.
And step 206, determining the position of the matching map element in the point cloud data as a target position matched with the second position.
In the disclosed embodiment, the position of the matching map element in the point cloud data may be determined as a target position matching the second position.
And step 207, generating an electronic map according to the target map elements and the target position.
The execution process of step 207 may refer to the execution process of any embodiment of the present disclosure, and is not described herein again.
According to the high-precision map generation method, the candidate map element which is most matched with the projected position of the target map element is determined from the point cloud data, and the position of the most matched candidate map element is used as the target position of the target map element in the world coordinate system, so that the accuracy and reliability of the target position determination result can be improved.
It should be noted that, when the data amount of the point cloud data is large, the workload of identifying each map element from the point cloud data is very huge. Therefore, in the present disclosure, in order to save computing resources and improve recognition efficiency, part of the point cloud data including the target map elements may be screened from the point cloud data, and the part of the point cloud data is marked as a candidate area in the present disclosure, so that the target recognition may be performed only according to the candidate area to obtain each candidate map element. The above process is described in detail with reference to example three.
Fig. 3 is a schematic flow chart of a high-precision map generation method provided in the third embodiment of the present disclosure.
As shown in fig. 3, the high-precision map generation method may include the steps of:
step 301, acquiring at least one frame of source image acquired by a vehicle-mounted camera and point cloud data acquired by a vehicle-mounted radar.
Step 302, map element detection is performed on a source image to determine a first position of a target map element in the source image.
Step 303, determining a second position of the target map element projected to the point cloud data according to the first position.
The execution process of steps 301 to 303 may refer to the execution process of any embodiment of the present disclosure, and is not described herein again.
And step 304, acquiring a fourth position where the vehicle-mounted camera collects the source image.
In the embodiment of the present disclosure, when the vehicle-mounted camera collects an image, the vehicle-mounted camera may be located by the vehicle-mounted positioning device, so that a position where the vehicle-mounted camera collects a source image may be determined according to the position information detected by the vehicle-mounted positioning device, and the position is denoted as a fourth position in the present disclosure. The vehicle-mounted positioning equipment is a sensor capable of realizing positioning, position measurement and attitude measurement.
As an example, the acquisition time of the source image may be obtained, and according to the acquisition time, a position matching the acquisition time, referred to as a fourth position in this disclosure, is obtained from the position information detected by the vehicle-mounted positioning device.
As another example, the acquisition time of the source image may be acquired, a position matched with the acquisition time is acquired from the position information detected by the vehicle-mounted positioning device according to the acquisition time, and the source image is marked according to the matched position, so that the fourth position where the vehicle-mounted camera acquires the source image may be determined according to the marking information of the source image.
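Looking up the position matching the acquisition time can be sketched as interpolation over a time-sorted positioning log. The patent says only that a matching position is obtained; linear interpolation between the two bracketing log entries is an illustrative assumption.

```python
import bisect

def position_at(timestamps, positions, t):
    """Interpolate the capture (fourth) position at image time t from a
    time-sorted log of the vehicle-mounted positioning device."""
    i = bisect.bisect_left(timestamps, t)
    if i == 0:
        return positions[0]          # before the log starts
    if i == len(timestamps):
        return positions[-1]         # after the log ends
    t0, t1 = timestamps[i - 1], timestamps[i]
    w = (t - t0) / (t1 - t0)
    p0, p1 = positions[i - 1], positions[i]
    return tuple(a + w * (b - a) for a, b in zip(p0, p1))

ts = [0.0, 1.0, 2.0]                        # log timestamps (s)
ps = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0)]  # logged positions
print(position_at(ts, ps, 0.5))  # (5.0, 0.0)
```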
And 305, screening a candidate area containing the fourth position from the point cloud data according to the fourth position.
In the embodiment of the present disclosure, the candidate region refers to a part of the point cloud data including the fourth position, or referred to as candidate point cloud data.
In the embodiment of the disclosure, a candidate region including the fourth position may be screened from the point cloud data according to the fourth position. For example, the candidate area may be cut from the point cloud data with the fourth position as the center and the radius as the set length.
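Cutting the candidate area out of the point cloud, centered on the fourth position with a set radius, can be sketched as a simple range filter. The radius value and the 2D (ground-plane) distance check are illustrative assumptions.

```python
import math

def crop_candidate_region(points, center, radius):
    """Keep only points within `radius` of the capture position (the
    fourth position), yielding the candidate region of the point cloud."""
    cx, cy = center[0], center[1]
    return [p for p in points
            if math.hypot(p[0] - cx, p[1] - cy) <= radius]

cloud = [(1.0, 1.0, 0.2), (50.0, 0.0, 0.1), (3.0, -2.0, 1.5)]
region = crop_candidate_region(cloud, center=(0.0, 0.0), radius=10.0)
print(len(region))  # 2
```

Only the candidate region is then passed to map element detection, which is what saves computing resources relative to detecting over the full cloud.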
Step 306, map element detection is performed on the candidate area to obtain each candidate map element.
In the embodiment of the disclosure, map elements of the candidate area may be detected based on a deep learning technique to obtain each candidate map element. For example, a convolutional neural network may be used to perform target detection on the candidate regions to obtain each candidate map element.
And 307, matching the third position and the second position of each candidate map element in the point cloud data to determine the matching degree of each candidate map element.
In the embodiment of the present disclosure, the third position and the second position of each candidate map element in the point cloud data may be matched to determine the matching degree of each candidate map element.
As an example, for each candidate map element, a distance between the second position and the third position of the candidate map element may be calculated, and the degree of matching of the candidate map element may be determined based on the distance. The distance is in an inverse relationship with the matching degree, that is, the smaller the distance between the second position and the third position, the higher the matching degree of the candidate map element is, and conversely, the larger the distance between the second position and the third position, the lower the matching degree of the candidate map element is.
And 308, determining a matching map element corresponding to the maximum matching degree from the candidate map elements according to the matching degree of the candidate map elements.
Step 309, determining the position of the matching map element in the point cloud data as a target position matching the second position.
In step 310, an electronic map is generated according to the target map elements and the target position.
The execution process of steps 308 to 310 may refer to the execution process of any embodiment of the present disclosure, and is not described herein again.
According to the high-precision map generation method, the fourth position where the vehicle-mounted camera collects the source image is obtained; screening a candidate area containing the fourth position from the point cloud data according to the fourth position; carrying out map element detection on the candidate areas to obtain each candidate map element; and matching the third position and the second position of each candidate map element in the point cloud data to determine the matching degree of each candidate map element. Therefore, partial point cloud data containing the target map elements can be screened from the point cloud data and marked as candidate areas in the disclosure, so that the map elements can be identified only according to the candidate areas to obtain the candidate map elements, the identification efficiency can be improved, and the computing resources can be saved.
In order to clearly illustrate how the target map element is projected to the second position in the point cloud data in any embodiment of the present disclosure, the present disclosure also provides a high-precision map generation method.
Fig. 4 is a schematic flow chart of a high-precision map generation method according to a fourth embodiment of the present disclosure.
As shown in fig. 4, the high-precision map generation method may include the steps of:
step 401, acquiring at least one frame of source image acquired by a vehicle-mounted camera and point cloud data acquired by a vehicle-mounted radar.
Step 402, performing map element detection on the source image to determine a first position of the target map element in the source image, wherein the first position is a position of the target map element in an image coordinate system corresponding to the source image.
The execution process of steps 401 to 402 may refer to the execution process of any embodiment of the present disclosure, and is not described herein again.
Step 403, transforming the first position of the target map element in the image coordinate system into the camera coordinate system by using the first mapping relationship between the camera coordinate system in which the source image is located and the image coordinate system corresponding to the source image, so as to obtain the position of the target map element in the camera coordinate system.
In the embodiment of the present disclosure, the first mapping relationship between the camera coordinate system and the image coordinate system may be obtained by calibration in advance.
In the embodiment of the present disclosure, a first mapping relationship between a camera coordinate system where a source image is located and an image coordinate system corresponding to the source image may be adopted, and a first position of a target map element in the image coordinate system is transformed to be under the camera coordinate system, so as to obtain a position of the target map element in the camera coordinate system.
Step 404, transforming the position of the target map element in the camera coordinate system into the radar coordinate system by using the second mapping relationship between the radar coordinate system of the vehicle-mounted radar and the camera coordinate system in which the source image is located, so as to obtain the position of the target map element in the radar coordinate system.
In the embodiment of the present disclosure, the second mapping relationship between the radar coordinate system and the camera coordinate system may also be obtained by a pre-calibration method.
Step 405, determining a second position of the target map element projected to the point cloud data according to the position of the target map element in the radar coordinate system.
In the embodiment of the disclosure, a second mapping relationship between a radar coordinate system of the vehicle-mounted radar and a camera coordinate system in which the source image is located may be adopted, and the position of the target map element in the camera coordinate system is transformed to be under the radar coordinate system, so as to obtain the position of the target map element in the radar coordinate system. The second position of the projection of the target map element into the point cloud data can thus be determined from the position in the radar coordinate system of the target map element.
In addition, the above description takes the first position as the position of the target map element in the image coordinate system corresponding to the source image only as an example. In practical applications, the first position may also be the position of the target map element in the pixel coordinate system corresponding to the source image. In this case, the second position may be determined as follows: transforming the first position of the target map element in the pixel coordinate system into the camera coordinate system by using a third mapping relationship between the camera coordinate system in which the source image is located and the pixel coordinate system corresponding to the source image, so as to obtain the position of the target map element in the camera coordinate system; transforming the position of the target map element in the camera coordinate system into the radar coordinate system by using the second mapping relationship between the radar coordinate system of the vehicle-mounted radar and the camera coordinate system in which the source image is located, so as to obtain the position of the target map element in the radar coordinate system; and determining the second position of the target map element projected into the point cloud data according to the position of the target map element in the radar coordinate system.
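The two-stage transform chain of steps 403-405 can be sketched with a pinhole camera model. The intrinsic matrix `K` stands for the pre-calibrated first (or third) mapping relationship and the rigid transform `(R, t)` for the second; the per-pixel depth estimate is an assumption, since the disclosure does not state how depth is recovered from a single image.

```python
import numpy as np

def project_to_radar(pixel_uv, depth, K, R_cam2radar, t_cam2radar):
    # First mapping: pixel/image coordinates -> camera coordinates via
    # the pinhole model (K is the calibrated intrinsic matrix; a depth
    # value along the ray is assumed to be available).
    uv1 = np.array([pixel_uv[0], pixel_uv[1], 1.0])
    p_camera = depth * (np.linalg.inv(K) @ uv1)
    # Second mapping: camera coordinates -> radar coordinates via the
    # pre-calibrated rigid transform (R, t) between the two sensors.
    return R_cam2radar @ p_camera + t_cam2radar
```

The resulting point in the radar coordinate system is the second position at which the target map element is projected into the point cloud data.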
Step 406, matching the second position with the third position of each candidate map element in the point cloud data, so as to determine a target position matching the second position from the third positions of the candidate map elements.
Step 407, generating an electronic map according to the target map elements and the target position.
The execution process of steps 406 to 407 may refer to the execution process of any embodiment of the present disclosure, which is not described herein again.
According to the high-precision map generation method, a first mapping relation between a camera coordinate system where a source image is located and an image coordinate system corresponding to the source image is adopted, and a first position of a target map element in the image coordinate system is converted into a position under the camera coordinate system, so that the position of the target map element in the camera coordinate system is obtained; transforming the position of the target map element in the camera coordinate system to the position under the radar coordinate system by adopting a second mapping relation between the radar coordinate system of the vehicle-mounted radar and the camera coordinate system where the source image is located, so as to obtain the position of the target map element in the radar coordinate system; and determining a second position of the projection of the target map element to the point cloud data according to the position of the target map element in the radar coordinate system. Therefore, the target map elements in the source image can be effectively projected into the point cloud data according to the mapping relation among the coordinate systems, and the accuracy and reliability of the second position determination result of the target map elements projected into the point cloud data are improved.
It can be understood that, in addition to the geometric coordinates of each map element, the electronic map may include attribute information of the map elements. Therefore, in the present disclosure, in order to meet actual map production requirements, the attribute information of the target map element may be extracted from the source image, so that the electronic map can be generated according to the attribute information and the target position of the target map element. The above process is described in detail with reference to the fifth embodiment.
Fig. 5 is a schematic flow chart of a high-precision map generation method provided in the fifth embodiment of the present disclosure.
As shown in fig. 5, the high-precision map generation method may include the steps of:
step 501, acquiring at least one frame of source image acquired by a vehicle-mounted camera and point cloud data acquired by a vehicle-mounted radar.
The execution process of step 501 may refer to the execution process of any of the above embodiments, which is not described herein again.
Step 502, map element detection is carried out on a source image to determine attribute information of a target map element and determine a first position of the target map element in the source image; wherein the attribute information contains at least one of a name, a color, and a pose orientation.
In the embodiment of the present disclosure, the first position of the target map element in the source image may refer to the execution process of any one of the embodiments, which is not described herein again.
In the disclosed embodiments, after a target map element is identified from a source image, attribute information of the target map element may be determined.
For example, after map element detection is performed on the source image based on an object detection technology, and the position of a detection frame and the category to which the target map element in the detection frame belongs are determined, the name of the target map element may be determined according to the category to which it belongs, and the color of the target map element may be determined according to the color information of each pixel point in the detection frame. For another example, the pose orientation of the target map element within the detection frame may be identified based on a deep learning technique or an image recognition technique.
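Determining the color attribute from the pixel points inside the detection frame can be sketched as a mean over the cropped patch. This is one simple reading of the step above; the disclosure does not fix the aggregation method, so the mean is an illustrative assumption.

```python
import numpy as np

def extract_color(image, box):
    # image: (H, W, 3) RGB array; box: (x1, y1, x2, y2) detection frame.
    # Illustrative colour attribute: the mean RGB value over all pixel
    # points inside the detection frame.
    x1, y1, x2, y2 = box
    patch = image[y1:y2, x1:x2].reshape(-1, 3)
    return tuple(patch.mean(axis=0))
```

The name attribute would come from the detector's predicted category (e.g. a class-id-to-name lookup), and the pose orientation from a separate recognition model, as the text describes.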
Step 503, determining a second position of the target map element projected to the point cloud data according to the first position.
And 504, matching the second position with the third position of each candidate map element in the point cloud data so as to determine a target position matched with the second position from the third position of each candidate map element.
The execution process of steps 503 to 504 may refer to the execution process of any embodiment of the present disclosure, and is not described herein again.
Step 505, generating the electronic map according to the attribute information and the target position of the target map element.
In the embodiment of the present disclosure, the electronic map may be generated according to the attribute information of the target map element and the target position. For example, the electronic map may be labeled with the target positions and attribute information of the target map elements.
It should be understood that the electronic map may also be updated according to the attribute information and the target position of the target map element. For example, when the position and/or attribute information (such as color or pose orientation) of the target map element changes, the position and attributes of the target map element in the electronic map may be updated according to the target position and/or attribute information. Alternatively, when the target map element was not previously present in the electronic map, the position and attributes of the target map element may be added to the electronic map according to the attribute information and the target position.
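The update-or-insert behaviour described above can be sketched as a simple upsert. The dictionary layout and the element identifier are assumptions for illustration; the disclosure does not specify the electronic map's storage format.

```python
def upsert_map_element(electronic_map, element_id, target_position, attributes):
    # electronic_map: dict keyed by a hypothetical element identifier.
    # An existing element has its position refreshed and only the changed
    # attributes overwritten; a previously absent element is added.
    entry = electronic_map.setdefault(
        element_id, {"position": None, "attributes": {}}
    )
    entry["position"] = target_position
    entry["attributes"].update(attributes)
    return electronic_map
```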
According to the high-precision map generation method, the electronic map is generated according to the attribute information and the target position of the target map element, and the actual map production requirement can be met.
In correspondence with the high-precision map generation method provided in the embodiments of fig. 1 to 5, the present disclosure also provides a high-precision map generation device, and since the high-precision map generation device provided in the embodiments of the present disclosure corresponds to the high-precision map generation method provided in the embodiments of fig. 1 to 5, the embodiment of the high-precision map generation method is also applicable to the high-precision map generation device provided in the embodiments of the present disclosure, and will not be described in detail in the embodiments of the present disclosure.
Fig. 6 is a schematic structural diagram of a high-precision map generating device according to a sixth embodiment of the present disclosure.
As shown in fig. 6, the high-precision map generating apparatus 600 may include: an acquisition module 610, a detection module 620, a determination module 630, a matching module 640, and a generation module 650.
The acquiring module 610 is configured to acquire at least one frame of source image acquired by the vehicle-mounted camera and point cloud data acquired by the vehicle-mounted radar.
A detection module 620, configured to perform map element detection on the source image to determine a first position of the target map element in the source image.
A determining module 630, configured to determine, according to the first location, a second location where the target map element is projected into the point cloud data.
And the matching module 640 is configured to match the second position with a third position of each candidate map element in the point cloud data, so as to determine a target position matching the second position from the third position of each candidate map element.
And a generating module 650 for generating an electronic map according to the target map elements and the target positions.
In a possible implementation manner of the embodiment of the present disclosure, the matching module 640 may include:
and the matching unit is used for matching the second position with the third position of each candidate map element in the point cloud data so as to determine the matching degree of each candidate map element.
And the determining unit is used for determining the matching map element corresponding to the maximum matching degree from the candidate map elements according to the matching degree of the candidate map elements.
And the processing unit is used for determining the position of the matching map element in the point cloud data as a target position matched with the second position.
In a possible implementation manner of the embodiment of the present disclosure, the matching unit is specifically configured to: acquiring a fourth position where the vehicle-mounted camera is located when the vehicle-mounted camera collects a source image; screening a candidate area containing the fourth position from the point cloud data according to the fourth position; carrying out map element detection on the candidate areas to obtain each candidate map element; and matching the third position and the second position of each candidate map element in the point cloud data to determine the matching degree of each candidate map element.
In a possible implementation manner of the embodiment of the present disclosure, the first position is a position of the target map element in an image coordinate system corresponding to the source image, and the determining module 630 is specifically configured to: converting the first position of the target map element in the image coordinate system to the position under the camera coordinate system by adopting a first mapping relation between the camera coordinate system where the source image is located and the image coordinate system corresponding to the source image so as to obtain the position of the target map element in the camera coordinate system; transforming the position of the target map element in the camera coordinate system to the position under the radar coordinate system by adopting a second mapping relation between the radar coordinate system of the vehicle-mounted radar and the camera coordinate system where the source image is located, so as to obtain the position of the target map element in the radar coordinate system; and determining a second position of the projection of the target map element to the point cloud data according to the position of the target map element in the radar coordinate system.
In a possible implementation manner of the embodiment of the present disclosure, the detecting module 620 is specifically configured to: map element detection is carried out on the source image so as to determine attribute information of the target map element and determine a first position of the target map element in the source image; wherein the attribute information contains at least one of a name, a color, and a pose orientation.
Accordingly, the generating module 650 is specifically configured to: and generating the electronic map according to the attribute information and the target position of the target map element.
The high-precision map generation device of the embodiment of the disclosure acquires at least one frame of source image acquired by a vehicle-mounted camera and point cloud data acquired by a vehicle-mounted radar, and performs map element detection on the source image to determine a first position of a target map element in the source image, and then determines a second position of the target map element projected to the point cloud data according to the first position, and matches the second position with a third position of each candidate map element in the point cloud data to determine a target position matched with the second position from the third position of each candidate map element, so that an electronic map can be generated according to the target map element and the target position. Therefore, the world coordinates of the map elements in the real physical world are determined by combining the image data and the point cloud data, the determination difficulty of the coordinates of each map element can be reduced, and the determination efficiency of the coordinates of each map element can be improved.
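The end-to-end flow through the five modules of apparatus 600 can be sketched as a pipeline. Each argument below is a callable standing in for one module (610-650); the signatures are assumptions introduced only to show how the modules' outputs chain together.

```python
def generate_high_precision_map(acquire, detect, determine, match, generate):
    # acquire  ~ acquiring module 610: source image + point cloud data.
    # detect   ~ detection module 620: target map element + first position.
    # determine~ determining module 630: project to second position.
    # match    ~ matching module 640: target position in the point cloud.
    # generate ~ generating module 650: the electronic map.
    source_image, point_cloud = acquire()
    target_element, first_position = detect(source_image)
    second_position = determine(first_position)
    target_position = match(second_position, point_cloud)
    return generate(target_element, target_position)
```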
To implement the above embodiments, the present disclosure also provides an electronic device, which may include at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the high-precision map generation method proposed by any one of the above embodiments of the present disclosure.
In order to achieve the above embodiments, the present disclosure also provides a non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the high-precision map generation method proposed by any one of the above embodiments of the present disclosure.
In order to implement the above embodiments, the present disclosure also provides a computer program product, which includes a computer program that, when executed by a processor, implements the high-precision map generation method proposed by any of the above embodiments of the present disclosure.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 7 illustrates a schematic block diagram of an example electronic device that can be used to implement embodiments of the present disclosure. The electronic device may include the server and the client in the above embodiments. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 7, the electronic device 700 includes a computing unit 701, which can perform various appropriate actions and processes according to a computer program stored in a ROM (Read-Only Memory) 702 or a computer program loaded from a storage unit 708 into a RAM (Random Access Memory) 703. In the RAM 703, various programs and data required for the operation of the electronic device 700 can also be stored. The computing unit 701, the ROM 702, and the RAM 703 are connected to each other by a bus 704. An I/O (Input/Output) interface 705 is also connected to the bus 704.
A number of components in the electronic device 700 are connected to the I/O interface 705, including: an input unit 706 such as a keyboard, a mouse, or the like; an output unit 707 such as various types of displays, speakers, and the like; a storage unit 708 such as a magnetic disk, optical disk, or the like; and a communication unit 709 such as a network card, modem, wireless communication transceiver, etc. The communication unit 709 allows the electronic device 700 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
Computing unit 701 may be a variety of general purpose and/or special purpose processing components with processing and computing capabilities. Some examples of the computing Unit 701 include, but are not limited to, a CPU (Central Processing Unit), a GPU (graphics Processing Unit), various dedicated AI (Artificial Intelligence) computing chips, various computing Units running machine learning model algorithms, a DSP (Digital Signal Processor), and any suitable Processor, controller, microcontroller, and the like. The calculation unit 701 executes the respective methods and processes described above, such as the above-described high-precision map generation method. For example, in some embodiments, the high-precision map generation methods described above may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 708. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 700 via the ROM 702 and/or the communication unit 709. When the computer program is loaded into the RAM 703 and executed by the computing unit 701, one or more steps of the high precision map generation method described above may be performed. Alternatively, in other embodiments, the computing unit 701 may be configured to perform the above-described high-precision map generation method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuitry, FPGAs (Field Programmable Gate Arrays), ASICs (Application-Specific Integrated Circuits), ASSPs (Application Specific Standard Products), SOCs (System On Chip), CPLDs (Complex Programmable Logic Devices), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a RAM, a ROM, an EPROM (Electrically Programmable Read-Only-Memory) or flash Memory, an optical fiber, a CD-ROM (Compact Disc Read-Only-Memory), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a Display device (e.g., a CRT (Cathode Ray Tube) or LCD (Liquid Crystal Display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: LAN (Local Area Network), WAN (Wide Area Network), internet, and blockchain Network.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or a cloud host, which is a host product in a cloud computing service system that overcomes the drawbacks of difficult management and weak service scalability found in conventional physical hosts and VPS (Virtual Private Server) services. The server may also be a server of a distributed system, or a server incorporating a blockchain.
It should be noted that artificial intelligence is a discipline that studies how to make computers simulate certain human thinking processes and intelligent behaviors (such as learning, reasoning, thinking, and planning), and covers both hardware-level and software-level technologies. Artificial intelligence hardware technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, and big data processing; artificial intelligence software technologies mainly include computer vision technology, speech recognition technology, natural language processing technology, machine learning/deep learning, big data processing technology, knowledge graph technology, and the like.
According to the technical scheme of the embodiment of the disclosure, at least one frame of source image acquired by a vehicle-mounted camera and point cloud data acquired by a vehicle-mounted radar are acquired, map element detection is carried out on the source image to determine a first position of a target map element in the source image, then, a second position of the target map element projected to the point cloud data can be determined according to the first position, the second position is matched with a third position of each candidate map element in the point cloud data, a target position matched with the second position is determined from the third position of each candidate map element, and an electronic map can be generated according to the target map element and the target position. Therefore, the world coordinates of the map elements in the real physical world are determined by combining the image data and the point cloud data, the determination difficulty of the coordinates of each map element can be reduced, and the determination efficiency of the coordinates of each map element can be improved.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel or sequentially or in different orders, and are not limited herein as long as the desired results of the technical solutions proposed in the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (13)

1. A high-precision map generation method, comprising:
acquiring at least one frame of source image acquired by a vehicle-mounted camera and point cloud data acquired by a vehicle-mounted radar;
performing map element detection on the source image to determine a first position of a target map element in the source image;
determining a second position of the target map element projected into the point cloud data according to the first position;
matching the second position with a third position of each candidate map element in the point cloud data to determine a target position matched with the second position from the third position of each candidate map element;
and generating an electronic map according to the target map elements and the target position.
2. The method of claim 1, wherein the matching the second location to a third location of each candidate map element in the point cloud data to determine a target location from the third location of each candidate map element that matches the second location comprises:
matching the second position with a third position of each candidate map element in the point cloud data to determine the matching degree of each candidate map element;
according to the matching degree of each candidate map element, determining a matching map element corresponding to the maximum matching degree from each candidate map element;
and determining the position of the matching map element in the point cloud data as a target position matched with the second position.
3. The method of claim 2, wherein the matching the second location with a third location of each candidate map element in the point cloud data to determine a degree of match for each candidate map element comprises:
acquiring a fourth position where the vehicle-mounted camera is located when the vehicle-mounted camera collects the source image;
screening a candidate region containing the fourth position from the point cloud data according to the fourth position;
carrying out map element detection on the candidate areas to obtain each candidate map element;
and matching the third position and the second position of each candidate map element in the point cloud data to determine the matching degree of each candidate map element.
4. The method according to any one of claims 1-3, wherein the first position is a position of the target map element in an image coordinate system corresponding to the source image, and
determining, according to the first position, the second position at which the target map element is projected into the point cloud data comprises:
transforming, by using a first mapping relation between the camera coordinate system of the source image and the image coordinate system corresponding to the source image, the first position of the target map element in the image coordinate system into a position of the target map element in the camera coordinate system;
transforming, by using a second mapping relation between the radar coordinate system of the vehicle-mounted radar and the camera coordinate system of the source image, the position of the target map element in the camera coordinate system into a position in the radar coordinate system, so as to obtain the position of the target map element in the radar coordinate system;
and determining, according to the position of the target map element in the radar coordinate system, the second position at which the target map element is projected into the point cloud data.
5. The method of any one of claims 1-3, wherein carrying out map element detection on the source image to determine the first position of the target map element in the source image comprises:
carrying out map element detection on the source image to determine attribute information of the target map element and the first position of the target map element in the source image, wherein the attribute information contains at least one of a name, a color, and a pose orientation;
correspondingly, generating the electronic map according to the target map element and the target position comprises:
generating the electronic map according to the attribute information of the target map element and the target position.
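Claim 5's final step amounts to attaching the detected attribute information to the element placed at its matched target position. A minimal sketch, with hypothetical field names (the patent does not prescribe a map schema):

```python
def make_map_entry(target_position, attributes):
    # Combine the matched target position with whichever of the claimed
    # attributes (name, color, pose orientation) the detector produced.
    entry = {"position": target_position}
    for key in ("name", "color", "pose_orientation"):
        if key in attributes:
            entry[key] = attributes[key]
    return entry
```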
6. A high-precision map generation apparatus, comprising:
an acquisition module, configured to acquire at least one frame of source image collected by a vehicle-mounted camera and point cloud data collected by a vehicle-mounted radar;
a detection module, configured to carry out map element detection on the source image to determine a first position of a target map element in the source image;
a determining module, configured to determine, according to the first position, a second position at which the target map element is projected into the point cloud data;
a matching module, configured to match the second position with a third position of each candidate map element in the point cloud data to determine, from the third positions of the candidate map elements, a target position that matches the second position;
and a generating module, configured to generate an electronic map according to the target map element and the target position.
7. The apparatus of claim 6, wherein the matching module comprises:
a matching unit, configured to match the second position with the third position of each candidate map element in the point cloud data to determine a matching degree of each candidate map element;
a determining unit, configured to determine, according to the matching degree of each candidate map element, a matching map element corresponding to the maximum matching degree from among the candidate map elements;
and a processing unit, configured to determine the position of the matching map element in the point cloud data as the target position matched with the second position.
8. The apparatus according to claim 7, wherein the matching unit is specifically configured to:
acquire a fourth position at which the vehicle-mounted camera is located when collecting the source image;
screen, according to the fourth position, a candidate region containing the fourth position from the point cloud data;
carry out map element detection on the candidate region to obtain each candidate map element;
and match the third position of each candidate map element in the point cloud data with the second position to determine the matching degree of each candidate map element.
9. The apparatus according to any one of claims 6-8, wherein the first position is a position of the target map element in an image coordinate system corresponding to the source image, and
the determining module is specifically configured to:
transform, by using a first mapping relation between the camera coordinate system of the source image and the image coordinate system corresponding to the source image, the first position of the target map element in the image coordinate system into a position of the target map element in the camera coordinate system;
transform, by using a second mapping relation between the radar coordinate system of the vehicle-mounted radar and the camera coordinate system of the source image, the position of the target map element in the camera coordinate system into a position in the radar coordinate system, so as to obtain the position of the target map element in the radar coordinate system;
and determine, according to the position of the target map element in the radar coordinate system, the second position at which the target map element is projected into the point cloud data.
10. The apparatus according to any one of claims 6-8, wherein the detection module is specifically configured to:
carry out map element detection on the source image to determine attribute information of the target map element and the first position of the target map element in the source image, wherein the attribute information contains at least one of a name, a color, and a pose orientation;
correspondingly, the generating module is specifically configured to:
generate the electronic map according to the attribute information of the target map element and the target position.
11. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the high precision map generation method of any of claims 1-5.
12. A non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the high-precision map generation method of any one of claims 1-5.
13. A computer program product comprising a computer program which, when executed by a processor, carries out the steps of the high-precision map generation method according to any one of claims 1-5.
CN202111328425.1A 2021-11-10 2021-11-10 High-precision map generation method and device, electronic equipment and storage medium Pending CN114186007A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111328425.1A CN114186007A (en) 2021-11-10 2021-11-10 High-precision map generation method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111328425.1A CN114186007A (en) 2021-11-10 2021-11-10 High-precision map generation method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114186007A true CN114186007A (en) 2022-03-15

Family

ID=80539905

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111328425.1A Pending CN114186007A (en) 2021-11-10 2021-11-10 High-precision map generation method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114186007A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024093641A1 (en) * 2022-11-01 2024-05-10 北京百度网讯科技有限公司 Multi-modal-fused method and apparatus for recognizing high-definition map element, and device and medium
CN117494248A (en) * 2023-12-29 2024-02-02 中科图新(苏州)科技有限公司 Coordinate data processing method, device, computer equipment and storage medium
CN117494248B (en) * 2023-12-29 2024-04-12 中科图新(苏州)科技有限公司 Coordinate data processing method, device, computer equipment and storage medium
CN117606470A (en) * 2024-01-24 2024-02-27 航天宏图信息技术股份有限公司 Intelligent self-adaptive additional acquisition generation method, device and equipment for linear elements of high-precision navigation chart
CN117606470B (en) * 2024-01-24 2024-04-16 航天宏图信息技术股份有限公司 Intelligent self-adaptive additional acquisition generation method, device and equipment for linear elements of high-precision navigation chart

Similar Documents

Publication Publication Date Title
EP3506161A1 (en) Method and apparatus for recovering point cloud data
US20230039293A1 (en) Method of processing image, electronic device, and storage medium
CN110176078B (en) Method and device for labeling training set data
CN114186007A (en) High-precision map generation method and device, electronic equipment and storage medium
CN113377888A (en) Training target detection model and method for detecting target
CN113674287A (en) High-precision map drawing method, device, equipment and storage medium
CN112785625A (en) Target tracking method and device, electronic equipment and storage medium
CN113859264B (en) Vehicle control method, device, electronic equipment and storage medium
CN112147632A (en) Method, device, equipment and medium for testing vehicle-mounted laser radar perception algorithm
CN113012200B (en) Method and device for positioning moving object, electronic equipment and storage medium
CN114034295A (en) High-precision map generation method, device, electronic device, medium, and program product
CN113688935A (en) High-precision map detection method, device, equipment and storage medium
CN114648676A (en) Point cloud processing model training and point cloud instance segmentation method and device
CN114332977A (en) Key point detection method and device, electronic equipment and storage medium
CN114111813B (en) High-precision map element updating method and device, electronic equipment and storage medium
JP2023038164A (en) Obstacle detection method, device, automatic driving vehicle, apparatus, and storage medium
CN114238790A (en) Method, apparatus, device and storage medium for determining maximum perception range
CN114187357A (en) High-precision map production method and device, electronic equipment and storage medium
CN113742440A (en) Road image data processing method and device, electronic equipment and cloud computing platform
CN113591569A (en) Obstacle detection method, obstacle detection device, electronic apparatus, and storage medium
CN113011298A (en) Truncated object sample generation method, target detection method, road side equipment and cloud control platform
CN112987707A (en) Automatic driving control method and device for vehicle
CN115410173B (en) Multi-mode fused high-precision map element identification method, device, equipment and medium
CN116642490A (en) Visual positioning navigation method based on hybrid map, robot and storage medium
CN114429631B (en) Three-dimensional object detection method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination