CN115937448A - Map data generation method, high-precision map generation method and device - Google Patents

Info

Publication number
CN115937448A
Authority
CN
China
Prior art keywords
point
feature
candidate
point cloud
pairs
Prior art date
Legal status
Pending
Application number
CN202211533576.5A
Other languages
Chinese (zh)
Inventor
魏新
丁文东
万国伟
白宇
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd

Landscapes

  • Image Analysis (AREA)

Abstract

The disclosure provides a map data generation method, relating to the technical field of artificial intelligence and autonomous driving, and in particular to the fields of deep learning, intelligent transportation and high-precision maps. The specific implementation scheme is as follows: determining a first feature point set of a source point cloud according to geometric information and semantic information of the source point cloud; determining a second feature point set of a target point cloud according to geometric information and semantic information of the target point cloud; determining a set of associated point pairs according to the first feature point set and the second feature point set, wherein the set of associated point pairs comprises candidate associated point pairs, and each candidate associated point pair comprises a first feature point and a second feature point that are associated with each other; determining transformation data between the source point cloud and the target point cloud according to the set of associated point pairs; and splicing the source point cloud and the target point cloud according to the transformation data to obtain map data. The disclosure also provides a map generation method, a map generation apparatus, an electronic device and a storage medium.

Description

Map data generation method, high-precision map generation method and device
Technical Field
The present disclosure relates to the field of artificial intelligence and autopilot technology, and in particular to deep learning, intelligent transportation and high-precision map technology. More specifically, the present disclosure provides a map data generation method, a map generation method, an apparatus, an electronic device, and a storage medium.
Background
In the field of automatic driving, a special map for autonomous vehicles may be called a high-precision map (also known as a high-definition, or HD, map). A high-precision map may be generated using point cloud data acquired at different times by a point cloud acquisition device on a vehicle. The high-precision map has accurate vehicle position information and rich road element data, can help a vehicle predict complex road surface information such as gradient, curvature and heading, and can better avoid potential risks.
Disclosure of Invention
The disclosure provides a map data generation method, a map generation method, corresponding apparatuses, an electronic device and a storage medium.
According to a first aspect, there is provided a map data generation method, the method comprising: determining a first feature point set of a source point cloud according to geometric information and semantic information of the source point cloud; determining a second feature point set of a target point cloud according to geometric information and semantic information of the target point cloud; determining a set of associated point pairs according to the first feature point set and the second feature point set, wherein the set of associated point pairs comprises candidate associated point pairs, and each candidate associated point pair comprises a first feature point and a second feature point that are associated with each other; determining transformation data between the source point cloud and the target point cloud according to the set of associated point pairs; and splicing the source point cloud and the target point cloud according to the transformation data to obtain map data.
According to a second aspect, there is provided a map generation method, the method comprising: acquiring map data; and generating a map from the map data; the map data is obtained according to the map data generation method.
According to a third aspect, there is provided a map data generating apparatus comprising: a first determining module, configured to determine a first feature point set of a source point cloud according to geometric information and semantic information of the source point cloud; a second determining module, configured to determine a second feature point set of a target point cloud according to geometric information and semantic information of the target point cloud; a third determining module, configured to determine a set of associated point pairs according to the first feature point set and the second feature point set, where the set of associated point pairs includes candidate associated point pairs, and each candidate associated point pair includes a first feature point and a second feature point that are associated with each other; a fourth determining module, configured to determine transformation data between the source point cloud and the target point cloud according to the set of associated point pairs; and a splicing module, configured to splice the source point cloud and the target point cloud according to the transformation data to obtain map data.
According to a fourth aspect, there is provided a map generating apparatus comprising: the acquisition module is used for acquiring map data; the generating module is used for generating a map according to the map data; wherein the map data is obtained from the map data generating device.
According to a fifth aspect, there is provided an electronic device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a method provided in accordance with the present disclosure.
According to a sixth aspect, there is provided an autonomous vehicle comprising the above-mentioned electronic device.
According to a seventh aspect, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform a method provided according to the present disclosure.
According to an eighth aspect, there is provided a computer program product comprising a computer program stored on at least one of a readable storage medium and an electronic device, which computer program, when executed by a processor, implements a method provided according to the present disclosure.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic diagram of an exemplary system architecture to which a map data generation method and a map generation method may be applied, according to one embodiment of the present disclosure;
FIG. 2 is a flow diagram of a map data generation method according to one embodiment of the present disclosure;
FIG. 3 is a block diagram of a method of determining pairs of associated points according to one embodiment of the present disclosure;
FIG. 4A is an illustration of an initial relationship diagram according to one embodiment of the disclosure;
FIG. 4B is an illustration of a target relationship diagram according to one embodiment of the disclosure;
FIG. 5 is a flow diagram of a map generation method according to one embodiment of the present disclosure;
fig. 6 is a block diagram of a map data generation apparatus according to one embodiment of the present disclosure;
FIG. 7 is a block diagram of a map generation apparatus according to one embodiment of the present disclosure;
Fig. 8 is a block diagram of an electronic device for implementing the map data generation method and/or the map generation method according to one embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Point cloud registration is an important task in the production of high-precision maps; it is mainly responsible for aligning point cloud data acquired at different times and positions. The input of a classical point cloud registration algorithm such as ICP (Iterative Closest Point) is two point clouds, namely a source point cloud and a target point cloud, together with an initial relative pose transformation; the output is the relative pose that optimally aligns the two point clouds. The accuracy and success rate of classical point cloud registration algorithms are greatly affected by the accuracy of the initial pose.
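As a concrete illustration of the classical algorithm mentioned above, a minimal point-to-point ICP sketch is given below. This is an illustrative implementation (not the method disclosed in this patent): correspondences are found by brute-force nearest-neighbour search and the pose update is solved in closed form by SVD (the Kabsch solution); all function names and parameters are assumptions.

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Closed-form least-squares (R, t) mapping the point rows of P onto Q (Kabsch/SVD)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

def icp(source, target, iters=30):
    """Point-to-point ICP: alternate nearest-neighbour association and pose update."""
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # brute-force nearest neighbour in the target for every source point
        d = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        matched = target[d.argmin(axis=1)]
        R, t = best_rigid_transform(src, matched)
        src = src @ R.T + t
        # compose the incremental update into the accumulated transform
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

Like any local ICP, this sketch only converges when the initial relative pose is already close, which is exactly the limitation that motivates the global registration method described in this disclosure.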
In point cloud registration tasks, the initial relative pose is generally obtained in one of two ways: from carrier motion information or from Global Navigation Satellite System (GNSS) information.
Carrier motion information is generally obtained by using sensors carried by the carrier (e.g., the platform carrying a radar), such as an Inertial Measurement Unit (IMU) and wheel speed sensors, to continuously estimate the motion of the carrier. It has high precision over short periods and can provide a good initial pose for the point cloud registration of adjacent frames. However, it can only be applied to the registration of local point clouds; the registration of global point clouds cannot be achieved this way.
An initial pose derived from GNSS information is severely affected by the occlusion of GNSS signals: when the GNSS signal is poor, the provided initial pose has a large error, causing a classical point cloud registration algorithm to fail or to produce large errors.
To improve point cloud registration accuracy and success rate under large initial transformation errors, a global point cloud registration algorithm that does not depend on an initial value needs to be developed.
In the technical scheme of the disclosure, the collection, storage, use, processing, transmission, provision, disclosure and other processing of the personal information of the related user are all in accordance with the regulations of related laws and regulations and do not violate the good customs of the public order.
In the technical scheme of the disclosure, before the personal information of the user is acquired or collected, the authorization or the consent of the user is acquired.
Fig. 1 is a schematic diagram of an exemplary system architecture to which a map data generation method and a map generation method can be applied according to an embodiment of the present disclosure. It should be noted that fig. 1 is only an example of a system architecture to which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, and does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments or scenarios.
As shown in FIG. 1, a system architecture according to this embodiment may include an autonomous vehicle 110, a network 120, and a server 130. Autonomous vehicle 110 may include a point cloud capture device 111, and point cloud capture device 111 may capture point cloud data of the surrounding environment during autonomous driving of autonomous vehicle 110.
At least one of the map data generation method and the map generation method provided by the embodiment of the present disclosure may be executed by the server 130. Accordingly, at least one of the map data generation device and the map generation device provided by the embodiment of the present disclosure may be provided in the server 130.
For example, the point cloud collecting device 111 may transmit the collected point cloud data to the server 130 through the network 120, and the server 130 executes the map data generation method provided by the present disclosure based on the received point cloud data to obtain the map data. The server 130 may further execute the map generation method provided by the present disclosure based on the obtained map data, so as to obtain a high-precision map.
At least one of the map data generation method and the map generation method provided by the embodiment of the present disclosure may be performed by the autonomous vehicle 110. Accordingly, at least one of the map data generation device and the map generation device provided by the embodiment of the present disclosure may be provided in the autonomous vehicle 110.
For example, the autonomous vehicle 110 further includes an electronic device to which the point cloud collection device 111 may transmit the collected point cloud data. The electronic device executes the map data generation method provided by the present disclosure based on the received point cloud data to obtain the map data. The electronic device may further execute the map generation method provided by the present disclosure based on the obtained map data to obtain a high-precision map.
For example, the map data generation method provided by the present disclosure may also be executed by an electronic device in the autonomous vehicle 110, resulting in map data. Then, the electronic device transmits the map data to the server 130 through the network 120, and the server 130 executes the map generation method provided by the present disclosure, so as to obtain a high-precision map.
Fig. 2 is a flowchart of a map data generation method according to one embodiment of the present disclosure.
As shown in fig. 2, the map data generation method 200 may include operations S210 to S250. For example, a global point cloud registration method may register a source point cloud and a target point cloud based on associations between key points in the source point cloud and key points in the target point cloud. The positions (coordinates) of the key points in the source point cloud and in the target point cloud can be detected by a deep learning model, which can also extract features of the key points (such as spatial differences between a key point and its neighboring points). The positions and corresponding features of the key points in the source point cloud serve as the feature points of the source point cloud, and the positions and corresponding features of the key points in the target point cloud serve as the feature points of the target point cloud. The source point cloud and the target point cloud may then be registered based on associations between the feature points of the source point cloud and the feature points of the target point cloud.
The deep learning model is, for example, a D3Feat (Joint Learning of Dense Detection and Description of 3D Local Features) model. The D3Feat model may contain a sub-model (detector) for key point detection and a sub-model (descriptor) for feature extraction. The input to the D3Feat model may be the coordinates of a point cloud, and the output is the feature points, each comprising coordinates and a feature description vector.
In operation S210, a first feature point set of the source point cloud is determined according to the geometric information and the semantic information of the source point cloud.
For example, the geometric information of the source point cloud may include the coordinates of the points in the source point cloud. The semantic information may include the categories of objects in the source point cloud, such as tunnels, lane lines on the road surface, crosswalks, arrows, and so on. The first feature points in the first feature point set may be the positions of key points in the source point cloud together with corresponding feature description information. The feature description information comprises a first feature description vector, which may be a 32-dimensional feature vector containing geometric features in a plurality of neighborhoods of the first feature point (e.g., neighborhoods with radii of 0.5 m and 1 m).
For example, the first feature point set may be obtained by performing key point detection and feature extraction on the source point cloud using a deep learning model. However, in scenarios with repeated structures, such as tunnels and road surfaces, the geometric information of the point cloud easily degrades, and it is difficult to extract key points effectively using only the geometric information (e.g., coordinates) of the point cloud. Yet tunnels, road surfaces and the like generally contain a large number of traffic elements, such as arrows, crosswalks and lane lines, that guide the operation of vehicles, and the semantics (categories) of these elements can significantly increase the distinctiveness of the point cloud. Therefore, when the deep learning model is used to obtain the feature points of the source point cloud, the semantic category information of the point cloud can be introduced to assist the detection and feature extraction of the first feature points of the source point cloud.
For example, the coordinates of the points in the source point cloud and the semantics of the objects in the point cloud may be input into the deep learning model to obtain the first feature point set. Since semantic information that can represent the distinctiveness of the source point cloud is introduced, the detection of the first feature points may be improved; for example, more first feature points truly belonging to the source point cloud are extracted, that is, the inlier rate (interior point rate) of the first feature points is improved.
In operation S220, a second feature point set of the target point cloud is determined according to the geometric information and the semantic information of the target point cloud.
Similar to the source point cloud, the geometric information of the target point cloud may include the coordinates of the points in the target point cloud. The semantic information may include the categories of objects in the target point cloud, such as tunnels, lane lines on the road surface, crosswalks, arrows, and so forth. The second feature points in the second feature point set may be the positions of key points in the target point cloud together with corresponding feature description information. The feature description information includes a second feature description vector, which may be a 32-dimensional feature vector containing geometric features in a plurality of neighborhoods of the second feature point (e.g., neighborhoods with radii of 0.5 m and 1 m).
For example, the coordinates of the points in the target point cloud and the semantics of the objects in the point cloud may be input into the deep learning model to obtain the second feature point set. Since semantic information that can represent the distinctiveness of the target point cloud is introduced, the detection of the second feature points may be improved; for example, more second feature points truly belonging to the target point cloud are extracted, that is, the inlier rate (interior point rate) of the second feature points is improved.
In operation S230, a set of associated point pairs is determined according to the first feature point set and the second feature point set.
The set of associated point pairs includes candidate associated point pairs, and each candidate associated point pair includes a first feature point and a second feature point that are associated with each other.
For example, a first feature point in the first feature point set may be associated with a second feature point in the second feature point set to obtain an associated point pair set.
For example, the first feature point and the second feature point may be associated according to a spatial distance between the first feature vector and the second feature vector, and the first feature point and the second feature point respectively characterized by the first feature vector and the second feature vector having a spatial distance smaller than a preset value (e.g., 0.1 meter) are determined as candidate associated point pairs, so as to obtain an associated point pair set.
For example, KD trees (k-dimensional search trees) are built for the first feature vectors of the source point cloud and the second feature vectors of the target point cloud, respectively. Each node (a first feature vector) in the KD tree of the source point cloud is then traversed to find the closest second feature vector in the target point cloud, and each node (a second feature vector) in the KD tree of the target point cloud is traversed to find the closest first feature vector in the source point cloud. When a first feature vector and a second feature vector are mutual nearest neighbors, the first feature point and the second feature point they respectively characterize are determined as a candidate associated point pair, thereby obtaining the set of associated point pairs.
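The mutual-nearest-neighbour test described above can be sketched as follows. For clarity, a brute-force numpy version is shown instead of KD trees (which are used for speed); the function and variable names are illustrative.

```python
import numpy as np

def mutual_nearest_pairs(desc_src, desc_tgt):
    """Candidate associated pairs (i, j): source descriptor i and target
    descriptor j are each other's nearest neighbour in feature space."""
    # pairwise Euclidean distances between the two descriptor sets
    d = np.linalg.norm(desc_src[:, None, :] - desc_tgt[None, :, :], axis=2)
    nn_st = d.argmin(axis=1)   # nearest target index for each source descriptor
    nn_ts = d.argmin(axis=0)   # nearest source index for each target descriptor
    return [(i, int(j)) for i, j in enumerate(nn_st) if nn_ts[j] == i]
```

In practice the descriptors would be the 32-dimensional feature description vectors discussed above, and each side's nearest-neighbour search would be accelerated with a KD tree rather than a dense distance matrix.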
In operation S240, transformation data between the source point cloud and the target point cloud is determined according to the associated point pair set.
For example, using the respective coordinates of the first feature point and the second feature point in the candidate associated point pair, transformation data between the first feature point and the second feature point may be solved, the transformation data including a rotation matrix and a translation vector.
For example, for any candidate associated point pair {q_i, p_i}, p_i represents the first feature point and q_i represents the second feature point of the i-th candidate associated point pair, where i is a positive integer. Let the rotation matrix be R and the translation vector be t. If the association of the candidate associated point pair {q_i, p_i} is correct, it conforms to the association relationship of the following formula (1).
q_i = R p_i + t        (1)
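A small numeric instance of formula (1), with an illustrative 90-degree rotation about the z-axis and an arbitrary translation (values chosen for this example only):

```python
import numpy as np

# Illustrative transform: 90-degree rotation about the z-axis plus a translation.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([1.0, 2.0, 0.0])

p_i = np.array([1.0, 0.0, 0.0])   # first feature point (source cloud)
q_i = R @ p_i + t                 # formula (1): q_i = R p_i + t
# the point is rotated onto the y-axis and then shifted by t
```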
In operation S250, the source point cloud and the target point cloud are spliced according to the transformation data to obtain map data.
For example, since the transformation data includes a rotation matrix and a translation vector between the first feature point and the second feature point in the associated point pair, the transformation data represents a pose change between the source point cloud and the target point cloud. Therefore, according to the transformation data, the source point cloud and the target point cloud can be spliced, and the spliced point cloud data can be used as map data for drawing a map.
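A sketch of this splicing step, assuming the clouds are numpy arrays of shape (N, 3) and (R, t) are the solved transformation data (the function name is illustrative):

```python
import numpy as np

def splice(source, target, R, t):
    """Apply the solved transform to every source point (q = R p + t, row-wise)
    and concatenate the aligned source cloud with the target cloud."""
    aligned = source @ R.T + t
    return np.vstack([aligned, target])
```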
Compared with extracting feature points using geometric information alone, the embodiments of the present disclosure jointly extract point cloud feature points using both semantic information and geometric information, so that the inlier rate of the first feature points of the source point cloud and the inlier rate of the second feature points of the target point cloud are higher, which helps improve the success rate of global point cloud registration.
Fig. 3 is a block diagram of a method of determining pairs of associated points according to one embodiment of the present disclosure.
As shown in fig. 3, the source point cloud 311 undergoes key point detection and feature extraction by the deep learning model to obtain a first feature point set 312. The deep learning model may further output a first evaluation value for each first feature point, characterizing the criticality or distinctiveness of that first feature point. The higher the first evaluation value, the more distinctive the first feature point, and the more favorable it is for subsequent point cloud registration. Therefore, at least one first feature point 313 whose first evaluation value is greater than a first threshold (e.g., 0.5) can be screened out from the first feature point set 312 for subsequent feature point association.
Similarly, the target point cloud 321 undergoes key point detection and feature extraction by the deep learning model to obtain a second feature point set 322. The deep learning model may further output a second evaluation value for each second feature point, characterizing the criticality or distinctiveness of that second feature point. The higher the second evaluation value, the more distinctive the second feature point, and the more favorable it is for subsequent point cloud registration. Therefore, at least one second feature point 323 whose second evaluation value is greater than a second threshold (e.g., 0.5) can be screened out from the second feature point set 322 for subsequent feature point association.
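The screening described in the two paragraphs above amounts to a threshold filter over the detector's evaluation values; a minimal sketch (names are illustrative, and the 0.5 default follows the example threshold above):

```python
import numpy as np

def screen_keypoints(points, scores, threshold=0.5):
    """Keep only the feature points whose evaluation value exceeds the threshold."""
    keep = np.asarray(scores) > threshold
    return np.asarray(points)[keep]
```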
The at least one first feature point 313 and the at least one second feature point 323 are associated according to a first feature vector of the at least one first feature point 313 and a second feature vector of the at least one second feature point 323, so as to obtain the associated point pair set 331.
According to the embodiments of the present disclosure, compared with directly using the first feature point set 312 and the second feature point set 322 for association, selecting the most distinctive first feature points and second feature points from the two sets for association avoids the high time consumption and poor results caused by associating feature points with low distinctiveness, and can therefore improve the efficiency and accuracy of feature point association.
According to an embodiment of the present disclosure, the set of associated point pairs includes a plurality of candidate associated point pairs. Determining transformation data between the first set of feature points and the second set of feature points according to the set of associated point pairs comprises: screening target associated point pairs from the associated point pair set according to the translation invariant relation and the rotation invariant relation among the candidate associated point pairs; and determining transformation data according to the target associated point pairs.
For example, the set of associated point pairs includes a plurality of candidate associated point pairs {q_i, p_i}, some of which may be wrong associations. It is therefore necessary to use these candidate associated point pairs to solve for the rotation matrix R and the translation vector t that yield the optimal registration. Since the associations may contain errors, the problem cannot be solved directly with a method similar to point-to-point ICP; instead, a robust solver that is insensitive to outliers is required.
RANSAC (Random Sample Consensus) is a classic robust solving algorithm, but its computational efficiency is affected by the inlier ratio; when the inlier ratio is low, solving is slow.
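The dependence of RANSAC on the inlier ratio can be made concrete with the standard iteration-count relation (a textbook formula, not taken from this patent): to find at least one all-inlier minimal sample with confidence c, about log(1 − c) / log(1 − w^s) random draws are needed, where w is the inlier ratio and s the minimal sample size (three point pairs for a rigid transform).

```python
import math

def ransac_iterations(inlier_ratio, sample_size=3, confidence=0.99):
    """Number of random draws needed so that, with probability `confidence`,
    at least one minimal sample (3 point pairs for a rigid transform)
    contains only inliers."""
    return math.ceil(math.log(1.0 - confidence) /
                     math.log(1.0 - inlier_ratio ** sample_size))
```

At a 50% inlier ratio this is a few dozen iterations, but at 10% it grows to several thousand, which is why the graph-based inlier filtering described next is attractive.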
The method of this embodiment takes the candidate associated point pairs as observation data and computes the translation invariant relationship (also called a translation invariant observation) and the rotation invariant relationship (a rotation invariant observation) between any two candidate associated point pairs. An initial relationship graph is constructed with each candidate associated point pair as a vertex and each translation invariant relationship as an edge; the initial relationship graph is then adjusted according to the rotation invariant relationships to obtain a target relationship graph; and the candidate associated point pairs represented by the vertices of the maximum complete subgraph (maximum clique) of the target relationship graph are determined as the target associated point pairs. Finally, the rotation matrix R and the translation vector t are solved from the target associated point pairs by a robust optimization method.
For example, the i-th candidate associated point pair {q_i, p_i} conforms to the association relationship expressed by the following formula (2).

q_i = R p_i + t + o_i + ε_i        (2)
where p_i represents the first feature point of the i-th candidate associated point pair, q_i represents the second feature point of the i-th candidate associated point pair, o_i represents the association error of the i-th candidate associated point pair, and ε_i represents the observation error of the i-th candidate associated point pair; R denotes the rotation matrix and t denotes the translation vector.
When the i-th candidate associated point pair {q_i, p_i} is a correct association, o_i is a zero vector; when the i-th candidate associated point pair {q_i, p_i} is a wrong association, o_i is an arbitrary non-zero vector.
If another candidate associated point pair {q_j, p_j} exists, it conforms to the association relationship expressed by the following formula (3).

q_j = R p_j + t + o_j + ε_j        (3)
where p_j represents the first feature point of the j-th candidate associated point pair, q_j represents the second feature point of the j-th candidate associated point pair, o_j represents the association error of the j-th candidate associated point pair, and ε_j represents the observation error of the j-th candidate associated point pair.
The jth candidate associated point pair { q j ,p j When it is correct association, o j Is a zero vector; when the jth candidate associated point pair { q } j ,p j When it is wrong, o j Is an arbitrary non-zero vector.
Subtracting equation (3) from equation (2) eliminates the translation vector t, resulting in the following equation (4):

q_i − q_j = R·(p_i − p_j) + (o_i − o_j) + (ε_i − ε_j)    (4)

Equation (4) is independent of the translation vector t and may therefore be called the translation invariant relationship between the ith candidate associated point pair and the jth candidate associated point pair. Therefore, if there are N (N is an integer greater than 1) candidate associated point pairs, N·(N−1)/2 translation invariant relationships can be obtained by combining them in pairs.
Taking the modulus of both sides of equation (4) yields the following equation (5):

‖q_i − q_j‖ = ‖R·(p_i − p_j) + (o_i − o_j) + (ε_i − ε_j)‖    (5)

Because a rotation matrix preserves the modulus of a vector, when both associations are correct (o_i and o_j are zero vectors) and the observation errors are small, equation (5) reduces to ‖q_i − q_j‖ ≈ ‖p_i − p_j‖. Equation (5) may therefore be called the rotation invariant relationship.
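As a concrete illustration, the translation invariant relationships of equation (4) and the rotation invariant check of equation (5) can be computed for all N·(N−1)/2 combinations of candidate pairs. The sketch below is illustrative only (the array layout and function name are assumptions, not the patent's implementation); `src_pts[k]` plays the role of p_k and `dst_pts[k]` the role of q_k.

```python
import itertools
import numpy as np

def invariant_measurements(src_pts, dst_pts):
    """For every combination (i, j), return the translation-invariant
    vectors (q_i - q_j, p_i - p_j) of equation (4) and the
    rotation-invariant residual | ||q_i - q_j|| - ||p_i - p_j|| |
    implied by equation (5)."""
    n = len(src_pts)
    tims, residuals = {}, {}
    for i, j in itertools.combinations(range(n), 2):  # N(N-1)/2 combinations
        dq = dst_pts[i] - dst_pts[j]   # left side of equation (4)
        dp = src_pts[i] - src_pts[j]   # the vector rotated by R in equation (4)
        tims[(i, j)] = (dq, dp)
        # A rotation preserves norms, so for two correct associations the
        # residual is close to zero -- this is the check of equation (5).
        residuals[(i, j)] = abs(np.linalg.norm(dq) - np.linalg.norm(dp))
    return tims, residuals
```

A wrong association in either pair i or pair j inflates the residual, which is exactly the signal used later to prune edges of the relationship graph.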
Fig. 4A is an illustration of an initial relationship diagram according to one embodiment of the disclosure.
The initial relationship diagram shown in fig. 4A is obtained by constructing an edge for each translation invariant relationship with each candidate associated point pair as a vertex.
As shown in fig. 4A, the initial relationship graph includes vertices 410-460. If vertex 410 is the candidate associated point pair {q_i, p_i} and vertex 420 is the candidate associated point pair {q_j, p_j}, then the edge between vertex 410 and vertex 420 corresponds to equation (4) above, and the rotation invariant relationship corresponding to that edge is equation (5). If the two vertices corresponding to the edge (vertex 410 and vertex 420) are both correct associations, then o_i and o_j are zero vectors and equation (5) holds. If equation (5) does not hold, at least one of the two vertices corresponding to the edge is a wrong association, i.e., at least one of vertex 410 and vertex 420 is a wrong association.
Similarly, the edge between any two vertices in the initial relationship graph is traversed to determine whether the rotation invariant relationship corresponding to the edge holds. Each edge for which the rotation invariant relationship does not hold is deleted from the graph, resulting in the target relationship graph.
Fig. 4B is an illustration of a target relationship graph according to one embodiment of the disclosure.
As shown in FIG. 4B, the target relationship graph is obtained after deleting the edge between vertex 430 and vertex 420 and the edge between vertex 430 and vertex 440 in the initial relationship graph. That is, the rotation invariant relationship for the edge between vertex 430 and vertex 420 does not hold, and the rotation invariant relationship for the edge between vertex 430 and vertex 440 does not hold.
For the target relationship graph shown in fig. 4B, a maximum clique screening method may be adopted to screen out the target associated point pairs. A subgraph in which every two vertices are connected by an edge is called a clique; the maximum clique is the clique with the largest number of vertices in the graph, and is also called the maximum complete subgraph.
For example, as shown in fig. 4B, the subgraph composed of the vertices 410, 420, 440, 450, 460 is the largest complete subgraph in the target relationship graph, and the candidate associated point pairs represented by the vertices of the largest complete subgraph are target associated point pairs. I.e., the candidate associated point pairs represented by vertices 410, 420, 440, 450, 460, respectively, are target associated point pairs.
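The screening flow above can be sketched as follows. This is an illustrative brute-force implementation (practical only for small graphs; real systems use dedicated maximum clique solvers), and the noise tolerance `eps` on the rotation invariant check is an assumed parameter.

```python
import itertools
import numpy as np

def screen_target_pairs(src_pts, dst_pts, eps=0.1):
    """Build the target relationship graph by keeping only edges whose
    rotation-invariant relationship (5) holds within `eps`, then return
    the vertices of the maximum complete subgraph (maximum clique)."""
    n = len(src_pts)
    edges = {
        (i, j)
        for i, j in itertools.combinations(range(n), 2)
        if abs(np.linalg.norm(dst_pts[i] - dst_pts[j])
               - np.linalg.norm(src_pts[i] - src_pts[j])) <= eps
    }
    # Search vertex subsets from largest to smallest; the first subset
    # whose vertices are pairwise connected is a maximum clique.
    for size in range(n, 0, -1):
        for subset in itertools.combinations(range(n), size):
            if all(pair in edges for pair in itertools.combinations(subset, 2)):
                return list(subset)
    return []
```

The vertices returned correspond to the target associated point pairs, e.g. the subgraph {410, 420, 440, 450, 460} in fig. 4B.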
The following describes solving the rotation matrix R and the translation vector t from the target associated point pairs by a robust optimization method.
For example, suppose that K target associated point pairs remain after maximum clique screening and are used to solve the rotation matrix R and the translation vector t. To solve the rotation matrix R, a truncated least squares optimization problem can be defined, as shown in the following equation (6):

R* = argmin_R Σ_{k=1..K} min( ‖q_k − R·p_k‖², c² )    (6)

where c is a constant. The operation of taking the minimum value in equation (6) truncates the contribution of any single pair at c², which mitigates the problem of a large error in the rotation matrix caused by the large association errors of wrong associations.

According to the Black–Rangarajan duality, the truncated least squares problem shown in equation (6) can be converted into the optimization problem shown in equation (7):

(R*, ω*) = argmin_{R, ω_k ∈ [0,1]} Σ_{k=1..K} [ ω_k·‖q_k − R·p_k‖² + μ·(1 − ω_k)/(μ + ω_k)·c² ]    (7)

where ω_k is the weight of the kth target associated point pair: ω_k = 1 marks the pair as an inlier, and ω_k = 0 marks it as an outlier.

The optimization problem shown in equation (7) cannot be solved directly, so R and ω_k can be solved alternately based on the GNC (Graduated Non-Convexity) method. First, the weights ω_k are fixed (e.g., ω_k = 1), yielding equation (8):

R̂ = argmin_R Σ_{k=1..K} ω_k·‖q_k − R·p_k‖²    (8)

The optimization problem shown in equation (8) is a classical point-to-point ICP problem and can be solved by SVD (Singular Value Decomposition) to obtain the optimized R̂. Then the optimized R̂ is substituted into equation (9):

ω̂ = argmin_ω Σ_{k=1..K} [ ω_k·r_k² + μ·(1 − ω_k)/(μ + ω_k)·c² ],  where r_k = ‖q_k − R̂·p_k‖    (9)

where ω is the vector composed of the K weights ω_k; solving equation (9) updates ω_k. Equation (9) has an analytical solution, as shown in equation (10):

ω_k = 0,  if r_k² > ((μ + 1)/μ)·c²
ω_k = (c/r_k)·√(μ·(μ + 1)) − μ,  if (μ/(μ + 1))·c² ≤ r_k² ≤ ((μ + 1)/μ)·c²
ω_k = 1,  if r_k² < (μ/(μ + 1))·c²    (10)

At each iteration, the control parameter is increased, μ_t = k·μ_{t−1} with k > 1. The iteration ends when the change of the objective function in equation (8) is less than a given threshold or the maximum number of iterations is reached.
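The alternating scheme can be sketched as follows. This is a minimal illustration under assumed notation, not the patent's implementation: the translation is handled here by removing weighted centroids inside the SVD step (an assumption; the patent solves t separately by voting), `c`, `k`, `mu0` and the iteration count are assumed parameters, and `src[k]`/`dst[k]` play the roles of p_k/q_k.

```python
import numpy as np

def solve_rotation_gnc(src, dst, c=0.5, k=1.4, iters=60, mu0=1e-3):
    """Alternate the weighted SVD step of equation (8) with the
    analytical weight update of equation (10), graduating mu each round."""
    w = np.ones(len(src))              # start with all omega_k = 1
    mu = mu0
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # Equation (8): weighted point-to-point alignment. Removing the
        # weighted centroids handles the translation, then SVD of the
        # weighted cross-covariance gives the optimal rotation (Kabsch).
        cp = np.average(src, axis=0, weights=w)
        cq = np.average(dst, axis=0, weights=w)
        H = ((src - cp) * w[:, None]).T @ (dst - cq)
        U, _, Vt = np.linalg.svd(H)
        S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ S @ U.T             # optimized rotation, det(R) = +1
        t = cq - R @ cp
        # Equation (10): analytical weight update from the residuals r_k.
        r2 = np.sum((dst - src @ R.T - t) ** 2, axis=1)
        w = np.clip(c * np.sqrt(mu * (mu + 1.0)) / np.sqrt(r2 + 1e-12) - mu,
                    0.0, 1.0)
        w[r2 > (mu + 1.0) / mu * c ** 2] = 0.0   # reject clear outliers
        w[r2 < mu / (mu + 1.0) * c ** 2] = 1.0   # accept clear inliers
        mu *= k                        # graduate the non-convexity
    return R, t, w
```

Starting with a small μ makes the surrogate cost nearly convex (close to least squares); as μ grows, the cost approaches the truncated least squares of equation (6) and the weights polarize toward 0 or 1.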
According to an embodiment of the present disclosure, after the rotation matrix R* is calculated, a corresponding translation vector may be calculated for each target associated point pair based on the association relationship. The optimal translation vector t* can then be solved by a voting method.
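One simple way to realize this voting step is sketched below: each target associated point pair votes with its own translation hypothesis q_k − R*·p_k, and the component-wise median of the votes is taken as t*. The median is an assumed concrete choice for illustration; the patent only specifies a voting method.

```python
import numpy as np

def vote_translation(src, dst, R):
    """Each pair votes with its own translation q_k - R p_k; the
    component-wise median is robust to a minority of wrong votes."""
    votes = dst - src @ R.T        # one translation hypothesis per pair
    return np.median(votes, axis=0)
```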
According to the embodiment of the present disclosure, screening the target associated point pairs out of the candidate associated point pairs further improves the inlier rate of the associated point pairs and avoids the large time consumption of the robust optimization solving algorithm when the inlier rate is low, so the robust optimization solving is more efficient.
Fig. 5 is a flow diagram of a map generation method according to one embodiment of the present disclosure.
As shown in fig. 5, the map generation method 500 includes operations S510 to S520.
In operation S510, map data is acquired.
In operation S520, a map is generated from the map data.
Wherein the map data is generated according to the map data generation method.
For example, map data obtained by stitching the source point cloud and the target point cloud reflects a real motion scene of the autonomous vehicle. And then, traffic elements such as lane lines, zebra stripes, arrows, signal lamps, signs and the like sensed in the driving track of the automatic driving vehicle are marked in the map data, so that a complete high-precision map can be obtained.
Fig. 6 is a block diagram of a map data generation apparatus according to one embodiment of the present disclosure.
As shown in fig. 6, the map data generating apparatus 600 includes a first determining module 601, a second determining module 602, a third determining module 603, a fourth determining module 604, and a splicing module 605.
The first determining module 601 is configured to determine a first feature point set of the source point cloud according to the geometric information and the semantic information of the source point cloud.
The second determining module 602 is configured to determine a second feature point set of the target point cloud according to the geometric information and the semantic information of the target point cloud.
The third determining module 603 is configured to determine a set of associated point pairs according to the first set of characteristic points and the second set of characteristic points, where the set of associated point pairs includes candidate associated point pairs, and the candidate associated point pairs include the first characteristic point and the second characteristic point that are associated with each other.
The fourth determining module 604 is configured to determine transformation data between the source point cloud and the target point cloud according to the set of associated point pairs.
The splicing module 605 is configured to splice the source point cloud and the target point cloud according to the transformation data to obtain map data.
According to an embodiment of the present disclosure, the first feature point in the first feature point set includes a first evaluation value and a first feature description vector, and the second feature point in the second feature point set includes a second evaluation value and a second feature description vector.
The third determination module 603 comprises a first screening unit, a second screening unit and an association unit.
The first screening unit is used for screening out at least one first characteristic point of which the first evaluation value is larger than a first threshold value from the first characteristic point set.
The second screening unit is used for screening out at least one second feature point of which the second evaluation value is larger than a second threshold value from the second feature point set.
The association unit is configured to associate the at least one first feature point with the at least one second feature point according to the first feature description vector and the second feature description vector, so as to obtain an associated point pair set.
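The screening-and-association flow implemented by the three units above can be sketched as follows. The mutual-nearest-neighbour matching rule and the function signature are assumptions for illustration; the patent only specifies thresholding the evaluation values and associating by the feature description vectors.

```python
import numpy as np

def associate(src_desc, src_score, dst_desc, dst_score, th1=0.5, th2=0.5):
    """Keep feature points whose evaluation value exceeds its threshold
    (the two screening units), then pair points whose feature description
    vectors are mutual nearest neighbours (the association unit)."""
    keep_s = np.where(src_score > th1)[0]   # first screening unit
    keep_d = np.where(dst_score > th2)[0]   # second screening unit
    # Pairwise distances between the kept description vectors.
    d = np.linalg.norm(src_desc[keep_s, None, :] - dst_desc[None, keep_d, :],
                       axis=2)
    s2d = d.argmin(axis=1)                  # nearest target for each source
    d2s = d.argmin(axis=0)                  # nearest source for each target
    # Mutual nearest neighbours become candidate associated point pairs.
    return [(int(keep_s[i]), int(keep_d[j]))
            for i, j in enumerate(s2d) if d2s[j] == i]
```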
According to an embodiment of the present disclosure, the set of associated point pairs includes a plurality of candidate associated point pairs. The fourth determination module 604 includes a third screening unit and a determination unit.
The third screening unit is used for screening the target associated point pair from the associated point pair set according to the translation invariant relation and the rotation invariant relation among the candidate associated point pairs.
The determining unit is used for determining the transformation data according to the target associated point pair.
According to an embodiment of the present disclosure, the third screening unit includes a first determining subunit, a constructing subunit, a second determining subunit, and a third determining subunit.
The first determining subunit is configured to determine a translation-invariant relationship and a rotation-invariant relationship between the plurality of candidate pairs of associated points to each other.
And the construction subunit is used for constructing an initial relationship graph by taking each candidate associated point pair as a vertex and taking the translation invariant relationship as an edge.
And the second determining subunit is used for adjusting the initial relationship diagram according to the rotation invariant relationship to obtain a target relationship diagram.
And a third determining subunit, configured to determine, as the target associated point pair, the candidate associated point pair represented by the vertex of the largest complete subgraph in the target relationship graph.
According to an embodiment of the present disclosure, the transformation data includes a rotation matrix and a translation vector, and the candidate associated point pairs conform to the following association relationships:

q_i = R·p_i + t + o_i + ε_i

q_j = R·p_j + t + o_j + ε_j

where p_i represents the first feature point of the ith candidate associated point pair, q_i represents the second feature point of the ith candidate associated point pair, o_i represents the association error of the ith candidate associated point pair, and ε_i represents the observation error of the ith candidate associated point pair.

p_j represents the first feature point of the jth candidate associated point pair, q_j represents the second feature point of the jth candidate associated point pair, o_j represents the association error of the jth candidate associated point pair, and ε_j represents the observation error of the jth candidate associated point pair.

R denotes the rotation matrix and t denotes the translation vector.
The first determining subunit is configured to determine a translation invariant relationship between the plurality of candidate pairs of associated points with respect to each other according to the following formula:
q_i − q_j = R·(p_i − p_j) + (o_i − o_j) + (ε_i − ε_j)
determining a rotation invariant relationship between a plurality of candidate pairs of associated points to each other according to the following formula:
‖q_i − q_j‖ = ‖R·(p_i − p_j) + (o_i − o_j) + (ε_i − ε_j)‖
wherein the rotation invariant relationship is obtained by taking a modulus of the translation invariant relationship.
The first determining module is used for inputting the geometric information and the semantic information of the source point cloud into the deep learning model to obtain a first feature point set of the source point cloud;
and the second determining module is used for inputting the geometric information and the semantic information of the target point cloud into the deep learning model to obtain a second characteristic point set of the target point cloud.
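As a concrete illustration of the input consumed by the two determining modules, the geometric information (x, y, z coordinates) and the semantic information (a per-point class id, one-hot encoded here) can be concatenated into one per-point feature matrix for the deep learning model. The data layout and encoding are assumptions for illustration; the patent does not specify the model's input format.

```python
import numpy as np

def build_model_input(xyz, semantic_ids, num_classes):
    """Concatenate geometry (N x 3) with one-hot semantics (N x C) into
    the N x (3 + C) feature matrix fed to the deep learning model."""
    onehot = np.eye(num_classes)[semantic_ids]    # semantic information
    return np.concatenate([xyz, onehot], axis=1)  # geometry + semantics
```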
Fig. 7 is a block diagram of a map generation apparatus according to one embodiment of the present disclosure.
As shown in fig. 7, the map generating apparatus 700 may include an obtaining module 701 and a generating module 702.
The obtaining module 701 is configured to obtain map data.
The generation module 702 is configured to generate a map according to the map data.
Wherein the map data is obtained from the map data generating device.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 8 illustrates a schematic block diagram of an example electronic device 800 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 8, the apparatus 800 includes a computing unit 801 which can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 802 or a computer program loaded from a storage unit 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data required for the operation of the device 800 can also be stored. The calculation unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to bus 804.
A number of components in the device 800 are connected to the I/O interface 805, including: an input unit 806, such as a keyboard, a mouse, or the like; an output unit 807 such as various types of displays, speakers, and the like; a storage unit 808, such as a magnetic disk, optical disk, or the like; and a communication unit 809 such as a network card, modem, wireless communication transceiver, etc. The communication unit 809 allows the device 800 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
Computing unit 801 may be a variety of general and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 801 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and the like. The calculation unit 801 executes the respective methods and processes described above, such as the map data generation method and/or the map generation method. For example, in some embodiments, the map data generation method and/or the map generation method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 808. In some embodiments, part or all of a computer program may be loaded onto and/or installed onto device 800 via ROM 802 and/or communications unit 809. When loaded into RAM 803 and executed by computing unit 801, a computer program may perform one or more steps of the map data generation method and/or map generation method described above. Alternatively, in other embodiments, the computing unit 801 may be configured to perform the map data generation method and/or the map generation method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/acts specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel or sequentially or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the protection scope of the present disclosure.

Claims (20)

1. A map data generation method, comprising:
determining a first feature point set of a source point cloud according to geometric information and semantic information of the source point cloud;
determining a second feature point set of the target point cloud according to the geometric information and the semantic information of the target point cloud;
determining a set of associated point pairs according to the first feature point set and the second feature point set, wherein the set of associated point pairs comprises candidate associated point pairs, and the candidate associated point pairs comprise a first feature point and a second feature point which are associated with each other;
determining transformation data between the source point cloud and the target point cloud according to the associated point pair set; and
and splicing the source point cloud and the target point cloud according to the transformation data to obtain map data.
2. The method according to claim 1, wherein a first feature point in the first feature point set includes a first evaluation value and a first feature description vector, and a second feature point in the second feature point set includes a second evaluation value and a second feature description vector; the determining a set of associated point pairs according to the first set of feature points and the second set of feature points comprises:
screening at least one first characteristic point of which the first evaluation value is larger than a first threshold value from the first characteristic point set;
screening out at least one second characteristic point with a second evaluation value larger than a second threshold value from the second characteristic point set; and
and associating the at least one first characteristic point with the at least one second characteristic point according to the first characteristic description vector and the second characteristic description vector to obtain the associated point pair set.
3. The method of claim 1, wherein the set of associated point pairs comprises a plurality of candidate associated point pairs; the determining transformation data between the source point cloud and the target point cloud according to the associated point pair set includes:
screening target associated point pairs from the associated point pair set according to the translation invariant relation and the rotation invariant relation among the candidate associated point pairs; and
and determining the transformation data according to the target associated point pair.
4. The method of claim 3, wherein the screening of a target associated point pair from the set of associated point pairs according to a translation invariant relationship and a rotation invariant relationship between a plurality of candidate associated point pairs in the set of associated point pairs comprises:
determining a translation-invariant relationship and a rotation-invariant relationship between the plurality of candidate pairs of associated points to each other;
constructing an initial relationship graph by taking each candidate associated point pair as a vertex and taking the translation invariant relationship as an edge;
adjusting the initial relationship graph according to the rotation invariant relationship to obtain a target relationship graph; and
and determining the candidate associated point pair represented by the vertex of the maximum complete subgraph in the target relational graph as the target associated point pair.
5. The method of claim 4, wherein the transformation data comprises a rotation matrix and a translation vector, and the candidate associated point pairs conform to the following association relationships:

q_i = R·p_i + t + o_i + ε_i

q_j = R·p_j + t + o_j + ε_j

wherein p_i represents the first feature point of the ith candidate associated point pair, q_i represents the second feature point of the ith candidate associated point pair, o_i represents the association error of the ith candidate associated point pair, and ε_i represents the observation error of the ith candidate associated point pair;

p_j represents the first feature point of the jth candidate associated point pair, q_j represents the second feature point of the jth candidate associated point pair, o_j represents the association error of the jth candidate associated point pair, and ε_j represents the observation error of the jth candidate associated point pair;

R represents the rotation matrix and t represents the translation vector.
6. The method of claim 5, wherein the determining a translation-invariant relationship and a rotation-invariant relationship of the plurality of candidate pairs of associated points to each other comprises:
determining a translation invariant relationship of the plurality of candidate pairs of associated points to each other according to the following formula:
q_i − q_j = R·(p_i − p_j) + (o_i − o_j) + (ε_i − ε_j)
determining a rotation invariant relationship of the plurality of candidate pairs of associated points to each other according to the following formula:
‖q_i − q_j‖ = ‖R·(p_i − p_j) + (o_i − o_j) + (ε_i − ε_j)‖
wherein the rotation invariant relationship is obtained by taking a modulus of the translation invariant relationship.
7. The method of claim 1, wherein,
the determining a first feature point set of the source point cloud according to the geometric information and the semantic information of the source point cloud comprises:
inputting the geometric information and semantic information of the source point cloud into a deep learning model to obtain a first feature point set of the source point cloud;
the determining the second feature point set of the target point cloud according to the geometric information and the semantic information of the target point cloud comprises:
and inputting the geometric information and the semantic information of the target point cloud into a deep learning model to obtain a second feature point set of the target point cloud.
8. A map generation method, comprising:
acquiring map data; and
generating a map according to the map data;
wherein the map data is obtained according to the method of any one of claims 1 to 7.
9. A map data generation apparatus comprising:
the first determining module is used for determining a first feature point set of a source point cloud according to geometric information and semantic information of the source point cloud;
the second determining module is used for determining a second feature point set of the target point cloud according to the geometric information and the semantic information of the target point cloud;
a third determining module, configured to determine a set of associated point pairs according to the first feature point set and the second feature point set, where the set of associated point pairs includes candidate associated point pairs including a first feature point and a second feature point associated with each other;
a fourth determining module, configured to determine transformation data between the source point cloud and the target point cloud according to the associated point pair set; and
and the splicing module is used for splicing the source point cloud and the target point cloud according to the transformation data to obtain map data.
10. The apparatus according to claim 9, wherein a first feature point in the first feature point set includes a first evaluation value and a first feature description vector, and a second feature point in the second feature point set includes a second evaluation value and a second feature description vector; the third determining module includes:
a first screening unit configured to screen at least one first feature point, of which a first evaluation value is greater than a first threshold, from the first feature point set;
a second filtering unit configured to filter out at least one second feature point, of which a second evaluation value is greater than a second threshold, from the second feature point set; and
and the association unit is configured to associate the at least one first feature point with the at least one second feature point according to the first feature description vector and the second feature description vector, so as to obtain the associated point pair set.
11. The apparatus of claim 9, wherein the set of pairs of associated points comprises a plurality of candidate pairs of associated points; the fourth determining module comprises:
a third screening unit, configured to screen out a target associated point pair from the associated point pair set according to a translation invariant relationship and a rotation invariant relationship between the multiple candidate associated point pairs; and
a determining unit, configured to determine the transform data according to the target associated point pair.
12. The apparatus of claim 11, wherein the third screening unit comprises:
a first determining subunit, configured to determine a translation invariant relationship and a rotation invariant relationship between the plurality of candidate associated point pairs;
a construction subunit, configured to construct an initial relationship graph with each candidate associated point pair as a vertex and the translation invariant relationship as an edge;
a second determining subunit, configured to adjust the initial relationship graph according to the rotation invariant relationship to obtain a target relationship graph; and
a third determining subunit, configured to determine, as the target associated point pair, a candidate associated point pair represented by a vertex of a largest complete subgraph in the target relationship graph.
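An illustrative sketch of the selection described in claims 11 and 12: each candidate associated point pair is a vertex, an edge connects two pairs whose point-to-point distances agree (a rigid transform preserves distances), and the pairs in the largest complete subgraph (maximum clique) are kept as targets. The brute-force clique search and the consistency tolerance are assumptions for illustration, not the claimed procedure:

```python
from itertools import combinations
import math

def largest_consistent_subset(pairs_p, pairs_q, tol=1e-3):
    """pairs_p[k] / pairs_q[k]: source / target point of the k-th candidate pair.
    Returns the vertex indices of a maximum clique in the consistency graph."""
    n = len(pairs_p)
    # adjacency: pairs k, l are compatible if |d(p_k,p_l) - d(q_k,q_l)| <= tol
    adj = [[abs(math.dist(pairs_p[k], pairs_p[l]) -
                math.dist(pairs_q[k], pairs_q[l])) <= tol
            for l in range(n)] for k in range(n)]
    for size in range(n, 0, -1):            # try the largest subsets first
        for subset in combinations(range(n), size):
            if all(adj[k][l] for k, l in combinations(subset, 2)):
                return list(subset)         # complete subgraph found
    return []
```

Brute force is exponential and only suitable for small candidate sets; a production system would use a dedicated maximum-clique solver.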
13. The apparatus of claim 12, wherein the transformation data comprises a rotation matrix and a translation vector, and the candidate associated point pairs satisfy the following association relationship:
q_i = R·p_i + t + o_i + ε_i
q_j = R·p_j + t + o_j + ε_j
wherein p_i represents the first feature point in the i-th candidate associated point pair, q_i represents the second feature point in the i-th candidate associated point pair, o_i represents the association error of the i-th candidate associated point pair, and ε_i represents the observation error of the i-th candidate associated point pair;
p_j represents the first feature point in the j-th candidate associated point pair, q_j represents the second feature point in the j-th candidate associated point pair, o_j represents the association error of the j-th candidate associated point pair, and ε_j represents the observation error of the j-th candidate associated point pair; and
R represents the rotation matrix and t represents the translation vector.
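Under the model of claim 13, once target associated point pairs are selected, R and t can be recovered by the standard SVD-based (Kabsch) least-squares fit. This is a generic sketch of that well-known procedure, not necessarily the determination used in this patent:

```python
import numpy as np

def estimate_rigid_transform(P, Q):
    """Least-squares fit of q_i ≈ R p_i + t for paired N×3 arrays P, Q."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)   # centroids
    H = (P - cp).T @ (Q - cq)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

With exact correspondences the fit is exact; the o and ε error terms of claim 13 make it a least-squares estimate instead.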
14. The apparatus of claim 13, wherein the first determining subunit is configured to determine the translation invariant relationship between the plurality of candidate associated point pairs according to the following formula:

q_i − q_j = R·(p_i − p_j) + (o_i − o_j) + (ε_i − ε_j)

and determine the rotation invariant relationship between the plurality of candidate associated point pairs according to the following formula:

‖q_i − q_j‖ ≈ ‖p_i − p_j‖

wherein the rotation invariant relationship is obtained by taking a modulus of the translation invariant relationship.
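The two invariants of claim 14 can be checked numerically: the translation t cancels in the difference of the two association equations, and taking the modulus additionally cancels the rotation R, since ‖R·v‖ = ‖v‖ for any rotation matrix. A small numeric sketch, assuming an exact (noise-free) rigid transform:

```python
import numpy as np

# an example rotation of 90° about the z-axis and a translation
R = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
t = np.array([4., -2., 7.])

p_i, p_j = np.array([1., 2., 3.]), np.array([-1., 0., 5.])
q_i, q_j = R @ p_i + t, R @ p_j + t

# translation invariant: t cancels in the difference of the pair equations
assert np.allclose(q_i - q_j, R @ (p_i - p_j))
# rotation invariant: the modulus is unchanged by R
assert np.isclose(np.linalg.norm(q_i - q_j), np.linalg.norm(p_i - p_j))
```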
15. The apparatus of claim 9, wherein the first determining module is configured to input the geometric information and the semantic information of the source point cloud into a deep learning model to obtain the first feature point set of the source point cloud; and
the second determining module is configured to input the geometric information and the semantic information of the target point cloud into the deep learning model to obtain the second feature point set of the target point cloud.
16. A map generation apparatus, comprising:
an acquisition module, configured to acquire map data; and
a generation module, configured to generate a map according to the map data;
wherein the map data is derived by an apparatus according to any of claims 9 to 15.
17. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 8.
18. An autonomous vehicle comprising the electronic device of claim 17.
19. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1 to 8.
20. A computer program product, comprising a computer program stored on at least one of a readable storage medium and an electronic device, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1 to 8.
CN202211533576.5A 2022-11-30 2022-11-30 Map data generation method, high-precision map generation method and device Pending CN115937448A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211533576.5A CN115937448A (en) 2022-11-30 2022-11-30 Map data generation method, high-precision map generation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211533576.5A CN115937448A (en) 2022-11-30 2022-11-30 Map data generation method, high-precision map generation method and device

Publications (1)

Publication Number Publication Date
CN115937448A true CN115937448A (en) 2023-04-07

Family

ID=86650090

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211533576.5A Pending CN115937448A (en) 2022-11-30 2022-11-30 Map data generation method, high-precision map generation method and device

Country Status (1)

Country Link
CN (1) CN115937448A (en)

Similar Documents

Publication Publication Date Title
EP3505869B1 (en) Method, apparatus, and computer readable storage medium for updating electronic map
Yin et al. Airport detection based on improved faster RCNN in large scale remote sensing images
CN112184508A (en) Student model training method and device for image processing
US11255678B2 (en) Classifying entities in digital maps using discrete non-trace positioning data
CN113377888A (en) Training target detection model and method for detecting target
CN110704652A (en) Vehicle image fine-grained retrieval method and device based on multiple attention mechanism
CN104463826A (en) Novel point cloud parallel Softassign registering algorithm
CN114003613A (en) High-precision map lane line updating method and device, electronic equipment and storage medium
Yuan et al. Image feature based GPS trace filtering for road network generation and road segmentation
EP3985637A2 (en) Method and apparatus for outputting vehicle flow direction, roadside device, and cloud control platform
Lu et al. A lightweight real-time 3D LiDAR SLAM for autonomous vehicles in large-scale urban environment
CN112883236B (en) Map updating method and device, electronic equipment and storage medium
CN114187357A (en) High-precision map production method and device, electronic equipment and storage medium
CN115239899B (en) Pose map generation method, high-precision map generation method and device
Zhang et al. Edge-preserving stereo matching using minimum spanning tree
CN115937448A (en) Map data generation method, high-precision map generation method and device
CN115937449A (en) High-precision map generation method and device, electronic equipment and storage medium
CN114674328A (en) Map generation method, map generation device, electronic device, storage medium, and vehicle
CN115032672A (en) Fusion positioning method and system based on positioning subsystem
CN115239776A (en) Point cloud registration method, device, equipment and medium
CN114111813A (en) High-precision map element updating method and device, electronic equipment and storage medium
US11574417B2 (en) Portable device positioning data processing method and apparatus, device, and storage medium
Rekavandi et al. B-Pose: Bayesian Deep Network for Accurate Camera 6-DoF Pose Estimation from RGB Images
CN112733817B (en) Method for measuring precision of point cloud layer in high-precision map and electronic equipment
CN116663329B (en) Automatic driving simulation test scene generation method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination