CN111462029B - Visual point cloud and high-precision map fusion method and device and electronic equipment - Google Patents


Info

Publication number: CN111462029B
Authority: CN (China)
Prior art keywords: point cloud; visual point cloud; precision map; transformation matrix; similarity transformation
Legal status: Active
Application number: CN202010228591.3A
Other languages: Chinese (zh)
Other versions: CN111462029A (en)
Inventors: 蔡育展; 郑超; 闫超; 张瀚天
Current Assignee: Apollo Intelligent Technology Beijing Co Ltd
Original Assignee: Apollo Intelligent Technology Beijing Co Ltd
Application filed by Apollo Intelligent Technology Beijing Co Ltd
Priority to CN202010228591.3A
Publication of CN111462029A
Application granted
Publication of CN111462029B

Classifications

    • G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 17/05 - Three-dimensional [3D] modelling: geographic models
    • G06T 17/20 - Three-dimensional [3D] modelling: finite element generation, e.g. wire-frame surface description, tessellation
    • G06T 2207/10028 - Image acquisition modality: range image; depth image; 3D point clouds
    • G06T 2207/20221 - Image combination: image fusion; image merging

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Remote Sensing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a method and a device for fusing a visual point cloud with a high-precision map, and an electronic device, and relates to the technical field of high-precision maps. The method comprises the following steps: calculating a first similarity transformation matrix according to the GPS track and the camera pose of each image of an image sequence; mapping a first visual point cloud into a high-precision map coordinate system according to the first similarity transformation matrix to obtain a second visual point cloud, wherein the first visual point cloud is obtained by performing three-dimensional reconstruction on the image sequence; determining element matching pairs between the high-precision map and the second visual point cloud; and determining the coordinates of the second visual point cloud in the high-precision map according to the coordinates of the element matching pairs, and fusing the second visual point cloud with the high-precision map. The method and the device improve both the accuracy and the overall effect of fusing the visual point cloud with the high-precision map.

Description

Visual point cloud and high-precision map fusion method and device and electronic equipment
Technical Field
The present application relates to image processing technology, in particular to the technical field of high-precision maps, and more specifically to a method and a device for fusing a visual point cloud with a high-precision map, and to an electronic device.
Background
At present, image-based three-dimensional reconstruction can be used in high-precision map applications: an image acquisition device captures an image sequence, the image sequence is turned into a visual point cloud through three-dimensional modelling, and the visual point cloud can be fused with a high-precision map to update the map. In many application scenarios, however, the camera that acquires the image sequence may have no calibrated intrinsic parameters or may be a monocular camera, and the GPS reference coordinates of the image sequence may not be accurate enough. These factors make fusing the visual point cloud with the high-precision map difficult, so the fusion effect is poor.
Disclosure of Invention
The application provides a method and a device for fusing a visual point cloud with a high-precision map, and an electronic device, which are used to solve the above technical problems in fusing a visual point cloud with a high-precision map.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, the present application provides a method for fusing a visual point cloud and a high-precision map, where the method includes:
calculating a first similarity transformation matrix according to the GPS track and the camera pose of each image of the image sequence;
mapping a first visual point cloud into a high-precision map coordinate system according to the first similarity transformation matrix to obtain a second visual point cloud, wherein the first visual point cloud is obtained by performing three-dimensional reconstruction on the image sequence;
determining an element matching pair of the high-precision map and the second visual point cloud;
and determining the coordinates of the second visual point cloud in the high-precision map according to the coordinates of the element matching pair, and fusing the second visual point cloud and the high-precision map.
By adopting these technical means, the coordinates of the visual point cloud in the high-precision map are first determined preliminarily from the similarity transformation matrix and then refined from the coordinates of the element matching pairs, so that the fusion of the visual point cloud and the high-precision map is realized. This improves both the accuracy and the overall effect of fusing the visual point cloud with the high-precision map.
Optionally, mapping the first visual point cloud into a high-precision map coordinate system according to the first similarity transformation matrix to obtain a second visual point cloud, including:
and mapping the first visual point cloud to a high-precision map coordinate system according to the first similarity transformation matrix, and correcting the ground angle and the ground height of the mapped visual point cloud to obtain a second visual point cloud.
In this embodiment, correcting the ground angle of the visual point cloud ensures that its top-down (bird's-eye) appearance in the high-precision map basically meets the requirement, providing a better basis for the subsequent fusion of the visual point cloud with the high-precision map; correcting the ground height ensures that the ground height of the visual point cloud mapped into the high-precision map basically meets the requirement and effectively avoids the mapped point cloud ending up upside down, again providing a better basis for the subsequent fusion.
Optionally, the determining an element matching pair of the high-precision map and the second visual point cloud includes:
selecting a second element from the high-precision map according to the three-dimensional coordinates of the first element in the second visual point cloud, wherein the first element is an element in the second visual point cloud, and the second element is an element similar to the first element;
determining a matching element of the first element from the second element;
and obtaining an element matching pair of the high-precision map and the second visual point cloud according to a maximum voting principle.
Through this process, the element matching pairs between the high-precision map and the second visual point cloud can be determined accurately and quickly, providing a better basis for the subsequent fusion of the visual point cloud with the high-precision map.
Optionally, the determining, according to the coordinates of the element matching pair, the coordinates of the second visual point cloud in the high-precision map, and fusing the second visual point cloud and the high-precision map includes:
calculating a second similarity transformation matrix according to the coordinates of the element matching pairs;
and mapping the second visual point cloud to the high-precision map coordinate system according to the second similarity transformation matrix, and fusing the mapped visual point cloud and the high-precision map.
In this embodiment, the second similarity transformation matrix calculated from the coordinates of the element matching pairs can eliminate the absolute error between the second visual point cloud and the high-precision map. In this way, the coordinates of the mapped visual point cloud in the high-precision map can be regarded as high-precision coordinates that satisfy the fusion condition, so the mapped visual point cloud and the high-precision map can be fused.
Optionally, the first similarity transformation matrix includes a first transformation matrix and a first scale transformation coefficient;
the second similarity transformation matrix includes a second transformation matrix and a second scale transformation coefficient.
In this embodiment, the scale transformation coefficient contained in the similarity transformation matrix addresses the lack of scale information in the visual point cloud caused by using a monocular camera, and thus provides a basis for rescaling the visual point cloud when it is mapped into the high-precision map.
Optionally, the image sequence is an image sequence acquired in a crowdsourcing manner.
Collecting the image sequence by crowdsourcing reduces the difficulty and cost of acquiring the images, which in turn reduces the difficulty and cost of fusing the visual point cloud with the high-precision map.
In a second aspect, the present application provides a device for fusing a visual point cloud and a high-precision map, including:
the calculation module is used for calculating a first similarity transformation matrix according to the GPS track and the camera pose of each image of the image sequence;
the mapping module is used for mapping the first visual point cloud into a high-precision map coordinate system according to the first similarity transformation matrix so as to obtain a second visual point cloud, and the first visual point cloud is obtained by performing three-dimensional reconstruction on the image sequence;
the determining module is used for determining an element matching pair of the high-precision map and the second visual point cloud;
and the fusion module is used for determining the coordinate of the second visual point cloud in the high-precision map according to the coordinate of the element matching pair and fusing the second visual point cloud and the high-precision map.
Optionally, the mapping module is specifically configured to:
and mapping the first visual point cloud to a high-precision map coordinate system according to the first similarity transformation matrix, and correcting the ground angle and the ground height of the mapped visual point cloud to obtain a second visual point cloud.
Optionally, the determining module includes:
the selection submodule is used for selecting a second element from the high-precision map according to the three-dimensional coordinates of a first element in the second visual point cloud, wherein the first element is an element in the second visual point cloud, and the second element is an element similar to the first element;
a determining submodule for determining a matching element of the first element from the second element;
and the acquisition submodule is used for acquiring an element matching pair of the high-precision map and the second visual point cloud according to a maximum voting principle.
Optionally, the fusion module includes:
the calculation submodule is used for calculating a second similarity transformation matrix according to the coordinates of the element matching pair;
and the fusion sub-module is used for mapping the second visual point cloud into the high-precision map coordinate system according to the second similarity transformation matrix and fusing the mapped visual point cloud and the high-precision map.
Optionally, the first similarity transformation matrix includes a first transformation matrix and a first scale transformation coefficient;
the second similarity transformation matrix includes a second transformation matrix and a second scale transformation coefficient.
Optionally, the image sequence is an image sequence acquired in a crowdsourcing manner.
In a third aspect, the present application provides an electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform any of the methods of the first aspect.
In a fourth aspect, the present application provides a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of the first aspects.
One embodiment in the present application has the following advantages or benefits:
according to the method and the device, a similarity transformation matrix between the visual point cloud and the high-precision map is obtained from the GPS track and the camera pose of each image of the image sequence, and element matching pairs between the high-precision map and the visual point cloud are determined. The coordinates of the visual point cloud in the high-precision map can therefore be determined preliminarily from the similarity transformation matrix and then refined from the coordinates of the element matching pairs, realizing the fusion of the visual point cloud and the high-precision map. These technical means improve both the accuracy and the overall effect of fusing the visual point cloud with the high-precision map.
Other effects of the above optional implementations will be described below with reference to specific embodiments.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. In the drawings:
fig. 1 is a schematic flowchart of a fusion method of a visual point cloud and a high-precision map provided in an embodiment of the present application;
fig. 2 is an exemplary diagram of an overall technical framework of a fusion algorithm of a visual point cloud and a high-precision map provided in an embodiment of the present application;
fig. 3 is a schematic structural diagram of a fusion apparatus of a visual point cloud and a high-precision map provided in an embodiment of the present application;
fig. 4 is a block diagram of an electronic device for implementing a method for fusing a visual point cloud and a high-precision map according to an embodiment of the present disclosure.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of those embodiments to aid understanding; these details are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present application. Descriptions of well-known functions and constructions are likewise omitted for clarity and conciseness.
The application provides a method for fusing a visual point cloud with a high-precision map.
As shown in fig. 1, the method for fusing the visual point cloud and the high-precision map comprises the following steps:
step 101: a first similarity transformation matrix is calculated from the GPS trajectory and the camera pose of each image of the image sequence.
Before this step, a sequence of images captured by the camera may be acquired in advance; the sequence comprises a number of images. The GPS (Global Positioning System) trajectory of each image in the sequence and the camera pose of each image may also be acquired in advance.
When the images are collected, the GPS track corresponding to each image can be recorded by a sensor. The camera pose of each image can be obtained by performing three-dimensional reconstruction on the image sequence, which also yields the visual point cloud corresponding to the image sequence, i.e. the first visual point cloud in step 102. The first visual point cloud and the camera poses obtained from the three-dimensional reconstruction are collectively referred to as a sparse model.
Because GPS track data has small relative error and good stability, current high-precision maps are usually generated based on the GPS tracks of images. Calculating the first similarity transformation matrix from the GPS track and the camera pose of each image therefore amounts to calculating a similarity transformation between the high-precision map and the visual point cloud. That is, the first similarity transformation matrix can be regarded as a similarity transformation matrix between the high-precision map and the visual point cloud, and more precisely as an initial (or rough) one.
The high-precision map can be obtained by querying with the GPS track of the image sequence.
In the present application, the first similarity transformation matrix may be solved by using a GPS calibration algorithm based on Sim3 transformation.
The basic principle of the Sim3 transformation is to find, in the least-squares sense, the Sim3 similarity transformation matrix that minimizes the error over the matching pairs; the computation typically involves constructing a matrix from the point pairs and performing a Singular Value Decomposition (SVD) on it.
The GPS calibration algorithm, which may also be referred to as the GPS_Aligner algorithm, is a RANSAC (Random Sample Consensus) algorithm. Its basic process is as follows: randomly select m GPS track / camera pose matching pairs; solve a similarity transformation matrix from these m matching pairs; verify the solved similarity transformation matrix against the remaining matching pairs, and call a matching pair an inlier if its error is smaller than a certain threshold; repeat this process until the number of inliers exceeds a certain fraction of the total, at which point the solved similarity transformation matrix is considered accurate; finally, output the solved similarity transformation matrix.
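For illustration only, the following is a minimal numpy sketch of such a RANSAC-style Sim3 alignment between camera centers and GPS positions. The function names (solve_sim3, gps_align), the sample size m, the inlier threshold and the stopping ratio are assumptions made for this sketch; they are not taken from the patent.

```python
import numpy as np

def solve_sim3(src, dst):
    """Closed-form least-squares Sim3 (Umeyama): find s, R, t with dst ≈ s * R @ src + t.
    src, dst: (N, 3) arrays of matched 3D points (e.g. camera centers and GPS positions)."""
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    x, y = src - mu_src, dst - mu_dst
    U, D, Vt = np.linalg.svd(y.T @ x / len(src))   # SVD of the 3x3 cross-covariance
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:   # guard against a reflection
        S[2, 2] = -1.0
    R = U @ S @ Vt
    s = (D * np.diag(S)).sum() / ((x ** 2).sum() / len(src))
    t = mu_dst - s * R @ mu_src
    return s, R, t

def gps_align(cam_pos, gps_pos, m=4, thresh=2.0, min_inlier_ratio=0.6, iters=500, seed=0):
    """RANSAC wrapper: solve Sim3 on m random matching pairs, count the pairs whose
    residual is below `thresh`, and stop once enough of them are inliers."""
    rng = np.random.default_rng(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        idx = rng.choice(len(cam_pos), size=m, replace=False)
        s, R, t = solve_sim3(cam_pos[idx], gps_pos[idx])
        err = np.linalg.norm((s * (R @ cam_pos.T)).T + t - gps_pos, axis=1)
        n_inliers = int((err < thresh).sum())
        if n_inliers > best_inliers:
            best, best_inliers = (s, R, t), n_inliers
        if n_inliers >= min_inlier_ratio * len(cam_pos):
            break
    return best, best_inliers
```

In this sketch, the (s, R, t) triple returned by gps_align plays the role of the first similarity transformation matrix described above.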
Optionally, the first similarity transformation matrix includes a first transformation matrix and a first scaling coefficient.
Given that the camera that acquired the image sequence may be a monocular camera, the first visual point cloud resulting from the three-dimensional reconstruction of the image sequence may lack scale information. In this case, the first similarity transformation matrix may include a first transformation matrix and a first scale transformation coefficient. Here, the first scale transformation coefficient may be understood as the scale relationship between the high-precision map and the visual point cloud, and more precisely as an initial (or rough) scale relationship between the two.
Therefore, in this embodiment, the scale transformation coefficient contained in the similarity transformation matrix addresses the lack of scale information in the visual point cloud caused by using a monocular camera, and thus provides a basis for rescaling the visual point cloud when it is mapped into the high-precision map.
Step 102: and mapping the first visual point cloud into a high-precision map coordinate system according to the first similarity transformation matrix to obtain a second visual point cloud.
The first visual point cloud is obtained by performing three-dimensional reconstruction on the image sequence, and the second visual point cloud is obtained by performing similarity transformation on the first visual point cloud according to the first similarity transformation matrix.
In this step, a similarity transformation is applied to the first visual point cloud according to the first similarity transformation matrix, yielding a second visual point cloud mapped into the high-precision map coordinate system and thereby achieving a preliminary transformation of the first visual point cloud into the high-precision map.
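As a minimal sketch (assuming the scale s, rotation R and translation t of the first similarity transformation are already available, for instance from an alignment step such as the one sketched above), this mapping reduces to applying the transformation to every point:

```python
import numpy as np

def map_cloud(points, s, R, t):
    """Map an (N, 3) visual point cloud into the map frame: p' = s * R @ p + t."""
    return (s * (R @ points.T)).T + t

# usage sketch: second_cloud = map_cloud(first_cloud, s, R, t)
```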
Step 103: and determining an element matching pair of the high-precision map and the second visual point cloud.
Although the relative error of the GPS track data is small, the data may carry a non-negligible absolute error. Consequently, the second visual point cloud, i.e. the first visual point cloud mapped into the high-precision map coordinate system through the first similarity transformation matrix, may also carry an absolute error, that is, an absolute error exists between the second visual point cloud and the high-precision map. In this case the second visual point cloud cannot be fused with the high-precision map directly; a direct fusion would give a poor result.
Here, the relative error of the GPS track data may be understood as an error of the relative coordinates (or relative positions) between the elements represented by the GPS track; the absolute error of the GPS track data can be understood as an error between the coordinates (or position) of the element represented by the GPS track and the actual coordinates of the element in the physical world.
In order to eliminate the absolute error between the second visual point cloud and the high-precision map, the application provides a technical scheme that further refines the coordinates of the second visual point cloud in the high-precision map based on the element matching pairs.
In this step, determining an element matching pair of the high-precision map and the second visual point cloud, which may be understood as determining a one-to-one matching relationship between some elements in the high-precision map and some elements in the second visual point cloud, where each element matching pair includes two elements, one of which is located in the high-precision map and the other of which is located in the second visual point cloud. For example, the element a, the element b and the element c are extracted from the second visual point cloud, and if the element d, the element e and the element f in the high-precision map are matched with the element a, the element b and the element c, a one-to-one matching relationship among the elements needs to be determined. Assuming that the element a matches the element d, the element b matches the element f, and the element c matches the element e, the element a and the element d are an element matching pair, the element b and the element f are an element matching pair, and the element c and the element e are an element matching pair. Here, the elements in the present application may include elements such as a signboard, a lane line, a building, or a pole, and the elements in the present application may be referred to as semantic elements.
By determining the element matching pairs, the absolute error between the second visual point cloud and the high-precision map is reflected in the coordinate relationship between the two elements of each matching pair.
Step 104: and determining the coordinates of the second visual point cloud in the high-precision map according to the coordinates of the element matching pair, and fusing the second visual point cloud and the high-precision map.
In this step, the absolute error between the second visual point cloud and the high-precision map can be determined from the coordinates of the element matching pairs, so the coordinates of the second visual point cloud in the high-precision map can be refined. These refined coordinates meet the fusion requirement, and the second visual point cloud and the high-precision map can then be fused.
According to the method and the device, a similarity transformation matrix between the visual point cloud and the high-precision map is obtained from the GPS track and the camera pose of each image of the image sequence, and element matching pairs between the high-precision map and the visual point cloud are determined. The coordinates of the visual point cloud in the high-precision map are therefore determined preliminarily from the similarity transformation matrix and then refined from the coordinates of the element matching pairs, realizing the fusion of the visual point cloud and the high-precision map. These technical means improve both the accuracy and the overall effect of fusing the visual point cloud with the high-precision map.
Optionally, the mapping the first visual point cloud to a high-precision map coordinate system according to the first similarity transformation matrix to obtain a second visual point cloud, including:
and mapping the first visual point cloud to a high-precision map coordinate system according to the first similarity transformation matrix, and correcting the ground angle and the ground height of the mapped visual point cloud to obtain a second visual point cloud.
This embodiment provides a further technical solution for obtaining the second visual point cloud from the first visual point cloud.
In this embodiment, the ground angle of the visual point cloud may be corrected based on plane fitting so that the visual point cloud is as perpendicular to the ground as possible, or so that its perpendicularity with respect to the ground reaches a preset threshold. Correcting the ground angle ensures that the top-down (bird's-eye) appearance of the visual point cloud in the high-precision map basically meets the requirement, providing a better basis for the subsequent fusion of the visual point cloud with the high-precision map.
In this embodiment, the ground height of the visual point cloud may be corrected according to the ground height in the high-precision map and the ground height of the visual point cloud itself. Correcting the ground height ensures that the ground height of the visual point cloud mapped into the high-precision map basically meets the requirement, effectively avoids the mapped point cloud ending up upside down, and provides a better basis for the subsequent fusion of the visual point cloud with the high-precision map.
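One possible realization of the plane-fitting correction described in this embodiment is sketched below. The patent does not specify how ground points are selected or what thresholds are used, so the helper names (fit_ground_plane, correct_ground), the assumption that ground points of the cloud can be pre-selected, and the target ground height are illustrative only.

```python
import numpy as np

def fit_ground_plane(ground_pts):
    """Least-squares plane through (N, 3) ground points via SVD; returns (normal, centroid)."""
    centroid = ground_pts.mean(axis=0)
    _, _, Vt = np.linalg.svd(ground_pts - centroid)
    normal = Vt[-1]                      # direction of smallest variance
    if normal[2] < 0:                    # keep the normal pointing upwards
        normal = -normal
    return normal, centroid

def rotation_between(a, b):
    """Smallest rotation taking unit vector a onto unit vector b (Rodrigues form);
    the degenerate anti-parallel case is ignored in this sketch."""
    v, c = np.cross(a, b), float(np.dot(a, b))
    if np.linalg.norm(v) < 1e-12:
        return np.eye(3)
    K = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + K + K @ K / (1.0 + c)

def correct_ground(cloud, ground_pts, map_ground_height=0.0):
    """Rotate the cloud so its fitted ground normal matches the map's up axis (ground
    angle correction), then shift it vertically so the fitted ground sits at the map's
    ground height (ground height correction, which also prevents an upside-down cloud)."""
    normal, centroid = fit_ground_plane(ground_pts)
    R = rotation_between(normal, np.array([0.0, 0.0, 1.0]))
    rotated = (R @ (cloud - centroid).T).T + centroid
    rotated_ground_z = (R @ (ground_pts - centroid).T).T[:, 2].mean() + centroid[2]
    rotated[:, 2] += map_ground_height - rotated_ground_z
    return rotated
```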
Optionally, the determining an element matching pair of the high-precision map and the second visual point cloud includes:
selecting a second element from the high-precision map according to the three-dimensional coordinates of the first element in the second visual point cloud, wherein the first element is an element in the second visual point cloud, and the second element is an element similar to the first element;
determining a matching element of the first element from the second element;
and obtaining an element matching pair of the high-precision map and the second visual point cloud according to a maximum voting principle.
The embodiment provides a technical scheme for determining the element matching pair of the high-precision map and the second visual point cloud.
In this embodiment, first elements such as signboards, poles, and lane lines may be determined in advance from the second visual point cloud. A first element determined from the second visual point cloud should be an element that can also be present in the high-precision map: if an element exists in both the second visual point cloud and the high-precision map, it may be regarded as a first element; if an element in the second visual point cloud is new and does not exist in the high-precision map, it is not suitable as a first element. The first elements can be chosen flexibly from the second visual point cloud as required, for example according to how recognizable an element is, or according to the coordinates of the elements in the visual point cloud.
After the first elements are determined, their three-dimensional coordinates in the second visual point cloud may be calculated; for example, the two-dimensional coordinates of a first element may be extracted from the images and then used to compute its three-dimensional coordinates in the second visual point cloud.
After the three-dimensional coordinates of the first elements in the second visual point cloud are obtained, second elements similar to the first elements can be selected from the high-precision map. Note that there may be more second elements than first elements, i.e. the second elements may include elements that are similar to, but do not actually match, the first elements. For this reason, the matching element of each first element is further determined from the second elements, for example through a preliminary sliding-window matching. The matching element of a first element is simply the element that matches it.
After the matching elements of the first elements are determined from the high-precision map, the one-to-one element matching pairs between the first elements and their matching elements can be obtained according to the maximum voting principle.
Through this process, the element matching pairs between the high-precision map and the second visual point cloud can be determined accurately and quickly, providing a better basis for the subsequent fusion of the visual point cloud with the high-precision map.
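A rough sketch of such a windowed matching step followed by a maximum-vote filter is given below. The window size, the use of element centers as coordinates, the nearest-neighbour criterion and the offset binning are assumptions made for the sketch; the patent does not fix these details.

```python
import numpy as np
from collections import Counter

def candidate_matches(cloud_elems, map_elems, window=30.0):
    """For each first element (type, center) from the second visual point cloud, look at
    map elements of the same type within a search window and keep the nearest one.
    Returns tuples (cloud_index, map_index, offset_vector)."""
    candidates = []
    for i, (kind, c) in enumerate(cloud_elems):
        best_j, best_d = None, window
        for j, (kind_m, m) in enumerate(map_elems):
            if kind_m != kind:
                continue
            d = float(np.linalg.norm(m - c))
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            candidates.append((i, best_j, map_elems[best_j][1] - c))
    return candidates

def max_vote_pairs(candidates, cell=1.0):
    """Maximum-vote filter: keep the candidate pairs whose cloud-to-map offsets fall into
    the most popular offset cell, i.e. the largest mutually consistent set of matches."""
    key = lambda off: tuple(np.round(off / cell).astype(int))
    bins = Counter(key(off) for _, _, off in candidates)
    if not bins:
        return []
    best_bin = bins.most_common(1)[0][0]
    return [(i, j) for i, j, off in candidates if key(off) == best_bin]
```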
Optionally, the determining, according to the coordinates of the element matching pair, the coordinates of the second visual point cloud in the high-precision map, and fusing the second visual point cloud and the high-precision map includes:
calculating a second similarity transformation matrix according to the coordinates of the element matching pairs;
and mapping the second visual point cloud into the high-precision map coordinate system according to the second similarity transformation matrix, and fusing the mapped visual point cloud and the high-precision map.
The embodiment provides a technical scheme of determining the coordinates of the second visual point cloud in the high-precision map according to the coordinates of the element matching pairs.
In this embodiment, since the coordinates of the element matching pair can reflect an absolute error existing between the second visual point cloud and the high-precision map, a second similarity transformation matrix capable of eliminating the absolute error can be calculated from the coordinates of the element matching pair. Here, the second similarity transformation matrix may also be referred to as a registration matrix.
After the second similarity transformation matrix is obtained, the second visual point cloud can be mapped into the high-precision map coordinate system. The coordinates of the mapped visual point cloud in the high-precision map can then be regarded as high-precision coordinates that satisfy the fusion condition, so the mapped visual point cloud and the high-precision map can be fused.
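Continuing the sketch, the second similarity transformation (registration matrix) can be solved from the coordinates of the element matching pairs with the same closed-form Sim3 fit used for the GPS calibration; the solver is repeated here only so the snippet stays self-contained, and the final merge is shown as a plain concatenation, which is an assumption, since the patent does not describe the merge operation itself.

```python
import numpy as np

def solve_sim3(src, dst):
    """Closed-form least-squares Sim3 fit dst ≈ s * R @ src + t (Umeyama, as above)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    x, y = src - mu_s, dst - mu_d
    U, D, Vt = np.linalg.svd(y.T @ x / len(src))
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0
    R = U @ S @ Vt
    s = (D * np.diag(S)).sum() / ((x ** 2).sum() / len(src))
    return s, R, mu_d - s * R @ mu_s

def register_and_fuse(second_cloud, map_points, cloud_elem_coords, map_elem_coords):
    """Solve the second similarity transformation from the matched element coordinates,
    map the second visual point cloud with it, and merge it with the map points."""
    s, R, t = solve_sim3(cloud_elem_coords, map_elem_coords)
    registered = (s * (R @ second_cloud.T)).T + t
    return np.vstack([map_points, registered])   # illustrative merge only
```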
Optionally, the second similarity transformation matrix includes a second transformation matrix and a second scale transformation coefficient.
In this embodiment, the scale transformation coefficient contained in the similarity transformation matrix addresses the lack of scale information in the visual point cloud caused by using a monocular camera, and thus provides a basis for rescaling the visual point cloud when it is mapped into the high-precision map.
Optionally, the image sequence is an image sequence acquired in a crowdsourcing manner.
Collecting the image sequence by crowdsourcing reduces the difficulty and cost of acquiring the images, which in turn reduces the difficulty and cost of fusing the visual point cloud with the high-precision map and makes crowdsourced updating applicable to the high-precision map.
In order to more intuitively understand the fusion method of the visual point cloud and the high-precision map, fig. 2 shows an example of the overall technical framework for fusion of the visual point cloud and the high-precision map.
As shown in fig. 2, the inputs are an image sequence, the GPS track of the image sequence, a sparse model of the image sequence (comprising a visual point cloud and the camera poses of the images), and a high-precision map obtained by querying with the GPS track of the image sequence. The algorithms used to fuse the visual point cloud with the high-precision map can include a GPS calibration algorithm, a ground angle correction algorithm, a ground height correction algorithm, a semantic element matching algorithm, a registration matrix solving algorithm, and the like. The GPS track and the sparse model are processed in turn by the GPS calibration, ground angle correction, and ground height correction algorithms to obtain and output the second visual point cloud. The high-precision map and the second visual point cloud are then processed in turn by the semantic element matching and registration matrix solving algorithms to obtain and output a registration matrix. Finally, the second visual point cloud is mapped into the high-precision map according to the registration matrix, realizing the fusion of the second visual point cloud and the high-precision map.
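For orientation only, the data flow of fig. 2 can be written as the following thin driver. The four stage callables are passed in as arguments because the patent does not pin down their implementations; any of the sketches above (or other implementations) could be plugged in, so this is glue code under stated assumptions rather than the patent's own pipeline.

```python
def fuse_pipeline(gps_track, sparse_model, hd_map,
                  gps_align, correct_ground, match_elements, solve_registration):
    """Glue for the fig. 2 flow; every stage is an injected callable."""
    cam_pos, first_cloud = sparse_model                 # camera centers + visual point cloud
    s, R, t = gps_align(cam_pos, gps_track)             # GPS calibration (first Sim3)
    mapped = (s * (R @ first_cloud.T)).T + t            # map cloud into the HD-map frame
    second_cloud = correct_ground(mapped)               # ground angle / height correction
    pairs = match_elements(second_cloud, hd_map)        # semantic element matching
    s2, R2, t2 = solve_registration(pairs)              # registration matrix (second Sim3)
    return (s2 * (R2 @ second_cloud.T)).T + t2          # second cloud in map coordinates
```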
It should be noted that, in the present application, various optional embodiments in the method for fusing a visual point cloud and a high-precision map may be implemented in combination with each other or separately, and the present application is not limited thereto.
The above-described embodiments of the present application have the following advantages or beneficial effects:
according to the method and the device, the similarity transformation matrix between the visual point cloud and the high-precision map is obtained from the GPS track and the camera pose of each image of the image sequence, and the element matching pairs between the high-precision map and the visual point cloud are determined, so that the coordinates of the visual point cloud in the high-precision map are determined preliminarily from the similarity transformation matrix and then refined from the coordinates of the element matching pairs, realizing the fusion of the visual point cloud and the high-precision map. These technical means improve both the accuracy and the overall effect of fusing the visual point cloud with the high-precision map.
The application also provides a device for fusing the visual point cloud and the high-precision map, as shown in fig. 3, the device 200 for fusing the visual point cloud and the high-precision map comprises:
a calculating module 201, configured to calculate a first similarity transformation matrix according to a GPS track and a camera pose of each image of the image sequence;
the mapping module 202 is configured to map a first visual point cloud into a high-precision map coordinate system according to the first similarity transformation matrix to obtain a second visual point cloud, where the first visual point cloud is obtained by performing three-dimensional reconstruction on the image sequence;
a determining module 203, configured to determine an element matching pair of the high-precision map and the second visual point cloud;
and the fusion module 204 is configured to determine the coordinates of the second visual point cloud in the high-precision map according to the coordinates of the element matching pair, and fuse the second visual point cloud and the high-precision map.
Optionally, the mapping module 202 is specifically configured to:
and mapping the first visual point cloud to a high-precision map coordinate system according to the first similarity transformation matrix, and correcting the ground angle and the ground height of the mapped visual point cloud to obtain a second visual point cloud.
Optionally, the determining module 203 includes:
the selection submodule is used for selecting a second element from the high-precision map according to the three-dimensional coordinates of a first element in the second visual point cloud, wherein the first element is an element in the second visual point cloud, and the second element is an element similar to the first element;
a determining submodule for determining a matching element of the first element from the second element;
and the acquisition submodule is used for acquiring an element matching pair of the high-precision map and the second visual point cloud according to a maximum voting principle.
Optionally, the fusion module 204 includes:
the calculation submodule is used for calculating a second similarity transformation matrix according to the coordinates of the element matching pair;
and the fusion submodule is used for mapping the second visual point cloud to the high-precision map coordinate system according to the second similarity transformation matrix and fusing the mapped visual point cloud and the high-precision map.
Optionally, the first similarity transformation matrix includes a first transformation matrix and a first scaling coefficient;
the second similar transform matrix includes a second transform matrix and second scale transform coefficients.
Optionally, the image sequence is an image sequence acquired in a crowdsourcing manner.
The visual point cloud and high-precision map fusion device 200 provided by the application can perform each process performed in the above embodiments of the visual point cloud and high-precision map fusion method, and can achieve the same beneficial effects; to avoid repetition, the details are not described again here.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
Fig. 4 is a block diagram of an electronic device for fusing a visual point cloud and a high-precision map according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the applications described and/or claimed herein.
As shown in fig. 4, the electronic apparatus includes: one or more processors 501, memory 502, and interfaces for connecting the various components, including high-speed interfaces and low-speed interfaces. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing some of the necessary operations (e.g., as an array of servers, a group of blade servers, or a multi-processor system). Fig. 4 illustrates an example of a processor 501.
The memory 502 is a non-transitory computer-readable storage medium provided herein. The memory stores instructions executable by the at least one processor to cause the at least one processor to perform the visual point cloud and high-precision map fusion method provided herein. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to perform the visual point cloud and high-precision map fusion method provided herein.
The memory 502, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules (e.g., the calculation module 201, the mapping module 202, the determining module 203, and the fusion module 204 shown in fig. 3) corresponding to the visual point cloud and high-precision map fusion method in the embodiments of the present application. The processor 501 executes various functional applications and data processing of the visual point cloud and high-precision map fusion apparatus by running the non-transitory software programs, instructions and modules stored in the memory 502, thereby implementing the visual point cloud and high-precision map fusion method in the above method embodiments.
The memory 502 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created from use of the electronic device of the visual point cloud and high-precision map fusion method, and the like. Further, the memory 502 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 502 may optionally include a memory remotely located from the processor 501, which may be connected to the electronic device of the visual point cloud and high precision map fusion method via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the fusion method of the visual point cloud and the high-precision map may further include: an input device 503 and an output device 504. The processor 501, the memory 502, the input device 503 and the output device 504 may be connected by a bus or other means, and fig. 4 illustrates the connection by a bus as an example.
The input device 503 may receive input numeric or character information and generate key signal input related to user settings and function control of the electronic device of the visual point cloud and high-precision map fusion method, such as a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, a joystick, or the like. The output devices 504 may include a display device, auxiliary lighting devices (e.g., LEDs), and haptic feedback devices (e.g., vibrating motors), among others. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application specific ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical scheme of the embodiment of the application, the coordinates of the visual point cloud in the high-precision map can be preliminarily determined according to the similarity transformation matrix, and then the coordinates of the visual point cloud in the high-precision map can be further accurately determined according to the coordinates of the element matching pairs, so that the fusion of the visual point cloud and the high-precision map is realized. By adopting the technical means, the fusion accuracy of the visual point cloud and the high-precision map can be improved, and the fusion effect of the visual point cloud and the high-precision map is improved.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, and the present application is not limited in this respect as long as the desired results of the technical solutions disclosed herein can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (14)

1. A method for fusing a visual point cloud and a high-precision map is characterized by comprising the following steps:
calculating a first similarity transformation matrix according to the GPS track and the camera pose of each image of the image sequence;
mapping a first visual point cloud into a high-precision map coordinate system according to the first similarity transformation matrix to obtain a second visual point cloud, wherein the first visual point cloud is obtained by performing three-dimensional reconstruction on the image sequence;
determining an element matching pair of the high-precision map and the second visual point cloud;
and determining the coordinates of the second visual point cloud in the high-precision map according to the coordinates of the element matching pair, and fusing the second visual point cloud and the high-precision map.
2. The method of claim 1, wherein mapping the first visual point cloud into a high-precision map coordinate system according to the first similarity transformation matrix to obtain a second visual point cloud comprises:
and mapping the first visual point cloud to a high-precision map coordinate system according to the first similarity transformation matrix, and correcting the ground angle and the ground height of the mapped visual point cloud to obtain a second visual point cloud.
3. The method of claim 1, wherein determining the matching pairs of elements of the high precision map and the second visual point cloud comprises:
selecting a second element from the high-precision map according to the three-dimensional coordinates of the first element in the second visual point cloud, wherein the first element is an element in the second visual point cloud, and the second element is an element similar to the first element;
determining a matching element of the first element from the second element;
and obtaining an element matching pair of the high-precision map and the second visual point cloud according to a maximum voting principle.
4. The method of claim 1, wherein determining coordinates of the second visual point cloud in the high-precision map according to the coordinates of the element matching pairs and fusing the second visual point cloud with the high-precision map comprises:
calculating a second similarity transformation matrix according to the coordinates of the element matching pairs;
and mapping the second visual point cloud to the high-precision map coordinate system according to the second similarity transformation matrix, and fusing the mapped visual point cloud and the high-precision map.
5. The method of claim 4, wherein the first similarity transformation matrix comprises a first transformation matrix and a first scale transformation coefficient;
the second similarity transformation matrix comprises a second transformation matrix and a second scale transformation coefficient.
6. The method of claim 1, wherein the image sequence is a sequence of images acquired by crowd sourcing.
7. A visual point cloud and high-precision map fusion device is characterized by comprising:
the calculation module is used for calculating a first similarity transformation matrix according to the GPS track and the camera pose of each image of the image sequence;
the mapping module is used for mapping the first visual point cloud into a high-precision map coordinate system according to the first similarity transformation matrix to obtain a second visual point cloud, and the first visual point cloud is obtained by performing three-dimensional reconstruction on the image sequence;
the determining module is used for determining an element matching pair of the high-precision map and the second visual point cloud;
and the fusion module is used for determining the coordinates of the second visual point cloud in the high-precision map according to the coordinates of the element matching pair and fusing the second visual point cloud and the high-precision map.
8. The apparatus of claim 7, wherein the mapping module is specifically configured to:
and mapping the first visual point cloud to a high-precision map coordinate system according to the first similarity transformation matrix, and correcting the ground angle and the ground height of the mapped visual point cloud to obtain a second visual point cloud.
9. The apparatus of claim 7, wherein the determining module comprises:
a selection submodule configured to select a second element from the high-precision map according to the three-dimensional coordinates of a first element in the second visual point cloud, wherein the first element is an element in the second visual point cloud, and the second element is an element of the high-precision map similar to the first element;
a determining submodule configured to determine a matching element of the first element from the second element; and
an acquisition submodule configured to obtain an element matching pair of the high-precision map and the second visual point cloud according to a maximum voting principle.
10. The apparatus of claim 7, wherein the fusion module comprises:
a calculation submodule configured to calculate a second similarity transformation matrix according to the coordinates of the element matching pair; and
a fusion submodule configured to map the second visual point cloud into the high-precision map coordinate system according to the second similarity transformation matrix and to fuse the mapped visual point cloud with the high-precision map.
11. The apparatus of claim 10, wherein the first similarity transformation matrix comprises a first transformation matrix and a first scaling coefficient; and
the second similarity transformation matrix comprises a second transformation matrix and a second scaling coefficient.
12. The apparatus of claim 7, wherein the image sequence is a sequence of images acquired by crowdsourcing.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 6.
14. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1 to 6.
CN202010228591.3A 2020-03-27 2020-03-27 Visual point cloud and high-precision map fusion method and device and electronic equipment Active CN111462029B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010228591.3A CN111462029B (en) 2020-03-27 2020-03-27 Visual point cloud and high-precision map fusion method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010228591.3A CN111462029B (en) 2020-03-27 2020-03-27 Visual point cloud and high-precision map fusion method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN111462029A CN111462029A (en) 2020-07-28
CN111462029B (en) 2023-03-03

Family

ID=71683317

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010228591.3A Active CN111462029B (en) 2020-03-27 2020-03-27 Visual point cloud and high-precision map fusion method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111462029B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111311743B (en) * 2020-03-27 2023-04-07 北京百度网讯科技有限公司 Three-dimensional reconstruction precision testing method and device and electronic equipment
CN112050821B (en) * 2020-09-11 2021-08-20 湖北亿咖通科技有限公司 Lane line polymerization method
CN112327842B (en) * 2020-10-29 2022-06-03 深圳市普渡科技有限公司 Method and system for positioning charging pile by robot
CN112381877B (en) * 2020-11-09 2023-09-01 北京百度网讯科技有限公司 Positioning fusion and indoor positioning method, device, equipment and medium
CN112577499B (en) * 2020-11-19 2022-10-11 上汽大众汽车有限公司 VSLAM feature map scale recovery method and system
CN112508773B (en) * 2020-11-20 2024-02-09 小米科技(武汉)有限公司 Image processing method and device, electronic equipment and storage medium
CN112668505A (en) 2020-12-30 2021-04-16 北京百度网讯科技有限公司 Three-dimensional perception information acquisition method of external parameters based on road side camera and road side equipment
CN112860828B (en) * 2021-01-22 2022-08-23 浙江商汤科技开发有限公司 Positioning method and device, electronic equipment and storage medium

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006083297A2 (en) * 2004-06-10 2006-08-10 Sarnoff Corporation Method and apparatus for aligning video to three-dimensional point clouds
EP3078935A1 (en) * 2015-04-10 2016-10-12 The European Atomic Energy Community (EURATOM), represented by the European Commission Method and device for real-time mapping and localization
CN109410256A (en) * 2018-10-29 2019-03-01 北京建筑大学 Based on mutual information cloud and image automatic, high precision method for registering
CN109658450A (en) * 2018-12-17 2019-04-19 武汉天乾科技有限责任公司 A kind of quick orthography generation method based on unmanned plane
CN109974707A (en) * 2019-03-19 2019-07-05 重庆邮电大学 A kind of indoor mobile robot vision navigation method based on improvement cloud matching algorithm
CN110047142A (en) * 2019-03-19 2019-07-23 中国科学院深圳先进技术研究院 No-manned plane three-dimensional map constructing method, device, computer equipment and storage medium
CN110148164A (en) * 2019-05-29 2019-08-20 北京百度网讯科技有限公司 Transition matrix generation method and device, server and computer-readable medium
CN110322500A (en) * 2019-06-28 2019-10-11 Oppo广东移动通信有限公司 Immediately optimization method and device, medium and the electronic equipment of positioning and map structuring
CN110648398A (en) * 2019-08-07 2020-01-03 武汉九州位讯科技有限公司 Real-time ortho image generation method and system based on unmanned aerial vehicle aerial data
CN110675431A (en) * 2019-10-08 2020-01-10 中国人民解放军军事科学院国防科技创新研究院 Three-dimensional multi-target tracking method fusing image and laser point cloud

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Simultaneous localization and mapping in unknown environment using dynamic matching of images and registration of point clouds; A. Vokhmintsev et al.; 2016 2nd International Conference on Industrial Engineering, Applications and Manufacturing (ICIEAM); 1-6 *
State of the art in high density image matching; Fabio Remondio et al.; The Photogrammetric Record; Vol. 29, No. 146; 144-166 *
3D simultaneous localization and mapping for multiple mobile robots based on Kinect; Shi Shangjie; China Master's Theses Full-text Database, Information Science and Technology; No. 7; I140-280 *
Three-dimensional reconstruction of building facades based on image matching and point cloud fusion; Wang Jun et al.; Chinese Journal of Computers; Vol. 35, No. 10; 2072-2079 *
Research on three-dimensional reconstruction algorithms based on image sequences; Peng Keju; China Doctoral Dissertations Full-text Database, Information Science and Technology; No. 3; I138-32 *
Research on stereo navigation technology based on point cloud fusion; Zhao Huibin; China Master's Theses Full-text Database, Information Science and Technology; No. 5; I138-394 *

Also Published As

Publication number Publication date
CN111462029A (en) 2020-07-28

Similar Documents

Publication Publication Date Title
CN111462029B (en) Visual point cloud and high-precision map fusion method and device and electronic equipment
CN110595494B (en) Map error determination method and device
CN114581532A (en) Multi-phase external parameter combined calibration method, device, equipment and medium
CN107748569B (en) Motion control method and device for unmanned aerial vehicle and unmanned aerial vehicle system
CN111612852B (en) Method and apparatus for verifying camera parameters
CN111401251B (en) Lane line extraction method, lane line extraction device, electronic equipment and computer readable storage medium
CN110675635B (en) Method and device for acquiring external parameters of camera, electronic equipment and storage medium
CN111578951B (en) Method and device for generating information in automatic driving
CN112101209B (en) Method and apparatus for determining world coordinate point cloud for roadside computing device
CN111722245A (en) Positioning method, positioning device and electronic equipment
CN111739005A (en) Image detection method, image detection device, electronic equipment and storage medium
CN111311743B (en) Three-dimensional reconstruction precision testing method and device and electronic equipment
CN113989450A (en) Image processing method, image processing apparatus, electronic device, and medium
CN111652113A (en) Obstacle detection method, apparatus, device, and storage medium
US11721037B2 (en) Indoor positioning method and apparatus, electronic device and storage medium
CN111553844A (en) Method and device for updating point cloud
CN111311742A (en) Three-dimensional reconstruction method, three-dimensional reconstruction device and electronic equipment
CN111275827B (en) Edge-based augmented reality three-dimensional tracking registration method and device and electronic equipment
KR20220100813A (en) Automatic driving vehicle registration method and device, electronic equipment and a vehicle
CN111597987A (en) Method, apparatus, device and storage medium for generating information
KR20220004604A (en) Method for detecting obstacle, electronic device, roadside device and cloud control platform
CN111260722B (en) Vehicle positioning method, device and storage medium
CN113932796A (en) High-precision map lane line generation method and device and electronic equipment
CN111967481A (en) Visual positioning method and device, electronic equipment and storage medium
CN112102417A (en) Method and device for determining world coordinates and external reference calibration method for vehicle-road cooperative roadside camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211014

Address after: 105 / F, building 1, No. 10, Shangdi 10th Street, Haidian District, Beijing 100085

Applicant after: Apollo Intelligent Technology (Beijing) Co.,Ltd.

Address before: 2 / F, baidu building, 10 Shangdi 10th Street, Haidian District, Beijing 100085

Applicant before: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY Co.,Ltd.

GR01 Patent grant