CN111311743A - Three-dimensional reconstruction precision testing method and device and electronic equipment - Google Patents
- Publication number
- CN111311743A (application number CN202010228592.8A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/05—Geographic models
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
Abstract
The application discloses a three-dimensional reconstruction precision testing method, a testing device, and electronic equipment, and relates to the technical field of high-precision maps. The method comprises the following steps: acquiring the three-dimensional coordinates, in a visual point cloud, of a first object and a second object appearing in an image sequence, wherein the visual point cloud is obtained through three-dimensional reconstruction, and the first object and the second object are objects that exist in a high-precision map; calculating a similarity transformation matrix between the visual point cloud and the high-precision map from the three-dimensional coordinates of the first object in the visual point cloud and in the high-precision map; performing coordinate transformation on the three-dimensional coordinates of the second object in the visual point cloud according to the similarity transformation matrix to obtain first three-dimensional coordinates of the second object; and calculating the precision of the three-dimensional reconstruction from the error between the first three-dimensional coordinates and the three-dimensional coordinates of the second object in the high-precision map. The method and the device make full use of existing high-precision map data and require only a small number of image sequences to be calibrated, which improves the efficiency of three-dimensional reconstruction precision testing.
Description
Technical Field
The application relates to image processing technology, in particular to the technical field of high-precision maps, and specifically to a three-dimensional reconstruction precision testing method, a three-dimensional reconstruction precision testing device, and electronic equipment.
Background
At present, three-dimensional image reconstruction technology can be used in high-precision map applications: an image acquisition device acquires an image sequence, three-dimensional modeling of the image sequence generates a visual point cloud, and the visual point cloud can be fused with a high-precision map to update it. Because the high-precision map places a high requirement on the precision of the visual point cloud, the three-dimensional reconstruction precision needs to be tested to ensure the fusion precision of the visual point cloud and the high-precision map. Currently, professional equipment is generally adopted to survey and map a scene, and the surveying result is used as the standard for evaluating the three-dimensional reconstruction precision. However, because surveying equipment is expensive and the surveying process is time-consuming, the efficiency of three-dimensional reconstruction precision testing is low and the testing cost is high.
Disclosure of Invention
The application provides a three-dimensional reconstruction precision testing method, a three-dimensional reconstruction precision testing device, and electronic equipment, so as to solve the technical problems of existing three-dimensional reconstruction precision testing methods.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, the present application provides a method for testing three-dimensional reconstruction accuracy, where the method includes:
acquiring three-dimensional coordinates of a first object and a second object in an image sequence in a visual point cloud, wherein the visual point cloud is obtained by performing three-dimensional reconstruction on the image sequence, and the first object and the second object are objects existing in a high-precision map;
calculating a similarity transformation matrix between the visual point cloud and the high-precision map according to the three-dimensional coordinates of the first object in the visual point cloud and the three-dimensional coordinates of the first object in the high-precision map;
according to the similarity transformation matrix, carrying out coordinate transformation on the three-dimensional coordinate of the second object in the visual point cloud to obtain a first three-dimensional coordinate of the second object;
and calculating the precision of the three-dimensional reconstruction according to the error between the first three-dimensional coordinates and the three-dimensional coordinates of the second object in the high-precision map.
The method and the device make full use of existing high-precision map data: only a small number of image sequences need to be calibrated before the precision test, and the three-dimensional reconstruction precision can be tested and evaluated once three-dimensional modeling is completed. These technical means improve the accuracy of the three-dimensional reconstruction precision test, reduce its difficulty and cost, and improve its efficiency.
Optionally, the acquiring three-dimensional coordinates of the first object and the second object in the image sequence in the visual point cloud includes:
acquiring pixel coordinates of the first object and the second object in the image sequence;
calculating to obtain a three-dimensional coordinate of the first object in the visual point cloud according to the pixel coordinate of the first object and the camera pose of each image in the image sequence, wherein the camera pose of each image in the image sequence is obtained by performing three-dimensional reconstruction on the image sequence;
and calculating to obtain the three-dimensional coordinates of the second object in the visual point cloud according to the pixel coordinates of the second object and the camera pose of each image in the image sequence.
This embodiment provides a solution to calculate the three-dimensional coordinates of an object in a visual point cloud from the pixel coordinates of the object and the camera pose of the image.
Optionally, the images of the image sequence include M signboards and N poles, where M and N are positive integers;
the first object is the M signboards, and the second object is the N poles.
In this embodiment, the signboards are used as the first object and the poles as the second object, which simplifies the three-dimensional reconstruction precision test and improves its accuracy.
Optionally, the three-dimensional coordinates of the first object in the visual point cloud are calculated by a signboard position estimation (SignSFM) algorithm;
and the three-dimensional coordinates of the second object in the visual point cloud are calculated by a line-feature position estimation (LineSFM) algorithm.
This embodiment provides an algorithm for calculating the three-dimensional coordinates of the first object and the second object in the visual point cloud.
Optionally, the image sequence is an image sequence acquired in a crowdsourcing manner.
In the embodiment, the image sequence is acquired in a crowdsourcing mode, so that the acquisition difficulty and cost of the image sequence can be reduced, and the difficulty and cost of the three-dimensional reconstruction precision test can be further reduced.
In a second aspect, the present application provides a three-dimensional reconstruction accuracy testing apparatus, including:
the system comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring three-dimensional coordinates of a first object and a second object in an image sequence in a visual point cloud, the visual point cloud is obtained by performing three-dimensional reconstruction on the image sequence, and the first object and the second object are objects existing in a high-precision map;
the first calculation module is used for calculating a similarity transformation matrix between the visual point cloud and the high-precision map according to the three-dimensional coordinates of the first object in the visual point cloud and the three-dimensional coordinates of the first object in the high-precision map;
the second acquisition module is used for carrying out coordinate transformation on the three-dimensional coordinate of the second object in the visual point cloud according to the similarity transformation matrix to obtain a first three-dimensional coordinate of the second object;
and the second calculation module is used for calculating the precision of the three-dimensional reconstruction according to the error between the first three-dimensional coordinates and the three-dimensional coordinates of the second object in the high-precision map.
Optionally, the first obtaining module includes:
an acquisition sub-module for acquiring pixel coordinates of the first object and the second object in the sequence of images;
the first calculation submodule is used for calculating to obtain a three-dimensional coordinate of the first object in the visual point cloud according to the pixel coordinate of the first object and the camera pose of each image in the image sequence, and the camera pose of each image in the image sequence is obtained by performing three-dimensional reconstruction on the image sequence;
and the second calculation submodule is used for calculating to obtain the three-dimensional coordinates of the second object in the visual point cloud according to the pixel coordinates of the second object and the camera pose of each image in the image sequence.
Optionally, the images of the image sequence include M signboards and N poles, where M and N are positive integers;
the first object is the M signboards, and the second object is the N poles.
Optionally, the three-dimensional coordinates of the first object in the visual point cloud are calculated by a signboard position estimation (SignSFM) algorithm;
and the three-dimensional coordinates of the second object in the visual point cloud are calculated by a line-feature position estimation (LineSFM) algorithm.
Optionally, the image sequence is an image sequence acquired in a crowdsourcing manner.
In a third aspect, the present application provides an electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform any of the methods of the first aspect.
In a fourth aspect, the present application provides a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of the first aspects.
One embodiment in the present application has the following advantages or benefits:
according to the method, the first object and the second object existing in the existing high-precision map are marked in the image sequence, so that after the image sequence is subjected to three-dimensional reconstruction to obtain the visual point cloud, the similarity transformation matrix between the two coordinate systems can be obtained through the three-dimensional coordinates of the first object in the point cloud coordinate system and the high-precision coordinate system, then the second object can be projected to the high-precision coordinate system under the three-dimensional reconstruction precision from the point cloud coordinate system according to the similarity transformation matrix, and therefore the three-dimensional reconstruction precision can be evaluated according to the error between the projected three-dimensional coordinates and the three-dimensional coordinates in the high-precision map.
Therefore, the method and the device make full use of existing high-precision map data: only a small number of image sequences need to be calibrated before the precision test, and the three-dimensional reconstruction precision can be tested and evaluated once three-dimensional modeling is completed. These technical means improve the accuracy of the three-dimensional reconstruction precision test, reduce its difficulty and cost, and improve its efficiency.
Other effects of the above-described alternative will be described below with reference to specific embodiments.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
fig. 1 is a schematic flow chart of a three-dimensional reconstruction precision testing method provided in an embodiment of the present application;
FIG. 2 is an exemplary diagram of an overall technical framework of a three-dimensional reconstruction accuracy testing algorithm provided in an embodiment of the present application;
fig. 3 is a schematic structural diagram of a three-dimensional reconstruction precision testing apparatus provided in an embodiment of the present application;
fig. 4 is a block diagram of an electronic device for implementing the three-dimensional reconstruction accuracy testing method according to the embodiment of the present application.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments to aid understanding; these details are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Descriptions of well-known functions and constructions are likewise omitted for clarity and conciseness.
The application provides a three-dimensional reconstruction precision testing method which is used for testing the precision of a three-dimensional reconstruction algorithm or a three-dimensional reconstruction model.
As shown in fig. 1, the three-dimensional reconstruction accuracy testing method includes the following steps:
step 101: three-dimensional coordinates of a first object and a second object in the image sequence in the visual point cloud are acquired.
The image sequence includes a plurality of images and is acquired in advance of step 101. The visual point cloud corresponding to the image sequence is obtained by three-dimensionally reconstructing the image sequence. In the present application, testing the three-dimensional reconstruction precision may be understood as testing the precision of this visual point cloud; that is, the three-dimensional reconstruction precision is embodied as the precision of the visual point cloud generated by three-dimensionally reconstructing the image sequence.
After three-dimensional reconstruction of the image sequence to generate the visual point cloud, in this step, three-dimensional coordinates of the first object and the second object in the visual point cloud may be obtained. The three-dimensional coordinates of the object in the visual point cloud can also be called as point cloud coordinates of the object, and can also be called as three-dimensional coordinates of the object in a point cloud coordinate system.
In the application, the image sequence can be three-dimensionally reconstructed based on the multi-view geometry principle. The three-dimensional reconstruction generates the visual point cloud of the image sequence and also yields the camera pose of each image in the sequence. The result of the three-dimensional reconstruction may also be called a sparse model; that is, the sparse model of the image sequence comprises the visual point cloud and the camera pose of each image.
The first object and the second object are objects that exist in the high-precision map, such as buildings, roads, signs, and poles. The number of first objects is not limited and may be one or more; likewise, the number of second objects is not limited and may be one or more. The first object and the second object may be identified in the image sequence in advance of step 101.
Since both the first object and the second object exist in the high-precision map, the three-dimensional coordinates of the first object and the second object in the high-precision map can be easily acquired. The three-dimensional coordinates of the first object and the second object in the high-precision map can be used as a coordinate transformation basis and reference data in a subsequent step. The three-dimensional coordinates of the object in the high-precision map can also be called high-precision coordinates of the object, and can also be called three-dimensional coordinates of the object in a high-precision coordinate system.
In the application, objects existing in the high-precision map are marked in the image sequence, so existing high-precision map data can be utilized; this reduces the difficulty, workload, and cost of data acquisition and improves its efficiency. In addition, because the three-dimensional coordinates in the high-precision map are highly precise, marking objects that exist in the high-precision map ensures that high-precision data serve as the coordinate transformation basis and reference data in subsequent steps, which improves the precision of the three-dimensional reconstruction precision test.
Step 102: and calculating a similarity transformation matrix between the visual point cloud and the high-precision map according to the three-dimensional coordinates of the first object in the visual point cloud and the three-dimensional coordinates of the first object in the high-precision map.
In this step, a matching pair may be formed by the three-dimensional coordinates of the first object in the visual point cloud and the three-dimensional coordinates of the first object in the high-precision map, and a similarity transformation matrix of the matching pair may be calculated according to a suitable algorithm.
For example, the Sim3 similarity transformation matrix of the matched pairs may be calculated according to the Sim3 algorithm. The idea is to find, by the least-squares method, the Sim3 similarity transformation matrix that minimizes the error over the matched pairs; the calculation may involve constructing a matrix from the point pairs and performing Singular Value Decomposition (SVD) on it.
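As an illustration of the SVD-based least-squares idea above, the following sketch uses Umeyama's closed-form method to estimate a similarity transformation (scale, rotation, translation) between matched 3D point sets. The function name and the use of NumPy are illustrative assumptions, not part of the patent.

```python
import numpy as np

def estimate_sim3(src, dst):
    """Estimate scale s, rotation R, translation t minimizing
    sum ||dst_i - (s * R @ src_i + t)||^2 over matched 3D point
    pairs (Umeyama's closed-form method, based on SVD)."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    mu_src, mu_dst = src.mean(0), dst.mean(0)
    x, y = src - mu_src, dst - mu_dst
    # Cross-covariance between the centred point sets
    cov = y.T @ x / len(src)
    U, D, Vt = np.linalg.svd(cov)
    # Reflection correction keeps R a proper rotation (det = +1)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1
    R = U @ S @ Vt
    var_src = (x ** 2).sum() / len(src)
    s = np.trace(np.diag(D) @ S) / var_src
    t = mu_dst - s * R @ mu_src
    return s, R, t
```

Given the matched pairs of the first object's coordinates in the point cloud frame and the map frame, `s`, `R`, and `t` together play the role of the similarity transformation matrix described in step 102.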
Step 103: and performing coordinate transformation on the three-dimensional coordinates of the second object in the visual point cloud according to the similarity transformation matrix to obtain a first three-dimensional coordinate of the second object.
In this step, coordinate transformation may be performed on the three-dimensional coordinates of the second object in the visual point cloud according to the similarity transformation matrix obtained in step 102, so as to obtain the first three-dimensional coordinates of the second object. The first three-dimensional coordinates can be understood as the point cloud coordinates of the second object mapped into the high-precision coordinate system at the achieved reconstruction precision. Because the reconstruction is not perfectly precise, the first three-dimensional coordinates do not coincide with the three-dimensional coordinates of the second object in the high-precision map; there is an error between them that is related to the three-dimensional reconstruction precision: the higher the precision, the smaller the error, and the lower the precision, the larger the error.
Step 104: and calculating the precision of the three-dimensional reconstruction according to the error between the first three-dimensional coordinates and the three-dimensional coordinates of the second object in the high-precision map.
In this step, for the coordinate-transformed second object, the matching second object can be determined from the high-precision map to form a matched pair, and the error between the members of the pair can then be calculated. This error can be regarded as the error of the visual point cloud generated by three-dimensional reconstruction, from which the precision of the three-dimensional reconstruction is obtained.
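Steps 103 and 104 can be sketched together as follows, assuming point-to-point matched pairs and an RMSE precision measure; the patent does not fix a particular error metric, so the function name and the choice of RMSE are illustrative assumptions.

```python
import numpy as np

def reconstruction_precision(points_cloud, points_map, s, R, t):
    """Map the second object's point-cloud coordinates into the map
    frame with the similarity transform (s, R, t), then report the
    per-point error and the RMSE against the high-precision map."""
    mapped = s * (np.asarray(points_cloud, float) @ R.T) + t
    err = np.linalg.norm(mapped - np.asarray(points_map, float), axis=1)
    return err, float(np.sqrt((err ** 2).mean()))
```

The smaller the returned RMSE, the higher the three-dimensional reconstruction precision, matching the relationship stated in step 103.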
According to the method, the first object and the second object, both of which exist in the existing high-precision map, are marked in the image sequence. After the image sequence is three-dimensionally reconstructed into a visual point cloud, a similarity transformation matrix between the two coordinate systems can be obtained from the three-dimensional coordinates of the first object in the point cloud coordinate system and in the high-precision coordinate system. The second object can then be projected from the point cloud coordinate system into the high-precision coordinate system according to the similarity transformation matrix, so that the three-dimensional reconstruction precision can be evaluated from the error between the projected three-dimensional coordinates and the three-dimensional coordinates in the high-precision map.
Therefore, the method and the device make full use of existing high-precision map data: only a small number of image sequences need to be calibrated before the precision test, and the three-dimensional reconstruction precision can be tested and evaluated once three-dimensional modeling is completed. These technical means improve the accuracy of the three-dimensional reconstruction precision test, reduce its difficulty and cost, and improve its efficiency.
In the present application, it is required to obtain the three-dimensional coordinates, in the visual point cloud, of the first object and the second object in the image sequence. The present application provides the following embodiments for obtaining these three-dimensional coordinates:
optionally, the acquiring three-dimensional coordinates of the first object and the second object in the image sequence in the visual point cloud includes:
acquiring pixel coordinates of the first object and the second object in the image sequence;
calculating to obtain a three-dimensional coordinate of the first object in the visual point cloud according to the pixel coordinate of the first object and the camera pose of each image in the image sequence, wherein the camera pose of each image in the image sequence is obtained by performing three-dimensional reconstruction on the image sequence;
and calculating to obtain the three-dimensional coordinates of the second object in the visual point cloud according to the pixel coordinates of the second object and the camera pose of each image in the image sequence.
The step of acquiring the pixel coordinates of an object (the first object or the second object) in the image sequence refers to acquiring the pixel coordinates of the object in each image of the sequence. The pixel coordinates of the object may be extracted from each image by a deep learning algorithm.
As mentioned above, the three-dimensional reconstruction of the image sequence can obtain the visual point cloud corresponding to the image sequence, and also can obtain the camera pose of each image in the image sequence, so that the camera pose can be obtained by three-dimensional reconstruction of the image sequence.
This embodiment provides a solution to calculate the three-dimensional coordinates of an object in a visual point cloud from the pixel coordinates of the object and the camera pose of the image.
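One standard way to realize such a calculation, shown here only as an assumed sketch, is linear (DLT) triangulation: each image's camera pose (with intrinsics) yields a 3x4 projection matrix, and the object's pixel coordinates across images constrain its 3D point. The patent does not name this specific method.

```python
import numpy as np

def triangulate(pixels, projections):
    """Linear (DLT) triangulation: recover one 3D point from its
    pixel coordinates in several images, given each image's 3x4
    projection matrix P = K [R | t] built from the camera pose."""
    rows = []
    for (u, v), P in zip(pixels, projections):
        # Each observation contributes two linear constraints
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.asarray(rows)
    # The null-space vector of A is the homogeneous 3D point
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

Applying this to every tracked point of the first and second objects gives their three-dimensional coordinates in the point cloud coordinate system.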
Optionally, the images of the image sequence include M signboards and N poles, where M and N are positive integers;
the first object is the M signboards, and the second object is the N poles.
In the case where the images of the image sequence include signboards and poles, a signboard is generally a regular rectangle; as a planar geometric figure with a regular shape, it makes matching pairs easier to determine, and the pixel coordinates of its four corner points are easy to acquire, so the signboards can be used as the first object. A pole, in turn, can be regarded as a linear object; matching pairs of linear objects are easier to determine, and the error between two linear objects is easier to calculate and can be calculated more accurately, so the poles can be used as the second object.
Further, in order to calculate the error between the first three-dimensional coordinates of the second object and its three-dimensional coordinates in the high-precision map more easily and accurately, a pole with less distortion in the images of the image sequence may be selected as the second object.
If the signboards are used as the first object and the poles as the second object, the pixel coordinates of the first object may include the pixel coordinates of the four corner points of each signboard, and the pixel coordinates of the second object may include the pixel coordinates of the two end points of each pole.
Therefore, in this embodiment, using the signboards as the first object and the poles as the second object helps to simplify the three-dimensional reconstruction precision test and to improve its accuracy.
It should be noted that the type of the signboard is not limited, and may be any signboard having a marking function, such as a road traffic signboard, a billboard, and the like. The type of pole is also not limited and can be, for example, a light pole, a sign support pole, a utility pole, and the like.
In the present application, besides using the signboards as the first object and the poles as the second object, the poles may be used as the first object and the signboards as the second object; or some signboards may be used as the first object and other signboards as the second object; or some poles may be used as the first object and other poles as the second object; and so on.
Optionally, the three-dimensional coordinates of the first object in the visual point cloud are calculated by a signboard position estimation (SignSFM) algorithm;
and the three-dimensional coordinates of the second object in the visual point cloud are calculated by a line feature position estimation (LineSFM) algorithm.
In this embodiment, if the signboard is used as the first object and the pole is used as the second object, the three-dimensional coordinates of the first object in the visual point cloud include the three-dimensional coordinates of its four corner points in the visual point cloud, and the three-dimensional coordinates of the second object in the visual point cloud include the three-dimensional coordinates of its two end points in the visual point cloud.
The general principle of the signboard position estimation algorithm, also referred to as the SignSFM algorithm, is as follows: find the feature points of the rectangular region where the signboard is located in the image according to the signboard's four corner points in the image; index the point cloud coordinates corresponding to those feature points via the sparse model and fit the signboard plane; then calculate the three-dimensional coordinates of the signboard's four corner points in the visual point cloud from the camera pose of each image and the fitted signboard plane.
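A minimal sketch of the two geometric steps above — fitting the signboard plane to the indexed point cloud coordinates, then intersecting each corner's viewing ray with that plane. The function names and the world-to-camera pose convention are illustrative assumptions, not the patent's actual implementation:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3D points; returns (n, d) with n·x + d = 0."""
    centroid = points.mean(axis=0)
    # The right singular vector for the smallest singular value of the
    # centered points is the direction of least spread, i.e. the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return normal, -normal @ centroid

def corner_on_plane(pixel, K, R, t, normal, d):
    """Back-project an image corner onto the fitted plane.
    (R, t) is the world-to-camera pose: x_cam = R @ x_world + t."""
    ray_cam = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    ray_world = R.T @ ray_cam            # ray direction in the world frame
    origin = -R.T @ t                    # camera center in the world frame
    scale = -(normal @ origin + d) / (normal @ ray_world)
    return origin + scale * ray_world
```

With the camera pose of each image known from the sparse model, applying `corner_on_plane` to the four annotated corners yields their coordinates in the visual point cloud frame.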
The general principle of the LineSFM algorithm is: because the same pole can be observed in multiple images and the camera pose of each image is available, the LineSFM algorithm can calculate the three-dimensional coordinates of the pole by minimizing its reprojection error over the multiple images as the objective function.
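The objective just described can be sketched as a reprojection residual over all images observing the pole; the data layout and function names below are assumptions for illustration only, and any nonlinear least-squares solver could minimize the residual:

```python
import numpy as np

def project(X, K, R, t):
    """Pinhole projection of a world point; (R, t) maps world to camera."""
    x = K @ (R @ X + t)
    return x[:2] / x[2]

def rod_residuals(params, observations):
    """Reprojection residuals for one pole.
    params: flattened candidate 3D endpoints, shape (6,).
    observations: per-image tuples (K, R, t, uv_top, uv_bottom) holding the
    observed pixel coordinates of the pole's two end points."""
    top, bottom = params[:3], params[3:]
    res = []
    for K, R, t, uv_top, uv_bottom in observations:
        res.extend(project(top, K, R, t) - uv_top)
        res.extend(project(bottom, K, R, t) - uv_bottom)
    return np.array(res)

# Feeding rod_residuals to a nonlinear least-squares solver
# (e.g. scipy.optimize.least_squares) recovers the pole's 3D endpoints.
```

The residual is zero exactly when the candidate endpoints reproject onto the observed pixels in every image, which is what "minimizing the projection error as an objective function" amounts to.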
This embodiment provides an algorithm for calculating the three-dimensional coordinates of the first object and the second object in the visual point cloud.
Optionally, the image sequence is an image sequence acquired in a crowdsourcing manner.
In the embodiment, the image sequence is acquired in a crowdsourcing mode, so that the acquisition difficulty and cost of the image sequence can be reduced, and the difficulty and cost of the three-dimensional reconstruction precision test can be further reduced.
In order to more intuitively understand the three-dimensional reconstruction accuracy testing method of the present application, fig. 2 shows an exemplary diagram of an overall technical framework of the three-dimensional reconstruction accuracy testing.
As shown in fig. 2, the sparse model obtained by three-dimensional reconstruction, the pixel coordinates and high-precision coordinates of the signboards, and the pixel coordinates and high-precision coordinates of the poles are used as input. The point cloud coordinates of the signboards are obtained by the signboard position estimation algorithm; the similarity transformation matrix between the point cloud coordinates of the signboards and their high-precision coordinates is obtained by the Sim3 algorithm; the point cloud coordinates of the poles are obtained by the LineSFM algorithm; the first three-dimensional coordinates of the poles are obtained by transforming their point cloud coordinates with the similarity transformation matrix; the error of each matched pole is then obtained from its first three-dimensional coordinates and high-precision coordinates; and finally, the three-dimensional reconstruction precision is determined according to the errors of the matched poles.
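One standard closed-form way to obtain such a similarity transformation (scale, rotation, translation) from matched 3D points is Umeyama's method; the sketch below is an assumption for illustration, since the application names the Sim3 algorithm without specifying the solver:

```python
import numpy as np

def umeyama_sim3(src, dst):
    """Closed-form similarity transform: find s, R, t minimizing
    ||dst - (s * R @ src + t)||^2. src/dst: (N, 3) matched points,
    e.g. signboard corners in the visual point cloud and in the
    high-precision map."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                       # guard against a reflection
    R = U @ S @ Vt
    s = (D * S.diagonal()).sum() / (xs ** 2).sum() * len(src)
    t = mu_d - s * R @ mu_s
    return s, R, t

def apply_sim3(points, s, R, t):
    """Project point cloud coordinates into the high-precision frame."""
    return s * points @ R.T + t
```

Applying `apply_sim3` to the pole endpoints computed by LineSFM yields their first three-dimensional coordinates, whose distances to the high-precision map coordinates give the per-pole errors.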
It should be noted that the various optional implementations of the three-dimensional reconstruction accuracy testing method in the present application may be combined with each other or implemented separately, and the present application is not limited in this respect.
The above-described embodiments of the present application have the following advantages or beneficial effects:
according to the method, the first object and the second object, which exist in an existing high-precision map, are marked in the image sequence. After the image sequence is three-dimensionally reconstructed to obtain the visual point cloud, the similarity transformation matrix between the two coordinate systems can be obtained from the three-dimensional coordinates of the first object in the point cloud coordinate system and in the high-precision coordinate system. The second object can then be projected from the point cloud coordinate system into the high-precision coordinate system, at the achieved reconstruction precision, according to the similarity transformation matrix, so the three-dimensional reconstruction precision can be evaluated from the error between the projected three-dimensional coordinates and the three-dimensional coordinates in the high-precision map.
Therefore, the method and the device make full use of existing high-precision map data: only a small number of image sequences need to be annotated before the three-dimensional reconstruction precision test, and the precision can be tested and evaluated once the three-dimensional modeling is completed. With this technical means, the accuracy of the three-dimensional reconstruction precision test can be improved, and its difficulty and cost can be reduced while the test efficiency is improved.
The present application further provides a three-dimensional reconstruction accuracy testing apparatus, as shown in fig. 3, the three-dimensional reconstruction accuracy testing apparatus 200 includes:
a first obtaining module 201, configured to obtain three-dimensional coordinates of a first object and a second object in an image sequence in a visual point cloud, where the visual point cloud is obtained by performing three-dimensional reconstruction on the image sequence, and the first object and the second object are objects existing in a high-precision map;
a first calculating module 202, configured to calculate a similarity transformation matrix between the visual point cloud and the high-precision map according to the three-dimensional coordinates of the first object in the visual point cloud and the three-dimensional coordinates of the first object in the high-precision map;
the second obtaining module 203 is configured to perform coordinate transformation on the three-dimensional coordinate of the second object in the visual point cloud according to the similarity transformation matrix to obtain a first three-dimensional coordinate of the second object;
a second calculating module 204, configured to calculate accuracy of the three-dimensional reconstruction according to an error between the first three-dimensional coordinates and three-dimensional coordinates of the second object in the high-precision map.
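As a minimal illustration of this error computation, the precision could be aggregated as the mean and maximum endpoint error over all matched poles; the exact metric is an assumption, since the application leaves the aggregation open:

```python
import numpy as np

def reconstruction_precision(transformed, hd_map):
    """transformed, hd_map: (num_rods, 2, 3) arrays holding the two
    endpoints of each matched pole, both in the high-precision frame.
    Returns (mean_error, max_error) over all matched poles."""
    endpoint_dist = np.linalg.norm(transformed - hd_map, axis=2)  # (num_rods, 2)
    per_pole = endpoint_dist.mean(axis=1)   # average the two endpoints
    return per_pole.mean(), per_pole.max()
```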
Optionally, the first obtaining module 201 includes:
an acquisition sub-module for acquiring pixel coordinates of the first object and the second object in the sequence of images;
the first calculation submodule is used for calculating the three-dimensional coordinates of the first object in the visual point cloud according to the pixel coordinates of the first object and the camera pose of each image in the image sequence, where the camera pose of each image in the image sequence is obtained by performing three-dimensional reconstruction on the image sequence;
and the second calculation submodule is used for calculating the three-dimensional coordinates of the second object in the visual point cloud according to the pixel coordinates of the second object and the camera pose of each image in the image sequence.
Optionally, the images of the image sequence include M signboards and N poles, where M and N are positive integers;
the first objects are the M signboards, and the second objects are the N poles.
Optionally, the three-dimensional coordinates of the first object in the visual point cloud are calculated by a signboard position estimation (SignSFM) algorithm;
and the three-dimensional coordinates of the second object in the visual point cloud are calculated by a line feature position estimation (LineSFM) algorithm.
Optionally, the image sequence is an image sequence acquired in a crowdsourcing manner.
The three-dimensional reconstruction precision testing device 200 provided in the present application can implement each process implemented in the embodiments of the three-dimensional reconstruction precision testing method and achieve the same beneficial effects; to avoid repetition, details are not described here again.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
Fig. 4 is a block diagram of an electronic device of a three-dimensional reconstruction accuracy testing method according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 4, the electronic apparatus includes: one or more processors 501, memory 502, and interfaces for connecting the various components, including high-speed interfaces and low-speed interfaces. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). In fig. 4, one processor 501 is taken as an example.
The memory 502, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules (e.g., the first obtaining module 201, the first calculating module 202, the second obtaining module 203, and the second calculating module 204 shown in fig. 3) corresponding to the three-dimensional reconstruction accuracy testing method in the embodiment of the present application. The processor 501 executes various functional applications and data processing of the three-dimensional reconstruction accuracy testing apparatus by running the non-transitory software programs, instructions and modules stored in the memory 502, that is, implements the three-dimensional reconstruction accuracy testing method in the above-described method embodiments.
The memory 502 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the electronic device of the three-dimensional reconstruction accuracy test method, and the like. Further, the memory 502 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 502 may optionally include a memory remotely located from the processor 501, and these remote memories may be connected to the electronics of the three-dimensional reconstruction accuracy testing method via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the three-dimensional reconstruction accuracy testing method may further include: an input device 503 and an output device 504. The processor 501, the memory 502, the input device 503 and the output device 504 may be connected by a bus or other means, and fig. 4 illustrates the connection by a bus as an example.
The input device 503 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic apparatus of the three-dimensional reconstruction accuracy test method, such as a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, a joystick, and the like. The output devices 504 may include a display device, auxiliary lighting devices (e.g., LEDs), and haptic feedback devices (e.g., vibrating motors), among others. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application specific ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical solution of the embodiments of the present application, the first object and the second object, which exist in an existing high-precision map, are marked in the image sequence. After the image sequence is three-dimensionally reconstructed to obtain the visual point cloud, the similarity transformation matrix between the two coordinate systems can be obtained from the three-dimensional coordinates of the first object in the point cloud coordinate system and in the high-precision coordinate system. The second object can then be projected from the point cloud coordinate system into the high-precision coordinate system according to the similarity transformation matrix, so the three-dimensional reconstruction precision can be evaluated from the error between the projected three-dimensional coordinates and the three-dimensional coordinates in the high-precision map. The method and the device make full use of existing high-precision map data: only a small number of image sequences need to be annotated before the precision test, and the three-dimensional reconstruction precision can be tested and evaluated once the three-dimensional modeling is completed. With this technical means, the accuracy of the three-dimensional reconstruction precision test can be improved, and its difficulty and cost can be reduced while the test efficiency is improved.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, and the present application is not limited thereto as long as the desired results of the technical solutions disclosed in the present application can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.
Claims (12)
1. A three-dimensional reconstruction precision testing method is characterized by comprising the following steps:
acquiring three-dimensional coordinates of a first object and a second object in an image sequence in a visual point cloud, wherein the visual point cloud is obtained by performing three-dimensional reconstruction on the image sequence, and the first object and the second object are objects existing in a high-precision map;
calculating a similarity transformation matrix between the visual point cloud and the high-precision map according to the three-dimensional coordinates of the first object in the visual point cloud and the three-dimensional coordinates of the first object in the high-precision map;
according to the similarity transformation matrix, carrying out coordinate transformation on the three-dimensional coordinate of the second object in the visual point cloud to obtain a first three-dimensional coordinate of the second object;
and calculating the precision of the three-dimensional reconstruction according to the error between the first three-dimensional coordinates and the three-dimensional coordinates of the second object in the high-precision map.
2. The method of claim 1, wherein obtaining three-dimensional coordinates of a first object and a second object in a visual point cloud in the sequence of images comprises:
acquiring pixel coordinates of the first object and the second object in the image sequence;
calculating to obtain a three-dimensional coordinate of the first object in the visual point cloud according to the pixel coordinate of the first object and the camera pose of each image in the image sequence, wherein the camera pose of each image in the image sequence is obtained by performing three-dimensional reconstruction on the image sequence;
and calculating to obtain the three-dimensional coordinates of the second object in the visual point cloud according to the pixel coordinates of the second object and the camera pose of each image in the image sequence.
3. The method of claim 1, wherein the images of the image sequence include M signboards and N poles, M and N being positive integers;
the first objects are the M signboards, and the second objects are the N poles.
4. The method of claim 3, wherein the three-dimensional coordinates of the first object in the visual point cloud are calculated by a signboard position estimation (SignSFM) algorithm;
and the three-dimensional coordinates of the second object in the visual point cloud are calculated by a line feature position estimation (LineSFM) algorithm.
5. The method of claim 1, wherein the image sequence is a sequence of images captured in a crowd-sourced manner.
6. A three-dimensional reconstruction accuracy testing device is characterized by comprising:
the system comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring three-dimensional coordinates of a first object and a second object in an image sequence in a visual point cloud, the visual point cloud is obtained by performing three-dimensional reconstruction on the image sequence, and the first object and the second object are objects existing in a high-precision map;
the first calculation module is used for calculating a similarity transformation matrix between the visual point cloud and the high-precision map according to the three-dimensional coordinates of the first object in the visual point cloud and the three-dimensional coordinates of the first object in the high-precision map;
the second acquisition module is used for carrying out coordinate transformation on the three-dimensional coordinate of the second object in the visual point cloud according to the similarity transformation matrix to obtain a first three-dimensional coordinate of the second object;
and the second calculation module is used for calculating the precision of the three-dimensional reconstruction according to the error between the first three-dimensional coordinates and the three-dimensional coordinates of the second object in the high-precision map.
7. The apparatus of claim 6, wherein the first obtaining module comprises:
an acquisition sub-module for acquiring pixel coordinates of the first object and the second object in the sequence of images;
the first calculation submodule is used for calculating to obtain a three-dimensional coordinate of the first object in the visual point cloud according to the pixel coordinate of the first object and the camera pose of each image in the image sequence, and the camera pose of each image in the image sequence is obtained by performing three-dimensional reconstruction on the image sequence;
and the second calculation submodule is used for calculating to obtain the three-dimensional coordinates of the second object in the visual point cloud according to the pixel coordinates of the second object and the camera pose of each image in the image sequence.
8. The apparatus according to claim 6, wherein the images of the image sequence comprise M signboards and N poles, wherein M and N are positive integers;
the first objects are the M signboards, and the second objects are the N poles.
9. The apparatus of claim 8, wherein the three-dimensional coordinates of the first object in the visual point cloud are calculated by a signboard position estimation (SignSFM) algorithm;
and the three-dimensional coordinates of the second object in the visual point cloud are calculated by a line feature position estimation (LineSFM) algorithm.
10. The apparatus of claim 6, wherein the image sequence is a sequence of images captured in a crowd-sourced manner.
11. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 5.
12. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010228592.8A CN111311743B (en) | 2020-03-27 | 2020-03-27 | Three-dimensional reconstruction precision testing method and device and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111311743A true CN111311743A (en) | 2020-06-19 |
CN111311743B CN111311743B (en) | 2023-04-07 |
Family
ID=71147476
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010228592.8A Active CN111311743B (en) | 2020-03-27 | 2020-03-27 | Three-dimensional reconstruction precision testing method and device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111311743B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112508773A (en) * | 2020-11-20 | 2021-03-16 | 小米科技(武汉)有限公司 | Image processing method and device, electronic device and storage medium |
WO2022088799A1 (en) * | 2020-10-29 | 2022-05-05 | 陈志立 | Three-dimensional reconstruction method, three-dimensional reconstruction apparatus and storage medium |
CN116246026A (en) * | 2023-05-05 | 2023-06-09 | 北京百度网讯科技有限公司 | Training method of three-dimensional reconstruction model, three-dimensional scene rendering method and device |
Citations (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103258345A (en) * | 2013-04-18 | 2013-08-21 | 中国林业科学研究院资源信息研究所 | Method for extracting parameters of tree branches based on ground laser radar three-dimensional scanning |
CN103927787A (en) * | 2014-04-30 | 2014-07-16 | 南京大学 | Method and device for improving three-dimensional reconstruction precision based on matrix recovery |
CN104599314A (en) * | 2014-06-12 | 2015-05-06 | 深圳奥比中光科技有限公司 | Three-dimensional model reconstruction method and system |
CN105654483A (en) * | 2015-12-30 | 2016-06-08 | 四川川大智胜软件股份有限公司 | Three-dimensional point cloud full-automatic registration method |
CA2914020A1 (en) * | 2014-12-10 | 2016-06-10 | Dassault Systemes | Texturing a 3d modeled object |
CN106307967A (en) * | 2015-06-30 | 2017-01-11 | 卡西欧计算机株式会社 | Drawing apparatus and drawing method for drawing apparatus |
CN106373141A (en) * | 2016-09-14 | 2017-02-01 | 上海航天控制技术研究所 | Tracking system and tracking method of relative movement angle and angular velocity of slowly rotating space fragment |
CN108320329A (en) * | 2018-02-02 | 2018-07-24 | 维坤智能科技(上海)有限公司 | A kind of 3D map creating methods based on 3D laser |
US20190073792A1 (en) * | 2017-09-05 | 2019-03-07 | Canon Kabushiki Kaisha | System and method for determining a camera pose |
CN109493375A (en) * | 2018-10-24 | 2019-03-19 | 深圳市易尚展示股份有限公司 | The Data Matching and merging method of three-dimensional point cloud, device, readable medium |
US20190206123A1 (en) * | 2017-12-29 | 2019-07-04 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method and apparatus for fusing point cloud data |
CN110146869A (en) * | 2019-05-21 | 2019-08-20 | 北京百度网讯科技有限公司 | Determine method, apparatus, electronic equipment and the storage medium of coordinate system conversion parameter |
US20190327412A1 (en) * | 2018-04-24 | 2019-10-24 | Industrial Technology Research Institute | Building system and building method for panorama point cloud |
CN110827403A (en) * | 2019-11-04 | 2020-02-21 | 北京易控智驾科技有限公司 | Method and device for constructing three-dimensional point cloud map of mine |
CN110910483A (en) * | 2019-11-29 | 2020-03-24 | 广州极飞科技有限公司 | Three-dimensional reconstruction method and device and electronic equipment |
CN111462029A (en) * | 2020-03-27 | 2020-07-28 | 北京百度网讯科技有限公司 | Visual point cloud and high-precision map fusion method and device and electronic equipment |
CN113409459A (en) * | 2021-06-08 | 2021-09-17 | 北京百度网讯科技有限公司 | Method, device and equipment for producing high-precision map and computer storage medium |
Non-Patent Citations (2)
Title |
---|
CHUNGUANG LI: "3D Visual SLAM Based on Multiple Iterative Closest Point", Mathematical Problems in Engineering * |
WU FAN: "Research Status and Prospects of Visual SLAM", Application Research of Computers * |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2022088799A1 (en) * | 2020-10-29 | 2022-05-05 | Chen Zhili | Three-dimensional reconstruction method, three-dimensional reconstruction apparatus and storage medium |
CN112508773A (en) * | 2020-11-20 | 2021-03-16 | Xiaomi Technology (Wuhan) Co., Ltd. | Image processing method and device, electronic device and storage medium |
CN112508773B (en) * | 2020-11-20 | 2024-02-09 | Xiaomi Technology (Wuhan) Co., Ltd. | Image processing method and device, electronic device and storage medium |
CN116246026A (en) * | 2023-05-05 | 2023-06-09 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Training method of three-dimensional reconstruction model, three-dimensional scene rendering method and device |
CN116246026B (en) * | 2023-05-05 | 2023-08-08 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Training method of three-dimensional reconstruction model, three-dimensional scene rendering method and device |
Also Published As
Publication number | Publication date |
---|---|
CN111311743B (en) | 2023-04-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11615605B2 (en) | Vehicle information detection method, electronic device and storage medium | |
CN111462029B (en) | Visual point cloud and high-precision map fusion method and device and electronic equipment | |
CN112270669B (en) | Human body 3D key point detection method, model training method and related devices | |
CN111311743B (en) | Three-dimensional reconstruction precision testing method and device and electronic equipment | |
JP7258066B2 (en) | POSITIONING METHOD, POSITIONING DEVICE, AND ELECTRONIC DEVICE | |
CN111612852B (en) | Method and apparatus for verifying camera parameters | |
CN112101209B (en) | Method and apparatus for determining world coordinate point cloud for roadside computing device | |
CN111739005B (en) | Image detection method, device, electronic equipment and storage medium | |
CN111401251B (en) | Lane line extraction method, lane line extraction device, electronic equipment and computer readable storage medium | |
CN111767853B (en) | Lane line detection method and device | |
CN111539973A (en) | Method and device for detecting pose of vehicle | |
CN111860167A (en) | Face fusion model acquisition and face fusion method, device and storage medium | |
CN111578951B (en) | Method and device for generating information in automatic driving | |
CN110675635B (en) | Method and device for acquiring external parameters of camera, electronic equipment and storage medium | |
US11721037B2 (en) | Indoor positioning method and apparatus, electronic device and storage medium | |
JP2022050311A (en) | Method for detecting lane change of vehicle, system, electronic apparatus, storage medium, roadside machine, cloud control platform, and computer program | |
KR102566300B1 (en) | Method for indoor localization and electronic device | |
US11694405B2 (en) | Method for displaying annotation information, electronic device and storage medium | |
CN111784834A (en) | Point cloud map generation method and device and electronic equipment | |
CN112102417B (en) | Method and device for determining world coordinates | |
CN112017304B (en) | Method, apparatus, electronic device and medium for presenting augmented reality data | |
CN111260722B (en) | Vehicle positioning method, device and storage medium | |
CN111967481A (en) | Visual positioning method and device, electronic equipment and storage medium | |
CN111311654B (en) | Camera position registration method and device, electronic equipment and storage medium | |
CN112200190B (en) | Method and device for determining position of interest point, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||