CN114519681A - Automatic calibration method and device, computer readable storage medium and terminal


Info

Publication number
CN114519681A
CN114519681A (application CN202111677841.2A)
Authority
CN
China
Prior art keywords
point cloud
pixel point
continuous line
determining
depth continuous
Prior art date
Legal status
Pending
Application number
CN202111677841.2A
Other languages
Chinese (zh)
Inventor
黄超
张浩
Current Assignee
Shanghai Xiantu Intelligent Technology Co Ltd
Original Assignee
Shanghai Xiantu Intelligent Technology Co Ltd
Application filed by Shanghai Xiantu Intelligent Technology Co Ltd filed Critical Shanghai Xiantu Intelligent Technology Co Ltd
Priority to CN202111677841.2A priority Critical patent/CN114519681A/en
Publication of CN114519681A publication Critical patent/CN114519681A/en

Classifications

    • G06T7/13 Image analysis; Segmentation; Edge detection
    • G06T5/20 Image enhancement or restoration using local operators
    • G06T7/50 Image analysis; Depth or shape recovery
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation


Abstract

An automatic calibration method and device, a computer readable storage medium and a terminal are provided. The method comprises the following steps: acquiring original point cloud data and original image data acquired with the same timestamp for the same area scene; determining point cloud depth continuous line features and image depth continuous line features from the original point cloud data and the original image data respectively; determining a two-dimensional pixel point set of the point cloud depth continuous line features according to a preset rotation matrix initial value and a preset translation vector initial value; for each two-dimensional pixel point in the two-dimensional pixel point set, searching the image depth continuous line features for its nearest neighboring pixel point, so as to obtain a neighboring pixel point set and a normal vector set of the neighboring pixel points; and determining the optimal value of the rotation matrix and the optimal value of the translation vector based on the neighboring pixel point set and the normal vector set. The invention can realize fully automatic calibration among multiple sensors, reduce calibration cost, and improve calibration efficiency and calibration precision.

Description

Automatic calibration method and device, computer readable storage medium and terminal
Technical Field
The invention relates to the technical field of multi-sensor external parameter calibration, in particular to an automatic calibration method and device, a computer readable storage medium and a terminal.
Background
With the rapid development of artificial intelligence technology, automatic driving of motor vehicles is emerging. The laser radar and the camera are two important sensors in an automatic driving sensor suite, and each has its advantages and disadvantages. The laser radar emits laser beams toward a detection target, compares the received reflected signals with the transmitted signals, and, after suitable processing, obtains accurate characteristics of the target such as distance, direction, height and shape. The laser radar sensor recognizes obstacle profiles very well, can provide the host vehicle with the most direct reflection of a target's true morphological characteristics, and is robust to weather; its drawback is that it cannot capture the texture and color information of an object. In contrast, the camera has high resolution and the acquired image can provide rich texture and color information, but the image acquired by the camera is susceptible to severe weather conditions, and its distance accuracy is poor, so it is not suitable for estimating information such as the shape, position, speed and acceleration of an object.
Because of the above-mentioned drawbacks of using a single sensor alone, fusing the information sensed by multiple sensors is currently the mainstream technology. Data acquired by the laser radar and the camera are converted into a unified coordinate system through a coordinate transformation; for example, point cloud coordinate points of the laser radar can be converted into the two-dimensional image coordinate system of the camera. The process of determining this coordinate transformation relation is the external reference calibration of the different sensors. An accurate calibration result can provide a more stable and reliable basis for later information fusion perception, and therefore a safer guarantee for the whole unmanned vehicle system.
In the prior art, multi-sensor combined calibration is often performed with the aid of a calibration device. Most commonly, markers with specific patterns, such as checkerboards and two-dimensional codes, are used: feature points on the specific patterns are captured in the data acquired by the laser radar and by the camera respectively, and the feature points are matched one by one, so that a 6-degree-of-freedom conversion matrix between the laser radar and the camera is calculated. This calibration technology depends on a specific device and a calibration site and can only rely on offline, manually assisted calibration, so the labor cost is high and the flexibility and efficiency are low. In addition, in the prior art the feature points on a common calibration device are concentrated, so the 6-degree-of-freedom conversion matrix is seriously overfitted; or, because the calibration device is not large, only short-distance calibration can be performed, and the obtained calibration result is not suitable for many wide scenes. Moreover, the prior art mostly uses single-frame point cloud data, which is so sparse that feature points cannot be matched accurately. These factors leave the precision of the calibration result insufficient.
Therefore, there is a need for an automatic calibration method, which can perform full-automatic calibration on point cloud data acquired by a laser sensor and image data acquired by an image sensor, reduce calibration cost, and improve calibration efficiency and calibration precision.
Disclosure of Invention
The invention solves the technical problems of high labor cost, low flexibility and efficiency and insufficient calibration precision of the existing laser sensor and image sensor combined calibration technology.
In order to solve the above problem, an embodiment of the present invention provides an automatic calibration method, including the following steps: determining original point cloud data and original image data acquired for the same area scene by adopting the same timestamp; determining a point cloud depth continuous line characteristic according to the original point cloud data, and determining an image depth continuous line characteristic according to the original image data; determining a two-dimensional pixel point set of the point cloud depth continuous line characteristics according to a preset rotation matrix initial value and a preset translation vector initial value; for each two-dimensional pixel point in the two-dimensional pixel point set, searching an adjacent pixel point which is most adjacent to the two-dimensional pixel point from the image depth continuous line characteristics, and determining a normal vector of the adjacent pixel point to obtain an adjacent pixel point set and a normal vector set of the adjacent pixel point; and determining the optimal value of a rotation matrix and the optimal value of a translation vector based on the adjacent pixel point set and the normal vector set.
Optionally, determining the feature of the point cloud depth continuous line according to the original point cloud data includes: determining multi-frame point cloud data acquired for the same area scene within a preset time period before and/or after the acquisition time stamp of the original point cloud data; constructing accumulated dense point cloud data for the multiple frames of point cloud data, wherein the positive direction of the origin of the coordinate system of the accumulated dense point cloud data is consistent with the positive direction of the origin of the coordinate system of the original point cloud data; and extracting the point cloud depth continuous line features from the accumulated dense point cloud data.
Optionally, extracting the point cloud depth continuous line features from the accumulated dense point cloud data includes: down-sampling and rasterizing the accumulated dense point cloud data to obtain a plurality of grids; fitting the point cloud data in each grid by adopting a random sample consensus (RANSAC) algorithm to obtain a plurality of planes; and taking a line intersected among the plurality of planes as the point cloud depth continuous line feature.
Optionally, determining the feature of the image depth continuous line according to the original image data includes: removing distortion from the original image data and converting the original image data into a gray-scale image; and performing edge extraction on the gray level image by using a Laplacian operator to obtain the characteristics of the image depth continuous line.
Optionally, the following formula is adopted to determine a two-dimensional pixel point set of the point cloud depth continuous line feature:
p=π[(R|t)P];
wherein p represents the two-dimensional pixel point set; pi represents a projection function; r represents a rotation matrix; t represents a translation vector; p represents the point cloud depth continuous line feature.
Optionally, the search algorithm is selected from: the quadtree (Quad-tree) algorithm, the octree (Oct-tree) algorithm, and the k-nearest neighbor algorithm.
Optionally, determining the optimal value of the rotation matrix and the optimal value of the translation vector based on the neighboring pixel point set and the normal vector set includes: constructing a calibration loss function by adopting the adjacent pixel point set, the normal vector set, the initial value of the rotation matrix and the initial value of the translation vector; and minimizing the calibration loss function by adopting a preset iterative algorithm and a preset termination condition, and determining the optimal value of the rotation matrix and the optimal value of the translation vector.
Optionally, the calibration loss function is constructed by using the following formula:
f(x)=0.5*norm(transpose(n)*(π((R|t)P)-q));
wherein f(x) represents the calibration loss function; norm() represents a vector norm calculation function; transpose() represents a transpose function; n represents the normal vector set; π represents a projection function; R represents a rotation matrix; t represents a translation vector; P represents the point cloud depth continuous line feature; q represents the neighboring pixel point set.
Optionally, the iterative algorithm is selected from: the gradient descent algorithm, the Newton algorithm, the Gauss-Newton algorithm, and the Levenberg-Marquardt algorithm.
An embodiment of the present invention further provides an automatic calibration apparatus, including:
an original data acquisition module, configured to determine original point cloud data and original image data acquired with the same timestamp for the same area scene; a depth continuous line feature determining module, configured to determine point cloud depth continuous line features according to the original point cloud data and determine image depth continuous line features according to the original image data; a two-dimensional pixel point set determining module, configured to determine a two-dimensional pixel point set of the point cloud depth continuous line features according to a preset rotation matrix initial value and a preset translation vector initial value; a neighboring pixel point set determining module, configured to, for each two-dimensional pixel point in the two-dimensional pixel point set, find the neighboring pixel point nearest to the two-dimensional pixel point from the image depth continuous line features and determine the normal vector of that neighboring pixel point, so as to obtain a neighboring pixel point set and a normal vector set of the neighboring pixel points; and an optimal value calculation module, configured to determine the optimal value of the rotation matrix and the optimal value of the translation vector based on the neighboring pixel point set and the normal vector set.
The embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to perform the steps of the automatic calibration method.
The embodiment of the invention also provides a terminal, which comprises a memory and a processor, wherein the memory is stored with a computer program capable of running on the processor, and the processor executes the steps of the automatic calibration method when running the computer program.
Compared with the prior art, the technical scheme of the embodiment of the invention has the following beneficial effects:
in the embodiment of the invention, the original point cloud data and the original image data acquired with the same timestamp for the same area scene are determined; point cloud depth continuous line features and image depth continuous line features are then determined from the original point cloud data and the original image data respectively; a two-dimensional pixel point set of the point cloud depth continuous line features is determined according to a preset rotation matrix initial value and a preset translation vector initial value; a nearest neighboring pixel point set and a normal vector set of the neighboring pixel points are determined based on the two-dimensional pixel point set; and finally the optimal value of the rotation matrix and the optimal value of the translation vector are determined based on the neighboring pixel point set and the normal vector set. Compared with the existing multi-sensor combined calibration technology, which depends on a specific calibration device and a specific calibration field and therefore has high labor cost, low flexibility and efficiency and insufficient calibration precision, the embodiment of the invention matches feature points between the depth continuous line features of point cloud data and the depth continuous line features of image data acquired by different sensors, starting from the preset initial values of the rotation matrix and the translation vector, and then determines the optimal value of the rotation matrix and the optimal value of the translation vector with an iterative algorithm. The whole process is fully automatic, no additional calibration field or calibration device is needed, the labor cost of calibration is saved, and the calibration flexibility and calibration precision are improved.
Further, multi-frame point cloud data acquired for the same area scene in a preset time period before and/or after the acquisition time stamp of the original point cloud data are determined; and then constructing accumulated dense point cloud data for the multi-frame point cloud data, and extracting point cloud depth continuous line features from the accumulated dense point cloud data. The method aims to enrich the data volume of point cloud data in a multi-frame point cloud overlapping mode, solve the problem that the feature points cannot be accurately matched with the features of the image depth continuous lines due to the fact that single-frame point cloud data are too sparse, and improve calibration accuracy.
Furthermore, when the image depth continuous line features are determined according to the original image data, the original image is converted into a gray-scale image after distortion is removed, and then the image depth continuous line features are extracted from the gray-scale image, so that the extracted feature points can be more accurate, the subsequent feature point matching and operation results are more accurate, and the calibration precision is improved.
Further, the calibration loss function is minimized by constructing the calibration loss function and adopting a preset iterative algorithm and a preset termination condition to determine the optimal value of the rotation matrix and the optimal value of the translation vector, so that the effect of fully automatically completing the external reference calibration is achieved.
Drawings
FIG. 1 is a flow chart of a first automatic calibration method in an embodiment of the present invention;
FIG. 2 is a flowchart of one embodiment of step S12 of FIG. 1;
FIG. 3 is a flow chart of a second automatic calibration method in an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an automatic calibration device in an embodiment of the present invention.
Detailed Description
As described above, in the automatic driving technology of the motor vehicle, since the single sensor has respective advantages and disadvantages, it is a mainstream technology currently used in the automatic driving to fuse information sensed by a plurality of sensors. When a plurality of sensors are used in combination, different sensors need to be calibrated, for example, a point cloud coordinate point of a laser radar can be converted into an image two-dimensional coordinate system of a camera.
In the prior art, calibration is often performed by using a calibration device for calibration assistance in multi-sensor combined calibration, for example, most commonly, calibration is performed by using markers with specific patterns, such as checkerboards and two-dimensional codes, that is, feature points on the specific patterns are captured in image data acquired by a laser radar and a camera respectively, and the feature points are matched one by one, so that a 6-degree-of-freedom conversion matrix of the laser radar and the camera is calculated.
The inventor of the invention finds that the existing calibration technology depends on a specific device and a calibration site, often depends on offline manual auxiliary calibration, and has high labor cost, low flexibility and low efficiency. In addition, in the prior art, the feature points on a common calibration device are concentrated, so that the overfitting of a 6-freedom degree conversion matrix is serious; or, because the size of the calibration device is not large, only short-distance calibration can be performed, and the obtained calibration result is not suitable for a plurality of wide scenes; moreover, in the prior art, single-frame point cloud data is mostly adopted, so that the point cloud data is sparse and cannot be accurately matched with feature points, and the precision of a calibration result is insufficient due to the factors.
In the embodiment of the invention, the original point cloud data and the original image data acquired with the same timestamp for the same area scene are determined; point cloud depth continuous line features and image depth continuous line features are then determined from the original point cloud data and the original image data respectively; a two-dimensional pixel point set of the point cloud depth continuous line features is determined according to a preset rotation matrix initial value and a preset translation vector initial value; a nearest neighboring pixel point set and a normal vector set of the neighboring pixel points are determined based on the two-dimensional pixel point set; and finally the optimal value of the rotation matrix and the optimal value of the translation vector are determined based on the neighboring pixel point set and the normal vector set. Compared with the existing multi-sensor combined calibration technology, which depends on a specific calibration device and a specific calibration site and therefore has high labor cost, low flexibility and efficiency and insufficient calibration precision, the embodiment of the invention matches feature points between the depth continuous line features of point cloud data and the depth continuous line features of image data acquired by different sensors, starting from the preset initial values of the rotation matrix and the translation vector, and then determines the optimal value of the rotation matrix and the optimal value of the translation vector with an iterative algorithm. The whole process is fully automatic, no additional calibration field or calibration device is needed, the labor cost of calibration is saved, and the calibration flexibility and calibration precision are improved.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below.
Referring to fig. 1, fig. 1 is a flowchart of a first automatic calibration method according to an embodiment of the present invention. The first automatic calibration method may include steps S11 to S15:
step S11: determining original point cloud data and original image data acquired for the same area scene by adopting the same timestamp;
step S12: determining a point cloud depth continuous line characteristic according to the original point cloud data, and determining an image depth continuous line characteristic according to the original image data;
step S13: determining a two-dimensional pixel point set of the point cloud depth continuous line characteristics according to a preset rotation matrix initial value and a preset translation vector initial value;
step S14: for each two-dimensional pixel point in the two-dimensional pixel point set, searching an adjacent pixel point which is most adjacent to the two-dimensional pixel point from the image depth continuous line characteristics, and determining a normal vector of the adjacent pixel point to obtain an adjacent pixel point set and a normal vector set of the adjacent pixel point;
step S15: and determining the optimal value of a rotation matrix and the optimal value of a translation vector based on the adjacent pixel point set and the normal vector set.
In a specific implementation of step S11, the point cloud data may be used to indicate a set of sampling points with spatial coordinates in a three-dimensional coordinate system, and the point cloud data may have geometric position information or attribute information such as color; the raw point cloud data may be acquired by a laser sensor for a certain area scene at a certain time stamp, for example, may be acquired by a laser radar; or acquired from a certain acquisition module after being acquired by the laser sensor.
The image data may be used to indicate a set of gray values of each pixel expressed in numerical values, which may be represented in a two-dimensional coordinate system. The real world image is generally represented by the intensity and frequency spectrum (color) of each point of light on the image, and when converting image information into data information, the image is divided into many small regions, which are called pixels, and the gray scale of the small regions can be represented by a numerical value. By sequentially extracting the information for each pixel, a discrete array can be used to represent a continuous image. The raw image data may be acquired by an image sensor for the same area scene at the same time stamp, for example, the raw image data may be acquired by an image sensor such as a camera, etc.; or acquired from a certain acquisition module after being acquired by the image sensor.
It can be understood that the reason for determining the raw point cloud data and the raw image data acquired with the same timestamp is as follows. In the field of unmanned driving of motor vehicles, sensors such as laser radars and cameras are often placed at the same position directly in front of the host vehicle. As the host vehicle moves, the specific area scene sensed by each sensor changes over time, and the data acquired by the same sensor at different times differ. By keeping the acquisition timestamps consistent and keeping the targeted area scene consistent during acquisition, the acquired original point cloud data and original image data are guaranteed to reflect, or point to, the same object in the real world, which in turn guarantees the accuracy of the subsequent calibration results.
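In a concrete implementation, the two sensor streams rarely carry bit-identical timestamps, so "the same timestamp" is typically realized by pairing each point cloud with the image whose stamp is closest within a small tolerance. The following minimal Python sketch illustrates such pairing; the function name and the 5 ms tolerance are illustrative assumptions, not part of this disclosure:

```python
import bisect

def pair_by_timestamp(cloud_stamps, image_stamps, tol=0.005):
    """Pair each point-cloud stamp with the nearest image stamp (sketch).

    Both inputs are sorted lists of times in seconds. Pairs whose gap
    exceeds `tol` are dropped, so every retained pair observes the same
    area scene at effectively the same moment.
    """
    pairs = []
    for ts in cloud_stamps:
        i = bisect.bisect_left(image_stamps, ts)
        candidates = image_stamps[max(i - 1, 0):i + 1]  # stamps on either side
        if not candidates:
            continue
        best = min(candidates, key=lambda s: abs(s - ts))
        if abs(best - ts) <= tol:
            pairs.append((ts, best))
    return pairs
```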
It should be noted that, in a specific implementation, before the raw point cloud data and the raw image data are determined, the calibration of the laser sensor itself and the internal reference (intrinsic) calibration of the image sensor should already be completed.
In a specific implementation of step S12, the depth continuous line feature may be an edge feature point set, the point cloud depth continuous line feature may be an edge feature point set extracted from the point cloud data, and the image depth continuous line feature may be an edge feature point set extracted from the image data.
Referring to fig. 2, fig. 2 is a flowchart of an embodiment of step S12 in fig. 1, and determining the point cloud depth continuous line feature according to the raw point cloud data may include steps S121 to S125, which are described below.
In step S121, multi-frame point cloud data acquired for the same area scene within a preset time period before and/or after the acquisition time stamp of the original point cloud data is determined.
In step S122, cumulative dense point cloud data is constructed for the plurality of frames of point cloud data.
It can be understood that, in a specific implementation, because the laser scanning beam may be shielded by an object, a three-dimensional point cloud of the whole object may not be acquired in one scan; in addition, one frame of point cloud data may be too sparse for the features of a real object to be acquired accurately from it alone. The object therefore often needs to be scanned from different positions and angles to acquire multiple frames of point cloud data. For example, multiple lidars, or one lidar with a high beam count, may be used at the same time, that is, a 360-degree lidar detection field (denoted a lidar system) may be used to collect multiple frames of point cloud data for the same object; the multiple frames are then fused to construct accumulated dense point cloud data, thereby enhancing the point cloud data.
Specifically, the accumulated dense point cloud data may be constructed from the multiple frames of point cloud data in various ways. In some non-limiting examples, point clouds captured at different positions can be aligned to the same coordinate system using the information in their overlapping parts, by methods such as point cloud splicing, registration and merging, so that the positive direction of the origin of the coordinate system of the accumulated dense point cloud data is consistent with that of the original point cloud data.
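As a non-limiting illustration of this accumulation step, the sketch below merges several frames into the coordinate system of the original frame, assuming the per-frame sensor poses are already known (for example from vehicle odometry); registration methods such as ICP could equally supply these transforms. All names here are illustrative:

```python
import numpy as np

def accumulate_clouds(frames, poses, ref_pose):
    """Fuse multiple point-cloud frames into one dense cloud (sketch).

    `frames` is a list of (N_i, 3) arrays, `poses` the corresponding
    4x4 world-from-sensor transforms, and `ref_pose` the transform of
    the original (reference) frame, so the merged cloud keeps the
    reference frame's origin and axis directions.
    """
    world_to_ref = np.linalg.inv(ref_pose)
    merged = []
    for pts, pose in zip(frames, poses):
        homo = np.hstack([pts, np.ones((len(pts), 1))])      # homogeneous (N, 4)
        merged.append((world_to_ref @ pose @ homo.T).T[:, :3])
    return np.vstack(merged)
```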
In the embodiment of the invention, accumulated dense point cloud data is constructed by adopting multi-frame point cloud data, and then point cloud depth continuous line features are extracted from the accumulated dense point cloud data. The method aims to enhance and enrich the data volume of point cloud data in a multi-frame point cloud superposition mode, solves the problems that the characteristics of a real object cannot be reflected comprehensively due to the fact that single-frame point cloud data are too sparse, and the characteristic points cannot be matched with the characteristics of the image depth continuous line accurately, and can improve calibration accuracy.
In step S123, the accumulated dense point cloud data is down-sampled and rasterized to obtain a plurality of grids.
Down-sampling may mean sampling (extracting) the accumulated dense point cloud data once every several points, so that the resulting new data sequence is a down-sampled version of the original sequence; it can be understood that the aim of down-sampling is to extract a representative subset of the accumulated dense point cloud data. Rasterization processes the accumulated dense point cloud data with a grid to obtain a plurality of grid cells, where the point cloud data in each cell represents a small region of space. The purpose of rasterizing the point cloud data is to homogenize it.
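A minimal sketch of these two operations follows; the cell size and decimation factor are illustrative assumptions (a practical implementation might instead use a voxel-grid filter from a point cloud library):

```python
import numpy as np

def rasterize(points, cell=0.5, keep_every=4):
    """Down-sample, then rasterize, an accumulated dense cloud (sketch).

    Keeping every `keep_every`-th point thins the cloud; flooring the
    survivors' coordinates onto a `cell`-metre grid then groups them
    into per-cell buckets, each of which can be plane-fitted
    independently in the next step.
    """
    thinned = points[::keep_every]                              # down-sampling
    keys = map(tuple, np.floor(thinned / cell).astype(np.int64))
    grids = {}
    for key, pt in zip(keys, thinned):
        grids.setdefault(key, []).append(pt)                    # rasterization
    return {k: np.asarray(v) for k, v in grids.items()}
```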
In step S124, for the point cloud data in each grid, fitting is performed by using a random sample consensus algorithm RANSAC to obtain a plurality of planes.
In step S125, a line intersecting between the plurality of planes is taken as the point cloud depth continuous line feature.
Random Sample Consensus (RANSAC) is an iterative algorithm that estimates the parameters of a mathematical model from a set of observed data containing outliers. The basic assumption of RANSAC is that the data consist of inliers, i.e. data whose distribution can be explained by some set of model parameters, and outliers, i.e. data that cannot be fitted by the model. A simple example is finding a suitable straight line in the plane from a set of observations: assume the observations include inliers, which lie approximately along a line, and outliers, which lie far from it. The input to RANSAC is a set of observations, a parameterized model that can explain or be fitted to the observations, and some confidence parameters. RANSAC proceeds by iteratively selecting a random subset of the data and assuming the selected subset consists of inliers. This process is repeated a fixed number of times; each candidate model is either discarded because it has too few inliers, or kept because it is better than the existing models.
Fitting, also called curve fitting, is a way of representing existing data by a mathematical expression. Scientific and engineering problems often yield a number of discrete data points through sampling, experiments and the like, from which one wants a continuous function (that is, a curve), or a denser discrete equation, that agrees with the known data; this process is called fitting.
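To make steps S124 and S125 concrete, the sketch below fits a plane to one grid cell by RANSAC and intersects two fitted planes to obtain a depth continuous line (one point plus a direction). The iteration count and inlier threshold are illustrative assumptions:

```python
import numpy as np

def ransac_plane(pts, iters=100, thresh=0.05, seed=0):
    """Fit a plane n.x + d = 0 to `pts` (an (N, 3) array) by RANSAC."""
    rng = np.random.default_rng(seed)
    best = (None, None, 0)
    for _ in range(iters):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        if np.linalg.norm(n) < 1e-9:
            continue                                   # collinear sample
        n /= np.linalg.norm(n)
        d = -n @ sample[0]
        inliers = int(np.sum(np.abs(pts @ n + d) < thresh))
        if inliers > best[2]:
            best = (n, d, inliers)
    return best[0], best[1]

def plane_intersection(n1, d1, n2, d2):
    """Line where two non-parallel planes meet: a point and a direction."""
    direction = np.cross(n1, n2)
    # Solve n1.x = -d1, n2.x = -d2, direction.x = 0 for a point on the line.
    A = np.stack([n1, n2, direction])
    point = np.linalg.solve(A, np.array([-d1, -d2, 0.0]))
    return point, direction / np.linalg.norm(direction)
```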
Specifically, the specific steps for determining the features of the point cloud depth continuous lines according to the original point cloud data may be performed with reference to the foregoing, and are not described herein again.
Further, in a specific implementation, determining image depth continuous line features from the raw image data includes: removing distortion from the original image data and converting the original image data into a gray-scale image; and performing edge extraction on the gray level image by using a Laplacian (Laplace) operator to obtain the image depth continuous line characteristics.
Image distortion may refer to the discrepancy between an image and the real scene it reflects, caused by factors such as lens manufacturing accuracy and assembly process deviation. The errors are of two kinds, in spectral characteristics and in geometric characteristics, that is, radiation errors and geometric errors: the former appears as distortion of the image's gray scale, the latter as a deformation of geometric relationships. In this embodiment, the raw image data collected by the image sensor is corrected for radiation and geometry according to the distortion parameters, which is what removing distortion means.
A gray-scale map is the counterpart of a color image and is also called a gray-level map. The range between white and black is divided, according to a logarithmic relationship, into several levels called gray levels (commonly 256 levels, from 0 to 255), and an image represented by gray levels is called a gray-scale map.
The Laplacian is a second-order differential operator in n-dimensional Euclidean space, defined as the divergence of the gradient, Δf = div(grad f). The Laplacian is a classical image edge enhancement operator and can detect image edge features by sharpening an image. Image sharpening enhances the gray contrast so that a blurred image becomes clearer. The essence of image blurring is that the image has undergone an averaging or integrating operation, so the inverse operation, such as differentiation, can highlight image details and make the image clearer. Since the Laplacian is a differential operator, applying it enhances regions of abrupt gray-level change in the image and weakens regions where the gray level changes slowly. Therefore, in the sharpening process, the Laplacian operator can be applied to the original image to produce an image describing abrupt gray changes, and this Laplacian image is then superposed on the original image to produce a sharpened image.
Edge extraction may refer to the processing of picture outlines in digital image processing. An edge may be defined as a boundary point of the image where the gray level changes sharply; an inflection point is a point where the function changes between concave and convex, i.e. where the second derivative is zero. The basic idea of edge extraction is to first highlight local edges in the image with an edge enhancement operator, such as the Laplacian described above, then define an "edge strength" for each pixel, and extract the edge point set by setting a threshold. Edge extraction of an image includes two basic steps: (1) extracting an edge point set that reflects the gray-level changes with an edge operator; (2) removing certain boundary points from, or filling boundary discontinuities in, the edge point set, and connecting the edges into complete lines.
In the embodiment of the invention, the original image is converted into the gray image after distortion is removed, and then the edge of the gray image is extracted by adopting the Laplacian operator to obtain the characteristics of the image depth continuous line, so that the extracted characteristic points can be more accurate, the subsequent characteristic point matching and operation results are more accurate, and the calibration precision is improved.
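As an illustration of this image-side pipeline, here is a short sketch using OpenCV; the edge threshold is an illustrative value, we assume the intrinsics K and distortion coefficients come from the camera's internal reference calibration, and the disclosure does not prescribe a particular library:

```python
import cv2
import numpy as np

def image_depth_lines(raw_bgr, K, dist_coeffs, edge_thresh=30):
    """Extract edge pixels from a raw camera image (sketch).

    Follows the order described above: undistort, convert to a
    gray-scale map, sharpen with the Laplacian, then keep pixels whose
    absolute response exceeds `edge_thresh`. Returns an (M, 2) array
    of (u, v) edge-pixel coordinates.
    """
    undistorted = cv2.undistort(raw_bgr, K, dist_coeffs)
    gray = cv2.cvtColor(undistorted, cv2.COLOR_BGR2GRAY)
    response = cv2.Laplacian(gray, cv2.CV_64F, ksize=3)
    v, u = np.nonzero(np.abs(response) > edge_thresh)
    return np.stack([u, v], axis=1)
```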
With reference to fig. 1, in step S13, a two-dimensional pixel point set of the point cloud depth continuous line feature is determined according to a preset rotation matrix initial value and a preset translation vector initial value.
The rotation matrix may refer to a matrix that, when multiplied with a vector, changes the direction of the vector but not its magnitude; in a three-dimensional coordinate system, the rotation is performed around an axis. The translation vector is the vector of a translation transformation. The rotation matrix may be used to represent the rotational relationship between different data sets, and the translation vector may be used to represent the translation relationship between them.
Further, determining a two-dimensional pixel point set of the point cloud depth continuous line features by adopting the following formula:
p=π[(R|t)P];
wherein p represents the two-dimensional pixel point set; pi represents a projection function; r represents a rotation matrix; t represents a translation vector; p represents the point cloud depth continuous line feature.
The projection function may be a mapping function that converts a point in a three-dimensional space coordinate system into a point in a two-dimensional coordinate system. In a specific implementation of the embodiment of the invention, according to the preset initial value of the rotation matrix and the preset initial value of the translation vector, the projection function π can be used to determine the two-dimensional pixel point set p corresponding to the point cloud depth continuous line feature P.
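A minimal sketch of the projection p = π[(R|t)P] under a pinhole camera model follows; the pinhole form of π and the intrinsic matrix K are assumptions for illustration, since the disclosure only requires some projection function:

```python
import numpy as np

def project(P, R, t, K):
    """Project 3-D feature points into the image plane: p = pi[(R|t)P].

    P is an (N, 3) array of lidar-frame points on the depth continuous
    lines, (R, t) the extrinsics being calibrated, and K the 3x3 camera
    intrinsic matrix from the prior internal reference calibration.
    """
    cam = (R @ P.T).T + t                 # (R|t)P: lidar frame -> camera frame
    cam = cam[cam[:, 2] > 1e-6]           # keep points in front of the camera
    uvw = (K @ cam.T).T
    return uvw[:, :2] / uvw[:, 2:3]       # perspective divide: pi(.)
```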
In the specific implementation of step S14, for each two-dimensional pixel point in the two-dimensional pixel point set, a neighboring pixel point that is most adjacent to the two-dimensional pixel point is found from the image depth continuous line feature, and a normal vector of the neighboring pixel point is determined, so as to obtain a neighboring pixel point set and a normal vector set of the neighboring pixel point.
Further, in some non-limiting embodiments, the search algorithm for finding the neighboring pixel point nearest to the two-dimensional pixel point in the image depth continuous line features may be selected from: the quadtree (Quad-tree) algorithm, the octree (Oct-tree) algorithm, and the k-nearest neighbor algorithm. It may also be another search algorithm capable of finding neighboring points; the embodiment of the present invention does not limit the search algorithm used.
Searching for the nearest neighboring pixel point may mean searching a given set of two-dimensional image pixels for the one or more pixels closest to the target pixel coordinates. A quadtree is a tree data structure that divides a 2D space into equal parts and manages the objects in that space; each node has at most four subtrees. In two-dimensional space, a planar region can be divided into four parts repeatedly, the depth of the tree being determined by the complexity of the picture, the computer memory, and the graphics. The quadtree algorithm performs a matching search by repeatedly dividing the records to be searched into four parts until only one record remains.
An octree is a tree data structure for describing three-dimensional space. Each node of the octree represents a cubic volume element and has eight child nodes, and the volume elements represented by the eight child nodes add up to the volume of the parent node. The basic procedure of the octree algorithm is as follows: step (1), set the maximum recursion depth; step (2), find the maximum size of the scene and build the first cube with this size; step (3), drop each unit element into the cube that can contain it and has no child nodes; step (4), if the maximum recursion depth has not been reached, subdivide the cube into eight equal parts and distribute all unit elements contained in the cube among the eight sub-cubes; step (5), if the number of unit elements assigned to a sub-cube is not zero and is the same as that of the parent cube, stop subdividing that sub-cube, since by the space division theory a subdivision must assign fewer elements, and if the counts remained the same, subdivision would recur endlessly and cause infinite cutting; step (6), repeat steps (3) to (5) until the maximum recursion depth is reached.
The k-nearest neighbor algorithm, given a training data set, finds the k instances (the k neighbors) nearest to a new input instance in the training data set, and assigns the input instance to the class to which the majority of these k instances belong. Here k is a positive integer greater than or equal to 1, and its specific value may be set according to the requirements of different application scenarios; for example, k may be 5, or k may be 10. The algorithm is better suited to the automatic classification of class domains with a large sample size; class domains with a small sample size are more prone to misclassification under this algorithm.
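As one concrete realization of this search step, the sketch below uses a k-d tree from SciPy (standing in for the quadtree, octree or k-nearest neighbor options above) to find each projected point's nearest edge pixel, and estimates that pixel's 2-D normal from its k nearest edge neighbors by a small eigen-decomposition; the neighborhood size is an illustrative assumption:

```python
import numpy as np
from scipy.spatial import cKDTree

def match_edges(projected, edge_pixels, k=8):
    """Nearest edge pixel, and its normal, for each projected point.

    The edge direction at a matched pixel is the dominant eigenvector
    of the scatter of its `k` nearest edge neighbors; the normal is the
    orthogonal (smallest-eigenvalue) eigenvector.
    """
    edge_pixels = np.asarray(edge_pixels, dtype=float)
    tree = cKDTree(edge_pixels)
    _, nearest = tree.query(projected, k=1)       # one neighbor per point
    q = edge_pixels[nearest]
    normals = []
    for pix in q:
        _, nbr_idx = tree.query(pix, k=k)
        nbrs = edge_pixels[nbr_idx] - pix         # centered neighborhood
        _, vecs = np.linalg.eigh(nbrs.T @ nbrs)   # 2x2 scatter matrix
        normals.append(vecs[:, 0])                # smallest-eigenvalue axis
    return q, np.asarray(normals)
```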
In specific implementation, for each two-dimensional pixel point in the two-dimensional pixel point set, a process of searching a nearest neighboring pixel point in the image depth continuous line feature by using a search algorithm to obtain a corresponding neighboring pixel point set and a normal vector set of the neighboring pixel point is a process of matching a feature point pair. It can be understood that the more accurate the feature point matching is, the more accurate the values of the rotation matrix and the translation vector calculated by the iterative algorithm in the subsequent steps are, the more stable and good judgment results can be provided for the later information fusion perception, so that a safer guarantee is provided for the whole unmanned vehicle system.
In a specific implementation of step S15, determining the optimal value of the rotation matrix and the optimal value of the translation vector based on the neighboring pixel point set and the normal vector set includes: constructing a calibration loss function by adopting the adjacent pixel point set, the normal vector set, the initial value of the rotation matrix and the initial value of the translation vector; and minimizing the calibration loss function by adopting a preset iterative algorithm and a preset termination condition, and determining the optimal value of the rotation matrix and the optimal value of the translation vector.
Further, the calibration loss function is constructed using the following formula:
f(x)=0.5*norm(transpose(n)*(π((R|t)P)-q));
wherein f(x) represents the calibration loss function; norm() represents a vector norm calculation function; transpose() represents a transpose function; n represents the normal vector set; π represents a projection function; R represents a rotation matrix; t represents a translation vector; P represents the point cloud depth continuous line feature; q represents the neighboring pixel point set. Further, in some non-limiting embodiments, the iterative algorithm may be selected from: the gradient descent algorithm, the Newton algorithm, the Gauss-Newton algorithm, and the Levenberg-Marquardt algorithm. In other non-limiting embodiments, the iterative algorithm may also be another algorithm capable of iteratively calculating the optimal value of a variable.
The preset termination condition may be a preset maximum iteration number, or a preset minimum function value of the calibration loss function.
Iterative algorithms such as the gradient descent algorithm, the Newton algorithm (Newton-Raphson method), the Gauss-Newton algorithm and the Levenberg-Marquardt algorithm recur new values of the variables from their old values, in contrast to the direct method (the one-shot solution method), which solves a problem in a single step. The iterative algorithm is a basic approach to solving problems with a computer. It exploits the computer's high operation speed and aptitude for repetitive operations by making the computer repeatedly execute a group of instructions (or steps); each time the group of instructions (or steps) is executed, a new value of the variable is derived from its original value.
Taking the gradient descent algorithm as an example, the iterative computation solves for a minimum along the direction of gradient descent (or for a maximum along the direction of gradient ascent). In general, a zero gradient vector indicates that an extreme point has been reached, where the magnitude of the gradient is 0. Therefore, in a specific implementation, if the gradient descent algorithm is used to solve for the optimized values of the rotation matrix and the translation vector, the termination condition of the iteration is set to be that the magnitude of the gradient vector approaches 0, or that an appropriate number of iterations has been reached.
In a specific implementation, the basic principle of solving for the optimal value of a variable with the Newton algorithm, the Gauss-Newton algorithm or the Levenberg-Marquardt algorithm is similar to the above: successive iterations search for a point at which the derivative approaches 0 (where the function converges), and the value of the rotation matrix and the value of the translation vector at that point are taken as the optimal value of the rotation matrix and the optimal value of the translation vector. The specific solving process of each algorithm is not expanded here.
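As a non-limiting sketch of this optimization step, the code below minimizes residuals of the form transpose(n)*(π((R|t)P)-q) over a 6-degree-of-freedom state with SciPy's Levenberg-Marquardt solver; the solver's built-in convergence tolerances stand in for the preset termination condition, and the rotation-vector parameterization is our own choice of illustration:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def refine_extrinsics(P, q, n, K, R0, t0):
    """Minimize the calibration loss with Levenberg-Marquardt (sketch).

    P: (N, 3) point cloud depth continuous line features;
    q: (N, 2) matched neighboring edge pixels;
    n: (N, 2) their normal vectors; K: camera intrinsics;
    (R0, t0): preset initial rotation matrix and translation vector.
    """
    def residuals(x):
        R = Rotation.from_rotvec(x[:3]).as_matrix()
        cam = (R @ P.T).T + x[3:]                 # (R|t)P
        uvw = (K @ cam.T).T
        p = uvw[:, :2] / uvw[:, 2:3]              # pi((R|t)P)
        return np.sum(n * (p - q), axis=1)        # n^T (p - q) per match

    x0 = np.hstack([Rotation.from_matrix(R0).as_rotvec(), t0])
    sol = least_squares(residuals, x0, method="lm")   # needs N >= 6 matches
    return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]
```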
In the embodiment of the invention, the calibration loss function is constructed, and the preset iterative algorithm and the preset termination condition are adopted to minimize the calibration loss function so as to determine the optimal value of the rotation matrix and the optimal value of the translation vector, thereby realizing the effect of fully automatically finishing the external reference calibration.
Referring to fig. 3, fig. 3 is a flowchart of a second automatic calibration method in the embodiment of the present invention. The second automatic calibration method may include steps S31 to S36, which are described below.
In step S31, the original point cloud data and the original image data acquired for the same area scene with the same time stamp are determined.
In step S32, a point cloud depth continuous line feature is determined from the raw point cloud data, and an image depth continuous line feature is determined from the raw image data.
In step S33, according to a preset initial value of the rotation matrix and a preset initial value of the translation vector, a projection function is used to determine the two-dimensional pixel point set of the point cloud depth continuous line features.
The formula adopted in step S33 is as follows:
p=π[(R|t)P];
wherein p represents the two-dimensional pixel point set; pi represents a projection function; r represents a rotation matrix; t represents a translation vector; p represents the point cloud depth continuous line feature.
In step S34, an octree (Oct-tree) algorithm is adopted: for each two-dimensional pixel point in the two-dimensional pixel point set, the neighboring pixel point nearest to the two-dimensional pixel point is found from the image depth continuous line features, and the normal vector of that neighboring pixel point is determined, so as to obtain a neighboring pixel point set and a normal vector set of the neighboring pixel points.
In step S35, a calibration loss function is constructed based on the set of neighboring pixels, the set of normal vectors, the initial values of the rotation matrix and the initial values of the translation vectors.
The formula for constructing the calibration loss function is as follows:
f(x)=0.5*norm(transpose(n)*(π((R|t)P)-q));
wherein f(x) represents the calibration loss function; norm() represents a vector norm calculation function; transpose() represents a transpose function; n represents the normal vector set; π represents a projection function; R represents a rotation matrix; t represents a translation vector; P represents the point cloud depth continuous line feature; q represents the neighboring pixel point set.
In step S36, the optimal value of the rotation matrix and the optimal value of the translation vector are determined by minimizing the calibration loss function using the levenberg-marquardt algorithm and a preset termination condition.
In the specific implementation, please refer to the above description for further details regarding steps S31 to S36, which are not described herein again.
Referring to fig. 4, fig. 4 is a schematic structural diagram of an automatic calibration device in an embodiment of the present invention. The automatic calibration device may include:
an original data acquisition module 41, configured to determine original point cloud data and original image data acquired for a scene in the same area by using the same timestamp;
a depth continuous line feature determining module 42, configured to determine a point cloud depth continuous line feature according to the original point cloud data, and determine an image depth continuous line feature according to the original image data;
a two-dimensional pixel point set determining module 43, configured to determine a two-dimensional pixel point set of the point cloud depth continuous line feature according to a preset rotation matrix initial value and a preset translation vector initial value;
a neighboring pixel point set determining module 44, configured to, for each two-dimensional pixel point in the two-dimensional pixel point set, find a neighboring pixel point that is closest to the two-dimensional pixel point from the image depth continuous line feature, and determine a normal vector of the neighboring pixel point, so as to obtain a neighboring pixel point set and a normal vector set of the neighboring pixel point;
and an optimal value calculation module 45, configured to determine an optimal value of the rotation matrix and an optimal value of the translation vector based on the neighboring pixel point set and the normal vector set.
For the principle, specific implementation and beneficial effects of the automatic calibration device, please refer to the description related to the automatic calibration method shown in the foregoing and fig. 1 to 3, which is not repeated herein.
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored, where the computer program, when run by a processor, performs the steps of the automatic calibration method. The computer-readable storage medium may include a non-volatile or non-transitory memory, and may also include an optical disc, a mechanical hard disk, a solid state drive, and the like.
Specifically, in the embodiment of the present invention, the processor may be a Central Processing Unit (CPU), and the processor may also be other general-purpose processors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, and the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
It will also be appreciated that the memory in the embodiments of the present application can be either volatile memory or nonvolatile memory, or can include both. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory can be a random access memory (RAM), which acts as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct Rambus RAM (DR RAM).
The embodiment of the invention also provides a terminal, which comprises a memory and a processor, wherein the memory is stored with a computer program capable of running on the processor, and the processor executes the steps of the automatic calibration method when running the computer program. The terminal can include but is not limited to a mobile phone, a computer, a tablet computer and other terminal devices, and can also be a server, a cloud platform and the like.
It should be understood that the term "and/or" herein is merely one type of association relationship that describes an associated object, meaning that three relationships may exist, e.g., a and/or B may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" in this document indicates that the former and latter related objects are in an "or" relationship.
The "plurality" appearing in the embodiments of the present application means two or more.
The descriptions of the first, second, etc. appearing in the embodiments of the present application are only for illustrating and differentiating the objects, and do not represent the order or the particular limitation of the number of the devices in the embodiments of the present application, and do not constitute any limitation to the embodiments of the present application.
It should be noted that, the sequence numbers of the steps in this embodiment do not represent a limitation on the execution order of the steps.
Although the present invention is disclosed above, the present invention is not limited thereto. Various changes and modifications may be effected therein by one skilled in the art without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (12)

1. An automatic calibration method, comprising:
determining original point cloud data and original image data acquired for the same area scene by adopting the same timestamp;
determining a point cloud depth continuous line characteristic according to the original point cloud data, and determining an image depth continuous line characteristic according to the original image data;
determining a two-dimensional pixel point set of the point cloud depth continuous line characteristics according to a preset rotation matrix initial value and a preset translation vector initial value;
for each two-dimensional pixel point in the two-dimensional pixel point set, searching an adjacent pixel point which is most adjacent to the two-dimensional pixel point from the image depth continuous line characteristics, and determining a normal vector of the adjacent pixel point to obtain an adjacent pixel point set and a normal vector set of the adjacent pixel point;
and determining the optimal value of a rotation matrix and the optimal value of a translation vector based on the adjacent pixel point set and the normal vector set.
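By way of example and not limitation, the overall loop of claim 1 can be exercised end to end on synthetic data: project lidar line points with a candidate extrinsic, match each projection to its nearest image edge pixel, and refine the extrinsic by least squares. The Python sketch below is illustrative only and is not part of the claims; it simplifies the residual to a point-to-point distance (the claims use the normal-weighted point-to-line form), and the intrinsic matrix K, the synthetic lines, and the ground-truth extrinsic are all made up for the example.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial import cKDTree
from scipy.spatial.transform import Rotation

K = np.array([[800.0, 0.0, 640.0], [0.0, 800.0, 360.0], [0.0, 0.0, 1.0]])

def project(P, R, t):
    """pi[(R|t)P]: rigid transform into the camera frame, then pinhole projection."""
    Pc = P @ R.T + t
    uv = Pc @ K.T
    return uv[:, :2] / uv[:, 2:3]

# Three synthetic depth-continuous lines in lidar coordinates, with different
# directions and depths so all six extrinsic degrees of freedom are observable.
s = np.linspace(-1.0, 1.0, 60)
z = np.zeros_like(s)
P = np.vstack([np.column_stack([s, z, z + 5.0]),
               np.column_stack([z, s, z + 6.0]),
               np.column_stack([s, s, 4.0 + 0.5 * s])])

R_true = Rotation.from_euler("xyz", [0.02, -0.01, 0.03]).as_matrix()
t_true = np.array([0.10, -0.05, 0.20])
image_edges = project(P, R_true, t_true)   # stands in for detected image edges
tree = cKDTree(image_edges)

def residuals(x):
    # Re-matching inside the residual compresses an ICP-style loop into one solve.
    R = Rotation.from_rotvec(x[:3]).as_matrix()
    p = project(P, R, x[3:])
    _, idx = tree.query(p)                 # nearest image edge pixel per point
    return (p - image_edges[idx]).ravel()

sol = least_squares(residuals, np.zeros(6), method="lm")  # start at R = I, t = 0
print(Rotation.from_rotvec(sol.x[:3]).as_matrix())        # close to R_true
print(sol.x[3:])                                          # close to t_true
```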
2. The method of claim 1, wherein determining the point cloud depth continuous line features from the original point cloud data comprises:
determining multiple frames of point cloud data acquired for the same area scene within a preset time period before and/or after the acquisition timestamp of the original point cloud data;
constructing accumulated dense point cloud data from the multiple frames of point cloud data, wherein the origin and the positive directions of the coordinate system of the accumulated dense point cloud data are consistent with those of the coordinate system of the original point cloud data;
and extracting the point cloud depth continuous line features from the accumulated dense point cloud data.
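By way of example and not limitation, a minimal sketch of the accumulation step follows, assuming per-frame poses in the reference frame are available (for example from lidar odometry; that source is an assumption, not stated in the patent). Each neighboring scan is transformed into the original scan's coordinate system and stacked, which keeps the accumulated cloud's origin and axes consistent with the original point cloud data.

```python
import numpy as np

def accumulate(clouds, poses):
    """Stack multiple scans into one dense cloud in the reference frame.

    clouds: list of (Ni, 3) point arrays, one per frame
    poses:  list of 4x4 transforms mapping each frame into the reference
            (original-scan) frame, e.g. from lidar odometry
    """
    dense = []
    for pts, T in zip(clouds, poses):
        homo = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
        dense.append((homo @ T.T)[:, :3])                # apply the pose
    return np.vstack(dense)
```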
3. The method of claim 2, wherein extracting the point cloud depth continuous line features from the accumulated dense point cloud data comprises:
down-sampling and rasterizing the accumulated dense point cloud data to obtain a plurality of grids;
fitting the point cloud data in each grid using a random sample consensus (RANSAC) algorithm to obtain a plurality of planes;
and taking the lines of intersection between the plurality of planes as the point cloud depth continuous line features.
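By way of example and not limitation, a numpy-only sketch of the per-grid plane fitting and of intersecting two fitted planes into a line is given below; the RANSAC loop, the iteration count, and the inlier threshold are illustrative stand-ins, not the patent's implementation.

```python
import numpy as np

def ransac_plane(points, n_iters=200, tol=0.02, rng=None):
    """Fit a plane n.x + d = 0 to the points of one grid cell via basic RANSAC."""
    rng = rng or np.random.default_rng(0)
    best_model, best_inliers = None, None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:                       # degenerate (collinear) sample
            continue
        n /= norm
        d = -n @ sample[0]
        inliers = np.abs(points @ n + d) < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_model, best_inliers = (n, d), inliers
    return best_model, best_inliers

def plane_intersection_line(n1, d1, n2, d2):
    """Line where two non-parallel planes meet: a point on it plus a direction."""
    direction = np.cross(n1, n2)
    # Pin down one point satisfying both plane equations; the third row selects
    # the point on the line closest to the origin.
    A = np.vstack([n1, n2, direction])
    b = np.array([-d1, -d2, 0.0])
    point = np.linalg.solve(A, b)
    return point, direction / np.linalg.norm(direction)
```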
4. The method of claim 1, wherein determining the image depth continuous line features from the original image data comprises:
removing distortion from the original image data and converting the result into a grayscale image;
and performing edge extraction on the grayscale image using a Laplacian operator to obtain the image depth continuous line features.
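By way of example and not limitation, with OpenCV the undistort / grayscale / Laplacian chain of this claim might look as follows; the intrinsics, distortion coefficients, file name, pre-blur, and edge threshold are placeholder assumptions rather than values from the patent.

```python
import cv2
import numpy as np

# Placeholder intrinsics and distortion coefficients; in practice these come
# from the camera's intrinsic calibration.
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.1, 0.01, 0.0, 0.0, 0.0])     # k1, k2, p1, p2, k3

img = cv2.imread("frame.png")                    # raw camera frame (path assumed)
undistorted = cv2.undistort(img, K, dist)        # remove lens distortion
gray = cv2.cvtColor(undistorted, cv2.COLOR_BGR2GRAY)

# Laplacian (second-derivative) response; a light blur first suppresses noise.
gray = cv2.GaussianBlur(gray, (3, 3), 0)
edges = np.abs(cv2.Laplacian(gray, cv2.CV_64F, ksize=3))

# Threshold the response into edge pixel coordinates, returned as (row, col).
edge_pixels = np.column_stack(np.nonzero(edges > edges.mean() + 3 * edges.std()))
```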
5. The method of claim 1, wherein the two-dimensional pixel point set of the point cloud depth continuous line features is determined using the following formula:
p = π[(R|t)P];
wherein p represents the two-dimensional pixel point set; π represents a projection function; R represents the initial value of the rotation matrix; t represents the initial value of the translation vector; and P represents the point cloud depth continuous line features.
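In code, this formula is a rigid transform followed by a pinhole projection. The numpy sketch below is illustrative only; it assumes a standard intrinsic matrix K, which the claim folds into π, and that all points lie in front of the camera.

```python
import numpy as np

def project_points(P, R, t, K):
    """p = pi[(R|t)P] for a pinhole camera.

    P: (N, 3) lidar points on the point cloud depth continuous lines
    R: (3, 3) rotation matrix initial value
    t: (3,)   translation vector initial value
    K: (3, 3) camera intrinsic matrix (assumed; the claim folds it into pi)
    Returns the (N, 2) two-dimensional pixel point set.
    """
    Pc = P @ R.T + t               # (R|t)P: lidar frame -> camera frame
    uv = Pc @ K.T                  # homogeneous pixel coordinates
    return uv[:, :2] / uv[:, 2:3]  # perspective divide: the projection pi
```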
6. The method of claim 1, wherein the search algorithm for finding the adjacent pixel point nearest to the two-dimensional pixel point from the image depth continuous line features is selected from:
a quadtree (Quad-tree) algorithm, an octree (Oct-tree) algorithm, and a k-nearest-neighbor (KNN) algorithm.
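By way of example and not limitation, the sketch below substitutes scipy's k-d tree for the listed quadtree/octree structures to make the 2-D nearest-neighbor step concrete; the stand-in data shapes, the 20-pixel gate, and the PCA-based normal estimate are assumptions for illustration (the patent does not specify how the normal vectors are computed).

```python
import numpy as np
from scipy.spatial import cKDTree

# Stand-in data: edge_pixels are image depth continuous line pixels (M, 2);
# projected are the projected lidar edge points from claim 5 (N, 2).
rng = np.random.default_rng(0)
edge_pixels = rng.random((500, 2)) * [1280.0, 720.0]
projected = rng.random((200, 2)) * [1280.0, 720.0]

tree = cKDTree(edge_pixels)                 # spatial index over image edges
dists, idx = tree.query(projected, k=1)     # nearest image edge pixel each
neighbors = edge_pixels[idx]
valid = dists < 20.0                        # gate out implausible matches

def edge_normal(p, k=8):
    """PCA normal of the local edge: the eigenvector with the smaller
    eigenvalue of the neighborhood scatter is perpendicular to the edge."""
    _, nn = tree.query(p, k=k)
    local = edge_pixels[nn] - edge_pixels[nn].mean(axis=0)
    _, vecs = np.linalg.eigh(local.T @ local)
    return vecs[:, 0]                       # smallest-eigenvalue direction

normals = np.array([edge_normal(p) for p in neighbors])
```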
7. The method of claim 1, wherein determining an optimal value of a rotation matrix and an optimal value of a translation vector based on the set of neighboring pixels and the set of normal vectors comprises:
constructing a calibration loss function using the adjacent pixel point set, the normal vector set, the initial value of the rotation matrix, and the initial value of the translation vector;
and minimizing the calibration loss function using a preset iterative algorithm and a preset termination condition, to determine the optimal value of the rotation matrix and the optimal value of the translation vector.
8. The method of claim 7, wherein the calibration loss function is constructed using the following formula:
f(x) = 0.5 * norm(transpose(n) * (π((R|t)P) - q));
wherein f(x) represents the calibration loss function; norm() represents a vector norm calculation function; transpose() represents a transpose function; n represents the normal vector set; π represents the projection function; R represents the rotation matrix; t represents the translation vector; P represents the point cloud depth continuous line features; and q represents the adjacent pixel point set.
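Read with these definitions, the loss penalizes each projected lidar edge point by its distance to the matched image edge measured along that edge's normal, i.e. a point-to-line distance. A minimal numpy sketch follows; it assumes the intrinsic matrix K and the projection from claim 5, and treats n and q as row-aligned (N, 2) arrays matched to P.

```python
import numpy as np

def project_points(P, R, t, K):
    """pi[(R|t)P], as in the claim-5 sketch."""
    Pc = P @ R.T + t
    uv = Pc @ K.T
    return uv[:, :2] / uv[:, 2:3]

def calibration_loss(R, t, P, q, n, K):
    """0.5 * sum_i (n_i^T (pi((R|t)P_i) - q_i))^2: each projected lidar edge
    point is penalized along its matched edge normal (point-to-line distance)."""
    p = project_points(P, R, t, K)
    r = np.einsum("ij,ij->i", n, p - q)   # per-point scalar residual
    return 0.5 * np.sum(r ** 2)
```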
9. The method of claim 7, wherein the iterative algorithm is selected from the group consisting of:
a gradient descent algorithm, Newton's method, the Gauss-Newton algorithm, and the Levenberg-Marquardt algorithm.
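By way of example and not limitation, scipy's least_squares offers the Levenberg-Marquardt option directly (method="lm"); the rotation-vector parameterization and the tolerance used as the termination condition below are illustrative choices, not the patent's.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(x, P, q, n, K):
    """Per-point point-to-line residuals; x packs [rotation vector, translation]."""
    R = Rotation.from_rotvec(x[:3]).as_matrix()
    Pc = P @ R.T + x[3:]
    uv = Pc @ K.T
    p = uv[:, :2] / uv[:, 2:3]
    return np.einsum("ij,ij->i", n, p - q)

def refine(R0, t0, P, q, n, K):
    """Minimize the claim-8 loss starting from the preset initial values R0, t0."""
    x0 = np.concatenate([Rotation.from_matrix(R0).as_rotvec(), t0])
    sol = least_squares(residuals, x0, args=(P, q, n, K),
                        method="lm", xtol=1e-12)   # Levenberg-Marquardt
    return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]
```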
10. An automatic calibration device, comprising:
the original data determining module is used for determining original point cloud data and original image data acquired with the same timestamp for the same area scene;
the depth continuous line feature determining module is used for determining point cloud depth continuous line features according to the original point cloud data and determining image depth continuous line features according to the original image data;
the two-dimensional pixel point set determining module is used for determining a two-dimensional pixel point set of the point cloud depth continuous line characteristic according to a preset rotation matrix initial value and a preset translation vector initial value;
the adjacent pixel point set determining module is used for searching, for each two-dimensional pixel point in the two-dimensional pixel point set, the image depth continuous line features for the adjacent pixel point nearest to the two-dimensional pixel point, and determining a normal vector of the adjacent pixel point, to obtain an adjacent pixel point set and a normal vector set of the adjacent pixel points;
and the optimal value calculation module is used for determining the optimal value of the rotation matrix and the optimal value of the translation vector based on the adjacent pixel point set and the normal vector set.
11. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the automatic calibration method according to any one of claims 1 to 9.
12. A terminal comprising a memory and a processor, said memory having stored thereon a computer program operable on said processor, wherein said processor, when executing said computer program, performs the steps of the automatic calibration method according to any one of claims 1 to 9.
CN202111677841.2A 2021-12-31 2021-12-31 Automatic calibration method and device, computer readable storage medium and terminal Pending CN114519681A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111677841.2A CN114519681A (en) 2021-12-31 2021-12-31 Automatic calibration method and device, computer readable storage medium and terminal

Publications (1)

Publication Number Publication Date
CN114519681A true CN114519681A (en) 2022-05-20

Family

ID=81596690

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111677841.2A Pending CN114519681A (en) 2021-12-31 2021-12-31 Automatic calibration method and device, computer readable storage medium and terminal

Country Status (1)

Country Link
CN (1) CN114519681A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116597074A (en) * 2023-04-18 2023-08-15 五八智能科技(杭州)有限公司 Method, system, device and medium for multi-sensor information fusion
CN117635030A (en) * 2023-12-07 2024-03-01 苏州银橡智能科技有限公司 Chemical storage management method and system based on cloud computing
CN117635030B (en) * 2023-12-07 2024-04-02 苏州银橡智能科技有限公司 Chemical storage management method and system based on cloud computing

Similar Documents

Publication Publication Date Title
Wang et al. Fusing bird’s eye view lidar point cloud and front view camera image for 3d object detection
Fan et al. Road surface 3D reconstruction based on dense subpixel disparity map estimation
WO2022088982A1 (en) Three-dimensional scene constructing method, apparatus and system, and storage medium
CN109872397B (en) Three-dimensional reconstruction method of airplane parts based on multi-view stereo vision
US10477178B2 (en) High-speed and tunable scene reconstruction systems and methods using stereo imagery
CN113340334B (en) Sensor calibration method and device for unmanned vehicle and electronic equipment
CN114519681A (en) Automatic calibration method and device, computer readable storage medium and terminal
CN114565644B (en) Three-dimensional moving object detection method, device and equipment
CN107909018B (en) Stable multi-mode remote sensing image matching method and system
CN114217665B (en) Method and device for synchronizing time of camera and laser radar and storage medium
Wang et al. Fusing bird view lidar point cloud and front view camera image for deep object detection
Karsli et al. Automatic building extraction from very high-resolution image and LiDAR data with SVM algorithm
Zhang et al. Lidar-guided stereo matching with a spatial consistency constraint
Forlani et al. Where is photogrammetry heading to? State of the art and trends
CN112465849B (en) Registration method for laser point cloud and sequence image of unmanned aerial vehicle
CN116246119A (en) 3D target detection method, electronic device and storage medium
CN118097123B (en) Three-dimensional target detection method, system, equipment and medium based on point cloud and image
CN116188931A (en) Processing method and device for detecting point cloud target based on fusion characteristics
CN115097419A (en) External parameter calibration method and device for laser radar IMU
US20240193788A1 (en) Method, device, computer system for detecting pedestrian based on 3d point clouds
Parmehr et al. Automatic registration of optical imagery with 3d lidar data using local combined mutual information
CN114549779A (en) Scene model reconstruction method and device, electronic equipment and storage medium
Zhou et al. 3D building change detection between current VHR images and past lidar data
Tao et al. SiLVR: Scalable Lidar-Visual Reconstruction with Neural Radiance Fields for Robotic Inspection
Atik et al. An automatic image matching algorithm based on thin plate splines

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination