CN115393448A - Laser radar and camera external parameter online calibration method and device and storage medium - Google Patents

Laser radar and camera external parameter online calibration method and device and storage medium

Info

Publication number
CN115393448A
Authority
CN
China
Prior art keywords
branch
camera
calibration
depth
laser radar
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210928215.4A
Other languages
Chinese (zh)
Inventor
陈启军
苏帅
刘成菊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tongji University
Original Assignee
Tongji University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tongji University filed Critical Tongji University
Priority to CN202210928215.4A priority Critical patent/CN115393448A/en
Publication of CN115393448A publication Critical patent/CN115393448A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/497Means for monitoring or calibrating
    • G06T3/067
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose

Abstract

The invention relates to a method, a device and a storage medium for online calibration of laser radar and camera external parameters based on a deep neural network, wherein the method comprises the following steps: building a laser radar and camera combined calibration platform; calibrating the internal parameters of the camera; synchronizing the time stamps; acquiring a grayscale image and a point cloud and finishing preprocessing; initializing the external parameters; converting the point cloud to the camera coordinate system; masking the converted point cloud and the grayscale image respectively at a pre-configured masking ratio using random masks; projecting the masked point cloud onto the image plane to obtain a depth information projection and a reflectivity information projection; constructing an external parameter online calibration network, which consists of input branches, prediction branches, a fusion prediction branch and a result output module, and realizing external parameter calibration by fusing the features of the depth information projection, the depth map, the reflectivity information projection and the masked grayscale image; training the external parameter online calibration network; and finishing the online calibration of the external parameters. Compared with the prior art, the invention has the advantages of good calibration effect, wide application range and the like.

Description

Laser radar and camera external parameter online calibration method and device and storage medium
Technical Field
The invention relates to the field of surveying and mapping, in particular to a method and a device for online calibration of laser radar and camera external parameters based on a deep neural network and a storage medium.
Background
Conventional laser radar-camera external parameter online calibration algorithms generally use a PCA algorithm to segment rod-shaped targets in the point cloud data, and use reflectivity and ground segmentation to obtain lane lines on the road surface; for the line features in the image data, prior methods usually use semantic segmentation to obtain sparse line features. The cross-modal line matching algorithm first projects the point cloud onto the image, and then uses an ICP (Iterative Closest Point) algorithm to find, for each projected point, the nearest neighbor among the feature lines in the image. This algorithm is very inefficient and can only handle cases where the misalignment between the laser radar and camera external parameters is small. The optimization problem at the back end of such calibration systems is usually a least squares problem solved with the classic LM (Levenberg-Marquardt) algorithm, which requires highly robust feature matching at the front end of the calibration system. Unlike corresponding-point association between cameras, cross-modal data association between the laser radar and the camera is more difficult, and often requires sufficiently distinct line features in the scene. An end-to-end laser radar-camera calibration system can autonomously learn latent-space features beneficial to cross-modal data association from the raw laser data and image data, so that accurate laser radar-camera external parameter online calibration can be completed in a wider range of scenes.
Previous end-to-end laser radar-camera external parameter calibration methods only considered the depth information of the laser points projected onto the image plane; the depth information represents the three-dimensional spatial structure of the scene, whereas the reflectivity information, which represents the texture of the scene to a certain extent, was not exploited.
Disclosure of Invention
The invention aims to provide a method, a device and a storage medium for online calibration of laser radar and camera external parameters based on a deep neural network, which realize external parameter calibration by fusing depth information and reflectivity information and thereby improve the calibration accuracy.
The purpose of the invention can be realized by the following technical scheme:
a laser radar and camera external parameter online calibration method based on a deep neural network comprises the following steps:
step 1) building a laser radar and camera combined calibration platform;
step 2) determining a distortion model based on the curvature of a lens of the camera, and calibrating camera internal parameters by adopting a calibration instrument, wherein the camera internal parameters comprise a distortion coefficient and a projection matrix;
step 3) synchronizing the timestamps of the laser radar and the camera based on the synchronous signals;
step 4) during the motion of the combined calibration platform, assigning microsecond-level timestamps to the raw point cloud acquired by the laser radar according to the rotation direction and rotation angle of the laser inside the laser radar, and removing motion distortion to obtain a preprocessed point cloud;
step 5) acquiring an image shot by the camera during the motion of the combined calibration platform and converting it into a grayscale image;
step 6) initializing external parameters of the laser radar and the camera, wherein the external parameters represent the relative pose relationship of the laser radar and the camera;
step 7) converting the preprocessed point cloud to a camera coordinate system based on the initialized external parameters to obtain a converted point cloud;
step 8) masking the converted point cloud and the grayscale image respectively at a pre-configured masking ratio using random masks, to obtain a masked point cloud and a masked grayscale image;
step 9) projecting the masked point cloud onto the image plane according to the distortion coefficient and the projection matrix, to obtain a depth information projection and a reflectivity information projection;
step 10) constructing an end-to-end external parameter online calibration network for the laser radar and the camera, wherein the external parameter online calibration network consists of input branches, prediction branches, a fusion prediction branch and a result output module,
the input branches comprise a depth information branch, a depth estimation branch, a reflectivity information branch and an image feature branch, wherein the input of the depth information branch is the depth information projection and its output is the features extracted from the depth information projection; the input of the depth estimation branch is a depth map obtained by performing depth estimation on the masked grayscale image with a depth estimation network, and its output is the features extracted from the depth map; the input of the reflectivity information branch is the reflectivity information projection, and its output is the concatenation of the features extracted from the reflectivity information projection and the output of the depth information branch weighted by channel attention; the input of the image feature branch is the masked grayscale image, and its output is the features extracted from the masked grayscale image;
the outputs of the depth information branch and the depth estimation branch together form a first prediction branch, and the output of the first prediction branch is a six-degree-of-freedom external parameter,
the outputs of the reflectivity information branch and the image feature branch together form a second prediction branch, and the output of the second prediction branch is a six-degree-of-freedom external parameter,
the outputs of the depth information branch, the depth estimation branch, the reflectivity information branch and the image feature branch together form a fusion prediction branch, and the output of the fusion prediction branch is a six-degree-of-freedom external parameter,
the result output module supervises the external parameter estimation of the three branches based on the loss weights, and takes the external parameter estimation result of the fusion prediction branch as the final external parameter calibration result;
step 11) training an external parameter online calibration network on cloud computing resources;
and step 12) finishing the online calibration of the external parameters of the laser radar and the camera based on the trained external parameter online calibration network.
The positions of the laser radar and the camera in the combined calibration platform are determined based on the overlapping area of their sensing ranges.
The distortion model includes the Pinhole model, the Kannala-Brandt model, the MEI model, and the Scaramuzza model.
The synchronous signal comprises a synchronous line and a GPS.
The depth estimation network is a monocular depth estimation network or a binocular depth estimation network.
The six-degree-of-freedom external parameters comprise 3 translation parameters and 3 rotation parameters.
The depth information branch and the reflectivity information branch do not share weights.
The loss weight of the fusion prediction branch is greater than the loss weights of the first (depth) prediction branch and the second (reflectivity) prediction branch.
A laser radar and camera external parameter online calibration device based on a deep neural network comprises a memory, a processor and a program stored in the memory, wherein the processor executes the program to realize the method.
A storage medium having stored thereon a program which, when executed, implements the method as described above.
Compared with the prior art, the invention has the following beneficial effects:
(1) The method constructs an end-to-end external parameter online calibration network for the laser radar and the camera, fuses depth information and reflectivity information through feature-level information fusion to realize external parameter calibration, is suitable for more complicated calibration scenes, and improves the calibration performance.
(2) The invention randomly masks the point cloud and the image using random masks, which realizes data augmentation, greatly enhances the diversity of the data without increasing the acquisition cost, and effectively alleviates the problem of over-fitting.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
fig. 2 is a schematic structural diagram of an external parameter online calibration network of an end-to-end lidar and a camera according to the present invention.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. The present embodiment is implemented on the premise of the technical solution of the present invention, and a detailed implementation manner and a specific operation process are given, but the scope of the present invention is not limited to the following embodiments.
A method for online calibration of external parameters of a laser radar and a camera based on a deep neural network is disclosed, as shown in FIG. 1, and comprises the following steps:
step 1) a laser radar and camera combined calibration platform is built.
The positions of the laser radar and the camera in the combined calibration platform are determined based on the overlapping area of their sensing ranges, so as to ensure that the overlapping area is large.
Step 2) determining a distortion model based on the curvature of the camera lens, and calibrating the camera internal parameters with a high-precision calibration instrument (for example, a high-precision checkerboard target), wherein the camera internal parameters comprise the distortion coefficient and the projection matrix.
The distortion model includes the Pinhole model, the Kannala-Brandt model, the MEI model, and the Scaramuzza model.
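As an illustration of this step, a minimal intrinsic-calibration sketch for the Pinhole case is given below, using OpenCV's checkerboard pipeline; the board geometry, square size and image folder are hypothetical placeholders, and a fisheye model such as Kannala-Brandt would require a different calibration routine.

```python
# Minimal intrinsic-calibration sketch (assumes a Pinhole distortion model).
# Board geometry and image paths are hypothetical placeholders.
import glob
import cv2
import numpy as np

BOARD = (9, 6)          # inner corners per checkerboard row/column (assumed)
SQUARE = 0.025          # square size in metres (assumed)

# 3D corner coordinates in the board frame (z = 0 plane)
obj = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
obj[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE

obj_points, img_points = [], []
for path in glob.glob("calib_images/*.png"):          # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if not found:
        continue
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_points.append(obj)
    img_points.append(corners)

# K is the 3x3 projection matrix, dist holds the distortion coefficients
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
```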
Step 3) synchronizing the timestamps of the laser radar and the camera based on a synchronization signal such as a synchronization line or GPS.
Step 4) during the motion of the combined calibration platform, assigning microsecond-level timestamps to the raw point cloud acquired by the laser radar according to the rotation direction and rotation angle of the laser inside the laser radar, and removing motion distortion to obtain the preprocessed point cloud.
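A minimal de-skew sketch is given below, assuming constant linear and angular velocity over one sweep; the per-point timestamps are taken as already derived from the laser rotation angle, and the velocity source (IMU, odometry, etc.) is an assumption that the description does not specify.

```python
# Motion-distortion removal sketch: re-express every point in the lidar frame at
# the end of the sweep, assuming constant velocity in the body frame over the sweep.
# The velocity source (IMU, wheel odometry, ...) is an assumption.
import numpy as np
from scipy.spatial.transform import Rotation as R

def deskew(points, timestamps, t_end, lin_vel, ang_vel):
    """points: (N,3) in the lidar frame at capture time, timestamps: (N,) seconds,
    lin_vel: (3,) m/s and ang_vel: (3,) rad/s of the lidar in its own frame."""
    dt = (t_end - timestamps)[:, None]                 # time until sweep end, (N,1)
    # Relative pose of the end frame in each point's frame: R(w*dt), v*dt.
    # A world-fixed point therefore maps as p_end = R(w*dt)^-1 (p - v*dt).
    rot_inv = R.from_rotvec(-ang_vel[None, :] * dt)    # N stacked inverse rotations
    return rot_inv.apply(points - lin_vel[None, :] * dt)

# usage sketch (microsecond per-point stamps converted to seconds)
# deskewed = deskew(raw_xyz, point_times_us * 1e-6, sweep_end_time, v, w)
```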
Step 5) acquiring an image shot by the camera during the motion of the combined calibration platform and converting it into a grayscale image.
Step 6) initializing the external parameters of the laser radar and the camera, wherein the external parameters represent the relative pose relationship between the laser radar and the camera.
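For illustration, the six-degree-of-freedom external parameter can be packed into a 4x4 homogeneous lidar-to-camera transform as sketched below; the rotation-vector (axis-angle) parameterization is an assumption, and the translation values shown are those of the embodiment further below.

```python
# Build a 4x4 lidar-to-camera transform from a 6-DoF external parameter guess.
# The rotation-vector (axis-angle) parameterization is an assumption.
import numpy as np
from scipy.spatial.transform import Rotation as R

def extrinsic_to_matrix(rot_vec, trans):
    """rot_vec: (3,) rotation vector [rad], trans: (3,) translation [m]."""
    T = np.eye(4)
    T[:3, :3] = R.from_rotvec(rot_vec).as_matrix()
    T[:3, 3] = trans
    return T

# initial guess used in the embodiment below: zero rotation, translation in metres
T_init = extrinsic_to_matrix(np.zeros(3), np.array([-0.040, -0.750, 2.700]))
```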
Step 7) converting the preprocessed point cloud into the camera coordinate system based on the initialized external parameters to obtain the converted point cloud.
Step 8) masking the converted point cloud and the grayscale image respectively at a pre-configured masking ratio using random masks, to obtain a masked point cloud and a masked grayscale image.
The random mask serves as data augmentation, which improves the robustness of the model and effectively broadens the application scenarios of the algorithm.
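A possible form of this masking is sketched below: a fraction of the converted points is dropped at random, and random rectangular patches of the grayscale image are zeroed; the rectangular-patch shape and patch size are assumptions, since only the masking ratio is specified (30% in the embodiment below).

```python
# Random-mask augmentation sketch: drop a fraction of points and zero random
# image patches. Patch geometry is an assumption; only the ratio is specified.
import numpy as np

def mask_point_cloud(points, ratio, rng):
    """points: (N,C) array (xyz + reflectivity); keep roughly (1 - ratio) of them."""
    keep = rng.random(points.shape[0]) >= ratio
    return points[keep]

def mask_image(gray, ratio, rng, patch=32):
    """Zero random patch x patch squares until about `ratio` of the pixels are covered."""
    out = gray.copy()
    h, w = gray.shape
    target = int(ratio * h * w)
    covered = 0
    while covered < target:
        y = rng.integers(0, max(1, h - patch))
        x = rng.integers(0, max(1, w - patch))
        out[y:y + patch, x:x + patch] = 0
        covered += patch * patch          # approximate; overlapping patches may occur
    return out

rng = np.random.default_rng(0)
# masked_pts = mask_point_cloud(converted_pts, 0.3, rng)
# masked_img = mask_image(gray_image, 0.3, rng)
```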
Step 9) projecting the masked point cloud onto the image plane according to the distortion coefficient and the projection matrix, to obtain a depth information projection and a reflectivity information projection.
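A minimal projection sketch follows: points already expressed in the camera frame are projected with OpenCV, which applies the distortion coefficients, and two single-channel maps are rasterized with depth and reflectivity, keeping the nearest point when several fall on the same pixel; the nearest-point rule is an assumption.

```python
# Project the masked point cloud onto the image plane and rasterize two maps:
# one holding depth (z in the camera frame) and one holding lidar reflectivity.
import cv2
import numpy as np

def project_to_maps(pts_cam, reflectivity, K, dist, size):
    """pts_cam: (N,3) points in the camera frame, reflectivity: (N,),
    K: 3x3 projection matrix, dist: distortion coefficients, size: (H, W)."""
    h, w = size
    in_front = pts_cam[:, 2] > 0
    pts_cam, reflectivity = pts_cam[in_front], reflectivity[in_front]

    uv, _ = cv2.projectPoints(pts_cam, np.zeros(3), np.zeros(3), K, dist)
    uv = uv.reshape(-1, 2).round().astype(int)

    depth_map = np.zeros((h, w), np.float32)
    refl_map = np.zeros((h, w), np.float32)
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    for (u, v), z, r in zip(uv[valid], pts_cam[valid, 2], reflectivity[valid]):
        if depth_map[v, u] == 0 or z < depth_map[v, u]:   # keep the nearest point
            depth_map[v, u] = z
            refl_map[v, u] = r
    return depth_map, refl_map
```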
Step 10) constructing an end-to-end external parameter online calibration network for the laser radar and the camera, as shown in FIG. 2.
The external parameter online calibration network consists of input branches, prediction branches, a fusion prediction branch and a result output module.
the input branches include a depth information branch, a depth estimation branch, a reflectivity information branch, and an image feature branch. Since the depth and reflectivity characterizations have differences, the depth information branch and the reflectivity information branch do not share weights.
The input of the depth information branch is the depth information projection, and the output is the features extracted from the depth information projection.
The input of the depth estimation branch is a depth map obtained by performing depth estimation on the masked grayscale image with a depth estimation network; the depth estimation network is a monocular or binocular depth estimation network, chosen according to the applicable scene and actual requirements, and the output is the features extracted from the depth map. The depth estimation process itself is a conventional technique in the art and is not described in detail here, to avoid obscuring the purpose of the present application.
The input of the reflectivity information branch is the reflectivity information projection, and the output is the concatenation of the features extracted from the reflectivity information projection and the output of the depth information branch weighted by channel attention. The purpose of concatenating the depth information branch into the reflectivity information branch is to let the reflectivity branch learn the depth information, reduce the influence of depth-dependent attenuation on the reflectivity, and let the reflectivity branch retain as much texture information as possible.
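The exact attention mechanism is not specified; the sketch below assumes a squeeze-and-excitation style channel gate applied to the depth-branch features before concatenation with the reflectivity-branch features.

```python
# Sketch of the reflectivity-branch fusion: depth features are re-weighted by a
# squeeze-and-excitation style channel attention (an assumption; only "channel
# attention" is stated) and concatenated with the reflectivity features.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                       # channel-wise re-weighting

def fuse_reflectivity(refl_feat, depth_feat, attn):
    """Output of the reflectivity information branch: concatenation of its own
    features and the attention-weighted depth features along the channel dim."""
    return torch.cat([refl_feat, attn(depth_feat)], dim=1)
```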
The input of the image feature branch is the masked grayscale image, and the output is the features extracted from the masked grayscale image.
the outputs of the depth information branch and the depth estimation branch together form a first prediction branch, and the output of the first prediction branch is a six-degree-of-freedom external parameter.
The outputs of the reflectivity information branch and the image characteristic branch together form a second prediction branch, and the output of the second prediction branch is a six-degree-of-freedom external parameter.
The outputs of the depth information branch, the depth estimation branch, the reflectivity information branch and the image characteristic branch form a fusion prediction branch together, and the output of the fusion prediction branch is six-degree-of-freedom external parameter.
The six-degree-of-freedom external parameters comprise 3 translation parameters and 3 rotation parameters.
The result output module supervises the external parameter estimation of the three branches based on the loss weights, and takes the external parameter estimation result of the fusion prediction branch as the final external parameter calibration result. The loss weight of the fusion prediction branch is greater than the loss weights of the first (depth) prediction branch and the second (reflectivity) prediction branch.
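A minimal sketch of this supervision scheme follows: each branch regresses a six-degree-of-freedom external parameter, the per-branch losses are combined with the largest weight on the fusion prediction branch, and only the fusion branch output is kept at inference time; the smooth-L1 loss form and the concrete weights are assumptions.

```python
# Weighted supervision of the three prediction branches; only the fusion branch
# output is kept as the calibration result. Loss form and weights are assumptions.
import torch.nn.functional as F

# assumed weights: the fusion branch dominates, as required above
W_FUSE, W_DEPTH, W_REFL = 1.0, 0.5, 0.5

def branch_loss(pred, gt):
    """pred, gt: (B,6) tensors = 3 translation + 3 rotation parameters."""
    return F.smooth_l1_loss(pred[:, :3], gt[:, :3]) + \
           F.smooth_l1_loss(pred[:, 3:], gt[:, 3:])

def calibration_loss(pred_depth, pred_refl, pred_fuse, gt):
    return (W_FUSE * branch_loss(pred_fuse, gt) +
            W_DEPTH * branch_loss(pred_depth, gt) +
            W_REFL * branch_loss(pred_refl, gt))

# at inference time only pred_fuse is used as the final external parameter estimate
```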
Step 11) training an external parameter online calibration network on cloud computing resources;
Step 12) finishing the online calibration of the laser radar and camera external parameters based on the trained external parameter online calibration network.
The above functions, if implemented in the form of software functional units and sold or used as a separate product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or any other medium capable of storing program code.
In this embodiment, the external parameter online calibration method is verified using a 64-beam laser radar and a camera following the Pinhole imaging model. The rotation parameters of the initial external parameters are all set to 0, and the translation parameters are set to tx = -40 mm, ty = -750 mm and tz = 2700 mm, respectively. The resolution of the input images is uniformly adjusted to 640 × 480 pixels, and an Adam optimizer is used for training. The masking ratio of the random mask is 30%, which expands the data to 30 times the original amount. The number of training iterations is set to 100, the initial learning rate to 0.001, and the learning-rate decay coefficient to 0.9. The augmented data is split at a ratio of 4:3:3 for training, validation and testing, respectively. In this embodiment the performance of the method is evaluated with the reprojection error; the proposed method achieves an average reprojection error within 2 pixels, which meets the requirements of most three-dimensional reconstruction application scenarios and constitutes a good calibration result.
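One way to compute such a reprojection-error metric is sketched below: the same lidar points are projected with the ground-truth and the estimated external parameters, and the average pixel distance is reported; this particular formulation is an assumption, since the embodiment only states the resulting average error.

```python
# Mean reprojection error between ground-truth and estimated external parameters
# (one possible formulation; the exact metric definition is an assumption).
import cv2
import numpy as np

def mean_reproj_error(pts_lidar, T_gt, T_est, K, dist):
    """pts_lidar: (N,3) lidar points, T_gt/T_est: 4x4 lidar-to-camera transforms."""
    homog = np.hstack([pts_lidar, np.ones((len(pts_lidar), 1))])
    projections = []
    for T in (T_gt, T_est):
        cam = (homog @ T.T)[:, :3]                       # points in the camera frame
        uv, _ = cv2.projectPoints(cam, np.zeros(3), np.zeros(3), K, dist)
        projections.append(uv.reshape(-1, 2))
    return float(np.linalg.norm(projections[0] - projections[1], axis=1).mean())
```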
The foregoing detailed description of the preferred embodiments of the invention has been presented. It should be understood that numerous modifications and variations could be devised by those skilled in the art in light of the present teachings without departing from the inventive concepts. Therefore, the technical solutions that can be obtained by a person skilled in the art through logic analysis, reasoning or limited experiments based on the prior art according to the concept of the present invention should be within the scope of protection defined by the claims.

Claims (10)

1. A laser radar and camera external parameter online calibration method based on a deep neural network is characterized by comprising the following steps:
step 1) building a laser radar and camera combined calibration platform;
step 2) determining a distortion model based on the curvature of a lens of the camera, and calibrating camera internal parameters by adopting a calibration instrument, wherein the camera internal parameters comprise a distortion coefficient and a projection matrix;
step 3) synchronizing the timestamps of the laser radar and the camera based on the synchronous signals;
step 4) during the motion of the combined calibration platform, assigning microsecond-level timestamps to the raw point cloud acquired by the laser radar according to the rotation direction and rotation angle of the laser inside the laser radar, and removing motion distortion to obtain a preprocessed point cloud;
step 5) acquiring an image shot by the camera during the motion of the combined calibration platform and converting it into a grayscale image;
step 6) initializing external parameters of the laser radar and the camera, wherein the external parameters represent the relative pose relationship of the laser radar and the camera;
step 7) converting the preprocessed point cloud into a camera coordinate system based on the initialized external parameters to obtain a converted point cloud;
step 8) masking the converted point cloud and the grayscale image respectively at a pre-configured masking ratio using random masks, to obtain a masked point cloud and a masked grayscale image;
step 9) projecting the masked point cloud onto the image plane according to the distortion coefficient and the projection matrix, to obtain a depth information projection and a reflectivity information projection;
step 10) constructing an end-to-end external parameter online calibration network for the laser radar and the camera, wherein the external parameter online calibration network consists of input branches, prediction branches, a fusion prediction branch and a result output module,
the input branches comprise a depth information branch, a depth estimation branch, a reflectivity information branch and an image feature branch, wherein the input of the depth information branch is the depth information projection and its output is the features extracted from the depth information projection; the input of the depth estimation branch is a depth map obtained by performing depth estimation on the masked grayscale image with a depth estimation network, and its output is the features extracted from the depth map; the input of the reflectivity information branch is the reflectivity information projection, and its output is the concatenation of the features extracted from the reflectivity information projection and the output of the depth information branch weighted by channel attention; the input of the image feature branch is the masked grayscale image, and its output is the features extracted from the masked grayscale image;
the outputs of the depth information branch and the depth estimation branch together form a first prediction branch, and the output of the first prediction branch is a six-degree-of-freedom external parameter,
the outputs of the reflectivity information branch and the image feature branch together form a second prediction branch, and the output of the second prediction branch is a six-degree-of-freedom external parameter,
the outputs of the depth information branch, the depth estimation branch, the reflectivity information branch and the image feature branch together form a fusion prediction branch, and the output of the fusion prediction branch is a six-degree-of-freedom external parameter,
the result output module supervises the external parameter estimation of the three branches based on the loss weights, and takes the external parameter estimation result of the fusion prediction branch as the final external parameter calibration result;
step 11) training an external parameter online calibration network on cloud computing resources;
and step 12) finishing the online calibration of the laser radar and the camera external parameter based on the trained external parameter online calibration network.
2. The method for online calibration of the lidar and the camera external parameters based on the deep neural network as claimed in claim 1, wherein the position setting of the lidar and the camera in the combined calibration platform is determined based on the overlapping area of the sensing ranges of the lidar and the camera.
3. The method for online calibration of the lidar and the camera external parameter based on the deep neural network as claimed in claim 1, wherein the distortion model comprises a Pinhole model, a Kannala-Brandt model, a MEI model, and a Scaramuzza model.
4. The method for online calibration of lidar and camera external parameters based on the deep neural network of claim 1, wherein the synchronization signal comprises a synchronization line and a GPS.
5. The method for online calibration of the lidar and the camera extrinsic parameters based on the depth neural network as claimed in claim 1, wherein the depth estimation network is a monocular depth estimation network or a binocular depth estimation network.
6. The method for online calibration of the lidar and the camera external parameters based on the deep neural network as claimed in claim 1, wherein the six-degree-of-freedom external parameters comprise 3 translation parameters and 3 rotation parameters.
7. The method for online calibration of the lidar and the camera external parameters based on the deep neural network as claimed in claim 1, wherein the depth information branch and the reflectivity information branch do not share a weight.
8. The method for online calibration of lidar and camera parameters based on the deep neural network of claim 1, wherein the loss weight of the fusion prediction branch is greater than the loss weights of the first prediction branch and the second prediction branch.
9. A device for online calibration of lidar and camera extrinsic parameters based on a deep neural network, comprising a memory, a processor, and a program stored in the memory, wherein the processor implements the method according to any one of claims 1 to 8 when executing the program.
10. A storage medium having a program stored thereon, wherein the program, when executed, implements the method of any of claims 1-8.
CN202210928215.4A 2022-08-03 2022-08-03 Laser radar and camera external parameter online calibration method and device and storage medium Pending CN115393448A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210928215.4A CN115393448A (en) 2022-08-03 2022-08-03 Laser radar and camera external parameter online calibration method and device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210928215.4A CN115393448A (en) 2022-08-03 2022-08-03 Laser radar and camera external parameter online calibration method and device and storage medium

Publications (1)

Publication Number Publication Date
CN115393448A 2022-11-25

Family

ID=84118164

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210928215.4A Pending CN115393448A (en) 2022-08-03 2022-08-03 Laser radar and camera external parameter online calibration method and device and storage medium

Country Status (1)

Country Link
CN (1) CN115393448A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116203542A (en) * 2022-12-31 2023-06-02 中山市博测达电子科技有限公司 Laser radar distortion test calibration method
CN116203542B (en) * 2022-12-31 2023-10-03 中山市博测达电子科技有限公司 Laser radar distortion test calibration method

Similar Documents

Publication Publication Date Title
CN110675418B (en) Target track optimization method based on DS evidence theory
CN110853075B (en) Visual tracking positioning method based on dense point cloud and synthetic view
Kumar et al. Monocular fisheye camera depth estimation using sparse lidar supervision
CN107679537B (en) A kind of texture-free spatial target posture algorithm for estimating based on profile point ORB characteristic matching
CN110689562A (en) Trajectory loop detection optimization method based on generation of countermeasure network
CN113610172B (en) Neural network model training method and device and sensing data fusion method and device
CN113724379B (en) Three-dimensional reconstruction method and device for fusing image and laser point cloud
CN113050074B (en) Camera and laser radar calibration system and calibration method in unmanned environment perception
CN113963117B (en) Multi-view three-dimensional reconstruction method and device based on variable convolution depth network
CN116612468A (en) Three-dimensional target detection method based on multi-mode fusion and depth attention mechanism
CN113298947A (en) Multi-source data fusion-based three-dimensional modeling method medium and system for transformer substation
CN114677479A (en) Natural landscape multi-view three-dimensional reconstruction method based on deep learning
CN116310219A (en) Three-dimensional foot shape generation method based on conditional diffusion model
CN114996814A (en) Furniture design system based on deep learning and three-dimensional reconstruction
CN114332125A (en) Point cloud reconstruction method and device, electronic equipment and storage medium
CN116486368A (en) Multi-mode fusion three-dimensional target robust detection method based on automatic driving scene
CN115393448A (en) Laser radar and camera external parameter online calibration method and device and storage medium
Cui et al. Dense depth-map estimation based on fusion of event camera and sparse LiDAR
CN113269689B (en) Depth image complement method and system based on normal vector and Gaussian weight constraint
CN112489189B (en) Neural network training method and system
CN114494395A (en) Depth map generation method, device and equipment based on plane prior and storage medium
CN114519681A (en) Automatic calibration method and device, computer readable storage medium and terminal
CN116740488B (en) Training method and device for feature extraction model for visual positioning
CN115147709B (en) Underwater target three-dimensional reconstruction method based on deep learning
CN112487893B (en) Three-dimensional target identification method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination