CN117934710A - Neural radiance field three-dimensional reconstruction method and device based on adaptive mask - Google Patents

Neural radiance field three-dimensional reconstruction method and device based on adaptive mask

Info

Publication number
CN117934710A
Authority
CN
China
Prior art keywords
rendering
color
normal vector
ray
mask
Legal status
Pending
Application number
CN202410039587.0A
Other languages
Chinese (zh)
Inventor
禹鑫燚
陆利钦
徐光锴
欧林林
沈春华
张卫东
Current Assignee
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date
2024-01-10
Filing date
2024-01-10
Publication date
2024-04-26
Application filed by Zhejiang University of Technology ZJUT
Priority to CN202410039587.0A
Publication of CN117934710A
Legal status: Pending


Abstract

A neural radiance field three-dimensional reconstruction method and device based on an adaptive mask. The method comprises the following steps: acquiring multi-view images and poses of an indoor scene; estimating the normal vector map corresponding to each image; obtaining the geometric feature vector and signed distance function (SDF) value corresponding to each sampling point; obtaining the view-decoupled color and view-coupled color corresponding to the sampling point; obtaining the rendered color, rendered view-decoupled color, rendered normal vector and rendered depth corresponding to the sampled ray; randomly generating a virtual ray near the sampled ray and obtaining its rendered color, rendered view-decoupled color, rendered normal vector and rendered depth; calculating the corresponding loss function values using the given supervision signals; obtaining the corresponding masks, adaptively selecting different loss functions for back-propagation, and optimizing the model parameters; checking the current number of training iterations and continuing training while it is smaller than the set number, otherwise stopping training; and predicting the SDF values of spatial points with the trained model and extracting the geometric surface of the scene.

Description

Neural radiance field three-dimensional reconstruction method and device based on adaptive mask
Technical Field
The invention relates to the technical field of three-dimensional reconstruction of indoor scenes, and in particular to a three-dimensional reconstruction method and device that constrain the training of a neural radiance field with geometry and color consistency across different viewing angles and guide it with an adaptive mask.
Background
Recovering a three-dimensional scene from a series of multi-view images is an important visual task. It involves capturing two-dimensional images with a camera and then reconstructing the real-world three-dimensional scene or object with methods from computer vision and graphics. Three-dimensional reconstruction techniques are widely used in a variety of fields, such as virtual reality (Virtual Reality), augmented reality (Augmented Reality), robot navigation, scene design, and the like.
Three-dimensional reconstruction based on multi-view stereo matching often fails in indoor settings with large texture-less regions and illumination changes. With the development of deep learning, more and more research adopts deep-learning-based methods for three-dimensional reconstruction. These methods learn the mapping from a monocular image to three-dimensional information such as depth and normal vectors by training a neural network, and reconstruct the object surface through depth fusion. However, such methods often lack consistency across multiple viewing angles and struggle to meet reconstruction-accuracy requirements. Directly modeling and learning the whole scene with a neural radiance field (Neural Radiance Field) has recently become a new direction in three-dimensional reconstruction. Because the neural radiance field uses the multi-view images directly as supervision signals, multi-view consistency is maintained while part of the influence of image noise is suppressed.
Patent document CN117152357A discloses a neural radiance field three-dimensional reconstruction method based on ray guidance and surface optimization, which reduces computation without affecting reconstruction quality; it reduces the number of sampled rays but limits surface reconstruction accuracy. Patent document CN117036612A discloses a method that converts the volume density in a neural radiance field into an SDF and uses multi-view image consistency to improve reconstruction accuracy; it further improves reconstruction quality but has difficulty handling images whose appearance changes with illumination.
Disclosure of Invention
To address the difficulty that existing neural-radiance-field-based methods have in reconstructing the surfaces of complex indoor scenes, the invention provides a neural radiance field three-dimensional reconstruction method and device based on an adaptive mask.
During training of the neural radiance field, the invention computes ray masks according to whether the normal vectors rendered by the current reconstruction model are consistent across different viewing angles and whether the views are occluded. Guided by these masks, different loss functions are adaptively applied to different reconstruction regions when training the neural radiance field. In addition, the invention exploits depth geometric consistency across multiple viewing angles and the consistency of view-decoupled colors to further improve the accuracy of scene reconstruction.
The invention provides a neural radiance field three-dimensional reconstruction method based on an adaptive mask, which comprises the following specific steps:
Step 1: acquiring a multi-view image { I k}k=1…n of an indoor scene by using a camera, and acquiring a pose { P k}k=1…n corresponding to each image by a SFM (Structure from Motion) method;
Step 2: estimating normal vector diagram corresponding to each image by using the existing monocular image normal vector estimation model (such as Omnidata)
Step 3: inputting a sampling point x on a sampling ray r into a geometric network f g by means of image pose to obtain a corresponding geometric feature vector z and a Symbol Distance Function (SDF) value
Step 4: decoupling color networks using viewing anglesViewing angle coupled color network/>Obtaining a view decoupling color/>, corresponding to a sampling point x, from an output value of a geometric networkAnd viewing angle coupled color/>
Step 4-1: for the obtained SDF values, according to the definition of normal vectors in spaceObtaining the corresponding normal vector/>, and obtaining the bias guide
Step 4-2: depending on whether color is coupled to viewing angle, the normal vector isThe geometric feature vector z and the view angle v are connected and integrated, and a corresponding color network is input to obtain the view angle coupling color/>And viewing angle decoupling color/>Adding the two colors to obtain the color/>, corresponding to the sampling point x
Step 5: according to volume rendering (equation 2-equation 4), the SDF predictor is combinedIntegrating the color of the sampling point on the sampling light ray r and the normal vector to obtain the rendering color/>, corresponding to the sampling light rayRendering perspective decoupled colorsRendering normal vector/>Rendering depth/>
Step 6: similar to steps 3-5, a virtual ray r v is randomly generated around the sampled ray r, and the corresponding rendering color is predicted using the networkRendering perspective decoupling color/>Rendering normal vector/>Rendering depth
Step 7: calculating a corresponding loss function value by using a given supervisory signal;
Step 7-1: using a given two-dimensional image, supervising the rendering of colors
Step 7-2: normal vector estimation using modelSupervision rendering normal vector/>
Step 7-3: according to the depth geometric consistency in the three-dimensional space, the rendering depth of the current sampling light ray r and the virtual light ray r v is monitored;
Step 7-4: according to the color consistency of the decoupling colors of the viewing angles under different viewing angles, the rendering colors of the current sampling light ray r and the virtual light ray r v are monitored; in addition, as only a small amount of light decoupling colors exist in the actual scene, the light decoupling colors are increased Is regularized by L1;
Step 7-5: in order to further improve the prediction accuracy of the SDF value, eikonal regularization is carried out on the SDF value;
Step 8: according to whether the rendering normal vector of the current model meets the normal vector consistency under multiple view angles and whether the view angles are shielded, obtaining corresponding masks (masks) and adaptively selecting different loss functions for back propagation, and optimizing model parameters;
Step 8-1: if the training iteration number n of the current model does not exceed the set value n t, monitoring all ray rendering colors and rendering normal vectors, and integrating the loss function Is that;
step 8-2: if the training iteration number n of the current model exceeds a set value n t, calculating a plurality of corresponding light masks, and selecting different supervision signals to guide the training of the model;
Step 8-2-1: calculating an adaptive check mask based on whether the difference between the rendering normal vector of the current sampled ray r and the virtual ray r v is less than a threshold value E
Step 8-2-2: calculating the validity mask of the virtual line of sight r v according to whether the SDF value corresponding to the starting point o v of the virtual ray r v is greater than zero, i.e. outside the object
Step 8-2-3: judging whether the sight line is blocked according to the SDF value sign change predicted along the light sampling point, and calculating the shielding mask of the sight line(Including the current sampled ray r and the virtual ray r v);
step 8-2-4: integrating the calculated light masks, calculating an effective virtual view angle and a light mask without line-of-sight occlusion, but which does not conform to the multi-view rendering normal vector consistency Light mask/>, consistent
Step 8-2-5: calculating an overall loss function using a ray mask
Step 8-3: using loss functionsOptimized geometry network f g, color network/>And/>Parameters;
step 9: checking the current training iteration number N, and repeating the steps 3-8 if the current training iteration number N is smaller than the set number N; otherwise, stopping training;
step 10: predicting and acquiring SDF values of the space points by using the trained model; and extracting the geometric surface of the scene by combining a Matching Cube algorithm.
A second aspect of the present invention relates to an adaptive-mask-based neural radiance field three-dimensional reconstruction device, comprising a memory and one or more processors, the memory having executable code stored therein; when executing the executable code, the one or more processors implement the adaptive-mask-based neural radiance field three-dimensional reconstruction method of the present invention.
A third aspect of the invention relates to a computer-readable storage medium having a program stored thereon which, when executed by a processor, implements the adaptive-mask-based neural radiance field three-dimensional reconstruction method of the invention.
Through volume rendering, the invention uses image signals to supervise and regress the SDF values of the sampling points in the scene for three-dimensional reconstruction. The ray masks computed from the current reconstruction model are used to adaptively guide the training of the neural radiance field. In addition, the consistency of depth and of view-decoupled colors across different viewing angles further improves the geometric accuracy of the reconstructed surface. Compared with the prior art, the disclosed method recovers more accurate indoor scene surfaces without additional data and addresses the missing geometric details of current neural-radiance-field-based three-dimensional reconstruction methods.
In summary, the beneficial effects of the invention are as follows:
1. Compared with other neural radiance field reconstruction methods, the method obtains more accurate geometric reconstruction results.
2. The invention uses only multi-view images as the data source; the reconstruction result is improved without additional data, and the hardware requirements for deployment are low.
3. Beyond the multi-view information of the images, the method improves the robustness of neural-radiance-field-based reconstruction by decoupling color from illumination.
Drawings
FIG. 1 is a schematic flow chart of the present invention;
FIG. 2 is a flowchart of an algorithm of the present invention;
FIG. 3 is a flow chart of the adaptive masking algorithm of the present invention;
FIG. 4 is a network block diagram of the present invention;
Fig. 5 is a schematic diagram of an adaptive mask of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings, in order to make the objects, technical solutions and advantages of the present invention more apparent.
Example 1
As shown in FIG. 1 and FIG. 2, the invention relates to an adaptive-mask-based neural radiance field three-dimensional reconstruction method, which comprises the following specific steps:
Step 1: acquiring a multi-view image { I k}k=1…n of an indoor scene by using a camera, and acquiring a pose { P k}k=1…n corresponding to each image by a SFM (Structure from Motion) method;
Step 2: estimating normal vector diagram corresponding to each image by using the existing monocular image normal vector estimation model (such as Omnidata)
Step 3: as shown in fig. 4, with the aid of image pose, a sampling point x on a view ray r is input into a geometric network f g to obtain a corresponding geometric feature vector z and a Sign Distance Function (SDF) value
Step 4: decoupling color networks using viewing anglesViewing angle coupled color network/>Obtaining a view decoupling color/>, corresponding to a sampling point x, from an output value of a geometric networkAnd viewing angle coupled color/>
Step 4-1: according to definition of normal vector in space, obtaining corresponding normal vector by deflecting the obtained SDF value
Step 4-2: according to the view angle coupling relation, the normal vector is calculatedThe geometric feature vector z and the view angle v are connected and integrated, and a corresponding color network is input to obtain the view angle decoupling color/>And viewing angle coupled color/>Adding the two colors to obtain the color/>, corresponding to the sampling point x
Step 5: according to volume rendering (equation 2-equation 4), the SDF predictor is combinedIntegrating the color of the sampling point on the sampling light ray r and the normal vector to obtain the rendering color/>, corresponding to the sampling light rayRendering perspective decoupled colorsRendering normal vector/>Rendering depth/>
Step 6: as shown in FIG. 5, similar to steps 3-5, a virtual ray r v is randomly generated around the sampled ray r and the corresponding rendered color is predicted using the networkRendering perspective decoupling color/>Rendering normal vector/>Rendering depth/>
Step 7: using given monitor signal to calculate corresponding loss function value;
Step 7-1: using a given two-dimensional image, supervising the rendering of colors
Step 7-2: normal vector estimation using modelSupervision rendering normal vector/>
Step 7-3: according to the depth geometric consistency in the three-dimensional space, the rendering depth of the current sampling light ray r and the virtual light ray r v is monitored;
Step 7-4: according to the color consistency of the decoupling colors of the viewing angles under different viewing angles, the rendering colors of the current sampling light ray r and the virtual light ray r v are monitored; in addition, as only a small amount of light decoupling colors exist in the actual scene, the light decoupling colors are increased Is regularized by L1;
Step 7-5: in order to further improve the prediction accuracy of the SDF value, eikonal regularization is carried out on the SDF value;
Step 8: as shown in FIG. 3 and FIG. 5, according to whether the normal vectors rendered by the current model satisfy normal-vector consistency across multiple viewing angles and whether the views are occluded, the corresponding ray masks are obtained to adaptively select different loss functions for back-propagation and to optimize the model parameters;
Step 8-1: if the current number of training iterations n does not exceed the set value n_t, the rendered colors and rendered normal vectors of all rays are supervised to form the overall loss function;
Step 8-2: if the current number of training iterations n exceeds the set value n_t, the corresponding ray masks are computed and different supervision signals are selected to guide the training of the model;
Step 8-2-1: an adaptive consistency mask is computed according to whether the difference between the rendered normal vectors of the current sampled ray r and the virtual ray r_v is smaller than a set threshold;
Step 8-2-2: the validity mask of the virtual ray r_v is computed according to whether the SDF value at the origin o_v of the virtual ray r_v is greater than zero, i.e. whether o_v lies outside the object;
Step 8-2-3: whether a line of sight is occluded is judged according to sign changes of the SDF values predicted along the sampling points of the ray, and the occlusion mask of each ray (including the current sampled ray r and the virtual ray r_v) is computed;
Step 8-2-4: the computed ray masks are combined to obtain a ray mask for rays that have a valid virtual view and no line-of-sight occlusion but do not satisfy multi-view rendered-normal consistency, and a ray mask for rays that do satisfy this consistency;
Step 8-2-5: using the computed ray masks M_v and M_r, the overall loss function is calculated;
Step 8-3: the parameters of the geometric network f_g and the two color networks are optimized using the loss function;
Step 9: the current number of training iterations is checked; if it is smaller than the set number N, Steps 3 to 8 are repeated; otherwise, training is stopped;
Step 10: the SDF values of spatial points are predicted with the trained model, and the geometric surface of the scene is extracted with the Marching Cubes algorithm.
In practical validation on the ScanNet and Replica datasets, this embodiment achieves more accurate reconstruction results than existing methods. The evaluation metrics are mainly Chamfer-L1 and F-score. Chamfer-L1 measures the difference between the reconstruction and the ground truth, and lower is better; F-score measures the quality of the reconstruction, and higher is better. On the ScanNet dataset, four real indoor scenes are randomly selected for reconstruction, the evaluation metrics are averaged and compared with three existing reconstruction methods, with the results shown in Table 1; on the Replica dataset, five synthetic indoor scenes are randomly selected for reconstruction, the metrics of each scene are compared with the MonoSDF method, and the average is taken, with the results shown in Table 2. A sketch of how these two metrics can be computed is given after Table 2.
Table 1. Quantitative comparison on the ScanNet dataset
Method          Chamfer-L1 ↓    F-score ↑
NeuralRecon     0.084           0.595
NeuRIS          0.050           0.692
MonoSDF (MLP)   0.042           0.733
Ours            0.040           0.780
Table 2. Quantitative comparison on the Replica dataset
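For reference, the two metrics can be computed between point clouds sampled from the reconstructed and ground-truth meshes as sketched below; the 5 cm F-score threshold is a common convention for indoor benchmarks and is an assumption here, not a value stated in this disclosure.

import numpy as np
from scipy.spatial import cKDTree

def chamfer_l1_and_fscore(pred_pts, gt_pts, tau=0.05):
    # pred_pts, gt_pts: (N, 3) and (M, 3) point clouds in meters.
    d_pred = cKDTree(gt_pts).query(pred_pts)[0]   # prediction -> ground truth distances
    d_gt   = cKDTree(pred_pts).query(gt_pts)[0]   # ground truth -> prediction distances
    chamfer = 0.5 * (d_pred.mean() + d_gt.mean()) # Chamfer-L1 (lower is better)
    precision = (d_pred < tau).mean()
    recall    = (d_gt  < tau).mean()
    fscore = 2 * precision * recall / max(precision + recall, 1e-8)  # higher is better
    return chamfer, fscore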
Example 2
The present embodiment relates to an adaptive-mask-based neural radiance field three-dimensional reconstruction device, comprising a memory and one or more processors, where the memory stores executable code and the one or more processors, when executing the executable code, implement the adaptive-mask-based neural radiance field three-dimensional reconstruction method of Embodiment 1.
Example 3
The present embodiment relates to a computer-readable storage medium having a program stored thereon which, when executed by a processor, implements the adaptive-mask-based neural radiance field three-dimensional reconstruction method of Embodiment 1.
The embodiments described in this specification are merely examples of implementation forms of the inventive concept. The scope of protection of the present invention should not be construed as being limited to the specific forms set forth in the embodiments; it also covers equivalent technical means that those skilled in the art can conceive based on the inventive concept.

Claims (7)

1. A neural radiance field three-dimensional reconstruction method based on an adaptive mask, characterized by comprising the following steps:
Step 1: acquiring multi-view images {I_k}_{k=1…n} of an indoor scene with a camera, and obtaining the pose {P_k}_{k=1…n} corresponding to each image by an SfM (Structure from Motion) method;
Step 2: estimating the normal vector map corresponding to each image using an existing monocular image normal vector estimation model;
Step 3: with the aid of the image poses, inputting a sampling point x on a sampled ray r into a geometric network f_g to obtain the corresponding geometric feature vector z and Signed Distance Function (SDF) value;
Step 4: using a view-decoupled color network and a view-coupled color network, obtaining the view-decoupled color and the view-coupled color corresponding to the sampling point x from the output of the geometric network;
Step 5: according to volume rendering (Equations 1 to 3), combining the predicted SDF values to integrate the colors and normal vectors of the sampling points along the sampled ray r, obtaining the rendered color, rendered view-decoupled color, rendered normal vector and rendered depth corresponding to the sampled ray;
Step 6: similarly to Steps 3 to 5, randomly generating a virtual ray r_v near the sampled ray r, and using the networks to predict its corresponding rendered color, rendered view-decoupled color, rendered normal vector and rendered depth;
Step 7: calculating a corresponding loss function value by using a given supervisory signal;
Step 8: according to whether the rendering normal vector of the current model meets the normal vector consistency under multiple view angles and whether the view angles are shielded, obtaining corresponding masks (masks) and adaptively selecting different loss functions for back propagation, and optimizing model parameters;
step 9: checking the current training iteration number N, and repeating the steps 3-8 if the current training iteration number N is smaller than the set number N; otherwise, stopping training;
step 10: predicting and acquiring SDF values of the space points by using the trained model; and extracting the geometric surface of the scene by combining a Matching Cube algorithm.
2. The adaptive-mask-based neural radiance field three-dimensional reconstruction method according to claim 1, wherein the monocular image normal vector estimation model of Step 2 is an Omnidata model.
3. The adaptive-mask-based neural radiance field three-dimensional reconstruction method according to claim 1, wherein Step 4 specifically comprises:
Step 4-1: for the obtained SDF value, taking its partial derivatives with respect to the spatial coordinates according to the definition of the normal vector in space to obtain the corresponding normal vector;
Step 4-2: depending on whether the color is coupled to the viewing angle, concatenating the normal vector, the geometric feature vector z and the viewing direction v, feeding the result into the corresponding color network to obtain the view-coupled color and the view-decoupled color, and adding the two colors to obtain the color corresponding to the sampling point x.
4. The adaptive-mask-based neural radiance field three-dimensional reconstruction method according to claim 1, wherein Step 7 specifically comprises:
Step 7-1: using the given two-dimensional images, supervising the rendered color;
Step 7-2: using the normal vectors estimated by the monocular model, supervising the rendered normal vector;
Step 7-3: according to depth geometric consistency in three-dimensional space, supervising the rendered depths of the current sampled ray r and the virtual ray r_v;
Step 7-4: according to the consistency of the view-decoupled colors under different viewing angles, supervising the rendered view-decoupled colors of the current sampled ray r and the virtual ray r_v; in addition, since only a small amount of illumination-dependent color exists in a real scene, applying an L1 regularization to this color component;
Step 7-5: to further improve the prediction accuracy of the SDF values, applying Eikonal regularization to them.
5. The adaptive-mask-based neural radiance field three-dimensional reconstruction method according to claim 1, wherein Step 8 specifically comprises:
Step 8-1: if the current number of training iterations n does not exceed the set value n_t, supervising the rendered colors and rendered normal vectors of all rays to form the overall loss function;
Step 8-2: if the current number of training iterations n exceeds the set value n_t, computing the corresponding ray masks and selecting different supervision signals to guide the training of the model;
Step 8-2-1: computing an adaptive consistency mask according to whether the difference between the rendered normal vectors of the current sampled ray r and the virtual ray r_v is smaller than a set threshold;
Step 8-2-2: computing the validity mask of the virtual ray r_v according to whether the SDF value at the origin o_v of the virtual ray r_v is greater than zero, i.e. whether o_v lies outside the object;
Step 8-2-3: judging whether a line of sight is occluded according to sign changes of the SDF values predicted along the sampling points of the ray, and computing the occlusion mask of each ray (including the current sampled ray r and the virtual ray r_v);
Step 8-2-4: combining the computed ray masks to obtain a ray mask for rays that have a valid virtual view and no line-of-sight occlusion but do not satisfy multi-view rendered-normal consistency, and a ray mask for rays that do satisfy this consistency;
Step 8-2-5: computing the overall loss function using the ray masks;
Step 8-3: using the loss function to optimize the parameters of the geometric network f_g and the two color networks.
6. An adaptive-mask-based neural radiance field three-dimensional reconstruction device, comprising a memory and one or more processors, the memory having executable code stored therein; when executing the executable code, the one or more processors implement the adaptive-mask-based neural radiance field three-dimensional reconstruction method of any one of claims 1-5.
7. A computer-readable storage medium having a program stored thereon which, when executed by a processor, implements the adaptive-mask-based neural radiance field three-dimensional reconstruction method of any one of claims 1-5.

Priority Applications (1)

Application number: CN202410039587.0A
Priority date: 2024-01-10
Filing date: 2024-01-10
Title: Neural radiance field three-dimensional reconstruction method and device based on adaptive mask


Publications (1)

Publication number: CN117934710A
Publication date: 2024-04-26

Family

ID=90760728



Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination