CN111583131B - Defogging method based on binocular image - Google Patents

Defogging method based on binocular image

Info

Publication number
CN111583131B
CN111583131B (application CN202010300709.9A)
Authority
CN
China
Prior art keywords
image
binocular
transmission
foggy
atmospheric
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN202010300709.9A
Other languages
Chinese (zh)
Other versions
CN111583131A (en)
Inventor
聂晶 (Nie Jing)
庞彦伟 (Pang Yanwei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN202010300709.9A priority Critical patent/CN111583131B/en
Publication of CN111583131A publication Critical patent/CN111583131A/en
Application granted granted Critical
Publication of CN111583131B publication Critical patent/CN111583131B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/73 Deblurring; Sharpening
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4007 Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a defogging method based on binocular images, comprising the following steps: first, a binocular foggy picture database is constructed by calculating the discontinuous distance map of the left image, setting three fog concentrations according to the atmospheric scattering model with the left and right distance maps known, randomly selecting a value from [0.7, 1] as the atmospheric light parameter, and synthesizing a binocular foggy data set; second, a transmission map estimation network and an atmospheric light parameter prediction network are designed; third, the two networks are trained simultaneously on the data set constructed in the first step, yielding a defogging model based on binocular images.

Description

Defogging method based on binocular image
Technical Field
The invention belongs to the field of deep learning and computer vision, and particularly relates to a binocular-image-based defogging method using a deep convolutional neural network.
Background
Severe weather such as fog, rain, and smoke seriously degrades picture quality, and poor-quality pictures greatly reduce the performance of computer vision tasks such as image-based object detection. In unmanned driving, object detection plays an important role in route planning and decision making. 2D object detection frames an object's position and identifies its type, while 3D object detection additionally recovers distance information. During driving, the vehicle decelerates to avoid a pedestrian detected ahead and stops when a red light is detected. In this process, detecting obstacles, pedestrians, and traffic lights is essential, and obtaining their distance information is particularly critical to driving decisions, so 3D object detection is an important subject in unmanned driving. Current 3D object detection techniques fall broadly into two categories: lidar-based 3D object detection and binocular-camera-based object detection. Lidar is costly and its perception range is relatively limited (within 100 m), whereas a binocular camera is cheap, the depth predicted from the disparity between the left and right images is more accurate than a monocular camera's, and the perception range is proportional to the cameras' focal length and the baseline between them, so binocular 3D detection has broader application prospects [1]. In severe weather such as heavy fog, however, the input binocular images are of poor quality, which seriously degrades 3D detection accuracy. Recovering high-quality fog-free binocular images in severe weather is therefore particularly important for unmanned driving.
Some existing defogging algorithms handle only monocular images. They are usually based on the atmospheric scattering model:
I(x) = J(x)·t(x) + A(1 − t(x))
where I(x) is the foggy image, J(x) is the clear picture, and t(x) is the transmission map, with t(x) = e^(−β·d(x)); d(x) is the distance map from the object to the camera imaging point, and β is a constant, with larger β meaning denser fog, so the transmission map t(x) is determined by the distance d(x). A is the atmospheric light parameter.
Existing deep-learning-based monocular defogging algorithms take a foggy picture as input, predict the transmission map t(x) with a convolutional neural network, predict the atmospheric parameter A with a traditional method or a shallower neural network, and finally obtain the defogged picture using
J(x) = (I(x) − A(1 − t(x))) / t(x)
Since t(x) is primarily determined by the distance of the object from the camera imaging point, depth information helps estimate it; binocular images, whose disparity encodes depth, are therefore naturally suited to defogging.
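For concreteness, the forward model and its inversion can be written out in a few lines. The following is a minimal NumPy sketch (not from the patent); the helper names add_fog and remove_fog and all parameter values are illustrative assumptions:

```python
import numpy as np

def add_fog(J, d, beta=0.05, A=0.85):
    """Synthesize a foggy image from clear image J (H x W x 3 in [0, 1]) and
    distance map d (H x W, meters), per I = J*t + A*(1 - t)."""
    t = np.exp(-beta * d)[..., None]          # transmission t(x) = exp(-beta * d(x))
    return J * t + A * (1.0 - t), t

def remove_fog(I, t, A=0.85, t_min=0.05):
    """Invert the model: J = (I - A*(1 - t)) / t; t is clamped because the
    inversion amplifies noise as t approaches 0."""
    t = np.clip(t, t_min, 1.0)
    return np.clip((I - A * (1.0 - t)) / t, 0.0, 1.0)

# Toy round trip: a gray scene whose distance grows from 5 m to 40 m left to right.
J = np.full((4, 8, 3), 0.5)
d = np.tile(np.linspace(5, 40, 8), (4, 1))
I, t = add_fog(J, d)
assert np.allclose(remove_fog(I, t), J, atol=1e-6)
```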
Reference documents:
[1] Peiliang Li, Xiaozhi Chen, and Shaojie Shen. Stereo R-CNN based 3D Object Detection for Autonomous Driving. In Proc. CVPR 2019.
[2] Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. The Cityscapes Dataset for Semantic Urban Scene Understanding. In Proc. CVPR 2016.
Disclosure of Invention
Considering that binocular images can be used to predict depth information, and that practical applications demand defogging of binocular images, the invention provides a defogging method capable of recovering clear, high-quality binocular images. To this end, the technical scheme of the invention is as follows:
a defogging method based on binocular images comprises the following steps:
In the first step, a binocular foggy picture database is constructed. For pictures in an existing database, the discontinuous distance map of the left image is calculated and completed using Image Inpainting; the discontinuous distance map of the right image is obtained correspondingly from the disparity relation between the left and right images and likewise completed using Image Inpainting. With the left and right distance maps known, three fog concentrations are set according to the atmospheric scattering model, a value is randomly selected from [0.7, 1] as the atmospheric light parameter, and a binocular foggy data set is synthesized and divided into a training data set and a test data set. The labels of the training set are the clear binocular images, the left and right transmission maps of the corresponding pictures, and the atmospheric light parameter.
In the second step, a transmission map estimation network and an atmospheric light parameter prediction network are designed respectively:
The transmission map estimation network comprises a feature extraction module with shared convolution parameters, a binocular transmission module, and a transmission map prediction module. The feature extraction module is an encoder-decoder: it first downsamples with 5 convolutional layers and pooling, then upsamples back to the original image size with bilinear interpolation and 4 convolutional layers, and fuses the features output by convolutional layers of the same resolution through skip connections to obtain robust, effective features. The foggy left and right images are fed into two identical feature extraction modules sharing convolution parameters to obtain left and right features, which are then fed into the binocular transmission module; by learning a relation matrix in the horizontal direction, the disparity relation between the two views is exploited to fuse depth information and predict the transmission maps more accurately.
The atmospheric light parameter prediction network takes only the foggy left image as input. It is also an encoder-decoder: 4 convolutional layers with pooling downsample the input by a factor of 4, upsampling and convolutional layers then alternate twice to restore the original image size, and a 3 × 3 convolution predicts the atmospheric light parameter.
In the third step, the transmission map estimation network and the atmospheric light parameter prediction network are trained simultaneously on the data set constructed in the first step: MSE losses are computed for the predicted left transmission map, right transmission map, and atmospheric light parameter; at the same time, an MSE loss is computed for the left and right images recovered according to the atmospheric scattering model. These two kinds of loss jointly optimize the whole network, and a defogging model based on binocular images is trained.
The invention designs a binocular-image-based defogging algorithm using a deep convolutional neural network, improves defogging performance by exploiting the depth information contained in binocular images, and can simultaneously recover a high-quality fog-free left-right image pair. Compared with existing monocular defogging algorithms, the defogged images are clearer and score higher on both subjective and objective evaluations. The recovered high-quality left and right pictures also benefit subsequent tasks such as binocular-image-based 3D detection, which plays an important role in unmanned driving.
Drawings
FIG. 1 Transmission map estimation network
FIG. 2 Atmospheric light parameter prediction network
FIG. 3 Model structure of the binocular-image-based defogging algorithm
FIG. 4 Input foggy binocular image
FIG. 5 Binocular image enhanced using the binocular-image-based defogging algorithm
Detailed Description
To make the technical scheme of the invention clearer, the invention is further explained below with reference to the accompanying drawings. The invention is realized by the following steps.
First, a data set is prepared. Cityscapes is a public data set of urban street scenes [2]: its training set contains 2975 pictures, its validation set 500, and its test set 1525, with pixel-level and object-level labels, so it can be used to study semantic segmentation and object detection tasks. Cityscapes provides left and right images together with disparity labels, which makes it convenient to synthesize a foggy binocular data set. The steps for synthesizing the binocular foggy data set are as follows:
(1) Using the disparity maps and the camera parameters, the discrete depth map depth(i, j) is determined by

depth(i, j) = f_x · B / disparity(i, j)

where f_x is the focal length of the camera, B is the baseline between the left and right cameras, disparity(i, j) is the disparity label, i is the abscissa of the picture, and j is the ordinate.
(2) Using similar triangles and the Pythagorean theorem, the distance d_l from the object to the imaging point of the left image is

d_l(i, j) = depth(i, j) · √((i − c_x)² + (j − c_y)² + f_x²) / f_x

where (c_x, c_y) is the principal point of the camera.
(3) d_l is a discontinuous left distance map, which is completed using Image Inpainting. According to the formula

t_l(x) = e^(−β·d_l(x))

we obtain the transmission map t_l(x) of the left image, where x is a pixel position in the picture. An atmospheric light parameter value A is randomly generated, A ∈ [0.7, 1]; with the following atmospheric scattering model formula, a foggy left image can be synthesized:
I_l(x) = J_l(x)·t_l(x) + A(1 − t_l(x))
where I_l(x) is the synthesized foggy left picture and J_l(x) is the clear left image.
(4) To synthesize the foggy right image, we first need the right distance map d_r. Let (i, j) be a point of d_r; d_r is obtained by warping the left distance map to the right view through the disparity, i.e., for a rectified rig,

d_r(i, j) = d_l(i + disparity(i, j), j)
(5) d_r is still a discontinuous right distance map and is likewise completed using Image Inpainting, which gives the transmission map of the right image

t_r(x) = e^(−β·d_r(x))

where x is a pixel position in the picture. Using the atmospheric light parameter randomly generated in step (3) and the following atmospheric scattering model formula, the foggy right image can be synthesized:
I_r(x) = J_r(x)·t_r(x) + A(1 − t_r(x))
where I_r(x) is the synthesized foggy right picture and J_r(x) is the clear right image.
In this way a binocular foggy image pair is synthesized, with the magnitude of β controlling the fog density: the larger β, the denser the fog. We set β to 0.05, 0.1, and 0.2 to synthesize three fog concentrations, giving 8925 pairs of binocular foggy pictures in the training set, 1500 pairs in the validation set, and 4575 pairs in the test set. A sketch of this synthesis pipeline is given below.
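For reference, here is a compact NumPy sketch of steps (1) through (5) under the assumptions flagged above (the step-(4) warp uses the left-image disparity as an approximation, and the Image Inpainting completion of the distance maps is elided); synthesize_foggy_pair is a hypothetical name:

```python
import numpy as np

def synthesize_foggy_pair(J_l, J_r, disparity, f_x, B, c_x, c_y, beta, A):
    """Sketch of the binocular fog-synthesis pipeline; hole filling of the
    distance maps (Image Inpainting) is assumed to be done separately."""
    H, W = disparity.shape
    i, j = np.meshgrid(np.arange(W), np.arange(H))       # i: abscissa, j: ordinate
    depth = f_x * B / np.maximum(disparity, 1e-6)        # step (1): depth from disparity
    ray = np.sqrt((i - c_x) ** 2 + (j - c_y) ** 2 + f_x ** 2) / f_x
    d_l = depth * ray                                    # step (2): distance to left imaging point
    # step (4): warp the left distance map to the right view (nearest-neighbor gather,
    # using the disparity at the right coordinate as an approximation)
    i_warp = np.clip(np.round(i + disparity).astype(int), 0, W - 1)
    d_r = d_l[j, i_warp]
    t_l = np.exp(-beta * d_l)[..., None]                 # steps (3)/(5): transmission maps
    t_r = np.exp(-beta * d_r)[..., None]
    I_l = J_l * t_l + A * (1.0 - t_l)                    # atmospheric scattering model
    I_r = J_r * t_r + A * (1.0 - t_r)
    return I_l, I_r, t_l, t_r
```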
Second, the transmission map estimation network and the atmospheric light parameter prediction network are trained jointly.
(1) The transmission map estimation network is divided into a feature extraction module with shared convolution parameters, a binocular transmission module, and a transmission map prediction module, as shown in fig. 1. As shown in fig. 1(b), the feature extraction module is an encoder-decoder: it downsamples with 5 convolutional layers and pooling, upsamples back to the original image size with bilinear interpolation and 4 convolutional layers, and fuses the features output by convolutional layers of the same resolution through skip connections to obtain robust, effective features. The foggy left and right images are fed into two identical feature extraction modules sharing convolution parameters to obtain left and right features, which are then fed into the binocular transmission module. Since the left and right images are aligned in the vertical direction but misaligned horizontally due to disparity, a relation matrix is learned in the horizontal direction so that the disparity relation between the two views fuses depth information and the transmission maps are predicted more accurately.
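A minimal PyTorch sketch of such a parameter-shared encoder-decoder is given below, following the 5-conv/3-pool downsampling and 3-bilinear/4-conv upsampling layout recited in claim 1; channel widths and activation choices are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedFeatureExtractor(nn.Module):
    """Encoder-decoder with same-resolution skip fusion; one instance is
    applied to both views so the convolution parameters are shared."""
    def __init__(self):
        super().__init__()
        self.e1 = nn.Conv2d(3, 32, 3, padding=1)
        self.e2 = nn.Conv2d(32, 64, 3, padding=1)
        self.e3 = nn.Conv2d(64, 128, 3, padding=1)
        self.e4 = nn.Conv2d(128, 128, 3, padding=1)
        self.e5 = nn.Conv2d(128, 128, 3, padding=1)
        self.d1 = nn.Conv2d(128 + 128, 128, 3, padding=1)   # fuse skip from e3
        self.d2 = nn.Conv2d(128 + 64, 64, 3, padding=1)     # fuse skip from e2
        self.d3 = nn.Conv2d(64 + 32, 32, 3, padding=1)      # fuse skip from e1
        self.d4 = nn.Conv2d(32, 32, 3, padding=1)

    def forward(self, x):
        s1 = F.relu(self.e1(x))                     # H x W
        s2 = F.relu(self.e2(F.max_pool2d(s1, 2)))   # H/2
        s3 = F.relu(self.e3(F.max_pool2d(s2, 2)))   # H/4
        x = F.relu(self.e4(F.max_pool2d(s3, 2)))    # H/8
        x = F.relu(self.e5(x))
        up = lambda t: F.interpolate(t, scale_factor=2, mode="bilinear", align_corners=False)
        x = F.relu(self.d1(torch.cat([up(x), s3], 1)))   # back to H/4
        x = F.relu(self.d2(torch.cat([up(x), s2], 1)))   # H/2
        x = F.relu(self.d3(torch.cat([up(x), s1], 1)))   # H
        return F.relu(self.d4(x))                        # full-resolution features
```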
As shown in fig. 1(c), taking the learning of the left transmission map as an example:
① First, the left feature map F_l ∈ R^(B×C×H×W) passes through a 1 × 1 convolutional layer to obtain θ_l (θ_l ∈ R^(B×C×H×W)), whose dimensions are rearranged to give θ_l ∈ R^(BH×W×C). The right feature map F_r ∈ R^(B×C×H×W) passes through two parallel 1 × 1 convolutional layers to obtain φ_r (φ_r ∈ R^(B×C×H×W)) and γ_r (γ_r ∈ R^(B×C×H×W)), and φ_r is rearranged to give φ_r ∈ R^(BH×C×W). The horizontal relation matrix of the right image to the left image, A_(r→l) ∈ R^(BH×W×W), is then obtained as

A_(r→l) = softmax(θ_l ⊗ φ_r)

where ⊗ denotes batch-wise matrix multiplication and softmax is the softmax activation function.
② S_l = W_o(cat(A_(r→l) × γ_r, F_l))
where cat denotes concatenation and W_o denotes a 1 × 1 convolutional layer. A_(r→l) × γ_r (with γ_r correspondingly rearranged) transfers the depth information of the right image to the left image; the result is concatenated with the left feature map F_l and passed through the 1 × 1 convolutional layer to obtain the output feature S_l ∈ R^(B×C×H×W). S_l is fed into the transmission map prediction module (a 3 × 3 convolutional layer) to predict the left transmission map.
The calculation of the right transmission map is entirely symmetric: swapping the left and right feature map inputs and repeating steps ① and ② yields the horizontal relation matrix of the left image to the right image, A_(l→r) ∈ R^(BH×W×W), and the output feature S_r ∈ R^(B×C×H×W), which is fed into the transmission map prediction module (a 3 × 3 convolutional layer) to predict the right transmission map.
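The row-wise attention above can be sketched in PyTorch as follows; the tensor shapes match the text (the relation matrix lives in R^(BH×W×W)), while the layer names and module signature are assumptions:

```python
import torch
import torch.nn as nn

class BinocularTransmission(nn.Module):
    """Horizontal cross-view attention: rows of a rectified stereo pair are
    epipolar-aligned, so attention runs only along the horizontal axis."""
    def __init__(self, c):
        super().__init__()
        self.theta = nn.Conv2d(c, c, 1)   # query from the target view
        self.phi = nn.Conv2d(c, c, 1)     # key from the source view
        self.gamma = nn.Conv2d(c, c, 1)   # value from the source view
        self.w_o = nn.Conv2d(2 * c, c, 1)

    def forward(self, f_tgt, f_src):
        B, C, H, W = f_tgt.shape
        q = self.theta(f_tgt).permute(0, 2, 3, 1).reshape(B * H, W, C)   # (BH, W, C)
        k = self.phi(f_src).permute(0, 2, 1, 3).reshape(B * H, C, W)     # (BH, C, W)
        v = self.gamma(f_src).permute(0, 2, 3, 1).reshape(B * H, W, C)   # (BH, W, C)
        a = torch.softmax(torch.bmm(q, k), dim=-1)                        # (BH, W, W)
        out = torch.bmm(a, v).reshape(B, H, W, C).permute(0, 3, 1, 2)     # (B, C, H, W)
        return self.w_o(torch.cat([out, f_tgt], dim=1))                   # output feature S

# s_l = module(f_l, f_r) attends right-to-left; s_r = module(f_r, f_l) the reverse.
```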
(2) As shown in fig. 2, the atmospheric light parameter prediction network takes only the foggy left image as input and passes through an encoder-decoder structure: 4 convolutional layers with pooling operations downsample by a factor of 4, upsampling and convolutional layers then alternate twice to restore the original size, and a 3 × 3 convolution predicts the atmospheric light parameter.
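A matching PyTorch sketch of this network follows; the text does not say how the per-pixel head output is reduced to the scalar A, so a sigmoid followed by a global average is assumed here, along with the channel widths:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AtmosphericLightNet(nn.Module):
    """4 convs with 2 poolings (4x down), 2 upsample+conv pairs back to full
    resolution, and a 3x3 head predicting the atmospheric light parameter A."""
    def __init__(self):
        super().__init__()
        self.c1 = nn.Conv2d(3, 32, 3, padding=1)
        self.c2 = nn.Conv2d(32, 32, 3, padding=1)
        self.c3 = nn.Conv2d(32, 64, 3, padding=1)
        self.c4 = nn.Conv2d(64, 64, 3, padding=1)
        self.u1 = nn.Conv2d(64, 32, 3, padding=1)
        self.u2 = nn.Conv2d(32, 16, 3, padding=1)
        self.head = nn.Conv2d(16, 1, 3, padding=1)

    def forward(self, x):
        x = F.relu(self.c2(F.max_pool2d(F.relu(self.c1(x)), 2)))  # H/2
        x = F.relu(self.c4(F.max_pool2d(F.relu(self.c3(x)), 2)))  # H/4
        up = lambda t: F.interpolate(t, scale_factor=2, mode="bilinear", align_corners=False)
        x = F.relu(self.u1(up(x)))                                 # H/2
        x = F.relu(self.u2(up(x)))                                 # H
        return torch.sigmoid(self.head(x)).mean(dim=(2, 3))        # scalar A per image
```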
(3) During training, the two networks are trained jointly. According to the atmospheric scattering model,

J_l(x) = (I_l(x) − A(1 − t_l(x))) / t_l(x)

J_r(x) = (I_r(x) − A(1 − t_r(x))) / t_r(x)

the clear, fog-free binocular images are computed. The MSE loss on the predicted left and right transmission maps optimizes the transmission map estimation network by back-propagated gradient descent, and the MSE loss on the atmospheric light parameter optimizes the atmospheric light parameter prediction network in the same way. At the same time, an MSE loss on the left and right images recovered according to the atmospheric scattering model drives joint optimization of the whole network. Finally, a defogging model based on binocular images is trained.
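The joint objective can be sketched as follows; equal weighting of the three MSE terms is an assumption, as the patent gives no loss weights:

```python
import torch
import torch.nn.functional as F

def dehazing_losses(t_l, t_r, A, t_l_gt, t_r_gt, A_gt,
                    I_l, I_r, J_l_gt, J_r_gt, t_min=0.05):
    """MSE on both transmission maps and on A, plus MSE on the images
    recovered by inverting the atmospheric scattering model."""
    a = A.view(-1, 1, 1, 1)                              # broadcast A per image
    J_l = (I_l - a * (1 - t_l)) / t_l.clamp(min=t_min)   # recovered left image
    J_r = (I_r - a * (1 - t_r)) / t_r.clamp(min=t_min)   # recovered right image
    loss_t = F.mse_loss(t_l, t_l_gt) + F.mse_loss(t_r, t_r_gt)
    loss_A = F.mse_loss(A, A_gt)
    loss_J = F.mse_loss(J_l, J_l_gt) + F.mse_loss(J_r, J_r_gt)
    return loss_t + loss_A + loss_J                      # drives joint optimization
```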
Third, the effect of the trained system is tested.
(1) Prepare the test set data, load the designed network structure and the trained network parameters, and feed the foggy binocular images (left and right) of the test set into the trained model in pairs.
(2) The foggy binocular images I_l(x) and I_r(x) pass in turn through the parameter-shared feature extraction module, the binocular transmission module, and the transmission map prediction module of the transmission map estimation network to produce the left transmission map t_l(x) and the right transmission map t_r(x); the foggy left image is fed into the atmospheric light parameter prediction network to estimate the atmospheric light parameter A. The atmospheric scattering model

J_l(x) = (I_l(x) − A(1 − t_l(x))) / t_l(x)

J_r(x) = (I_r(x) − A(1 − t_r(x))) / t_r(x)

then yields the clear, defogged binocular picture.
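Wiring the sketches above together, inference might look like this; dehaze_pair and the module names are hypothetical, not the patent's identifiers:

```python
import torch

@torch.no_grad()
def dehaze_pair(extractor, transmission, t_head, a_net, I_l, I_r, t_min=0.05):
    """End-to-end inference sketch combining the module sketches above."""
    f_l, f_r = extractor(I_l), extractor(I_r)              # shared-parameter features
    t_l = torch.sigmoid(t_head(transmission(f_l, f_r)))    # left transmission map
    t_r = torch.sigmoid(t_head(transmission(f_r, f_l)))    # right transmission map
    a = a_net(I_l).view(-1, 1, 1, 1)                       # A from the left view only
    J_l = (I_l - a * (1 - t_l)) / t_l.clamp(min=t_min)     # invert the scattering model
    J_r = (I_r - a * (1 - t_r)) / t_r.clamp(min=t_min)
    return J_l.clamp(0, 1), J_r.clamp(0, 1)

# e.g.: t_head = torch.nn.Conv2d(32, 1, 3, padding=1)
#       J_l, J_r = dehaze_pair(SharedFeatureExtractor(), BinocularTransmission(32),
#                              t_head, AtmosphericLightNet(), I_l, I_r)
```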

Claims (1)

1. A defogging method based on binocular images, comprising the following steps:
in the first step, a binocular foggy picture database is constructed: for pictures in an existing database, the discontinuous distance map of the left image is calculated and completed using Image Inpainting; the discontinuous distance map of the right image is obtained correspondingly from the disparity relation between the left and right images and likewise completed using Image Inpainting; with the left and right distance maps known, three fog concentrations are set according to the atmospheric scattering model, a value is randomly selected from [0.7, 1] as the atmospheric light parameter, and a binocular foggy data set is synthesized and divided into a training data set and a test data set; the labels of the training set are the clear binocular images, the left and right transmission maps of the corresponding pictures, and the atmospheric light parameter;
in the second step, a transmission map estimation network and an atmospheric light parameter prediction network are designed respectively:
the transmission map estimation network comprises a feature extraction module with shared convolution parameters, a binocular transmission module, and a transmission map prediction module; the feature extraction module with shared convolution parameters is of an encoder-decoder structure: 5 convolutional layers alternately connected with 3 pooling layers perform downsampling; 3 bilinear interpolation layers and 4 convolutional layers, alternately connected in sequence, then perform upsampling to restore the original image size; features output by convolutional layers of the same resolution are fused through skip connections to obtain robust and effective features; the foggy left and right images are input into two identical feature extraction modules sharing convolution parameters to obtain the left and right features, which are then input into the binocular transmission module; by learning a relation matrix in the horizontal direction, the disparity relation of the left and right images is used to better fuse depth information and predict the transmission maps more accurately;
the atmospheric light parameter prediction network takes only the foggy left image as input and is of an encoder-decoder structure: 4 convolutional layers alternately connected with 2 pooling layers downsample by a factor of 4; 2 bilinear upsampling layers and 2 convolutional layers, alternately connected in sequence, then restore the feature map to the original image size, and the atmospheric light parameter is predicted through a 3 × 3 convolution;
in the third step, the transmission map estimation network and the atmospheric light parameter prediction network designed in the second step are trained on the binocular foggy data set obtained in the first step: MSE losses are computed for the predicted left transmission map, right transmission map, and atmospheric light parameter; at the same time, an MSE loss is computed for the left and right images recovered according to the atmospheric scattering model; these two kinds of loss jointly optimize the whole network, and a defogging model based on binocular images is trained.
CN202010300709.9A 2020-04-16 2020-04-16 Defogging method based on binocular image Expired - Fee Related CN111583131B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010300709.9A CN111583131B (en) 2020-04-16 2020-04-16 Defogging method based on binocular image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010300709.9A CN111583131B (en) 2020-04-16 2020-04-16 Defogging method based on binocular image

Publications (2)

Publication Number Publication Date
CN111583131A (en) 2020-08-25
CN111583131B (en) 2022-08-05

Family

ID=72122571

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010300709.9A Expired - Fee Related CN111583131B (en) 2020-04-16 2020-04-16 Defogging method based on binocular image

Country Status (1)

Country Link
CN (1) CN111583131B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113487516B (en) * 2021-07-26 2022-09-06 河南师范大学 Defogging processing method for image data


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9870600B2 (en) * 2015-01-06 2018-01-16 The Regents Of The University Of California Raw sensor image and video de-hazing and atmospheric light analysis methods and systems

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106033597A (en) * 2015-03-17 2016-10-19 杭州海康威视数字技术股份有限公司 Image defogging method and equipment thereof
CN107038718A (en) * 2017-03-31 2017-08-11 天津大学 Depth computing method under haze environment
CN107256562A (en) * 2017-05-25 2017-10-17 山东师范大学 Image defogging method and device based on binocular vision system
CN109919889A (en) * 2019-02-28 2019-06-21 温州大学 A kind of visibility detection algorithm based on binocular parallax

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Research on binocular clear image hazing algorithm based on depth map; Wenjun Song et al.; 2019 14th IEEE Conference on Industrial Electronics and Applications; 2019-09-16; full text *
Single Image Dehazing via Lightweight Multi-scale Networks; Guiying Tang et al.; 2019 IEEE International Conference on Big Data; 2020-02-24; full text *
Research and implementation of image defogging technology; Cui Yunqian; China Master's Theses Full-text Database, Information Science and Technology; 2017-07-15; full text *
Research on haze image sharpening based on binocular stereo matching algorithm; Song Wenjun; China Master's Theses Full-text Database, Information Science and Technology; 2019-12-15; full text *

Also Published As

Publication number Publication date
CN111583131A (en) 2020-08-25

Similar Documents

Publication Publication Date Title
CN112435325B (en) VI-SLAM and depth estimation network-based unmanned aerial vehicle scene density reconstruction method
CN108961327B (en) Monocular depth estimation method and device, equipment and storage medium thereof
AU2017324923B2 (en) Predicting depth from image data using a statistical model
CN108986136B (en) Binocular scene flow determination method and system based on semantic segmentation
Vaudrey et al. Differences between stereo and motion behaviour on synthetic and real-world stereo sequences
CN111209770B (en) Lane line identification method and device
KR101359660B1 (en) Augmented reality system for head-up display
CN108062769B (en) Rapid depth recovery method for three-dimensional reconstruction
CN111563415A (en) Binocular vision-based three-dimensional target detection system and method
CN103458261B (en) Video scene variation detection method based on stereoscopic vision
CN113284173B (en) End-to-end scene flow and pose joint learning method based on false laser radar
Xie et al. Video depth estimation by fusing flow-to-depth proposals
CN115965531A (en) Model training method, image generation method, device, equipment and storage medium
CN111583131B (en) Defogging method based on binocular image
CN113379619B (en) Integrated processing method for defogging imaging, visibility extraction and depth of field estimation
Nie et al. Context and detail interaction network for stereo rain streak and raindrop removal
Shi et al. Stereo waterdrop removal with row-wise dilated attention
CN113706599B (en) Binocular depth estimation method based on pseudo label fusion
CN115359067A (en) Continuous convolution network-based point-by-point fusion point cloud semantic segmentation method
CN111695403B (en) Depth perception convolutional neural network-based 2D and 3D image synchronous detection method
CN110766797B (en) Three-dimensional map repairing method based on GAN
CN113191944A (en) Multi-channel image content feature fusion style migration method and system
Rill Speed estimation evaluation on the KITTI benchmark based on motion and monocular depth information
RILL Intuitive Estimation of Speed using Motion and Monocular Depth Information
Kang et al. Underwater Monocular Vision 3D Reconstruction Based on Cascaded Epipolar

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 2022-08-05