CN116823908B - Monocular image depth estimation method based on multi-scale feature correlation enhancement - Google Patents

Monocular image depth estimation method based on multi-scale feature correlation enhancement

Info

Publication number
CN116823908B
CN116823908B
Authority
CN
China
Prior art keywords
depth
module
feature
feature map
map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310758435.1A
Other languages
Chinese (zh)
Other versions
CN116823908A (en)
Inventor
明悦
韦秋吉
洪开
吕柏阳
赵盼孜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Posts and Telecommunications
Original Assignee
Beijing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Posts and Telecommunications filed Critical Beijing University of Posts and Telecommunications
Priority to CN202310758435.1A priority Critical patent/CN116823908B/en
Publication of CN116823908A publication Critical patent/CN116823908A/en
Application granted granted Critical
Publication of CN116823908B publication Critical patent/CN116823908B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Image Processing (AREA)

Abstract

The invention provides a monocular image depth estimation method based on multi-scale feature correlation enhancement. The method comprises the following steps: performing a data-enhancement preprocessing operation on the input RGB image with a multi-modal RGB-Depth fusion module; extracting a multi-scale feature map from the enhanced data with a multi-scale depth coding module; in the decoding stage, obtaining a fine-grained feature map with an RFF module, enhancing the correlation between features of different scales with an MFCE module, and fusing and optimizing the feature maps by combining the RFF and MFCE modules to obtain a pixel-by-pixel depth map; and optimizing the training of the whole monocular depth estimation network model through a depth characterization objective function to ensure generalization capability. The method enhances the correlation between global features and local features, learns effective appearance-structure information, alleviates the false estimation of appearance structure caused by texture bias, and reconstructs a clear, dense monocular depth map.

Description

Monocular image depth estimation method based on multi-scale feature correlation enhancement
Technical Field
The invention relates to the technical field of image processing, in particular to a monocular image depth estimation method based on multi-scale feature correlation enhancement.
Background
Depth estimation aims to recover depth-of-field information from images. It is an important research direction in computer vision and has been widely applied to three-dimensional reconstruction, robot navigation, automatic driving, and other fields. With the progress of deep learning, depth estimation methods based on convolutional neural networks (Convolutional Neural Network, CNN) have gradually become an important research focus in this field. Depth estimation can be broadly divided into monocular depth estimation (Monocular Depth Estimation), binocular depth estimation (Stereo Depth Estimation), and multi-view depth estimation (Multi-view Depth Estimation). Compared with binocular and multi-view depth estimation, monocular depth estimation needs only one camera to complete the initial image acquisition, which reduces acquisition cost and equipment complexity and better meets the requirements of practical applications. However, recovering three-dimensional scene depth from a single two-dimensional image is inherently ambiguous and admits multiple interpretations, making monocular depth estimation an ill-posed problem with an intrinsic scale ambiguity that makes depth recovery challenging. In recent years, more and more researchers have focused on depth estimation from monocular images, and this task has gradually become both a research hotspot and a research difficulty in the field of image depth estimation.
Monocular depth estimation has great application value in practical scenarios. In automatic driving systems, it helps a vehicle perceive the surrounding environment, including detecting the distance of obstacles ahead and estimating the depth of the road, to ensure safe driving. In augmented reality applications, it allows virtual objects to interact accurately with the real world; by estimating the depth of objects in the scene, accurate positioning and occlusion of virtual objects can be achieved, providing a more realistic augmented reality experience. In human-computer interaction, such as gesture recognition and pose estimation, analyzing the depth of the human body in space allows the system to recognize gestures or body poses, enabling natural and intuitive user-interface operation. In video surveillance systems, it provides more accurate scene analysis and object tracking; estimating object depth helps the system better understand spatial relationships in the scene for scene recognition, anomaly detection, and security monitoring. Monocular depth estimation is also very useful for robot navigation and environment perception: by estimating the depth of objects and obstacles, a robot can plan paths, avoid obstacles, and navigate accurately and safely.
Texture bias causes false estimation of the appearance structure. Because object textures in real scenes are complex and unevenly distributed, local regions with rich texture are more easily captured by a network model. When performing monocular depth estimation, most existing CNN (Convolutional Neural Network) methods tend to focus on local texture features and ignore global structure information, which easily introduces texture bias into the predicted depth map. In practical applications, this affects the judgment of devices such as robots about the actual distance to objects.
The powerful image processing capability of deep neural networks has improved the performance of depth estimation in recent years and provides an end-to-end solution for monocular depth estimation. According to the algorithm pipeline of monocular depth estimation, the methods can be divided into data preprocessing methods, depth feature encoding methods, and depth feature decoding methods.
The data preprocessing method comprises the following steps: the data preprocessing of the monocular depth estimation is optimized and adjusted for the input image to better perform the subsequent depth estimation tasks. These operations include scaling, normalization, data enhancement, etc., which help reduce noise, improve model generalization ability and robustness, while ensuring that the input requirements of the deep learning model are met. In recent years, many preprocessing works have focused on data enhancement, super resolution, and the like to improve the quality and diversity of input images.
Depth feature encoding method: depth feature encoding refers to extracting depth-related feature representations from an input image; these features are fed into a subsequent depth estimation module, such as a decoder or regression module, to predict a depth map. Traditional approaches rely mainly on hand-designed algorithms. Common methods include SIFT (Scale-Invariant Feature Transform), SURF (Speeded-Up Robust Features), and ORB (Oriented FAST and Rotated BRIEF, a fast feature point extraction and description algorithm). These algorithms detect key points in the image, compute the corresponding feature descriptors, find corresponding points by feature matching, and use the matched points to calculate the depth of objects in three-dimensional space. However, because of their limited representation capability, these traditional methods do not provide sufficient discrimination under complex scenes and illumination changes. Deep learning methods automatically extract image features in a hierarchical manner and have stronger representation capability and higher accuracy. Depth feature encoding is now mostly done automatically by CNNs and Transformers, which learn abstract, hierarchical feature representations from the input image. Depth feature encoding methods can be broadly divided into two categories:
(1) A convolutional neural network-based encoding method;
(2) A Transformer-based encoding method. CNN-based depth feature encoding extracts features from the input image through convolution layers, activation functions, and pooling layers, and then extracts high-level semantic information by gradually adjusting the convolution kernel size and the number of channels. Transformer-based encoding divides the input image into non-overlapping image patches, linearly embeds each patch into a vector, processes the vectors with a self-attention mechanism and positional encoding, and finally performs feature extraction and depth estimation through multiple Transformer layers.
The depth feature decoding method comprises the following steps: the depth feature decoding process refers to mapping the high-dimensional features extracted by the encoder to a depth space to generate a depth prediction map. The decoding process typically involves upsampling, fusion, and reconstruction operations. First, an up-sampling operation is performed on the feature map to increase its size to be the same as or close to the input image size. Then, the up-sampled feature map is fused to capture multi-scale information.
At present, one monocular image depth estimation method in the prior art is a data preprocessing method. For the monocular depth estimation task, many recent preprocessing works have focused on data enhancement, super-resolution, and similar techniques to improve the quality and diversity of the input image. Some researchers enhance data using the original image together with its horizontally flipped copy. Others encourage the model to adapt to super-resolution applied to image regions, pasting low-resolution images onto the same region of high-resolution images, or pasting parts of high-resolution images onto the same location of low-resolution images, to reduce image distortion. Others introduced the CutMix enhancement strategy, in which local image patches are acquired in a "cut-and-paste" manner and ground-truth depth labels are mixed into the patches proportionally to increase diversity, exploiting the regularization effect of preserving pixels in the training region. Still others proposed a data enhancement method for instance segmentation in which copied instance objects are randomly pasted at arbitrary positions on the image (Copy-Paste augmentation), improving robustness without increasing training cost.
Although the preprocessing methods described above increase the diversity of images through data enhancement, they tend to introduce problems such as over-sharpening or destruction of the image geometry. The adaptive super-resolution method mentioned above increases the number of image samples but changes the image appearance only slightly, while increasing the risk of over-sharpening and therefore the depth estimation error. The "cut-and-paste" data enhancement methods greatly change the image appearance but also destroy the geometric structure in the image, reducing the stability of model training.
The drawbacks of this class of monocular image depth estimation methods include: although preprocessing such as exposure correction, feature point matching, or image rotation and cropping can improve the quality of the input samples, these methods cannot overcome the structural limitations of RGB images, nor can they reduce the interference of irrelevant details caused by dense regions in the images, so the subsequent depth feature encoding remains insufficient.
Another monocular image depth estimation method in the prior art is a depth feature decoding method combined with a convolutional neural network. Researchers proposed a decoding network based on fast upsampling, but its convolution kernels are small and its receptive field limited, and during feature decoding it uses only simple bilinear interpolation to increase the resolution of the depth map, so many depth features are lost. To reduce this loss, skip connections were added between decoding layers and the corresponding encoding layers, fusing the coarse depth maps in the decoder with the fine spatial feature maps in the encoder, which strengthens the mapping and expression of depth features during decoding and improves depth estimation accuracy. Besides skip connections, others use two different modules in a multi-scale feature fusion network architecture: the first module convolves with filters of different sizes and merges all individual feature maps; the second module uses dilated convolution instead of fully connected layers, reducing computation and enlarging the receptive field. However, these feature fusion methods do not sufficiently eliminate features with low correlation, so the utilization of low-level features in the predicted depth map is not always improved enough.
The drawbacks of another monocular image depth estimation method in the above prior art include: although the depth feature decoding method based on the convolutional neural network greatly improves the precision of the pixel level in monocular depth estimation, CNN mainly depends on a local perception mechanism, so that the correlation between global features and local features is insufficient, and the problem of global appearance structure information loss still exists in the feature learning process. Furthermore, downsampling operations in the encoder-decoder architecture result in loss of detail information, making integration of global features and local features difficult. As the number of network layers increases, extraneous detail features are continually passed along the feature fusion process, thereby exacerbating the texture bias.
Disclosure of Invention
The embodiment of the invention provides a monocular image depth estimation method based on multi-scale feature correlation enhancement, which is used for effectively extracting depth information of a monocular image.
In order to achieve the above purpose, the present invention adopts the following technical scheme.
A monocular image depth estimation method based on multi-scale feature correlation enhancement, comprising:
Performing data enhancement preprocessing operation on the input RGB image by utilizing a multi-mode RGB-Depth fusion module;
Extracting a multi-scale characteristic map after data enhancement by using a multi-scale depth coding module;
In the decoding stage, an RFF module is used for acquiring a fine-grained feature map from the multi-scale feature map, an MFCE module is used for enhancing the correlation between features of different scales in the multi-scale features, and the RFF module and the MFCE module are combined to fuse and optimize the feature maps and obtain the pixel-by-pixel depth map of the input RGB image.
Preferably, the preprocessing operation for data enhancement on the input RGB image by using the multi-mode RGB-Depth fusion module includes:
The multi-modal RGB-Depth fusion module fuses the ground-truth depth map into the RGB image in a slice-wise manner, randomly selecting parts of the depth map in the horizontal and vertical directions and pasting them onto the same positions of the color image. Let x_s ∈ ℝ^{W×H×C_s} denote the RGB image, which together with the pasted depth slices forms an RGB-D image carrying depth information, and let x_t ∈ ℝ^{W×H×C_t} denote the ground-truth depth map, where W and H are the width and height of the image and C_s and C_t are the numbers of channels of the RGB image and of the ground-truth depth map, respectively. The data-enhanced image x'_s is expressed as:
x'_s = M × x_s + (1 − M) × x_t    (1)
If C_s and C_t differ, the RGB image and the ground-truth depth map are combined along the channel direction so that their numbers of channels match. The matrix M (M ∈ {0, 1}) marks the area in which x_s is replaced by x_t. The width and height (w, h) and the position of the replacement area are expressed as:
(w, h) = (min((W − a×W) × c × p, 1), min((H − a×H) × c × p, 1))    (2)
image[x:x+w,:,i]=depth[x:x+w,:] (3)
image[:,y:y+h,i]=depth[:,y:y+h] (4)
where x = a×w, y = a×h, i indexes the three channels of the RGB image, a and c are coefficients in the range (0, 1), and p is a hyperparameter (p ∈ (0, 1)).
Preferably, the decoding stage uses the RFF module to obtain a fine-grained feature map, uses the MFCE module to enhance correlation of features between different scales in the multi-scale features, fuses and optimizes the feature map by combining the RFF module and the MFCE module, and obtains a pixel-by-pixel depth map, including:
Assume the multi-scale features include a low-resolution feature map F_1 and a higher-resolution feature map F_2. The RFF module raises the resolution of F_1 to that of F_2 by bilinear-interpolation upsampling and concatenates F_1 and F_2 along the same dimension to obtain a feature map F_3. F_3 is then convolved by two branches to obtain features with different receptive fields: the upper branch extracts features with a 3×3 two-dimensional convolution, normalizes the input data with a BatchNorm layer, and finally adds non-linearity between network layers with a ReLU activation function; the lower branch extracts features with a 5×5 two-dimensional convolution and is normalized by BatchNorm. The features obtained by the upper and lower branches are fused to obtain the fused feature map F_RFF:
F_3 = Cat(Up(F_1, F_2))    (5)
F_RFF = Cov_{5,5}(Cov_{3,3}(F_3)) + Cov_{5,5}(F_3)    (6)
where Up(·) denotes bilinear-interpolation upsampling, and Cov_{3,3}(·) and Cov_{5,5}(·) denote the 3×3 and 5×5 convolutions, respectively;
Let the multi-scale feature map input to the MFCE module be F ∈ ℝ^{W×H×C}, where W and H are the width and height of the feature map and C is its number of channels. The low-resolution feature map F_1 in F and the higher-resolution feature map F_2 are fused by a first RFF module to generate an enhanced feature map F_E. From F_E, features F_E1, F_E2, and F_E3 are extracted through adaptive average pooling layers; F_E1, F_E2, and F_E3 are concatenated along the channel dimension and processed by a 1×1 convolution to form the global feature F_G. F_E is processed in parallel by asymmetric convolutions and a standard convolution to form the feature F_L. F_G and F_L are concatenated along the channel dimension and processed by a 1×1 convolution kernel to obtain the optimized feature map F_MFCE. The calculation of the MFCE module is as follows:
F_E = RFF(F_1, F_2)    (7)
F_Ei = RFF(F_1, AAP_i(F_E)), i = 1, 2, 3    (8)
F_G = Cov_{1,1}(Cat(F_E1, F_E2, F_E3))    (9)
F_L = Cov_{9,1}(Cov_{1,9}(F_E)) + Cov_{3,3}(F_E)    (10)
F_MFCE = Cov_{1,1}(Cat(F_G, F_L))    (11)
where Cov_{n,m}(·) denotes a two-dimensional convolution with an n×m kernel, Cat(·) denotes channel-wise concatenation of feature maps, and RFF denotes the multi-scale feature fusion module;
And outputting the pixel-by-pixel depth map of the input RGB image through an RFF module and an MFCE module.
Preferably, the method further comprises:
Parameters and training processes of the multi-modal RGB-Depth fusion module, the multi-scale Depth coding module, the RFF module, and the MFCE module are optimized by a Depth characterization objective function.
According to the technical scheme provided by the embodiment of the invention, the monocular image depth estimation algorithm with multi-scale feature correlation enhancement not only enhances the features of the input image and provides more geometric and semantic information for the depth estimation model, but also enhances the correlation between global features and local features, learns effective appearance-structure information, alleviates the false estimation of appearance structure caused by texture bias, and reconstructs a clear, dense monocular depth map.
Additional aspects and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a process flow diagram of a monocular image depth estimation method based on multi-scale feature correlation enhancement provided by an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a process of a multi-mode RGB-Depth fusion module according to an embodiment of the present invention;
FIG. 3 is a network structure diagram of a multi-scale depth decoder according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating a process of an RFF module according to an embodiment of the present invention;
fig. 5 is a process flow diagram of an MFCE module according to an embodiment of the present invention.
Detailed Description
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, wherein the same or similar reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below by referring to the drawings are exemplary only for explaining the present invention and are not to be construed as limiting the present invention.
As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless expressly stated otherwise, as understood by those skilled in the art. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or coupled. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
For the purpose of facilitating an understanding of the embodiments of the invention, reference will now be made to several specific embodiments illustrated in the accompanying drawings, which should in no way be taken to limit the embodiments of the invention.
Monocular depth estimation (Monocular Depth Estimation) refers to the process of recovering depth of field information from a single two-dimensional image. Multiscale feature fusion (Multi-scale Feature Fusion) refers to the process of fusing feature maps of different scale sizes in some way.
In order to enrich geometric information and semantic information in monocular images and enhance correlation between global features and local features and solve the problem of false estimation of appearance structures caused by texture deviation, the embodiment of the invention provides a monocular image depth estimation method based on multi-scale feature correlation enhancement, the processing flow is shown in fig. 1, and the flow comprises four processing steps:
and S1, performing data enhancement preprocessing operation on the input RGB image by utilizing a multi-mode RGB-Depth fusion module so as to enhance the input characteristics of the image and realize image correction.
And S2, extracting the preprocessed multi-scale feature map by using a multi-scale depth coding module.
And S3, acquiring a fine-grained feature map according to the Multi-scale feature map by using an RFF (Relevant Feature Fusion, related feature fusion) module in a decoding stage, enhancing the correlation of different scale features in the Multi-scale features by using an MFCE (Multi-scale Feature Correlation Enhancement, multi-scale feature related enhancement) module, fusing and optimizing the feature map by combining the RFF module and the MFCE module, and acquiring a pixel-by-pixel depth map.
And S4, optimizing the training of the whole monocular depth estimation network model through the depth representation objective function, and ensuring generalization capability.
Specifically, step S1 includes: in order to improve the global feature extraction capability of the monocular depth estimation algorithm and alleviate the false estimation of appearance structure caused by texture bias, the method first designs a multi-modal RGB-Depth fusion module in the image preprocessing stage, which introduces an additional depth modality into the original RGB image, relieves the uncertainty of acquiring information directly from the RGB image, and reduces the noise of the input image. Then, in the depth feature decoding stage, a multi-scale feature fusion module and a multi-scale feature correlation enhancement module are designed: the multi-scale feature fusion module fuses receptive fields of different sizes and enhances the correlation between features; the multi-scale feature correlation enhancement module learns the correlation between global features and local features through a combination of multi-level average pooling layers and multi-level convolution layers, enlarging the receptive field and optimizing the global information.
Fig. 2 is a process flow diagram of the multi-modal RGB-Depth fusion module according to an embodiment of the invention. The module adopts a depth-map-fusion data enhancement method: the ground-truth depth map is fused into the RGB image to form an RGB-D image with depth information, which is used as the input of the network model, improving the diversity of visual information and reducing the noise of the input image. As shown in Fig. 2, the module adopts a slicing idea and randomly selects parts of the depth map in the horizontal and vertical directions to paste onto the same positions of the color image as the input image. Let x_s ∈ ℝ^{W×H×C_s} denote the RGB image, which together with the pasted depth slices forms the RGB-D image, and let x_t ∈ ℝ^{W×H×C_t} denote the ground-truth depth map, where W and H are the width and height of the image and C_s and C_t are the numbers of channels of the input image and of the depth map, respectively. The data-enhanced image x'_s can be expressed as:
x'_s = M × x_s + (1 − M) × x_t    (1)
If C_s and C_t differ, they are combined in advance along the channel direction so that the numbers of channels are identical. The matrix M (M ∈ {0, 1}) marks the area in which x_s is replaced by x_t. The width and height (w, h) and the position of the replacement area can be expressed as:
(w, h) = (min((W − a×W) × c × p, 1), min((H − a×H) × c × p, 1))    (2)
image[x:x+w,:,i]=depth[x:x+w,:] (3)
image[:,y:y+h,i]=depth[:,y:y+h] (4)
where x = a×w, y = a×h, i indexes the three channels of the RGB image, a and c are coefficients in the range (0, 1), and p is a hyperparameter (p ∈ (0, 1)).
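To make the slicing operation of equations (1)-(4) concrete, the following is a minimal NumPy sketch of the depth-map paste augmentation. It is a sketch under stated assumptions, not the patented implementation: the function name, the default values of a, c, and p, the (W, H, C) array layout mirroring the patent's indexing, and the interpretation of the min(·, 1) clamp in equation (2) as a one-pixel floor are all illustrative choices.

```python
import numpy as np

def rgb_depth_paste(rgb, depth, a=0.3, c=0.5, p=0.5):
    """Sketch of the multi-modal RGB-Depth paste augmentation (cf. Eqs. (1)-(4)).

    rgb:   float array of shape (W, H, 3), mirroring the patent's image[x, y, i] indexing
    depth: float array of shape (W, H), the ground-truth depth map
    a, c:  coefficients in (0, 1); p: hyperparameter in (0, 1); the defaults are guesses
    """
    W, H, C = rgb.shape
    fused = rgb.astype(np.float32).copy()

    # Eq. (2): size of the replaced strips. The patent writes min(., 1); it is
    # interpreted here as a floor of one pixel so the strips are non-degenerate.
    w = max(int((W - a * W) * c * p), 1)
    h = max(int((H - a * H) * c * p), 1)
    x, y = int(a * w), int(a * h)          # strip offsets, x = a*w and y = a*h as written

    # Eqs. (3)-(4): copy a vertical and a horizontal strip of the depth map
    # into every channel i of the RGB image, producing an RGB-D style input.
    for i in range(C):
        fused[x:x + w, :, i] = depth[x:x + w, :]
        fused[:, y:y + h, i] = depth[:, y:y + h]
    return fused
```

A call such as rgb_depth_paste(rgb, depth) would then feed the fused array to the encoder in place of the raw RGB image, which realizes the masked mixture of equation (1) without building the matrix M explicitly.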
The network structure of the multi-scale depth decoder of this embodiment is shown in Fig. 3. For the four feature maps of different scales output by the multi-scale depth encoder, an RFF (Relevant Feature Fusion) module fuses the high-resolution feature map 1 and feature map 2 to obtain fine-grained local features, and the MFCE module fuses the low-resolution feature map 3 and feature map 4, learning the correlation between neighboring features and optimizing the global feature representation.
The features output by the RFF module and the MFCE module are then fed into a further RFF module to fuse global and local information; the global features and the local features are concatenated along the channel dimension by a feature-splicing operation, the feature map is restored to the same pixel size as the input image by an upsampling operation, the features are refined by two layers of 3×3 convolution (Conv modules), and finally the feature map is mapped to a depth map by a Sigmoid function.
Fig. 4 is a process flow diagram of the RFF module according to an embodiment of the invention. The RFF module takes feature maps as the input of the network and fuses a low-resolution feature representation with a higher-resolution one. Its network structure is shown in Fig. 4. First, the low-resolution feature map F_1 is upsampled by bilinear interpolation to the same resolution as F_2, and the two are concatenated along the same dimension to obtain a feature map F_3. Then, features with different receptive fields are obtained through two convolution branches: the upper branch extracts features with a 3×3 two-dimensional convolution, normalizes the input data with a BatchNorm layer (which helps stabilize training), and finally adds non-linearity between network layers with a ReLU activation function; the lower branch extracts features with a 5×5 two-dimensional convolution and is normalized by BatchNorm. The features acquired by the upper and lower branches have different receptive fields, and fusing the two yields richer fine-grained information. The computation of the multi-scale feature fusion module can be expressed as:
F_3 = Cat(Up(F_1, F_2))    (5)
F_RFF = Cov_{5,5}(Cov_{3,3}(F_3)) + Cov_{5,5}(F_3)    (6)
where Up(·) denotes bilinear-interpolation upsampling, and Cov_{3,3}(·) and Cov_{5,5}(·) denote the 3×3 and 5×5 convolutions, respectively.
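The RFF computation above can be sketched in PyTorch as follows. This is a sketch under stated assumptions rather than the patented implementation: the text describes a 3×3 branch with BatchNorm and ReLU plus a 5×5 branch with BatchNorm, while equation (6) as written nests a 5×5 convolution after the 3×3 one; the code follows the textual description, and all channel counts are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RFF(nn.Module):
    """Relevant Feature Fusion sketch: upsample F1 to F2's size, concatenate (Eq. (5)),
    then sum a 3x3 branch (BatchNorm + ReLU) and a 5x5 branch (BatchNorm) (cf. Eq. (6))."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        # upper branch: 3x3 conv -> BatchNorm -> ReLU
        self.branch3 = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        # lower branch: 5x5 conv -> BatchNorm
        self.branch5 = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=5, padding=2),
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, f_low, f_high):
        # Eq. (5): bilinear upsampling of the low-resolution map, then channel concat
        f_low = F.interpolate(f_low, size=f_high.shape[-2:], mode="bilinear",
                              align_corners=False)
        f3 = torch.cat([f_low, f_high], dim=1)
        # the two receptive fields are fused by summation
        return self.branch3(f3) + self.branch5(f3)
```

For example, RFF(in_ch=128, out_ch=64)(f1, f2) would fuse a 64-channel low-resolution map f1 with a 64-channel higher-resolution map f2, the summation mixing the two receptive fields as described above.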
Fig. 5 is a flowchart of a MFCE (Multi-scale Feature Correlation Enhancement, multi-scale feature correlation enhancement module) module according to an embodiment of the present invention. In order to enhance the description of the shape information, the MFCE provided by the present invention enhances the expression of local detail information and global shape information by fusing the context information of neighboring features.
The input feature map of this sub-network is denoted F ∈ ℝ^{W×H×C}, where W and H are the width and height of the feature map and C is its number of channels. The input low-resolution feature map F_1 is fused with the higher-resolution feature map F_2 by a first RFF (Relevant Feature Fusion) module to enhance the correlation between features of different resolutions and generate an enhanced feature map F_E, whose dimensions are the same as those of F_2. Second, as shown in Fig. 5, F_E is passed through AAP (Adaptive Average Pooling) layers, which extract important features more efficiently, reduce the tensor size, and transform the image into a low-dimensional space, helping to capture features over a larger range. AAP layers with different kernel sizes adapt to different image sizes to obtain more global shape information, and their outputs, together with F_1, are fed into RFF modules to form the features F_E1, F_E2, and F_E3; here the kernel sizes of the adaptive average pooling layers are 2×2, 4×4, and 6×6, respectively. F_E1, F_E2, and F_E3 are then concatenated along the channel dimension and processed by a 1×1 convolution to form the refined global feature F_G. Meanwhile, to reduce the information redundancy of symmetric convolution and lower the number of parameters and the computation, F_E is processed by asymmetric convolutions in parallel with a standard convolution; the invention uses 1×9 and 9×1 asymmetric convolution kernels to strengthen local key features in different directions. The feature F_L formed in this way increases the diversity of local features and enhances their expressive power. Finally, F_G and F_L are concatenated along the channel dimension to enhance the contextual correlation between image regions, and a 1×1 convolution removes artifacts introduced by the network and better recovers the shape information. The calculation of the multi-scale feature correlation enhancement module is as follows:
F_E = RFF(F_1, F_2)    (7)
F_Ei = RFF(F_1, AAP_i(F_E)), i = 1, 2, 3    (8)
F_G = Cov_{1,1}(Cat(F_E1, F_E2, F_E3))    (9)
F_L = Cov_{9,1}(Cov_{1,9}(F_E)) + Cov_{3,3}(F_E)    (10)
F_MFCE = Cov_{1,1}(Cat(F_G, F_L))    (11)
where Cov_{n,m}(·) denotes a two-dimensional convolution with an n×m kernel, Cat(·) denotes channel-wise concatenation of feature maps, and RFF denotes the relevant feature fusion module.
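Following the same conventions, the MFCE computation of equations (7)-(11) might be sketched as below, reusing the RFF class from the previous sketch. Two points are simplifying assumptions rather than statements of the patented design: equation (8) fuses the pooled features with F_1, whereas here each pooled copy of F_E is fused back with F_E itself so that all intermediate maps share one resolution and equations (9) and (11) can concatenate them directly; and all channel counts are illustrative.

```python
class MFCE(nn.Module):
    """Multi-scale Feature Correlation Enhancement sketch (cf. Eqs. (7)-(11)).
    Assumes F1 and F2 both carry `ch` channels and reuses the RFF sketch above."""

    def __init__(self, ch):
        super().__init__()
        self.rff_in = RFF(2 * ch, ch)                                  # Eq. (7)
        # adaptive average pooling with 2x2, 4x4 and 6x6 output sizes, each followed
        # by its own RFF fusion (cf. Eq. (8), simplified here to fuse with F_E)
        self.pools = nn.ModuleList([nn.AdaptiveAvgPool2d(k) for k in (2, 4, 6)])
        self.rff_pool = nn.ModuleList([RFF(2 * ch, ch) for _ in range(3)])
        self.global_proj = nn.Conv2d(3 * ch, ch, kernel_size=1)        # Eq. (9)
        # Eq. (10): 1x9 then 9x1 asymmetric convolutions, in parallel with a 3x3 conv
        self.asym = nn.Sequential(
            nn.Conv2d(ch, ch, kernel_size=(1, 9), padding=(0, 4)),
            nn.Conv2d(ch, ch, kernel_size=(9, 1), padding=(4, 0)),
        )
        self.std = nn.Conv2d(ch, ch, kernel_size=3, padding=1)
        self.out_proj = nn.Conv2d(2 * ch, ch, kernel_size=1)           # Eq. (11)

    def forward(self, f1, f2):
        fe = self.rff_in(f1, f2)                                       # Eq. (7)
        pooled = [rff(pool(fe), fe) for pool, rff in zip(self.pools, self.rff_pool)]
        fg = self.global_proj(torch.cat(pooled, dim=1))                # Eq. (9)
        fl = self.asym(fe) + self.std(fe)                              # Eq. (10)
        return self.out_proj(torch.cat([fg, fl], dim=1))               # Eq. (11)
```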
The "pixel-by-pixel depth map" assigns a depth value to every pixel of the image; the pixel-by-pixel depth map of the input RGB image is output through the RFF and MFCE modules.
The parameters and training process of the monocular depth estimation network model formed by the multi-modal RGB-Depth fusion module, the multi-scale depth coding module, the RFF module, and the MFCE module are optimized through a depth characterization objective function.
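Putting the pieces together, the decoder wiring of Fig. 3 might look like the sketch below, which reuses the RFF and MFCE sketches above (and their imports). The channel count ch, the assumption that all four encoder maps have been projected to the same number of channels, the exact tensors concatenated before the head, and the ReLU between the two 3×3 convolutions are illustrative choices; the depth characterization objective function used for training is not shown because the patent does not give its form.

```python
class MultiScaleDepthDecoder(nn.Module):
    """Decoder sketch following Fig. 3: an RFF fuses the two high-resolution encoder
    maps, the MFCE fuses the two low-resolution maps, a further RFF merges the two
    paths, and two 3x3 convolutions plus a Sigmoid produce the per-pixel depth map."""

    def __init__(self, ch=64):
        super().__init__()
        self.local_rff = RFF(2 * ch, ch)       # feature map 1 + feature map 2 -> local features
        self.global_mfce = MFCE(ch)            # feature map 3 + feature map 4 -> global features
        self.merge_rff = RFF(2 * ch, ch)       # fuse the global and local paths
        self.head = nn.Sequential(
            nn.Conv2d(2 * ch, ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),                      # normalized per-pixel depth
        )

    def forward(self, feats, out_size):
        f1, f2, f3, f4 = feats                 # encoder maps, f1 has the highest resolution
        local_feat = self.local_rff(f2, f1)    # fine-grained local features
        global_feat = self.global_mfce(f4, f3) # correlation-enhanced global features
        merged = self.merge_rff(global_feat, local_feat)
        # channel splicing of the merged map with the local path (an assumption),
        # then upsampling back to the input image resolution
        fused = torch.cat([merged, local_feat], dim=1)
        fused = F.interpolate(fused, size=out_size, mode="bilinear", align_corners=False)
        return self.head(fused)
```

A forward call could then look like depth = decoder([f1, f2, f3, f4], out_size=(H, W)), where the four maps come from the multi-scale depth encoder.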
In summary, the monocular image depth estimation algorithm with enhanced multi-scale feature correlation provided by the embodiment of the invention not only enhances the features of the input image and provides more geometric information and semantic information for the depth estimation model, but also enhances the correlation between global features and local features, learns effective appearance structure information, solves the problem of false estimation of the appearance structure caused by texture deviation, and reconstructs a clear and dense monocular depth map.
The embodiment of the invention provides a monocular image depth estimation algorithm based on multi-scale feature correlation enhancement. The algorithm adopts a multi-modal RGB-Depth fusion module to enhance the features of the input image, a relevant feature fusion module to fuse information from different receptive fields, and a multi-scale feature correlation enhancement module to enhance the correlation between features, which promotes the expression of appearance-structure information and enables effective extraction of the depth information of a monocular image.
Those of ordinary skill in the art will appreciate that: the drawing is a schematic diagram of one embodiment and the modules or flows in the drawing are not necessarily required to practice the invention.
From the above description of embodiments, it will be apparent to those skilled in the art that the present invention may be implemented in software plus a necessary general hardware platform. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the embodiments or some parts of the embodiments of the present invention.
In this specification, each embodiment is described in a progressive manner; identical and similar parts of the embodiments refer to each other, and each embodiment mainly describes its differences from the others. In particular, for apparatus or system embodiments, since they are substantially similar to the method embodiments, the description is relatively brief, and reference may be made to the description of the method embodiments. The apparatus and system embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment. Those of ordinary skill in the art can understand and implement the invention without undue effort.
The present invention is not limited to the above-mentioned embodiments, and any changes or substitutions that can be easily understood by those skilled in the art within the technical scope of the present invention are intended to be included in the scope of the present invention. Therefore, the protection scope of the present invention should be subject to the protection scope of the claims.

Claims (2)

1. A monocular image depth estimation method based on multi-scale feature correlation enhancement, comprising:
Performing data enhancement preprocessing operation on the input RGB image by utilizing a multi-mode RGB-Depth fusion module;
Extracting a multi-scale characteristic map after data enhancement by using a multi-scale depth coding module;
in the decoding stage, an RFF module is used for acquiring a fine-grained feature map according to the multi-scale feature map, an MFCE module is used for enhancing the correlation of features among different scales in the multi-scale features, and the pixel-by-pixel depth map of the input RGB image is acquired by combining the RFF module and the MFCE module to fuse and optimize the feature map;
The preprocessing operation for data enhancement of the input RGB image by using the multi-mode RGB-Depth fusion module comprises the following steps:
The multi-modal RGB-Depth fusion module fuses the ground-truth depth map into the RGB image in a slice-wise manner, randomly selecting parts of the depth map in the horizontal and vertical directions and pasting them onto the same positions of the color image; x_s ∈ ℝ^{W×H×C_s} denotes the RGB image, which together with the pasted depth slices forms an RGB-D image carrying depth information, x_t ∈ ℝ^{W×H×C_t} denotes the ground-truth depth map, W and H are the width and height of the image, C_s and C_t are the numbers of channels of the RGB image and of the ground-truth depth map, respectively, and the data-enhanced image x'_s is expressed as:
x'_s = M × x_s + (1 − M) × x_t    (1)
if C_s and C_t differ, the RGB image and the ground-truth depth map are combined along the channel direction so that their numbers of channels match; the matrix M (M ∈ {0, 1}) marks the area in which x_s is replaced by x_t, and the width and height (w, h) and the position of the replacement area are expressed as:
(w, h) = (min((W − a×W) × c × p, 1), min((H − a×H) × c × p, 1))    (2)
image[x:x+w,:,i]=depth[x:x+w,:] (3)
image[:,y:y+h,i]=depth[:,y:y+h] (4)
where x = a×w, y = a×h, i indexes the three channels of the RGB image, a and c are coefficients in the range (0, 1), and p is a hyperparameter (p ∈ (0, 1]);
The decoding stage uses an RFF module to obtain a fine-granularity feature map, uses an MFCE module to enhance the correlation of features among different scales in the multi-scale features, fuses and optimizes the feature map by combining the RFF module and the MFCE module, and obtains a pixel-by-pixel depth map, and comprises the following steps:
Assume the multi-scale features include a low-resolution feature map F_1 and a higher-resolution feature map F_2. The RFF module raises the resolution of F_1 to that of F_2 by bilinear-interpolation upsampling and concatenates F_1 and F_2 along the same dimension to obtain a feature map F_3. F_3 is then convolved by two branches to obtain features with different receptive fields: the upper branch extracts features with a 3×3 two-dimensional convolution, normalizes the input data with a BatchNorm layer, and finally adds non-linearity between network layers with a ReLU activation function; the lower branch extracts features with a 5×5 two-dimensional convolution and is normalized by BatchNorm. The features obtained by the upper and lower branches are fused to obtain the fused feature map F_RFF:
F_3 = Cat(Up(F_1, F_2))    (5)
F_RFF = Cov_{5,5}(Cov_{3,3}(F_3)) + Cov_{5,5}(F_3)    (6)
where Up(·) denotes bilinear-interpolation upsampling, and Cov_{3,3}(·) and Cov_{5,5}(·) denote the 3×3 and 5×5 convolutions, respectively;
let the multi-scale feature map input to the MFCE module be F ∈ ℝ^{W×H×C}, where W and H are the width and height of the feature map and C is its number of channels. The low-resolution feature map F_1 in F and the higher-resolution feature map F_2 are fused by a first RFF module to generate an enhanced feature map F_E. From F_E, features F_E1, F_E2, and F_E3 are extracted through adaptive average pooling layers; F_E1, F_E2, and F_E3 are concatenated along the channel dimension and processed by a 1×1 convolution to form the global feature F_G. F_E is processed in parallel by asymmetric convolutions and a standard convolution to form the feature F_L. F_G and F_L are concatenated along the channel dimension and processed by a 1×1 convolution kernel to obtain the optimized feature map F_MFCE. The calculation of the MFCE module is as follows:
F_E = RFF(F_1, F_2)    (7)
F_Ei = RFF(F_1, AAP_i(F_E)), i = 1, 2, 3    (8)
F_G = Cov_{1,1}(Cat(F_E1, F_E2, F_E3))    (9)
F_L = Cov_{9,1}(Cov_{1,9}(F_E)) + Cov_{3,3}(F_E)    (10)
F_MFCE = Cov_{1,1}(Cat(F_G, F_L))    (11)
where Cov_{n,m}(·) denotes a two-dimensional convolution with an n×m kernel, Cat(·) denotes channel-wise concatenation of feature maps, and RFF denotes the multi-scale feature fusion module;
And outputting the pixel-by-pixel depth map of the input RGB image through an RFF module and an MFCE module.
2. The method of claim 1, wherein the method further comprises:
Parameters and training processes of the multi-modal RGB-Depth fusion module, the multi-scale Depth coding module, the RFF module, and the MFCE module are optimized by a Depth characterization objective function.
CN202310758435.1A 2023-06-26 2023-06-26 Monocular image depth estimation method based on multi-scale feature correlation enhancement Active CN116823908B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310758435.1A CN116823908B (en) 2023-06-26 2023-06-26 Monocular image depth estimation method based on multi-scale feature correlation enhancement

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310758435.1A CN116823908B (en) 2023-06-26 2023-06-26 Monocular image depth estimation method based on multi-scale feature correlation enhancement

Publications (2)

Publication Number Publication Date
CN116823908A (en) 2023-09-29
CN116823908B (en) 2024-09-03

Family

ID=88126939

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310758435.1A Active CN116823908B (en) 2023-06-26 2023-06-26 Monocular image depth estimation method based on multi-scale feature correlation enhancement

Country Status (1)

Country Link
CN (1) CN116823908B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117726666B (en) * 2024-02-08 2024-06-04 北京邮电大学 Cross-camera monocular picture measurement depth estimation method, device, equipment and medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112396645A (en) * 2020-11-06 2021-02-23 华中科技大学 Monocular image depth estimation method and system based on convolution residual learning
CN113870335A (en) * 2021-10-22 2021-12-31 重庆邮电大学 Monocular depth estimation method based on multi-scale feature fusion

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110738697B (en) * 2019-10-10 2023-04-07 福州大学 Monocular depth estimation method based on deep learning
CN110956094B (en) * 2019-11-09 2023-12-01 北京工业大学 RGB-D multi-mode fusion personnel detection method based on asymmetric double-flow network
KR102262832B1 (en) * 2019-11-29 2021-06-08 연세대학교 산학협력단 Device and Method for Estimating Depth of Monocular Video Image
CN112001960B (en) * 2020-08-25 2022-09-30 中国人民解放军91550部队 Monocular image depth estimation method based on multi-scale residual error pyramid attention network model
CN114359162A (en) * 2021-12-10 2022-04-15 北京大学深圳研究生院 Saliency detection method, saliency detection device, and storage medium


Also Published As

Publication number Publication date
CN116823908A (en) 2023-09-29


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant