CN113870335A - Monocular depth estimation method based on multi-scale feature fusion


Info

Publication number
CN113870335A
CN113870335A (application CN202111232322.5A)
Authority
CN
China
Prior art keywords: feature, features, network, scale, map
Prior art date
Legal status
Granted
Application number
CN202111232322.5A
Other languages
Chinese (zh)
Other versions
CN113870335B (en)
Inventor
周非
邓朝龙
张黎敏
Current Assignee
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications
Priority to CN202111232322.5A
Publication of CN113870335A
Application granted
Publication of CN113870335B
Legal status: Active
Anticipated expiration

Classifications

    • G06T 7/50 (Physics; Computing; Image data processing; Image analysis; Depth or shape recovery)
    • G06T 2207/20016 (Indexing scheme for image analysis; Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform)
    • G06T 2207/20081 (Indexing scheme for image analysis; Training; Learning)
    • Y02T 10/40 (Climate change mitigation technologies related to transportation; Internal combustion engine based vehicles; Engine management systems)

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a monocular depth estimation method based on multi-scale feature fusion, which belongs to the field of three-dimensional scene perception and comprises the following steps. S1: introducing a Non-Local attention mechanism and constructing a mixed normalization function. S2: introducing an attention mechanism among the same-level, deep and shallow features of the feature extraction network and calculating a correlation information matrix among the features on the feature map. S3: constructing a multi-scale feature fusion module. S4: introducing an atrous spatial pyramid pooling (ASPP) module into the decoding network to enlarge the receptive field of the convolutions and prompt the network to learn more local detail information. The invention effectively realizes cross-space and cross-scale feature fusion among the hierarchical features of the feature extraction network, improves the network's ability to learn local details so that the depth map achieves fine-grained prediction during reconstruction, and the introduced parameters are small relative to the whole network.

Description

Monocular depth estimation method based on multi-scale feature fusion
Technical Field
The invention belongs to the field of three-dimensional scene perception, and relates to a monocular depth estimation method based on multi-scale feature fusion.
Background
Currently, mainstream monocular depth estimation methods are divided into unsupervised learning methods and supervised learning methods. Unsupervised learning methods do not require real depth labels to be collected: a stereo image pair formed by an original image and a target image is used during training; an encoder first predicts a depth map of the original image, a decoder then reconstructs the original image by combining the target image with the predicted depth map, and the loss is computed by comparing the reconstructed image with the original image. Supervised learning is one of the most popular approaches at present: a depth camera or a laser radar is usually used to collect depth labels, and image depth estimation is treated as a regression task or a classification task. The encoding networks of most monocular depth estimation models extract features insufficiently and lose spatial structure information in the feature extraction stage; because real scene structures are complex, ordinary local convolution modules can hardly account for the spatial structural relations between feature contexts, so the estimated depth maps suffer from scale blurring and distortion. To address this problem, the document "Structure-aware residual pyramid network for monocular depth estimation, IJCAI 2019" constructs a multi-scale feature fusion module with a residual pyramid network and obtains depth maps with a more distinct structural hierarchy by extracting features of different scales. The document "Chen et al., Attention-based context aggregation network for monocular depth estimation, 2019" uses an attention-based aggregation network to capture continuous context information and to integrate image-level and pixel-level context, but it does not achieve context capture and spatial-information interaction between features at multiple scales.
In summary, the problems in the technical field of monocular depth estimation are as follows. 1) In deep-learning-based image depth estimation, most network structures adopt an encoder-decoder architecture; the encoding network suffers from insufficient feature extraction and loss of spatial information in the feature extraction stage, so the network has difficulty learning some of the detailed information in the image. 2) The decoding network loses part of the image features during the repeated up-sampling of high-dimensional semantic features, which degrades depth map reconstruction and hinders the prediction of fine-grained depth maps. 3) Monocular depth estimation faces complex real scene structures; if the spatial structural relations in the scene are not effectively considered, the estimated depth map will not be accurate.
Disclosure of Invention
In view of this, the present invention provides a monocular depth estimation method based on multi-scale feature fusion. To address the problems that the encoding network of monocular depth estimation extracts features insufficiently and easily loses spatial information in the feature extraction stage, which makes it difficult for the network to learn finer details, a Non-Local module is introduced and improved, and a multi-scale feature fusion module based on an attention mechanism is constructed. In the decoding network, the dilated convolutions of an atrous spatial pyramid pooling (ASPP) module compensate for the limited receptive field of ordinary local convolution modules, greatly alleviating the loss of image features caused by up-sampling during depth map reconstruction, improving the accuracy of monocular depth estimation, and relieving problems such as scale blurring and distortion of the depth map.
In order to achieve the purpose, the invention provides the following technical scheme:
a monocular depth estimation method based on multi-scale feature fusion comprises the following steps:
s1: introducing a Non-Local attention mechanism, and constructing a mixed normalization function;
S2: an attention mechanism is introduced among the same-level, deep and shallow features of the feature extraction network, and a correlation information matrix among the features on the feature map is calculated;
s3: constructing a multi-scale feature fusion module;
S4: an atrous spatial pyramid pooling module is introduced into the decoding network to enlarge the receptive field of the convolutions and prompt the network to learn more local detail information.
Further, the step S1 includes:
on the basis of Non-Local, constructing a mixed SoftMax layer as the normalization function, whose calculation formula is:

$$\pi_n=\frac{\exp(w_n^{\top}\bar{k})}{\sum_{n'=1}^{N}\exp(w_{n'}^{\top}\bar{k})},\qquad \bar{k}=\frac{1}{|X|}\sum_{j\in X}k_j$$

$$F_{mos}(q_i,k_j)=\sum_{n=1}^{N}\pi_n\,\frac{\exp(q_{i,n}^{\top}k_{j,n})}{\sum_{j'\in X}\exp(q_{i,n}^{\top}k_{j',n})}$$

wherein q_{i,n}^{\top}k_{j,n} is the similarity score of the n-th part, i is the current pixel point on the feature map, j ranges over all pixel points on the feature map, \pi_n denotes the n-th aggregation weight, N denotes the number of feature map partitions, w_n is a linear vector learned during network training, and \bar{k} is the arithmetic mean of the k_j over the feature map X.
Further, the step S2 specifically includes the following steps:
S21: through self-conversion, the other feature points k_j on the feature map are used to model the relation to the current feature point q_i, with the calculation formula:

$$w_{i,j}=F_{mos}(q_{i,n}^{\top}k_{j,n})$$

$$\tilde{x}_i=\sum_{j}w_{i,j}\odot v_j$$

wherein w_{i,j} denotes the spatial attention map, F_mos(·) denotes the normalization function, q_{i,n} denotes a query, k_{j,n} denotes a key, ⊙ denotes element-wise multiplication, \tilde{x} denotes the feature map after self-conversion, and v_j denotes a value;
S22: through top-down feature conversion, the high-dimensional semantic information is used to model the context information of the low-dimensional features, with the calculation formula:

$$w_{i,j}=F_{mos}(F_{eud}(q_{i,n},k_{j,n}))$$

$$\tilde{x}_i=\sum_{j}w_{i,j}\odot v_j$$

wherein F_eud(·) denotes the Euclidean distance between two pixel points on the feature map;
S23: through bottom-up feature conversion, correlation information is modeled among feature map channels of different scales, with the specific calculation formulas:

w = GAP(K)
Q_att = F_att(Q, w)
V_dow = F_sconv(V)

$$\tilde{x}=F_{add}(F_{conv}(Q_{att}),\,V_{dow})$$

wherein w denotes the channel attention map, GAP denotes global average pooling, K denotes the key of the shallow feature map of the network, Q_att denotes the features weighted by channel attention, F_att(·) denotes an outer product function, Q denotes the query of the deep feature map of the network, V_dow denotes the feature map after down-sampling, F_sconv(·) is a strided 3×3 convolution, V denotes the value of the shallow feature map of the network, \tilde{x} denotes the feature map of the bottom-up conversion, F_conv(·) is a 3×3 convolution used for refinement, and F_add(·) denotes that the two feature maps are added element by element and then processed by another 3×3 convolution.
Further, in step S3, the three feature conversions of step S2 are applied to the middle four layers of features of the encoding network, respectively, to obtain several enhanced high-level features; the enhanced features are then rearranged by scale, the features of the same size are concatenated with the original features on the encoding network, and finally the channel dimension of the enhanced features is restored to the input dimension through a 3×3 convolution.
Further, the step S4 specifically includes the following steps:
S41: multiple atrous spatial pyramid pooling modules are embedded crosswise between feature maps of different resolutions to capture feature pyramid information with a very large receptive field and to compensate for the feature information lost during up-sampling;
S42: a deconvolution module with learnable parameters is selected as the up-sampling method; for every up-sampling module, the output of the preceding atrous spatial pyramid pooling module is deconvolved to double the size of the feature map, and the result is then concatenated with the corresponding features output by the multi-scale feature fusion module and the depth map coarsely estimated at the previous scale;
S43: every atrous spatial pyramid pooling module applies dilated convolutions with different dilation rates to the input features, and outputs the concatenation of the outputs of these dilated convolutions.
The invention has the following beneficial effects: 1) inspired by spatial attention and channel attention, the invention proposes a multi-scale feature fusion module based on an attention mechanism, which effectively realizes cross-space and cross-scale feature fusion among the hierarchical features of the feature extraction network; 2) the invention introduces the atrous spatial pyramid pooling module from the semantic segmentation field into the decoding network, which improves the network's ability to learn local details and enables fine-grained prediction of the depth map during reconstruction, while the introduced parameters are small relative to the whole network. Simulation results show that the method outperforms the SARPN and ACAN algorithms.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention may be realized and attained by the means of the instrumentalities and combinations particularly pointed out hereinafter.
Drawings
For the purposes of promoting a better understanding of the objects, aspects and advantages of the invention, reference will now be made to the following detailed description taken in conjunction with the accompanying drawings in which:
FIG. 1 is a flow chart of a monocular depth estimation algorithm based on multi-scale feature fusion in accordance with the present invention;
FIG. 2 is a flow chart of the attention mechanism construction of the present invention: (a) the attention module based on self-conversion, (b) the attention module based on top-down conversion, and (c) the attention module based on bottom-up conversion;
FIG. 3 is a flow chart of the multi-scale feature fusion module constructed based on the attention mechanism of the present invention;
FIG. 4 is a diagram of the construction of the atrous spatial pyramid pooling module provided by the present invention;
FIG. 5 shows an ablation experiment on the multi-scale feature fusion module of the present invention, in which row 3 shows the depth maps predicted by the basic network and row 4 shows the depth maps predicted after the feature extraction capability of the encoding network is improved by the multi-scale feature fusion module;
FIG. 6 shows an ablation experiment on the atrous spatial pyramid pooling module of the present invention, in which the 3rd column shows the depth maps predicted by the basic network and the 4th column shows the depth maps predicted after the depth map reconstruction capability of the decoding network is improved by the atrous spatial pyramid pooling module;
FIG. 7 is a comparison between the depth maps predicted by the improved whole monocular depth estimation network and by the ACAN network, in which row 2 shows the depth maps predicted by the ACAN network and row 3 shows the depth maps predicted by the improved network of the invention.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present invention in a schematic way, and the features in the following embodiments and examples may be combined with each other without conflict.
The drawings are provided only to illustrate the invention and are not intended to limit it. To better illustrate the embodiments of the present invention, some parts of the drawings may be omitted, enlarged or reduced, and they do not represent the size of an actual product. It will be understood by those skilled in the art that certain well-known structures in the drawings, and descriptions thereof, may be omitted.
The same or similar reference numerals in the drawings of the embodiments of the present invention correspond to the same or similar components. In the description of the present invention, it should be understood that terms indicating an orientation or positional relationship, such as "upper", "lower", "left", "right", "front" and "rear", are based on the orientation or positional relationship shown in the drawings, are used only for convenience and simplicity of description, and do not indicate or imply that the referred device or element must have a specific orientation or be constructed and operated in a specific orientation; such terms are therefore illustrative only, are not to be construed as limiting the present invention, and their specific meaning can be understood by those skilled in the art according to the specific situation.
Please refer to fig. 1 to 6, which illustrate a monocular depth estimation method based on multi-scale feature fusion.
Fig. 1 is a structural diagram of a monocular depth estimation network based on multi-scale feature fusion according to an embodiment of the present invention, and as shown in the figure, a monocular depth estimation algorithm based on multi-scale feature fusion according to an embodiment of the present invention includes:
in the feature extraction stage of the backbone network, attention mechanisms are introduced among different layer features, and the construction details of the attention mechanisms are shown in FIG. 2. In order to make the standard SoftMax layer more effective on the image, a mixed SoftMax function is constructed, and the calculation formula of the normalization function is as follows:
Figure BDA0003316479460000051
Figure BDA0003316479460000052
wherein wnIs a linear vector that can be learned in network training,
Figure BDA0003316479460000053
is corresponding to each region k on the feature map XjThe arithmetic mean value of (a) is calculated,
Figure BDA0003316479460000054
is the similarity score of the nth portion.
Next, the three feature conversions are constructed; their implementation details are shown in FIG. 2. First, through self-conversion, the other feature points k_j on the feature map are used to model the relation to the current feature point q_i, with the calculation formula:

$$w_{i,j}=F_{mos}(q_{i,n}^{\top}k_{j,n})$$

$$\tilde{x}_i=\sum_{j}w_{i,j}\odot v_j$$

wherein ⊙ denotes element-wise multiplication.
Second, through top-down feature conversion, the high-dimensional semantic information is used to model the context information of the low-dimensional features, with the calculation formula:

$$w_{i,j}=F_{mos}(F_{eud}(q_{i,n},k_{j,n}))$$

$$\tilde{x}_i=\sum_{j}w_{i,j}\odot v_j$$

wherein F_eud(·) denotes the Euclidean distance between two pixel points on the feature map.
Finally, through bottom-up feature conversion, correlation information is modeled among feature map channels of different scales, with the specific calculation formulas:

w = GAP(K)
Q_att = F_att(Q, w)
V_dow = F_sconv(V)

$$\tilde{x}=F_{add}(F_{conv}(Q_{att}),\,V_{dow})$$

wherein F_att(·) denotes an outer product function, F_sconv(·) is a strided 3×3 convolution, F_conv(·) is a 3×3 convolution used for refinement, and F_add(·) denotes that the two feature maps are added element by element and then processed by another 3×3 convolution.
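A sketch of the bottom-up conversion is given below. The sigmoid gating and the broadcast channel-wise multiplication stand in for the outer-product weighting F_att, and the channel counts are assumptions for illustration only.

import torch
import torch.nn as nn

class BottomUpTransform(nn.Module):
    """Bottom-up conversion (sketch): a shallow feature map re-weights the channels
    of a deeper feature map; fusion order and channel counts are assumptions."""
    def __init__(self, channels):
        super().__init__()
        self.gap = nn.AdaptiveAvgPool2d(1)                                    # GAP(K) -> channel attention w
        self.sconv = nn.Conv2d(channels, channels, 3, stride=2, padding=1)    # F_sconv: strided 3x3 conv
        self.refine = nn.Conv2d(channels, channels, 3, padding=1)             # F_conv: refinement conv
        self.fuse = nn.Conv2d(channels, channels, 3, padding=1)               # conv inside F_add

    def forward(self, shallow, deep):
        # shallow supplies K and V at high resolution; deep supplies Q at half resolution
        w = torch.sigmoid(self.gap(shallow))              # (B, C, 1, 1) channel attention map
        q_att = deep * w                                  # F_att stand-in: channel-wise weighting of deep features
        v_dow = self.sconv(shallow)                       # downsample shallow values to deep's resolution
        return self.fuse(self.refine(q_att) + v_dow)      # F_add: element-wise sum, then a 3x3 conv

# toy usage: shallow 64x64, deep 32x32, 32 channels
m = BottomUpTransform(32)
out = m(torch.randn(1, 32, 64, 64), torch.randn(1, 32, 32, 32))
print(out.shape)                                          # torch.Size([1, 32, 32, 32])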
The construction of the multi-scale feature fusion module based on the attention mechanism is shown in FIG. 3. First, self-conversion, top-down conversion and bottom-up conversion are applied to the middle four layers of features of the encoding network, respectively, to obtain several enhanced high-level features. The enhanced features are then rearranged by scale, the features of the same size are concatenated with the original features on the encoding network, and finally the channel dimension of the enhanced features is restored to the input dimension through a 3×3 convolution. Compared with the input features, the features output by the multi-scale feature fusion module take better account of the contextual relations of the scene's spatial structure, which greatly improves the capability of the feature extraction network.
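The following sketch illustrates the fusion at a single pyramid level; the use of bilinear resizing for the "rearrange by scale" step and the channel counts are assumptions, not details taken from the text.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleFusion(nn.Module):
    """Fusion at one pyramid level (sketch): the enhanced features from the three
    conversions are resized to this level's resolution, concatenated with the original
    encoder feature, and a 3x3 convolution restores the input channel dimension."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.restore = nn.Conv2d(in_channels, out_channels, 3, padding=1)

    def forward(self, enhanced_list, original):
        h, w = original.shape[-2:]
        resized = [F.interpolate(e, size=(h, w), mode='bilinear', align_corners=False)
                   for e in enhanced_list]                         # rearrange enhanced features by scale
        return self.restore(torch.cat(resized + [original], dim=1))  # concatenate, then restore channels

# toy usage: three enhanced maps plus the original level-2 encoder feature (64 channels)
enh = [torch.randn(1, 32, 16, 16), torch.randn(1, 32, 32, 32), torch.randn(1, 32, 64, 64)]
orig = torch.randn(1, 64, 32, 32)
print(ScaleFusion(3 * 32 + 64, 64)(enh, orig).shape)    # torch.Size([1, 64, 32, 32])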
Finally, to address the problem that the decoding network easily loses image features while reconstructing the depth map, an atrous spatial pyramid pooling module is introduced; its structure is shown in FIG. 4. The specific steps are as follows (an illustrative code sketch is given after the list):
1) multiple atrous spatial pyramid pooling modules are embedded crosswise between feature maps of different resolutions to capture feature pyramid information with a very large receptive field and to compensate for the feature information lost during up-sampling;
2) a deconvolution module with learnable parameters is selected as the up-sampling method; for every up-sampling module, the output of the preceding atrous spatial pyramid pooling module is deconvolved to double the size of the feature map, and the result is then concatenated with the corresponding features output by the multi-scale feature fusion module and the depth map coarsely estimated at the previous scale;
3) every atrous spatial pyramid pooling module applies dilated convolutions with different dilation rates to the input features, and the output of the module is the concatenation of the outputs of these dilated convolutions.
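The two sketches below illustrate an atrous spatial pyramid pooling block and one decoder up-sampling stage as just described; the dilation rates (1, 6, 12, 18) and the channel counts are common choices assumed for illustration and are not specified in the text.

import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Atrous spatial pyramid pooling (sketch): parallel 3x3 dilated convolutions with
    different dilation rates whose outputs are concatenated along the channel axis."""
    def __init__(self, in_ch, branch_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, branch_ch, 3, padding=r, dilation=r) for r in rates])

    def forward(self, x):
        return torch.cat([b(x) for b in self.branches], dim=1)

class DecoderStage(nn.Module):
    """One decoder stage (sketch): deconvolve the previous ASPP output to double its
    resolution, then concatenate the fused encoder feature of matching scale and the
    depth map coarsely estimated at the previous scale."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)

    def forward(self, prev_aspp_out, fused_feature, coarse_depth):
        x = self.up(prev_aspp_out)                               # learned 2x upsampling (deconvolution)
        return torch.cat([x, fused_feature, coarse_depth], dim=1)

# toy usage: previous ASPP output at 16x16, fused feature and coarse depth at 32x32
aspp = ASPP(in_ch=64, branch_ch=32)                              # -> 4 * 32 = 128 channels
stage = DecoderStage(in_ch=128, out_ch=64)
y = stage(aspp(torch.randn(1, 64, 16, 16)),
          torch.randn(1, 64, 32, 32), torch.randn(1, 1, 32, 32))
print(y.shape)                                                   # torch.Size([1, 129, 32, 32])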
The proposed monocular depth estimation method based on multi-scale feature fusion was evaluated on the NYU-Depth v2 and KITTI data sets. The NYU-Depth v2 data set was captured in indoor scenes with a Microsoft Kinect RGB-D camera and consists of depth maps and RGB images; the KITTI data set was collected with a LiDAR sensor and a vehicle-mounted camera in a variety of road environments. These two data sets are the common benchmarks for monocular depth estimation in indoor and outdoor scenes, respectively.
FIG. 5 compares the results before and after introducing the multi-scale feature fusion module of the present invention: row 1 is the input RGB image, row 2 is the real depth map, row 3 is the depth map predicted by the basic network, and row 4 is the depth map predicted after introducing the multi-scale feature fusion module. It can be seen that the depth map predicted by the basic network is blurred in scale and the depth at object edges is unclear. After the feature extraction capability of the encoding network is improved by the multi-scale feature fusion module, the depth map prediction becomes more accurate and the edge contours of objects are clearer.
FIG. 6 compares the results before and after introducing the atrous spatial pyramid pooling module into the decoding network: column 1 is the input RGB image, column 2 is the real depth map, column 3 is the depth map predicted by the basic network, and column 4 is the depth map predicted after introducing the atrous spatial pyramid pooling module. It can be seen that the improvement to the decoding network allows the network to restore the local details of objects more accurately. Observing the bookshelf in the white frame in row 2, the real depth map of the bookshelf is partly missing, and the improved network completes the missing depth. Introducing the atrous spatial pyramid pooling module therefore effectively compensates for the loss of image detail information and enables fine-grained prediction.
FIG. 7 compares the depth maps predicted by the improved whole monocular depth estimation network and by the ACAN monocular depth estimation network on the KITTI data set: row 1 is the input RGB image, row 2 is the depth map estimated by the ACAN network, and row 3 is the depth map estimated by the improved network of the invention. As can be seen, the ACAN network's prediction of nearby targets is unclear and its prediction of distant targets often fails, whereas the depth maps predicted by the improved network preserve clear contours and detail information for various targets, greatly improving the accuracy of monocular depth estimation.
Table 1 lists the mean relative error, the root mean square error, the mean logarithmic error and the threshold accuracy of the method of the invention and of other algorithms on the NYU-Depth v2 data set. As the data in Table 1 show, the method of the invention achieves better results on most indicators and improves the accuracy of depth map estimation to a certain extent. Compared with SARPN, the method improves the threshold accuracy by 1.2% and reduces the errors on the other indicators to varying degrees. Compared with ACAN, the mean relative error is reduced by 16% and the threshold accuracy is improved by 5.3%. The advantage of the attention-based multi-scale feature fusion module is evident.
TABLE 1: comparison of error and threshold-accuracy metrics on the NYU-Depth v2 data set (the table is provided as an image in the original document).
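The metrics reported in Table 1 appear to follow the standard definitions used in the monocular depth estimation literature; a sketch of their computation under those standard definitions is shown below (an assumption, since the exact formulas are not given in the text).

import torch

def depth_metrics(pred, gt):
    """Standard monocular-depth metrics (sketch), using the usual conventions:
    mean absolute relative error, RMSE, mean log10 error and threshold accuracy."""
    pred, gt = pred.flatten(), gt.flatten()
    valid = gt > 0                                       # ignore pixels without ground truth
    pred, gt = pred[valid], gt[valid]
    rel = (torch.abs(pred - gt) / gt).mean()             # mean absolute relative error
    rmse = torch.sqrt(((pred - gt) ** 2).mean())         # root mean square error
    log_err = torch.abs(torch.log10(pred) - torch.log10(gt)).mean()   # mean log10 error
    ratio = torch.max(pred / gt, gt / pred)
    delta1 = (ratio < 1.25).float().mean()               # threshold accuracy, delta < 1.25
    return {'rel': rel, 'rmse': rmse, 'log10': log_err, 'delta1': delta1}

# toy usage with random positive depths
print(depth_metrics(torch.rand(10, 10) + 0.5, torch.rand(10, 10) + 0.5))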
The monocular depth estimation algorithm based on multi-scale feature fusion effectively alleviates the insufficient feature extraction and the loss of spatial information in the feature extraction network, and improves the accuracy of the predicted depth maps. The attention mechanism is applied for the first time to the hierarchical features of the backbone network, with emphasis on the spatial structural relations among the features. The atrous spatial pyramid pooling module introduced into the decoding network alleviates the loss of image features during depth map reconstruction and enlarges the receptive field of the local convolutions. Simulation results show that the proposed multi-scale feature fusion monocular depth estimation algorithm performs well in terms of accuracy and in reconstructing object edge contours.
Finally, the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit the present invention, and although the present invention has been described in detail with reference to the preferred embodiments, it will be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions, and all of them should be covered by the claims of the present invention.

Claims (5)

1. A monocular depth estimation method based on multi-scale feature fusion is characterized in that: the method comprises the following steps:
s1: introducing a Non-Local attention mechanism, and constructing a mixed normalization function;
S2: an attention mechanism is introduced among the same-level, deep and shallow features of the feature extraction network, and a correlation information matrix among the features on the feature map is calculated;
s3: constructing a multi-scale feature fusion module;
S4: an atrous spatial pyramid pooling module is introduced into the decoding network to enlarge the receptive field of the convolutions and prompt the network to learn more local detail information.
2. The monocular depth estimation method based on multi-scale feature fusion of claim 1, wherein: the step S1 includes:
on the basis of Non-Local, constructing a mixed SoftMax layer as the normalization function, whose calculation formula is:

$$\pi_n=\frac{\exp(w_n^{\top}\bar{k})}{\sum_{n'=1}^{N}\exp(w_{n'}^{\top}\bar{k})},\qquad \bar{k}=\frac{1}{|X|}\sum_{j\in X}k_j$$

$$F_{mos}(q_i,k_j)=\sum_{n=1}^{N}\pi_n\,\frac{\exp(q_{i,n}^{\top}k_{j,n})}{\sum_{j'\in X}\exp(q_{i,n}^{\top}k_{j',n})}$$

wherein q_{i,n}^{\top}k_{j,n} is the similarity score of the n-th part, i is the current pixel point on the feature map, j ranges over all pixel points on the feature map, \pi_n denotes the n-th aggregation weight, N denotes the number of feature map partitions, w_n is a linear vector learned during network training, and \bar{k} is the arithmetic mean of the k_j over the feature map X.
3. The monocular depth estimation method based on multi-scale feature fusion of claim 1, wherein: the step S2 specifically includes the following steps:
S21: through self-conversion, using the other feature points k_j on the feature map to model the relation to the current feature point q_i, with the calculation formula:

$$w_{i,j}=F_{mos}(q_{i,n}^{\top}k_{j,n})$$

$$\tilde{x}_i=\sum_{j}w_{i,j}\odot v_j$$

wherein w_{i,j} denotes the spatial attention map, F_mos(·) denotes the normalization function, q_{i,n} denotes a query, k_{j,n} denotes a key, ⊙ denotes element-wise multiplication, \tilde{x} denotes the feature map after self-conversion, and v_j denotes a value;
S22: through top-down feature conversion, using the high-dimensional semantic information to model the context information of the low-dimensional features, with the calculation formula:

$$w_{i,j}=F_{mos}(F_{eud}(q_{i,n},k_{j,n}))$$

$$\tilde{x}_i=\sum_{j}w_{i,j}\odot v_j$$

wherein F_eud(·) denotes the Euclidean distance between two pixel points on the feature map;
S23: through bottom-up feature conversion, modeling correlation information among feature map channels of different scales, with the specific calculation formulas:

w = GAP(K)
Q_att = F_att(Q, w)
V_dow = F_sconv(V)

$$\tilde{x}=F_{add}(F_{conv}(Q_{att}),\,V_{dow})$$

wherein w denotes the channel attention map, GAP denotes global average pooling, K denotes the key of the shallow feature map of the network, Q_att denotes the features weighted by channel attention, F_att(·) denotes an outer product function, Q denotes the query of the deep feature map of the network, V_dow denotes the feature map after down-sampling, F_sconv(·) is a strided 3×3 convolution, V denotes the value of the shallow feature map of the network, \tilde{x} denotes the feature map of the bottom-up conversion, F_conv(·) is a 3×3 convolution used for refinement, and F_add(·) denotes that the two feature maps are added element by element and then processed by another 3×3 convolution.
4. The monocular depth estimation method based on multi-scale feature fusion of claim 1, wherein: in step S3, the three feature conversions of step S2 are applied to the middle four layers of features of the encoding network, respectively, to obtain several enhanced high-level features; the enhanced features are then rearranged by scale, the features of the same size are concatenated with the original features on the encoding network, and finally the channel dimension of the enhanced features is restored to the input dimension through a 3×3 convolution.
5. The monocular depth estimation method based on multi-scale feature fusion of claim 1, wherein: the step S4 specifically includes the following steps:
S41: multiple atrous spatial pyramid pooling modules are embedded crosswise between feature maps of different resolutions to capture feature pyramid information with a very large receptive field;
S42: a deconvolution module with learnable parameters is selected as the up-sampling method; for every up-sampling module, the output of the preceding atrous spatial pyramid pooling module is deconvolved to double the size of the feature map, and the result is then concatenated with the corresponding features output by the multi-scale feature fusion module and the depth map coarsely estimated at the previous scale;
S43: every atrous spatial pyramid pooling module applies dilated convolutions with different dilation rates to the input features, and outputs the concatenation of the outputs of these dilated convolutions.
CN202111232322.5A 2021-10-22 2021-10-22 Monocular depth estimation method based on multi-scale feature fusion Active CN113870335B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111232322.5A CN113870335B (en) 2021-10-22 2021-10-22 Monocular depth estimation method based on multi-scale feature fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111232322.5A CN113870335B (en) 2021-10-22 2021-10-22 Monocular depth estimation method based on multi-scale feature fusion

Publications (2)

Publication Number Publication Date
CN113870335A true CN113870335A (en) 2021-12-31
CN113870335B CN113870335B (en) 2024-07-30

Family

ID=78997259

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111232322.5A Active CN113870335B (en) 2021-10-22 2021-10-22 Monocular depth estimation method based on multi-scale feature fusion

Country Status (1)

Country Link
CN (1) CN113870335B (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014095560A1 (en) * 2012-12-18 2014-06-26 Universitat Pompeu Fabra Method for recovering a relative depth map from a single image or a sequence of still images
US20200265597A1 (en) * 2018-03-14 2020-08-20 Dalian University Of Technology Method for estimating high-quality depth maps based on depth prediction and enhancement subnetworks
CN112785636A (en) * 2021-02-18 2021-05-11 上海理工大学 Multi-scale enhanced monocular depth estimation method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
JING LIU等: "Multi-Scale Residual Pyramid Attention Network for Monocular Depth Estimation", 《2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR)》, 5 May 2021 (2021-05-05) *
刘香凝;赵洋;王荣刚;: "基于自注意力机制的多阶段无监督单目深度估计网络", 信号处理, no. 09, 19 August 2020 (2020-08-19) *
邓朝龙: "基于多尺度特征融合的单目深度估计研究", 《万方数据》, 6 July 2023 (2023-07-06) *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114565655B (en) * 2022-02-28 2024-02-02 上海应用技术大学 Depth estimation method and device based on pyramid segmentation attention
CN114565655A (en) * 2022-02-28 2022-05-31 上海应用技术大学 Depth estimation method and device based on pyramid segmentation attention
CN114693823A (en) * 2022-03-09 2022-07-01 天津大学 Magnetic resonance image reconstruction method based on space-frequency double-domain parallel reconstruction
CN114693823B (en) * 2022-03-09 2024-06-04 天津大学 Magnetic resonance image reconstruction method based on space-frequency double-domain parallel reconstruction
CN115359271A (en) * 2022-08-15 2022-11-18 中国科学院国家空间科学中心 Large-scale invariance deep space small celestial body image matching method
CN115115686A (en) * 2022-08-22 2022-09-27 中国矿业大学 Mine image unsupervised monocular depth estimation method based on fine-grained multi-feature fusion
CN115580564A (en) * 2022-11-09 2023-01-06 深圳桥通物联科技有限公司 Dynamic calling device for communication gateway of Internet of things
CN115580564B (en) * 2022-11-09 2023-04-18 深圳桥通物联科技有限公司 Dynamic calling device for communication gateway of Internet of things
CN116342675A (en) * 2023-05-29 2023-06-27 南昌航空大学 Real-time monocular depth estimation method, system, electronic equipment and storage medium
CN116342675B (en) * 2023-05-29 2023-08-11 南昌航空大学 Real-time monocular depth estimation method, system, electronic equipment and storage medium
CN116823908A (en) * 2023-06-26 2023-09-29 北京邮电大学 Monocular image depth estimation method based on multi-scale feature correlation enhancement
CN116823908B (en) * 2023-06-26 2024-09-03 北京邮电大学 Monocular image depth estimation method based on multi-scale feature correlation enhancement
CN117078236A (en) * 2023-10-18 2023-11-17 广东工业大学 Intelligent maintenance method and device for complex equipment, electronic equipment and storage medium
CN117078236B (en) * 2023-10-18 2024-02-02 广东工业大学 Intelligent maintenance method and device for complex equipment, electronic equipment and storage medium
CN118212637A (en) * 2024-05-17 2024-06-18 山东浪潮科学研究院有限公司 Automatic image quality assessment method and system for character recognition

Also Published As

Publication number Publication date
CN113870335B (en) 2024-07-30

Similar Documents

Publication Publication Date Title
CN113870335B (en) Monocular depth estimation method based on multi-scale feature fusion
CN110443842B (en) Depth map prediction method based on visual angle fusion
CN110111366B (en) End-to-end optical flow estimation method based on multistage loss
CN113888744A (en) Image semantic segmentation method based on Transformer visual upsampling module
CN108288270B (en) Target detection method based on channel pruning and full convolution deep learning
CN112396607A (en) Streetscape image semantic segmentation method for deformable convolution fusion enhancement
CN109035172B (en) Non-local mean ultrasonic image denoising method based on deep learning
CN111523546A (en) Image semantic segmentation method, system and computer storage medium
CN112634296A (en) RGB-D image semantic segmentation method and terminal for guiding edge information distillation through door mechanism
CN113011329A (en) Pyramid network based on multi-scale features and dense crowd counting method
CN113033570A (en) Image semantic segmentation method for improving fusion of void volume and multilevel characteristic information
CN113240683B (en) Attention mechanism-based lightweight semantic segmentation model construction method
CN113066089B (en) Real-time image semantic segmentation method based on attention guide mechanism
CN115035171B (en) Self-supervision monocular depth estimation method based on self-attention guide feature fusion
CN115775316A (en) Image semantic segmentation method based on multi-scale attention mechanism
CN114821050A (en) Named image segmentation method based on transformer
CN116863194A (en) Foot ulcer image classification method, system, equipment and medium
CN114612902A (en) Image semantic segmentation method, device, equipment, storage medium and program product
CN116596966A (en) Segmentation and tracking method based on attention and feature fusion
CN115631513A (en) Multi-scale pedestrian re-identification method based on Transformer
CN116402851A (en) Infrared dim target tracking method under complex background
CN117011655A (en) Adaptive region selection feature fusion based method, target tracking method and system
CN116612385A (en) Remote sensing image multiclass information extraction method and system based on depth high-resolution relation graph convolution
CN114494284B (en) Scene analysis model and method based on explicit supervision area relation
CN115731280A (en) Self-supervision monocular depth estimation method based on Swin-Transformer and CNN parallel network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant