CN111179326A - Monocular depth estimation algorithm, system, equipment and storage medium - Google Patents

Monocular depth estimation algorithm, system, equipment and storage medium

Info

Publication number
CN111179326A
CN111179326A (application CN201911378572.2A)
Authority
CN
China
Prior art keywords
depth
map
depth map
resolution level
common
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911378572.2A
Other languages
Chinese (zh)
Other versions
CN111179326B (en)
Inventor
朱晓宁
赵珊珊
吴喆峰
王学斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jingying Digital Technology Co Ltd
Original Assignee
Jingying Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jingying Digital Technology Co Ltd filed Critical Jingying Digital Technology Co Ltd
Priority to CN201911378572.2A priority Critical patent/CN111179326B/en
Publication of CN111179326A publication Critical patent/CN111179326A/en
Application granted granted Critical
Publication of CN111179326B publication Critical patent/CN111179326B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the invention discloses a monocular depth estimation algorithm, system, device and storage medium. For a monocular image, first, an original depth map and several common depth maps of different resolution levels are estimated using a convolutional neural network; second, a relative depth map is recovered from the selected estimates according to the rank-1 property of the pairwise comparison matrix; third, the common depth maps and relative depth maps are decomposed into several depth components, which are optimally recombined to reconstruct an optimal depth map. Experimental results show that the scheme achieves good depth estimation performance.

Description

Monocular depth estimation algorithm, system, equipment and storage medium
Technical Field
The embodiment of the invention relates to the technical field of image depth estimation, in particular to a monocular depth estimation algorithm, a system, equipment and a storage medium.
Background
Depth estimation, a fundamental problem in computer vision, estimates scene depth information from images. It provides important geometric cues for visual applications such as image synthesis, scene recognition, pose estimation and robotics. For multi-view images or video sequences, various depth estimation techniques with significant effect already exist. However, existing methods cannot realize depth estimation for a single monocular image.
Disclosure of Invention
Therefore, embodiments of the present invention provide a monocular depth estimation algorithm, system, device and storage medium, so as to solve the technical problem that current image depth estimation technology cannot perform depth estimation on monocular images.
In order to achieve the above object, the embodiments of the present invention provide the following technical solutions:
According to a first aspect of the embodiments of the present invention, there is provided a monocular depth estimation algorithm, the algorithm including: receiving a monocular input image I and acquiring an original depth map D of the input image I; constructing common depth maps D_n of several different resolution levels based on the original depth map D, where n is the resolution level; constructing a relative depth map R_n for each resolution level based on the common depth maps D_n; obtaining a common depth detail map F_n and a relative depth detail map f_n of each resolution level from the common depth map D_n and the relative depth map R_n, respectively; computing the average depth component of the detail map of each resolution level from the common depth detail map F_n and the relative depth detail map f_n; and reconstructing an optimal depth map according to the average depth components of the detail maps of the resolution levels.
Further, the method for constructing the common depth map includes: calculating the geometric mean g(D) of the original depth map D and taking it as the common depth map D_0 of the lowest resolution level; and, starting from the lowest-resolution common depth map D_0, obtaining the common depth map D_n of each resolution level through a convolutional recurrence between common depth maps of adjacent resolution levels.
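To make the geometric-mean step concrete, here is a minimal sketch (an illustration with NumPy, not the patented implementation; `depth` is a hypothetical stand-in for the original depth map D):

```python
import numpy as np

def geometric_mean(depth: np.ndarray) -> float:
    """Geometric mean g(D) of all pixel depths, computed in log-space
    (equivalent to the product of all depths raised to 1/(rc))."""
    return float(np.exp(np.log(depth).mean()))

# D_0, the lowest-resolution common depth map, is the 1x1 map holding g(D).
depth = np.array([[1.0, 2.0],
                  [4.0, 8.0]])            # toy original depth map D
d0 = np.array([[geometric_mean(depth)]])
```

Computing in log-space avoids overflow when the product of many thousands of pixel depths is taken.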
Further, the method for constructing the relative depth map includes: constructing a sparse comparison matrix P_{n,n-1} from the common depth maps D_{n-1} and D_n of adjacent resolution levels; recovering the sparse comparison matrix P_{n,n-1} into a dense comparison matrix P̂_{n,n-1} using the alternating least squares (ALS) algorithm; and reconstructing the relative depth map of each resolution level by normalizing and reshaping the left vector matrix p̂ of the dense comparison matrix P̂_{n,n-1}.
Further, obtaining the common depth detail map F_n of each resolution level from the common depth map D_n of each resolution level includes: performing a first upsampling operation U, with the first preset upsampling matrix U_{n-1}, on the common depth map D_{n-1} of the next-lower resolution level; and obtaining the common depth detail map F_n of each resolution level through element-wise Hadamard division of the common depth map D_n by the upsampled lower-level common depth map D'_{n-1}.
Further, obtaining the relative depth detail map f_n of each resolution level from the relative depth map R_n of each resolution level includes: performing a second upsampling operation U', with the second preset upsampling matrix U'_{n-1}, on the relative depth map R_{n-1} of the next-lower resolution level; and obtaining the relative depth detail map f_n of each resolution level through element-wise Hadamard division of the relative depth map R_n by the upsampled lower-level relative depth map R'_{n-1}.
Further, computing the average depth component of the detail map of each resolution level from the common depth detail map F_n and the relative depth detail map f_n of each resolution level includes: using the common depth detail maps F_n to compute, for the common depth map D_n at a predetermined resolution level, the common depth components of the corresponding detail maps of the several different resolution levels; using the relative depth detail maps f_n to compute, for the relative depth map R_n at a predetermined resolution level, the relative depth components of the corresponding detail maps of the several different resolution levels; and averaging the common depth component and the relative depth component at each same resolution level to obtain the average depth component of the detail map of the corresponding resolution level.
According to a second aspect of the embodiments of the present invention, there is provided a monocular depth estimation system, the system including: an image input module for receiving a monocular input image I; an original depth map acquisition module for acquiring an original depth map D of the input image I; a common depth map construction module for constructing common depth maps D_n of several different resolution levels based on the original depth map D, where n is the resolution level; a relative depth map construction module for constructing a relative depth map R_n for each resolution level based on the common depth maps D_n; a common depth detail map acquisition module for obtaining a common depth detail map F_n of each resolution level from the common depth map D_n of each resolution level; a relative depth detail map acquisition module for obtaining a relative depth detail map f_n of each resolution level from the relative depth map R_n of each resolution level; an average depth component calculation module for computing the average depth component of the detail map of each resolution level from the common depth detail map F_n and the relative depth detail map f_n of each resolution level; and an optimal depth map reconstruction module for reconstructing an optimal depth map according to the average depth components of the detail maps of the resolution levels.
Further, the original depth map acquisition module is formed by an encoder, and the common depth map construction module and the relative depth map construction module are each formed by several pairs of decoders. The encoder employs a DenseNet-BC convolutional neural network model, which includes: a convolution layer, a max pooling layer, and three pairs of dense blocks and transition layers. Each pair of decoders includes a decoder for constructing the common depth map D_n and a decoder for constructing the relative depth map R_n; each decoder includes one dense block and at least one full-stripe masking block. Each relative depth map decoder also includes an alternating least squares layer, and the decoders use the last dense block of DenseNet-BC.
According to a third aspect of the embodiments of the present invention, there is provided a monocular depth estimation device, the device including: a processor and a memory; the memory is used for storing one or more program instructions; and the processor is used for executing the one or more program instructions to perform any of the steps of the monocular depth estimation algorithm described above.
According to a fourth aspect of embodiments of the present invention, there is provided a computer storage medium having one or more program instructions embodied therein for performing any one of the algorithm steps of the monocular depth estimation algorithm described above.
The embodiments of the invention have the following advantages. For a monocular image, the embodiment of the invention provides a monocular depth estimation scheme based on relative depth maps: first, an original depth map and several common depth maps of different resolution levels are estimated using a convolutional neural network; second, a relative depth map is recovered from the selected estimates according to the rank-1 property of the pairwise comparison matrix; third, the common depth maps and relative depth maps are decomposed into several depth components, and the decomposed depth components are optimally recombined to reconstruct an optimal depth map. Experimental results show that the scheme achieves good depth estimation performance.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It should be apparent that the drawings in the following description are merely exemplary, and that other embodiments can be derived from the drawings provided by those of ordinary skill in the art without inventive effort.
The structures, ratios, sizes, and the like shown in this specification are used only to complement the content disclosed in the specification, for understanding and reading by those skilled in the art; they are not used to limit the conditions under which the present invention can be implemented and therefore have no technical significance. Any structural modification, change of ratio relationship, or adjustment of size that does not affect the effects and objectives achievable by the present invention still falls within the scope covered by the technical content disclosed herein.
Fig. 1 is a schematic diagram of a logic structure of a monocular depth estimation system according to an embodiment of the present invention;
fig. 2 is a schematic diagram of logical structures of a general depth map building module and a relative depth map building module according to an embodiment of the present invention;
fig. 3 is a schematic flowchart of a monocular depth estimation algorithm according to an embodiment of the present invention;
fig. 4 is a flowchart illustrating a method for constructing a relative depth map according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention is provided for illustrative purposes, and other advantages and effects of the present invention will become apparent to those skilled in the art from the present disclosure.
In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular system structures, interfaces, techniques, etc. in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, circuits, and algorithms are omitted so as not to obscure the description of the present invention with unnecessary detail.
The embodiment of the invention provides a monocular depth estimation algorithm, system, device and storage medium for a single image. The embodiment proposes the concept of relative depth and constructs a relative depth map based on the rank-1 property of the pairwise comparison matrix; depth estimation of the monocular image is then realized through depth map decomposition and depth component recombination.
Referring to fig. 1, a monocular depth estimation system provided in an embodiment of the present invention includes: an image input module 01 for receiving a monocular input image I; an original depth map acquisition module 02, configured to acquire an original depth map D of the monocular input image I; a common depth map construction module 03 for constructing common depth maps D_n of several different resolution levels based on the original depth map D, where n is the resolution level; a relative depth map construction module 04 for constructing a relative depth map R_n for each resolution level based on the common depth maps D_n; a common depth detail map acquisition module 05 for obtaining a common depth detail map F_n of each resolution level from the common depth map D_n of each resolution level; a relative depth detail map acquisition module 06 for obtaining a relative depth detail map f_n of each resolution level from the relative depth map R_n of each resolution level; an average depth component calculation module 07 for computing the average depth component of the detail map of each resolution level from the common depth detail map F_n and the relative depth detail map f_n of each resolution level; and an optimal depth map reconstruction module 08, configured to reconstruct an optimal depth map according to the average depth components of the detail maps of the resolution levels.
Referring to FIG. 2, the original depth map acquisition module 02 is formed by an encoder that extracts depth features from the input image I for the predetermined resolution levels 3 ≤ n ≤ 7; when processing the image, the encoder generates low-resolution, high-level features. The encoder employs a DenseNet-BC convolutional neural network model, which includes: a convolution layer (Conv E1), a max pooling layer (Pool E1), and three pairs of dense blocks (Dense E2, Dense E3, Dense E4) and transition layers (Trans E2, Trans E3, Trans E4). In general, given a 224 × 224 RGB image, the encoder generates a feature map of size 8 × 8 with 1056 channels. The common depth map construction module 03 and the relative depth map construction module 04 are each formed by several pairs of decoders, which use the depth features passed from the encoder to reconstruct the common depth maps D_n and the relative depth maps R_n. In this embodiment, each pair of decoders includes a decoder for constructing the common depth map D_n and a decoder for constructing the relative depth map R_n, and each decoder includes one dense block and at least one full-stripe masking block (WSM for short). As shown in fig. 3, there are five pairs of decoders: five common depth map decoder dense blocks (Dense D1, Dense D2, Dense D3, Dense D4, Dense D5) and five relative depth map decoder dense blocks (Dense D6, Dense D7, Dense D8, Dense D9, Dense D10). Each relative depth map decoder further includes an alternating least squares layer (ALS); as shown in fig. 3, the five relative depth map decoder dense blocks (Dense D6 to Dense D10) respectively correspond to the alternating least squares layers ALS D6, ALS D7, ALS D8, ALS D9 and ALS D10, and all decoders use the last dense block of DenseNet-BC.
Referring to fig. 3, a monocular depth estimation algorithm disclosed in an embodiment of the present invention includes: receiving a monocular input image I and acquiring an original depth map D of the input image I; constructing common depth maps D_n of several different resolution levels based on the original depth map D, where n is the resolution level; constructing a relative depth map R_n for each resolution level based on the common depth maps D_n; obtaining a common depth detail map F_n and a relative depth detail map f_n of each resolution level from the common depth map D_n and the relative depth map R_n, respectively; computing the average depth component of the detail map of each resolution level from the common depth detail map F_n and the relative depth detail map f_n; and reconstructing an optimal depth map according to the average depth components of the detail maps of the resolution levels.
Specifically, let the input image I be an image of size r × c, where r is the lateral dimension of the image and c is the longitudinal dimension. After obtaining the original depth map D of the input image I, in order to reconstruct the optimal depth map of the image I, it is necessary to obtain both the common depth maps D_n and the relative depth maps R_n of the original depth map D.
In the embodiment of the present invention, the method for constructing the common depth map includes: calculating the geometric mean g(D) of the original depth map D and taking it as the common depth map D_0 of the lowest resolution level; and, starting from the lowest-resolution common depth map D_0, obtaining the common depth map D_n of each resolution level through the convolutional recurrence between common depth maps of adjacent resolution levels.
Further, the geometric mean g(D) of the original depth map D is calculated as follows:

g(D) = ( ∏_{i=1}^{r} ∏_{j=1}^{c} D(i, j) )^{1/(rc)}

where D(i, j) is the depth of the (i, j)-th pixel in the original depth map D, i ∈ r and j ∈ c, and i and j respectively denote the horizontal and vertical coordinates of the pixel in the original depth map D. Since the geometric mean g(D) of the original depth map D equals the common depth map D_0 of the lowest resolution level, g(D) is taken as the lowest-resolution common depth map D_0. Given the lowest-resolution common depth map D_0, the common depth map D_n of each resolution level is obtained through the convolutional recurrence between common depth maps of adjacent resolution levels:

D_{n-1}(i, j) = ∏_{k,l ∈ {0,1}} D_n(2i - k, 2j - l)^{1/4}

where D_n(i, j) is the common depth of the (i, j)-th pixel in the common depth map D_n, and k and l are two variables with k, l ∈ {0, 1}. Since the common depth map D_n has a size of 2^n × 2^n and the common depth map D_{n-1} has a size of 2^{n-1} × 2^{n-1}, adjacent levels differ in size by a factor of four. The recurrence formula shows that convolving with the four convolution kernels yields four different convolution results, i.e. four corresponding depths, and the product of the four results raised to the 1/4 power gives each depth of the lower level; that is, each depth in the common depth map D_{n-1} is the geometric mean of the four corresponding depths in D_n.

The common depth maps D_1 to D_n are obtained through this convolutional recurrence between common depth maps of adjacent resolution levels.
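The adjacent-level relation can be sketched numerically as follows (a hedged reading of the recurrence above, using plain NumPy block operations in place of the four convolution kernels):

```python
import numpy as np

def downsample_geometric(d_n: np.ndarray) -> np.ndarray:
    """D_{n-1}(i, j) = geometric mean of the 2x2 block of D_n at (2i, 2j):
    the product of the four corresponding depths raised to the 1/4 power."""
    h, w = d_n.shape
    blocks = d_n.reshape(h // 2, 2, w // 2, 2)
    return np.prod(blocks, axis=(1, 3)) ** 0.25

d2 = np.array([[1.0, 2.0],
               [4.0, 8.0]])            # a toy 2x2 common depth map D_n
d1 = downsample_geometric(d2)          # its 1x1 neighbour D_{n-1}
```

Applied repeatedly, this relation links every common depth map down to a 1 × 1 map, consistent with the geometric-mean definition of the lowest level.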
Specifically, the full-stripe masking (WSM) blocks realize the expansion from the lowest-resolution common depth map D_0 to the common depth map D_n of each resolution level via the convolutional recurrence between adjacent resolution levels; their main role is to expand the size of the feature map output by the encoder. The resolution level of the target common depth map D_n determines the number of WSM blocks: the higher the resolution level, the more WSM blocks. As shown in fig. 3, the decoders for the common depth map D_3 and the relative depth map R_3 have no WSM blocks, since their resolution of 8 × 8 equals the resolution of the feature map output by the encoder and needs no expansion. The decoders for the common depth map D_4 and the relative depth map R_4 each have one WSM block (WSM D2-1 and WSM D7-1); the decoders for D_5 and R_5 each have two (WSM D3-1, WSM D3-2 and WSM D8-1, WSM D8-2); the decoders for D_6 and R_6 each have three (WSM D4-1, WSM D4-2, WSM D4-3 and WSM D9-1, WSM D9-2, WSM D9-3); and the decoders for D_7 and R_7 each have four (WSM D5-1 to WSM D5-4 and WSM D10-1 to WSM D10-4). Finally, the four WSM blocks of the D_7 and R_7 decoders expand the 8 × 8 (2^3 × 2^3) feature map input from the encoder to the 128 × 128 (2^7 × 2^7) target feature map (common depth map D_7).
Referring to fig. 4, the method for constructing the relative depth map includes: constructing a sparse comparison matrix P_{n,n-1} from the common depth maps D_{n-1} and D_n of adjacent resolution levels; recovering the sparse comparison matrix P_{n,n-1} into a dense comparison matrix P̂_{n,n-1} using the alternating least squares (ALS) algorithm; and reconstructing the relative depth map of each resolution level by normalizing and reshaping the left vector matrix p̂ of the dense comparison matrix P̂_{n,n-1}.
In fig. 2, the bottom decoders are used to estimate the relative depth maps R_3, R_4, R_5, R_6 and R_7, respectively; in this embodiment, 3 ≤ n ≤ 7, as described above. The relative depth map R_n is reconstructed from the common depth maps D_n and D_{n-1} of two adjacent resolution levels; referring to fig. 4, the relative depth map R_n is reconstructed as follows.
Constructing the sparse comparison matrix P_{n,n-1}: for the relative depth map R_n with 3 ≤ n ≤ 7, we define a sparse comparison matrix P_{n,n-1} for comparing the depths of pixels in the common depth map D_n with the depths of pixels in the common depth map D_{n-1}. The sparse comparison matrix has the rank-1 form

P_{n,n-1} = d_n (d̄_{n-1})^T

where d_n is the vector of the depths of the 2^{2n} pixels of the common depth map D_n, d̄_{n-1} is the vector of the reciprocals of the depths of the 2^{2(n-1)} pixels of the common depth map D_{n-1}, and T denotes the matrix transposition operation.
Recovering the sparse comparison matrix P_{n,n-1} into a dense comparison matrix and recovering the relative depth map R_n: in this embodiment, the alternating least squares layers (ALS D6, ALS D7, ALS D8, ALS D9, ALS D10) each use the alternating least squares (ALS) algorithm to recover the dense comparison matrix P̂_{n,n-1} from the sparse comparison matrix P_{n,n-1}, and then normalize and reshape the left vector matrix p̂ of the dense comparison matrix P̂_{n,n-1} to reconstruct the relative depth map of each resolution level. Specifically, referring to fig. 4, the alternating least squares ALS algorithm seeks two low-dimensional factors, a left vector p and a right vector q, whose outer product approximates the original matrix. Let S denote the set of observed positions (r', c') of the sparse comparison matrix P_{n,n-1}, where r' indexes the pixels of the common depth map D_n and c' indexes the pixels of the common depth map D_{n-1}; the left vector p and the right vector q have sizes 2^{2n} and 2^{2(n-1)}, respectively. The following two steps are executed repeatedly and alternately:

q ← argmin_q Σ_{(r',c') ∈ S} ( P_{n,n-1}(r', c') - p(r') q(c') )²

p ← argmin_p Σ_{(r',c') ∈ S} ( P_{n,n-1}(r', c') - p(r') q(c') )²

At each step the subproblem is convex, and the closed-form solution for the right vector q or the left vector p is easily obtained. The algorithm thus converges to a solution p̂, q̂ and the approximation

P̂_{n,n-1} = p̂ q̂^T

which is the dense comparison matrix derived from the sparse comparison matrix P_{n,n-1}, where p̂ and q̂ are its left vector matrix and right vector matrix, respectively.
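A self-contained sketch of this ALS step on a toy rank-1 comparison matrix (the observation mask, random initialization and iteration count are illustrative assumptions, not the patent's exact ALS layer):

```python
import numpy as np

def als_rank1(P, mask, iters=100, seed=0):
    """Alternately solve the two least-squares problems for the left
    vector p and right vector q so that p q^T fits P on observed entries."""
    m, n = P.shape
    rng = np.random.default_rng(seed)
    p = rng.uniform(0.5, 1.5, size=m)
    q = rng.uniform(0.5, 1.5, size=n)
    for _ in range(iters):
        for j in range(n):                    # closed-form update of q
            rows = mask[:, j]
            if rows.any():
                q[j] = P[rows, j] @ p[rows] / (p[rows] @ p[rows])
        for i in range(m):                    # closed-form update of p
            cols = mask[i, :]
            if cols.any():
                p[i] = P[i, cols] @ q[cols] / (q[cols] @ q[cols])
    return p, q

# Toy sparse comparison matrix: rank-1 with one missing comparison.
p_true = np.array([1.0, 2.0, 4.0])            # depths at the finer level
q_true = np.array([1.0, 0.5])                 # reciprocal coarser depths
P = np.outer(p_true, q_true)
mask = np.ones(P.shape, dtype=bool)
mask[2, 1] = False                            # this entry was never estimated
p_hat, q_hat = als_rank1(P, mask)
P_dense = np.outer(p_hat, q_hat)              # completed dense matrix
```

Because the comparison matrix is rank-1 by construction, the outer product p̂ q̂^T fills in the missing entry consistently with the observed depth ratios.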
As can be seen from fig. 4, taking the sparse comparison matrix P_{4,3} as an example, P_{4,3} estimates only part of the relative depths and lacks the rest, so the missing entries of the relative-depth estimate must be filled in. To address this deficiency, the embodiment of the invention uses the alternating least squares ALS algorithm to recover the dense comparison matrix P̂_{n,n-1} from the sparse comparison matrix P_{n,n-1}, and then normalizes and reshapes the left vector matrix p̂ of the dense comparison matrix P̂_{n,n-1} to reconstruct the relative depth map R_n of each resolution level.
Further, in the embodiment of the present invention, obtaining the common depth detail map F_n of each resolution level from the common depth map D_n of each resolution level includes: performing a first upsampling operation U, with the first preset upsampling matrix U_{n-1}, on the common depth map D_{n-1} of the next-lower resolution level; and obtaining the common depth detail map F_n of each resolution level through element-wise Hadamard division of the common depth map D_n by the upsampled lower-level common depth map D'_{n-1}.
In a typical common depth map D_n, the common depth map D_{n-1} of the next-lower resolution level is dominant: the low-resolution common depth map D_{n-1} contains the low-frequency information, and low-frequency information has a greater impact on depth reconstruction than high-frequency information. The common depth detail map F_n is obtained by eliminating from D_n the content already carried by the low-resolution common depth map D_{n-1}. We define the first upsampling operation U: through the first preset upsampling matrix U_{n-1}, the low-resolution common depth map D_{n-1} is doubled in the horizontal and vertical directions, yielding the upsampled lower-level common depth map D'_{n-1}. From the above description, the common depth detail map F_n of each resolution level is computed as

F_n = D_n ⊘ U(D_{n-1})

where ⊘ denotes the Hadamard division of two matrices, i.e. element-wise division.
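A minimal sketch of the detail-map extraction (assuming, for illustration, that the upsampling operation U doubles each dimension by pixel replication; the patent's preset upsampling matrix U_{n-1} is not reproduced here):

```python
import numpy as np

def upsample_nearest(d: np.ndarray) -> np.ndarray:
    """Double height and width by pixel replication (a stand-in for U)."""
    return np.repeat(np.repeat(d, 2, axis=0), 2, axis=1)

def detail_map(d_n: np.ndarray, d_coarse: np.ndarray) -> np.ndarray:
    """F_n = D_n Hadamard-divided by U(D_{n-1})."""
    return d_n / upsample_nearest(d_coarse)

d1 = np.array([[2.0]])                 # coarse common depth map D_{n-1}
d2 = np.array([[1.0, 2.0],
               [4.0, 8.0]])            # fine common depth map D_n
f2 = detail_map(d2, d1)                # ratios of fine depths to U(D_{n-1})
```

By construction D_n equals U(D_{n-1}) multiplied element-wise by F_n, so the detail map carries exactly the information the coarser level lacks; the relative depth detail map f_n is computed the same way from R_n and R_{n-1}.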
Likewise, the relative depth map R for each resolution level is utilizednObtaining the relative depth detail map f for each resolution level separatelynThe method comprises the following steps: second Preset Up-sampling operation matrix U 'with Primary lower resolution level'n-1For a relative depth map R of one lower resolution leveln-1Performing an upper second upsampling operation U'; and utilizing the relative depth map R for each resolution levelnRelative depth map R 'with lower level of resolution after the second upsampling operation'n-1Hadamard division in the element direction to obtain the relative depth detail map f of each resolution leveln. Wherein the relative depth detail map f for each resolution levelnThe algorithm formula of (1) is as follows:
f_n = R_n ⊘ R'_{n-1}, with R'_{n-1} = U'_{n-1}(R_{n-1}),

where ⊘ denotes the Hadamard division of two matrices, i.e. element-wise division.
Further, calculating the average depth component of the detail map of each resolution level using the common depth detail map F_n and the relative depth detail map f_n of each resolution level comprises: using the common depth detail maps F_n to calculate the common depth components of the detail maps of the several different resolution levels corresponding to each common depth map D_n at a predetermined resolution level; using the relative depth detail maps f_n to calculate the relative depth components of the detail maps of the several different resolution levels corresponding to each relative depth map R_n at a predetermined resolution level; and calculating the average of the common depth components and the relative depth components at each same resolution level to obtain the average depth component of the detail map of each corresponding resolution level.
Specifically, the depth components of the common depth map D_n are given, in the logarithmic domain, by:

log D_n = log U^n(D_0) + Σ_{i=1}^{n} log U^{n-i}(F_i),

where log U^n(D_0) and log U^{n-i}(F_i) respectively denote the logarithm taken after applying the first upsampling operation U n times to the lowest-resolution common depth map D_0 and n−i times to the common depth detail map F_i (when i = n, no upsampling is applied to F_n).
Likewise, the depth components of the relative depth map R_n are given by:

log R_n = Σ_{i=1}^{n} log U^{n-i}(f_i),

where log U^{n-i}(f_i) denotes the logarithm taken after applying the first upsampling operation U n−i times to the relative depth detail map f_i (when i = n, no upsampling is applied).
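The log-domain decomposition above can be checked numerically. The sketch below builds depth maps by the multiplicative recursion D_i = U(D_{i-1}) ⊙ F_i (nearest-neighbour upsampling is an assumed form of U, and the random positive-valued maps are invented for illustration) and verifies that log D_n equals the sum of the upsampled log-components:

```python
import numpy as np

def up(d):
    # Nearest-neighbour 2x upsampling (assumed form of the operation U).
    return np.repeat(np.repeat(d, 2, axis=0), 2, axis=1)

def up_k(d, k):
    # Apply U k times (U^k); U^0 is the identity.
    for _ in range(k):
        d = up(d)
    return d

rng = np.random.default_rng(0)
n = 3
d0 = rng.uniform(0.5, 2.0, (2, 2))                # D_0, lowest resolution level
F = [rng.uniform(0.8, 1.25, (2 ** (i + 1),) * 2)  # detail maps F_1..F_n
     for i in range(1, n + 1)]

# Build D_1..D_n by the recursion D_i = U(D_{i-1}) * F_i (element-wise).
D = [d0]
for i in range(1, n + 1):
    D.append(up(D[-1]) * F[i - 1])

# log D_n = log U^n(D_0) + sum_{i=1}^{n} log U^{n-i}(F_i)
recon = np.log(up_k(d0, n))
for i in range(1, n + 1):
    recon = recon + np.log(up_k(F[i - 1], n - i))
assert np.allclose(recon, np.log(D[n]))
```

The identity holds here because nearest-neighbour upsampling distributes over element-wise products; for other choices of U it would hold only approximately.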
In this embodiment, take the common depth maps D_n and relative depth maps R_n with predetermined resolution levels 3 ≤ n ≤ 7 as an example. The lowest-resolution common depth map D_0 is obtained by computing the geometric mean g(D) of the original depth map D according to the calculation formula, and the common depth maps D_1 through D_7 are then obtained by recursion with the convolution recursion formula for common depth maps of adjacent resolution levels. Next, the relative depth maps R_1 through R_7 are reconstructed by normalizing and reshaping the left vector matrix of the dense comparison matrix P̃. With the common depth maps D_0 through D_7, the common depth detail maps F_1 through F_7 are obtained from the formula for F_n above; in the same way, from the relative depth maps R_1 through R_7, the relative depth detail maps f_1 through f_7 are obtained from the formula for f_n. Using F_1 through F_7, the common depth components of the detail maps corresponding to the common depth maps D_3 through D_7 at the predetermined resolution levels 3 ≤ n ≤ 7 are calculated: D_3 yields the depth components of F_1 through F_3; D_4 yields those of F_1 through F_4; D_5 yields those of F_1 through F_5; D_6 yields those of F_1 through F_6; and D_7 yields those of F_1 through F_7. Using f_1 through f_7, the relative depth components of the detail maps corresponding to the relative depth maps R_3 through R_7 are calculated: R_3 yields the depth components of f_1 through f_3; R_4 yields those of f_1 through f_4; R_5 yields those of f_1 through f_5; R_6 yields those of f_1 through f_6; and R_7 yields those of f_1 through f_7. The common and relative depth components at each same resolution level are then averaged to obtain the average depth component of the detail map at that level. For example, for resolution level n = 1, the depth components of F_1 from D_3 through D_7 and the depth components of f_1 from R_3 through R_7 are averaged to obtain the average depth component of the level-1 detail map; the average depth components of the detail maps at levels n = 2 through 7 are obtained in the same way, as shown in Table 1 below:
table 1: when n is more than or equal to 3 and less than or equal to 7, the common depth map DnAnd a relative depth map RnList of required computed depth components
Figure BDA0002341663600000121
Figure BDA0002341663600000131
When 3 ≤ n ≤ 7, the depth components required for the common depth maps D_n and relative depth maps R_n are as shown in Table 1. From the depth-component formula for D_n it follows that, whatever the value of n, calculating the common depth map D_n always requires the lowest-resolution common depth map D_0. When n = 3, the term Σ_{i=1}^{3} log U^{3−i}(F_i) in the depth-component formula for D_n uses the common depth detail maps F_1, F_2, and F_3; when n = 7, the term Σ_{i=1}^{7} log U^{7−i}(F_i) uses the common depth detail maps F_1 through F_7. Calculating the depth components of a relative depth map R_n does not require the lowest-resolution common depth map D_0; accordingly, the D_0 entry for each relative depth map R_n in Table 1 is empty.
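Since each depth component appears in several rows of Table 1, it has several candidate estimates, which the embodiment averages in the logarithmic domain. A minimal NumPy sketch; the 2×2 candidate values are invented for illustration, and averaging logarithms amounts to an element-wise geometric mean:

```python
import numpy as np

def average_components(candidates):
    # Averaging in the logarithmic domain, then exponentiating, is
    # equivalent to the element-wise geometric mean of the candidates.
    logs = [np.log(np.asarray(c, dtype=float)) for c in candidates]
    return np.exp(np.mean(logs, axis=0))

# Three hypothetical candidate estimates of the same detail-map component:
c1 = np.full((2, 2), 1.0)
c2 = np.full((2, 2), 2.0)
c3 = np.full((2, 2), 4.0)
avg = average_components([c1, c2, c3])  # geometric mean: 2.0 everywhere
```

The geometric mean is the natural choice here because the detail maps combine multiplicatively in the depth decomposition.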
In general, a normal depth map reconstructs the overall depth distribution well, while a relative depth map is more suitable for estimation of fine details. Furthermore, each relative depth map can reliably estimate a certain proportion of depth information, depending on the resolution. Thus, by combining the component maps at multiple resolutions, a reliable depth map is obtained.
Still taking the predetermined resolution levels 3 ≤ n ≤ 7 as an example: as described above, the common depth maps D_n and relative depth maps R_n at these levels are estimated, giving 10 depth maps in total, and each depth map is decomposed into depth components as shown in Table 1. Since each depth component has multiple candidate estimates, the embodiment of the present invention obtains an optimal estimate by averaging them in the logarithmic domain. The optimal depth map is then reconstructed from the average depth components of the detail maps at each corresponding resolution level.
The embodiment of the invention evaluates the proposed algorithm on the NYUv2 dataset. The input images are indoor video frames, consisting of RGB images with a spatial resolution of 480 × 640 and corresponding depth maps acquired with a Microsoft Kinect device. The proposed algorithm was trained on all the training sequences and evaluated on 654 test RGB-D images. In addition, each test image is cropped to an effective spatial resolution of 427 × 561.
The depth-map experimental results are evaluated with three metrics: RMSE, accuracy δ, and Spearman's ρ. RMSE (Root Mean Squared Error) is calculated by the formula

RMSE = sqrt( (1/N) · Σ_{i=1}^{N} (d̂_i − d_i)² ),

where d̂_i and d_i respectively denote the estimated depth value of pixel i and its corresponding ground-truth depth value, and N is the number of pixels in the depth map. The formula subtracts the ground-truth value from the predicted value, sums and averages the squared differences, and takes the square root; RMSE measures the deviation between the prediction and the ground truth, and a lower RMSE indicates predictions closer to the true values.
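The RMSE formula maps directly to NumPy; the sample depth values below are illustrative only:

```python
import numpy as np

def rmse(pred, gt):
    # Root mean squared error: square root of the mean squared difference
    # between estimated and ground-truth depth values.
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    return float(np.sqrt(np.mean((pred - gt) ** 2)))

err = rmse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0])  # sqrt((0 + 0 + 4) / 3)
```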
Accuracy measure (% correct):

δ = max(d̂_i / d_i, d_i / d̂_i) < T,

where T is a threshold; the higher the accuracy, the closer the predicted depth values are to the true depth values. Three thresholds are commonly used: 1.25, 1.25², and 1.25³.
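The threshold-accuracy metric can be sketched as follows; the example depth values are invented for illustration:

```python
import numpy as np

def delta_accuracy(pred, gt, threshold=1.25):
    # Fraction of pixels whose ratio max(pred/gt, gt/pred) falls below
    # the threshold; common thresholds are 1.25, 1.25**2 and 1.25**3.
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    ratio = np.maximum(pred / gt, gt / pred)
    return float(np.mean(ratio < threshold))

acc = delta_accuracy([1.0, 2.0, 3.0], [1.1, 2.0, 6.0])  # 2 of 3 pixels pass
```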
Spearman's ρ (Spearman rank correlation coefficient) evaluates the correlation between the estimated depth values d̂_i and the ground-truth depth values d_i: ρ equals +1 or −1 when the relation between d̂_i and d_i is perfectly monotonic, so the closer |ρ| is to 1, the better the predicted depth ordering matches the true one.
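Spearman's ρ compares rankings rather than raw values. A minimal NumPy sketch (the double-argsort ranking assumes no tied depth values; a production implementation would assign average ranks to ties):

```python
import numpy as np

def spearman_rho(pred, gt):
    # Pearson correlation of the rank vectors: +1 (-1) for a perfectly
    # monotonically increasing (decreasing) relation between pred and gt.
    def rank(x):
        # Double argsort yields 0-based ranks for distinct values.
        return np.argsort(np.argsort(np.asarray(x))).astype(float)
    rp, rg = rank(pred), rank(gt)
    rp -= rp.mean()
    rg -= rg.mean()
    return float((rp * rg).sum() / np.sqrt((rp ** 2).sum() * (rg ** 2).sum()))
```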
In the embodiment of the invention, the Nesterov method is used to optimize the network parameters; the initial learning rate, momentum, and weight decay are set to 10⁻⁵, 0.9, and 10⁻⁴, respectively. The learning rate is adjusted according to a repeatedly shifted (restarted) cosine function, with the period of the cosine function set to 1/4 epoch.
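The restarted cosine schedule can be sketched as below; the exact phase of the shift and the minimum learning rate are not specified in the embodiment, so they are assumptions here:

```python
import math

def cosine_restart_lr(step, steps_per_period, base_lr=1e-5, min_lr=0.0):
    # Cosine-annealed learning rate that restarts every period; with the
    # period set to the number of optimizer steps in a quarter epoch, this
    # mirrors the repeated shifted-cosine schedule described above.
    t = (step % steps_per_period) / steps_per_period  # position in [0, 1)
    return min_lr + 0.5 * (base_lr - min_lr) * (1.0 + math.cos(math.pi * t))
```

The rate starts at the base value at the beginning of each period, decays along a half cosine toward the minimum, then jumps back up at the next restart.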
The network is trained in two steps. First, we train the encoder together with the decoder that generates the common depth map D_3, as shown in FIG. 2. Second, after fixing the encoder parameters, we train each decoder. The batch size is 2 for the relative depth map R_7 decoder and 4 for the remaining decoders.
Table 2 below compares the algorithm disclosed in this embodiment with other algorithms on the NYUv2 dataset. Some algorithms use different methods for depth-map cropping and performance measurement; therefore, for a fair comparison, the most common evaluation protocol is adopted in the embodiment of the present invention. For the other algorithms, the performance scores were taken directly from the corresponding papers; Table 3 lists the published papers from which the performance scores of the other algorithms in Table 2 are drawn.
Table 2: comparative listing of performance of NYUv2 test data
(Table 2 is rendered as an image in the original publication; its contents are not reproduced here.)
In Table 2, data marked in bold indicate the best result, and underlined data indicate the second-best result.
Table 3: published paper list of other algorithm performance scores presented in Table 2
(Table 3 is rendered as an image in the original publication; its contents are not reproduced here.)
As can be seen from Table 2, the algorithm proposed by the embodiment of the present invention achieves the third-best and second-best performance, respectively, on the δ < 1.25 metrics. In particular, the proposed algorithm yields a ρ significantly higher than that of the conventional algorithms. This means that, by estimating relative depth maps containing ordering information alongside the common depth maps, the proposed algorithm predicts the depth order of pixels more accurately.
The embodiment of the invention provides an algorithm, system, device, and storage medium for monocular depth estimation using relative depth maps. First, an encoder-decoder network is designed with multiple decoder modules that estimate relative depth and common depth at different scales. To reduce complexity, the entire relative depth map is recovered from selectively estimated data using the ALS algorithm. Finally, the optimal depth map is reconstructed through depth-map decomposition and depth-component combination. Experiments show that the algorithm achieves the best performance, and that the relative depth map is more effective than the common depth map at preserving the depth ordering of a scene.
Corresponding to the above embodiments, an embodiment of the present invention further provides a monocular depth estimation device, where the device includes: a processor and a memory; the memory is to store one or more program instructions; the processor is configured to execute one or more program instructions to perform a monocular depth estimation algorithm as described above.
In correspondence with the above embodiments, embodiments of the present invention also provide a computer storage medium containing one or more program instructions therein. Wherein one or more program instructions are for executing a monocular depth estimation algorithm as described above.
In an embodiment of the invention, the processor may be an integrated circuit chip having signal processing capability. The processor may be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware component.
The algorithms, steps, and logic blocks disclosed in the embodiments of the present invention may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the algorithm disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in ram, flash memory, rom, prom, or eprom, registers, etc. storage media as is well known in the art. The processor reads the information in the storage medium and completes the steps of the algorithm in combination with the hardware.
The storage medium may be a memory, for example, which may be volatile memory or nonvolatile memory, or which may include both volatile and nonvolatile memory.
The nonvolatile Memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash Memory.
The volatile memory may be a Random Access Memory (RAM), which serves as an external cache. By way of example and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and Direct Rambus RAM (DRRAM).
The storage media described in connection with the embodiments of the invention are intended to comprise, without being limited to, these and any other suitable types of memory.
Those skilled in the art will appreciate that the functionality described in the present invention may be implemented in a combination of hardware and software in one or more of the examples described above. When software is applied, the corresponding functionality may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
The above-mentioned embodiments, objects, technical solutions and advantages of the present invention are further described in detail, it should be understood that the above-mentioned embodiments are only exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made on the basis of the technical solutions of the present invention should be included in the scope of the present invention.

Claims (10)

1. A monocular depth estimation algorithm, the algorithm comprising:
receiving a monocular input image I, and acquiring an original depth map D of the monocular input image I;
constructing common depth maps D_n of several different resolution levels based on the original depth map D, wherein n is a resolution level;
constructing a relative depth map R_n for each resolution level based on the common depth map D_n;
obtaining a common depth detail map F_n and a relative depth detail map f_n for each resolution level using the common depth map D_n and the relative depth map R_n of each resolution level, respectively;
calculating the average depth component of the detail map of each resolution level using the common depth detail map F_n and the relative depth detail map f_n of each resolution level; and reconstructing an optimal depth map from the average depth component of each resolution-level detail map.
2. The algorithm of claim 1, wherein constructing common depth maps D_n of several different resolution levels based on the original depth map D specifically comprises:
calculating the geometric mean g(D) of the original depth map D and taking it as the common depth map D_0 of the lowest resolution level; and
obtaining the common depth map D_n of each resolution level from the lowest-resolution common depth map D_0 by a convolution recursion algorithm over common depth maps of adjacent resolution levels.
3. The algorithm of claim 1, wherein constructing a relative depth map R_n for each resolution level based on the common depth map D_n specifically comprises:
constructing a sparse comparison matrix P_{n,n-1} from the common depth maps D_{n-1} and D_n of adjacent resolution levels;
restoring the sparse comparison matrix P_{n,n-1} to a dense comparison matrix P̃_{n,n-1} using an alternating least squares (ALS) algorithm; and
performing normalization and reshaping on the left vector matrix of the dense comparison matrix P̃_{n,n-1} to reconstruct the relative depth map of each resolution level.
4. The algorithm of claim 1, wherein obtaining the common depth detail map F_n for each resolution level using the common depth map D_n of each resolution level comprises:
performing a first upsampling operation U on the common depth map D_{n-1} of the next lower resolution level using the first preset upsampling operation matrix U_{n-1} of that level; and
obtaining the common depth detail map F_n of each resolution level by Hadamard division, in the element direction, of the common depth map D_n of each resolution level by the upsampled common depth map D'_{n-1} of the next lower resolution level.
5. The algorithm of claim 1, wherein obtaining the relative depth detail map f_n for each resolution level using the relative depth map R_n of each resolution level comprises:
performing a second upsampling operation U' on the relative depth map R_{n-1} of the next lower resolution level using the second preset upsampling operation matrix U'_{n-1} of that level; and
obtaining the relative depth detail map f_n of each resolution level by Hadamard division, in the element direction, of the relative depth map R_n of each resolution level by the upsampled relative depth map R'_{n-1} of the next lower resolution level.
6. The algorithm of claim 1, wherein calculating the average depth component of the detail map of each resolution level using the common depth detail map F_n and the relative depth detail map f_n comprises:
calculating, using the common depth detail maps F_n, the common depth components of the detail maps of several different resolution levels corresponding to each common depth map D_n at a predetermined resolution level;
calculating, using the relative depth detail maps f_n, the relative depth components of the detail maps of several different resolution levels corresponding to each relative depth map R_n at a predetermined resolution level; and
calculating the average of the common depth components and the relative depth components at each same resolution level, respectively, to obtain the average depth component of the detail map of each corresponding resolution level.
7. A monocular depth estimation system, the system comprising:
the image input module is used for receiving a monocular input image I;
the original depth map acquisition module is used for acquiring an original depth map D of the monocular input image I;
a common depth map construction module for constructing common depth maps D of several different resolution levels based on the original depth map DnWherein n is a resolution level;
a relative depth map construction module for constructing a relative depth map based on the common depth map DnConstructing a relative depth map R for each resolution leveln
A common depth detail map acquisition module for utilizing the common depth map D of each resolution levelnObtaining a common depth detail map F for each resolution leveln
A relative depth detail map acquisition module for utilizing the relative depth map R of each resolution levelnObtaining a relative depth detail map f of each resolution level respectivelyn
An average depth component calculation module for utilizing the common depth detail map F for each resolution levelnAnd the relative depth detail map fnRespectively calculating the average depth component of the detail map of each resolution level; and
and the optimal depth map reconstruction module is used for reconstructing an optimal depth map according to the average depth component of the detail map of each resolution level.
8. The system according to claim 7, wherein the original depth map acquisition module is formed by one encoder, and the common depth map construction module and the relative depth map construction module are respectively formed by a plurality of pairs of decoders; the encoder employs a DenseNet-BC convolutional neural network model, which includes: a convolution layer, a maximum pooling layer, and three pairs of dense blocks and transition layers; each pair of decoders includes a decoder for constructing the common depth map D_n and a decoder for constructing the relative depth map R_n; each decoder comprises one dense block and at least one full-stripe mask block; and each relative depth map decoder also includes an alternating least squares layer, which employs the last dense block of DenseNet-BC.
9. A monocular depth estimation device, the device comprising: a processor and a memory;
the memory is to store one or more program instructions;
the processor being configured to execute one or more program instructions to perform an algorithm according to any one of claims 1 to 6.
10. A computer storage medium comprising one or more program instructions for executing an algorithm according to any one of claims 1-6.
CN201911378572.2A 2019-12-27 2019-12-27 Monocular depth estimation method, system, equipment and storage medium Active CN111179326B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911378572.2A CN111179326B (en) 2019-12-27 2019-12-27 Monocular depth estimation method, system, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911378572.2A CN111179326B (en) 2019-12-27 2019-12-27 Monocular depth estimation method, system, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111179326A true CN111179326A (en) 2020-05-19
CN111179326B CN111179326B (en) 2020-12-29

Family

ID=70646382

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911378572.2A Active CN111179326B (en) 2019-12-27 2019-12-27 Monocular depth estimation method, system, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111179326B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023184526A1 (en) * 2022-04-02 2023-10-05 Covidien Lp System and method of real-time stereoscopic visualization based on monocular camera

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107945265A (en) * 2017-11-29 2018-04-20 华中科技大学 Real-time dense monocular SLAM method and systems based on on-line study depth prediction network
CN108062769A (en) * 2017-12-22 2018-05-22 中山大学 A kind of fast deep restoration methods for three-dimensional reconstruction
CN108416840A (en) * 2018-03-14 2018-08-17 大连理工大学 A kind of dense method for reconstructing of three-dimensional scenic based on monocular camera
CN109087349A (en) * 2018-07-18 2018-12-25 亮风台(上海)信息科技有限公司 A kind of monocular depth estimation method, device, terminal and storage medium
US20190356905A1 (en) * 2018-05-17 2019-11-21 Niantic, Inc. Self-supervised training of a depth estimation system


Also Published As

Publication number Publication date
CN111179326B (en) 2020-12-29

Similar Documents

Publication Publication Date Title
US11488308B2 (en) Three-dimensional object detection method and system based on weighted channel features of a point cloud
CN113658051B (en) Image defogging method and system based on cyclic generation countermeasure network
CN113159143B (en) Infrared and visible light image fusion method and device based on jump connection convolution layer
CN107301631B (en) SAR image speckle reduction method based on non-convex weighted sparse constraint
CN112967327A (en) Monocular depth method based on combined self-attention mechanism
CN111652921A (en) Generation method of monocular depth prediction model and monocular depth prediction method
CN111046893A (en) Image similarity determining method and device, and image processing method and device
CN112598708A (en) Hyperspectral target tracking method based on four-feature fusion and weight coefficient
CN111179326B (en) Monocular depth estimation method, system, equipment and storage medium
CN114913284A (en) Three-dimensional face reconstruction model training method and device and computer equipment
CN109741258B (en) Image super-resolution method based on reconstruction
CN114663749A (en) Training method and device for landslide mass recognition model, electronic equipment and storage medium
CN114529793A (en) Depth image restoration system and method based on gating cycle feature fusion
CN106934398A (en) Image de-noising method based on super-pixel cluster and rarefaction representation
US20150296207A1 (en) Method and Apparatus for Comparing Two Blocks of Pixels
Radoi Generative adversarial networks under CutMix transformations for multimodal change detection
CN112686830A (en) Super-resolution method of single depth map based on image decomposition
CN114078149A (en) Image estimation method, electronic equipment and storage medium
Yufeng et al. Research on SAR image change detection algorithm based on hybrid genetic FCM and image registration
CN116310832A (en) Remote sensing image processing method, device, equipment, medium and product
CN116385281A (en) Remote sensing image denoising method based on real noise model and generated countermeasure network
CN114820755A (en) Depth map estimation method and system
Zhao et al. Single image super-resolution reconstruction using multiple dictionaries and improved iterative back-projection
CN113628289A (en) Hyperspectral image nonlinear unmixing method and system based on graph convolution self-encoder
Gaur et al. Precipitation Nowcasting using Deep Learning Techniques

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant