CN113496468B - Depth image restoration method, device and storage medium

Depth image restoration method, device and storage medium

Info

Publication number
CN113496468B
CN113496468B (application CN202010201231.4A)
Authority
CN
China
Prior art keywords
image
repaired
restoration
sparse coding
depth
Prior art date
Legal status
Active
Application number
CN202010201231.4A
Other languages
Chinese (zh)
Other versions
CN113496468A (en)
Inventor
李甲
宋昊坤
吴恒
付奎
赵沁平
Current Assignee
Beihang University
Original Assignee
Beihang University
Priority date
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN202010201231.4A priority Critical patent/CN113496468B/en
Publication of CN113496468A publication Critical patent/CN113496468A/en
Application granted granted Critical
Publication of CN113496468B publication Critical patent/CN113496468B/en

Classifications

    • G — PHYSICS; G06 — COMPUTING; CALCULATING OR COUNTING; G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 — Image enhancement or restoration
    • G06T 3/40, G06T 3/4007 — Geometric image transformation in the plane of the image; scaling the whole image or part thereof; interpolation-based scaling, e.g. bilinear interpolation
    • G06T 7/50 — Image analysis; depth or shape recovery
    • G06T 2200/04 — Indexing scheme for image data processing or generation, in general, involving 3D image data
    • G06T 2207/10024 — Image acquisition modality: color image
    • G06T 2207/10028 — Image acquisition modality: range image; depth image; 3D point clouds
    • G06T 2207/20081 — Special algorithmic details: training; learning

Abstract

The invention provides a depth image restoration method, device and storage medium, wherein the method comprises the following steps: extracting a binary mask image of an image to be repaired; performing spatial transformation processing on the image to be repaired to obtain a processed image to be repaired; performing transform-space data restoration on the processed image to be repaired according to the binary mask image and a convolutional sparse coding dictionary to obtain a spatial transformation restoration result, wherein the convolutional sparse coding dictionary is obtained by convolutional sparse coding learning on a plurality of sample images with complete depth information; and repairing the depth information of the image to be repaired according to the spatial transformation restoration result. The method, device and storage medium can express the distribution of the depth information of the image to be repaired more accurately, so that the depth image is repaired more effectively.

Description

Depth image restoration method, device and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method and apparatus for repairing a depth image, and a storage medium.
Background
With the development of research in the field of computer three-dimensional graphics, indoor scene depth images acquired by monocular cameras are widely used as material for scene modeling. It is therefore important to repair scene depth images so as to ensure the integrity and accuracy of the image depth information.
At present, depth image restoration based on traditional machine learning mostly uses additional information as guidance and restores the depth image either by constructing a filter from that additional information or by fitting a histogram distribution, where the additional information includes the color information or texture information of the scene.
However, these methods can hardly avoid the problem that the distribution of the additional information is inconsistent with that of the depth information. Taking color information as an example, the color distribution inside a non-solid-colored object varies; when such an object is repaired, constructing a bilateral filter from the color information separates the interior of the object into parts. For the depth information of the object, however, the differently colored parts lie at the same position, so their depth distributions are necessarily similar. A bilateral filter constructed from color information therefore has difficulty accurately expressing how the depth information of objects in the scene is distributed, and the resulting image repair effect is limited.
Disclosure of Invention
The invention provides a depth image restoration method, device and storage medium that can express the distribution of the depth information of the image to be repaired more accurately, so that the depth image is repaired more effectively.
In a first aspect, an embodiment of the present invention provides a method for repairing a depth image, including:
extracting a binary mask image of an image to be repaired;
performing space transformation on the image to be repaired to obtain a processed image to be repaired;
performing transformation spatial data restoration on the processed image to be restored according to the binary mask image and a convolution sparse coding dictionary to obtain a spatial transformation restoration result, wherein the convolution sparse coding dictionary is obtained by performing convolution sparse coding learning on a plurality of sample images with complete depth information;
and repairing the depth information of the image to be repaired according to the space transformation repairing result.
Optionally, the performing transformation spatial data restoration on the processed image to be restored according to the binary mask image and the convolution sparse coding dictionary to obtain a spatial transformation restoration result, including:
constructing a convolution sparse coding dictionary loss function according to the binary mask image and the convolution sparse coding dictionary;
determining sparse codes corresponding to the minimum loss value according to the loss function;
and carrying out transformation space data restoration on the processed image to be restored according to the convolution sparse coding dictionary and the sparse coding to obtain the space transformation restoration result.
Optionally, the performing spatial transformation processing on the image to be repaired to obtain a processed image to be repaired, including:
and carrying out space transformation processing on the image to be repaired by adopting local regularization to obtain the processed image to be repaired.
Optionally, the repairing the depth information of the image to be repaired according to the spatial transformation repairing result includes:
migrating the space transformation restoration result to the image to be restored to obtain a migrated image to be restored;
and carrying out restoration of depth information on the migrated image to be restored to obtain a restoration result of the image to be restored.
Optionally, the repairing the depth information of the migrated image to be repaired to obtain a repairing result of the image to be repaired includes:
and repairing the depth information of the migrated image to be repaired by a bilateral filtering algorithm to obtain a repairing result of the image to be repaired.
Optionally, the repairing the depth information of the migrated image to be repaired to obtain a repairing result of the image to be repaired includes:
dividing the image to be repaired into an effective depth point set and an incomplete depth point set according to the binary mask image, wherein the effective depth point set is a set of points with depth information distribution in the image to be repaired, and the incomplete depth point set is a set of points with depth information missing in the image to be repaired;
in the space transformation restoration result, a mapping relation between the effective depth point set and the incomplete depth point set is obtained through a local linear embedding algorithm;
and interpolating the incomplete depth point set in the migrated image to be repaired according to the mapping relation and the effective depth point set to obtain a repairing result of the image to be repaired.
Optionally, before performing transformation spatial data restoration on the image to be restored after spatial transformation according to the binary mask image and the convolution sparse coding dictionary, the method further includes:
acquiring a plurality of sample images with complete depth information;
performing space transformation processing on the plurality of sample images with the complete depth information respectively to obtain a plurality of processed sample images;
and performing convolutional sparse coding learning on the plurality of processed sample images to obtain the convolutional sparse coding dictionary.
In a second aspect, an embodiment of the present invention provides a depth image restoration apparatus, including:
the extraction module is used for extracting the binary mask image of the image to be repaired;
the processing module is used for performing space transformation processing on the image to be repaired to obtain a processed image to be repaired;
the restoration module is used for carrying out transformation space data restoration on the processed image to be restored according to the binary mask image and a convolution sparse coding dictionary, so as to obtain a space transformation restoration result, wherein the convolution sparse coding dictionary is obtained by carrying out convolution sparse coding learning on a plurality of sample images with complete depth information;
and the restoration module is also used for restoring the depth information of the image to be restored according to the spatial transformation restoration result.
Optionally, the repair module is specifically configured to:
constructing a convolution sparse coding dictionary loss function according to the binary mask image and the convolution sparse coding dictionary;
determining sparse codes corresponding to the minimum loss value according to the loss function;
and carrying out transformation space data restoration on the processed image to be restored according to the convolution sparse coding dictionary and the sparse coding to obtain the space transformation restoration result.
Optionally, the processing module is further configured to perform spatial transformation processing on the image to be repaired by adopting local regularization, so as to obtain a processed image to be repaired.
Optionally, the repair module is specifically configured to:
migrating the space transformation restoration result to the image to be restored to obtain a migrated image to be restored;
and carrying out restoration of depth information on the migrated image to be restored to obtain a restoration result of the image to be restored.
Optionally, the repair module is further configured to repair the depth information of the migrated image to be repaired by using a bilateral filtering algorithm, so as to obtain a repair result of the image to be repaired.
Optionally, the repair module is specifically configured to:
dividing the image to be repaired into an effective depth point set and an incomplete depth point set according to the binary mask image, wherein the effective depth point set is a set of points with depth information distribution in the image to be repaired, and the incomplete depth point set is a set of points with depth information missing in the image to be repaired;
in the space transformation restoration result, a mapping relation between the effective depth point set and the incomplete depth point set is obtained through a local linear embedding algorithm;
and interpolating the incomplete depth point set in the migrated image to be repaired according to the mapping relation and the effective depth point set to obtain a repairing result of the image to be repaired.
Optionally, the apparatus further includes: the device comprises an acquisition module and a learning module, wherein,
the acquisition module is used for acquiring a plurality of sample images with complete depth information;
the processing module is further used for performing space transformation processing on the plurality of sample images with the complete depth information respectively to obtain a plurality of processed sample images;
and the learning module is used for carrying out convolution sparse coding learning on the plurality of processed sample images to obtain the convolution sparse coding dictionary.
In a third aspect, an embodiment of the present invention provides a server, including:
a processor;
a memory for storing a computer program of the processor; and,
wherein the processor is configured to perform the method of restoration of a depth image according to the first aspect by executing the computer program.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium storing a computer program that causes a depth image restoration apparatus to execute the depth image restoration method according to the first aspect.
The invention provides a restoration method, a restoration device and a storage medium for a depth image, wherein a binary mask image of an image to be restored is extracted; performing space transformation processing on the image to be repaired to obtain a processed image to be repaired; performing transformation spatial data restoration on the processed image to be restored according to the binary mask image and a convolution sparse coding dictionary to obtain a spatial transformation restoration result, wherein the convolution sparse coding dictionary is obtained by performing convolution sparse coding learning on a plurality of sample images with complete depth information; and then repairing the depth information of the image to be repaired according to the spatial transformation repairing result. The image to be repaired is repaired by using the convolution sparse coding dictionary, so that the distribution change of the depth information of the image to be repaired can be expressed more accurately, and the repair effect of the depth image can be improved.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions of the prior art, the drawings which are used in the description of the embodiments or the prior art will be briefly described, it being obvious that the drawings in the description below are only some embodiments of the invention, and that other drawings can be obtained according to these drawings without inventive faculty for a person skilled in the art.
FIG. 1 is a flow chart of a method for depth image restoration according to an exemplary embodiment of the present invention;
FIG. 2 is a flow chart of a method for depth image restoration according to another exemplary embodiment of the present invention;
FIG. 3 is a flow chart of a method for depth image restoration according to yet another exemplary embodiment of the present invention;
FIG. 4 is a block diagram of a depth image restoration apparatus according to an exemplary embodiment of the present invention;
FIG. 5 is a block diagram of a depth image restoration apparatus according to another exemplary embodiment of the present invention;
fig. 6 is a schematic structural diagram of a server according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The terms "first," "second," "third," and "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The depth image restoration method provided by the invention can be applied to scenarios in which the depth images of scenes of the same category are repaired. In the prior art, the repair of a depth image is guided only by the distribution of additional information (such as color information or texture information), which makes it difficult to accurately express the distribution of the depth information of objects in the scene, so the repair effect on the depth image is limited.
In view of the above technical problems, an embodiment of the present invention provides a method for repairing a depth image by extracting a binary mask image of an image to be repaired; performing space transformation on the image to be repaired to obtain a processed image to be repaired; performing transformation spatial data restoration on the processed image to be restored according to the binary mask image and the convolution sparse coding dictionary to obtain a spatial transformation restoration result, wherein the convolution sparse coding dictionary is obtained by performing convolution sparse coding learning on a plurality of sample images with complete depth information; and then repairing the depth information of the image to be repaired according to the spatial transformation repairing result. The image to be repaired is repaired by using the convolution sparse coding dictionary, and the convolution sparse coding dictionary is obtained by learning the depth images of a plurality of scenes in the same category, so that the distribution change of the depth information of the image to be repaired can be more accurately expressed, and the repair effect of the depth image is improved.
The technical scheme of the invention is described in detail below by specific examples. The following embodiments may be combined with each other, and some embodiments may not be repeated for the same or similar concepts or processes.
Fig. 1 is a flowchart illustrating a method for restoring a depth image according to an exemplary embodiment of the present invention. The method may be performed by a prosthetic device of any depth image, which may be implemented in software and/or hardware. As shown in fig. 1, the method for repairing a depth image provided by the embodiment of the invention includes the following steps:
step 101: and extracting a binary mask image of the image to be repaired.
In this step, the image to be repaired is an image in which part of the depth information is missing. The image to be repaired is a scene image of a certain category, for example a landscape image, an animal image, a credential image or a building image; it may also be an indoor scene image, for example an image of a table, an image of a stool, or an indoor panoramic image. These are merely examples, and the embodiment of the present invention is not limited thereto.
After the image to be repaired is obtained, a binary mask image is extracted from it. The binary mask image reflects the set of points with valid depth information and the set of points with missing depth information in the image to be repaired, and both sets are obtained by extracting the binary mask image.
For example, let the binary mask image extracted from the image to be repaired be M. The value of M is set to 1 at points of the image to be repaired that have valid depth information and to 0 at points whose depth information is missing, as shown in formula (1):

M_{x,y} = 1, if the depth value at (x, y) is valid; M_{x,y} = 0, if the depth value at (x, y) is missing.    (1)

wherein M_{x,y} represents the value of the binary mask image M at the coordinates (x, y).
In this step, the effective depth point set and the incomplete depth point set of the image to be repaired are obtained by extracting its binary mask image, and the binary mask image provides a basis for the subsequent repair of the image to be repaired.
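For readers who prefer a concrete illustration, a minimal sketch of the mask extraction of formula (1) is given below. It is not part of the patent text; the function name and the assumption that missing depth values are stored as zeros (or NaN) in the raw depth map are ours.

```python
import numpy as np

def extract_binary_mask(depth: np.ndarray) -> np.ndarray:
    """Binary mask M of formula (1): 1 where depth is valid, 0 where it is missing.

    Assumes missing depth values are stored as 0 (or NaN) in the input map.
    """
    valid = np.isfinite(depth) & (depth > 0)
    return valid.astype(np.uint8)

# Example: a 3x3 depth map with two missing pixels.
depth = np.array([[1.2, 0.0, 1.4],
                  [1.3, 1.3, 0.0],
                  [1.5, 1.6, 1.7]])
M = extract_binary_mask(depth)   # -> [[1 0 1], [1 1 0], [1 1 1]]
```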
Step 102: and performing space transformation processing on the image to be repaired to obtain the processed image to be repaired.
In this step, the spatial transformation processing means that there exists a transformation such that the original image and the transformed image can be converted into each other; in this embodiment, the processed image to be repaired is obtained from the image to be repaired by the spatial transformation processing, and the image to be repaired can in turn be recovered from the processed image.
Optionally, local regularization may be used to perform the spatial transformation processing on the image to be repaired to obtain the processed image to be repaired. For example, in the local regularization process, a Gaussian operator σ with a scale of 13×13 may be selected as the weight model; other scales and Gaussian operators may also be selected, which is not limited in this embodiment. For each image to be repaired I, the image I_N obtained after the local regularized spatial transformation processing is given by formula (2). The regularized data express the relative position of each value within its neighborhood of the data space. Other spatial transformation methods may also be used and can be chosen according to the specific situation; the present invention does not limit the spatial transformation method.

(Formula (2), rendered as an image in the original, defines I_N in terms of the image to be repaired I and the Gaussian operator σ.)

wherein: I represents the image to be repaired; I_N represents the image to be repaired after the spatial transformation processing; σ denotes the Gaussian operator.
In the step, redundant information in the image to be repaired can be deleted by performing spatial transformation processing on the image to be repaired, and unnecessary repair workload is reduced in the process of repairing the image to be repaired by deleting the redundant information, so that the repair efficiency of the image to be repaired is improved.
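The following sketch illustrates one possible local-regularization transform in the spirit of step 102. Because formula (2) is rendered only as an image in the original, the concrete normalization used here (subtracting a Gaussian-weighted local mean and dividing by a Gaussian-weighted local deviation with roughly 13×13 support) is an assumption, and the function name is ours.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_regularize(depth: np.ndarray, sigma: float = 2.0, eps: float = 1e-6) -> np.ndarray:
    """Illustrative local regularization: express each depth value relative to its
    Gaussian-weighted neighborhood. The patent uses a 13x13 Gaussian weight model;
    the exact formula is not reproduced here, so this normalization is an assumption.
    """
    # sigma=2 with truncate=3.0 gives a radius of 6, i.e. an effective 13x13 support.
    local_mean = gaussian_filter(depth, sigma=sigma, truncate=3.0)
    local_dev = gaussian_filter(np.abs(depth - local_mean), sigma=sigma, truncate=3.0)
    return (depth - local_mean) / (local_dev + eps)
```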
After performing the spatial transformation processing on the image to be repaired to obtain the processed image to be repaired, step 103 is further performed.
Step 103: and carrying out transformation space data restoration on the processed image to be restored according to the binary mask image and the convolution sparse coding dictionary to obtain a space transformation restoration result, wherein the convolution sparse coding dictionary is obtained by carrying out convolution sparse coding learning on a plurality of sample images with complete depth information.
The spatial transformation restoration result can be regarded as a guide image, and the depth image to be restored is repaired with the help of this guide image. The convolutional sparse coding dictionary is obtained in advance by learning from a plurality of sample images; it should be noted that these sample images must be images with complete depth information. Because the dictionary is learned from a plurality of images with complete depth information, it is able to represent the relative distribution of the depth information in the spatially transformed image to be repaired.
In the step, according to the formula (1) and the convolution sparse coding dictionary in the step 101, the transformation space data restoration is carried out on the image to be restored after the local regularized space transformation processing, and finally a space transformation restoration result is obtained.
In this step, performing transform-space data restoration on the processed image to be repaired allows each pixel value of the processed image to make good use of the information of its surrounding pixels, so that a more satisfactory spatial transformation restoration result is obtained.
After the processed image to be repaired is subjected to transformation spatial data repair to obtain a spatial transformation repair result, the depth information of the image to be repaired is repaired according to the spatial transformation repair result, namely, step 104 is executed.
Step 104: and according to the spatial transformation restoration result, restoring the depth information of the image to be restored.
In this step, according to the spatial transformation restoration result obtained in step 103, the restoration of the depth information of the image to be restored is further performed, where the depth information of the image to be restored may be the pixel information of the image.
According to the depth image restoration method, a binary mask image of an image to be restored is extracted; performing space transformation on the image to be repaired to obtain a processed image to be repaired; according to the binary mask image and the convolution sparse coding dictionary, carrying out transformation space data restoration on the processed image to be restored to obtain a space transformation restoration result, wherein the convolution sparse coding dictionary is obtained by carrying out convolution sparse coding learning on a plurality of sample images with complete depth information; and according to the space transformation restoration result, restoring the depth information of the image to be restored. The method for repairing the image to be repaired by the convolution sparse coding dictionary is characterized in that the convolution sparse coding dictionary is obtained by learning the depth images of a plurality of scenes in the same category, so that the distribution change of the depth information of the image to be repaired can be expressed more accurately, and the repair effect of the depth image is improved.
Fig. 2 is a flow chart of a depth image restoration method according to another exemplary embodiment of the present invention. On the basis of the embodiment shown in Fig. 1, it describes in detail the process of performing transform-space data restoration on the processed image to be repaired according to the binary mask image and the convolutional sparse coding dictionary to obtain the spatial transformation restoration result. As shown in Fig. 2, the depth image restoration method provided by the embodiment of the invention includes the following steps:
step 201: and extracting a binary mask image of the image to be repaired.
Step 202: and performing space transformation processing on the image to be repaired to obtain the processed image to be repaired.
Steps 201 to 202 are similar to steps 101 to 102 and will not be described here.
Step 203: and constructing a convolution sparse coding dictionary loss function according to the binary mask image and the convolution sparse coding dictionary.
Before the space transformation restoration result is obtained by carrying out transformation space data restoration on the processed image to be restored according to the binary mask image and the convolution sparse coding dictionary, the convolution sparse coding dictionary needs to be acquired first, and the convolution sparse coding dictionary can be acquired by the following method.
Optionally, acquiring a plurality of sample images with complete depth information; performing space transformation processing on a plurality of sample images with complete depth information respectively to obtain a plurality of processed sample images; and performing convolutional sparse coding learning on the plurality of processed sample images to obtain a convolutional sparse coding dictionary.
For example, when a plurality of sample images are acquired, the sample images to be acquired may be acquired by voice, or may be acquired by text input, or may be acquired by other manners, which is merely an example, but the embodiments of the present invention are not limited thereto.
After a plurality of sample images with complete depth information are acquired, the frequency components of the depth data in these sample images can be separated according to the complete depth data of the scenes. Because the original depth data of scenes of the same category may contain the same or similar absolute depth values, the absolute differences in depth between objects whose depth distributions are similar in different scenes are redundant information and should not be learned by convolutional sparse coding; this redundant information corresponds mainly to the low-frequency components. The low-frequency components of the sample images should therefore be removed. The spatial transformation processing is then performed on the acquired sample images, as described in detail in step 102, and is not repeated here.
It should be noted that the acquired sample images and the image to be repaired should be depth images of scenes of the same category. Because the local relative distribution pattern of depth images of the same category of scene is used as guidance, the method conforms to the distribution of the depth information of the image to be repaired better than guiding the repair with the distribution of additional information as in the prior art, and the adverse effects caused by the distribution differences of different types of information are avoided, so that a better depth image repair effect is achieved.
After the processed sample images are obtained, convolutional sparse coding learning is further performed on them. For example, convolution kernels with a scale of 11×11 can be selected as the elements of the convolutional sparse coding dictionary; when the dictionary contains 100 convolution kernels, the image depth information of a certain category of scene can be represented well. The convolutional sparse coding dictionary φ and the sparse representation coefficients a (the sparse codes) are then optimized by the alternating direction method of multipliers, where the optimization minimizes the loss function shown in formula (4) under the constraint shown in formula (3).
‖φ_i‖_2 ≤ 1,  i = 1, …, K    (3)

min over {φ_i}, {a_i^{(j)}} of  (1/2) Σ_{j=1}^{m} ‖ x^{(j)} − Σ_{i=1}^{K} φ_i * a_i^{(j)} ‖_2^2 + λ Σ_{j=1}^{m} Σ_{i=1}^{K} ‖ a_i^{(j)} ‖_1    (4)

wherein: x^{(j)} represents the spatially transformed image of the j-th original depth map; φ represents the convolutional sparse coding dictionary obtained after learning and φ_i represents the i-th dictionary element; a_i^{(j)} represents the sparse coding; Σ_i ‖a_i^{(j)}‖_1 is the regularization term; K and m are positive integers (the number of dictionary elements and the number of sample images); λ represents the local regularization parameter that maintains the balance between the data-fidelity term and the local regularization term.
In this step, performing the spatial transformation on the plurality of sample images weakens the adverse effect of redundant information in the scene depth images during the learning of the convolutional sparse coding dictionary and, for the same dictionary scale, increases the dictionary's ability to represent the local distribution of the scene depth information, so that the subsequent transform-space data restoration of the image to be repaired is more complete and accurate.
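To make the training objective concrete, the sketch below merely evaluates the loss of formula (4) and enforces the norm constraint of formula (3) for a given dictionary and given codes; it is not the alternating direction method of multipliers solver described above, and the use of 'same'-padded 2-D convolution for the operator * as well as the function names are our assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def csc_train_loss(samples, kernels, codes, lam):
    """Objective of formula (4): 0.5 * sum_j ||x_j - sum_i phi_i * a_ij||^2 + lam * sum_ij ||a_ij||_1.

    samples: list of 2-D arrays x^(j) (spatially transformed training depth maps)
    kernels: list of K 2-D arrays phi_i (e.g. 100 kernels of size 11x11 as in the description)
    codes:   codes[j][i] is the sparse map a_i^(j), same size as samples[j]
    """
    loss = 0.0
    for j, x in enumerate(samples):
        recon = sum(fftconvolve(codes[j][i], k, mode="same") for i, k in enumerate(kernels))
        loss += 0.5 * np.sum((x - recon) ** 2)                       # data-fidelity term
        loss += lam * sum(np.abs(codes[j][i]).sum() for i in range(len(kernels)))  # l1 term
    return loss

def project_kernels(kernels):
    """Constraint of formula (3): rescale each kernel so that ||phi_i||_2 <= 1."""
    return [k / max(np.linalg.norm(k), 1.0) for k in kernels]
```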
After the convolutional sparse coding dictionary is obtained, step 203 is performed, i.e. a loss function of the convolutional sparse coding dictionary is constructed from the binary mask image and the convolutional sparse coding dictionary. Illustratively, the loss function is constructed from the convolutional sparse coding dictionary obtained above and formula (1), as shown in formula (5); it measures the loss of the convolutional sparse coding dictionary under different sparse codes.
L(a) = (1/2) ‖ M ⊙ ( x − Σ_{i=1}^{K} φ_i * a_i ) ‖_2^2 + λ Σ_{i=1}^{K} ‖ a_i ‖_1    (5)

wherein: x represents the image to be repaired after the spatial transformation processing; M represents the binary mask image and ⊙ denotes element-wise multiplication; a represents the convolutional sparse coding and a_i represents the i-th code; φ represents the sparse coding dictionary and φ_i represents the i-th dictionary element; λ represents the local regularization parameter; K is a positive integer.
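A direct transcription of the loss of formula (5) for a single image is sketched below; as before, the 'same'-padded 2-D convolution and the variable names are our assumptions, and only pixels marked valid by M contribute to the data-fidelity term.

```python
import numpy as np
from scipy.signal import fftconvolve

def masked_csc_loss(x, M, kernels, codes, lam):
    """Loss of formula (5): masked data-fidelity term plus l1 penalty on all sparse codes."""
    recon = sum(fftconvolve(a, k, mode="same") for a, k in zip(codes, kernels))
    data = 0.5 * np.sum((M * (x - recon)) ** 2)        # only valid pixels are compared
    sparsity = lam * sum(np.abs(a).sum() for a in codes)
    return data + sparsity
```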
Step 204: and determining sparse codes corresponding to the minimum loss value according to the loss function.
In this step, after the loss function is obtained in step 203, the minimum loss value of the convolutional sparse coding dictionary is sought from the loss function, because the smaller the loss value of the dictionary, the more accurate the restoration result of the image to be repaired.
In addition, each set of sparse codes corresponds to an image after spatial transformation, and each loss value corresponds to a set of sparse codes under the convolutional sparse coding dictionary; after the minimum loss value of the convolutional sparse coding dictionary is determined, the sparse codes corresponding to the minimum loss value are determined, as shown in formula (6).
a* = argmin_a { (1/2) ‖ M ⊙ ( x − Σ_{i=1}^{K} φ_i * a_i ) ‖_2^2 + λ Σ_{i=1}^{K} ‖ a_i ‖_1 }    (6)

wherein: x represents the image to be repaired after the spatial transformation processing; M represents the binary mask image; a represents the convolutional sparse coding and a_i represents the i-th code; φ represents the sparse coding dictionary and φ_i represents the i-th dictionary element; λ represents the local regularization parameter; K is a positive integer.
In this step, the optimal sparse coding, that is, the sparse coding corresponding to the minimum loss value is guided by using the convolutional sparse coding dictionary, and step 205 may be performed after determining the sparse coding corresponding to the minimum loss value.
Step 205: and carrying out transformation space data restoration on the processed image to be restored according to the convolution sparse coding dictionary and the sparse coding to obtain a space transformation restoration result.
In the step, according to a convolution sparse coding dictionary and a formula (6), transformation space data restoration is carried out on the image to be restored after space transformation processing, and a space transformation restoration result is obtained.
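The patent does not specify which solver finds the sparse codes of formula (6), so the sketch below uses a plain proximal-gradient (ISTA) iteration purely for illustration; the step size, iteration count, and the rule used to blend the dictionary reconstruction with the observed pixels are our assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def infer_codes(x, M, kernels, lam, step=0.05, iters=200):
    """Approximately minimize formula (6) by ISTA: a gradient step on the masked
    data-fidelity term followed by soft-thresholding (proximal operator of the l1 term)."""
    codes = [np.zeros_like(x) for _ in kernels]
    for _ in range(iters):
        recon = sum(fftconvolve(a, k, mode="same") for a, k in zip(codes, kernels))
        residual = M * (x - recon)                                   # masked reconstruction error
        for i, k in enumerate(kernels):
            grad = -fftconvolve(residual, k[::-1, ::-1], mode="same")  # correlation = gradient
            codes[i] = soft_threshold(codes[i] - step * grad, step * lam)
    return codes

def transform_space_repair(x, M, kernels, codes):
    """Spatial transformation restoration result: keep observed values, fill the missing
    ones with the dictionary reconstruction (this blending rule is our assumption)."""
    recon = sum(fftconvolve(a, k, mode="same") for a, k in zip(codes, kernels))
    return M * x + (1 - M) * recon
```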
In this embodiment, according to the binary mask images of the convolution sparse coding dictionary and the image to be repaired obtained in advance, the efficiency of repairing the transformation space data of the processed image to be repaired is higher, and in addition, the minimum loss value of the convolution sparse coding dictionary is determined, so that the repairing result of the space transformation is more accurate.
Fig. 3 is a flowchart of a depth image restoration method according to yet another exemplary embodiment of the present invention. On the basis of the embodiment shown in Fig. 1, it describes in detail how the depth information of the image to be repaired is restored according to the spatial transformation restoration result. The depth image restoration method provided by the embodiment of the invention includes the following steps:
Step 301: and extracting a binary mask image of the image to be repaired.
Step 302: and performing space transformation processing on the image to be repaired to obtain the processed image to be repaired.
Step 303: and carrying out transformation space data restoration on the processed image to be restored according to the binary mask image and the convolution sparse coding dictionary to obtain a space transformation restoration result, wherein the convolution sparse coding dictionary is obtained by carrying out convolution sparse coding learning on a plurality of sample images with complete depth information.
Steps 301 to 303 are similar to steps 101 to 103 and will not be described again here.
Step 304: and migrating the space transformation restoration result to the image to be restored to obtain the migrated image to be restored.
In this step, the spatial transformation restoration result is migrated into the image to be repaired. The migration may use the spatial transformation processing described above, i.e. the restoration result is migrated into the image to be repaired by a spatial transformation. For example, suppose the image to be repaired is A and the image obtained after the spatial transformation processing is A'; transform-space restoration is then performed on A' to obtain the spatial transformation restoration result A'_0, and A'_0 is migrated into the image to be repaired A to obtain the migrated image to be repaired A_0. Of course, other migration manners are also possible, and the specific migration manner is not limited here.
Step 305: and repairing the depth information of the migrated image to be repaired to obtain a repairing result of the image to be repaired.
In this step, the depth information of the migrated image to be repaired is further repaired, and the specific method for repairing the depth information may be a bilateral filtering algorithm or a local linear embedding algorithm, etc., which is only described as an example, but the embodiment of the present invention is not limited thereto.
Optionally, the depth information of the migrated image to be repaired is restored by a bilateral filtering algorithm to obtain the repair result of the image to be repaired. For example, a joint bilateral filter is constructed using the spatial transformation restoration result, bilateral filtering is applied to the image, and the depth information of the migrated image to be repaired is thereby restored, as shown in formula (7). When the joint bilateral filter is constructed, a kernel with a scale of 16×16 works well; of course, kernels of other scales can also be selected.
I_filtered(x) = (1/W_p) Σ_{x_i ∈ Ω} I(x_i) · f_r(‖x_i − x‖) · g_s(‖X(x_i) − X(x)‖)    (7)

W_p = Σ_{x_i ∈ Ω} f_r(‖x_i − x‖) · g_s(‖X(x_i) − X(x)‖)    (8)

wherein: I_filtered(x) represents the filtered pixel value at position x; X represents the repair result of the spatial transformation and x_i represents the physical coordinates of a neighboring pixel; I(x) represents the pixel value of the input image at position x; f_r represents the spatial Gaussian filter kernel; g_s represents the range filter kernel centered on the depth image value at x_i; Ω represents the spatial support of the kernel f_r; W_p represents the filter weight designed based on the restoration result of the transform space, as shown in formula (8).
In this step, a joint bilateral filter is constructed according to the repair result of the spatial transformation, and the depth information of the migrated image to be repaired is restored by the bilateral filtering algorithm, so that the repair result of the bilateral filtering method is obtained.
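A naive windowed implementation of the joint bilateral filter of formulas (7) and (8) is sketched below, guided by the spatial transformation restoration result; the 16×16 scale mentioned above is approximated by an odd-sized 17×17 window, and the Gaussian widths are placeholder values, not parameters given in the patent.

```python
import numpy as np

def joint_bilateral_filter(I, guide, radius=8, sigma_spatial=4.0, sigma_range=0.1):
    """Formulas (7)-(8): each output pixel is a normalized, doubly-weighted average of its
    neighbors, with the spatial weight f_r computed from pixel distance and the range
    weight g_s computed from the guide image (the spatial-transform repair result)."""
    H, W = I.shape
    out = np.zeros((H, W), dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_spatial ** 2))   # f_r
    pad_I = np.pad(I.astype(float), radius, mode="edge")
    pad_G = np.pad(guide.astype(float), radius, mode="edge")
    for y in range(H):
        for x in range(W):
            patch_I = pad_I[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            patch_G = pad_G[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            rng = np.exp(-((patch_G - guide[y, x]) ** 2) / (2 * sigma_range ** 2))  # g_s
            w = spatial * rng
            out[y, x] = (w * patch_I).sum() / (w.sum() + 1e-12)        # (7) with weight (8)
    return out
```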
In one implementation, the image to be repaired is divided into an effective depth point set and an incomplete depth point set according to the binary mask image, wherein the effective depth point set is a set of points with depth information distribution in the image to be repaired, and the incomplete depth point set is a set of points with depth information missing in the image to be repaired; in the spatial transformation restoration result, a mapping relation between the effective depth point set and the incomplete depth point set is obtained through a local linear embedding algorithm; and the incomplete depth point set in the migrated image to be repaired is interpolated according to the mapping relation and the effective depth point set to obtain the repair result of the image to be repaired.
Specifically, the image to be repaired is divided into an effective depth point set and an incomplete depth point set by the binary mask image, so that both sets can be located in the processed image to be repaired and in the corresponding spatial transformation restoration result. A local linear embedding algorithm is used to obtain the mapping relation between the effective depth point set and the incomplete depth point set in the spatial transformation restoration result; correspondingly, the mapping relation between the two sets in the migrated image to be repaired can also be obtained. The incomplete depth point set of the migrated image to be repaired is interpolated according to this mapping relation, and the repair result of the image to be repaired is finally obtained. For example, according to the local linear embedding algorithm, the incomplete depth points are fitted from the effective depth points and the fitting weights are recorded; in the fitting process, using 12 neighbors gives a good fitting effect, although the embodiment of the invention does not limit the number of neighbors. The incomplete depth point set in the migrated image to be repaired is then interpolated with the recorded fitting weights to obtain the repair result of the local linear embedding algorithm.
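The sketch below illustrates this local-linear-embedding style interpolation: for each missing point, fitting weights are estimated from 12 valid neighbors in the spatial transformation restoration result and then applied to the corresponding depth values of the migrated image. The local-window neighbor search and the regularization of the Gram matrix are our implementation choices, not details given in the patent.

```python
import numpy as np

def lle_fill(migrated, guide, M, k=12, window=7, reg=1e-3):
    """Fit each missing point as a weighted combination (weights summing to 1) of k valid
    neighbors, estimating the weights from the guide image, then interpolate the depth."""
    out = migrated.astype(float).copy()
    H, W = M.shape
    for y, x in zip(*np.where(M == 0)):                       # incomplete depth point set
        y0, y1 = max(0, y - window), min(H, y + window + 1)
        x0, x1 = max(0, x - window), min(W, x + window + 1)
        yy, xx = np.where(M[y0:y1, x0:x1] == 1)               # valid neighbors in the window
        if yy.size == 0:
            continue
        yy, xx = yy + y0, xx + x0
        d2 = (yy - y) ** 2 + (xx - x) ** 2
        idx = np.argsort(d2)[:k]
        ny, nx = yy[idx], xx[idx]
        diff = guide[ny, nx] - guide[y, x]                    # neighbor values relative to target
        C = np.outer(diff, diff)                              # local Gram matrix
        C = C + reg * np.trace(C) * np.eye(len(idx)) + 1e-12 * np.eye(len(idx))
        w = np.linalg.solve(C, np.ones(len(idx)))
        w = w / w.sum()                                       # weights sum to 1 (LLE constraint)
        out[y, x] = np.dot(w, migrated[ny, nx])               # interpolate the missing depth
    return out
```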
In this step, the depth information of the migrated image to be repaired is restored in two ways, and a depth information repair result is obtained for each (the repair result of the bilateral filtering method and the repair result of the local linear embedding algorithm). The two results are compared with the real image corresponding to the image to be repaired, and the result that is closest to the real image, i.e. the one with the higher accuracy, is taken as the final repair result of the image to be repaired.
In this embodiment, the repair result of the spatial transformation is migrated into the image to be repaired, and two methods are adopted to repair the depth information of the migrated image to be repaired, so that the repair effect is compared, and the final repair result of the image to be repaired is determined, so that the repair effect of the image to be repaired is more complete and the repair result is more accurate.
Fig. 4 is a block diagram of a depth image restoration apparatus according to an exemplary embodiment of the present invention, and as shown in fig. 4, the depth image restoration apparatus includes: an extraction module 11, a processing module 12 and a repair module 13, wherein:
an extracting module 11, configured to extract a binary mask image of an image to be repaired;
The processing module 12 is used for performing space transformation processing on the image to be repaired to obtain a processed image to be repaired;
the restoration module 13 is used for restoring the transformation space data of the processed image to be restored according to the binary mask image and the convolution sparse coding dictionary, so as to obtain a space transformation restoration result, wherein the convolution sparse coding dictionary is obtained by carrying out convolution sparse coding learning on a plurality of sample images with complete depth information;
and the restoration module 13 is further used for restoring the depth information of the image to be restored according to the spatial transformation restoration result.
Optionally, the processing module 12 is further configured to perform spatial transformation processing on the image to be repaired by using local regularization, so as to obtain a processed image to be repaired.
Optionally, the repairing module 13 is specifically configured to:
constructing a convolution sparse coding dictionary loss function according to the binary mask image and the convolution sparse coding dictionary;
determining sparse codes corresponding to the minimum loss value according to the loss function;
and carrying out transformation space data restoration on the processed image to be restored according to the convolution sparse coding dictionary and the sparse coding to obtain a space transformation restoration result.
Optionally, the repairing module 13 is specifically configured to:
Migrating the space transformation restoration result to the image to be restored to obtain a migrated image to be restored;
and repairing the depth information of the migrated image to be repaired to obtain a repairing result of the image to be repaired.
Optionally, the repair module 13 is further configured to repair the depth information of the migrated image to be repaired by using a bilateral filtering algorithm, so as to obtain a repair result of the image to be repaired.
Optionally, the repairing module 13 is specifically configured to:
dividing an image to be repaired into an effective depth point set and an incomplete depth point set according to the binary mask image, wherein the effective depth point set is a set of points with depth information distribution in the image to be repaired, and the incomplete depth point set is a set of points with depth information missing in the image to be repaired;
in the space transformation restoration result, obtaining a mapping relation between the effective depth point set and the incomplete depth point set through a local linear embedding algorithm;
and interpolating the incomplete depth point set in the migrated image to be repaired according to the mapping relation and the effective depth point set to obtain a repairing result of the image to be repaired.
Fig. 5 is a block diagram of a depth image restoration apparatus according to an exemplary embodiment of the present invention, and as shown in fig. 5, the depth image restoration apparatus further includes: an acquisition module 14 and a learning module 15, wherein:
An acquisition module 14 for acquiring a plurality of sample images having complete depth information;
the processing module 12 is further configured to perform spatial transformation processing on a plurality of sample images with complete depth information, so as to obtain a plurality of processed sample images;
and the learning module 15 is used for performing convolutional sparse coding learning on the plurality of processed sample images to obtain a convolutional sparse coding dictionary.
The above device may be used to execute the method provided by the corresponding method embodiment, and the specific implementation manner and technical effects are similar, and are not repeated here.
Fig. 6 is a schematic structural diagram of a server 60 according to an embodiment of the present invention, for example, referring to fig. 6, the server 60 may include a processor 601 and a memory 602, where,
the memory 602 is used to store program instructions;
the processor 601 is configured to read the program instructions in the memory 602, and execute the depth image restoration method according to any one of the embodiments described above according to the program instructions in the memory 602.
The server 60 in the embodiment of the present invention may execute the technical scheme of the depth image restoration method in any of the foregoing embodiments; its implementation principle and beneficial effects are similar to those of the depth image restoration method and are not repeated here.
The embodiment of the present invention further provides a computer readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the technical scheme of the depth image restoration method shown in any of the foregoing embodiments is executed, and the implementation principle and the beneficial effects of the method are similar to those of the depth image restoration method, and are not repeated herein.
The processor in the above embodiments may be a general purpose processor, a digital signal processor (digital signal processor, DSP), an application specific integrated circuit (application specific integrated circuit, ASIC), an off-the-shelf programmable gate array (field programmable gate array, FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components. The disclosed methods, steps, and logic blocks in the embodiments of the present invention may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be embodied directly in the execution of a hardware decoding processor, or in the execution of a combination of hardware and software modules in a decoding processor. The software modules may be located in a memory medium well known in the art such as random access memory (random access memory, RAM), flash memory, read-only memory (ROM), programmable read-only memory, or electrically erasable programmable memory, registers, and the like. The storage medium is located in a memory, and the processor reads instructions from the memory and, in combination with its hardware, performs the steps of the method described above.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment. In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in hardware plus software functional units.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This invention is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It is to be understood that the invention is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (8)

1. A method for repairing a depth image, comprising:
extracting a binary mask image of an image to be repaired;
performing space transformation on the image to be repaired to obtain a processed image to be repaired;
performing transformation spatial data restoration on the processed image to be restored according to the binary mask image and a convolution sparse coding dictionary to obtain a spatial transformation restoration result, wherein the convolution sparse coding dictionary is obtained by performing convolution sparse coding learning on a plurality of sample images with complete depth information;
According to the space transformation restoration result, restoring the depth information of the image to be restored;
wherein the performing transformation space data restoration on the processed image to be repaired according to the binary mask image and the convolution sparse coding dictionary to obtain the spatial transformation restoration result comprises:
constructing a convolution sparse coding dictionary loss function according to the binary mask image and the convolution sparse coding dictionary;
determining, according to the loss function, the sparse coding corresponding to the minimum loss value;
performing transformation space data restoration on the processed image to be repaired according to the convolution sparse coding dictionary and the sparse coding to obtain the spatial transformation restoration result;
and wherein the restoring depth information of the image to be repaired according to the spatial transformation restoration result comprises:
migrating the spatial transformation restoration result to the image to be repaired to obtain a migrated image to be repaired;
and restoring depth information of the migrated image to be repaired to obtain a restoration result of the image to be repaired.
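For clarity, the loss function referred to in the steps above can be written in one common masked convolutional sparse coding form (a sketch consistent with the claim, not a formulation fixed by the patent):

$$\{\hat{z}_k\} = \arg\min_{\{z_k\}} \frac{1}{2}\left\| M \odot \left(\sum_{k} d_k * z_k - \tilde{y}\right) \right\|_2^2 + \lambda \sum_{k} \left\| z_k \right\|_1 ,$$

where $\tilde{y}$ is the processed (spatially transformed) image to be repaired, $M$ is the binary mask image (1 at points with valid depth, 0 at missing points), $\odot$ is element-wise multiplication, $d_k$ are the filters of the convolution sparse coding dictionary, $*$ is convolution, $z_k$ are the sparse codes, and $\lambda$ weights the sparsity penalty. The spatial transformation restoration result then corresponds to $\sum_k d_k * \hat{z}_k$.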
2. The method according to claim 1, wherein the performing spatial transformation processing on the image to be repaired to obtain a processed image to be repaired comprises:
performing spatial transformation processing on the image to be repaired by using local regularization to obtain the processed image to be repaired.
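The claim does not specify what form the local regularization takes. A minimal sketch, assuming it is read as a local normalization of the depth map (subtracting a local mean and dividing by a local deviation before sparse coding), is given below; the function name, window size, and epsilon are illustrative assumptions, not values from the patent.

import numpy as np
from scipy.ndimage import uniform_filter

def locally_regularized_transform(depth, window=15, eps=1e-6):
    # Assumption: "local regularization" is interpreted here as local
    # mean/variance normalization, a common pre-processing step before
    # convolutional sparse coding; the patent does not fix this choice.
    d = depth.astype(np.float64)
    local_mean = uniform_filter(d, size=window)
    local_sq_mean = uniform_filter(d * d, size=window)
    local_std = np.sqrt(np.maximum(local_sq_mean - local_mean ** 2, 0.0))
    transformed = (d - local_mean) / (local_std + eps)
    # The local statistics are returned as well, so that a restoration result
    # computed in the transformed space can later be migrated back to the
    # original depth range.
    return transformed, local_mean, local_std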
3. The method according to claim 1, wherein the restoring depth information of the migrated image to be repaired to obtain a restoration result of the image to be repaired comprises:
restoring depth information of the migrated image to be repaired by using a bilateral filtering algorithm to obtain a restoration result of the image to be repaired.
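A minimal sketch of this step, assuming OpenCV's bilateral filter is applied to the migrated depth map; the kernel diameter and the two sigma values are illustrative, not values from the patent.

import numpy as np
import cv2

def repair_depth_bilateral(migrated_depth, d=9, sigma_color=25.0, sigma_space=9.0):
    # cv2.bilateralFilter accepts 8-bit or 32-bit float single-channel images,
    # so the depth map is cast to float32 before filtering.
    depth32 = migrated_depth.astype(np.float32)
    # Edge-preserving smoothing: each pixel is averaged with neighbours that
    # are close both spatially and in depth value.
    return cv2.bilateralFilter(depth32, d, sigma_color, sigma_space)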
4. The method according to claim 1, wherein the restoring depth information of the migrated image to be repaired to obtain a restoration result of the image to be repaired comprises:
dividing the image to be repaired into an effective depth point set and an incomplete depth point set according to the binary mask image, wherein the effective depth point set is the set of points in the image to be repaired at which depth information is present, and the incomplete depth point set is the set of points in the image to be repaired at which depth information is missing;
obtaining, in the spatial transformation restoration result, a mapping relation between the effective depth point set and the incomplete depth point set through a locally linear embedding algorithm;
and interpolating the incomplete depth point set in the migrated image to be repaired according to the mapping relation and the effective depth point set to obtain a restoration result of the image to be repaired.
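A minimal sketch of this interpolation. It assumes the mapping relation is realized by computing locally linear embedding (LLE) style reconstruction weights for each incomplete point from its nearest effective points in the spatial transformation restoration result, and then applying those weights to the migrated depth values; the feature choice, neighbour count, and regularization are illustrative assumptions.

import numpy as np
from scipy.spatial import cKDTree

def lle_interpolate(restored_transform, migrated_depth, mask, k=8, reg=1e-3):
    # mask: binary mask image, 1 = effective depth point, 0 = incomplete point.
    ys, xs = np.nonzero(mask)           # effective depth point set
    ym, xm = np.nonzero(mask == 0)      # incomplete depth point set

    # Feature for the neighbourhood search: pixel coordinates plus the value
    # of the spatial transformation restoration result (an assumption).
    feat_valid = np.stack([ys, xs, restored_transform[ys, xs]], axis=1).astype(np.float64)
    feat_miss = np.stack([ym, xm, restored_transform[ym, xm]], axis=1).astype(np.float64)

    tree = cKDTree(feat_valid)
    _, nn_idx = tree.query(feat_miss, k=k)

    out = migrated_depth.astype(np.float64).copy()
    valid_depth = migrated_depth[ys, xs].astype(np.float64)

    for i in range(feat_miss.shape[0]):
        diff = feat_valid[nn_idx[i]] - feat_miss[i]     # centre the neighbourhood
        gram = diff @ diff.T
        trace = np.trace(gram)
        gram += (reg * trace if trace > 0 else reg) * np.eye(k)
        w = np.linalg.solve(gram, np.ones(k))
        w /= w.sum()                                    # LLE weights sum to one
        out[ym[i], xm[i]] = w @ valid_depth[nn_idx[i]]  # interpolate the missing depth
    return out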
5. The method of claim 1, wherein before performing transformation space data restoration on the processed image to be repaired according to the binary mask image and the convolution sparse coding dictionary, the method further comprises:
acquiring a plurality of sample images with complete depth information;
performing spatial transformation processing on each of the plurality of sample images with complete depth information to obtain a plurality of processed sample images;
and performing convolution sparse coding learning on the plurality of processed sample images to obtain the convolution sparse coding dictionary.
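A minimal sketch of this learning step. As a simpler stand-in for the convolutional sparse coding learning named in the claim, the example below learns a patch-based sparse coding dictionary with scikit-learn from the processed sample images; a dedicated convolutional sparse coding solver would be used for the convolutional dictionary proper, and every parameter value here is illustrative.

import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

def learn_dictionary(processed_samples, patch_size=(8, 8), n_atoms=64,
                     patches_per_image=2000, alpha=1.0, seed=0):
    # processed_samples: spatially transformed sample images with complete
    # depth information (see claim 5).
    rng = np.random.RandomState(seed)
    patches = []
    for img in processed_samples:
        p = extract_patches_2d(img.astype(np.float64), patch_size,
                               max_patches=patches_per_image, random_state=rng)
        patches.append(p.reshape(p.shape[0], -1))
    data = np.concatenate(patches, axis=0)
    data -= data.mean(axis=1, keepdims=True)   # remove the per-patch mean

    learner = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=alpha,
                                          random_state=seed)
    learner.fit(data)
    return learner.components_                 # one dictionary atom per row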
6. A depth image restoration apparatus, comprising:
the extraction module is configured to extract a binary mask image of an image to be repaired;
the processing module is configured to perform spatial transformation processing on the image to be repaired to obtain a processed image to be repaired;
the restoration module is configured to perform transformation space data restoration on the processed image to be repaired according to the binary mask image and a convolution sparse coding dictionary to obtain a spatial transformation restoration result, wherein the convolution sparse coding dictionary is obtained by performing convolution sparse coding learning on a plurality of sample images with complete depth information;
the restoration module is further configured to restore depth information of the image to be repaired according to the spatial transformation restoration result;
wherein the restoration module is specifically configured to:
construct a convolution sparse coding dictionary loss function according to the binary mask image and the convolution sparse coding dictionary;
determine, according to the loss function, the sparse coding corresponding to the minimum loss value;
and perform transformation space data restoration on the processed image to be repaired according to the convolution sparse coding dictionary and the sparse coding to obtain the spatial transformation restoration result;
and wherein the restoration module is further specifically configured to:
migrate the spatial transformation restoration result to the image to be repaired to obtain a migrated image to be repaired;
and restore depth information of the migrated image to be repaired to obtain a restoration result of the image to be repaired.
7. A server, comprising:
a processor;
a memory for storing a computer program for the processor; and
wherein the processor is configured to perform the depth image restoration method of any one of claims 1 to 5 by executing the computer program.
8. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the depth image restoration method according to any one of claims 1 to 5.
CN202010201231.4A 2020-03-20 2020-03-20 Depth image restoration method, device and storage medium Active CN113496468B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010201231.4A CN113496468B (en) 2020-03-20 2020-03-20 Depth image restoration method, device and storage medium

Publications (2)

Publication Number Publication Date
CN113496468A CN113496468A (en) 2021-10-12
CN113496468B true CN113496468B (en) 2023-07-04

Family

ID=77993717

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010201231.4A Active CN113496468B (en) 2020-03-20 2020-03-20 Depth image restoration method, device and storage medium

Country Status (1)

Country Link
CN (1) CN113496468B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116433536A (en) * 2023-06-13 2023-07-14 安徽大学 Processing method and system for high-precision restoration of panoramic image

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014144306A1 (en) * 2013-03-15 2014-09-18 Arizona Board Of Regents On Behalf Of Arizona State University Ensemble sparse models for image analysis and restoration
CN103679662B (en) * 2013-12-25 2016-05-25 苏州市职业大学 Based on the right super-resolution image restoration method of classification priori non-negative sparse coding dictionary
RU2568929C1 (en) * 2014-04-30 2015-11-20 Самсунг Электроникс Ко., Лтд. Method and system for fast mri-images reconstruction from sub-sampled data
WO2016050729A1 (en) * 2014-09-30 2016-04-07 Thomson Licensing Face inpainting using piece-wise affine warping and sparse coding
CN105608678B (en) * 2016-01-11 2018-03-27 宁波大学 The depth image cavity represented based on sparse distortion model is repaired and denoising method
CN105844635B (en) * 2016-03-21 2018-10-12 北京工业大学 A kind of rarefaction representation depth image method for reconstructing based on structure dictionary
CN106780342A (en) * 2016-12-28 2017-05-31 深圳市华星光电技术有限公司 Single-frame image super-resolution reconstruction method and device based on the reconstruct of sparse domain
KR102314703B1 (en) * 2017-12-26 2021-10-18 에스케이하이닉스 주식회사 Joint dictionary generation method for image processing, interlace based high dynamic range imaging apparatus using the joint dictionary and image processing method of the same
CN108492250A (en) * 2018-03-07 2018-09-04 大连大学 The method of depth image Super-resolution Reconstruction based on the guiding of high quality marginal information
CN108629756B (en) * 2018-04-28 2021-06-25 东北大学 Kinectv2 depth image invalid point repairing method
CN108765327B (en) * 2018-05-18 2021-10-29 郑州国测智能科技有限公司 Image rain removing method based on depth of field and sparse coding
CN109543724B (en) * 2018-11-06 2021-09-03 南京晓庄学院 Multilayer identification convolution sparse coding learning method
CN109636722B (en) * 2018-12-05 2023-09-05 中国矿业大学 Method for reconstructing super-resolution of online dictionary learning based on sparse representation
CN109784159A (en) * 2018-12-11 2019-05-21 北京航空航天大学 The processing method of scene image, apparatus and system
CN110276389B (en) * 2019-06-14 2023-04-07 中国矿业大学 Mine mobile inspection image reconstruction method based on edge correction
CN110827209A (en) * 2019-09-26 2020-02-21 西安交通大学 Self-adaptive depth image restoration method combining color and depth information

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109308485A (en) * 2018-08-02 2019-02-05 中国矿业大学 A kind of migration sparse coding image classification method adapted to based on dictionary domain
CN109785244A (en) * 2018-11-30 2019-05-21 中国农业大学 A kind of restorative procedure of multi-Target Image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on depth video hole region restoration technology; Hu Tianyou (胡天佑); China Excellent Master's Theses Full-text Database, Information Science and Technology Series; I138-2289 *

Also Published As

Publication number Publication date
CN113496468A (en) 2021-10-12

Similar Documents

Publication Publication Date Title
CN109840477B (en) Method and device for recognizing shielded face based on feature transformation
CN111063021A (en) Method and device for establishing three-dimensional reconstruction model of space moving target
CN111161269B (en) Image segmentation method, computer device, and readable storage medium
CN111160298A (en) Robot and pose estimation method and device thereof
WO2021219835A1 (en) Pose estimation method and apparatus
CN113096249B (en) Method for training vertex reconstruction model, image reconstruction method and electronic equipment
CN113724379B (en) Three-dimensional reconstruction method and device for fusing image and laser point cloud
CN111680573B (en) Face recognition method, device, electronic equipment and storage medium
CN112734887A (en) Face mixing-deformation generation method and device based on deep learning
CN113496468B (en) Depth image restoration method, device and storage medium
CN113822982A (en) Human body three-dimensional model construction method and device, electronic equipment and storage medium
CN108986210B (en) Method and device for reconstructing three-dimensional scene
CN117274605B (en) Method and device for extracting water area outline from photo shot by unmanned aerial vehicle
CN113077477B (en) Image vectorization method and device and terminal equipment
CN114626118A (en) Building indoor model generation method and device
CN109829857B Method and device for correcting inclined images based on a generative adversarial network
CN111311732A (en) 3D human body grid obtaining method and device
CN115908753A (en) Whole body human mesh surface reconstruction method and related device
CN116342385A (en) Training method and device for text image super-resolution network and storage medium
CN113378864B (en) Method, device and equipment for determining anchor frame parameters and readable storage medium
CN115423697A (en) Image restoration method, terminal and computer storage medium
CN115375715A (en) Target extraction method and device, electronic equipment and storage medium
CN114611667A (en) Reconstruction method for calculating characteristic diagram boundary based on small-scale parameter matrix
CN112184884A (en) Three-dimensional model construction method and device, computer equipment and storage medium
CN112288748A (en) Semantic segmentation network training and image semantic segmentation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant