CN107301662B - Compression recovery method, device and equipment for depth image and storage medium - Google Patents


Info

Publication number
CN107301662B
CN107301662B (application CN201710524546.0A)
Authority
CN
China
Prior art keywords
depth image
image
model
restored
branch model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710524546.0A
Other languages
Chinese (zh)
Other versions
CN107301662A (en)
Inventor
王旭
张乒乒
江健民
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen University
Original Assignee
Shenzhen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen University filed Critical Shenzhen University
Priority to CN201710524546.0A priority Critical patent/CN107301662B/en
Publication of CN107301662A publication Critical patent/CN107301662A/en
Application granted
Publication of CN107301662B publication Critical patent/CN107301662B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/529 Depth or shape recovery from texture

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Image Processing (AREA)

Abstract

The invention belongs to the field of computer technology and provides a compression recovery method and device for a depth image, a mobile terminal, and a storage medium. The method comprises: receiving an input restoration request for a depth image to be restored, the depth image to be restored being associated with a corresponding texture image; preprocessing the texture image and the depth image to be restored to obtain their high-frequency information; inputting the high-frequency information of the texture image, the high-frequency information of the depth image to be restored, and the depth image to be restored into the Y-branch, D-branch and M-branch models, respectively, of a preset depth image restoration model; and inputting the characteristic images recovered by the Y-branch and D-branch models into the M-branch model, which restores the depth image corresponding to the depth image to be restored. The compression recovery quality of the depth image is thereby improved, and user experience is improved in turn.

Description

Compression recovery method, device and equipment for depth image and storage medium
Technical Field
The invention belongs to the technical field of computers, and particularly relates to a depth image compression recovery method, device, equipment and storage medium.
Background
In 3D computer graphics, a depth image contains information about the distance between the viewpoint and the surfaces of objects in the scene. Conventional machine vision projects a three-dimensional object into a two-dimensional image and recovers the three-dimensional scene from the relationships between object features, image data, and the imaging process. Depth information plays a key role in three-dimensional reconstruction. Transmitting stereoscopic video (left and right views) can provide a 3D experience, but with significant limitations. To reduce the number of transmitted views, the texture-plus-depth format has become widely accepted: the color information and depth information of a few viewpoints are transmitted together, and virtual views are then synthesized using depth-image-based rendering (DIBR). Both texture and depth information must be acquired when a 3D scene is reconstructed, and a high-resolution depth image occupies a large amount of space during storage and transmission. The depth image therefore needs to be compressed to improve space utilization and transmission efficiency. However, a compressed depth image suffers from distortions such as blurring and blocking artifacts, which in turn distort the rendered 3D scene.
Lossy compression achieves larger compression ratios, but the information it discards cannot be recovered exactly, so image compression and restoration remain an active research direction. In earlier years, many researchers designed smoothing filters to remove blocking artifacts in the spatial or transform domain. Luo et al. proposed adaptive removal of blocking artifacts in the spatial and discrete cosine transform (DCT) domains; the model proposed by Singh et al. filters smooth and non-smooth regions with different filters. The shape-adaptive DCT (SA-DCT) is probably the most popular deblocking method: by adapting the shape of the filter support, it can reconstruct sharp edges of the image. However, these filters tend to over-smooth the image, blurring its edges.
In recent years, convolutional neural networks (CNNs), which learn feature detectors from training data and whose local weight-sharing structure gives them unique advantages in speech recognition and image processing, have also shown excellent performance in the field of image restoration. The super-resolution convolutional network (SRCNN) model proposed by Dong et al. demonstrated the potential of end-to-end deep convolutional networks for image super-resolution, and the deeper SRCNN achieves better restoration by increasing the number of layers. However, these algorithms mainly target super-resolution, and their deblocking performance is poor. The artifact reduction convolutional neural network (AR-CNN) builds on SRCNN to restore images compressed by JPEG, JPEG 2000, Twitter recompression, and so on. Although AR-CNN is a more general model, it is designed to restore texture images, which differ significantly from depth images.
Disclosure of Invention
The invention aims to provide a method, device, equipment and storage medium for compressing and recovering a depth image, so as to solve the problem that the prior art provides no effective way to recover a compressed depth image, resulting in poor compression recovery quality and poor user experience.
In one aspect, the present invention provides a method for compressing and restoring a depth image, including the following steps:
receiving an input restoration request of a depth image to be restored, wherein the depth image to be restored is associated with a corresponding texture image;
preprocessing the texture image and the depth image to be restored to obtain high-frequency information of the texture image and the depth image to be restored;
respectively inputting the high-frequency information of the texture image, the high-frequency information of the depth image to be restored and the depth image to be restored into a Y-branch model, a D-branch model and an M-branch model of a preset depth image restoration model;
inputting the characteristic images obtained by recovering the Y branch model and the D branch model into the M branch model, and recovering the depth image corresponding to the depth image to be recovered through the M branch model.
In another aspect, the present invention provides an apparatus for compressing and restoring a depth image, the apparatus including:
the request receiving unit is used for receiving an input restoration request of a depth image to be restored, wherein the depth image to be restored is associated with a corresponding texture image;
the image preprocessing unit is used for preprocessing the texture image and the depth image to be restored to obtain high-frequency information of the texture image and the depth image to be restored;
the corresponding input unit is used for respectively inputting the high-frequency information of the texture image, the high-frequency information of the depth image to be restored and the depth image to be restored into a Y-branch model, a D-branch model and an M-branch model of a preset depth image restoration model; and
and the image recovery unit is used for inputting the characteristic images obtained by recovering the Y branch model and the D branch model into the M branch model and recovering the depth image corresponding to the depth image to be recovered through the M branch model.
In another aspect, the present invention further provides a compression recovery apparatus for a depth image, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the compression recovery method for a depth image when executing the computer program.
In another aspect, the present invention further provides a computer-readable storage medium storing a computer program, which when executed by a processor implements the steps of the method for compression recovery of depth images.
The method comprises: receiving an input restoration request for a depth image to be restored, the depth image to be restored being associated with a corresponding texture image; preprocessing the texture image and the depth image to be restored to obtain their high-frequency information; inputting the high-frequency information of the texture image, the high-frequency information of the depth image to be restored, and the depth image to be restored into the Y-branch, D-branch and M-branch models, respectively, of a preset depth image restoration model; and inputting the characteristic images recovered by the Y-branch and D-branch models into the M-branch model, which restores the depth image corresponding to the depth image to be restored. The compression recovery quality of the depth image is thereby improved, and user experience is improved in turn.
Drawings
Fig. 1 is a flowchart of an implementation of a depth image compression recovery method according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a depth image compression and recovery apparatus according to a second embodiment of the present invention;
fig. 3 is a schematic structural diagram of a depth image compression and recovery apparatus according to a second embodiment of the present invention; and
fig. 4 is a schematic structural diagram of a depth image compression and recovery apparatus according to a third embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The following detailed description of specific implementations of the present invention is provided in conjunction with specific embodiments:
the first embodiment is as follows:
fig. 1 shows an implementation flow of a depth image compression recovery method according to a first embodiment of the present invention, and for convenience of description, only the portions related to the first embodiment of the present invention are shown, and the details are as follows:
in step S101, an input request for restoring a depth image to be restored, which is associated with a corresponding texture image, is received.
The embodiment of the invention is suitable for a recovery system of the compressed depth image, so that the compressed depth image can be recovered conveniently. In the embodiment of the invention, a recovery request of a to-be-recovered depth image input by a user is received, wherein the to-be-recovered depth image is associated with a corresponding texture image so as to assist in recovering the to-be-recovered depth image.
Preferably, the depth image restoration model is first constructed before receiving an input restoration request of the depth image to be restored. Specifically, when constructing the depth image restoration model, a Y-branch model of the depth image restoration model is first constructed, where the constructed Y-branch model includes 2 convolutional layers:
F1^Y = max(0, W1^Y * Yh + B1^Y),  F2^Y = max(0, W2^Y * F1^Y + B2^Y)
then, a D-branch model of the depth image recovery model is constructed, wherein the constructed D-branch model comprises 2 convolutional layers:
F1^D = max(0, W1^D * Dh + B1^D),  F2^D = max(0, W2^D * F1^D + B2^D)
and finally, constructing an M-branch model of the depth image recovery model, wherein the constructed M-branch model comprises 5 convolutional layers:
F1^M = max(0, W1^M * Dq + B1^M)
Fi^M = max(0, Wi^M * F(i-1)^M + Bi^M),  i = 2, 3, 4
F5^M = W5^M * F4^M + B5^M
where Yh is the high-frequency information of the texture image, Dh is the high-frequency information of the depth image to be restored, Dq is the depth image to be restored, "*" denotes the convolution operation, the W terms are filters, and the B terms are bias vectors.
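As a concrete sketch of the layer stacks just defined, the following pure-NumPy code builds two-layer Y/D branches and a five-layer M branch with additive fusion after the second M layer. The 3x3 filter size, single-channel feature maps, random weights, and the additive form of the fusion are illustrative assumptions; the patent's filter dimensions and exact fusion formula are not recoverable from the text.

```python
import numpy as np

def conv2d(x, w, b):
    """The "*" in the formulas: 2-D valid convolution plus a bias."""
    kh, kw = w.shape
    H, W = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w) + b
    return out

def conv_relu(x, w, b):
    """One layer F = max(0, W * x + B)."""
    return np.maximum(0.0, conv2d(x, w, b))

rng = np.random.default_rng(0)

def branch(x, n_layers, k=3):
    """A stack of conv+ReLU layers (2 for the Y/D branches)."""
    for _ in range(n_layers):
        x = conv_relu(x, rng.standard_normal((k, k)) * 0.1, 0.0)
    return x

Yh = rng.random((32, 32))  # high-frequency information of the texture image
Dh = rng.random((32, 32))  # high-frequency information of the depth image
Dq = rng.random((32, 32))  # compressed depth image to be restored

FY = branch(Yh, 2)                    # Y branch: 2 convolutional layers
FD = branch(Dh, 2)                    # D branch: 2 convolutional layers
FM = branch(Dq, 2)                    # M branch: first 2 of its 5 layers
FM = branch((FM + FY + FD) / 3.0, 3)  # assumed additive fusion, then 3 more layers
```

Each 3x3 valid convolution shrinks the map by 2 pixels per side, so the 32x32 inputs yield 28x28 branch features and a 22x22 M-branch output in this sketch; a real implementation would pad to preserve the image size.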
Further preferably, after the step of constructing the depth image restoration model and before the step of receiving the input restoration request, an input training set is first received, the training set comprising uncompressed texture images, uncompressed depth images, and the corresponding compressed depth images. The uncompressed texture images and the compressed depth images in the training set are then preprocessed to obtain their high-frequency information. Next, the high-frequency information of the uncompressed texture images, the high-frequency information of the compressed depth images, and the compressed depth images themselves are input into the Y-branch, D-branch and M-branch models, respectively, of the pre-constructed depth image restoration model, and the three branch models are trained. A training loss function is calculated from the uncompressed depth images in the training set and the restored images obtained by training, and the filters and bias vectors of the depth image restoration model are updated. While the preset number of iterations has not been reached, the training and updating steps are repeated; once the preset number of iterations is reached, the trained model is set as the preset depth image restoration model, thereby improving the restoration accuracy for compressed depth images.
Preferably, the training loss function is calculated according to the formula

L(θ) = (1/N) Σ_{i=1..N} ||F(Y^i, Dq^i; θ) − D^i||²,

where N is the number of training samples in the training set, F(Y^i, Dq^i; θ) is the restored depth image produced by the model, Dq^i is the i-th compressed depth image, Y^i is the texture image corresponding to Dq^i, D^i is the corresponding uncompressed depth image, i indexes each training sample, and θ is the set of parameters to be optimized, comprising the filter and bias-vector parameters, so that the accuracy and speed of the training are improved.
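Assuming the loss is the mean squared restoration error over the N training pairs (a standard choice consistent with the description, though the exact norm in the original formula image is not recoverable), it can be computed as:

```python
import numpy as np

def training_loss(restored, ground_truth):
    """L(theta) = (1/N) * sum_i ||F_i - D_i||^2, where F_i is the i-th
    restored depth image and D_i the i-th uncompressed depth image."""
    N = len(restored)
    return sum(np.sum((f - d) ** 2) for f, d in zip(restored, ground_truth)) / N
```

During training, θ (the filters and bias vectors) would be updated by backpropagation to minimize this loss, repeating until the preset number of iterations is reached.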
In step S102, the texture image and the depth image to be restored are preprocessed, and high frequency information of the texture image and the depth image to be restored is obtained.
In the embodiment of the invention, after the restoration request for the depth image to be restored is received, the corresponding texture image and the depth image to be restored are first obtained; the two images are then preprocessed separately and their high-frequency information is extracted, thereby obtaining the high-frequency information of the texture image and of the depth image to be restored.
Preferably, when the texture image and the depth image to be restored are preprocessed, the texture image is preprocessed according to the formula

Yh = abs(Y - h(Y))

to obtain the high-frequency information of the texture image, and the depth image to be restored is preprocessed according to the formula

Dh = abs(D - h(D))

to obtain the high-frequency information of the depth image to be restored, where the parameter Y is the texture image, the parameter D is the depth image to be restored, h(Y) denotes mean pooling of the texture image, h(D) denotes mean pooling of the depth image to be restored, and abs() is the absolute-value function. That is, a 9x9 mean pooling is first applied to the texture image or to the depth image to be restored; the pooled pixel values are then subtracted from the pixel values before pooling, and the absolute value of the difference yields the edge image (i.e. the high-frequency information) of the corresponding image. Mean pooling blurs the image, removing its high-frequency component and leaving only the low-frequency information; subtracting the image containing only low-frequency information from the original image therefore retains the high-frequency information.
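The mean-pooling, subtraction, and absolute-value procedure described above can be written directly in NumPy. The 9x9 window follows the text; edge padding in the pooling step is an assumption, since the text does not state how image borders are handled.

```python
import numpy as np

def mean_pool(img, k=9):
    """h(.): blur the image with a k x k mean filter (same-size output)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")  # border handling is assumed
    out = np.empty(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def high_frequency(img, k=9):
    """abs(img - h(img)): the edge image, i.e. the high-frequency information."""
    return np.abs(np.asarray(img, dtype=float) - mean_pool(img, k))
```

A flat region yields zeros (pure low frequency), while pixels near depth discontinuities or texture edges yield large values, which is exactly the edge image the branch models consume.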
In step S103, the high frequency information of the texture image, the high frequency information of the depth image to be restored, and the depth image to be restored are input to the Y-branch model, the D-branch model, and the M-branch model of the preset depth image restoration model, respectively.
In the embodiment of the invention, after the high-frequency information of the texture image and the depth image to be restored is obtained, the high-frequency information of the texture image, the high-frequency information of the depth image to be restored and the depth image to be restored are respectively input into a Y-branch model, a D-branch model and an M-branch model of a preset depth image restoration model.
In step S104, the feature images restored by the Y-branch model and the D-branch model are input to the M-branch model, and the depth image corresponding to the depth image to be restored is restored by the M-branch model.
In the embodiment of the invention, the characteristic images obtained by the Y-branch model and the D-branch model,

F2^Y = max(0, W2^Y * F1^Y + B2^Y)

and

F2^D = max(0, W2^D * F1^D + B2^D),

are input into the M-branch model, and the M-branch model, fusing the characteristic images of the three branches within its convolutional layers, restores the depth image corresponding to the depth image to be restored through its final layer

F5^M = W5^M * F4^M + B5^M.
In the embodiment of the invention, an input restoration request for a depth image to be restored is received, the depth image to be restored being associated with a corresponding texture image; the texture image and the depth image to be restored are preprocessed to obtain their high-frequency information; the high-frequency information of the texture image, the high-frequency information of the depth image to be restored, and the depth image to be restored are respectively input into the Y-branch, D-branch and M-branch models of a preset depth image restoration model; the characteristic images recovered by the Y-branch and D-branch models are input into the M-branch model; and the depth image corresponding to the depth image to be restored is recovered through the M-branch model, thereby improving the compression recovery quality of the depth image and further improving the user experience.
Example two:
fig. 2 shows a structure of a depth image compression recovery apparatus according to a second embodiment of the present invention, and for convenience of description, only the portions related to the second embodiment of the present invention are shown, where the portions include:
the request receiving unit 21 is configured to receive an input request for restoring a depth image to be restored, where the depth image to be restored is associated with a corresponding texture image.
In the embodiment of the present invention, the request receiving unit 21 first receives a restoration request, input by a user, for a depth image to be restored, where the depth image to be restored is associated with a corresponding texture image, so as to assist in restoring the depth image to be restored.
Preferably, the depth image restoration model is first constructed before receiving an input restoration request of the depth image to be restored. Specifically, when constructing the depth image restoration model, a Y-branch model of the depth image restoration model is first constructed, where the constructed Y-branch model includes 2 convolutional layers:
F1^Y = max(0, W1^Y * Yh + B1^Y),  F2^Y = max(0, W2^Y * F1^Y + B2^Y)
then, a D-branch model of the depth image recovery model is constructed, wherein the constructed D-branch model comprises 2 convolutional layers:
F1^D = max(0, W1^D * Dh + B1^D),  F2^D = max(0, W2^D * F1^D + B2^D)
finally, an M-branch model of the depth image recovery model is constructed, where the constructed M-branch model contains 5 convolutional layers:
F1^M = max(0, W1^M * Dq + B1^M)
Fi^M = max(0, Wi^M * F(i-1)^M + Bi^M),  i = 2, 3, 4
F5^M = W5^M * F4^M + B5^M
where Yh is the high-frequency information of the texture image, Dh is the high-frequency information of the depth image to be restored, Dq is the depth image to be restored, "*" denotes the convolution operation, the W terms are filters, and the B terms are bias vectors.
Further preferably, after the step of constructing the depth image restoration model and before the step of receiving the input restoration request, an input training set is first received, the training set comprising uncompressed texture images, uncompressed depth images, and the corresponding compressed depth images. The uncompressed texture images and the compressed depth images in the training set are then preprocessed to obtain their high-frequency information. Next, the high-frequency information of the uncompressed texture images, the high-frequency information of the compressed depth images, and the compressed depth images themselves are input into the Y-branch, D-branch and M-branch models, respectively, of the pre-constructed depth image restoration model, and the three branch models are trained. A training loss function is calculated from the uncompressed depth images in the training set and the restored images obtained by training, and the filters and bias vectors of the depth image restoration model are updated. While the preset number of iterations has not been reached, the training and updating steps are repeated; once the preset number of iterations is reached, the trained model is set as the preset depth image restoration model, thereby improving the restoration accuracy for compressed depth images.
Preferably, the training loss function is calculated according to the formula

L(θ) = (1/N) Σ_{i=1..N} ||F(Y^i, Dq^i; θ) − D^i||²,

where N is the number of training samples in the training set, F(Y^i, Dq^i; θ) is the restored depth image produced by the model, Dq^i is the i-th compressed depth image, Y^i is the texture image corresponding to Dq^i, D^i is the corresponding uncompressed depth image, i indexes each training sample, and θ is the set of parameters to be optimized, comprising the filter and bias-vector parameters, so that the accuracy and speed of the training are improved.
And the image preprocessing unit 22 is configured to preprocess the texture image and the depth image to be restored, and acquire high-frequency information of the texture image and the depth image to be restored.
In the embodiment of the present invention, after the request for restoring the depth image to be restored, the image preprocessing unit 22 first obtains the corresponding texture image and the depth image to be restored, then preprocesses the texture image and the depth image to be restored, and extracts the high frequency information of the texture image and the depth image to be restored, so as to obtain the high frequency information of the texture image and the depth image to be restored.
Preferably, when the texture image and the depth image to be restored are preprocessed, the texture image is preprocessed according to the formula

Yh = abs(Y - h(Y))

to obtain the high-frequency information of the texture image, and the depth image to be restored is preprocessed according to the formula

Dh = abs(D - h(D))

to obtain the high-frequency information of the depth image to be restored, where the parameter Y is the texture image, the parameter D is the depth image to be restored, h(Y) denotes mean pooling of the texture image, h(D) denotes mean pooling of the depth image to be restored, and abs() is the absolute-value function. That is, a 9x9 mean pooling is first applied to the texture image or to the depth image to be restored; the pooled pixel values are then subtracted from the pixel values before pooling, and the absolute value of the difference yields the edge image of the corresponding image. Mean pooling blurs the image, removing its high-frequency component and leaving only the low-frequency information; subtracting the image containing only low-frequency information from the original image therefore retains the high-frequency information.
And the corresponding input unit 23 is configured to input the high-frequency information of the texture image, the high-frequency information of the depth image to be restored, and the depth image to be restored to a Y-branch model, a D-branch model, and an M-branch model of a preset depth image restoration model, respectively.
In the embodiment of the present invention, after obtaining the texture image and the high frequency information of the depth image to be restored, the corresponding input unit 23 respectively inputs the high frequency information of the texture image, the high frequency information of the depth image to be restored, and the depth image to be restored to the Y-branch model, the D-branch model, and the M-branch model of the preset depth image restoration model.
And the image recovery unit 24 is configured to input the feature images obtained by recovering the Y-branch model and the D-branch model into the M-branch model, and recover the depth image corresponding to the depth image to be recovered through the M-branch model.
In the embodiment of the present invention, the image restoration unit 24 inputs the characteristic images obtained by the Y-branch model and the D-branch model,

F2^Y = max(0, W2^Y * F1^Y + B2^Y)

and

F2^D = max(0, W2^D * F1^D + B2^D),

into the M-branch model, and the M-branch model, fusing the characteristic images of the three branches within its convolutional layers, restores the depth image corresponding to the depth image to be restored through its final layer

F5^M = W5^M * F4^M + B5^M.
Further, preferably, as shown in fig. 3, the apparatus further comprises:
a model construction unit 30 for constructing a depth image restoration model;
preferably, the model building unit 30 includes:
a first constructing unit 301 for constructing a Y-branch model of the depth image restoration model, the convolution layer of the Y-branch model being
Figure BDA0001338269260000101
A second constructing unit 302 for constructing a D-branch model of the depth image restoration model, the convolutional layers of the D-branch model being F1^D = max(0, W1^D * Dh + B1^D) and F2^D = max(0, W2^D * F1^D + B2^D);
A third constructing unit 303, configured to construct an M-branch model of the depth image restoration model, the convolutional layers of the M-branch model being F1^M = max(0, W1^M * Dq + B1^M), Fi^M = max(0, Wi^M * F(i-1)^M + Bi^M) for i = 2, 3, 4, and F5^M = W5^M * F4^M + B5^M,
where Yh is the high-frequency information of the texture image, Dh is the high-frequency information of the depth image to be restored, Dq is the depth image to be restored, "*" denotes the convolution operation, the W terms are filters, and the B terms are bias vectors;
preferably, the image preprocessing unit 22 includes:
a first processing unit 321 for processing the data according to the formula
Figure BDA0001338269260000108
Preprocessing a texture image to obtain high-frequency information of the texture image, wherein a parameter Y is the texture image, h (Y) represents that mean pooling processing is carried out on the texture image, and abs () is an absolute value taking function; and
a second processing unit 322 for preprocessing the depth image to be restored according to the formula Dh = abs(D - h(D)) to obtain the high-frequency information of the depth image to be restored, where the parameter D is the depth image to be restored and h(D) denotes mean pooling of the depth image to be restored.
In the embodiment of the invention, the request receiving unit receives an input restoration request for a depth image to be restored, the depth image to be restored being associated with a corresponding texture image; the image preprocessing unit preprocesses the texture image and the depth image to be restored to obtain their high-frequency information; the corresponding input unit inputs the high-frequency information of the texture image, the high-frequency information of the depth image to be restored, and the depth image to be restored into the Y-branch, D-branch and M-branch models, respectively, of a preset depth image restoration model; and the image restoration unit inputs the characteristic images recovered by the Y-branch and D-branch models into the M-branch model and recovers, through the M-branch model, the depth image corresponding to the depth image to be restored, thereby improving the compression recovery quality of the depth image and further improving the user experience.
In the embodiment of the present invention, each unit of the compression and recovery apparatus for depth images may be implemented by a corresponding hardware or software unit, and each unit may be an independent software or hardware unit, or may be integrated into a software or hardware unit, which is not limited herein.
Example three:
fig. 4 shows a structure of a depth image compression recovery apparatus according to a third embodiment of the present invention, and for convenience of description, only the portions related to the third embodiment of the present invention are shown.
The depth image compression restoration apparatus 4 of the embodiment of the present invention includes a processor 40, a memory 41, and a computer program 42 stored in the memory 41 and executable on the processor 40. When executing the computer program 42, the processor 40 implements the steps in the above-described embodiments of the depth image compression recovery method, such as steps S101 to S104 shown in fig. 1. Alternatively, when executing the computer program 42, the processor 40 implements the functions of the units in the above-described apparatus embodiments, such as the functions of the units 21 to 24 shown in fig. 2.
In the embodiment of the present invention, when the processor 40 executes the computer program 42 to implement the steps in the above-described embodiments of the compression recovery method for a depth image, an input restoration request for a depth image to be restored is received, the depth image to be restored being associated with a corresponding texture image; the texture image and the depth image to be restored are preprocessed to obtain their high-frequency information; the high-frequency information of the texture image, the high-frequency information of the depth image to be restored, and the depth image to be restored are input into the Y-branch model, the D-branch model, and the M-branch model of a preset depth image restoration model, respectively; the feature images obtained by the Y-branch model and the D-branch model are input into the M-branch model; and the depth image corresponding to the depth image to be restored is recovered through the M-branch model, thereby improving the compression recovery quality of the depth image.
The steps implemented by the processor 40 in the depth image compression and recovery apparatus 4 when executing the computer program 42 may specifically refer to the description of the method in the first embodiment, and are not described herein again.
Example four:
in an embodiment of the present invention, a computer-readable storage medium is provided, which stores a computer program that, when executed by a processor, implements the steps in the above-described compression recovery method embodiment of a depth image, for example, steps S101 to S104 shown in fig. 1. Alternatively, the computer program realizes the functions of the units in the above-described apparatus embodiments, such as the functions of the units 21 to 24 shown in fig. 2, when executed by the processor.
In the embodiment of the invention, an input recovery request for a depth image to be recovered is received, the depth image to be recovered being associated with a corresponding texture image; the texture image and the depth image to be recovered are preprocessed to obtain their high-frequency information; the high-frequency information of the texture image, the high-frequency information of the depth image to be recovered, and the depth image to be recovered are input into the Y-branch model, the D-branch model, and the M-branch model of a preset depth image recovery model, respectively; the feature images obtained by the Y-branch model and the D-branch model are input into the M-branch model; and the depth image corresponding to the depth image to be recovered is recovered through the M-branch model, so that the compression recovery quality of the depth image is improved. For the compression recovery method for a depth image implemented when the computer program is executed by the processor, reference may further be made to the description of the steps in the foregoing method embodiments, which will not be repeated herein.
The computer-readable storage medium of the embodiments of the present invention may include any entity or device capable of carrying computer program code, or a recording medium such as a ROM/RAM, a magnetic disk, an optical disk, or a flash memory.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (8)

1. A method for compression restoration of a depth image, the method comprising the steps of:
receiving an input restoration request of a depth image to be restored, wherein the depth image to be restored is associated with a corresponding texture image;
preprocessing the texture image and the depth image to be restored to obtain high-frequency information of the texture image and the depth image to be restored;
respectively inputting the high-frequency information of the texture image, the high-frequency information of the depth image to be restored and the depth image to be restored into a Y-branch model, a D-branch model and an M-branch model of a preset depth image restoration model;
inputting the characteristic images obtained by recovering the Y branch model and the D branch model into the M branch model, and recovering the depth image corresponding to the depth image to be recovered through the M branch model;
constructing the depth image recovery model;
the step of constructing the depth image restoration model includes:
constructing the Y-branch model of the depth image restoration model, the convolution layer of the Y-branch model being

[formula image FDA0002547816750000011]

constructing the D-branch model of the depth image restoration model, the convolution layer of the D-branch model being

[formula image FDA0002547816750000012]

constructing the M-branch model of the depth image restoration model, the convolution layer of the M-branch model being

[formula image FDA0002547816750000013] [formula image FDA0002547816750000014]

the above-mentioned [formula image FDA0002547816750000015], the above-mentioned [formula image FDA0002547816750000016]; the "*" denotes a convolution operation; the W_j^Y, W_j^D and [formula image FDA0002547816750000017] are filters; the parameter D_q is the depth image to be restored; and the [formula image FDA0002547816750000018] is a bias vector.
2. The method of claim 1, wherein after the step of constructing the depth image restoration model and before the step of receiving an input restoration request for the depth image to be restored, the method further comprises:
receiving an input training set, wherein the training set comprises an uncompressed texture image, an uncompressed depth image and a compressed depth image;
preprocessing the uncompressed texture image, the uncompressed depth image and the compressed depth image in the training set to obtain high-frequency information of the uncompressed texture image and the compressed depth image;
inputting the high-frequency information of the uncompressed texture image, the high-frequency information of the compressed depth image and the compressed depth image into the Y-branch model, the D-branch model and the M-branch model of the pre-constructed depth image recovery model, respectively, training the Y-branch model, the D-branch model and the M-branch model of the depth image recovery model, calculating a loss function of the training, and updating the filters and bias vectors of the depth image recovery model;
and when the preset iteration times are not reached, repeating the training and updating steps until the iteration times are reached, and setting the trained depth image recovery model as the preset depth image recovery model.
3. The method of claim 2, wherein the step of calculating the loss function of the training comprises:

using the formula

[formula image FDA0002547816750000021]

to calculate the loss function of the training, wherein N is the number of training targets in the training set, the [formula image FDA0002547816750000022] is the depth image obtained by restoration, the [formula image FDA0002547816750000023] is composed of the [formula image FDA0002547816750000024] and the corresponding texture image, i denotes each training pass, and θ is the parameter to be optimized, comprising the filters and the bias vectors.
4. The method of claim 1, wherein the step of preprocessing the texture image and the depth image to be restored to obtain high-frequency information of the texture image and the depth image to be restored comprises:
according to the formula

abs(Y - h(Y))

preprocessing the texture image to obtain the high-frequency information of the texture image, wherein the parameter Y is the texture image, the h(Y) denotes that mean pooling is applied to the texture image, and abs() is an absolute-value function;

according to the formula

abs(D_q - h(D_q))

preprocessing the depth image to be restored to obtain the high-frequency information of the depth image to be restored, wherein the h(D_q) denotes that mean pooling is applied to the depth image to be restored.
5. An apparatus for compression restoration of a depth image, the apparatus comprising:
the device comprises a request receiving unit, a processing unit and a processing unit, wherein the request receiving unit is used for receiving an input recovery request of a depth image to be recovered, and the depth image to be recovered is associated with a corresponding texture image;
the image preprocessing unit is used for preprocessing the texture image and the depth image to be restored to obtain high-frequency information of the texture image and the depth image to be restored;
the corresponding input unit is used for respectively inputting the high-frequency information of the texture image, the high-frequency information of the depth image to be restored and the depth image to be restored into a Y-branch model, a D-branch model and an M-branch model of a preset depth image restoration model; and
the image recovery unit is used for inputting the characteristic images obtained by recovering the Y branch model and the D branch model into the M branch model and recovering the depth image corresponding to the depth image to be recovered through the M branch model;
the device further comprises:
a model construction unit for constructing the depth image restoration model;
the model building unit includes:
a first constructing unit, configured to construct the Y-branch model of the depth image restoration model, the convolution layer of the Y-branch model being

[formula image FDA0002547816750000032]

a second constructing unit, configured to construct the D-branch model of the depth image restoration model, the convolution layer of the D-branch model being

[formula image FDA0002547816750000033]

and

a third constructing unit, configured to construct the M-branch model of the depth image restoration model, the convolution layer of the M-branch model being

[formula image FDA0002547816750000034] [formula image FDA0002547816750000035]

the above-mentioned [formula image FDA0002547816750000041], the above-mentioned [formula image FDA0002547816750000042]; the "*" denotes a convolution operation; the W_j^Y, W_j^D and [formula image FDA0002547816750000043] are filters; the parameter D_q is the depth image to be restored; and the [formula image FDA0002547816750000044] is a bias vector.
6. The apparatus of claim 5, wherein the image pre-processing unit comprises:
a first processing unit, configured to preprocess the texture image according to the formula

abs(Y - h(Y))

to obtain the high-frequency information of the texture image, wherein the parameter Y is the texture image, the h(Y) denotes that mean pooling is applied to the texture image, and abs() is an absolute-value function; and

a second processing unit, configured to preprocess the depth image to be restored according to the formula

abs(D_q - h(D_q))

to obtain the high-frequency information of the depth image to be restored, wherein the h(D_q) denotes that mean pooling is applied to the depth image to be restored.
7. An apparatus for compression restoration of a depth image, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method according to any one of claims 1 to 4 when executing the computer program.
8. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 4.
CN201710524546.0A 2017-06-30 2017-06-30 Compression recovery method, device and equipment for depth image and storage medium Active CN107301662B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710524546.0A CN107301662B (en) 2017-06-30 2017-06-30 Compression recovery method, device and equipment for depth image and storage medium


Publications (2)

Publication Number Publication Date
CN107301662A CN107301662A (en) 2017-10-27
CN107301662B true CN107301662B (en) 2020-09-08


Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108288255B (en) * 2018-01-26 2022-02-18 中国科学院广州生物医药与健康研究院 Phase recovery method, device and system
CN108564546B (en) * 2018-04-18 2020-08-04 厦门美图之家科技有限公司 Model training method and device and photographing terminal
CN109345449B (en) * 2018-07-17 2020-11-10 西安交通大学 Image super-resolution and non-uniform blur removing method based on fusion network
CN109410318B (en) * 2018-09-30 2020-09-08 先临三维科技股份有限公司 Three-dimensional model generation method, device, equipment and storage medium
CN112368738B (en) * 2020-05-18 2024-01-16 上海联影医疗科技股份有限公司 System and method for image optimization
CN112437311A (en) * 2020-11-23 2021-03-02 黄晓红 Video sequence compression coding method and device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103581687A (en) * 2013-09-11 2014-02-12 北京交通大学长三角研究院 Self-adaptive depth image coding method based on compressed sensing
EP2230855B1 (en) * 2009-03-17 2014-10-15 Mitsubishi Electric Corporation Synthesizing virtual images from texture and depth images

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20120003147A (en) * 2010-07-02 2012-01-10 삼성전자주식회사 Depth map coding and decoding apparatus using loop-filter
CN105447840B (en) * 2015-12-09 2019-01-29 西安电子科技大学 The image super-resolution method returned based on active sampling with Gaussian process
CN106791927A (en) * 2016-12-23 2017-05-31 福建帝视信息科技有限公司 A kind of video source modeling and transmission method based on deep learning
CN106709875B (en) * 2016-12-30 2020-02-18 北京工业大学 Compressed low-resolution image restoration method based on joint depth network
CN106683067B (en) * 2017-01-20 2020-06-23 福建帝视信息科技有限公司 Deep learning super-resolution reconstruction method based on residual sub-images




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant