CN113327209A - Depth image generation method and device, electronic equipment and storage medium - Google Patents

Depth image generation method and device, electronic equipment and storage medium

Info

Publication number
CN113327209A
Authority
CN
China
Prior art keywords
image
denoising
data
denoised
light spot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110728603.3A
Other languages
Chinese (zh)
Inventor
Li Hongwei (李宏伟)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202110728603.3A
Publication of CN113327209A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/001 Texturing; Colouring; Generation of texture or colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/20 Image enhancement or restoration using local operators
    • G06T 5/30 Erosion or dilatation, e.g. thinning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20021 Dividing image into blocks, subimages or windows
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The application relates to a depth image generation method and apparatus, an electronic device, and a computer-readable storage medium. The depth image generation method comprises the following steps: acquiring original depth information corresponding to a target object; generating an original depth image based on the original depth information, and denoising the original depth image to obtain a first denoised image; denoising the original depth information to obtain preprocessed depth information, and generating a second denoised image based on the preprocessed depth information; and fusing the first denoised image and the second denoised image to obtain a target depth image corresponding to the target object. With this method, a high-quality and accurate depth image can be generated.

Description

Depth image generation method and device, electronic equipment and storage medium
Technical Field
The present application relates to image processing technologies, and in particular, to a depth image generation method, apparatus, electronic device, and computer-readable storage medium.
Background
With the development of electronic device technology, the demand for depth images is increasing. Structured light modules have the advantages of low cost, good real-time performance, and convenience, and are widely used to obtain depth images. However, owing to device power consumption, precision limits, external factors, and the like, the depth image output by a structured light module may contain considerable noise, which affects the accuracy of the depth image.
Disclosure of Invention
The embodiment of the application provides a depth image generation method and device, electronic equipment and a computer readable storage medium, which can generate a high-quality and accurate depth image.
A depth image generation method, comprising:
acquiring original depth information corresponding to a target object; generating an original depth image based on the original depth information, and denoising the original depth image to obtain a first denoised image;
denoising the original depth information to obtain preprocessed depth information, and generating a second denoised image based on the preprocessed depth information;
and fusing the first denoised image and the second denoised image to obtain a target depth image corresponding to the target object.
A depth image generation apparatus comprising:
the acquisition module is used for acquiring original depth information corresponding to the target object; the first denoising module is used for generating an original depth image based on the original depth information and denoising the original depth image to obtain a first denoised image;
the second denoising module is used for denoising the original depth information to obtain preprocessed depth information and generating a second denoised image based on the preprocessed depth information;
and the fusion module is used for fusing the first denoised image and the second denoised image to obtain a target depth image corresponding to the target object.
An electronic device comprising a memory and a processor, the memory having a computer program stored thereon, wherein the computer program, when executed by the processor, causes the processor to perform the steps of the depth image generation method.
A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the above depth image generation method.
According to the depth image generation method and apparatus, the electronic device, and the computer-readable storage medium, two paths of denoising are applied to the original depth information corresponding to a target object. The first path generates an original depth image based on the original depth information and then denoises the original depth image to obtain a first denoised image. The second path denoises the original depth information to obtain preprocessed depth information and then generates a second denoised image based on the preprocessed depth information. Finally, the first denoised image and the second denoised image are fused to obtain a target depth image corresponding to the target object.
Drawings
To more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a flow diagram of a method for depth image generation in one embodiment;
FIG. 2 is a flow chart of a depth image generation method in another embodiment;
FIG. 3 is a flow diagram of the steps of a denoising process in one embodiment;
FIG. 4 is a schematic illustration of different levels of images in one embodiment;
FIG. 5 is a flow chart of a depth image generation method in yet another embodiment;
FIG. 6 is a block diagram showing the configuration of a depth image generating apparatus according to an embodiment;
FIG. 7 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It will be understood that, as used herein, the terms "first," "second," and the like may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, a first client may be referred to as a second client, and similarly, a second client may be referred to as a first client, without departing from the scope of the present application. Both the first client and the second client are clients, but they are not the same client.
In an embodiment, as shown in fig. 1, a depth image generating method is provided, and this embodiment is illustrated by applying the method to a terminal, and it is to be understood that the method may also be applied to a server, and may also be applied to a system including a terminal and a server, and is implemented by interaction between the terminal and the server. In this embodiment, the method includes the steps of:
Step 102: acquiring original depth information corresponding to a target object, wherein the target object is the photographic subject in the embodiments of the present application.
In one embodiment, the terminal is configured with a structured light module through which a depth image of a photographic subject is acquired. Optionally, the structured light module may include a signal transmitter, a signal receiver, and a depth image generator, wherein the signal transmitter is configured to transmit a light spot signal to the photographic subject, the signal receiver is configured to receive the light spot signal reflected by the photographic subject, and the depth image generator is configured to generate a depth image corresponding to the photographic subject based on the light spot signal reflected by the photographic subject.
In one embodiment, the original depth information may be the light spot signal reflected by the target object. In other embodiments, the original depth information may be a two-dimensional electrical signal obtained by photoelectric conversion of the light spot signal reflected by the target object, a preprocessed two-dimensional electrical signal obtained by preprocessing that two-dimensional electrical signal (for example based on speckle, laser stripe, Gray code, or sinusoidal stripe patterns), or intermediate image data or data in another form used for generating a depth image, obtained by further processing the preprocessed two-dimensional electrical signal.
In one embodiment, the terminal transmits a light spot signal to the target object, receives the light spot signal reflected by the target object, and determines the original depth information corresponding to the target object according to the light spot signal reflected by the target object.
Step 104: generating an original depth image based on the original depth information, and denoising the original depth image to obtain a first denoised image.
In the present application, referring to fig. 2, the terminal denoises the original depth information corresponding to the target object through two paths. The first path generates an original depth image based on the original depth information and then denoises the original depth image to obtain a first denoised image; the second path denoises the original depth information to obtain preprocessed depth information and then generates a second denoised image based on the preprocessed depth information. It is understood that the original depth information input to the first path and to the second path may be the same information or different information; for example, the original depth information input to the first path may be the light spot signal reflected by the target object, while the original depth information input to the second path may be a two-dimensional electrical signal obtained by photoelectric conversion of the light spot signal reflected by the target object, and so on.
In one embodiment, the terminal generates an original depth image based on the original depth information by a depth image generator in the structured light module. Optionally, the terminal performs template matching on the original depth information through a depth image generator in the structured light module, and obtains a depth image corresponding to the original depth information, that is, the original depth image.
In one embodiment, the terminal processes the original depth information to obtain the data required by the depth image generator and inputs the processed original depth information into the depth image generator; the processing may include filtering, encoding, data format conversion, and the like.
In one embodiment, the terminal denoises the original depth image according to a general denoising strategy to obtain the first denoised image. It can be understood that a general denoising strategy meets the denoising requirements of the embodiments of the present application, so a general strategy may be used to denoise the original depth image. In this embodiment, denoising the original depth image in the first path eliminates error information in the original depth image and achieves a good denoising effect.
Step 106: denoising the original depth information to obtain preprocessed depth information, and generating a second denoised image based on the preprocessed depth information.
In one embodiment, the terminal performs denoising processing on the original depth information according to a general denoising strategy to obtain the preprocessed depth information. It can be understood that the terminal may perform denoising processing on the original depth information by using a denoising strategy that is the same as or different from the denoising strategy in the first path.
In one embodiment, the terminal denoises the original depth image or the original depth information such that the data types before and after denoising remain consistent. For example, if the original depth information is a two-dimensional electrical signal, the preprocessed depth information obtained after denoising is also a two-dimensional electrical signal.
In one embodiment, the terminal generates the second denoised image based on the preprocessed depth information through the depth image generator in the structured light module. Optionally, the terminal performs template matching on the preprocessed depth information through the depth image generator to obtain the depth image corresponding to the preprocessed depth information, that is, the second denoised image. In one embodiment, the terminal processes the preprocessed depth information to obtain the data required by the depth image generator and inputs it into the depth image generator; the processing may include filtering, encoding, data format conversion, and the like.
In this embodiment, the original depth image is denoised in the first path; because the original depth image has already lost some of the original depth information, such as details, textures, and edges, the resulting first denoised image, although well denoised, may not be accurate enough. The original depth information is denoised in the second path; because no original depth information has been lost, the obtained preprocessed depth information is accurate, and generating the second denoised image based on it improves the accuracy of the depth image.
Step 108: fusing the first denoised image and the second denoised image to obtain a target depth image corresponding to the target object.
In one embodiment, the terminal performs fusion processing on the first denoised image and the second denoised image to obtain a target depth image corresponding to the target object.
In this embodiment, the first denoised image has a good denoising effect, and the second denoised image has high accuracy and image quality, so fusing the first denoised image and the second denoised image yields a high-quality, accurate depth image.
Compared with a traditional structured light module, the structured light module provided by the application adds processing steps such as denoising the original depth image, denoising the original depth information, generating a depth image based on the preprocessed depth information, and fusing the first denoised image and the second denoised image. Therefore, when the terminal captures a depth image through the structured light module provided by the application, an accurate, high-quality depth image can be obtained.
In the depth image generation method of this embodiment, the original depth information corresponding to the target object is denoised through two paths: the first path generates an original depth image based on the original depth information and then denoises it to obtain a first denoised image; the second path denoises the original depth information to obtain preprocessed depth information and then generates a second denoised image based on it; finally, the first denoised image and the second denoised image are fused to obtain the target depth image corresponding to the target object.
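The two-path flow described above can be summarized in code. The following Python sketch is illustrative only; the helper callables denoise, generate_depth_image, and fuse are assumptions standing in for the denoising strategy, the depth image generator of the structured light module, and the fusion step, and are not defined by this application:

    import numpy as np

    def generate_target_depth(raw_depth_info: np.ndarray,
                              denoise, generate_depth_image, fuse) -> np.ndarray:
        """Two-path depth image generation sketch."""
        # Path 1: generate a depth image first, then denoise it.
        original_depth_image = generate_depth_image(raw_depth_info)
        first_denoised = denoise(original_depth_image)
        # Path 2: denoise the raw depth information first, then generate.
        preprocessed_info = denoise(raw_depth_info)
        second_denoised = generate_depth_image(preprocessed_info)
        # Fuse the two denoised images into the target depth image.
        return fuse(first_denoised, second_denoised)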
In an embodiment, with the original depth image or the original depth information referred to as original depth data, and referring to fig. 3, the step of denoising the original depth data includes: performing spatial domain denoising on the original depth data to obtain spatial domain denoising data; performing time domain denoising on the spatial domain denoising data based on reference denoising data corresponding to the original depth data to obtain fusion denoising data; and performing texture fusion on the original depth data and the fusion denoising data to obtain target denoising data, where the target denoising data is the first denoised image or the preprocessed depth information.
In one embodiment, the terminal may apply the same denoising strategy to the original depth image and the original depth information. For convenience of description, the embodiments of the present application use uniform names for the input data (i.e., the original depth data), the intermediate data (i.e., the spatial domain denoising data, the reference denoising data, and the fusion denoising data), and the output data (i.e., the target denoising data) of the original depth image and of the original depth information during denoising, but in each case these are different data. The denoising procedures for the original depth image and for the original depth information are introduced separately below.
In one embodiment, for the original depth image, the terminal performs spatial domain denoising on the original depth image to obtain a spatial domain denoised image; performs time domain denoising on the spatial domain denoised image based on the reference denoised image corresponding to the light spot signal to obtain a fused denoised image; and performs texture fusion on the original depth image and the fused denoised image to obtain the first denoised image.
In this embodiment, for the original depth image, the input data, the intermediate data, and the output data during denoising may all be in a depth image format. In one embodiment, for the original depth information, the terminal performs spatial domain denoising on the original depth information to obtain spatial domain denoising information; performs time domain denoising on the spatial domain denoising information based on the reference denoising information corresponding to the light spot signal to obtain fusion denoising information; and performs texture fusion on the original depth information and the fusion denoising information to obtain the preprocessed depth information.
In one embodiment, for the original depth information, the input data and the output data during denoising may be in an image format or a non-image format, and the intermediate data may be in an image format. Optionally, if the original depth information is in a non-image format, it may first be converted into an image format and then denoised. It is to be understood that the image format in this embodiment differs from the depth image format; it may, for example, be a two-dimensional electrical signal image. In one embodiment, the terminal performs spatial domain denoising on the original depth data according to a general spatial domain denoising strategy to obtain the spatial domain denoising data. Spatial domain denoising processes the gray values of the pixels of a single frame. General spatial domain denoising strategies include median filtering, arithmetic mean filtering, Gaussian filtering, bilateral filtering, and the like.
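As an illustration of the general spatial domain denoising strategies listed above, the following Python sketch applies a median, Gaussian, or arithmetic mean filter to a single frame; the filter sizes are illustrative assumptions rather than values prescribed by this application:

    import numpy as np
    from scipy.ndimage import median_filter, gaussian_filter, uniform_filter

    def spatial_denoise(frame: np.ndarray, method: str = "median") -> np.ndarray:
        """Single-frame spatial domain denoising over pixel gray values."""
        if method == "median":
            return median_filter(frame, size=3)       # 3x3 median filter
        if method == "gaussian":
            return gaussian_filter(frame, sigma=1.0)  # Gaussian smoothing
        if method == "mean":
            return uniform_filter(frame, size=3)      # arithmetic mean (box) filter
        raise ValueError(f"unknown method: {method}")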
In one embodiment, the terminal continuously transmits more than one light spot signal to the target object, receives more than one light spot signal reflected by the target object, and samples the light spot signals reflected by the target object to obtain a preset number of light spot signals. For one of the sampled light spot signals, the reference denoising data corresponding to the light spot signal can be determined by the other sampled light spot signals. Therefore, the terminal carries out time domain denoising processing on the space domain denoising data based on the reference denoising data, and the denoising accuracy and the denoising effect can be improved.
In one embodiment, the terminal performs texture fusion processing on the original depth data and the fused denoising data, and restores the texture of the fused denoising data according to the texture of the original depth data, so that the denoised target denoising data can keep a clear texture.
In this embodiment, during denoising, spatial domain denoising is first performed on the original depth data, time domain denoising is then performed based on the reference denoising data corresponding to the original depth data, and texture fusion with the original depth data is finally performed. This improves the denoising accuracy and the denoising effect while allowing the denoised target denoising data to keep clear textures.
In one embodiment, the method further comprises: obtaining more than one light spot signal reflected by the target object; determining the original depth data corresponding to each of the more than one light spot signals; and forming a light spot signal sequence from the more than one light spot signals. For each light spot signal in the sequence, the reference denoising data corresponding to that light spot signal is determined from the fusion denoising data corresponding to the light spot signals located before it in the sequence, and is used as the reference denoising data corresponding to the original depth data determined from that light spot signal.
In one embodiment, the terminal may form the light spot signal sequence according to the reception times of the more than one light spot signals. For example, the terminal may sort the light spot signals in ascending or descending order of reception time to obtain the light spot signal sequence. In other embodiments, the terminal may order the light spot signals randomly to form the sequence.
In one embodiment, the terminal continuously transmits more than one light spot signal to the target object, receives more than one light spot signal reflected by the target object, and samples the light spot signals reflected by the target object to obtain a preset number of light spot signals. The preset number can be set according to practical application, such as 4.
In an embodiment, for each light spot signal in the light spot signal sequence, the terminal may determine reference denoising data corresponding to the light spot signal according to the fusion denoising data corresponding to the light spot signal located before the light spot signal in the light spot signal sequence.
In one embodiment, the terminal takes the fusion denoising data corresponding to the light spot signal located immediately before the current light spot signal in the sequence as the reference denoising data corresponding to the current light spot signal. In other embodiments, the terminal determines the reference denoising data from the fusion denoising data corresponding to all light spot signals located before the current one in the sequence, for example by fusing them and using the result as the reference denoising data; the fusion may take the per-pixel average, maximum, or minimum of the fusion denoising data.
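A minimal Python sketch of this reference selection, assuming the fusion denoising data of the earlier light spot signals are kept as a list of equally sized arrays:

    import numpy as np

    def reference_denoising_data(previous_fused: list,
                                 mode: str = "mean") -> np.ndarray:
        """Build reference denoising data from earlier fusion denoising data."""
        if mode == "last":
            return previous_fused[-1]          # immediately preceding signal only
        stack = np.stack(previous_fused)       # shape: (n_previous, H, W)
        if mode == "mean":
            return stack.mean(axis=0)          # per-pixel average
        if mode == "max":
            return stack.max(axis=0)           # per-pixel maximum
        return stack.min(axis=0)               # per-pixel minimum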
In one embodiment, the method further comprises: for each light spot signal in the light spot signal sequence, when the light spot signal is the first in the sequence, after obtaining the spatial domain denoising data corresponding to it, performing texture fusion on the original depth data and that spatial domain denoising data to obtain the target denoising data.
In one embodiment, if the light spot signal is a first light spot signal in the light spot signal sequence, the terminal may directly perform texture fusion processing on the original depth data and the spatial domain denoising data corresponding to the light spot signal, and add the spatial domain denoising data or the target denoising data corresponding to the light spot signal into the reference data set to provide reference denoising data for other light spot signals in the light spot signal sequence.
In one embodiment, the terminal may output the target denoising data corresponding to any one of the light spot signals in the light spot signal sequence as a first denoising image or preprocessing depth information, for example, output the target denoising data corresponding to the last light spot signal in the light spot signal sequence as the first denoising image or preprocessing depth information.
In this embodiment, the more than one light spot signals continuously reflected by the target object form a light spot signal sequence, which provides reference denoising data during denoising and thereby improves the denoising accuracy and the denoising effect.
In one embodiment, the spatial domain denoising data is a spatial domain denoised image, the reference denoising data is a reference denoised image, and the fusion denoising data is a fused denoised image. Performing time domain denoising on the spatial domain denoising data based on the reference denoising data corresponding to the original depth data to obtain the fusion denoising data comprises: constructing a target difference image based on the difference between the reference denoised image and the spatial domain denoised image; determining a fusion denoising weight corresponding to the target difference image; and performing time domain denoising on the reference denoised image and the spatial domain denoised image using the fusion denoising weight to obtain the fused denoised image.
The target difference image reflects the difference between the reference denoised image and the spatial domain denoised image: the larger the difference between the two, the larger the mean pixel value of the target difference image.
For the original depth image and the original depth information, for convenience of description, the embodiments of the present application use uniform names for the intermediate data (i.e., the spatial domain denoised image, the reference denoised image, and the fused denoised image) during denoising, but in the two cases these are images of different formats. For example, for the original depth image, the spatial domain denoised image during denoising may be in a depth image format, while for the original depth information it may be in a two-dimensional electrical signal image format.
In one embodiment, the terminal constructs the target difference image based on the pixel value differences between the reference denoised image and the spatial domain denoised image at each pixel position. For example, the terminal obtains the pixel value difference at each pixel position and takes the absolute value of the difference as the pixel value of the target difference image at that position. In one embodiment, the terminal determines the fusion denoising weight corresponding to the target difference image according to a relationship between the target difference image and the fusion denoising weight, for example according to a relationship between the mean pixel value of the target difference image and the fusion denoising weight. Optionally, the mean pixel value of the target difference image and the fusion denoising weight may be inversely proportional.
In one embodiment, the terminal determines the fusion denoising weight according to the magnitude relationship between the mean pixel value of the target difference image and a preset threshold. In other embodiments, the terminal selects a solving expression for the fusion denoising weight according to that magnitude relationship and calculates the weight from the selected expression; or the terminal takes the mean pixel value of the target difference image and the preset threshold as arguments and calculates the fusion denoising weight according to a preset functional relationship, and so on.
In one embodiment, the terminal performs time domain denoising on the reference denoised image and the spatial domain denoised image using the fusion denoising weight to obtain the fused denoised image, which can be calculated according to the following formula:

fnr_frm(i,j) = bdr_wgt × ref_frm(i,j) + (1 − bdr_wgt) × snr_frm(i,j)

wherein fnr_frm(i,j) represents the pixel value of the fused denoised image at pixel position (i, j); snr_frm(i,j) represents the pixel value of the spatial domain denoised image at pixel position (i, j); ref_frm(i,j) represents the pixel value of the reference denoised image at pixel position (i, j); and bdr_wgt denotes the fusion denoising weight.
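A Python sketch of this temporal fusion step; the mapping from the mean of the target difference image to the fusion denoising weight is an assumed example, since the application only states that the two are inversely related and that a preset threshold is involved:

    import numpy as np

    def temporal_fuse(snr_frm: np.ndarray, ref_frm: np.ndarray,
                      diff_threshold: float = 8.0) -> np.ndarray:
        """Blend the spatial domain denoised frame with the reference frame."""
        # Target difference image: absolute per-pixel difference.
        diff = np.abs(ref_frm.astype(np.float64) - snr_frm.astype(np.float64))
        mean_diff = diff.mean()
        # Fusion denoising weight, decreasing as the mean difference grows
        # relative to the preset threshold (assumed mapping).
        bdr_wgt = diff_threshold / (diff_threshold + mean_diff)
        # fnr_frm = bdr_wgt * ref_frm + (1 - bdr_wgt) * snr_frm
        return bdr_wgt * ref_frm + (1.0 - bdr_wgt) * snr_frm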
In the embodiment, time domain denoising processing is performed on the reference denoising data and the space domain denoising data, so that the denoising accuracy and the denoising effect are improved.
In one embodiment, constructing the target difference image based on the difference between the reference denoised image and the spatial domain denoised image comprises: acquiring reference denoised images of different levels, where the reference denoised image is evenly divided into more than one image block and the number of image blocks differs between levels; acquiring spatial domain denoised images of different levels, where the spatial domain denoised image is evenly divided into more than one image block and the number of image blocks differs between levels; obtaining a difference image of the corresponding level based on the difference between the reference denoised image and the spatial domain denoised image of the same level; and generating the target difference image from the difference images of the different levels.
In one embodiment, the terminal evenly divides the reference denoised image into more than one image block and obtains reference denoised images of different levels according to the different image block sizes; in the same way, it evenly divides the spatial domain denoised image into more than one image block and obtains spatial domain denoised images of different levels according to the different image block sizes; it then obtains a difference image of the corresponding level based on the difference between the reference denoised image and the spatial domain denoised image of the same level, and generates the target difference image from the difference images of the different levels.
For example, referring to fig. 4, it can be seen that, according to different sizes of image blocks, different levels of reference denoised images and spatial domain denoised images are obtained, and corresponding difference images are respectively determined for the reference denoised images and the spatial domain denoised images of each level.
In one embodiment, the terminal obtains the pixel value difference between the reference denoised image and the spatial domain denoised image of the same level at each pixel position, and takes the absolute value of the difference as the pixel value of the difference image of the corresponding level at that position. In one embodiment, the terminal generates the target difference image from the difference images of the different levels, for example by taking the per-pixel average, maximum, or minimum of the difference images.
In one embodiment, after obtaining the target difference image, the terminal may perform filtering processing on the target difference image to strengthen and smooth the boundary in the target difference image. Optionally, the terminal performs smoothing filtering on the target difference image, and then performs expansion processing based on the smoothing filtering result to obtain a final target difference image.
In one embodiment, the reference denoised image is evenly divided into more than one image block, and the mean pixel value within each image block is used as the pixel value of every pixel position in that block; the spatial domain denoised image is likewise evenly divided into more than one image block, with the mean pixel value within each block used as the pixel value of every pixel position in that block. Obtaining a difference image of the corresponding level based on the difference between the reference denoised image and the spatial domain denoised image of the same level comprises: obtaining the difference image of the corresponding level based on the pixel value differences between the reference denoised image and the spatial domain denoised image of the same level at each pixel position.
In one embodiment, the terminal obtains reference denoised images and spatial domain denoised images of different levels according to the different image block sizes; for a reference denoised image or spatial domain denoised image of a given level, using the mean pixel value within each image block as the pixel value of every pixel position in that block avoids erroneous denoising and improves the denoising accuracy.
In this embodiment, reference denoised images and spatial domain denoised images of different levels are obtained according to the different image block sizes; for each level, the mean pixel value within each image block is used as the pixel value of every pixel position in that block, and the difference images determined from the different levels are fused into the target difference image, which improves the denoising accuracy.
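A Python sketch of the multi-level difference construction, assuming illustrative block sizes and image dimensions divisible by each block size:

    import numpy as np

    def block_mean(img: np.ndarray, block: int) -> np.ndarray:
        """Replace every pixel in each block x block tile with the tile mean."""
        h, w = img.shape
        tiles = img.reshape(h // block, block, w // block, block)
        means = tiles.mean(axis=(1, 3))                     # per-tile means
        return np.repeat(np.repeat(means, block, axis=0), block, axis=1)

    def target_difference_image(ref: np.ndarray, snr: np.ndarray,
                                blocks=(1, 2, 4, 8)) -> np.ndarray:
        """Combine per-level difference images into a target difference image."""
        diffs = [np.abs(block_mean(ref, b) - block_mean(snr, b)) for b in blocks]
        # Combine the levels per pixel; the mean is one of the stated options.
        return np.mean(diffs, axis=0)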
In one embodiment, constructing the target difference image based on the difference between the reference denoised image and the spatial domain denoised image comprises: when there is more than one reference denoised image corresponding to the light spot signal, constructing a difference image from each reference denoised image and the spatial domain denoised image; and generating the target difference image from the more than one difference images.
In one embodiment, the terminal may form a light spot signal sequence according to the reception times of a preset number of light spot signals and, for each light spot signal in the sequence, use the fusion denoising data corresponding to the light spot signals located before it as the reference denoising data corresponding to that light spot signal. When there is more than one reference denoised image corresponding to the light spot signal, the terminal constructs a difference image from each reference denoised image and the spatial domain denoised image, and generates the target difference image from the more than one difference images.
In one embodiment, the target difference image may be calculated by the following formula:

diff(i,j) = max_n diff_n(i,j)

wherein diff(i,j) represents the pixel value of the target difference image at pixel position (i, j), and diff_n(i,j) represents the pixel value of the n-th difference image at pixel position (i, j).
In one embodiment, the target difference image may alternatively be calculated by the following formula:

diff(i,j) = (1/N) Σ_{k=1}^{N} diff_k(i,j)

wherein diff(i,j) represents the pixel value of the target difference image at pixel position (i, j); diff_k(i,j) represents the pixel value of the k-th difference image at pixel position (i, j); and N represents the number of difference images.
In this embodiment, when there is more than one reference denoised image corresponding to the light spot signal, the terminal constructs a difference image from each reference denoised image and the spatial domain denoised image and generates the target difference image from the more than one difference images, which improves the denoising accuracy.
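A short Python sketch of this multi-reference case, following the per-pixel maximum and average combinations given above:

    import numpy as np

    def multi_reference_difference(refs: list, snr: np.ndarray,
                                   mode: str = "mean") -> np.ndarray:
        """Build the target difference image from several reference images."""
        diffs = np.stack([np.abs(r - snr) for r in refs])  # one difference per reference
        return diffs.max(axis=0) if mode == "max" else diffs.mean(axis=0)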
In one embodiment, performing texture fusion processing on the original depth data and the fused denoising data to obtain target denoising data includes: acquiring a texture value corresponding to the original depth data; determining texture fusion weight corresponding to the texture value; and performing texture fusion processing on the original depth data and the fusion denoising data by using the texture fusion weight to obtain target denoising data.
In one embodiment, the terminal may detect the original depth data based on a general edge detection operator to obtain a texture value. For example, the terminal may detect the original depth data based on a horizontal Sobel operator and a vertical Sobel operator, generate a gradient value, and use the gradient value as a texture value corresponding to the original depth data.
In one embodiment, the texture value corresponding to the original depth data and the texture fusion weight may be proportional. Optionally, the terminal determines the texture fusion weight corresponding to the texture value according to a preset mapping relationship between texture values and texture fusion weights. In one embodiment, the terminal performs texture fusion on the original depth data and the fusion denoising data using the texture fusion weight to obtain the target denoising data, which is calculated by the following formula:
out_frm = k × curr_frm + (1 − k) × tnr_frm

In this embodiment, out_frm is the target denoising data; k is the texture fusion weight, which is determined from the texture value grad_val; curr_frm is the original depth data; and tnr_frm is the fusion denoising data.
In the embodiment, the texture of the fused denoising data is restored according to the texture of the original depth data, so that the target denoising data after denoising processing can keep clear texture.
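A Python sketch of the texture fusion step, assuming Sobel gradients as the texture values and a simple clipped proportional mapping from texture value to texture fusion weight (the exact mapping is not specified by this application):

    import numpy as np
    from scipy.ndimage import sobel

    def texture_fuse(curr_frm: np.ndarray, tnr_frm: np.ndarray,
                     grad_scale: float = 64.0) -> np.ndarray:
        """Restore texture from the original depth data into the denoised data."""
        f = curr_frm.astype(np.float64)
        # Texture value: gradient magnitude from horizontal/vertical Sobel operators.
        grad_val = np.hypot(sobel(f, axis=1), sobel(f, axis=0))
        # Texture fusion weight, proportional to the texture value, clipped to [0, 1].
        k = np.clip(grad_val / grad_scale, 0.0, 1.0)
        # out_frm = k * curr_frm + (1 - k) * tnr_frm
        return k * curr_frm + (1.0 - k) * tnr_frm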
In one embodiment, fusing the first denoised image and the second denoised image to obtain the target depth image corresponding to the target object includes: acquiring the pixel values and gradients of the first denoised image and the second denoised image at each pixel position; and determining the pixel value of the target depth image at each pixel position according to those pixel values and gradients.
In one embodiment, the terminal detects the pixel values and gradients of the first denoised image and the second denoised image at each pixel position, and determines the pixel values of the target depth image at each pixel position according to the pixel values and gradients of the first denoised image and the second denoised image at each pixel position.
In one embodiment, the target depth image is calculated by the following formula:
Val(i,j) = (grad_a(i,j) × val_a(i,j) + grad_b(i,j) × val_b(i,j)) / (grad_a(i,j) + grad_b(i,j))

wherein Val(i,j) represents the pixel value of the target depth image at pixel position (i, j); val_a(i,j) and grad_a(i,j) represent the pixel value and gradient of the first denoised image at pixel position (i, j); and val_b(i,j) and grad_b(i,j) represent the pixel value and gradient of the second denoised image at pixel position (i, j).
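A Python sketch of this final fusion under the gradient-weighted form shown above; the use of Sobel operators for the gradients is an assumption:

    import numpy as np
    from scipy.ndimage import sobel

    def fuse_denoised_images(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
        """Fuse the first and second denoised images per pixel by gradient weight."""
        def grad(img):
            f = img.astype(np.float64)
            return np.hypot(sobel(f, axis=1), sobel(f, axis=0))
        grad_a, grad_b = grad(img_a), grad(img_b)
        eps = 1e-6  # avoid division by zero in perfectly flat regions
        return (grad_a * img_a + grad_b * img_b) / (grad_a + grad_b + eps)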
In this embodiment, the first denoised image has a good denoising effect and the second denoised image has high accuracy and image quality; fusing the two yields a high-quality, accurate depth image.
In one embodiment, the terminal denoises the original depth information corresponding to the target object through two paths: the first path generates an original depth image based on the original depth information and then denoises it to obtain a first denoised image; the second path denoises the original depth information to obtain preprocessed depth information and then generates a second denoised image based on it; finally, the first denoised image and the second denoised image are fused to obtain the target depth image corresponding to the target object. In this embodiment, the denoising strength of the first path is smaller than that of the second path.
In one embodiment, the terminal determines original depth information based on a light spot signal reflected by a target object, performs denoising processing on the original depth information to obtain preprocessed depth information, generates a second denoised image based on the preprocessed depth information, and takes the second denoised image as a target depth image corresponding to the target object.
In an embodiment, as shown in fig. 5, a depth image generating method is provided, and this embodiment is illustrated by applying the method to a terminal, and it is to be understood that the method may also be applied to a server, and may also be applied to a system including a terminal and a server, and is implemented by interaction between the terminal and the server. In this embodiment, the method includes the steps of:
Step 502: obtaining original depth information corresponding to the target object, where the original depth information is determined by the light spot signal reflected by the target object.
Step 504: generating an original depth image based on the original depth information; performing spatial domain denoising on the original depth image to obtain a first spatial domain denoised image; acquiring first reference denoised images of different levels, where the first reference denoised image is evenly divided into more than one image block, the number of image blocks differs between levels, and the mean pixel value within each image block is used as the pixel value of every pixel position in that block; acquiring first spatial domain denoised images of different levels, where the first spatial domain denoised image is evenly divided into more than one image block, the number of image blocks differs between levels, and the mean pixel value within each image block is used as the pixel value of every pixel position in that block; obtaining a difference image of the corresponding level based on the pixel value differences between the first reference denoised image and the first spatial domain denoised image of the same level at each pixel position, and generating a first target difference image from the difference images of the different levels; determining a fusion denoising weight corresponding to the first target difference image; performing time domain denoising on the first reference denoised image and the first spatial domain denoised image using the fusion denoising weight to obtain a first fused denoised image; and acquiring the texture value corresponding to the original depth image, determining the texture fusion weight corresponding to the texture value, and performing texture fusion on the original depth image and the first fused denoised image using the texture fusion weight to obtain the first denoised image.
Step 506: performing spatial domain denoising on the original depth information to obtain a second spatial domain denoised image; acquiring second reference denoised images of different levels, where the second reference denoised image is evenly divided into more than one image block, the number of image blocks differs between levels, and the mean pixel value within each image block is used as the pixel value of every pixel position in that block; acquiring second spatial domain denoised images of different levels, where the second spatial domain denoised image is evenly divided into more than one image block, the number of image blocks differs between levels, and the mean pixel value within each image block is used as the pixel value of every pixel position in that block; obtaining a difference image of the corresponding level based on the pixel value differences between the second reference denoised image and the second spatial domain denoised image of the same level at each pixel position, and generating a second target difference image from the difference images of the different levels; determining a fusion denoising weight corresponding to the second target difference image; performing time domain denoising on the second reference denoised image and the second spatial domain denoised image using the fusion denoising weight to obtain a second fused denoised image; acquiring the texture value corresponding to the original depth information, determining the texture fusion weight corresponding to the texture value, and performing texture fusion on the original depth information and the second fused denoised image using the texture fusion weight to obtain the preprocessed depth information; and generating a second denoised image based on the preprocessed depth information.
It is understood that the intermediate data (i.e., the spatial domain denoised images, reference denoised images, target difference images, fused denoised images, etc.) of the original depth image and of the original depth information during denoising in steps 504 and 506 are different data. For differentiation, the intermediate data in step 504 are named the first spatial domain denoised image, first reference denoised image, first target difference image, and first fused denoised image, and the intermediate data in step 506 are named the second spatial domain denoised image, second reference denoised image, second target difference image, and second fused denoised image.
Step 508: acquiring the pixel values and gradients of the first denoised image and the second denoised image at each pixel position, and determining the pixel value of the target depth image at each pixel position according to those pixel values and gradients.
In one embodiment, the target depth image is calculated by the following formula:
Val(i,j) = (grad_a(i,j) × val_a(i,j) + grad_b(i,j) × val_b(i,j)) / (grad_a(i,j) + grad_b(i,j))

wherein Val(i,j) represents the pixel value of the target depth image at pixel position (i, j); val_a(i,j) and grad_a(i,j) represent the pixel value and gradient of the first denoised image at pixel position (i, j); and val_b(i,j) and grad_b(i,j) represent the pixel value and gradient of the second denoised image at pixel position (i, j).
In the depth image generation method of this embodiment, the original depth information corresponding to the target object is denoised through two paths: the first path generates an original depth image based on the original depth information and then denoises it to obtain a first denoised image; the second path denoises the original depth information to obtain preprocessed depth information and then generates a second denoised image based on it; finally, the first denoised image and the second denoised image are fused to obtain the target depth image corresponding to the target object.
It should be understood that, although the steps in the flowcharts of fig. 2 and 5 are shown in sequence as indicated by the arrows, the steps are not necessarily performed in sequence as indicated by the arrows. The steps are not performed in the exact order shown and described, and may be performed in other orders, unless explicitly stated otherwise. Moreover, at least some of the steps in fig. 2 and 5 may include multiple sub-steps or multiple stages, which are not necessarily performed at the same time, but may be performed at different times, and the order of performing the sub-steps or stages is not necessarily sequential, but may be performed alternately or alternately with other steps or at least some of the sub-steps or stages of other steps.
Fig. 6 is a block diagram of a depth image generation apparatus according to an embodiment. As shown in fig. 6, a depth image generation apparatus is provided, which may be part of a computer device implemented as a software module, a hardware module, or a combination of the two, and specifically includes: an obtaining module 602, a first denoising module 604, a second denoising module 606, and a fusion module 608, wherein:
an obtaining module 602, configured to obtain original depth information corresponding to a target object; the first denoising module 604 is configured to generate an original depth image based on the original depth information, and perform denoising processing on the original depth image to obtain a first denoised image;
a second denoising module 606, configured to perform denoising processing on the original depth information to obtain preprocessed depth information, and generate a second denoised image based on the preprocessed depth information;
and a fusion module 608, configured to fuse the first denoised image and the second denoised image to obtain a target depth image corresponding to the target object.
In one embodiment, with the original depth image or the original depth information as the original depth data, the first denoising module 604 or the second denoising module 606 is further configured to: perform spatial domain denoising on the original depth data to obtain spatial domain denoising data; perform time domain denoising on the spatial domain denoising data based on the reference denoising data corresponding to the original depth data to obtain fusion denoising data; and perform texture fusion on the original depth data and the fusion denoising data to obtain target denoising data, where the target denoising data is the first denoised image or the preprocessed depth information.
In one embodiment, the depth image generation apparatus further includes a sorting module and a determining module. The obtaining module 602 is further configured to obtain more than one light spot signal reflected by the target object. The determining module is configured to determine the original depth data corresponding to each of the more than one light spot signals. The sorting module is configured to form a light spot signal sequence from the more than one light spot signals. The determining module is further configured to, for each light spot signal in the light spot signal sequence, determine the reference denoising data corresponding to that light spot signal according to the fusion denoising data corresponding to the light spot signals located before it in the sequence, and take this reference denoising data as the reference denoising data corresponding to the original depth data determined from that light spot signal.
In one embodiment, the first denoising module 604 or the second denoising module 606 is further configured to: for each light spot signal in the light spot signal sequence, when the light spot signal is the first light spot signal in the sequence, after the spatial domain denoised data corresponding to the first light spot signal is obtained, perform texture fusion processing directly on the original depth data and the spatial domain denoised data corresponding to the first light spot signal to obtain the target denoising data; since no preceding light spot signal exists, no reference denoising data is available and the temporal denoising step is skipped.
In one embodiment, the spatial domain denoised data is a spatial domain denoised image, the reference denoising data is a reference denoised image, and the fused denoising data is a fused denoised image; the first denoising module 604 or the second denoising module 606 is further configured to: construct a target difference image based on the difference between the reference denoised image and the spatial domain denoised image; determine a fusion denoising weight corresponding to the target difference image; and perform temporal denoising processing on the reference denoised image and the spatial domain denoised image using the fusion denoising weight to obtain the fused denoised image.
In one embodiment, the first denoising module 604 or the second denoising module 606 is further configured to: when there is more than one reference denoised image corresponding to the light spot signal, construct a difference image based on the difference between each reference denoised image and the spatial domain denoised image, and generate the target difference image from the resulting difference images.
In one embodiment, the first denoising module 604 or the second denoising module 606 is further configured to: acquire reference denoised images at different levels, where each reference denoised image is evenly divided into more than one image block and the number of image blocks differs across levels; acquire spatial domain denoised images at different levels, divided in the same way; obtain a difference image at each level based on the difference between the reference denoised image and the spatial domain denoised image of the same level; and generate the target difference image from the difference images at the different levels.
In one embodiment, the reference denoised image is evenly divided into more than one image block, and the mean of the pixel values within each image block is used as the pixel value of every pixel position in that block; the spatial domain denoised image is divided and averaged in the same way. The first denoising module 604 or the second denoising module 606 is further configured to obtain the difference image of the corresponding level based on the pixel-value difference, at each pixel position, between the reference denoised image and the spatial domain denoised image of the same level.
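A minimal sketch of this multi-level, block-averaged difference construction follows; the level counts and the simple averaging of the per-level difference images are assumptions (the disclosure only states that the target difference image is generated from the per-level differences):

    import numpy as np

    def block_mean(img, blocks_per_side):
        """Replace each block's pixels with the block mean (sketch).

        Assumes the image dimensions divide evenly by blocks_per_side.
        """
        h, w = img.shape
        bh, bw = h // blocks_per_side, w // blocks_per_side
        out = img.astype(np.float64)
        for by in range(blocks_per_side):
            for bx in range(blocks_per_side):
                block = out[by * bh:(by + 1) * bh, bx * bw:(bx + 1) * bw]
                block[:] = block.mean()
        return out

    def target_difference_image(ref, snr, levels=(1, 2, 4)):
        """Combine per-level difference images (averaging is an assumption)."""
        diffs = [np.abs(block_mean(ref, n) - block_mean(snr, n)) for n in levels]
        return np.mean(diffs, axis=0)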
In one embodiment, the first denoising module 604 or the second denoising module 606 is further configured to calculate a fused denoised image by the following formula:
fnr_frm(i, j) = bdr_wgt × snr_frm(i, j) + (1 − bdr_wgt) × ref_frm(i, j)
wherein fnr_frm(i, j) represents the pixel value of the fused denoised image at pixel position (i, j); snr_frm(i, j) represents the pixel value of the spatial domain denoised image at pixel position (i, j); ref_frm(i, j) represents the pixel value of the reference denoised image at pixel position (i, j); and bdr_wgt represents the fusion denoising weight.
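A per-pixel sketch of this blend follows, assuming the convex-combination reading of the formula above; the mapping from the target difference image to the weight is likewise an assumption, since the disclosure only states that the weight corresponds to the difference image:

    import numpy as np

    def fusion_weight(diff, scale=16.0):
        """Map the target difference image to a per-pixel fusion weight (sketch).

        A larger difference (likely motion or an edge) pushes the weight
        toward the spatial domain result; the linear mapping is assumed.
        """
        return np.clip(diff / scale, 0.0, 1.0)

    def temporal_fuse(snr_frm, ref_frm, bdr_wgt):
        """fnr_frm = bdr_wgt * snr_frm + (1 - bdr_wgt) * ref_frm."""
        return bdr_wgt * snr_frm + (1.0 - bdr_wgt) * ref_frm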
In one embodiment, the first denoising module 604 or the second denoising module 606 is further configured to: acquire a texture value corresponding to the original depth data; determine a texture fusion weight corresponding to the texture value; and perform texture fusion processing on the original depth data and the fused denoising data using the texture fusion weight to obtain the target denoising data.
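One way to read this step (a sketch only; the gradient-magnitude texture measure, the normalization, and the linear blend are assumptions, as the disclosure does not fix how the texture value is computed):

    import numpy as np

    def texture_weight(raw, eps=1e-8):
        """Derive a texture fusion weight from the original depth data (sketch)."""
        gy, gx = np.gradient(raw.astype(np.float64))
        texture = np.hypot(gx, gy)          # assumed texture value
        return texture / (texture.max() + eps)

    def texture_fuse(raw, fused, txt_wgt):
        """Blend original and fused denoising data per pixel (sketch)."""
        return txt_wgt * raw + (1.0 - txt_wgt) * fused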
In one embodiment, the fusion module 608 is further configured to: acquiring pixel values and gradients of the first denoised image and the second denoised image at each pixel position; and determining the pixel value of the target depth image at each pixel position according to the pixel value and the gradient of the first denoised image and the second denoised image at each pixel position.
In one embodiment, the fusion module 608 is further configured to calculate the target depth image by the following formula:
Val(i, j) = (grad_a(i, j) × val_a(i, j) + grad_b(i, j) × val_b(i, j)) / (grad_a(i, j) + grad_b(i, j))
wherein Val(i, j) represents the pixel value of the target depth image at pixel position (i, j); val_a(i, j) represents the pixel value of the first denoised image at pixel position (i, j), and grad_a(i, j) represents the gradient of the first denoised image at pixel position (i, j); val_b(i, j) represents the pixel value of the second denoised image at pixel position (i, j), and grad_b(i, j) represents the gradient of the second denoised image at pixel position (i, j).
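A per-pixel sketch of this gradient-weighted fusion follows, assuming the weighted-average reading of the formula above (the published equation figure is not reproduced in this text):

    def fuse_depth(val_a, grad_a, val_b, grad_b, eps=1e-8):
        """Gradient-weighted fusion of the two denoised images (sketch).

        Pixels where one path has the stronger gradient (sharper detail)
        draw the result toward that path's value.
        """
        return (grad_a * val_a + grad_b * val_b) / (grad_a + grad_b + eps)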
For specific limitations of the depth image generation apparatus, reference may be made to the limitations of the depth image generation method above, which are not repeated here. Each module in the depth image generation apparatus may be implemented wholly or partly in software, in hardware, or in a combination of the two. Each module may be embedded in, or independent of, a processor of the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to each module.
The depth image generation apparatus in this embodiment performs two-path denoising on the original depth information corresponding to the target object. In the first path, an original depth image is generated from the original depth information and then denoised to obtain the first denoised image. In the second path, the original depth information is denoised first to obtain preprocessed depth information, from which the second denoised image is generated. Finally, the first denoised image and the second denoised image are fused to obtain the target depth image corresponding to the target object, so that denoising is applied both before and after image generation and the two results complement each other.
The division of the modules in the depth image generation apparatus above is merely illustrative; in other embodiments, the apparatus may be divided into different modules as needed to implement all or part of its functions.
Fig. 7 is a schematic diagram of the internal structure of an electronic device in one embodiment. As shown in fig. 7, the electronic device includes a processor and a memory connected by a system bus. The processor provides computing and control capabilities and supports the operation of the entire electronic device. The memory may include a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The computer program is executable by the processor to implement the depth image generation method provided in the embodiments above. The internal memory provides a cached execution environment for the operating system and the computer program in the non-volatile storage medium. The electronic device may be any terminal device, such as a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a Point of Sale (POS) terminal, a vehicle-mounted computer, or a wearable device.
Each module in the depth image generation apparatus provided in the embodiments of the present application may be implemented in the form of a computer program. The computer program may run on a terminal or a server. The program modules constituted by such a computer program may be stored in the memory of the electronic device. When the computer program is executed by a processor, the steps of the method described in the embodiments of the present application are performed.
The embodiments of the present application also provide a computer-readable storage medium: one or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of the depth image generation method.
The embodiments of the present application also provide a computer program product containing instructions that, when run on a computer, cause the computer to perform the depth image generation method.
Any reference to memory, storage, a database, or another medium used herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link (Synchlink) DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The above-described embodiments express only several implementations of the present application, and their description is specific and detailed, but they should not be construed as limiting the scope of the application. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (15)

1. A depth image generation method, comprising:
acquiring original depth information corresponding to a target object;
generating an original depth image based on the original depth information, and denoising the original depth image to obtain a first denoised image;
denoising the original depth information to obtain preprocessed depth information, and generating a second denoised image based on the preprocessed depth information;
and fusing the first denoised image and the second denoised image to obtain a target depth image corresponding to the target object.
2. The method of claim 1, wherein, taking the original depth image or the original depth information as original depth data, the denoising processing comprises:
performing spatial domain denoising processing on the original depth data to obtain spatial domain denoised data;
performing temporal denoising processing on the spatial domain denoised data based on reference denoising data corresponding to the original depth data to obtain fused denoising data; and
performing texture fusion processing on the original depth data and the fused denoising data to obtain target denoising data, wherein the target denoising data is the first denoised image or the preprocessed depth information.
3. The method of claim 2, further comprising:
obtaining more than one light spot signal reflected by the target object;
determining original depth data corresponding to the more than one light spot signals respectively;
forming a light spot signal sequence from the more than one light spot signals, determining reference denoising data corresponding to each light spot signal in the light spot signal sequence according to the fused denoising data corresponding to the light spot signal preceding it in the light spot signal sequence, and taking the reference denoising data corresponding to the light spot signal as the reference denoising data corresponding to the original depth data determined from that light spot signal.
4. The method of claim 3, further comprising:
and for each light spot signal in the light spot signal sequence, when the light spot signal is the first light spot signal in the light spot signal sequence, after the spatial domain denoised data corresponding to the first light spot signal is obtained, performing texture fusion processing on the original depth data and the spatial domain denoised data corresponding to the first light spot signal to obtain the target denoising data.
5. The method according to claim 2, wherein the spatial domain denoised data is a spatial domain denoised image, the reference denoising data is a reference denoised image, and the fused denoising data is a fused denoised image;
wherein performing the temporal denoising processing on the spatial domain denoised data based on the reference denoising data corresponding to the original depth data to obtain the fused denoising data comprises:
constructing a target difference image based on the difference between the reference denoised image and the spatial domain denoised image;
determining a fusion denoising weight corresponding to the target difference image; and
performing temporal denoising processing on the reference denoised image and the spatial domain denoised image using the fusion denoising weight to obtain the fused denoised image.
6. The method of claim 5, wherein constructing the target difference image based on the difference between the reference denoised image and the spatial domain denoised image comprises:
when there is more than one reference denoised image corresponding to the light spot signal, constructing a difference image based on the difference between each reference denoised image and the spatial domain denoised image; and
generating the target difference image from the more than one difference images.
7. The method of claim 5, wherein constructing the target difference image based on the difference between the reference denoised image and the spatial domain denoised image comprises:
acquiring reference denoised images at different levels, wherein each reference denoised image is evenly divided into more than one image block, and the number of image blocks in the reference denoised images at different levels is different;
acquiring spatial domain denoised images at different levels, wherein each spatial domain denoised image is evenly divided into more than one image block, and the number of image blocks in the spatial domain denoised images at different levels is different;
obtaining a difference image of a corresponding level based on the difference between the reference denoised image and the spatial domain denoised image of the same level; and
generating the target difference image from the difference images at the different levels.
8. The method according to claim 7, wherein the reference denoised image is evenly divided into more than one image block, and the mean of the pixel values within each image block is used as the pixel value of every pixel position in that block; the spatial domain denoised image is evenly divided into more than one image block, and the mean of the pixel values within each image block is used as the pixel value of every pixel position in that block; and
wherein obtaining the difference image of the corresponding level based on the difference between the reference denoised image and the spatial domain denoised image of the same level comprises:
obtaining the difference image of the corresponding level based on the pixel-value difference, at each pixel position, between the reference denoised image and the spatial domain denoised image of the same level.
9. The method of claim 5, wherein the fused denoised image is calculated by the following formula:
fnr_frm(i, j) = bdr_wgt × snr_frm(i, j) + (1 − bdr_wgt) × ref_frm(i, j)
wherein fnr_frm(i, j) represents the pixel value of the fused denoised image at pixel position (i, j); snr_frm(i, j) represents the pixel value of the spatial domain denoised image at pixel position (i, j); ref_frm(i, j) represents the pixel value of the reference denoised image at pixel position (i, j); and bdr_wgt represents the fusion denoising weight.
10. The method of claim 2, wherein performing the texture fusion processing on the original depth data and the fused denoising data to obtain the target denoising data comprises:
acquiring a texture value corresponding to the original depth data;
determining a texture fusion weight corresponding to the texture value; and
performing texture fusion processing on the original depth data and the fused denoising data using the texture fusion weight to obtain the target denoising data.
11. The method of claim 1, wherein the fusing the first denoised image and the second denoised image to obtain a target depth image corresponding to the target object comprises:
acquiring pixel values and gradients of the first denoised image and the second denoised image at each pixel position;
and determining the pixel value of the target depth image at each pixel position according to the pixel value and the gradient of the first denoised image and the second denoised image at each pixel position.
12. The method of claim 11, wherein the target depth image is calculated by the following formula:
Val(i, j) = (grad_a(i, j) × val_a(i, j) + grad_b(i, j) × val_b(i, j)) / (grad_a(i, j) + grad_b(i, j))
wherein Val(i, j) represents the pixel value of the target depth image at pixel position (i, j); val_a(i, j) represents the pixel value of the first denoised image at pixel position (i, j), and grad_a(i, j) represents the gradient of the first denoised image at pixel position (i, j); val_b(i, j) represents the pixel value of the second denoised image at pixel position (i, j), and grad_b(i, j) represents the gradient of the second denoised image at pixel position (i, j).
13. A depth image generation apparatus, characterized by comprising:
the acquisition module is used for acquiring original depth information corresponding to the target object;
the first denoising module is used for generating an original depth image based on the original depth information and denoising the original depth image to obtain a first denoised image;
the second denoising module is used for denoising the original depth information to obtain preprocessed depth information and generating a second denoised image based on the preprocessed depth information;
and the fusion module is used for fusing the first denoised image and the second denoised image to obtain a target depth image corresponding to the target object.
14. An electronic device comprising a memory and a processor, the memory having stored thereon a computer program, wherein the computer program, when executed by the processor, causes the processor to perform the steps of the depth image generation method of any of claims 1 to 12.
15. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the depth image generation method according to any one of claims 1 to 12.
CN202110728603.3A 2021-06-29 2021-06-29 Depth image generation method and device, electronic equipment and storage medium Pending CN113327209A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110728603.3A CN113327209A (en) 2021-06-29 2021-06-29 Depth image generation method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113327209A (en) 2021-08-31

Family

ID=77425168

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110728603.3A Pending CN113327209A (en) 2021-06-29 2021-06-29 Depth image generation method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113327209A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130242043A1 (en) * 2012-03-19 2013-09-19 Gwangju Institute Of Science And Technology Depth video filtering method and apparatus
CN105354805A (en) * 2015-10-26 2016-02-24 京东方科技集团股份有限公司 Depth image denoising method and denoising device
CN109191506A (en) * 2018-08-06 2019-01-11 深圳看到科技有限公司 Processing method, system and the computer readable storage medium of depth map
CN110458778A (en) * 2019-08-08 2019-11-15 深圳市灵明光子科技有限公司 A kind of depth image denoising method, device and storage medium
CN113012061A (en) * 2021-02-20 2021-06-22 百果园技术(新加坡)有限公司 Noise reduction processing method and device and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
刘田间; 郭连朋; 朱; 赖平: "Research on a depth image restoration algorithm" (一种深度图像修复算法研究), Information Technology (信息技术), no. 06, 25 June 2017 (2017-06-25), pages 115-119 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024139261A1 (en) * 2022-12-29 2024-07-04 广东美的白色家电技术创新中心有限公司 Method and apparatus for determining depth image
CN117115453A (en) * 2023-10-20 2023-11-24 光轮智能(北京)科技有限公司 Target image generation method, device and computer readable storage medium
CN117115453B (en) * 2023-10-20 2024-02-02 光轮智能(北京)科技有限公司 Target image generation method, device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination