CN111598817A - Filling method and system for missing pixels of depth image

Publication number: CN111598817A (granted as CN111598817B)
Application number: CN202010338945.XA
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 钟旭阳, 姚毅
Applicant and current assignee: Beijing Lingyunguang Technology Group Co ltd
Legal status: Active (granted)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/20 - Image enhancement or restoration using local operators
    • G06T7/00 - Image analysis
    • G06T7/50 - Depth or shape recovery

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The application discloses a method for filling missing pixels of a depth image, which comprises the following steps: determining the missing pixels to be filled according to the data validity of the current depth image; setting filling parameters according to a predetermined strategy; constructing a structure tensor according to a predetermined strategy; and performing an iterative solution of the corresponding formula to fill the missing pixels. In addition, when filling the missing pixels of the depth image, a valid-pixel region with large depth-image noise can also be designated as a region to be filled, so that the valid-pixel region of the depth image is smoothly denoised while the missing pixels are filled. The beneficial effects of the technical scheme of the application are: 1. the missing pixels are filled by the filling algorithm, so that the expected filling effect can be achieved; 2. by constructing different structure tensors D, the filled pixels can be smoothed together with the surrounding area while good edge information is maintained.

Description

Filling method and system for missing pixels of depth image
Technical Field
The present disclosure relates to depth image technologies, and in particular, to a method and a system for filling missing pixels in a depth image.
Background
In the field of visual images, depth images can reflect the depth information of a photographed object. By processing the acquired depth image, height measurement, volume measurement, related detection operations and the like can be realized for the object. At present, due to the limited viewing angle of a laser light source or a camera, the laser cannot scan part of the region of an object, or the camera cannot observe part of the laser's light-bar information, so the depth values of some pixels in the image cannot be calculated. As a result, missing pixels exist in the depth image, which affects the processing of subsequent algorithms.
Disclosure of Invention
The technical problem to be solved by the application is to provide a method for filling missing pixels of a depth image, the method can fill the missing pixels of the depth image based on an image restoration technology, and an expected filling effect is achieved by setting a structure tensor and a filling parameter. In addition, another technical problem to be solved by the present application is to provide a system for filling missing pixels of a depth image.
In order to solve the above technical problem, the present application provides a method for filling missing pixels in a depth image, including the following steps:
determining missing pixels needing to be filled according to the data effectiveness of the current depth image;
setting filling parameters according to a preset strategy;
constructing a structure tensor according to a predetermined strategy;
and performing an iterative solution of the following formula to fill the missing pixels:

    ∂u/∂t = div(D · ∇u),  u(x, y, 0) = u₀(x, y)

wherein u₀(x, y) is the initial value of the depth data of the depth image to be filled, D is the constructed structure tensor, ∇u is the gradient of the depth image, · denotes the dot-product operation, div() denotes the divergence operation, and ∂u/∂t denotes the partial derivative of the depth image u(x, y, t) with respect to time t. As time t → ∞, u(x, y, t) is the depth image in which the missing pixels have been filled.
Optionally, the step of determining a missing pixel to be filled according to the data validity of the current depth image includes:
the depth image comprises depth data and data validity, wherein the depth data is depth information represented by a current pixel, the data validity represents whether the current pixel is a missing pixel or not, and when the data validity is FALSE, the current pixel is represented as the missing pixel.
Optionally, the step of setting the filling parameter according to a predetermined policy includes:
the filling parameters comprise the number of iterations, i.e. the number of iterative computations required to obtain a well-filled depth image; within a certain range, a larger number of iterations improves the filling effect for the missing pixels but also increases the time consumed;
and the number of iterations is chosen as a compromise between filling effect and time consumption.
Optionally, the step of setting the filling parameter according to the predetermined policy further includes:
the filling parameters further comprise at least one of an iteration step size, a contrast threshold and a filter coefficient;
the iteration step size is the step size during iterative update calculation;
the contrast threshold is used to distinguish flat areas from edge areas in the depth image: when the difference between the depth value of a pixel in the current area and its neighborhood is smaller than the contrast threshold, the current area is a flat area; otherwise it is an edge area;
the filter coefficients are parameters set to reduce the effect of noise in the image before filling in the missing pixels of the depth image.
Optionally, the step of constructing a structure tensor according to a predetermined strategy includes:
constructing a structure tensor D as an identity matrix I;
the two eigenvalues of the identity matrix I are equal and represent the diffusion intensity of the image depth information, so that the intensities of the surrounding effective pixels of the missing pixel diffusing the effective depth information into the missing pixel in different diffusion directions during the filling process are equal. Optionally, after the structure tensor D is constructed as the identity matrix I, iterative solution is performed through the following formula, and filling of the depth value of the missing pixel is performed:
Figure BDA0002467699500000021
Optionally, the step of performing the iterative solution through the above formula to fill the missing pixels comprises the following steps:
s101: judging whether the current iteration times reach the preset iteration times or not, and if so, outputting a result image;
if not, the next iterative calculation is carried out.
Optionally, if not, performing the next iterative computation, including the following steps:
s102, judging whether the pixels of the depth image are completely traversed or not;
if yes, return to step S101;
if not, the following steps are executed:
s103: and acquiring the next pixel to be processed, and judging whether the pixel is a missing pixel.
Optionally, in step S103, if yes, the following steps are performed:
s104: the depth value is updated according to the following formula:
Figure BDA0002467699500000031
if not, the step S102 is executed in a reply mode.
In addition, to solve the above technical problem, the present application further provides a system for filling missing pixels of a depth image, where the system includes:
the missing pixel determining unit is used for determining missing pixels needing to be filled according to the data effectiveness of the current depth image;
a filling parameter setting unit for setting a filling parameter according to a predetermined policy;
a structure tensor construction unit for constructing a structure tensor according to a predetermined strategy;
the computing unit is used for performing an iterative solution of the following formula to fill the missing pixels:

    ∂u/∂t = div(D · ∇u),  u(x, y, 0) = u₀(x, y)

wherein u₀(x, y) is the initial value of the depth data of the depth image to be filled, D is the constructed structure tensor, ∇u is the gradient of the depth image, · denotes the dot-product operation, div() denotes the divergence operation, and ∂u/∂t denotes the partial derivative of the depth image u(x, y, t) with respect to time t. As time t → ∞, u(x, y, t) is the depth image in which the missing pixels have been filled.
In one embodiment, the present application provides a method for filling missing pixels of a depth image, comprising the following steps:
determining the missing pixels to be filled according to the data validity of the current depth image; setting filling parameters according to a predetermined strategy; constructing a structure tensor according to a predetermined strategy; and performing an iterative solution of the following formula to fill the missing pixels:

    ∂u/∂t = div(D · ∇u),  u(x, y, 0) = u₀(x, y)

wherein u₀(x, y) is the initial value of the depth data of the depth image to be filled, D is the constructed structure tensor, ∇u is the gradient of the depth image, · denotes the dot-product operation, div() denotes the divergence operation, and ∂u/∂t denotes the partial derivative of the depth image u(x, y, t) with respect to time t. As time t → ∞, u(x, y, t) is the depth image in which the missing pixels have been filled.
The filling method can fill the missing pixels of the depth image based on the image-inpainting technique and can achieve the expected filling effect.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 is a logic flow diagram of a method for filling missing pixels in a depth image according to an embodiment of the present disclosure;
FIG. 2 is a diagram illustrating depth data and data validity of a depth image according to the present application;
FIG. 3 is a detailed flowchart of the embodiment of FIG. 1 during iterative computation;
fig. 4 is an exemplary diagram of the iterative computation in fig. 3.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention.
In some of the flows described in the present specification and claims and in the above figures, a number of operations are included that occur in a particular order, but it should be clearly understood that these operations may be performed out of order or in parallel as they occur herein, with the order of the operations being indicated as 101, 102, etc. merely to distinguish between the various operations, and the order of the operations by themselves does not represent any order of performance. Additionally, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel. It should be noted that, the descriptions of "first", "second", etc. in this document are used for distinguishing different messages, devices, modules, etc., and do not represent a sequential order, nor limit the types of "first" and "second" to be different.
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1 and fig. 2, fig. 1 is a logic flow diagram illustrating a method for filling missing pixels in a depth image according to an embodiment of the present disclosure; fig. 2 is a schematic diagram of depth data and data validity of a depth image in the present application.
The depth image processing method and device mainly process the depth image with the missing pixels. Fig. 2 shows a depth image with missing pixels, where the depth image is composed of two parts, namely depth data and data validity, where the depth data is depth information represented by a current pixel, and the data validity represents whether the current pixel is a missing pixel or not, and when the data validity is FALSE, the current pixel is a missing pixel.
Note the meanings of the two parts: the depth data is the depth information represented by the current pixel; the data validity indicates whether the current pixel is valid, i.e. whether it is a missing pixel (an invalid pixel is a missing pixel). A missing pixel is a pixel without depth information, formed because the camera did not acquire the depth information of the corresponding point. Since a missing pixel carries no depth information, its value needs to be filled in using the depth information of the surrounding pixels.
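As a concrete illustration of the two parts, the depth image can be modeled as a pair of arrays, one holding the depth data and one holding the data validity. This is a minimal sketch assuming numpy; the array values are hypothetical and not taken from FIG. 2.

```python
import numpy as np

# A depth image, as described above, carries two parts: the depth data and
# a per-pixel data-validity flag. The values below are illustrative only.
depth = np.array([[12.0, 13.0, 0.0],
                  [11.0,  0.0, 14.0],
                  [12.0, 13.0, 13.0]])
validity = np.array([[True, True, False],
                     [True, False, True],
                     [True, True, True]])

# Missing pixels are exactly those whose data validity is FALSE.
missing = ~validity
print(list(zip(*np.nonzero(missing))))  # coordinates of the pixels to fill
```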
As shown in fig. 1, the present application provides a method for filling missing pixels in a depth image, including the following steps:
determining missing pixels needing to be filled according to the data effectiveness of the current depth image;
setting filling parameters according to a preset strategy;
constructing a structure tensor according to a predetermined strategy;
and performing an iterative solution of the following formula to fill the missing pixels:

    ∂u/∂t = div(D · ∇u),  u(x, y, 0) = u₀(x, y)

wherein u₀(x, y) is the initial value of the depth data of the depth image to be filled, D is the constructed structure tensor, ∇u is the gradient of the depth image, · denotes the dot-product operation, div() denotes the divergence operation, and ∂u/∂t denotes the partial derivative of the depth image u(x, y, t) with respect to time t. As time t → ∞, u(x, y, t) is the depth image in which the missing pixels have been filled.
According to the invention, the portion of the depth image whose data validity is FALSE is regarded as the region to be filled, and this region is then filled according to the filling algorithm expressed by the above formula. The filling method can fill the missing pixels of the depth image based on the image-inpainting technique and makes the filled pixels transition smoothly into the surrounding area.
The depth image missing-pixel filling method mainly comprises the following steps: determining the filling area, setting the filling parameters, constructing the structure tensor, and performing the iterative solution. The filling-area determination decides which pixels in the depth image are missing pixels to be filled: as shown in FIG. 2(b), TRUE indicates that the current pixel is a valid pixel and FALSE that it is a missing pixel, so the FALSE area in the data-validity part of the depth image can be regarded as the area to be filled.
In the above discussion, it should be noted that the FALSE area is an area to be filled with information.
There are several ways to determine the FALSE region:
One is that the depth image format itself carries the depth data together with its data-validity part, which directly indicates which pixels are TRUE and which are FALSE.
Another is the depth image format output by the 3D cameras of some manufacturers: depth values are typically stored in the range -32768 to 32767, and pixels whose depth value is -32768 are regarded as invalid pixels, i.e. the FALSE region referred to herein; the remaining pixels, which carry depth information, are valid pixels, i.e. the TRUE region.
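The second convention can be sketched as a small helper that derives the data-validity part from raw camera output. The -32768 sentinel follows the text above; the function name and array values are illustrative.

```python
import numpy as np

SENTINEL = -32768  # invalid-depth marker used by some 3D camera formats

def validity_from_sentinel(depth_raw: np.ndarray) -> np.ndarray:
    """Derive the data-validity (TRUE/FALSE) part from raw int16 depth data
    in which -32768 marks pixels without depth information."""
    return depth_raw != SENTINEL

raw = np.array([[100, -32768, 250],
                [300, 400, -32768]], dtype=np.int16)
valid = validity_from_sentinel(raw)
print(valid)  # FALSE marks the pixels that need filling
```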
The filling-parameter setting mainly sets the parameters required in the missing-pixel filling process, such as: the number of iterations, the iteration step size, the contrast threshold, and the filter coefficient. Generally speaking, as the number of iterations increases, the filling effect for the missing pixels of the depth image improves, but the time consumed also increases, so this parameter must be set as a compromise between filling effect and time consumption.
That is, the number of iterations is roughly proportional to the filling effect of the missing pixels and inversely proportional to the iteration time; it is chosen as a compromise between the two.
It should be noted that the proportionality between the number of iterations and the filling effect is not absolute; it holds within a certain numerical range. In general, the larger the number of iterations, the better the missing pixels are filled. However, once the number of iterations exceeds a certain value, the filling effect is ideal or close to ideal and changes little thereafter; that is, once the iteration count reaches a certain number, the filling effect reaches a stable value, and further increasing the iterations no longer changes it. (The specific number of iterations depends on the specific image.) The iteration step size is the step used in the iterative update calculation and is generally set between 0 and 0.25. The contrast threshold can be used to distinguish which part of the depth image is edge information and which part is a flat region, and it should be set so as to separate the two. For example: if the difference between a flat-region pixel and its neighborhood is 20 while the difference between an edge-region pixel and its neighborhood is 220, the contrast threshold can be set to 200 to distinguish the flat region from the edge region of the depth image. The filter coefficient is a parameter set to reduce the influence of noise in the image before the missing pixels are filled; its setting depends on the actual scene.
It should be noted that the filling parameter may include at least one of an iteration step, a contrast threshold, and a filter coefficient, which is not limited in this application:
the iteration step length is the step length size in the iterative updating calculation;
the contrast threshold is used to distinguish flat areas from edge areas in the depth image: when the difference between the depth value of a pixel in the current area and its neighborhood is smaller than the contrast threshold, the current area is a flat area; otherwise it is an edge area;
the filter coefficients are parameters set to reduce the effect of noise in the image before filling in the missing pixels of the depth image.
In the above embodiments, further designs may be made.
For example, the step of constructing the structure tensor according to the predetermined strategy includes:
constructing a structure tensor D as an identity matrix I;
it should be noted that the structure tensor D is only an example and is not limited in this application. When the structure tensor D is constructed as the identity matrix I, the filling effect at this time can achieve the effect of smoothing the filled pixels and the surrounding area. When the structure tensor D is constructed in other forms, the good edge information can be kept on the basis of the smoothness of the filling pixels and the surrounding area. I.e. the structure tensor D is structured in relation to the filling effect to be finally achieved. The structure tensor D can be designed according to the filling effect which is finally needed to be achieved. If the effect of the final fill-in is desired, which is sufficiently smooth with the missing pixels and the surrounding active pixels, the structure tensor D can be constructed as the identity matrix I. The structure tensor D is not necessarily constructed as the identity matrix I.
The two eigenvalues of the identity matrix I are equal, and the eigenvalues represent the diffusion intensity of the image depth information, so that during filling the valid pixels surrounding a missing pixel diffuse their valid information into the missing pixel with equal intensity. After the structure tensor D is constructed as the identity matrix I, the iterative solution is performed through the following formula, denoted formula (1), to fill the depth values of the missing pixels:

    ∂u/∂t = div(∇u),  u(x, y, 0) = u₀(x, y)    (1)
The above scheme is discussed as follows:
the structure tensor D structure is constructed according to the filling effect to be finally achieved, for example: if it is desired that the finally filled missing pixel has a smooth behavior in all its directions, D can be constructed as the identity matrix I, with the iterative formula as above. The filling effect of the algorithm on the missing pixels of the depth image is closely related to the structure tensor D. For example: the structure tensor D is constructed as the identity matrix I in the above formula, because the two eigenvalues of the identity matrix I are equal, and the eigenvalue represents the diffusion intensity of the image depth information, so that the intensity of diffusion of the effective information into the missing pixel by the effective pixel around the missing pixel represented by the identity matrix in the missing pixel filling process is equal, and the finally filled missing pixel has smooth characteristics in all directions. The iterative solution part is mainly used for carrying out iterative solution according to the set filling parameters and the structure tensor D until the iteration times reach the final set times.
For the set filling parameters, the following can also be stated:
the step of setting the filling parameters according to a predetermined policy comprises:
the filling parameters comprise the number of iterations, i.e. the number of iterative computations required to obtain a well-filled depth image; within a certain range, a larger number of iterations improves the filling of the missing pixels but increases the time consumed, so the number of iterations is chosen as a compromise between filling effect and time consumption.

The step of setting the filling parameters according to a predetermined policy further comprises: the filling parameters further comprise at least one of an iteration step size, a contrast threshold and a filter coefficient; the iteration step size is the step size used in the iterative update calculation; the contrast threshold is used to distinguish flat areas from edge areas in the depth image: when the difference between the depth value of a pixel in the current area and its neighborhood is smaller than the contrast threshold, the current area is a flat area, otherwise it is an edge area; and the filter coefficients are parameters set to reduce the effect of noise in the image before the missing pixels are filled.

The step of constructing the structure tensor according to a predetermined strategy comprises: constructing the structure tensor D as the identity matrix I; the two eigenvalues of I are equal and represent the diffusion intensity of the image depth information, so that during filling the valid pixels surrounding a missing pixel diffuse their valid information into it with equal intensity.
In the above technical solution, a specific iterative technical process may be introduced, please refer to fig. 3 and fig. 4, where fig. 3 is a specific flowchart of iterative computation in the embodiment of fig. 1; fig. 4 is an exemplary diagram of the iterative computation in fig. 3.
As shown in fig. 3, the step of performing the iterative solution of the above formula to fill the missing pixels comprises the following steps:
s101: judging whether the current iteration times reach the preset iteration times or not, and if so, outputting a result image;
if not, the next iterative calculation is carried out.
Further, if not, the next iterative computation is carried out, and the steps comprise the following steps:
s102, judging whether the pixels of the depth image are completely traversed or not;
if yes, return to step S101;
if not, the following steps are executed:
s103: and acquiring the next pixel to be processed, and judging whether the pixel is a missing pixel.
In step S103, if yes, the following steps are performed:
s104: the depth value is updated according to the following formula:
Figure BDA0002467699500000081
if not, the step S102 is executed in a reply mode.
The above formula is formula (2).
That is:
the iterative computation steps involved in the present application are shown in fig. 3, and described in detail below:
firstly, judging whether the number of iterations of the algorithm has reached the set number; if so, stop the loop and execute the fifth step, otherwise enter the second step;
secondly, judging whether the pixel of the depth image is traversed, if so, executing the first step, otherwise, acquiring the next pixel to be processed and executing the third step;
thirdly, judging whether the current pixel is a missing pixel, if so, executing the fourth step, otherwise, executing the second step;
fourthly, discretizing the formula (1), solving in one step, taking the solved depth value as the latest filling value of the current missing pixel, and then executing the second step;
fifth, the filled depth image is output.
The calculation example shown in fig. 4 follows the iterative calculation steps described above: the number of iterations is 3, the iteration step size is set to 0.1, the initial value of the missing pixels is 0, and the structure tensor D is constructed as the identity matrix I, i.e. the iterative solution uses formula (2).
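The example's parameters (3 iterations, step size 0.1, missing pixels initialized to 0, D = I) can be reproduced with a short loop. The vectorized, Jacobi-style update below differs from the per-pixel traversal of steps S102-S104 only in update order, and edge replication at the image border is an additional assumption; the patent does not specify its boundary treatment, and the example values are not those of FIG. 4.

```python
import numpy as np

def fill_missing(depth, validity, iterations=3, tau=0.1):
    """Iterative filling with D = I (formula (2)): only pixels whose data
    validity is FALSE are updated; valid pixels keep their depth values."""
    u = np.where(validity, depth, 0.0).astype(float)  # missing pixels start at 0
    missing = ~validity
    for _ in range(iterations):                       # step 1: iteration count
        p = np.pad(u, 1, mode="edge")                 # assumed border handling
        lap = (p[2:, 1:-1] + p[:-2, 1:-1] + p[1:-1, 2:] + p[1:-1, :-2]
               - 4.0 * u)                             # 4-neighbour Laplacian
        u = np.where(missing, u + tau * lap, u)       # steps 2-4: update missing only
    return u                                          # step 5: output result

depth = np.array([[10.0, 10.0, 10.0],
                  [10.0, 0.0, 10.0],
                  [10.0, 10.0, 10.0]])
validity = np.ones((3, 3), dtype=bool)
validity[1, 1] = False                                # one missing pixel
print(fill_missing(depth, validity))                  # centre converges toward 10
```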
In the present application, the following modifications can be made to the scheme:
when filling missing pixels of the depth image, an effective pixel region with large noise of the depth image can also be designated as a region to be filled, so that the smooth denoising effect of the effective pixel region of the depth image is realized while the missing pixels are filled.
In the above discussion, the areas of the depth image where noise is large are determined by the staff from the images captured in the field. For example: if noise exists at area A of the depth image, a worker marks a region of interest at A; when the algorithm processes the image internally, it sets the data validity of that region of interest to FALSE, i.e. the noisy area A of the image is also treated as an area requiring pixel filling.
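Marking the worker's region of interest can be sketched as flipping the validity flags inside the selected region; a rectangular region and the helper's name are assumed here purely for illustration.

```python
import numpy as np

def mark_noisy_roi(validity: np.ndarray, top: int, left: int,
                   bottom: int, right: int) -> np.ndarray:
    """Set a worker-selected rectangular region of interest to FALSE so the
    filling algorithm also smooths/denoises it. A rectangle is an assumption
    for illustration; the marked region could have any shape."""
    out = validity.copy()
    out[top:bottom, left:right] = False
    return out

validity = np.ones((4, 4), dtype=bool)   # all pixels initially valid
print(mark_noisy_roi(validity, 1, 1, 3, 3))
```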
The beneficial effects of the technical scheme of the application are that: 1. the missing pixels are filled by using a filling algorithm, so that the expected filling effect can be achieved; 2. by constructing different structure tensors D, a smooth basis of filling pixels and surrounding areas can be achieved while maintaining good edge information.
The beneficial effects of the effective deformation technical scheme are as follows: 1. when missing pixels are filled, an effective pixel region needing noise filtering is designated as a filtering region (region to be filled), so that smooth denoising of effective pixels in the filtering region can be realized while the missing pixels are filled; 2. by constructing different structure tensors D, the filtering of effective pixels in the filtering area can be realized, and good edge information can be kept.
Among the above technical effects, it should be noted that:
1. The function of the missing-pixel filling tool is to fill in the missing pixels of the depth image (i.e. the pixels whose data validity is FALSE). 2. An actually acquired depth image will contain noisy data; in that case the data validity corresponding to the noisy data is changed from TRUE to FALSE, i.e. the noisy image area is also designated as an area that needs pixel filling, and smooth filtering of the valid pixels in the filtering area is then achieved while the missing pixels are filled.
In addition, regarding "by constructing different structure tensors D, smooth filtering of valid pixels in the filtering area can be realized while good edge information is kept", the explanation is as follows:
If the desired effect after depth-image filling is smoothing in flat areas (areas where the gradient ∇u is small) but no smoothing in edge areas (areas where ∇u is large), so that edges are not smoothed and blurred, then the structure tensor can be constructed as D = g(|∇u|)·I, where the function g(·) is a decreasing function of the gradient magnitude. This ensures that the smoothing (diffusion intensity) is small where the gradient is large and large where the gradient is small.
In addition, the present application also provides a system for filling missing pixels of a depth image, the system comprising:
the missing pixel determining unit is used for determining the missing pixels to be filled according to the data validity of the current depth image;
a filling parameter setting unit for setting a filling parameter according to a predetermined policy;
a structure tensor construction unit for constructing a structure tensor according to a predetermined strategy;
the computing unit is used for carrying out iterative solution through the following formula to fill missing pixels:
∂u/∂t = div(D·∇u),   u(x, y, 0) = u₀(x, y)
wherein u₀(x, y) is the initial value of the depth data of the depth image to be filled, D is the structure tensor to be constructed, ∇u is the gradient of the depth image, · denotes the matrix-vector product, div(·) denotes the divergence operator, and ∂u/∂t denotes the partial derivative of the depth image u(x, y, t) with respect to time t. As t → ∞, u(x, y, t) becomes the depth image in which the missing pixels have been filled.
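For the isotropic case D = I, where div(D·∇u) reduces to the Laplacian Δu, the iterative solution can be sketched as follows. The step size tau, the iteration count, and the mean-value initialization of the holes are illustrative choices, not values fixed by the specification; only the masked pixels are updated, so the valid measurements act as fixed boundary values:

```python
import numpy as np

def fill_missing(depth, mask, iters=200, tau=0.2):
    """Explicit diffusion fill: iterate u <- u + tau * div(grad u),
    restricted to the missing pixels given by the boolean `mask`."""
    u = depth.astype(float).copy()
    u[mask] = np.mean(u[~mask])          # crude initial value u0 for holes
    for _ in range(iters):
        # 5-point discrete Laplacian div(grad u)
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
        u[mask] += tau * lap[mask]       # update holes only; valid pixels fixed
    return u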
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods of the various embodiments or some parts of the embodiments.
Reference throughout this specification to "embodiments," "some embodiments," "one embodiment," or "an embodiment," etc., means that a particular feature, component, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases "in various embodiments," "in some embodiments," "in at least one other embodiment," or "in an embodiment," or the like, throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, components, or characteristics may be combined in any suitable manner in one or more embodiments. Thus, without limitation, a particular feature, component, or characteristic illustrated or described in connection with one embodiment may be combined, in whole or in part, with a feature, component, or characteristic of one or more other embodiments. Such modifications and variations are intended to be included within the scope of the present application.
Moreover, those skilled in the art will appreciate that aspects of the present application may be illustrated and described in terms of several patentable species or situations, including any new and useful combination of processes, machines, manufacture, or materials, or any new and useful improvement thereon. Accordingly, various aspects of the present application may be embodied entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.) or in a combination of hardware and software. The above hardware or software may be referred to as a "data block," "module," "engine," "terminal," "component," or "system." Furthermore, aspects of the present application may be represented as a computer product, including computer readable program code, embodied in one or more computer readable media.
It is to be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that an article or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present application and are presented to enable those skilled in the art to understand and practice the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A method for filling missing pixels of a depth image is characterized by comprising the following steps:
determining the missing pixels to be filled according to the data validity of the current depth image;
setting filling parameters according to a preset strategy;
constructing a structure tensor according to a predetermined strategy;
performing an iterative solution by the following formula to fill the missing pixels:
∂u/∂t = div(D·∇u),   u(x, y, 0) = u₀(x, y)
wherein u₀(x, y) is the initial value of the depth data of the depth image to be filled, D is the structure tensor to be constructed, ∇u is the gradient of the depth image, · denotes the matrix-vector product, and div(·) denotes the divergence operation; ∂u/∂t denotes the partial derivative of the depth image u(x, y, t) with respect to time t; as t → ∞, u(x, y, t) is the depth image in which the missing pixels have been filled.
2. The method for filling missing pixels of a depth image as claimed in claim 1, wherein the step of determining the missing pixels to be filled according to the data validity of the current depth image comprises:
the depth image comprises depth data and data validity, wherein the depth data is the depth information represented by the current pixel, and the data validity indicates whether the current pixel is a missing pixel; when the data validity is FALSE, the current pixel is a missing pixel.
3. The method for filling missing pixels in a depth image as claimed in claim 1, wherein the step of setting the filling parameters according to a predetermined strategy comprises:
the filling parameters comprise the number of iterations, which is the number of iterative computations required to obtain a depth image with a good filling effect; a larger number of iterations improves the filling effect of the missing pixels but increases the time consumed;
the number of iterations is therefore chosen as a trade-off between filling effect and time consumption.
4. The method for filling missing pixels in a depth image as claimed in claim 3, wherein the step of setting the filling parameters according to a predetermined strategy further comprises:
the filling parameters further comprise at least one of an iteration step size, a contrast threshold and a filter coefficient;
the iteration step size is the step size during iterative update calculation;
the contrast threshold is used for distinguishing flat regions from edge regions in the depth image; when the difference between the depth value of a pixel in the current region and its neighborhood is smaller than the contrast threshold, the current region is a flat region; otherwise, it is an edge region;
the filter coefficients are parameters set to reduce the effect of noise in the image before filling in the missing pixels of the depth image.
5. The method for filling in missing pixels in a depth image as claimed in claim 3, wherein the step of constructing the structure tensor according to the predetermined strategy comprises:
constructing a structure tensor D as an identity matrix I;
the two eigenvalues of the identity matrix I are equal and represent the diffusion strength of the image depth information, so that during filling the valid pixels surrounding a missing pixel diffuse valid depth information into the missing pixel with equal strength in all diffusion directions.
6. The method as claimed in claim 5, wherein the filling of the depth value of the missing pixel is performed by constructing the structure tensor D as the identity matrix I and then performing an iterative solution by using the following formula:
∂u/∂t = div(∇u) = Δu,   u(x, y, 0) = u₀(x, y)
7. the method for filling missing pixels in a depth image as claimed in any one of claims 3 to 6, wherein said step of iteratively solving by the following formula comprises the steps of:
S101: judging whether the current number of iterations has reached the preset number of iterations; if so, outputting the result image;
if not, performing the next iterative computation.
8. The method for filling missing pixels in a depth image as claimed in claim 7, wherein if not, the step of performing the next iterative computation comprises the steps of:
S102: judging whether all pixels of the depth image have been traversed;
if yes, returning to step S101;
if not, the following steps are executed:
S103: acquiring the next pixel to be processed, and judging whether the pixel is a missing pixel.
9. The method of claim 8, wherein the depth image is filled with missing pixels,
in step S103, if yes, the following steps are performed:
s104: the depth value is updated according to the following formula:
u⁽ᵏ⁺¹⁾(x, y) = u⁽ᵏ⁾(x, y) + τ · div(D·∇u⁽ᵏ⁾)(x, y)
where τ is the iteration step size and k is the iteration index;
if not, returning to step S102.
10. A system for filling missing pixels of a depth image, the system comprising:
the missing pixel determining unit is used for determining the missing pixels to be filled according to the data validity of the current depth image;
a filling parameter setting unit for setting a filling parameter according to a predetermined policy;
a structure tensor construction unit for constructing a structure tensor according to a predetermined strategy;
the computing unit is used for carrying out iterative solution through the following formula to fill missing pixels:
∂u/∂t = div(D·∇u),   u(x, y, 0) = u₀(x, y)
wherein u₀(x, y) is the initial value of the depth data of the depth image to be filled, D is the structure tensor to be constructed, ∇u is the gradient of the depth image, · denotes the matrix-vector product, and div(·) denotes the divergence operation; ∂u/∂t denotes the partial derivative of the depth image u(x, y, t) with respect to time t; as t → ∞, u(x, y, t) at that time is the depth image with the missing pixels already filled.
CN202010338945.XA 2020-04-26 2020-04-26 Filling method and system for missing pixels of depth image Active CN111598817B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010338945.XA CN111598817B (en) 2020-04-26 2020-04-26 Filling method and system for missing pixels of depth image


Publications (2)

Publication Number Publication Date
CN111598817A true CN111598817A (en) 2020-08-28
CN111598817B CN111598817B (en) 2023-07-18

Family

ID=72190708

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010338945.XA Active CN111598817B (en) 2020-04-26 2020-04-26 Filling method and system for missing pixels of depth image

Country Status (1)

Country Link
CN (1) CN111598817B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103198486A (en) * 2013-04-10 2013-07-10 浙江大学 Depth image enhancement method based on anisotropic diffusion
US20160171669A1 (en) * 2014-12-11 2016-06-16 Sony Corporation Using depth for recovering missing information in an image
CN105761213A (en) * 2014-12-16 2016-07-13 北京大学 Image inpainting method and device
CN107993201A (en) * 2017-11-24 2018-05-04 北京理工大学 A kind of depth image enhancement method for retaining boundary characteristic
CN109903322A (en) * 2019-01-24 2019-06-18 江苏大学 A kind of depth camera depth image restorative procedure


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HONGYANG XUE et al.: "Depth Image Inpainting: Improving Low Rank Matrix Completion with Low Gradient Regularization", arXiv *
HE Yuting et al.: "Improved Criminisi inpainting based on the structure tensor", Journal of Image and Graphics *
ZHOU Zigu; CAO Jie; HAO Qun; GAO Zedong; XIAO Yuqing: "Research on a depth image enhancement algorithm preserving boundary features", Journal of Applied Optics *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111986124A (en) * 2020-09-07 2020-11-24 北京凌云光技术集团有限责任公司 Filling method and device for missing pixels of depth image
CN111986124B (en) * 2020-09-07 2024-05-28 凌云光技术股份有限公司 Filling method and device for missing pixels of depth image
CN112465723A (en) * 2020-12-04 2021-03-09 北京华捷艾米科技有限公司 Method and device for repairing depth image, electronic equipment and computer storage medium
CN113658037A (en) * 2021-08-24 2021-11-16 凌云光技术股份有限公司 Method and device for converting depth image into gray image
CN113658037B (en) * 2021-08-24 2024-05-14 凌云光技术股份有限公司 Method and device for converting depth image into gray level image

Also Published As

Publication number Publication date
CN111598817B (en) 2023-07-18

Similar Documents

Publication Publication Date Title
CN109840477B (en) Method and device for recognizing shielded face based on feature transformation
KR101871098B1 (en) Apparatus and method for image processing
JP6218402B2 (en) Method and apparatus for removing non-uniform motion blur in large input images based on tile units
US9818201B2 (en) Efficient lens re-distortion
CN111598817A (en) Filling method and system for missing pixels of depth image
CN112785507A (en) Image processing method and device, storage medium and terminal
EP2884745A1 (en) Virtual view generating method and apparatus
CN110705576B (en) Region contour determining method and device and image display equipment
WO2017096814A1 (en) Image processing method and apparatus
CN110728636A (en) Monte Carlo rendering image denoising model, method and device based on generative confrontation network
CN110992243A (en) Intervertebral disc section image construction method and device, computer equipment and storage medium
KR101662407B1 (en) Method for vignetting correction of image and apparatus therefor
JP2018133110A (en) Image processing apparatus and image processing program
US10748248B2 (en) Image down-scaling with pixel sets selected via blue noise sampling
JP5617841B2 (en) Image processing apparatus, image processing method, and image processing program
CN112801890B (en) Video processing method, device and equipment
CN107170007A (en) The method of image device and its generation out-of-focus image with image defocus function
Duan et al. Color texture image inpainting using the non local CTV model
CN112348808A (en) Screen perspective detection method and device
CN111062878A (en) Image denoising method and device and computer readable storage medium
CN108230251A (en) Combined type image recovery method and device
CN108537786B (en) Method and apparatus for processing image
CN113112457B (en) Fiber reinforced composite material uncertainty analysis method and device
CN113496468B (en) Depth image restoration method, device and storage medium
CN111815510B (en) Image processing method based on improved convolutional neural network model and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100094 Beijing city Haidian District Cui Hunan loop 13 Hospital No. 7 Building 7 room 701

Applicant after: Lingyunguang Technology Co.,Ltd.

Address before: 100094 Beijing city Haidian District Cui Hunan loop 13 Hospital No. 7 Building 7 room 701

Applicant before: Beijing lingyunguang Technology Group Co.,Ltd.

GR01 Patent grant