CN110490829B - Depth image filtering method and system - Google Patents

Info

Publication number: CN110490829B
Application number: CN201910789404.6A
Authority: CN (China)
Prior art keywords: target pixel, image, pixel, depth, value
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN110490829A
Inventors: 白志强 (Bai Zhiqiang), 李骊 (Li Li)
Assignee (current and original): Beijing HJIMI Technology Co Ltd
Application filed by Beijing HJIMI Technology Co Ltd

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/20 Image enhancement or restoration by the use of local operators
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20024 Filtering details
    • G06T 2207/20028 Bilateral filtering

Abstract

The invention discloses a depth image filtering method and system, comprising the following steps: determining a target pixel in a second image obtained by expanding an original depth image, and obtaining attribute feature information of the target pixel based on a set dynamic threshold; if the target pixel is located in a planar region, calculating the mean of the pixel depth values in the neighborhood region of the target pixel and updating all pixel depth values in that neighborhood to the mean, yielding the filtering result for the planar region; if the target pixel is located in an edge region, calculating weight values for all pixels in the neighborhood region of the target pixel, computing a target depth value for the target pixel from those weights, and updating the depth value of the target pixel to the target depth value, yielding the filtering result for the edge region; and obtaining the filtered depth image from the filtering results of the planar and edge regions. The invention solves the problem that, when filtering a depth map, a noticeable smoothing effect in planar regions and preservation of edge information cannot be achieved at the same time.

Description

Depth image filtering method and system
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a depth image filtering method and system.
Background
Depth images, also known as range images, are images whose pixel values are the distances (depths) from the image grabber to points in the scene; they directly reflect the geometry of the visible surfaces of the scene. At present, depth images are generally obtained in two ways: direct measurement by hardware, mainly depth cameras based on structured light or on TOF (Time of Flight); or software computation via a binocular (stereo) matching algorithm.
In hardware measurement, the manufacturing process and the performance of the device introduce a measurement error into the depth image, and this error grows as the depth value increases; in software measurement, the resolution of the binocular device, the matching precision of the algorithm, and other factors likewise introduce errors into the depth image.
Although the prior art addresses these errors with filtering algorithms, traditional filtering either produces no obvious smoothing effect in the planar regions of the depth image or fails to preserve edge information.
Disclosure of Invention
In view of these problems, the invention provides a depth image filtering method and system that resolve the inability, during depth image filtering, to achieve an obvious smoothing effect in planar regions while simultaneously preserving edge information.
In order to achieve the purpose, the invention provides the following technical scheme:
a method of filtering a depth image, the method comprising:
performing image expansion on a first image according to a preset sliding window to obtain a second image, wherein the first image represents an original depth image, and the size of the second image has a preset corresponding relation with the size of the first image and the size of the preset sliding window;
determining a target pixel in the second image, and performing attribute judgment on the target pixel based on a set dynamic threshold to obtain attribute feature information of the target pixel, wherein the attribute feature information comprises that the target pixel is located in an edge area or the target pixel is located in a plane area;
if the target pixel is located in the plane area, calculating the mean value of the pixel depth values of the neighborhood area of the target pixel, updating all the pixel depth values in the neighborhood area to the mean value, realizing the filtering processing of all the pixel points of the plane area, and obtaining the filtering result of the plane area;
if the target pixel is located in the edge region, calculating weight values of all pixel points in a neighborhood region of the target pixel, calculating a target depth value of the target pixel according to the weight values, updating the depth value of the target pixel to the target depth value, realizing filtering processing on the pixel points in the edge region, and obtaining a filtering result of the edge region;
and obtaining a filtered depth image based on the filtering result of the plane area and the filtering result of the edge area.
Optionally, the performing image expansion on the first image according to a preset sliding window to obtain a second image includes:
expanding the first image by a first number of pixels on each of its upper, lower, left, and right sides according to the size of a preset sliding window, wherein the size of the preset sliding window is N × N, N is a positive odd number greater than or equal to 3, the first number is (N-1)/2, and the depth value of each expanded pixel equals the depth value of its adjacent pixel;
calculating and obtaining the size of a second image according to the size of a first image and the size of the sliding window, wherein if the width of the first image is W1 and the height of the first image is H1, the width W2 of the second image is W1+ N-1, and the height H2 of the second image is H1+ N-1;
and assigning values to the pixel points in the second image according to the depth values of the pixel points of the first image and the size of a preset sliding window to obtain the assigned second image.
Optionally, the determining a target pixel in the second image, and performing attribute judgment on the target pixel based on a set dynamic threshold to obtain attribute feature information of the target pixel includes:
determining a dynamic threshold value matched with a target pixel of the second image according to reference parameters of the target pixel, wherein the reference parameters comprise a reference percentage and a reference depth value;
acquiring a neighborhood region of the target pixel;
calculating the gradient of each pixel in the neighborhood region to obtain a gradient set;
and comparing each gradient value in the gradient set with the dynamic threshold, if one gradient value is greater than the dynamic threshold, judging that the target pixel is positioned in an edge area, otherwise, judging that the target pixel is positioned in a plane area.
Optionally, if the target pixel is located in a planar region, calculating a mean value of pixel depth values of a neighborhood region of the target pixel, and updating all pixel depth values in the neighborhood region to the mean value, including:
if the target pixel is located in the plane area, calculating the sum of all pixels in the neighborhood area of the target pixel;
calculating to obtain the mean value of all pixels in the neighborhood region according to the sum of all pixels;
updating the depth values of all pixels in the neighborhood region to the mean value;
and updating the coordinate value of the target pixel, and filtering the updated target pixel.
Optionally, if the target pixel is located in the edge region, calculating weight values of all pixel points in a neighborhood region of the target pixel, calculating according to the weight values to obtain a target depth value of the target pixel, updating the depth value of the target pixel to the target depth value, and implementing filtering processing on the pixel points in the edge region to obtain a filtering result of the edge region, including:
if the target pixel is located in the edge region, calculating the weights of all pixel points in the neighborhood region of the target pixel;
normalizing the weight of the pixel point to obtain a normalized weight value;
calculating a target depth value of the target pixel based on the normalized weight value;
and updating the depth value of the target pixel to the target depth value, while the other pixels in the edge region keep their original depth values, thereby filtering the pixel points of the edge region and obtaining the filtering result of the edge region.
Optionally, the method further comprises:
and if the depth value of the pixel point in the neighborhood region is 0, eliminating the pixel point with the depth value of 0.
A system for filtering a depth image, the system comprising:
the image expansion unit is used for performing image expansion on a first image according to a preset sliding window to obtain a second image, wherein the first image represents an original depth image, and the size of the second image has a preset corresponding relation with the size of the first image and the size of the preset sliding window;
the attribute judging unit is used for determining a target pixel in the second image and judging the attribute of the target pixel based on a set dynamic threshold value to obtain attribute feature information of the target pixel, wherein the attribute feature information comprises that the target pixel is positioned in an edge area or the target pixel is positioned in a plane area;
the plane area processing unit is used for calculating the mean value of the pixel depth values of the neighborhood area of the target pixel and updating all the pixel depth values in the neighborhood area to the mean value if the target pixel is located in the plane area, so as to realize the filtering processing of each pixel point of the plane area and obtain the filtering result of the plane area;
the edge area processing unit is used for calculating weight values of all pixel points in a neighborhood area of the target pixel if the target pixel is located in the edge area, calculating a target depth value of the target pixel according to the weight values, updating the depth value of the target pixel to the target depth value, realizing filtering processing on the pixel points in the edge area, and obtaining a filtering result of the edge area;
and the generating unit is used for obtaining a filtered depth image based on the filtering result of the plane area and the filtering result of the edge area.
Optionally, the extension unit includes:
the expansion subunit is configured to expand the first image by a first number of pixels on each of its upper, lower, left, and right sides according to the size of a preset sliding window, where the size of the preset sliding window is N × N, N is a positive odd number greater than or equal to 3, the first number is (N-1)/2, and the depth value of each expanded pixel equals the depth value of its adjacent pixel;
a first calculating subunit, configured to calculate a size of a second image according to a size of a first image and a size of the sliding window, where if the first image has a width W1 and a height H1, a width W2 of the second image is W1+ N-1, and a height H2 of the second image is H1+ N-1;
and the assignment subunit is used for assigning the values of the pixels in the second image according to the depth values of the pixels in the first image and the size of a preset sliding window to obtain the assigned second image.
Optionally, the attribute determining unit includes:
a determining subunit, configured to determine, according to reference parameters of a target pixel of the second image, a dynamic threshold matching the target pixel, where the reference parameters include a reference percentage and a reference depth value;
the first obtaining subunit is used for obtaining a neighborhood region of the target pixel;
the second calculating subunit is used for calculating the gradient of each pixel in the neighborhood region to obtain a gradient set;
and the comparison subunit is configured to compare each gradient value in the gradient set with the dynamic threshold, and if one gradient value is greater than the dynamic threshold, determine that the target pixel is located in an edge region, otherwise, determine that the target pixel is located in a plane region.
Optionally, the plane processing unit includes:
a third computing subunit, configured to compute a sum of all pixels in a neighborhood region of the target pixel if the target pixel is located in a planar region;
the fourth calculating subunit is configured to calculate a mean value of all pixels in the neighborhood region according to the sum of all pixels;
a first updating subunit, configured to update the depth values of all pixels in the neighborhood region to the mean value;
the first processing subunit is used for updating the coordinate value of the target pixel and performing filtering processing on the updated target pixel;
the edge area processing unit includes:
a fifth calculating subunit, configured to calculate the weights of all pixel points in the neighborhood region of the target pixel if the target pixel is located in an edge region;
the normalization subunit is configured to perform normalization processing on the weights of the pixel points to obtain normalized weights;
a sixth calculating subunit, configured to calculate a target depth value of the target pixel based on the normalized weight value;
and the second updating subunit is used for updating the depth value of the target pixel to the target depth value, while the other pixels in the edge region keep their original depth values, so that the filtering processing of the pixel points in the edge region is realized and the filtering result of the edge region is obtained.
Compared with the prior art, the invention provides a depth image filtering method and system. The original depth image is first expanded to obtain a second image; the attribute of a target pixel in the second image is then judged against a dynamic threshold. The dynamic threshold is adaptive and robust across different depth values, and classifying depth pixels by it allows a different filtering treatment for each attribute: in planar regions the depth values of all pixels in the neighborhood are updated with the mean, while in edge regions the depth value of the target pixel is updated with a recalculated depth value. This resolves the inability, during depth map filtering, to obtain an obvious smoothing effect in planar regions while also preserving edge information.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
Fig. 1 is a schematic flowchart of a depth image filtering method according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating an original depth image according to an embodiment of the present invention;
FIG. 3 is a three-dimensional display diagram of the original depth map provided by the embodiment of the present invention in matlab;
FIG. 4 is a schematic diagram of a filtered depth map according to an embodiment of the present invention;
fig. 5 is a three-dimensional display diagram of the depth map after filtering provided by the embodiment of the present invention in matlab;
fig. 6 is a schematic structural diagram of a depth image filtering system according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first" and "second," and the like in the description and claims of the present invention and the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "comprising" and "having," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to those listed steps or elements, but may include steps or elements not listed.
In an embodiment of the present invention, a depth image filtering method is provided, and referring to fig. 1, the method includes the following steps:
s101, performing image expansion on the first image according to a preset sliding window to obtain a second image.
The first image denotes the original depth image, and the size of the second image is determined by the size of the first image and the size of the preset sliding window. To avoid losing edge information, the original depth image must be expanded on its upper, lower, left, and right sides by a preset number of pixels so that the expanded second image satisfies the processing requirements. The expansion amount follows from the configured size of the sliding window.
The embodiment of the application also provides a method for expanding the image, which can comprise the following steps:
s1011, expanding first numerical value pixels on the upper side, the lower side, the left side and the right side of the first image according to the size of a preset sliding window;
s1012, calculating to obtain the size of a second image according to the size of the first image and the size of the sliding window;
and S1013, assigning values to the pixel points in the second image according to the depth values of the pixel points of the first image and the size of a preset sliding window to obtain the assigned second image.
The preset sliding window is set to N × N, where N is a positive odd number greater than or equal to 3; N is set to an odd number so that the central pixel of the sliding window has the same number of pixels on each side. The first value is (N-1)/2, i.e., the original depth image is expanded by (N-1)/2 pixels on the upper, lower, left, and right sides, and the depth value of each expanded pixel equals the depth value of its adjacent pixel. Specifically, the method comprises the following steps:
if the width of the first image is W1 and its height is H1, the width of the second image is W2 = W1 + N - 1 and its height is H2 = H1 + N - 1;
and assigning the depth values of the pixels in the second image, wherein the assignment formula is shown as formula (1).
d2(i, j) = d1(min(max(i - a, 0), H1 - 1), min(max(j - a, 0), W1 - 1)), where a = (N-1)/2 (1)
where d2(i, j) denotes the depth value at row i, column j of the second image, d1(i, j) denotes the depth value at row i, column j of the first image, max(a, b) denotes the larger of a and b, and min(a, b) denotes the smaller of a and b.
It should be noted that the value of N must be chosen according to factors such as image resolution, filtering precision, and algorithm run time; for convenience of calculation it is preferably 3 to 11, although other values are not excluded and are selected according to the specific situation. For example, in one implementation of the invention the parameter values are W1 = 552, H1 = 664, N = 7, and (N-1)/2 = 3. In the remainder of this description, a pixel of the second image is denoted d(i, j), where d(i, j) denotes both the pixel at row i, column j of the second image and the depth value of that pixel.
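The expansion step S101 can be sketched in Python as follows. The dictionary-based image representation and the function name are illustrative, not from the patent; the index clamping implements the border replication described above, consistent with equation (1).

```python
def expand_image(d1, W1, H1, N):
    """Pad a W1 x H1 depth image by a = (N-1)/2 pixels per side,
    replicating the nearest border pixel. Images are dicts {(row, col): depth}."""
    a = (N - 1) // 2
    W2, H2 = W1 + N - 1, H1 + N - 1
    d2 = {}
    for i in range(H2):
        for j in range(W2):
            # clamp back into the first image (border replication)
            r = min(max(i - a, 0), H1 - 1)
            c = min(max(j - a, 0), W1 - 1)
            d2[(i, j)] = d1[(r, c)]
    return d2, W2, H2
```

For example, a 2 × 2 image with N = 3 becomes a 4 × 4 image whose border rows and columns repeat the nearest original pixel.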
S102, determining a target pixel in the second image, and performing attribute judgment on the target pixel based on a set dynamic threshold value to obtain attribute feature information of the target pixel.
After obtaining the extended image of the original depth image, it is necessary to perform attribute judgment on the pixel d (i, j) to be subjected to filtering processing in the second image, that is, to judge whether the pixel is located in an edge region or a plane region.
The embodiment of the invention also provides an attribute judgment method, which comprises the following steps:
s1021, determining a dynamic threshold matched with a target pixel of the second image according to reference parameters of the target pixel, wherein the reference parameters comprise a reference percentage and a reference depth value;
s1022, acquiring a neighborhood region of the target pixel;
s1023, calculating the gradient of each pixel in the neighborhood region to obtain a gradient set;
s1024, comparing each gradient value in the gradient set with the dynamic threshold, if one gradient value is larger than the dynamic threshold, judging that the target pixel is located in an edge area, and if not, judging that the target pixel is located in a plane area.
Specifically, the initial coordinates of the pixel d(i, j) to be processed in the second image are i = (N-1)/2 and j = (N-1)/2. The attribute judgment proceeds as follows:
firstly, a reference threshold value for judging the attribute of the pixel d (i, j) is obtained according to a dynamic threshold value calculation formula. The dynamic threshold calculation formula is shown in formula (2):
DT(i, j) = base × d(i, j) / ref_depth (2)
where DT(i, j) denotes the reference dynamic threshold used to judge the attribute of pixel d(i, j), base is a base reference percentage, and ref_depth is a reference depth value; for example, in one implementation the parameter values are base = 5 and ref_depth = 1000.
A neighborhood region of the pixel d (i, j) is obtained. Taking the pixel point d (i, j) as the center of the sliding window, and recording a set formed by all elements in the sliding window on the depth image as a neighborhood region of the pixel d (i, j), so that the neighborhood region comprises the target pixel point d (i, j);
The gradient between each pixel in the neighborhood region and d(i, j) is then computed and its absolute value taken; the set of all absolute gradient values is denoted the gradient set GS of d(i, j). The elements of GS are computed by formula (3):
grad(r,c)=abs(d(i+r,j+c)-d(i,j)) (3)
where grad(r, c) denotes the ((r + a) × N + (c + a))-th element of GS, with r = -a, -(a-1), …, (a-1), a; c = -a, -(a-1), …, (a-1), a; a = (N-1)/2; and abs denotes the absolute-value operation. For example, in one implementation the parameter values are N = 7 and a = 3.
Each gradient value in GS is compared with the reference dynamic threshold: if any gradient value is greater than the threshold, d(i, j) lies in an edge region of the depth image; otherwise, d(i, j) lies in a planar region.
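The attribute judgment of steps S1021 to S1024 can be sketched as follows. The dynamic-threshold expression `base * d[(i, j)] / ref_depth` is an assumption consistent with the stated parameters (base = 5, ref_depth = 1000) and with the requirement that the threshold scale with depth; the function name is illustrative.

```python
def classify_pixel(d, i, j, N, base=5.0, ref_depth=1000.0):
    """Return 'edge' or 'plane' for pixel (i, j) of the padded image d
    (a dict {(row, col): depth})."""
    a = (N - 1) // 2
    dt = base * d[(i, j)] / ref_depth          # assumed form of equation (2)
    for r in range(-a, a + 1):
        for c in range(-a, a + 1):
            grad = abs(d[(i + r, j + c)] - d[(i, j)])  # equation (3)
            if grad > dt:                      # any large gradient => edge
                return 'edge'
    return 'plane'
```

On a flat 1000 mm neighborhood the threshold is 5 and all gradients are 0, so the pixel is classified as planar; a single 100 mm jump in the window flips it to the edge class.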
S103, if the target pixel is located in the plane area, calculating the mean value of the pixel depth values of the neighborhood area of the target pixel, updating all the pixel depth values in the neighborhood area to the mean value, realizing the filtering processing of all the pixel points of the plane area, and obtaining the filtering result of the plane area;
s104, if the target pixel is located in the edge area, calculating weight values of all pixel points in the neighborhood area of the target pixel, calculating according to the weight values to obtain a target depth value of the target pixel, and updating the depth value of the target pixel to the target depth value to obtain a filtering result of the edge area;
and S105, obtaining a filtered depth image based on the filtering result of the plane area and the filtering result of the edge area.
After the attribute information of the target pixel d (i, j) is obtained, the target pixel needs to be subjected to filtering processing.
When the pixel d(i, j) is located in a planar region of the depth image, mean filtering is used: the mean of the depth values of all pixels in the window is computed and the depth values of all pixels in the window are updated with that mean. The specific implementation comprises the following steps:
calculating the sum of all pixels in the neighborhood region of d (i, j), wherein the calculation formula is shown as formula (4):
sum(i, j) = Σ_r Σ_c d(i + r, j + c) (4)
where sum(i, j) denotes the sum of the depth values of all elements in the neighborhood region of d(i, j), with r = -a, -(a-1), …, (a-1), a; c = -a, -(a-1), …, (a-1), a; a = (N-1)/2.
Calculating the mean value of all pixels in the d (i, j) neighborhood region, wherein the calculation formula is shown as formula (5):
mean(i,j)=sum(i,j)/(N*N) (5)
as shown in equation (6), the depth values of all pixels in the d (i, j) neighborhood region are updated:
d(i+r,j+c)=mean(i,j) (6)
where r = -a, -(a-1), …, (a-1), a; c = -a, -(a-1), …, (a-1), a; a = (N-1)/2.
After the filtering of the current target pixel d(i, j) is completed, i and j must be updated to move on to the next pixel. The sliding step of the window in a planar region is a, so let j = j + a. If j < W1, i is unchanged; otherwise let i = i + a, and if i < H1, let j = (N-1)/2 and filter the next pixel d(i, j) to be processed; otherwise the filtering of the second image is finished.
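Equations (4) through (6) for the planar branch can be sketched as a single function that mutates the padded image in place (the dict representation and function name are illustrative):

```python
def mean_filter_plane(d, i, j, N):
    """Replace every depth in the N x N window centered on (i, j)
    by the window mean; d is a dict {(row, col): depth}."""
    a = (N - 1) // 2
    # equation (4): sum over the neighborhood
    total = sum(d[(i + r, j + c)]
                for r in range(-a, a + 1) for c in range(-a, a + 1))
    mean = total / (N * N)                     # equation (5)
    for r in range(-a, a + 1):                 # equation (6): update all pixels
        for c in range(-a, a + 1):
            d[(i + r, j + c)] = mean
    return mean
```

Because the whole window is overwritten, the outer loop may then advance by a = (N-1)/2 pixels rather than 1, which is what reduces the filtering time in planar regions.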
When the pixel d (i, j) is located in the edge area of the depth image, calculating a new depth value by using an improved bilateral filtering method, and then updating the d (i, j) by using the new depth value, wherein the specific process is as follows:
the pixel points with the depth value of 0 in the depth map represent points with invalid depth, so that the pixel points with the depth value of 0 need to be removed when a new depth value is calculated. And calculating the weights of all pixel points in the d (i, j) neighborhood region according to an improved bilateral filtering method. The weight calculation formula is shown in equation (7):
[Equation (7): weight formula, available only as an image in the original; per the surrounding description it combines a position weight computed from the pixel interval counts (r, c) with a depth weight computed from the proportionally scaled depth gradient abs(d(i + r, j + c) - d(i, j)).]
where abs represents an absolute value calculation, and min (a, b) represents the minimum value of a and b.
Normalizing the weights of all pixel points in the d (i, j) neighborhood region, wherein the normalization calculation formula is shown as formula (8):
w_norm(r, c) = w(r, c) / Σ w(r, c) (8), where the sum runs over all pixels in the neighborhood region of d(i, j) whose depth value is not 0.
The depth value of d(i, j) is then recalculated by formula (9):
d(i, j) = Σ w_norm(r, c) × d(i + r, j + c) (9), summed over the same valid pixels.
the value of i, j is updated. In the edge region, in order to keep the sliding step of the detail information sliding window at the edge to be 1, let j equal to j + 1; if j < W1, i ═ i; otherwise, i is i + 1; if i is less than H1, let j equal to (N-1)/2, and re-filter the next pixel point d (i, j) to be processed; otherwise, it indicates that the second image has finished filtering.
For example, fig. 2 shows a schematic diagram of the original depth map: a 552 × 664 depth image before filtering whose foreground is a person with both arms open, whose background is a flat wall surface, and whose black parts denote pixels with invalid depth values. Fig. 3 is the three-dimensional rendering of fig. 2 in matlab; it shows that the depth map captured by the depth camera fluctuates strongly in the planar region, so the depth image needs filtering. The filtering method of this embodiment is applied with W1 = 552, H1 = 664, N = 7, (N-1)/2 = 3, and a = 3 for the first image. The depth image obtained by filtering the second image is shown in fig. 4, and fig. 5 is its three-dimensional rendering in matlab; compared with fig. 3, the planar area in fig. 5 is smoothed. Comparing fig. 4 with fig. 2, the detail information of fig. 2 is retained: the finger edges of fig. 2 are still clearly visible in fig. 4. Moreover, after filtering, some holes in fig. 2 are effectively filled; for example, some of the black holes inside the ellipse in fig. 2 have depth values at the corresponding positions in fig. 4.
In the depth image filtering method provided by this embodiment, the threshold for classifying pixels is determined dynamically from the pixel depth values of the depth image, which makes the method adaptive and robust across different depth values. Depth pixels are classified by the dynamic threshold and filtered differently by class: mean filtering in planar regions and improved bilateral filtering in edge regions, so that planar regions are smoothed while the edge details of the depth image are preserved. During planar filtering, updating all pixel depth values in the window allows the sliding-window step to be increased, which both propagates planar flatness and reduces filtering time. During edge filtering, the bilateral filter is improved in three ways: the pixel interval count replaces the positional distance when computing the position weight; the depth gradient is scaled down proportionally when computing the depth weight, making the weight distribution after the exponential transform more reasonable; and pixels with depth value 0 are excluded when computing the weights. The improved bilateral filtering reduces filtering time, handles depth map edges well, and can fill some of the holes in the depth image.
Referring to fig. 6, in an embodiment of the present application, there is further provided a depth image filtering system, including:
an expansion unit 10, configured to perform image expansion on a first image according to a preset sliding window to obtain a second image, where the first image represents an original depth image, and the size of the second image has a preset corresponding relationship with the size of the first image and the size of the preset sliding window;
an attribute determining unit 20, configured to determine a target pixel in the second image, and perform attribute determination on the target pixel based on a set dynamic threshold to obtain attribute feature information of the target pixel, where the attribute feature information includes that the target pixel is located in an edge region or that the target pixel is located in a plane region;
the plane area processing unit 30 is configured to calculate an average value of pixel depth values of a neighborhood area of the target pixel if the target pixel is located in the plane area, update all the pixel depth values in the neighborhood area to the average value, implement filtering processing on each pixel point of the plane area, and obtain a filtering result of the plane area;
the edge region processing unit 40 is configured to calculate weight values of all pixel points in a neighborhood region of the target pixel if the target pixel is located in an edge region, calculate a target depth value of the target pixel according to the weight values, update the depth value of the target pixel to the target depth value, implement filtering processing on the pixel points in the edge region, and obtain a filtering result of the edge region;
a generating unit 50, configured to obtain a filtered depth image based on the filtering result of the plane region and the filtering result of the edge region.
On the basis of the above embodiment, the expansion unit includes:
the expansion subunit is configured to expand a first numerical value of pixels on each of the upper, lower, left, and right sides of the first image according to the size of a preset sliding window, where the size of the preset sliding window is N × N, N is a positive odd number greater than or equal to 3, the first numerical value is (N-1)/2, and the depth value of each expanded pixel is equal to the depth value of its adjacent pixel;
a first calculating subunit, configured to calculate a size of a second image according to a size of a first image and a size of the sliding window, where if the first image has a width W1 and a height H1, a width W2 of the second image is W1+ N-1, and a height H2 of the second image is H1+ N-1;
and the assignment subunit is used for assigning the values of the pixels in the second image according to the depth values of the pixels in the first image and the size of a preset sliding window to obtain the assigned second image.
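The expansion described by the subunits above can be sketched as follows. This is a minimal Python/numpy illustration, not part of the patent: the function name is invented, and `np.pad` with `mode="edge"` is one way to realize "the depth value of each expanded pixel is equal to the depth value of the adjacent pixel".

```python
import numpy as np

def expand_image(first_image, n=7):
    """Pad the original depth image (the "first image") by (N-1)/2 pixels on
    each of the four sides, replicating the nearest edge depth value.
    N must be a positive odd number >= 3, as the expansion subunit requires."""
    assert n >= 3 and n % 2 == 1
    a = (n - 1) // 2
    # mode="edge" copies the adjacent pixel's depth value outward.
    return np.pad(first_image, a, mode="edge")

# Example with the dimensions from the description: H1 = 664, W1 = 552, N = 7.
first = np.zeros((664, 552), dtype=np.uint16)
second = expand_image(first, n=7)
```

The resulting size matches the first calculating subunit: W2 = W1 + N - 1 and H2 = H1 + N - 1, i.e. a 670 × 558 array for the example above.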
On the basis of the above embodiment, the attribute determination unit includes:
a determining subunit, configured to determine, according to reference parameters of a target pixel of the second image, a dynamic threshold matching the target pixel, where the reference parameters include a reference percentage and a reference depth value;
the first obtaining subunit is used for obtaining a neighborhood region of the target pixel;
the second calculating subunit is used for calculating the gradient of each pixel in the neighborhood region to obtain a gradient set;
and the comparison subunit is configured to compare each gradient value in the gradient set with the dynamic threshold, and if one gradient value is greater than the dynamic threshold, determine that the target pixel is located in an edge region, otherwise, determine that the target pixel is located in a plane region.
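The attribute judgment above can be illustrated with a short Python sketch. Note that the published document gives the DT(i, j) formula only as an image, so the particular form used here (the reference percentage applied to the depth value and scaled by the reference depth) is an assumption for illustration, as are the `base` and `ref_depth` defaults:

```python
import numpy as np

def classify_pixel(window, base=0.05, ref_depth=1000.0):
    """Decide whether the centre pixel of an N x N neighborhood lies in an
    edge region or a plane region. The exact DT(i, j) formula is shown only
    as an image in the publication; this linear form is an assumed stand-in."""
    c = window.shape[0] // 2
    center = float(window[c, c])
    dt = base * center * (center / ref_depth)   # assumed shape of DT(i, j)
    # Gradient set: absolute depth differences between each neighbor and the centre.
    gradients = np.abs(window.astype(float) - center)
    # One gradient above the dynamic threshold marks an edge; otherwise a plane.
    return "edge" if np.any(gradients > dt) else "plane"
```

With these defaults a flat 1000 mm window yields DT = 50, so depth steps larger than 50 mm are classified as edges.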
On the basis of the above embodiment, the plane processing unit includes:
a third computing subunit, configured to compute a sum of all pixels in a neighborhood region of the target pixel if the target pixel is located in a planar region;
the fourth calculating subunit is configured to calculate a mean value of all pixels in the neighborhood region according to the sum of all pixels;
a first updating subunit, configured to update the depth values of all pixels in the neighborhood region to the mean value;
the first processing subunit is used for updating the coordinate value of the target pixel and performing filtering processing on the updated target pixel;
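A minimal sketch of the plane-area processing described by these subunits, assuming the depth array has already been expanded so the window never leaves the image; the returned coordinate reflects the increased sliding-window step length made possible by updating every pixel in the window:

```python
import numpy as np

def plane_filter(depth, i, j, n=7):
    """Mean-filter the N x N neighborhood around plane pixel (i, j): every
    pixel depth value in the window is updated to the window mean, so the
    sliding window can then advance by N columns instead of 1. A sketch only;
    boundary handling is assumed to be provided by the image expansion."""
    a = (n - 1) // 2
    window = depth[i - a:i + a + 1, j - a:j + a + 1]  # view into depth
    window[...] = window.mean()   # update ALL depth values in the window
    return i, j + n               # next target pixel: step the window by N

d = np.full((9, 9), 100.0)
d[4, 4] = 107.0
plane_filter(d, 4, 4, n=7)        # the 7 x 7 window around (4, 4) becomes flat
```

Because the whole window is overwritten, one call filters N × N pixels at once, which is what lets the stride grow from 1 to N and reduces filtering time.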
the edge area processing unit includes:
a fifth calculating subunit, configured to calculate the weights of all pixel points in the neighborhood region of the target pixel if the target pixel is located in an edge region;
the normalization subunit is configured to perform normalization processing on the weights of the pixel points to obtain normalized weights;
a sixth calculating subunit, configured to calculate a target depth value of the target pixel based on the normalized weight value;
and the second updating subunit is configured to update the depth value of the target pixel to the target depth value, while the pixels in the edge region other than the target pixel retain their original depth values, so as to implement filtering processing on the pixel points in the edge region and obtain the filtering result of the edge region.
On the basis of the above embodiment, the edge area processing unit further includes:
an eliminating subunit, configured to eliminate from the weight calculation any pixel point in the neighborhood region whose depth value is 0.
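The edge-area processing, including the eliminating subunit's handling of zero depth values, can be sketched as follows. The Gaussian weighting form and the `sigma_s`, `sigma_r`, and `scale` values are illustrative assumptions; the description above specifies only that the position weight uses the pixel-interval count, that the depth gradient is reduced in proportion before the exponential transformation, and that zero-depth pixels are excluded:

```python
import numpy as np

def edge_filter(depth, i, j, n=7, sigma_s=2.0, sigma_r=30.0, scale=0.5):
    """Improved bilateral filtering for edge pixel (i, j): position weight
    from the pixel-interval count, range weight from a proportionally reduced
    depth gradient, zero-depth (hole) pixels eliminated, weights normalized
    by their sum. Parameter values are illustrative, not from the patent."""
    a = (n - 1) // 2
    center = float(depth[i, j])
    num = den = 0.0
    for di in range(-a, a + 1):
        for dj in range(-a, a + 1):
            d = float(depth[i + di, j + dj])
            if d == 0:                                # eliminating subunit
                continue
            intervals = abs(di) + abs(dj)             # pixel-interval count
            w_s = np.exp(-(intervals ** 2) / (2 * sigma_s ** 2))
            grad = scale * abs(d - center)            # reduced depth gradient
            w_r = np.exp(-(grad ** 2) / (2 * sigma_r ** 2))
            num += w_s * w_r * d
            den += w_s * w_r
    # Dividing by the weight sum is the normalization subunit's step.
    return num / den if den > 0 else center
```

Because a zero-valued centre pixel contributes nothing to the sums, a hole surrounded by valid neighbors still receives a weighted depth value, which is how the filtering fills part of the holes in the depth image.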
The invention provides a depth image filtering system in which an original depth image is expanded to obtain a second image, and the attribute of a target pixel in the second image is then judged on the basis of a dynamic threshold; the dynamic threshold gives the system adaptability and robustness across different depth values. Depth pixels are classified according to the dynamic threshold, and different filtering is applied to the different attribute classes: in a plane area the depth values of all pixels in the neighborhood are updated to the mean value, while in an edge area the depth value of the target pixel is updated to a recalculated depth value. This resolves the problem that, in depth map filtering, an obvious filtering effect on plane areas and preservation of edge information could not previously be achieved simultaneously.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

1. A method for filtering a depth image, the method comprising:
performing image expansion on a first image according to a preset sliding window to obtain a second image, wherein the first image represents an original depth image, and the size of the second image has a preset corresponding relation with the size of the first image and the size of the preset sliding window;
determining a target pixel in the second image, and performing attribute judgment on the target pixel based on a set dynamic threshold to obtain attribute feature information of the target pixel, wherein the attribute feature information comprises that the target pixel is located in an edge area or the target pixel is located in a plane area; the determining a target pixel in the second image, and performing attribute judgment on the target pixel based on a set dynamic threshold to obtain attribute feature information of the target pixel includes:
determining a dynamic threshold value matched with a target pixel of the second image according to reference parameters of the target pixel, wherein the reference parameters comprise a reference percentage and a reference depth value; the calculation formula of the dynamic threshold is as follows:
[formula shown as image FDA0003399840110000011 in the original publication]
wherein, DT (i, j) represents a dynamic threshold value of the attribute judgment of the pixel d (i, j), base represents a reference percentage, and ref _ depth represents a reference depth value;
acquiring a neighborhood region of the target pixel;
calculating the gradient of each pixel in the neighborhood region to obtain a gradient set;
comparing each gradient value in the gradient set with the dynamic threshold, if one gradient value is larger than the dynamic threshold, judging that the target pixel is positioned in an edge area, otherwise, judging that the target pixel is positioned in a plane area;
if the target pixel is located in the plane area, calculating the mean value of the pixel depth values of the neighborhood area of the target pixel, updating all the pixel depth values in the neighborhood area to the mean value, realizing the filtering processing of all the pixel points of the plane area, and obtaining the filtering result of the plane area;
if the target pixel is located in the edge region, calculating weight values of all pixel points in a neighborhood region of the target pixel, calculating a target depth value of the target pixel according to the weight values, updating the depth value of the target pixel to the target depth value, realizing filtering processing on the pixel points in the edge region, and obtaining a filtering result of the edge region;
and obtaining a filtered depth image based on the filtering result of the plane area and the filtering result of the edge area.
2. The method according to claim 1, wherein the image expanding the first image according to the preset sliding window to obtain the second image comprises:
expanding a first numerical value of pixels on the upper side, the lower side, the left side and the right side of the first image respectively according to the size of a preset sliding window, wherein the size of the preset sliding window is N x N, N is a positive odd number which is more than or equal to 3, the first numerical value is (N-1)/2, and the depth value of each expanded pixel is equal to the depth value of the pixel adjacent to the expanded pixel;
calculating and obtaining the size of a second image according to the size of a first image and the size of the sliding window, wherein if the width of the first image is W1 and the height of the first image is H1, the width W2 of the second image is W1+ N-1, and the height H2 of the second image is H1+ N-1;
and assigning values to the pixel points in the second image according to the depth values of the pixel points of the first image and the size of a preset sliding window to obtain the assigned second image.
3. The method of claim 1, wherein if the target pixel is located in a planar region, calculating a mean value of pixel depth values of a neighborhood region of the target pixel, and updating all pixel depth values in the neighborhood region to the mean value comprises:
if the target pixel is located in the plane area, calculating the sum of all pixels in the neighborhood area of the target pixel;
calculating to obtain the mean value of all pixels in the neighborhood region according to the sum of all pixels;
updating the depth values of all pixels in the neighborhood region to the mean value;
and updating the coordinate value of the target pixel, and filtering the updated target pixel.
4. The method according to claim 1, wherein if the target pixel is located in an edge region, calculating weight values of all pixel points in a neighborhood region of the target pixel, calculating a target depth value of the target pixel according to the weight values, updating the depth value of the target pixel to the target depth value, and implementing filtering processing on the pixel points in the edge region to obtain a filtering result of the edge region, includes:
if the target pixel is located in the edge region, calculating the weights of all pixel points in the neighborhood region of the target pixel;
normalizing the weight of the pixel point to obtain a normalized weight value;
calculating a target depth value of the target pixel based on the normalized weight value;
and updating the depth value of the target pixel to the target depth value, while the pixels in the edge region other than the target pixel retain their original depth values, so as to implement the filtering processing of the pixel points in the edge region and obtain the filtering result of the edge region.
5. The method of claim 4, further comprising:
and if the depth value of the pixel point in the neighborhood region is 0, eliminating the pixel point with the depth value of 0.
6. A system for filtering a depth image, the system comprising:
the image expansion unit is used for performing image expansion on a first image according to a preset sliding window to obtain a second image, wherein the first image represents an original depth image, and the size of the second image has a preset corresponding relation with the size of the first image and the size of the preset sliding window;
the attribute judging unit is used for determining a target pixel in the second image and judging the attribute of the target pixel based on a set dynamic threshold value to obtain attribute feature information of the target pixel, wherein the attribute feature information comprises that the target pixel is positioned in an edge area or the target pixel is positioned in a plane area; the attribute judging unit includes:
a determining subunit, configured to determine, according to reference parameters of a target pixel of the second image, a dynamic threshold matching the target pixel, where the reference parameters include a reference percentage and a reference depth value; the calculation formula of the dynamic threshold is as follows:
[formula shown as image FDA0003399840110000031 in the original publication]
wherein, DT (i, j) represents a dynamic threshold value of the attribute judgment of the pixel d (i, j), base represents a reference percentage, and ref _ depth represents a reference depth value;
the first obtaining subunit is used for obtaining a neighborhood region of the target pixel;
the second calculating subunit is used for calculating the gradient of each pixel in the neighborhood region to obtain a gradient set;
a comparison subunit, configured to compare each gradient value in the gradient set with the dynamic threshold, and if one gradient value is greater than the dynamic threshold, determine that the target pixel is located in an edge region, otherwise, determine that the target pixel is located in a plane region;
the plane area processing unit is used for calculating the mean value of the pixel depth values of the neighborhood area of the target pixel and updating all the pixel depth values in the neighborhood area to the mean value if the target pixel is located in the plane area, so as to realize the filtering processing of each pixel point of the plane area and obtain the filtering result of the plane area;
the edge area processing unit is used for calculating weight values of all pixel points in a neighborhood area of the target pixel if the target pixel is located in the edge area, calculating a target depth value of the target pixel according to the weight values, updating the depth value of the target pixel to the target depth value, realizing filtering processing on the pixel points in the edge area, and obtaining a filtering result of the edge area;
and the generating unit is used for obtaining a filtered depth image based on the filtering result of the plane area and the filtering result of the edge area.
7. The system of claim 6, wherein the image expansion unit comprises:
the expansion subunit is configured to expand a first numerical value of pixels on each of the upper, lower, left, and right sides of the first image according to the size of a preset sliding window, where the size of the preset sliding window is N × N, N is a positive odd number greater than or equal to 3, the first numerical value is (N-1)/2, and the depth value of each expanded pixel is equal to the depth value of its adjacent pixel;
a first calculating subunit, configured to calculate a size of a second image according to a size of a first image and a size of the sliding window, where if the first image has a width W1 and a height H1, a width W2 of the second image is W1+ N-1, and a height H2 of the second image is H1+ N-1;
and the assignment subunit is used for assigning the values of the pixels in the second image according to the depth values of the pixels in the first image and the size of a preset sliding window to obtain the assigned second image.
8. The system of claim 6, wherein the planar area processing unit comprises:
a third computing subunit, configured to compute a sum of all pixels in a neighborhood region of the target pixel if the target pixel is located in a planar region;
the fourth calculating subunit is configured to calculate a mean value of all pixels in the neighborhood region according to the sum of all pixels;
a first updating subunit, configured to update the depth values of all pixels in the neighborhood region to the mean value;
the first processing subunit is used for updating the coordinate value of the target pixel and performing filtering processing on the updated target pixel;
the edge area processing unit includes:
a fifth calculating subunit, configured to calculate the weights of all pixel points in the neighborhood region of the target pixel if the target pixel is located in an edge region;
the normalization subunit is configured to perform normalization processing on the weights of the pixel points to obtain normalized weights;
a sixth calculating subunit, configured to calculate a target depth value of the target pixel based on the normalized weight value;
and the second updating subunit is configured to update the depth value of the target pixel to the target depth value, while the pixels in the edge region other than the target pixel retain their original depth values, so as to implement filtering processing on the pixel points in the edge region and obtain the filtering result of the edge region.
CN201910789404.6A 2019-08-26 2019-08-26 Depth image filtering method and system Active CN110490829B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910789404.6A CN110490829B (en) 2019-08-26 2019-08-26 Depth image filtering method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910789404.6A CN110490829B (en) 2019-08-26 2019-08-26 Depth image filtering method and system

Publications (2)

Publication Number Publication Date
CN110490829A CN110490829A (en) 2019-11-22
CN110490829B true CN110490829B (en) 2022-03-15

Family

ID=68553945

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910789404.6A Active CN110490829B (en) 2019-08-26 2019-08-26 Depth image filtering method and system

Country Status (1)

Country Link
CN (1) CN110490829B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111323756B (en) * 2019-12-30 2022-05-13 北京海兰信数据科技股份有限公司 Marine radar target detection method and device based on deep learning
CN111415310B (en) * 2020-03-26 2023-06-30 Oppo广东移动通信有限公司 Image processing method and device and storage medium
CN111986124A (en) * 2020-09-07 2020-11-24 北京凌云光技术集团有限责任公司 Filling method and device for missing pixels of depth image
CN114066779B (en) * 2022-01-13 2022-05-06 杭州蓝芯科技有限公司 Depth map filtering method and device, electronic equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103581633A (en) * 2012-07-24 2014-02-12 索尼公司 Image processing device, image processing method, program, and imaging apparatus
CN103854257A (en) * 2012-12-07 2014-06-11 山东财经大学 Depth image enhancement method based on self-adaptation trilateral filtering
CN104683783A (en) * 2015-01-08 2015-06-03 电子科技大学 Self-adaptive depth map filtering method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20120003147A (en) * 2010-07-02 2012-01-10 삼성전자주식회사 Depth map coding and decoding apparatus using loop-filter

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103581633A (en) * 2012-07-24 2014-02-12 索尼公司 Image processing device, image processing method, program, and imaging apparatus
CN103854257A (en) * 2012-12-07 2014-06-11 山东财经大学 Depth image enhancement method based on self-adaptation trilateral filtering
CN104683783A (en) * 2015-01-08 2015-06-03 电子科技大学 Self-adaptive depth map filtering method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
保留边缘信息和特定目标的高密脉冲噪声滤波算法;赵伟舟 等;《西北师范大学学报(自然科学版)》;20141231;第50卷(第4期);第46页 *
基于时空联合滤波的高清视频降噪算法;陈潇红 等;《浙江大学学报(工学版)》;20130531;第47卷(第5期);第854-855页 *
基于边缘检测的Kinect 深度图像去噪算法;邹星星 等;《湖南工业大学学报》;20131130;第27卷(第6期);第37-39页 *

Also Published As

Publication number Publication date
CN110490829A (en) 2019-11-22

Similar Documents

Publication Publication Date Title
CN110490829B (en) Depth image filtering method and system
US8818077B2 (en) Stereo image matching apparatus and method
CN107636727A (en) Target detection method and device
KR100859210B1 (en) Human being detection apparatus, method of detecting human being, and computer readable recording medium storing human being detecting program
CN107578430B (en) Stereo matching method based on self-adaptive weight and local entropy
CN108446694B (en) Target detection method and device
CN105069804B (en) Threedimensional model scan rebuilding method based on smart mobile phone
CN111402170B (en) Image enhancement method, device, terminal and computer readable storage medium
CN104899853A (en) Image region dividing method and device
CN103826032B (en) Depth map post-processing method
WO2014073670A1 (en) Image processing method and image processing device
JP6307873B2 (en) Object line detection apparatus, method, and program
Kaewaramsri et al. Improved triangle box-counting method for fractal dimension estimation
US20150302595A1 (en) Method and apparatus for generating depth information
CN111357034A (en) Point cloud generation method, system and computer storage medium
CN106991697A (en) The ambition ratio measuring method of chest digitized video
CN111160309A (en) Image processing method and related equipment
JP4631973B2 (en) Image processing apparatus, image processing apparatus control method, and image processing apparatus control program
KR102158390B1 (en) Method and apparatus for image processing
US11256949B2 (en) Guided sparse feature matching via coarsely defined dense matches
CN111161288B (en) Image processing method and device
JP2013073598A (en) Image processing device, image processing method, and program
CN108182666A (en) A kind of parallax correction method, apparatus and terminal
JP6295556B2 (en) Object dividing apparatus and method
CN110717910B (en) CT image target detection method based on convolutional neural network and CT scanner

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant