WO2021004260A1 - Depth map processing method and device - Google Patents

Depth map processing method and device

Info

Publication number
WO2021004260A1
WO2021004260A1 (PCT/CN2020/097460)
Authority
WO
WIPO (PCT)
Prior art keywords
value
pixel
depth
image frame
difference
Prior art date
Application number
PCT/CN2020/097460
Other languages
English (en)
French (fr)
Inventor
康健
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司
Publication of WO2021004260A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/70 - Denoising; Smoothing
    • G06T 7/00 - Image analysis
    • G06T 7/50 - Depth or shape recovery
    • G06T 7/55 - Depth or shape recovery from multiple images
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10028 - Range image; Depth image; 3D point clouds

Definitions

  • This application relates to the field of image processing technology, and in particular to a depth map processing method and device.
  • Generally, when measuring the depth of an object based on a time-of-flight (ToF) sensor, the ToF sensor determines the distance between the sensor and the object by calculating the flight time of a pulse signal, and then determines the depth value of the object based on that distance.
  • In the related art, all depth image frames are traversed for filtering, which results in a large amount of calculation for depth image frame filtering. This application aims to solve this problem at least to some extent.
  • To this end, the first purpose of the present application is to propose a depth map processing method that determines the sampling interval of image frames based on how smoothly the depth values change, balancing the processing accuracy and the resource consumption of depth image frame processing.
  • The second purpose of this application is to provide a depth map processing device.
  • The third purpose of this application is to propose an electronic device.
  • The fourth purpose of this application is to provide a non-transitory computer-readable storage medium.
  • To achieve the above purposes, an embodiment of the first aspect of the present application proposes a depth map processing method, including the following steps: acquiring a first depth image frame and a second depth image frame adjacent to the first depth image frame, where each pixel in the first depth image frame and the second depth image frame contains a depth value, and each first pixel in the first depth image frame has a corresponding second pixel in the second depth image frame; determining the first content value of each first pixel and the second content value of the corresponding second pixel, and obtaining the content difference between the first content value and the second content value; determining credible pixels in the first depth image frame according to the content difference, and determining the area of the region where the credible pixels are located; and, when the area of the region meets a preset condition, adjusting the acquisition interval of depth image frames according to the area of the region, and obtaining the depth image frame to be processed according to the acquisition interval.
  • An embodiment of the second aspect of the present application provides a depth map processing device, including: a first acquisition module, configured to acquire a first depth image frame and a second depth image frame adjacent to the first depth image frame, where each pixel in the first depth image frame and the second depth image frame contains a depth value, and each first pixel in the first depth image frame has a corresponding second pixel in the second depth image frame; a second acquisition module, configured to determine the first content value of each first pixel and the second content value of the corresponding second pixel, and to acquire the content difference between the first content value and the second content value; a first determination module, configured to determine credible pixels in the first depth image frame according to the content difference, and to determine the area of the region where the credible pixels are located; and a processing module, configured to, when the area of the region meets a preset condition, adjust the acquisition interval of depth image frames according to the area of the region and acquire the depth image frame to be processed according to the acquisition interval.
  • An embodiment of the third aspect of the present application proposes an electronic device, including a memory, a processor, and a computer program stored on the memory and capable of running on the processor. When the processor executes the computer program, the depth map processing method described in the embodiment of the first aspect is implemented.
  • An embodiment of the fourth aspect of the present application proposes a non-transitory computer-readable storage medium on which a computer program is stored. When the computer program is executed by a processor, the depth map processing method described in the embodiment of the first aspect is implemented.
  • According to the content difference between pixels of adjacent frames, the depth image frame is divided into a credible region and a non-credible region, and smoothing is performed region by region. This effectively makes the depth values of gently changing regions smoother in the time dimension and ensures that the depth value error after frame filtering has temporal consistency, while regions of rapid depth change keep their original high dynamics. At the same time, the acquisition interval is adaptively expanded, which reduces the amount of calculation required for filtering the depth image frames.
  • Fig. 1 is a flowchart of a depth map processing method according to an embodiment of the present application;
  • Fig. 2 is a schematic flowchart of a TOF-based depth map processing method provided by an embodiment of the present application;
  • Fig. 3 is a schematic flowchart of a method for calculating original depth values according to an embodiment of the present application;
  • Fig. 4 is a schematic flowchart of a temporal-consistency filtering method according to an embodiment of the present application;
  • Fig. 5 is a schematic structural diagram of a depth map processing device according to an embodiment of the present application;
  • Fig. 6 is a schematic structural diagram of a depth map processing device according to another embodiment of the present application.
  • An embodiment of the present application discloses a depth map processing method.
  • The depth map processing method includes: acquiring a first depth image frame and a second depth image frame adjacent to the first depth image frame, where each pixel in the first depth image frame and the second depth image frame contains a depth value, and each first pixel in the first depth image frame has a corresponding second pixel in the second depth image frame; determining the first content value of each first pixel and the second content value of the corresponding second pixel, and obtaining the content difference between the first content value and the second content value; determining credible pixels in the first depth image frame according to the content difference, and determining the area of the region where the credible pixels are located; and, when the area of the region meets a preset condition, adjusting the acquisition interval of depth image frames according to the area of the region, and obtaining the depth image frame to be processed according to the acquisition interval.
  • In some embodiments, adjusting the acquisition interval of depth image frames according to the area of the region when the area meets the preset condition includes: obtaining the area ratio of the region where the credible pixels are located to the total area of the first depth image frame; judging whether the area ratio is greater than a preset threshold; if it is greater than the preset threshold, obtaining the difference between the area ratio and the preset threshold, determining an acquisition interval increase value according to the difference, and determining the acquisition interval according to the sum of the acquisition interval increase value and the initial sampling interval; if it is less than or equal to the preset threshold, obtaining the difference between the preset threshold and the area ratio, determining an acquisition interval reduction value according to the difference, and determining the acquisition interval according to the difference between the initial sampling interval and the acquisition interval reduction value.
  • In some embodiments, after the credible pixels are determined in the first depth image frame according to the content difference, the method further includes: determining a smoothing factor corresponding to the credible pixel; and filtering the depth value of the credible pixel according to the smoothing factor and the depth value of the second pixel corresponding to the credible pixel in the second depth image frame.
  • In some embodiments, filtering the depth value of the credible pixel according to the smoothing factor and the depth value of the pixel corresponding to the credible pixel in the second depth image frame includes: determining the depth difference between the depth value of each credible pixel and the depth value of the corresponding second pixel; obtaining the first gray value of each credible pixel and the second gray value of the corresponding second pixel, and determining the gray difference between the first gray value and the second gray value; obtaining a first weight coefficient corresponding to the depth difference, and determining a second weight coefficient corresponding to the gray difference according to the first weight coefficient; calculating, according to a preset calculation formula, the first depth value of each credible pixel, the first weight coefficient, the second weight coefficient, the gray difference, the depth difference, and the smoothing factor to obtain a first smoothing value; and filtering the depth value of the credible pixel according to the first smoothing value and the depth value of the pixel corresponding to the credible pixel in the second depth image frame.
  • In some embodiments, filtering the depth value of the credible pixel according to the first smoothing value and the depth value of the pixel corresponding to the credible pixel in the second depth image frame includes: determining a second smoothing value according to the first smoothing value, where the first smoothing value is inversely proportional to the second smoothing value; obtaining the first product of the second smoothing value and the depth value of the credible pixel; obtaining the second product of the first smoothing value and the depth value of the pixel corresponding to the credible pixel in the second depth image frame; and filtering the depth value of the credible pixel according to the sum of the first product and the second product.
  • In some embodiments, the preset calculation formula is formula (1) below (reproduced as an image in the original publication), where w1 is the first smoothing value, s is the smoothing factor, diff1 is the depth difference, diff2 is the gray difference, d is the first weight coefficient, 1-d is the second weight coefficient, and σ is the product of the depth value of each credible pixel and the preset standard error.
  • Referring to Fig. 5, an embodiment of the present application also discloses a depth map processing device.
  • The depth map processing device includes a first acquisition module 10, a second acquisition module 20, a first determination module 30, and a processing module 40.
  • The first acquisition module 10 is configured to acquire a first depth image frame and a second depth image frame adjacent to the first depth image frame, where each pixel in the first depth image frame and the second depth image frame contains a depth value, and each first pixel in the first depth image frame has a corresponding second pixel in the second depth image frame.
  • The second acquisition module 20 is configured to determine the first content value of each first pixel and the second content value of the corresponding second pixel, and to acquire the content difference between the first content value and the second content value.
  • The first determination module 30 is configured to determine credible pixels in the first depth image frame according to the content difference, and to determine the area of the region where the credible pixels are located.
  • The processing module 40 is configured to, when the area of the region meets the preset condition, adjust the acquisition interval of depth image frames according to the area of the region and acquire the depth image frame to be processed according to the acquisition interval.
  • Referring to Fig. 6, in some embodiments, the depth map processing apparatus further includes a third acquisition module 50.
  • The processing module 40 includes a judging unit 41, a first determining unit 42, and a second determining unit 43.
  • The third acquisition module 50 is configured to acquire the area ratio of the region where the credible pixels are located to the total area of the first depth image frame.
  • The judging unit 41 is used to judge whether the area ratio is greater than a preset threshold.
  • The first determining unit 42 is configured to, when the area ratio is greater than the preset threshold, obtain the difference between the area ratio and the preset threshold, determine the acquisition interval increase value according to the difference, and determine the acquisition interval according to the sum of the acquisition interval increase value and the initial sampling interval.
  • The second determining unit 43 is configured to, when the area ratio is less than or equal to the preset threshold, obtain the difference between the preset threshold and the area ratio, determine the acquisition interval reduction value according to the difference, and determine the acquisition interval according to the difference between the initial sampling interval and the acquisition interval reduction value.
  • An embodiment of the present application also discloses an electronic device.
  • The electronic device includes a memory, a processor, and a computer program stored on the memory and capable of running on the processor.
  • When the processor executes the computer program, the following depth map processing method is implemented: acquiring a first depth image frame and a second depth image frame adjacent to the first depth image frame, where each pixel in the first depth image frame and the second depth image frame contains a depth value, and each first pixel in the first depth image frame has a corresponding second pixel in the second depth image frame; determining the first content value of each first pixel and the second content value of the corresponding second pixel, and obtaining the content difference between the first content value and the second content value; determining credible pixels in the first depth image frame according to the content difference, and determining the area of the region where the credible pixels are located; and, when the area of the region meets a preset condition, adjusting the acquisition interval of depth image frames according to the area of the region, and obtaining the depth image frame to be processed according to the acquisition interval.
  • In some embodiments, when the area of the region meets the preset condition, the processor, when executing the computer program, further implements the following depth map processing method: obtaining the area ratio of the region where the credible pixels are located to the total area of the first depth image frame; judging whether the area ratio is greater than a preset threshold; if it is greater than the preset threshold, obtaining the difference between the area ratio and the preset threshold, determining the acquisition interval increase value according to the difference, and determining the acquisition interval according to the sum of the acquisition interval increase value and the initial sampling interval; if it is less than or equal to the preset threshold, obtaining the difference between the preset threshold and the area ratio, determining the acquisition interval reduction value according to the difference, and determining the acquisition interval according to the difference between the initial sampling interval and the acquisition interval reduction value.
  • In some embodiments, the processor, when executing the computer program, further implements the following depth map processing method: determining a smoothing factor corresponding to the credible pixel; and filtering the depth value of the credible pixel according to the smoothing factor and the depth value of the second pixel corresponding to the credible pixel in the second depth image frame.
  • In some embodiments, the processor, when executing the computer program, further implements the following depth map processing method: determining the depth difference between the depth value of each credible pixel and the depth value of the corresponding second pixel; obtaining the first gray value of each credible pixel and the second gray value of the corresponding second pixel, and determining the gray difference between the first gray value and the second gray value; obtaining a first weight coefficient corresponding to the depth difference, and determining a second weight coefficient corresponding to the gray difference according to the first weight coefficient; calculating, according to the preset calculation formula, the first depth value of each credible pixel, the first weight coefficient, the second weight coefficient, the gray difference, the depth difference, and the smoothing factor to obtain the first smoothing value; and filtering the depth value of the credible pixel according to the first smoothing value and the depth value of the pixel corresponding to the credible pixel in the second depth image frame.
  • In some embodiments, the processor, when executing the computer program, further implements the following depth map processing method: determining a second smoothing value according to the first smoothing value, where the first smoothing value is inversely proportional to the second smoothing value; obtaining the first product of the second smoothing value and the depth value of the credible pixel; obtaining the second product of the first smoothing value and the depth value of the pixel corresponding to the credible pixel in the second depth image frame; and filtering the depth value of the credible pixel according to the sum of the first product and the second product.
  • In some embodiments, the preset calculation formula is formula (1) (reproduced as an image in the original publication), where w1 is the first smoothing value, s is the smoothing factor, diff1 is the depth difference, diff2 is the gray difference, d is the first weight coefficient, 1-d is the second weight coefficient, and σ is the product of the depth value of each credible pixel and the preset standard error.
  • An embodiment of the present application also discloses a non-transitory computer-readable storage medium on which a computer program is stored.
  • When the computer program is executed by a processor, the following depth map processing method is implemented: acquiring a first depth image frame and a second depth image frame adjacent to the first depth image frame, where each pixel in the first depth image frame and the second depth image frame contains a depth value, and each first pixel in the first depth image frame has a corresponding second pixel in the second depth image frame; determining the first content value of each first pixel and the second content value of the corresponding second pixel, and obtaining the content difference between the first content value and the second content value; determining credible pixels in the first depth image frame according to the content difference, and determining the area of the region where the credible pixels are located; and, when the area of the region meets a preset condition, adjusting the acquisition interval of depth image frames according to the area of the region, and obtaining the depth image frame to be processed according to the acquisition interval.
  • In some embodiments, when the area of the region meets the preset condition, the computer program, when executed by the processor, further implements the following depth map processing method: obtaining the area ratio of the region where the credible pixels are located to the total area of the first depth image frame; judging whether the area ratio is greater than a preset threshold; if it is greater than the preset threshold, obtaining the difference between the area ratio and the preset threshold, determining the acquisition interval increase value according to the difference, and determining the acquisition interval according to the sum of the acquisition interval increase value and the initial sampling interval; if it is less than or equal to the preset threshold, obtaining the difference between the preset threshold and the area ratio, determining the acquisition interval reduction value according to the difference, and determining the acquisition interval according to the difference between the initial sampling interval and the acquisition interval reduction value.
  • In some embodiments, the computer program, when executed by the processor, further implements the following depth map processing method: determining a smoothing factor corresponding to the credible pixel; and filtering the depth value of the credible pixel according to the smoothing factor and the depth value of the second pixel corresponding to the credible pixel in the second depth image frame.
  • In some embodiments, the computer program, when executed by the processor, further implements the following depth map processing method: determining the depth difference between the depth value of each credible pixel and the depth value of the corresponding second pixel; obtaining the first gray value of each credible pixel and the second gray value of the corresponding second pixel, and determining the gray difference between the first gray value and the second gray value; obtaining a first weight coefficient corresponding to the depth difference, and determining a second weight coefficient corresponding to the gray difference according to the first weight coefficient; calculating, according to the preset calculation formula, the first depth value of each credible pixel, the first weight coefficient, the second weight coefficient, the gray difference, the depth difference, and the smoothing factor to obtain the first smoothing value; and filtering the depth value of the credible pixel according to the first smoothing value and the depth value of the pixel corresponding to the credible pixel in the second depth image frame.
  • In some embodiments, the computer program, when executed by the processor, further implements the following depth map processing method: determining a second smoothing value according to the first smoothing value, where the first smoothing value is inversely proportional to the second smoothing value; obtaining the first product of the second smoothing value and the depth value of the credible pixel; obtaining the second product of the first smoothing value and the depth value of the pixel corresponding to the credible pixel in the second depth image frame; and filtering the depth value of the credible pixel according to the sum of the first product and the second product.
  • In some embodiments, the preset calculation formula is formula (1) (reproduced as an image in the original publication), where w1 is the first smoothing value, s is the smoothing factor, diff1 is the depth difference, diff2 is the gray difference, d is the first weight coefficient, 1-d is the second weight coefficient, and σ is the product of the depth value of each credible pixel and the preset standard error.
  • The depth map processing method and device of the embodiments of the present application are described below with reference to the accompanying drawings. The depth values in the depth maps of the embodiments of the present application are obtained based on a TOF sensor.
  • FIG. 1 is a flowchart of a depth map processing method according to an embodiment of the present application. As shown in FIG. 1, the depth map processing method includes the following steps:
  • Step 101: Obtain a first depth image frame and a second depth image frame adjacent to the first depth image frame, where each pixel in the first depth image frame and the second depth image frame contains a depth value, and each first pixel in the first depth image frame has a corresponding second pixel in the second depth image frame.
  • It should be noted that the second depth image frame is adjacent to the first depth image frame and may be the previous frame before the first depth image frame or the next frame after it, depending on the specific application requirements.
  • Of course, in the same scene, the reference direction of the image frames is fixed: for example, every frame refers to the adjacent previous frame, or every frame refers to the adjacent next frame, for smoothing the depth value error.
  • In addition, each first pixel in the first depth image frame has a corresponding second pixel in the second depth image frame. It should be emphasized that this correspondence between the first pixel and the second pixel is a correspondence in pixel position.
  • Step 102: Determine the first content value of each first pixel and the second content value of the corresponding second pixel, and obtain the content difference between the first content value and the second content value.
  • It should be noted that the content value contains different parameters in different application scenarios, as in the following examples.
  • A first example: the content value is the confidence of the depth value, where the confidence of the depth value represents the energy of the depth value at that point. It can be understood that the closer the confidences of the depth values of the first pixel and the second pixel are, the more likely the first pixel and the second pixel correspond to the same point of the object. Therefore, the difference between the confidence of the first pixel and the confidence of the second pixel can be calculated as the content difference.
  • A second example: the content value is the gray value of the pixel. It can be understood that the closer the gray values of the first pixel and the second pixel are, the more likely the first pixel and the second pixel correspond to the same point of the object. Therefore, the gray values of the first pixel and the second pixel can be calculated from their color pixel values, and the difference between the two gray values is used as the content difference.
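  • As an illustrative sketch of the two examples above (not part of the original disclosure), the following computes a per-pixel content difference either from confidence maps or from gray values derived from color frames; the array names and the RGB luminance weights are assumptions made for the illustration.

```python
import numpy as np

def content_difference(first, second, mode="confidence"):
    """Per-pixel content difference between two aligned frames.

    In "confidence" mode, `first` and `second` are H x W arrays holding the
    depth-value confidence of each pixel; in "gray" mode they are H x W x 3
    color images from which a gray value is derived first.
    """
    if mode == "confidence":
        return np.abs(first.astype(np.float32) - second.astype(np.float32))
    # "gray" mode: derive gray values from the color pixel values (assumed weights)
    weights = np.array([0.299, 0.587, 0.114], dtype=np.float32)
    gray1 = first.astype(np.float32) @ weights
    gray2 = second.astype(np.float32) @ weights
    return np.abs(gray1 - gray2)
```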
  • Step 103: Determine credible pixels in the first depth image frame according to the content difference, and determine the area of the region where the credible pixels are located.
  • Specifically, as analyzed above, the smaller the content difference, the more likely the first pixel and the second pixel correspond to the same point of the object. Therefore, credible pixels are determined in the first depth image frame according to the content difference, where the credible pixels correspond to pixels with smaller content differences.
  • Further, after the credible pixels are determined, the area of the region where they are located is determined. For example, all credible pixels in the first depth image frame can be determined, and the area is determined based on the region composed of all the credible pixels.
  • Step 104: When the area of the region meets the preset condition, adjust the acquisition interval of depth image frames according to the area of the region, and obtain the depth image frame to be processed according to the acquisition interval.
  • The above preset condition is used to determine, from the area of the region of credible pixels, how similar the objects corresponding to the second image frame and the first image frame are. The preset condition may involve comparing the absolute value of the area, or comparing an area ratio. In this example, the area ratio of the region where the credible pixels are located to the total area of the first depth image frame is obtained.
  • Specifically, the area ratio of the region of credible pixels to the total area of the first depth image frame represents the number of pixels that, in the current first depth image frame and the second depth image frame, correspond to the same points of the object. Obviously, the larger the area ratio, the more likely it is that the second image frame and the first image frame actually capture depth information of the same positions of the object.
  • For example, the total area of the first depth image frame and the area of the region formed by the credible pixels are acquired, and the area ratio is determined from the ratio of the two.
  • As another example, the total number of pixels in the first depth image frame and the number of credible pixels are acquired, and the area ratio is determined from the ratio of the number of credible pixels to the total number.
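  • A minimal sketch of the second variant just described (counting pixels rather than measuring geometric area); the difference threshold is an assumed parameter and not a value taken from the original disclosure.

```python
import numpy as np

def credible_area_ratio(content_diff, diff_threshold=5.0):
    """Mark pixels whose content difference is small as credible, and return
    the mask together with the ratio of the number of credible pixels to the
    total number of pixels in the first depth image frame."""
    credible_mask = content_diff < diff_threshold   # smaller difference -> credible
    area_ratio = credible_mask.sum() / credible_mask.size
    return credible_mask, area_ratio
```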
  • As analyzed above, the area ratio indicates the similarity of the objects corresponding to the second image frame and the first image frame: the larger the area ratio, the more likely the second image frame and the first image frame actually capture depth information of the same positions of the object, so the depth information contained in the two frames is largely repeated. Obtaining depth information for the same points of the same object from two image frames may cause mutual interference, lowering measurement accuracy and increasing the amount of calculation. Therefore, the acquisition interval of depth image frames can be adjusted based on the area of the region, and the next depth image frame can be obtained according to the acquisition interval, which greatly reduces the amount of calculation.
  • Specifically, in one embodiment of the present application, when the preset condition is based on the area ratio, it is judged whether the area ratio is greater than a preset threshold that is calibrated from a large amount of experimental data. If the area ratio is greater than the preset threshold, most of the depth values obtained by the two image frames come from the same part of the same object, so the image frames contribute little new depth information; in this case, to save processing resources, the acquisition interval of depth image frames can be increased. Conversely, the sampling interval of image frames can be reduced to ensure that highly dynamic information is processed thoroughly.
  • In one embodiment of the present application, when the area ratio is greater than the preset threshold, the first image frame is considered to have little noise in the time dimension, and the sampling time interval of depth image frames is increased. To further improve image processing efficiency, if the credible regions of multiple consecutive depth image frames are all large, the credible-region mask can be morphologically eroded and used as the credible region of the subsequent consecutive frames, so that the credible regions of these consecutive frames are marked by the mask and replaced directly. On the one hand, this avoids the slight differences between different depth image frames; on the other hand, the credible regions of the consecutive frames no longer need depth smoothing, and the remaining non-credible regions need no processing either, so their highly dynamic information is retained, which greatly reduces the amount of calculation. A sketch of this mask handling is given after this paragraph.
  • The number of consecutive frames whose credible region is replaced in this way is determined according to the size of the above area ratio and the acquisition frequency of depth image frames: the larger the area ratio and the higher the acquisition frequency, the more similar the credible regions of multiple consecutive depth image frames are, and therefore the larger the number of consecutive frames. Of course, the difference in depth values between the first image frame and the subsequent consecutive frames can also be roughly detected, and a depth image frame whose area of gently changing depth (where the depth difference is less than a preset threshold) exceeds a certain value is taken as one of the consecutive frames.
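  • The sketch below illustrates the mask handling described above: the credible-region mask is morphologically eroded and reused for several subsequent frames. The structuring-element size and the heuristic that maps the area ratio and frame rate to a reuse count are assumptions made purely for illustration.

```python
import numpy as np
from scipy.ndimage import binary_erosion

def propagate_credible_mask(credible_mask, area_ratio, frame_rate, erosion_size=3):
    """Erode the credible mask so it can be reused directly as the credible
    region of the next few frames; the larger the area ratio and the higher
    the acquisition frequency, the more consecutive frames reuse it."""
    structure = np.ones((erosion_size, erosion_size), dtype=bool)
    eroded = binary_erosion(credible_mask, structure=structure)
    # assumed heuristic: scale the reuse count with area ratio and frame rate
    reuse_frames = max(1, int(area_ratio * frame_rate * 0.1))
    return eroded, reuse_frames
```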
  • In this embodiment, after judging whether the area ratio is greater than the preset threshold: if it is greater than the preset threshold, the difference between the area ratio and the preset threshold is obtained, the acquisition interval increase value is determined according to the difference, and the acquisition interval is determined from the sum of the acquisition interval increase value and the initial sampling interval, that is, the acquisition interval is expanded as shown in Fig. 4; if it is less than or equal to the preset threshold, the difference between the preset threshold and the area ratio is obtained, the acquisition interval reduction value is determined according to the difference, and the acquisition interval is determined from the difference between the initial sampling interval and the acquisition interval reduction value, that is, the acquisition interval is reduced.
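  • The sketch below mirrors the expand/reduce logic just described; the linear step that maps the ratio difference to an interval change and the lower bound of one frame are assumptions, since the original does not specify the mapping.

```python
def adjust_collection_interval(area_ratio, preset_threshold, initial_interval, step=10.0):
    """Increase the acquisition interval when the area ratio exceeds the preset
    threshold, otherwise decrease it (assumed lower bound of one frame)."""
    if area_ratio > preset_threshold:
        increase = (area_ratio - preset_threshold) * step   # assumed linear mapping
        return initial_interval + increase
    decrease = (preset_threshold - area_ratio) * step
    return max(1.0, initial_interval - decrease)
```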
  • Further, because various uncertainties in the measurement process introduce errors that remain largely random even after offline calibration, the ToF depth measurement error within the measurement range is about 1%. Depth values are smoothed based on this fixed measurement error, and the smoothing is only meaningful if the depth error is stable over a short period of time, i.e., temporally consistent, without jumps. Therefore, this application also proposes a temporal-consistency filtering processing method.
  • To make the timing of the depth map filtering of this application clearer to those skilled in the art, the overall TOF depth map processing flow is described with reference to Fig. 2. As shown in Fig. 2, the ToF sensor emits a modulated pulse signal, the surface of the object to be measured receives the pulse signal and reflects it, and the ToF sensor receives the reflected signal and decodes the multi-frequency phase map. Error correction is then performed on the ToF data according to the calibration parameters, the multi-frequency signal is de-aliased, the depth values are converted from the radial coordinate system to the Cartesian coordinate system, and finally temporal-consistency filtering is applied to the depth map to output a depth result that is relatively smooth in the time dimension.
  • The depth temporal-consistency filtering scheme includes two main stages: a ToF original depth value calculation stage and a depth temporal-consistency filtering stage. As shown in Fig. 3, the ToF original depth value calculation stage includes: based on the original phase maps acquired from the ToF sensor (a four-phase map in single-frequency mode, an eight-phase map in dual-frequency mode; dual-frequency mode is assumed in this embodiment), calculating the IQ signal of each pixel, and then calculating the phase and confidence of each pixel from the IQ signal, where the confidence represents the reliability of the phase value at that point and reflects the energy of that point. Several kinds of errors, including cyclic error, temperature error, gradient error, and parallax error, are corrected online according to the internal parameters calibrated offline for the ToF sensor. Pre-filtering is performed before dual-frequency de-aliasing to filter the noise in each frequency mode separately; after the noise of the two frequencies is removed, the two frequencies are de-aliased to determine the true number of periods of each pixel, the aliased result is post-filtered based on that true period number, and the post-filtered radial coordinate system is converted to the Cartesian coordinate system for the next step of processing.
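  • The application does not spell out the demodulation formulas; the sketch below uses the conventional four-phase ToF relations (I from the 0 and 180 degree samples, Q from the 90 and 270 degree samples) purely as an assumed illustration of how a per-pixel phase and confidence could be obtained from the IQ signal.

```python
import numpy as np

def phase_and_confidence(raw0, raw90, raw180, raw270):
    """Conventional four-phase demodulation (an assumption, not the patented
    formulas): the phase encodes the pulse round-trip delay, and the
    confidence reflects the returned energy of each pixel."""
    i_signal = raw0.astype(np.float32) - raw180.astype(np.float32)
    q_signal = raw90.astype(np.float32) - raw270.astype(np.float32)
    phase = np.arctan2(q_signal, i_signal)        # per-pixel phase
    confidence = np.hypot(i_signal, q_signal)     # per-pixel energy / reliability
    return phase, confidence
```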
  • In the depth temporal-consistency filtering stage, as shown in Fig. 4, before the sampling interval is expanded, after the original depth map in the Cartesian coordinate system is obtained in the embodiment of the present application, the credible region composed of credible pixels and the non-credible region composed of non-credible pixels are determined based on the content differences between pixels, and smoothing is then performed region by region according to the mask.
  • The specific operation is: if the region is a credible region where credible pixels are located, smoothing is performed to achieve filtering based on temporal consistency.
  • If the region is a non-credible region where non-credible pixels are located, the region is not smoothed, so as to preserve the highly dynamic information of that region.
  • Specifically, when performing temporal-consistency filtering of depth values, the smoothing factor corresponding to the credible pixel can be determined, and then, based on the smoothing factor and the depth value of the second pixel corresponding to the credible pixel in the second depth image frame, the depth value of the credible pixel is filtered.
  • In one embodiment of the present application, the difference between the content difference and a preset threshold is determined, and a factor increase value corresponding to that difference is determined; for example, a correspondence between difference values and factor increase values is established in advance, and the corresponding factor increase value is obtained from it. Further, the smoothing factor is obtained from the sum of a preset initial smoothing factor and the factor increase value, that is, an adaptive increase is applied on the basis of the initial smoothing factor.
  • In one embodiment of the present application, the depth difference between the depth value of each credible pixel and the depth value of the corresponding second pixel is determined, the first gray value of each credible pixel and the second gray value of the corresponding second pixel are obtained, and the gray difference between the first gray value and the second gray value is determined. Further, a first weight coefficient corresponding to the depth difference is obtained, and a second weight coefficient corresponding to the gray difference is determined according to the first weight coefficient. The first weight coefficient is determined according to application needs: the higher the first weight coefficient, the more the current smoothing focuses on the depth difference between pixels. The second weight coefficient can be inversely related to the first weight coefficient, for example, first weight coefficient = 1 - second weight coefficient, which ensures that, in the same smoothing scenario, the gray difference and the depth difference are weighted with different emphases.
  • Further, according to the preset calculation formula, the first depth value of each credible pixel, the first weight coefficient, the second weight coefficient, the gray difference, the depth difference, and the smoothing factor are used to calculate the first smoothing value, which indicates the degree of smoothing: the higher the credibility of the credible pixel, the higher the corresponding first smoothing value.
  • Furthermore, the depth value of the credible pixel is filtered according to the first smoothing value and the depth value of the pixel corresponding to the credible pixel in the second depth image frame.
  • In this embodiment, a second smoothing value can be determined from the first smoothing value, where the two are inversely related, for example, first smoothing value = 1 - second smoothing value. The first product of the second smoothing value and the depth value of the credible pixel is obtained, the second product of the first smoothing value and the depth value of the pixel corresponding to the credible pixel in the second depth image frame is obtained, and the depth value of the credible pixel is filtered according to the sum of the first product and the second product. That is, the filtered depth value of the credible pixel = the depth value of the corresponding pixel * the first smoothing value + the depth value of the credible pixel * the second smoothing value. The larger the first smoothing value, the smaller the second smoothing value. In addition, when the smoothing factor is proportional to the credibility of the pixel, the smoothing factor is proportional to the first smoothing value; a credible pixel has a larger smoothing factor and therefore a larger first smoothing value, so according to the above relation its filtered depth value gives a larger weight to the depth value of the corresponding second pixel, which balances the error of the credible pixel.
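  • A minimal sketch of the blending step just described, in which the first smoothing value w1 weights the depth from the corresponding pixel of the second (reference) frame and the second smoothing value 1 - w1 weights the current credible pixel; the function and argument names are chosen for illustration only.

```python
def temporal_blend(current_depth, reference_depth, w1):
    """Filtered depth of a credible pixel: the reference (second-frame) depth
    weighted by the first smoothing value w1, plus the current depth weighted
    by the second smoothing value (1 - w1)."""
    w2 = 1.0 - w1   # first and second smoothing values are inversely related
    return w1 * reference_depth + w2 * current_depth
```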
  • It should be noted that the above preset calculation formula is used to balance the measurement error of the depth value of the corresponding pixel. In theory, the lower the credibility of a pixel (for example, the larger the depth difference), the more the result should refer to the depth value of the current pixel itself, so as to retain the highly dynamic information of the current pixel. When the smoothing factor is proportional to the credibility of the pixel, the preset smoothing function indicates that the smoothing factor is proportional to the degree to which the pixel's own depth value is referenced; when the smoothing factor is inversely proportional to the credibility of the pixel, the preset smoothing function indicates that the smoothing factor is inversely proportional to that degree. In the calculation formula shown as formula (1) below, the smoothing factor s is proportional to the credibility of the corresponding pixel, and the corresponding weight w1 is proportional to the smoothing factor:
  • where w1 is the first smoothing value, s is the smoothing factor, diff1 is the depth difference, diff2 is the gray difference, d is the first weight coefficient, 1-d is the second weight coefficient, and σ is the product of the depth value of each credible pixel and the preset standard error. The preset standard error is the empirical measurement error of the depth value caused by temperature error and the like, and may be, for example, 1%.
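  • The exact expression of formula (1) appears only as an image in the published application, so the sketch below merely assumes a Gaussian-style weighting that is consistent with the variables listed above (w1 grows with the smoothing factor s and decays as the weighted depth/gray differences grow relative to σ); it is an illustration under that assumption, not the patented formula.

```python
import numpy as np

def first_smoothing_value(depth, diff1, diff2, s, d, std_error=0.01):
    """Assumed Gaussian-style form for a single credible pixel, consistent
    with the listed variables: w1 is proportional to the smoothing factor s
    and decays with the weighted combination of the depth difference diff1
    and the gray difference diff2, normalized by
    sigma = depth value * preset standard error (e.g. 1%)."""
    sigma = max(depth * std_error, 1e-6)            # guard against zero depth
    weighted_diff = d * diff1 + (1.0 - d) * diff2   # d and 1-d are the two weight coefficients
    w1 = s * np.exp(-(weighted_diff ** 2) / (2.0 * sigma ** 2))
    return float(np.clip(w1, 0.0, 1.0))
```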
  • In summary, the depth map processing method of the embodiments of the present application divides the depth image frame into a credible region and a non-credible region according to the content differences between pixels of adjacent frames, and performs smoothing region by region. This effectively makes the depth values of gently changing regions smoother in the time dimension and ensures that the depth value error after frame filtering has temporal consistency, while regions of rapid depth change keep their original high dynamics. At the same time, according to the area ratio of the credible region to the effective full-frame area, the acquisition interval is adaptively expanded, which reduces the amount of calculation for temporal-consistency filtering.
  • To implement the above embodiments, this application also proposes a depth map processing device. Fig. 5 is a schematic structural diagram of a depth map processing apparatus according to an embodiment of the present application.
  • As shown in Fig. 5, the depth map processing device includes a first acquisition module 10, a second acquisition module 20, a first determination module 30, and a processing module 40.
  • The first acquisition module 10 is configured to acquire a first depth image frame and a second depth image frame adjacent to the first depth image frame, where each pixel in the first depth image frame and the second depth image frame contains a depth value, and each first pixel in the first depth image frame has a corresponding second pixel in the second depth image frame.
  • The second acquisition module 20 is configured to determine the first content value of each first pixel and the second content value of the corresponding second pixel, and to acquire the content difference between the first content value and the second content value.
  • It should be noted that the content value contains different parameters in different application scenarios, as in the following examples.
  • A first example: the content value is the confidence of the depth value, where the confidence of the depth value represents the energy of the depth value at that point. The closer the confidences of the depth values of the first pixel and the second pixel are, the more likely the first pixel and the second pixel correspond to the same point of the object. Therefore, the second acquisition module 20 can calculate the difference between the confidence of the first pixel and the confidence of the second pixel as the content difference.
  • A second example: the content value is the gray value of the pixel. The closer the gray values of the first pixel and the second pixel are, the more likely the first pixel and the second pixel correspond to the same point of the object. Therefore, the second acquisition module 20 can calculate the difference between the gray value of the first pixel and the gray value of the second pixel as the content difference.
  • The first determination module 30 is configured to determine credible pixels in the first depth image frame according to the content difference, and to determine the area of the region where the credible pixels are located.
  • Specifically, as analyzed above, the smaller the content difference, the more likely the first pixel and the second pixel correspond to the same point of the object; credible pixels are therefore determined in the first depth image frame according to the content difference, where the credible pixels correspond to pixels with smaller content differences.
  • Further, after the credible pixels are determined, the first determination module 30 determines the area of the region where they are located. For example, all credible pixels in the first depth image frame can be determined, and the first determination module 30 determines the area based on the region composed of all the credible pixels.
  • The processing module 40 is configured to, when the area of the region meets the preset condition, adjust the acquisition interval of depth image frames according to the area of the region and acquire the depth image frame to be processed according to the acquisition interval.
  • As analyzed above, the area ratio indicates the similarity of the objects corresponding to the second image frame and the first image frame: the larger the area ratio, the more likely the second image frame and the first image frame actually capture depth information of the same positions of the object, so the depth information contained in the two frames is largely repeated. Obtaining depth information for the same points of the same object from two image frames may cause mutual interference, lowering measurement accuracy and increasing the amount of calculation. Therefore, the acquisition interval of depth image frames can be adjusted based on the area of the region, and the next depth image frame can be obtained according to the acquisition interval, which greatly reduces the amount of calculation.
  • Referring to Fig. 6, in some embodiments, the depth map processing device further includes a third acquisition module 50, configured to acquire the area ratio of the region where the credible pixels are located to the total area of the first depth image frame.
  • Specifically, the third acquisition module 50 acquires the area ratio of the region of credible pixels to the total area of the first depth image frame. The area ratio represents the number of pixels that, in the current first depth image frame and the second depth image frame, correspond to the same points of the object; obviously, the larger the area ratio, the more likely it is that the second image frame and the first image frame actually capture depth information of the same positions of the object.
  • The processing module 40 includes a judging unit 41, a first determining unit 42, and a second determining unit 43. The judging unit 41 judges whether the area ratio is greater than a preset threshold. When the area ratio is greater than the preset threshold, the first determining unit 42 obtains the difference between the area ratio and the preset threshold, determines the acquisition interval increase value according to the difference, and determines the acquisition interval according to the sum of the acquisition interval increase value and the initial sampling interval. When the area ratio is less than or equal to the preset threshold, the second determining unit 43 obtains the difference between the preset threshold and the area ratio, determines the acquisition interval reduction value according to the difference, and determines the acquisition interval according to the difference between the initial sampling interval and the acquisition interval reduction value.
  • In summary, the depth map processing device of the embodiments of the present application divides the depth image frame into a credible region and a non-credible region according to the content differences between pixels of adjacent frames, and performs smoothing region by region. This effectively makes the depth values of gently changing regions smoother in the time dimension and ensures that the depth value error after frame filtering has temporal consistency, while regions of rapid depth change keep their original high dynamics. At the same time, according to the area ratio of the credible region to the effective full-frame area, the acquisition interval is adaptively expanded, which reduces the amount of calculation.
  • To implement the above embodiments, this application also proposes an electronic device, including a memory, a processor, and a computer program stored in the memory and capable of running on the processor. When the processor executes the computer program, the depth map processing method described in the foregoing embodiments is implemented.
  • To implement the above embodiments, this application also proposes a non-transitory computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the depth map processing method described in the foregoing method embodiments is implemented.
  • first and second are only used for descriptive purposes, and cannot be understood as indicating or implying relative importance or implicitly indicating the number of indicated technical features. Therefore, the features defined with “first” and “second” may explicitly or implicitly include at least one of the features. In the description of the present application, "a plurality of” means at least two, such as two, three, etc., unless specifically defined otherwise.
  • a "computer-readable medium” can be any device that can contain, store, communicate, propagate, or transmit a program for use by an instruction execution system, device, or device or in combination with these instruction execution systems, devices, or devices.
  • More specific examples (a non-exhaustive list) of computer-readable media include: an electrical connection (electronic device) with one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber optic device, and a portable compact disc read-only memory (CDROM).
  • In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, because the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
  • each part of this application can be implemented by hardware, software, firmware, or a combination thereof.
  • multiple steps or methods can be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system.
  • For example, if implemented in hardware, as in another embodiment, any one or a combination of the following techniques known in the art can be used: a discrete logic circuit with logic gates for implementing logic functions on data signals, an application-specific integrated circuit with suitable combinational logic gates, a programmable gate array (PGA), a field programmable gate array (FPGA), and so on.
  • the functions in the various embodiments of the present application may be integrated into one processing module, or each may exist alone physically, or two or more may be integrated into one module.
  • the above-mentioned integrated modules can be implemented in the form of hardware or software functional modules. If the integrated module is implemented in the form of a software function module and sold or used as an independent product, it may also be stored in a computer readable storage medium.
  • the aforementioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

A depth map processing method and device. The method includes: acquiring a first depth image frame and an adjacent second depth image frame; determining the content difference between the first content value of a first pixel and the second content value of the corresponding second pixel; determining, according to the content difference, the area of the region where credible pixels are located in the first depth image frame; and, when the area of the region meets a preset condition, adjusting the acquisition interval of depth image frames according to the area of the region and acquiring the next depth image frame to be processed according to the acquisition interval.

Description

Depth map processing method and device
Priority Information
This application claims priority to and the benefit of Chinese patent application No. 201910626061.1, filed with the China National Intellectual Property Administration on July 11, 2019, the entire content of which is incorporated herein by reference.
Technical Field
This application relates to the field of image processing technology, and in particular to a depth map processing method and device.
Background
Generally, when measuring the depth of an object based on a time-of-flight (ToF) sensor, the ToF sensor determines the distance between the sensor and the object by calculating the flight time of a pulse signal, and then determines the depth value of the object based on that distance.
In the related art, all depth image frames are traversed for filtering, which results in a large amount of calculation for depth image frame filtering.
Summary
This application aims to solve, at least to some extent, the problem in the related art that traversing all depth image frames for filtering results in a large amount of calculation for depth image frame filtering.
To this end, the first purpose of this application is to propose a depth map processing method that determines the sampling interval of image frames based on how smoothly the depth values change, balancing the processing accuracy and resource consumption of depth image frame processing.
The second purpose of this application is to propose a depth map processing device.
The third purpose of this application is to propose an electronic device.
The fourth purpose of this application is to propose a non-transitory computer-readable storage medium.
To achieve the above purposes, an embodiment of the first aspect of this application proposes a depth map processing method, including the following steps: acquiring a first depth image frame and a second depth image frame adjacent to the first depth image frame, where each pixel in the first depth image frame and the second depth image frame contains a depth value, and each first pixel in the first depth image frame has a corresponding second pixel in the second depth image frame; determining the first content value of each first pixel and the second content value of the corresponding second pixel, and obtaining the content difference between the first content value and the second content value; determining credible pixels in the first depth image frame according to the content difference, and determining the area of the region where the credible pixels are located; and, when the area of the region meets a preset condition, adjusting the acquisition interval of depth image frames according to the area of the region, and obtaining the depth image frame to be processed according to the acquisition interval.
An embodiment of the second aspect of this application proposes a depth map processing device, including: a first acquisition module, configured to acquire a first depth image frame and a second depth image frame adjacent to the first depth image frame, where each pixel in the first depth image frame and the second depth image frame contains a depth value, and each first pixel in the first depth image frame has a corresponding second pixel in the second depth image frame; a second acquisition module, configured to determine the first content value of each first pixel and the second content value of the corresponding second pixel, and to obtain the content difference between the first content value and the second content value; a first determination module, configured to determine credible pixels in the first depth image frame according to the content difference, and to determine the area of the region where the credible pixels are located; and a processing module, configured to, when the area of the region meets a preset condition, adjust the acquisition interval of depth image frames according to the area of the region and acquire the depth image frame to be processed according to the acquisition interval.
An embodiment of the third aspect of this application proposes an electronic device, including a memory, a processor, and a computer program stored on the memory and capable of running on the processor; when the processor executes the computer program, the depth map processing method described in the embodiment of the first aspect is implemented.
An embodiment of the fourth aspect of this application proposes a non-transitory computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the depth map processing method described in the embodiment of the first aspect is implemented.
The technical solution provided by this application has at least the following beneficial effects:
According to the content difference between pixels of adjacent frames, the depth image frame is divided into a credible region and a non-credible region, and smoothing is performed region by region. This effectively makes the depth values of gently changing regions smoother in the time dimension and ensures that the depth value error after frame filtering has temporal consistency, while regions of rapid depth change keep their original high dynamics. At the same time, according to the area ratio of the credible region to the effective full-frame area, the acquisition interval is adaptively expanded, which reduces the amount of calculation for filtering the depth image frames.
Additional aspects and advantages of this application will be given in part in the following description, will become apparent in part from the following description, or will be learned through practice of this application.
Brief Description of the Drawings
The above and/or additional aspects and advantages of this application will become apparent and easy to understand from the following description of the embodiments in conjunction with the accompanying drawings, in which:
Fig. 1 is a flowchart of a depth map processing method according to an embodiment of this application;
Fig. 2 is a schematic flowchart of a TOF-based depth map processing method provided by an embodiment of this application;
Fig. 3 is a schematic flowchart of a method for calculating original depth values according to an embodiment of this application;
Fig. 4 is a schematic flowchart of a temporal-consistency filtering method according to an embodiment of this application;
Fig. 5 is a schematic structural diagram of a depth map processing device according to an embodiment of this application;
Fig. 6 is a schematic structural diagram of a depth map processing device according to another embodiment of this application.
Detailed Description of the Embodiments
Embodiments of this application are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements with the same or similar functions. The embodiments described below with reference to the drawings are exemplary and intended to explain this application, and shall not be construed as limiting this application.
An embodiment of this application discloses a depth map processing method. The depth map processing method includes: acquiring a first depth image frame and a second depth image frame adjacent to the first depth image frame, where each pixel in the first depth image frame and the second depth image frame contains a depth value, and each first pixel in the first depth image frame has a corresponding second pixel in the second depth image frame; determining the first content value of each first pixel and the second content value of the corresponding second pixel, and obtaining the content difference between the first content value and the second content value; determining credible pixels in the first depth image frame according to the content difference, and determining the area of the region where the credible pixels are located; and, when the area of the region meets a preset condition, adjusting the acquisition interval of depth image frames according to the area of the region, and obtaining the depth image frame to be processed according to the acquisition interval.
In some embodiments, adjusting the acquisition interval of depth image frames according to the area of the region when the area meets the preset condition includes: obtaining the area ratio of the region where the credible pixels are located to the total area of the first depth image frame; judging whether the area ratio is greater than a preset threshold; if it is greater than the preset threshold, obtaining the difference between the area ratio and the preset threshold, determining an acquisition interval increase value according to the difference, and determining the acquisition interval according to the sum of the acquisition interval increase value and the initial sampling interval; if it is less than or equal to the preset threshold, obtaining the difference between the preset threshold and the area ratio, determining an acquisition interval reduction value according to the difference, and determining the acquisition interval according to the difference between the initial sampling interval and the acquisition interval reduction value.
In some embodiments, after the credible pixels are determined in the first depth image frame according to the content difference, the method further includes: determining a smoothing factor corresponding to the credible pixel; and filtering the depth value of the credible pixel according to the smoothing factor and the depth value of the second pixel corresponding to the credible pixel in the second depth image frame.
In some embodiments, filtering the depth value of the credible pixel according to the smoothing factor and the depth value of the pixel corresponding to the credible pixel in the second depth image frame includes: determining the depth difference between the depth value of each credible pixel and the depth value of the corresponding second pixel; obtaining the first gray value of each credible pixel and the second gray value of the corresponding second pixel, and determining the gray difference between the first gray value and the second gray value; obtaining a first weight coefficient corresponding to the depth difference, and determining a second weight coefficient corresponding to the gray difference according to the first weight coefficient; calculating, according to a preset calculation formula, the first depth value of each credible pixel, the first weight coefficient, the second weight coefficient, the gray difference, the depth difference, and the smoothing factor to obtain a first smoothing value; and filtering the depth value of the credible pixel according to the first smoothing value and the depth value of the pixel corresponding to the credible pixel in the second depth image frame.
In some embodiments, filtering the depth value of the credible pixel according to the first smoothing value and the depth value of the pixel corresponding to the credible pixel in the second depth image frame includes: determining a second smoothing value according to the first smoothing value, where the first smoothing value is inversely proportional to the second smoothing value; obtaining the first product of the second smoothing value and the depth value of the credible pixel; obtaining the second product of the first smoothing value and the depth value of the pixel corresponding to the credible pixel in the second depth image frame; and filtering the depth value of the credible pixel according to the sum of the first product and the second product.
In some embodiments, the preset calculation formula includes:
Figure PCTCN2020097460-appb-000001 (formula (1), reproduced as an image in the original publication)
where w1 is the first smoothing value, s is the smoothing factor, diff1 is the depth difference, diff2 is the gray difference, d is the first weight coefficient, 1-d is the second weight coefficient, and σ is the product of the depth value of each credible pixel and the preset standard error.
Referring to Fig. 5, an embodiment of this application also discloses a depth map processing device. The depth map processing device includes a first acquisition module 10, a second acquisition module 20, a first determination module 30, and a processing module 40. The first acquisition module 10 is used to acquire a first depth image frame and a second depth image frame adjacent to the first depth image frame, where each pixel in the first depth image frame and the second depth image frame contains a depth value, and each first pixel in the first depth image frame has a corresponding second pixel in the second depth image frame. The second acquisition module 20 is used to determine the first content value of each first pixel and the second content value of the corresponding second pixel, and to obtain the content difference between the first content value and the second content value. The first determination module 30 is used to determine credible pixels in the first depth image frame according to the content difference, and to determine the area of the region where the credible pixels are located. The processing module 40 is used to, when the area of the region meets a preset condition, adjust the acquisition interval of depth image frames according to the area of the region and acquire the depth image frame to be processed according to the acquisition interval.
Referring to Fig. 6, in some embodiments, the depth map processing device further includes a third acquisition module 50. The processing module 40 includes a judging unit 41, a first determining unit 42, and a second determining unit 43. The third acquisition module 50 is used to acquire the area ratio of the region where the credible pixels are located to the total area of the first depth image frame. The judging unit 41 is used to judge whether the area ratio is greater than a preset threshold. The first determining unit 42 is used to, when the area ratio is greater than the preset threshold, obtain the difference between the area ratio and the preset threshold, determine the acquisition interval increase value according to the difference, and determine the acquisition interval according to the sum of the acquisition interval increase value and the initial sampling interval. The second determining unit 43 is used to, when the area ratio is less than or equal to the preset threshold, obtain the difference between the preset threshold and the area ratio, determine the acquisition interval reduction value according to the difference, and determine the acquisition interval according to the difference between the initial sampling interval and the acquisition interval reduction value.
An embodiment of this application also discloses an electronic device. The electronic device includes a memory, a processor, and a computer program stored on the memory and capable of running on the processor. When the processor executes the computer program, the following depth map processing method is implemented: acquiring a first depth image frame and a second depth image frame adjacent to the first depth image frame, where each pixel in the first depth image frame and the second depth image frame contains a depth value, and each first pixel in the first depth image frame has a corresponding second pixel in the second depth image frame; determining the first content value of each first pixel and the second content value of the corresponding second pixel, and obtaining the content difference between the first content value and the second content value; determining credible pixels in the first depth image frame according to the content difference, and determining the area of the region where the credible pixels are located; and, when the area of the region meets a preset condition, adjusting the acquisition interval of depth image frames according to the area of the region, and obtaining the depth image frame to be processed according to the acquisition interval.
In some embodiments, when the area of the region meets the preset condition, the processor, when executing the computer program, further implements the following depth map processing method: obtaining the area ratio of the region where the credible pixels are located to the total area of the first depth image frame; judging whether the area ratio is greater than a preset threshold; if it is greater than the preset threshold, obtaining the difference between the area ratio and the preset threshold, determining the acquisition interval increase value according to the difference, and determining the acquisition interval according to the sum of the acquisition interval increase value and the initial sampling interval; if it is less than or equal to the preset threshold, obtaining the difference between the preset threshold and the area ratio, determining the acquisition interval reduction value according to the difference, and determining the acquisition interval according to the difference between the initial sampling interval and the acquisition interval reduction value.
In some embodiments, the processor, when executing the computer program, further implements the following depth map processing method: determining a smoothing factor corresponding to the credible pixel; and filtering the depth value of the credible pixel according to the smoothing factor and the depth value of the second pixel corresponding to the credible pixel in the second depth image frame.
In some embodiments, the processor, when executing the computer program, further implements the following depth map processing method: determining the depth difference between the depth value of each credible pixel and the depth value of the corresponding second pixel; obtaining the first gray value of each credible pixel and the second gray value of the corresponding second pixel, and determining the gray difference between the first gray value and the second gray value; obtaining a first weight coefficient corresponding to the depth difference, and determining a second weight coefficient corresponding to the gray difference according to the first weight coefficient; calculating, according to the preset calculation formula, the first depth value of each credible pixel, the first weight coefficient, the second weight coefficient, the gray difference, the depth difference, and the smoothing factor to obtain the first smoothing value; and filtering the depth value of the credible pixel according to the first smoothing value and the depth value of the pixel corresponding to the credible pixel in the second depth image frame.
In some embodiments, the processor, when executing the computer program, further implements the following depth map processing method: determining a second smoothing value according to the first smoothing value, where the first smoothing value is inversely proportional to the second smoothing value; obtaining the first product of the second smoothing value and the depth value of the credible pixel; obtaining the second product of the first smoothing value and the depth value of the pixel corresponding to the credible pixel in the second depth image frame; and filtering the depth value of the credible pixel according to the sum of the first product and the second product.
In some embodiments, the preset calculation formula includes:
Figure PCTCN2020097460-appb-000002 (formula (1), reproduced as an image in the original publication)
where w1 is the first smoothing value, s is the smoothing factor, diff1 is the depth difference, diff2 is the gray difference, d is the first weight coefficient, 1-d is the second weight coefficient, and σ is the product of the depth value of each credible pixel and the preset standard error.
An embodiment of this application also discloses a non-transitory computer-readable storage medium on which a computer program is stored. When the computer program is executed by a processor, the following depth map processing method is implemented: acquiring a first depth image frame and a second depth image frame adjacent to the first depth image frame, where each pixel in the first depth image frame and the second depth image frame contains a depth value, and each first pixel in the first depth image frame has a corresponding second pixel in the second depth image frame; determining the first content value of each first pixel and the second content value of the corresponding second pixel, and obtaining the content difference between the first content value and the second content value; determining credible pixels in the first depth image frame according to the content difference, and determining the area of the region where the credible pixels are located; and, when the area of the region meets a preset condition, adjusting the acquisition interval of depth image frames according to the area of the region, and obtaining the depth image frame to be processed according to the acquisition interval.
In some embodiments, when the area of the region meets the preset condition, the computer program, when executed by the processor, further implements the following depth map processing method: obtaining the area ratio of the region where the credible pixels are located to the total area of the first depth image frame; judging whether the area ratio is greater than a preset threshold; if it is greater than the preset threshold, obtaining the difference between the area ratio and the preset threshold, determining the acquisition interval increase value according to the difference, and determining the acquisition interval according to the sum of the acquisition interval increase value and the initial sampling interval; if it is less than or equal to the preset threshold, obtaining the difference between the preset threshold and the area ratio, determining the acquisition interval reduction value according to the difference, and determining the acquisition interval according to the difference between the initial sampling interval and the acquisition interval reduction value.
In some embodiments, the computer program, when executed by the processor, further implements the following depth map processing method: determining a smoothing factor corresponding to the credible pixel; and filtering the depth value of the credible pixel according to the smoothing factor and the depth value of the second pixel corresponding to the credible pixel in the second depth image frame.
In some embodiments, the computer program, when executed by the processor, further implements the following depth map processing method: determining the depth difference between the depth value of each credible pixel and the depth value of the corresponding second pixel; obtaining the first gray value of each credible pixel and the second gray value of the corresponding second pixel, and determining the gray difference between the first gray value and the second gray value; obtaining a first weight coefficient corresponding to the depth difference, and determining a second weight coefficient corresponding to the gray difference according to the first weight coefficient; calculating, according to the preset calculation formula, the first depth value of each credible pixel, the first weight coefficient, the second weight coefficient, the gray difference, the depth difference, and the smoothing factor to obtain the first smoothing value; and filtering the depth value of the credible pixel according to the first smoothing value and the depth value of the pixel corresponding to the credible pixel in the second depth image frame.
In some embodiments, the computer program, when executed by the processor, further implements the following depth map processing method: determining a second smoothing value according to the first smoothing value, where the first smoothing value is inversely proportional to the second smoothing value; obtaining the first product of the second smoothing value and the depth value of the credible pixel; obtaining the second product of the first smoothing value and the depth value of the pixel corresponding to the credible pixel in the second depth image frame; and filtering the depth value of the credible pixel according to the sum of the first product and the second product.
In some embodiments, the preset calculation formula includes:
Figure PCTCN2020097460-appb-000003 (formula (1), reproduced as an image in the original publication)
where w1 is the first smoothing value, s is the smoothing factor, diff1 is the depth difference, diff2 is the gray difference, d is the first weight coefficient, 1-d is the second weight coefficient, and σ is the product of the depth value of each credible pixel and the preset standard error.
The depth map processing method and device of the embodiments of this application are described below with reference to the accompanying drawings. The depth values in the depth maps of the embodiments of this application are obtained based on a TOF sensor.
Specifically, Fig. 1 is a flowchart of a depth map processing method according to an embodiment of this application. As shown in Fig. 1, the depth map processing method includes the following steps.
Step 101: Obtain a first depth image frame and a second depth image frame adjacent to the first depth image frame, where each pixel in the first depth image frame and the second depth image frame contains a depth value, and each first pixel in the first depth image frame has a corresponding second pixel in the second depth image frame.
It should be noted that the second depth image frame is adjacent to the first depth image frame and may be the previous frame before the first depth image frame or the next frame after it, depending on the specific application requirements. Of course, in the same scene, the reference direction of the image frames is fixed: for example, every frame refers to the adjacent previous frame, or every frame refers to the adjacent next frame, for smoothing the depth value error.
In addition, each first pixel in the first depth image frame has a corresponding second pixel in the second depth image frame. It should be emphasized that this correspondence between the first pixel and the second pixel is a correspondence in pixel position.
Step 102: Determine the first content value of each first pixel and the second content value of the corresponding second pixel, and obtain the content difference between the first content value and the second content value.
It can be understood that if the content difference between the first pixel and the second pixel is low, it indicates that the first pixel and the second pixel actually correspond to the same point of the photographed object, and the depth value error between the first pixel and the second pixel should be low.
It should be noted that the above content value contains different parameters in different application scenarios, as in the following examples.
A first example:
In this example, the content value is the confidence of the depth value, where the confidence of the depth value represents the energy of the depth value at that point. It can be understood that the closer the confidences of the depth values of the first pixel and the second pixel are, the more likely the first pixel and the second pixel correspond to the same point of the object. Therefore, the difference between the confidence of the first pixel and the confidence of the second pixel can be calculated as the content difference.
A second example: the content value is the gray value of the pixel. It can be understood that the closer the gray values of the first pixel and the second pixel are, the more likely the first pixel and the second pixel correspond to the same point of the object. Therefore, the gray values of the first pixel and the second pixel can be calculated from the color pixel values, and the difference between the two gray values is used as the content difference.
Step 103: Determine credible pixels in the first depth image frame according to the content difference, and determine the area of the region where the credible pixels are located.
Specifically, as analyzed above, the smaller the content difference, the more likely the first pixel and the second pixel correspond to the same point of the object. Therefore, credible pixels are determined in the first depth image frame according to the content difference, where the credible pixels correspond to pixels with smaller content differences.
Further, after the credible pixels are determined, the area of the region where they are located is determined. For example, all credible pixels in the first depth image frame can be determined, and the area is determined based on the region composed of all the credible pixels.
步骤104,当区域面积满足预设条件时,根据区域面积调整深度图像帧的采集间隔,并根据采集获取间隔待处理的深度图像帧。
其中,上述预设条件用于根据可信像素的区域面积确定第二图像帧和第一图像帧实际上对应的物体的相似度,上述预设条件可以包括区域面积的绝对值的大小的比较,也可以包括面积比的比较,在本示例中,可获取可信像素的所在区域面积和第一深度图像帧的总面积的面积比。
具体的,获取可信像素的所在区域面积和第一深度图像帧的总面积的面积比,该面积比表示当前第一深度图像帧和第二深度图像帧中,针对物体的同一个点的像素点的个数,显然该面积比越大,则表明第二图像帧和第一图像帧实际上是越可能拍摄的是物体的同一个位置的深度信息。比如,获取第一深度图像帧的总面积,以及可信像素形成区域的面积,基于二者的比值确定面积比。又比如,获取第一深度图像帧中像素的总个数,以及可信像素的个数,基于可信像素的个数和总个数的比值确定面积比。正如以上分析的,面积比表示第二图像帧和第一图像帧实际上对应的物体的相似度,面积比越大,第二图像帧和第一图像帧实际上越可能拍摄的是物体的同一个位置的深度信息,因此,二者包含的深度信息具有大量的重复,基于两张图像帧获取针对同一个物体的相同点的深度信息可能会互相影响,导致测量精度不高,并且会提高计算量,因而,我们可以基于区域面积调整深度图像帧的采集间隔,并根据采集间隔获取下一帧深度图像帧,大大降低了计算量。
具体而言,在本申请的一个实施例中,当上述预设条件是面积比时,判断面积比是否大于预设阈值,该预设阈值是根据大量实验数据标定的,若面积比大于该预设阈值,则表明两帧图像帧获取的深度值大部分是基于同一个物体的同一个部分,因此,图像帧获取的新增深度信息不多,此时为了节约处理资源,可以增大深度图像帧的采集间隔,反之,可以降低图像帧的采样间隔,来保证高动态信息的处理全面。
在本申请的一个实施例中，当面积比大于预设阈值时，认为第一图像帧在时间维度上噪声很小，则提高深度图像帧的采样时间间隔。为了进一步提高图像处理效率，若多个连续深度图像帧的可信区域都较大，则为了避免不同深度图像帧之间的微小差异、提高处理效率，可以对可信区域掩码进行形态学腐蚀操作，并将腐蚀结果作为后续连续若干帧的可信区域，由此，这些连续若干帧的可信区域以该掩码标记进行直接替换，一方面，避免了不同深度图像帧之间的微小差异，另一方面，无需对连续若干帧的可信区域进行深度平滑处理，保留的非可信区域也无需处理以保留其高动态信息，大大降低了计算量。
上述被替换可信区域的连续若干帧的数量根据上述面积比的大小和深度图像帧的获取频率确定，上述面积比越大、获取频率越高，则表明多个连续的深度图像帧的可信区域越相似，因而，连续若干帧的数量越高。当然，也可以大致检测第一图像帧和其后面连续多帧之间的深度值的差值，将深度值差值小于预设阈值的深度平缓变化区域的面积大于一定值的深度图像帧，作为连续若干帧中的深度图像帧。
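作为上述形态学腐蚀操作的一个示意，下面给出一段基于OpenCV的Python代码草图（结构元尺寸 kernel_size 为示例假设），对可信区域掩码进行腐蚀，并将结果作为后续连续若干帧共用的可信区域：

```python
import numpy as np
import cv2

def erode_trusted_mask(trusted_mask, kernel_size=5):
    """对可信区域掩码做形态学腐蚀，得到可供后续连续若干帧直接复用的可信区域（示意实现）。"""
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    eroded = cv2.erode(trusted_mask.astype(np.uint8), kernel, iterations=1)
    return eroded.astype(bool)
```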
在本实施例中,判断面积比是否大于预设阈值后,若大于预设阈值,则获取面积比与预设阈值的差值并根据差值确定采集间隔增加值,根据采集间隔增加值和初始采样间隔之和确定采集间隔,即如图4所示扩大采集间隔,若小于等于预设阈值,则获取预设阈值与面积比的差值并根据差值确定采集间隔降低值,根据初始采样间隔和采集间隔降低值之差确定采集间隔,即降低采集间隔。
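下面的Python代码草图示意上述采集间隔的调整逻辑，其中面积比与间隔增减值之间采用线性映射仅作举例，预设阈值 ratio_threshold 与缩放系数 step_scale 均为示例假设：

```python
def adjust_capture_interval(trusted_area, total_area, init_interval,
                            ratio_threshold=0.8, step_scale=10.0):
    """根据可信区域面积比自适应调整深度图像帧的采集间隔（示意实现）。"""
    area_ratio = trusted_area / float(total_area)
    if area_ratio > ratio_threshold:
        # 面积比大于预设阈值：由差值确定采集间隔增加值，扩大采集间隔
        increase = (area_ratio - ratio_threshold) * step_scale
        return init_interval + increase
    # 面积比小于等于预设阈值：由差值确定采集间隔降低值，缩小采集间隔
    decrease = (ratio_threshold - area_ratio) * step_scale
    return max(init_interval - decrease, 1.0)
```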
进一步的，由于测量过程中存在着各类不确定性，带来了多种误差，在离线标定阶段已经对多种误差进行了修正，但是由于这些误差具有很大的随机性，这造成了在测量范围内ToF的深度测量误差大约为1%。在计算物体的深度值时，基于该固定的深度测量误差进行深度值的平滑处理。若在一定时间内，深度值的误差是固定的，即具有时间一致性，则对深度值的精确计算具有较大意义，因此，亟需一种方法能够保证深度误差在短时间内具有时间一致性，不会发生深度误差的跳变。
因此,本申请还提出了一种时间一致性滤波处理方法。
为了使得本领域的技术人员,更加清楚的理解本申请的深度图的滤波处理方法的时机,下面结合图2对TOF的深度图处理的整个流程进行说明,如图2所示,ToF传感器发射经过调制的脉冲信号,待测量物体表面接收到脉冲信号并反射信号,然后ToF传感器接收到反射信号,并对多频相位图解码,接着根据标定参数对ToF数据进行误差修正,然后对多频信号去混叠,并将深度值由径向坐标系转换到笛卡尔坐标系,最后对深度图进行时间一致性滤波,输出时间维度上相对平滑的深度结果。
其中,深度时间一致性滤波方案包括两个主要阶段:ToF原始深度值计算阶段和深度时间一致性滤波阶段,其中,如图3所示,ToF原始深度值计算阶段包括:基于获取的ToF传感器处理原始相位图(单频模式下为四相位图,双频模式下为八相位图,假设本实施例中为双频模式),计算每个像素的IQ信号,进而,根据IQ信号计算每个像素的相位和置信度,其中置信度表示该点相位值的可信度,是该点能量大小的反应,根据ToF离线标定的内参在线修正几种误差,包括循环误差,温度误差,梯度误差,视差误差等,在双频去混叠前进行前滤波,以分别过滤各频率模式下的噪声,在去除双频的噪声后,对双频进行混叠,确定每个像素的真实周期数,基于该真实周期数对混叠的结果进行后滤波,进而将后滤波后的径向坐标系转换到笛卡尔坐标系,进行下一步的处理。
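为帮助理解上述原始深度值计算阶段中IQ信号、相位与置信度的含义，下面给出一段简化的Python代码草图（以单频四相位解调为例，采用常见的四相位计算形式，省略了误差修正、双频去混叠、前后滤波与坐标系转换等步骤，仅作说明之用）：

```python
import numpy as np

def iq_phase_confidence(c0, c90, c180, c270):
    """由四个相位采样计算每个像素的IQ信号、相位与置信度（简化示意）。"""
    i = c0.astype(np.float32) - c180.astype(np.float32)
    q = c90.astype(np.float32) - c270.astype(np.float32)
    phase = np.arctan2(q, i)               # 每个像素的相位值
    confidence = np.sqrt(i ** 2 + q ** 2)  # 置信度，反映该点能量大小
    return i, q, phase, confidence
```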
在深度时间一致性滤波阶段,如图4所示,在扩大采样间隔之前,本申请的实施例中在获取到笛卡尔坐标系下的原始深度图后,基于像素点之间的内容差确定可信像素组成的可信区域,和不可信像素组成的非可信区域,进而,根据掩码进行分区域平滑,具体操作为, 如该区域为可信像素所在的可信区域,则进行平滑,以实现基于时间一致性的滤波,若该区域为非可信像素所在的非可信区域,则不对该区域进行平滑,以保证该区域的高动态信息。
具体而言,在进行深度值的时间一致性滤波时,可以确定与可信像素对应的平滑因子,进而,基于该平滑因子和第二深度图像帧中与可信像素对应的第二像素的深度值,对可信像素的深度值进行滤波处理。
在本申请的一个实施例中,判断内容差和预设阈值的差值,并确定与差值对应的因子提高值,比如预先建立差值和因子提高值的对应关系,基于该对应关系获取对应的因子提高值,进而,根据预设的初始平滑因子和因子提高值之和获取平滑因子,也就是说在初始平滑因子的基础上进行适应性的增加。
在本申请的一个实施例中,确定每个可信像素的深度值和对应的第二像素的深度值的深度差值,获取每个可信像素的第一灰度值和对应的第二像素的第二灰度值,确定第一灰度值和第二灰度值的灰度差值,进而,获取与深度差值对应的第一权重系数,根据第一权重系数确定与灰度差值对应的第二权重系数,其中,第一权重系数为根据应用需要确定的,第一权重系数越高则表明当前平滑处理时越侧重于考虑像素之间的深度差值,第二权重系数可以与第一权重系数成反比关系,比如,第一权重系数=1-第二权重系数等,由此,保证在同一个平滑场景中,对灰度差值和深度差值的考量具有不同的侧重点。
进一步的，根据预设的计算公式对每个可信像素的第一深度值、第一权重系数、第二权重系数、灰度差值、深度差值、平滑因子进行计算，获取第一平滑值，该第一平滑值可以表示平滑处理的程度，可信像素的可信度越高，则其对应的第一平滑值越高。
更进一步的,根据第一平滑值和第二深度图像帧中与可信像素对应的像素的深度值,对可信像素的深度值进行滤波处理。
在本实施例中，可以根据第一平滑值确定第二平滑值，其中，第一平滑值与第二平滑值成反比关系，获取第二平滑值和可信像素的深度值的第一乘积，获取第一平滑值和第二深度图像帧中与可信像素对应的像素的深度值的第二乘积，根据第一乘积和第二乘积之和对可信像素单元的深度值进行滤波处理。即可信像素点深度值=与可信像素对应的像素的深度值*第一平滑值+可信像素的深度值*第二平滑值，由于第一平滑值和第二平滑值成反比关系，比如，第一平滑值=1-第二平滑值，因此，第一平滑值越大，则第二平滑值越小，另外，当平滑因子与像素的可信度成正比关系时，平滑因子和第一平滑值为正比关系，可信像素对应的平滑因子较大，因而，对应的第一平滑值较大，基于上述公式，可信像素点的深度值以较大比重参考对应的第二像素的深度值，从而平衡了可信像素的误差。
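上述滤波过程可用如下示意性的Python代码草图表示（其中以“第一平滑值 + 第二平滑值 = 1”为例，变量名均为说明用的假设）：

```python
def temporal_filter(cur_depth, ref_depth, w1):
    """时间一致性滤波（示意实现）：
    滤波后深度 = 第二深度图像帧中对应像素的深度值 * 第一平滑值
               + 可信像素的深度值 * 第二平滑值"""
    w2 = 1.0 - w1   # 第二平滑值，与第一平滑值成反比（此处以 1 - w1 为例）
    return ref_depth * w1 + cur_depth * w2
```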
需要说明的是，上述预设的计算公式用于对对应的像素的深度值的测量误差进行平衡，理论上像素的可信程度越低，比如深度差值越大，则参考当前像素本身深度值的程度就应该越大，以保留当前像素的高动态信息；当平滑因子与像素的可信程度成正比关系时，预设平滑函数用于指示平滑因子和参考像素本身深度值的程度成正比关系，当平滑因子与像素的可信程度成反比关系时，预设平滑函数用于指示平滑因子和参考像素本身深度值的程度成反比关系。如下公式(1)所示，当平滑因子s与对应像素的可信程度成正比关系时，对应的权重w1与平滑因子成正比关系：
（公式(1)以附图 PCTCN2020097460-appb-000004 的形式给出，此处未在文本中复现。）
其中,w1为第一平滑值,s为平滑因子,diff1为深度差值,diff2为灰度差值,d为第一权重系数,1-d为第二权重系数,σ是每个可信像素的深度值和预设标准误差的乘积值。其中,预设标准误差是由温度误差等导致的深度值的经验测量误差,可以为1%等。
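由于公式(1)在公开文本中以附图形式给出、此处未能复现，下面仅给出一个假设的指数加权形式的Python代码草图，用以说明各变量（s、diff1、diff2、d、σ）在计算第一平滑值时可能的组合方式；该形式仅为假设示例，并非原公式本身：

```python
import numpy as np

def first_smooth_value(depth, diff1, diff2, s, d, std_err=0.01):
    """第一平滑值w1的一种假设性计算形式（仅用于说明各变量的作用，非原公式）。"""
    sigma = depth * std_err                    # σ：可信像素深度值与预设标准误差的乘积
    blended = d * diff1 + (1.0 - d) * diff2    # 以第一/第二权重系数融合深度差值与灰度差值
    w1 = s * np.exp(-(blended ** 2) / (2.0 * sigma ** 2 + 1e-12))
    return np.clip(w1, 0.0, 1.0)
```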
综上,本申请实施例的深度图处理方法,根据相邻帧像素之间的内容差,将深度图像帧分为可信区域和非可信区域,分区域进行平滑,有效的使深度平缓变化区域在时间维度上深度值更为平滑,保证了图像帧滤波后的深度值误差具有时间一致性,而深度快速变化区域又保持了原来的高动态性,同时根据可信区域占全画幅有效区域的面积比,自适应的扩大了采集间隔,降低了时间一致性滤波的计算量。
为了实现上述实施例,本申请还提出一种深度图处理装置。图5是根据本申请一个实施例的深度图处理装置的结构示意图。如图5所示,该深度图处理装置,包括:第一获取模块10、第二获取模块20、第一确定模块30、处理模块40,其中,
第一获取模块10,用于获取第一深度图像帧和与第一深度图像帧相邻的第二深度图像帧,其中,第一深度图像帧和第二深度图像帧中的各个像素均包含深度值,第一深度图像帧中的每个第一像素在第二深度图像帧中包含对应的第二像素。
第二获取模块20,用于确定每个第一像素的第一内容值以及对应的第二像素的第二内容值,获取第一内容值和第二内容值的内容差。
需要说明的是,上述内容值在不同的应用场景中,包含不同的参数,示例如下:
第一种示例:
在本示例中，内容值为深度值的置信度，其中，深度值的置信度表示该点深度值的能量大小，可以理解，第一像素和第二像素的深度值的置信度越接近，则第一像素和第二像素越可能对应于物体的同一个点，因而，第二获取模块20可以计算第一像素的置信度和第二像素的置信度的差值作为内容差。
第二种示例：内容值为像素的灰度值，可以理解，第一像素和第二像素的灰度值越接近，则第一像素和第二像素越可能对应于物体的同一个点，因而，第二获取模块20可以计算第一像素和第二像素的灰度值的差值作为内容差。
第一确定模块30,用于根据内容差在第一深度图像帧中确定可信像素,并确定可信像素的所在区域面积。
具体的，正如以上分析的，内容差越小则表明第一像素和第二像素越有可能对应于物体的同一个点，从而，根据内容差在第一深度图像帧中确定可信像素，可信像素对应于内容差较小的像素。
进而,在确定可信像素后,第一确定模块30确定可信像素所在区域的面积,比如,可以确定第一深度图像帧中所有的可信像素,第一确定模块30基于所有可信像素组成的区域确定面积大小。
处理模块40,用于当区域面积满足预设条件时,根据区域面积调整深度图像帧的采集间隔,并根据采集间隔获取待处理的深度图像帧。
正如以上分析的，面积比表示第二图像帧和第一图像帧实际上对应的物体的相似度，面积比越大，第二图像帧和第一图像帧越可能拍摄的是物体同一位置的深度信息，因此，二者包含的深度信息具有大量的重复，基于两张图像帧获取针对同一个物体的相同点的深度信息可能会互相影响，导致测量精度不高，因而，可以基于区域面积调整深度图像帧的采集间隔，并根据采集间隔处理下一帧深度图像帧等。
在本申请的一个实施例中,如图6所示,在如图5所示的基础上,还包括:第三获取模块50,用于获取可信像素的所在区域面积和第一深度图像帧的总面积的面积比。
具体的，第三获取模块50获取可信像素的所在区域面积和第一深度图像帧的总面积的面积比，该面积比反映了当前第一深度图像帧和第二深度图像帧中对应于物体同一点的像素所占的比例，显然该面积比越大，则表明第二图像帧和第一图像帧越可能拍摄的是物体同一位置的深度信息。
处理模块40包括:判断单元41、第一确定单元42和第二确定单元43,其中,判断单元41判断面积比是否大于预设阈值,第一确定单元42在面积比大于预设阈值时,获取面积比与预设阈值的差值并根据差值确定采集间隔增加值,根据采集间隔增加值和初始采样间隔之和确定采集间隔,第二确定单元43在面积比小于等于预设阈值时,获取预设阈值与面积比的差值并根据差值确定采集间隔降低值,根据初始采样间隔和采集间隔降低值之差确定采集间隔。
需要说明的是,前述对深度图处理方法实施例的解释说明也适用于该实施例的深度图处理装置,此处不再赘述。
综上，本申请实施例的深度图处理装置，根据相邻帧像素之间的内容差，将深度图像帧分为可信区域和非可信区域，分区域进行平滑，有效的使深度平缓变化区域在时间维度上深度值更为平滑，保证了图像帧滤波后的深度值误差具有时间一致性，而深度快速变化区域又保持了原来的高动态性，同时根据可信区域占全画幅有效区域的面积比，自适应的扩大了采集间隔，降低了计算量。
为了实现上述实施例,本申请还提出一种电子设备,包括存储器、处理器及存储在存储器上并可在处理器上运行的计算机程序,处理器执行计算机程序时,实现如前述实施例所描述的深度图处理方法。
为了实现上述实施例,本申请还提出一种非临时性计算机可读存储介质,其上存储有计算机程序,所述计算机程序被处理器执行时实现如前述方法实施例所描述的深度图处理方法。
在本说明书的描述中,参考术语“一个实施例”、“一些实施例”、“示例”、“具体示例”、或“一些示例”等的描述意指结合该实施例或示例描述的具体特征、结构、材料或者特点包含于本申请的至少一个实施例或示例中。在本说明书中,对上述术语的示意性表述不必须针对的是相同的实施例或示例。而且,描述的具体特征、结构、材料或者特点可以在任一个或多个实施例或示例中以合适的方式结合。此外,在不相互矛盾的情况下,本领域的技术人员可以将本说明书中描述的不同实施例或示例以及不同实施例或示例的特征进行结合和组合。
此外,术语“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括至少一个该特征。在本申请的描述中,“多个”的含义是至少两个,例如两个,三个等,除非另有明确具体的限定。
流程图中或在此以其他方式描述的任何过程或方法描述可以被理解为,表示包括一个或更多个用于实现定制逻辑功能或过程的步骤的可执行指令的代码的模块、片段或部分,并且本申请的优选实施方式的范围包括另外的实现,其中可以不按所示出或讨论的顺序,包括根据所涉及的功能按基本同时的方式或按相反的顺序,来执行功能,这应被本申请的实施例所属技术领域的技术人员所理解。
在流程图中表示或在此以其他方式描述的逻辑和/或步骤,例如,可以被认为是用于实现逻辑功能的可执行指令的定序列表,可以具体实现在任何计算机可读介质中,以供指令执行系统、装置或设备(如基于计算机的系统、包括处理器的系统或其他可以从指令执行系统、装置或设备取指令并执行指令的系统)使用,或结合这些指令执行系统、装置或设备而使用。就本说明书而言,"计算机可读介质"可以是任何可以包含、存储、通信、传播或传输程序以供指令执行系统、装置或设备或结合这些指令执行系统、装置或设备而使用的装置。计算机可读介质的更具体的示例(非穷尽性列表)包括以下:具有一个或多个布 线的电连接部(电子装置),便携式计算机盘盒(磁装置),随机存取存储器(RAM),只读存储器(ROM),可擦除可编辑只读存储器(EPROM或闪速存储器),光纤装置,以及便携式光盘只读存储器(CDROM)。另外,计算机可读介质甚至可以是可在其上打印所述程序的纸或其他合适的介质,因为可以例如通过对纸或其他介质进行光学扫描,接着进行编辑、解译或必要时以其他合适方式进行处理来以电子方式获得所述程序,然后将其存储在计算机存储器中。
应当理解,本申请的各部分可以用硬件、软件、固件或它们的组合来实现。在上述实施方式中,多个步骤或方法可以用存储在存储器中且由合适的指令执行系统执行的软件或固件来实现。如,如果用硬件来实现和在另一实施方式中一样,可用本领域公知的下列技术中的任一项或他们的组合来实现:具有用于对数据信号实现逻辑功能的逻辑门电路的离散逻辑电路,具有合适的组合逻辑门电路的专用集成电路,可编程门阵列(PGA),现场可编程门阵列(FPGA)等。
本技术领域的普通技术人员可以理解实现上述实施例方法携带的全部或部分步骤是可以通过程序来指令相关的硬件完成,所述的程序可以存储于一种计算机可读存储介质中,该程序在执行时,包括方法实施例的步骤之一或其组合。
此外,在本申请各个实施例中的各功能可以集成在一个处理模块中,也可以是各个单独物理存在,也可以两个或两个以上集成在一个模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。所述集成的模块如果以软件功能模块的形式实现并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。上述提到的存储介质可以是只读存储器,磁盘或光盘等。尽管上面已经示出和描述了本申请的实施例,可以理解的是,上述实施例是示例性的,不能理解为对本申请的限制,本领域的普通技术人员在本申请的范围内可以对上述实施例进行变化、修改、替换和变型。

Claims (20)

  1. 一种深度图处理方法,其特征在于,包括以下步骤:
    获取第一深度图像帧和与所述第一深度图像帧相邻的第二深度图像帧,其中,所述第一深度图像帧和所述第二深度图像帧中的各个像素均包含深度值,所述第一深度图像帧中的每个第一像素在所述第二深度图像帧中包含对应的第二像素;
    确定所述每个第一像素的第一内容值以及对应的所述第二像素的第二内容值,获取所述第一内容值和所述第二内容值的内容差;
    根据所述内容差在所述第一深度图像帧中确定可信像素,并确定所述可信像素的所在区域面积;
    当所述区域面积满足预设条件时,根据所述区域面积调整深度图像帧的采集间隔,并根据所述采集间隔获取待处理的深度图像帧。
  2. 如权利要求1所述的方法，其特征在于，当所述区域面积满足预设条件时，根据所述区域面积调整深度图像帧的采集间隔，包括：
    获取所述可信像素的所在区域面积和所述第一深度图像帧的总面积的面积比;
    判断所述面积比是否大于预设阈值;
    若大于所述预设阈值,则获取所述面积比与所述预设阈值的差值并根据所述差值确定采集间隔增加值,根据所述采集间隔增加值和初始采样间隔之和确定所述采集间隔;
    若小于等于所述预设阈值,则获取所述预设阈值与所述面积比的差值并根据所述差值确定采集间隔降低值,根据所述初始采样间隔和所述采集间隔降低值之差确定所述采集间隔。
  3. 如权利要求1所述的方法,其特征在于,在所述根据所述内容差在所述第一深度图像帧中确定可信像素之后,还包括:
    确定与所述可信像素对应的平滑因子;
    根据所述平滑因子和所述第二深度图像帧中与所述可信像素对应的第二像素的深度值,对所述可信像素的深度值进行滤波处理。
  4. 如权利要求3所述的方法,其特征在于,所述根据所述平滑因子和所述第二深度图像帧中与所述可信像素对应的像素的深度值,对所述可信像素的深度值进行滤波处理,包括:
    确定所述每个可信像素的深度值和对应的所述第二像素的深度值的深度差值;
    获取所述每个可信像素的第一灰度值和对应的所述第二像素的第二灰度值,确定所述第一灰度值和所述第二灰度值的灰度差值;
    获取与所述深度差值对应的第一权重系数，根据所述第一权重系数确定与所述灰度差值对应的第二权重系数；
    根据预设的计算公式对所述每个可信像素的所述第一深度值、所述第一权重系数、所述第二权重系数、所述灰度差值、所述深度差值、所述平滑因子计算,获取所述第一平滑值;
    根据所述第一平滑值和所述第二深度图像帧中与所述可信像素对应的像素的深度值,对所述可信像素的深度值进行滤波处理。
  5. 如权利要求4所述的方法,其特征在于,所述根据所述第一平滑值和所述第二深度图像帧中与所述可信像素对应的像素的深度值,对所述可信像素的深度值进行滤波处理,包括:
    根据所述第一平滑值确定第二平滑值,其中,所述第一平滑值与所述第二平滑值成反比关系;
    获取所述第二平滑值和所述可信像素的深度值的第一乘积;
    获取所述第一平滑值和所述第二深度图像帧中与所述可信像素对应的像素的深度值的第二乘积;
    根据所述第一乘积和所述第二乘积之和对所述可信像素单元的深度值滤波处理。
  6. 如权利要求5所述的方法,其特征在于,所述预设的计算公式,包括:
    （计算公式以附图 PCTCN2020097460-appb-100001 的形式给出，此处未在文本中复现。）
    其中,w1为所述第一平滑值,s为所述平滑因子,diff1为所述深度差值,diff2为所述灰度差值,d为所述第一权重系数,1-d为所述第二权重系数,σ是所述每个可信像素的深度值和预设标准误差的乘积值。
  7. 一种深度图处理装置,其特征在于,包括:
    第一获取模块,用于获取第一深度图像帧和与所述第一深度图像帧相邻的第二深度图像帧,其中,所述第一深度图像帧和所述第二深度图像帧中的各个像素均包含深度值,所述第一深度图像帧中的每个第一像素在所述第二深度图像帧中包含对应的第二像素;
    第二获取模块,用于确定所述每个第一像素的第一内容值以及对应的所述第二像素的第二内容值,获取所述第一内容值和所述第二内容值的内容差;
    第一确定模块,用于根据所述内容差在所述第一深度图像帧中确定可信像素,并确定所述可信像素的所在区域面积;
    处理模块,用于当所述区域面积满足预设条件时,根据所述区域面积调整深度图像帧的采集间隔,并根据所述采集间隔获取待处理的深度图像帧。
  8. 如权利要求7所述的装置,其特征在于,还包括:
    第三获取模块,用于获取所述可信像素的所在区域面积和所述第一深度图像帧的总面积的面积比;
    所述处理模块,包括:
    判断单元,用于判断所述面积比是否大于预设阈值;
    第一确定单元,用于在所述面积比大于所述预设阈值时,获取所述面积比与所述预设阈值的差值并根据所述差值确定采集间隔增加值,根据所述采集间隔增加值和初始采样间隔之和确定所述采集间隔;
    第二确定单元,用于在所述面积比小于等于所述预设阈值时,获取所述预设阈值与所述面积比的差值并根据所述差值确定采集间隔降低值,根据所述初始采样间隔和所述采集间隔降低值之差确定所述采集间隔。
  9. 一种电子设备,其特征在于,包括存储器、处理器及存储在所述存储器上并可在所述处理器上运行的计算机程序,所述处理器执行所述计算机程序时,实现以下深度图处理方法:
    获取第一深度图像帧和与所述第一深度图像帧相邻的第二深度图像帧,其中,所述第一深度图像帧和所述第二深度图像帧中的各个像素均包含深度值,所述第一深度图像帧中的每个第一像素在所述第二深度图像帧中包含对应的第二像素;
    确定所述每个第一像素的第一内容值以及对应的所述第二像素的第二内容值,获取所述第一内容值和所述第二内容值的内容差;
    根据所述内容差在所述第一深度图像帧中确定可信像素,并确定所述可信像素的所在区域面积;
    当所述区域面积满足预设条件时,根据所述区域面积调整深度图像帧的采集间隔,并根据所述采集间隔获取待处理的深度图像帧。
  10. 如权利要求9所述的电子设备,其特征在于,当所述区域面积满足预设条件时,所述处理器执行所述计算机程序时,还实现以下深度图处理方法:
    获取所述可信像素的所在区域面积和所述第一深度图像帧的总面积的面积比;
    判断所述面积比是否大于预设阈值;
    若大于所述预设阈值,则获取所述面积比与所述预设阈值的差值并根据所述差值确定采集间隔增加值,根据所述采集间隔增加值和初始采样间隔之和确定所述采集间隔;
    若小于等于所述预设阈值,则获取所述预设阈值与所述面积比的差值并根据所述差值确定采集间隔降低值,根据所述初始采样间隔和所述采集间隔降低值之差确定所述采集间隔。
  11. 如权利要求9所述的电子设备,其特征在于,所述处理器执行所述计算机程序时, 还实现以下深度图处理方法:
    确定与所述可信像素对应的平滑因子;
    根据所述平滑因子和所述第二深度图像帧中与所述可信像素对应的第二像素的深度值,对所述可信像素的深度值进行滤波处理。
  12. 如权利要求11所述的电子设备,其特征在于,所述处理器执行所述计算机程序时,还实现以下深度图处理方法:
    确定所述每个可信像素的深度值和对应的所述第二像素的深度值的深度差值;
    获取所述每个可信像素的第一灰度值和对应的所述第二像素的第二灰度值,确定所述第一灰度值和所述第二灰度值的灰度差值;
    获取与所述深度差值对应的第一权重系数,根据所述第一权重系数确定与所述灰度差值对应的第二权重系数;
    根据预设的计算公式对所述每个可信像素的所述第一深度值、所述第一权重系数、所述第二权重系数、所述灰度差值、所述深度差值、所述平滑因子计算,获取所述第一平滑值;
    根据所述第一平滑值和所述第二深度图像帧中与所述可信像素对应的像素的深度值,对所述可信像素的深度值进行滤波处理。
  13. 如权利要求12所述的电子设备,其特征在于,所述处理器执行所述计算机程序时,还实现以下深度图处理方法:
    根据所述第一平滑值确定第二平滑值,其中,所述第一平滑值与所述第二平滑值成反比关系;
    获取所述第二平滑值和所述可信像素的深度值的第一乘积;
    获取所述第一平滑值和所述第二深度图像帧中与所述可信像素对应的像素的深度值的第二乘积;
    根据所述第一乘积和所述第二乘积之和对所述可信像素单元的深度值滤波处理。
  14. 如权利要求13所述的电子设备,其特征在于,所述预设的计算公式,包括:
    （计算公式以附图 PCTCN2020097460-appb-100002 的形式给出，此处未在文本中复现。）
    其中,w1为所述第一平滑值,s为所述平滑因子,diff1为所述深度差值,diff2为所述灰度差值,d为所述第一权重系数,1-d为所述第二权重系数,σ是所述每个可信像素的深度值和预设标准误差的乘积值。
  15. 一种非临时性计算机可读存储介质,其上存储有计算机程序,其特征在于,所述计算机程序被处理器执行时实现以下深度图处理方法:
    获取第一深度图像帧和与所述第一深度图像帧相邻的第二深度图像帧,其中,所述第一深度图像帧和所述第二深度图像帧中的各个像素均包含深度值,所述第一深度图像帧中的每个第一像素在所述第二深度图像帧中包含对应的第二像素;
    确定所述每个第一像素的第一内容值以及对应的所述第二像素的第二内容值,获取所述第一内容值和所述第二内容值的内容差;
    根据所述内容差在所述第一深度图像帧中确定可信像素,并确定所述可信像素的所在区域面积;
    当所述区域面积满足预设条件时,根据所述区域面积调整深度图像帧的采集间隔,并根据所述采集间隔获取待处理的深度图像帧。
  16. 如权利要求15所述的非临时性计算机可读存储介质,其特征在于,当所述区域面积满足预设条件时,所述计算机程序被处理器执行时还实现以下深度图处理方法:
    获取所述可信像素的所在区域面积和所述第一深度图像帧的总面积的面积比;
    判断所述面积比是否大于预设阈值;
    若大于所述预设阈值,则获取所述面积比与所述预设阈值的差值并根据所述差值确定采集间隔增加值,根据所述采集间隔增加值和初始采样间隔之和确定所述采集间隔;
    若小于等于所述预设阈值,则获取所述预设阈值与所述面积比的差值并根据所述差值确定采集间隔降低值,根据所述初始采样间隔和所述采集间隔降低值之差确定所述采集间隔。
  17. 如权利要求15所述的非临时性计算机可读存储介质,其特征在于,所述计算机程序被处理器执行时还实现以下深度图处理方法:
    确定与所述可信像素对应的平滑因子;
    根据所述平滑因子和所述第二深度图像帧中与所述可信像素对应的第二像素的深度值,对所述可信像素的深度值进行滤波处理。
  18. 如权利要求17所述的非临时性计算机可读存储介质,其特征在于,所述计算机程序被处理器执行时还实现以下深度图处理方法:
    确定所述每个可信像素的深度值和对应的所述第二像素的深度值的深度差值;
    获取所述每个可信像素的第一灰度值和对应的所述第二像素的第二灰度值,确定所述第一灰度值和所述第二灰度值的灰度差值;
    获取与所述深度差值对应的第一权重系数,根据所述第一权重系数确定与所述灰度差值对应的第二权重系数;
    根据预设的计算公式对所述每个可信像素的所述第一深度值、所述第一权重系数、所述第二权重系数、所述灰度差值、所述深度差值、所述平滑因子计算，获取所述第一平滑值；
    根据所述第一平滑值和所述第二深度图像帧中与所述可信像素对应的像素的深度值,对所述可信像素的深度值进行滤波处理。
  19. 如权利要求18所述的非临时性计算机可读存储介质,其特征在于,所述计算机程序被处理器执行时还实现以下深度图处理方法:
    根据所述第一平滑值确定第二平滑值,其中,所述第一平滑值与所述第二平滑值成反比关系;
    获取所述第二平滑值和所述可信像素的深度值的第一乘积;
    获取所述第一平滑值和所述第二深度图像帧中与所述可信像素对应的像素的深度值的第二乘积;
    根据所述第一乘积和所述第二乘积之和对所述可信像素单元的深度值滤波处理。
  20. 如权利要求19所述的非临时性计算机可读存储介质,其特征在于,所述预设的计算公式,包括:
    （计算公式以附图 PCTCN2020097460-appb-100003 的形式给出，此处未在文本中复现。）
    其中,w1为所述第一平滑值,s为所述平滑因子,diff1为所述深度差值,diff2为所述灰度差值,d为所述第一权重系数,1-d为所述第二权重系数,σ是所述每个可信像素的深度值和预设标准误差的乘积值。
PCT/CN2020/097460 2019-07-11 2020-06-22 深度图处理方法和装置 WO2021004260A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910626061.1A CN110378853B (zh) 2019-07-11 2019-07-11 深度图处理方法和装置
CN201910626061.1 2019-07-11

Publications (1)

Publication Number Publication Date
WO2021004260A1 true WO2021004260A1 (zh) 2021-01-14

Family

ID=68252815

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/097460 WO2021004260A1 (zh) 2019-07-11 2020-06-22 深度图处理方法和装置

Country Status (2)

Country Link
CN (1) CN110378853B (zh)
WO (1) WO2021004260A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110378853B (zh) * 2019-07-11 2021-03-26 Oppo广东移动通信有限公司 深度图处理方法和装置
CN111314613B (zh) * 2020-02-28 2021-12-24 重庆金山医疗技术研究院有限公司 图像传感器、图像处理设备、图像处理方法及存储介质
CN116324867A (zh) * 2020-11-24 2023-06-23 Oppo广东移动通信有限公司 图像处理方法、图像处理装置、摄像头组件及电子设备

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102203829A (zh) * 2008-11-04 2011-09-28 皇家飞利浦电子股份有限公司 用于生成深度图的方法和设备
US20150339824A1 (en) * 2014-05-20 2015-11-26 Nokia Corporation Method, apparatus and computer program product for depth estimation
CN108269280A (zh) * 2018-01-05 2018-07-10 厦门美图之家科技有限公司 一种深度图像的处理方法及移动终端
CN110378853A (zh) * 2019-07-11 2019-10-25 Oppo广东移动通信有限公司 深度图处理方法和装置

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7825990B2 (en) * 2005-06-03 2010-11-02 Texas Instruments Incorporated Method and apparatus for analog graphics sample clock frequency offset detection and verification
US8891905B2 (en) * 2012-12-19 2014-11-18 Hong Kong Applied Science And Technology Research Institute Co., Ltd. Boundary-based high resolution depth mapping
CN103778418A (zh) * 2014-01-28 2014-05-07 华南理工大学 一种输电线路杆塔图像监测系统的山火图像识别方法
CN109724950A (zh) * 2017-10-27 2019-05-07 黄晓淳 具有自适应采样帧率的动态超分辨荧光成像技术
CN109683698B (zh) * 2018-12-25 2020-05-22 Oppo广东移动通信有限公司 支付验证方法、装置、电子设备和计算机可读存储介质
CN109751985A (zh) * 2019-03-04 2019-05-14 南京理工大学 一种基于安防摄像机的水库堤坝散浸监测方法

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102203829A (zh) * 2008-11-04 2011-09-28 皇家飞利浦电子股份有限公司 用于生成深度图的方法和设备
US20150339824A1 (en) * 2014-05-20 2015-11-26 Nokia Corporation Method, apparatus and computer program product for depth estimation
CN108269280A (zh) * 2018-01-05 2018-07-10 厦门美图之家科技有限公司 一种深度图像的处理方法及移动终端
CN110378853A (zh) * 2019-07-11 2019-10-25 Oppo广东移动通信有限公司 深度图处理方法和装置

Also Published As

Publication number Publication date
CN110378853B (zh) 2021-03-26
CN110378853A (zh) 2019-10-25

Similar Documents

Publication Publication Date Title
WO2021004260A1 (zh) 深度图处理方法和装置
WO2021004262A1 (zh) 深度图处理方法及装置、电子设备和可读存储介质
WO2021004261A1 (zh) 深度数据的滤波方法、装置、电子设备和可读存储介质
WO2021004263A1 (zh) 深度图处理方法及装置、电子设备和可读存储介质
CN110390690B (zh) 深度图处理方法和装置
US9299163B2 (en) Method and apparatus for processing edges of a gray image
WO2021004216A1 (zh) 深度传感器的参数调整方法、装置以及电子设备
JP5613501B2 (ja) パイプ厚み計測装置及び方法
CN110400331B (zh) 深度图处理方法和装置
US11961246B2 (en) Depth image processing method and apparatus, electronic device, and readable storage medium
JP2017102245A5 (zh)
JP2018146477A (ja) 3次元形状計測装置及び3次元形状計測方法
KR101662407B1 (ko) 영상의 비네팅 보정 방법 및 장치
US9536169B2 (en) Detection apparatus, detection method, and storage medium
CN110400340B (zh) 深度图处理方法和装置
JP6556492B2 (ja) 散乱線推定方法、非一時的コンピュータ可読媒体及び散乱線推定装置
JPWO2003088648A1 (ja) 動き検出装置、画像処理システム、動き検出方法、プログラム、および記録媒体
EP2536123A1 (en) Image processing method and image processing apparatus
US9621780B2 (en) Method and system of curve fitting for common focus measures
US9392158B2 (en) Method and system for intelligent dynamic autofocus search
JP2005149266A (ja) 画像処理装置、画像処理方法及び画像処理プログラム
CN111815669A (zh) 目标跟踪方法、目标跟踪装置及存储装置
TWI826185B (zh) 外部參數判定方法及影像處理裝置
JP2019045990A (ja) 画像処理装置、画像処理方法、およびプログラム
CN117173156B (zh) 基于机器视觉的极片毛刺检测方法、装置、设备及介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20837019

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 20837019

Country of ref document: EP

Kind code of ref document: A1
