WO2023245378A1 - Method, device and storage medium for determining depth information confidence of an image - Google Patents

Method, device and storage medium for determining depth information confidence of an image

Info

Publication number
WO2023245378A1
WO2023245378A1 PCT/CN2022/099948
Authority
WO
WIPO (PCT)
Prior art keywords
image
confidence
window
depth
depth information
Prior art date
Application number
PCT/CN2022/099948
Other languages
English (en)
French (fr)
Inventor
Zhang Chao (张超)
Original Assignee
Beijing Xiaomi Mobile Software Co., Ltd. (北京小米移动软件有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co., Ltd.
Priority to CN202280004380.9A (published as CN117616456A)
Priority to PCT/CN2022/099948
Publication of WO2023245378A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery

Definitions

  • the present disclosure relates to the technical field of image processing, and in particular, to a method, device and storage medium for determining the depth information confidence of an image.
  • Depth imaging based on depth cameras is widely used in technical fields such as autonomous driving and three-dimensional reconstruction.
  • the reliability of the depth values of the pixels in a depth image, that is, the reliability of the depth information, is called the depth information confidence of the image.
  • the confidence of the depth image is calculated based on the energy received by the depth sensor.
  • the calculation method of the confidence relies on the output of the camera system itself and has low correlation with the scene. In scenes with special textures and special reflectivity, this confidence calculation method does not apply.
  • the present disclosure provides a method, device and storage medium for determining the depth information confidence of an image.
  • a method for determining depth information confidence of an image includes:
  • determining the depth information confidence of the depth image based on the initial depth information confidence and the texture consistency confidence within the window includes:
  • the sum of the product of the initial depth information confidence level and the first preset coefficient and the product of the intra-window texture consistency confidence level and the second preset coefficient is used as the depth information confidence level of the image.
  • determining the texture consistency confidence within the window based on the first depth image and the first color image includes:
  • the texture consistency confidence within the window is determined based on the first texture consistency confidence and the second texture consistency confidence.
  • obtaining the first window area in the second depth image and the second window area in the second color image includes:
  • the first window area in the second depth image is determined based on the first pixel points
  • the second window area in the second color image is determined based on the second pixel points.
  • determining the texture consistency confidence within the window based on the first texture consistency confidence and the second texture consistency confidence includes:
  • the sum of the product of the first texture consistency confidence and the third preset coefficient and the product of the second texture consistency confidence and the fourth preset coefficient is used as the texture consistency confidence within the window.
  • correcting the first depth image and the first color image to obtain the second depth image and the second color image includes:
  • under the same coordinate system, the coordinates of the target object in the first depth image are made the same as the coordinates of the target object in the first color image, and the second depth image and the second color image are obtained.
  • the first depth image is acquired by a depth camera; when the depth camera can acquire a first infrared grayscale image, the determining method further includes:
  • a window matching degree confidence is determined based on the first infrared grayscale image and the first color image.
  • determining the depth information confidence of the depth image based on the initial depth information confidence and the texture consistency confidence within the window includes:
  • determining the depth information confidence of the depth image based on the initial depth information confidence, the texture consistency confidence within the window and the window matching confidence includes:
  • the sum of the product of the initial depth information confidence and the first preset coefficient, the product of the intra-window texture consistency confidence and the second preset coefficient, and the product of the window matching degree confidence and the fifth preset coefficient is used as the depth information confidence of the image.
  • determining the window matching degree confidence based on the first infrared grayscale image and the first color image includes:
  • the window matching degree confidence is determined according to the third window area and the fourth window area.
  • obtaining the third window area in the second infrared grayscale image and the fourth window area in the third color image includes:
  • according to the third pixel point, a third window area in the second infrared grayscale image is determined, and according to the fourth pixel point, a fourth window area in the third color image is determined.
  • correcting the first infrared grayscale image and the first color image to obtain a second infrared grayscale image and a third color image includes:
  • a device for determining depth information confidence of an image where the determining device includes:
  • a first acquisition module configured to acquire a first depth image
  • a second acquisition module configured to acquire the first color image
  • a first determination module configured to determine an initial depth information confidence based on the first depth image
  • a second determination module configured to determine an intra-window texture consistency confidence based on the first depth image and the first color image
  • a third determination module is configured to determine the depth information confidence of the depth image according to the initial depth information confidence and the texture consistency confidence within the window.
  • the third determining module is further configured to:
  • the sum of the product of the initial depth information confidence level and the first preset coefficient and the product of the intra-window texture consistency confidence level and the second preset coefficient is used as the depth information confidence level of the image.
  • the second determination module is further configured to:
  • the texture consistency confidence within the window is determined based on the first texture consistency confidence and the second texture consistency confidence.
  • the second determination module is further configured to:
  • the first window area in the second depth image is determined based on the first pixel points
  • the second window area in the second color image is determined based on the second pixel points.
  • the second determination module is further configured to:
  • the sum of the product of the first texture consistency confidence and the third preset coefficient and the product of the second texture consistency confidence and the fourth preset coefficient is used as the texture consistency confidence within the window.
  • the second determination module is further configured to:
  • under the same coordinate system, the coordinates of the target object in the first depth image are made the same as the coordinates of the target object in the first color image, and the second depth image and the second color image are obtained.
  • the first depth image is acquired by a depth camera; when the depth camera can acquire a first infrared grayscale image, the determining device further includes:
  • a fourth determination module is configured to determine a window matching degree confidence based on the first infrared grayscale image and the first color image.
  • the third determining module is further configured to:
  • the third determining module is further configured to:
  • the sum of the product of the initial depth information confidence and the first preset coefficient, the product of the intra-window texture consistency confidence and the second preset coefficient, and the product of the window matching degree confidence and the fifth preset coefficient is used as the depth information confidence of the image.
  • the fourth determining module is further configured to:
  • the window matching degree confidence is determined according to the third window area and the fourth window area.
  • the fourth determining module is further configured to:
  • according to the third pixel point, a third window area in the second infrared grayscale image is determined, and according to the fourth pixel point, a fourth window area in the third color image is determined.
  • the fourth determining module is further configured to:
  • a mobile terminal including:
  • a memory used to store instructions executable by the processor
  • the processor is configured to perform the method according to any one of the first aspects of the embodiments of the present disclosure.
  • a non-transitory computer-readable storage medium which, when instructions in the storage medium are executed by a processor of a device, enables the device to perform the method according to the first aspect of the embodiments of the present disclosure.
  • Adopting the above method of the present disclosure has the following beneficial effects: the depth information confidence of the image is determined based on the initial depth information confidence and the intra-window texture consistency confidence, and the depth information feature quantities in different images can be comprehensively considered to accurately determine the depth information confidence of the image.
  • Figure 1 is a flowchart of a method for determining the depth information confidence of an image according to an exemplary embodiment
  • Figure 2 is a flow chart of a method for determining texture consistency confidence within a window based on the first depth image and the first color image in step S104 according to an exemplary embodiment
  • Figure 3 is a flowchart of a method for acquiring a first window area in a second depth image and a second window area in a second color image according to an exemplary embodiment
  • Figure 4 is a schematic diagram of a first window area and a second window area according to an exemplary embodiment
  • Figure 5 is a flowchart of a method for determining window matching degree confidence based on a first infrared grayscale image and a first color image according to an exemplary embodiment
  • Figure 6 is a flowchart of a method for acquiring a third window area in a second infrared grayscale image and a fourth window area in a third color image according to an exemplary embodiment
  • FIG. 7 is a schematic diagram of the third window area and the fourth window area according to an exemplary embodiment.
  • Figure 8 is a block diagram of a device for determining depth information confidence of an image according to an exemplary embodiment
  • Figure 9 is a block diagram of a mobile terminal according to an exemplary embodiment.
  • Figure 1 is a flowchart of a method for determining the depth information confidence of an image according to an exemplary embodiment. As shown in Figure 1, the determination method includes the following steps:
  • Step S101 Obtain the first depth image
  • Step S102 Obtain the first color image
  • Step S103 Determine the initial depth information confidence based on the first depth image
  • Step S104 Determine the texture consistency confidence within the window based on the first depth image and the first color image
  • Step S105 Determine the depth information confidence of the depth image based on the initial depth information confidence and the texture consistency confidence within the window.
  • a method for determining the depth information confidence of an image is proposed. The first depth image and the first color image are acquired respectively; the initial depth information confidence is determined based on the first depth image; the intra-window texture consistency confidence is determined based on the first depth image and the first color image; and the depth information confidence of the image is determined based on the initial depth information confidence and the intra-window texture consistency confidence.
  • the initial depth information confidence is determined based on the depth values of the pixels of the image. Taking different application scenarios into account, the intra-window texture consistency confidence is determined in combination with the color image, and the depth information confidence of the image is finally determined based on the initial depth information confidence and the intra-window texture consistency confidence, improving the accuracy of the depth information confidence of the image.
  • the first depth image can be obtained through a TOF (Time of Flight) depth camera.
  • the first color image may be acquired by an RGB camera.
  • the initial depth information confidence is determined based on the first depth image, that is, based on the depth values of the pixels of the image. For example, any pixel Pt0 in the depth image can be obtained, a window area determined with this pixel as the center point, the entire depth image traversed with this window area, and the matching degree between the depth values of all pixels in the window area and the preset depth value determined; the matching degree of the best-matching window is used as the initial depth information confidence. The preset depth value is the actual depth value.
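The traversal described above can be sketched in a few lines. This is only an illustration: the 8×8 window size (matching the examples later in the text) and the mapping from mean absolute deviation to a matching degree in (0, 1] are assumptions, not the patent's exact formula.

```python
def window_match(depth, cx, cy, half, preset):
    """Mean absolute deviation of window depths from the preset depth,
    mapped to a matching degree in (0, 1] (assumed mapping)."""
    vals = [depth[y][x]
            for y in range(cy - half, cy + half)
            for x in range(cx - half, cx + half)]
    mad = sum(abs(v - preset) for v in vals) / len(vals)
    return 1.0 / (1.0 + mad)

def initial_depth_confidence(depth, preset, half=4):
    """Traverse the depth image with an 8x8 window; the matching degree
    of the best-matching window is the initial confidence."""
    h, w = len(depth), len(depth[0])
    best = 0.0
    for cy in range(half, h - half + 1):
        for cx in range(half, w - half + 1):
            best = max(best, window_match(depth, cx, cy, half, preset))
    return best
```

A perfectly matching window yields confidence 1.0; larger deviations from the preset depth value push the score toward 0.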
  • step S104 the intra-window texture consistency confidence is determined based on the first depth image and the first color image. Since the feature amounts of depth information in images acquired by different cameras are different, that is, the feature amounts of depth information in the first depth image and the first color image are different, therefore, the feature amounts of depth information in the two images are combined to determine The texture consistency confidence within the window can obtain the depth information confidence of the image from multiple angles.
  • in step S105, when determining the depth information confidence of the image based on the initial depth information confidence and the intra-window texture consistency confidence, for example, the two confidences can be directly added, or the initial depth information confidence and the intra-window texture consistency confidence can be weighted and summed.
  • the first depth image and the first color image are respectively acquired, and the initial depth information confidence is determined based on the first depth image.
  • the intra-window texture consistency confidence is determined based on the first depth image and the first color image. The depth information confidence of the image is then determined based on the initial depth information confidence and the intra-window texture consistency confidence, so that the depth information features in different images can be comprehensively considered to accurately determine the depth information confidence of the image.
  • determining the depth information confidence of the depth image based on the initial depth information confidence and the texture consistency confidence within the window includes: using the sum of the product of the initial depth information confidence and the first preset coefficient and the product of the intra-window texture consistency confidence and the second preset coefficient as the depth information confidence of the image.
  • the first preset coefficient and the second preset coefficient are both empirical values, which can be obtained through machine learning algorithm training, or can be obtained through data analysis of the initial depth information confidence and the texture consistency confidence within the window.
  • w11 can be used to represent the first preset coefficient, w12 the second preset coefficient, C11 the initial depth information confidence, and C12 the intra-window texture consistency confidence; the depth information confidence of the image is then w11·C11 + w12·C12.
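The weighted sum can be written directly as a sketch; the default weight values below are placeholders, since the text only says the coefficients are empirical.

```python
def fuse_confidence(c11, c12, w11=0.5, w12=0.5):
    """Depth information confidence of the image as the weighted sum
    w11*C11 + w12*C12 (weights are placeholder values)."""
    return w11 * c11 + w12 * c12
```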
  • FIG. 2 is a flowchart of a method for determining texture consistency confidence within a window based on the first depth image and the first color image in step S104 according to an exemplary embodiment. As shown in FIG. 2, it includes the following steps:
  • Step S201 Correct the first depth image and the first color image to obtain the second depth image and the second color image
  • Step S202 Obtain the first window area in the second depth image and the second window area in the second color image, where the first window area and the second window area have a first corresponding relationship;
  • Step S203 Determine the first texture consistency confidence of the first window area
  • Step S204 Determine the second texture consistency confidence of the second window area
  • Step S205 Determine the texture consistency confidence within the window based on the first texture consistency confidence and the second texture consistency confidence.
  • in step S201, the first depth image and the first color image are corrected through a stereoscopic correction algorithm, so that the rows that are not coplanar in the first depth image and the first color image are corrected into coplanar row alignment, and the second depth image and the second color image are obtained.
  • in step S202, after obtaining the second depth image and the second color image through the stereo correction algorithm, any window area in the image is selected as the first window area in the second depth image, and the second window area is selected in the second color image; the first window area and the second window area have a first corresponding relationship.
  • the first correspondence relationship includes that the relative position of the first window area in the second depth image is the same as the relative position of the second window area in the second color image.
  • in step S203, the pixel at the center point of the first window area is recorded as Pt3, and the first texture consistency confidence tex_Pt3 of the first window area is calculated using the gray level co-occurrence matrix:
  • Con is the contrast feature value of each pixel in the first window area, which can reflect the clarity of the image and the depth of the texture;
  • Asm is the energy feature value of each pixel in the first window area, which can reflect the uniformity of the image's grayscale distribution and the coarseness of the texture;
  • Ent is the entropy feature value of each pixel in the first window area, which can reflect the complexity of the image grayscale distribution;
  • Corr is the correlation feature value of each pixel in the first window area, which can reflect the local grayscale correlation of the image.
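A compact, dependency-free sketch of these four gray level co-occurrence matrix features follows. The quantization to 8 gray levels and the horizontal (dx, dy) = (1, 0) pixel offset are assumptions for illustration, and how Con, Asm, Ent and Corr are combined into tex_Pt3 is not specified by the text above.

```python
import math

def glcm_features(win, levels=8, dx=1, dy=0):
    """Build a normalized gray level co-occurrence matrix for a small
    window of quantized gray levels and derive Con, Asm, Ent, Corr."""
    h, w = len(win), len(win[0])
    glcm = [[0.0] * levels for _ in range(levels)]
    pairs = 0
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[win[y][x]][win[y + dy][x + dx]] += 1.0
            pairs += 1
    for i in range(levels):
        for j in range(levels):
            glcm[i][j] /= pairs
    cells = [(i, j) for i in range(levels) for j in range(levels)]
    con = sum((i - j) ** 2 * glcm[i][j] for i, j in cells)      # contrast
    asm = sum(glcm[i][j] ** 2 for i, j in cells)                # energy
    ent = -sum(glcm[i][j] * math.log(glcm[i][j])
               for i, j in cells if glcm[i][j] > 0.0)           # entropy
    mu_i = sum(i * glcm[i][j] for i, j in cells)
    mu_j = sum(j * glcm[i][j] for i, j in cells)
    var_i = sum((i - mu_i) ** 2 * glcm[i][j] for i, j in cells)
    var_j = sum((j - mu_j) ** 2 * glcm[i][j] for i, j in cells)
    if var_i == 0.0 or var_j == 0.0:
        corr = 0.0  # flat window: correlation is undefined, report 0
    else:
        corr = sum((i - mu_i) * (j - mu_j) * glcm[i][j]
                   for i, j in cells) / math.sqrt(var_i * var_j)
    return {"Con": con, "Asm": asm, "Ent": ent, "Corr": corr}
```

A flat window gives Con = 0, Asm = 1 and Ent = 0, while a checkerboard window maximizes contrast and has perfectly anti-correlated neighbors.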
  • in step S204, the pixel at the center point of the second window area is recorded as Pt4, and the second texture consistency confidence tex_Pt4 of the second window area is calculated using the gray level co-occurrence matrix:
  • Con is the contrast feature value of each pixel in the second window area
  • Asm is the energy feature value of each pixel in the second window area
  • Ent is the entropy feature value of each pixel in the second window area
  • Corr is the correlation feature value of each pixel in the second window area.
  • in step S205, when determining the intra-window texture consistency confidence according to the first texture consistency confidence and the second texture consistency confidence, the difference between the first texture consistency confidence and the second texture consistency confidence can be used as the intra-window texture consistency confidence, expressed as:
  • the weighted sum of the first texture consistency confidence and the second texture consistency confidence can also be used as the texture consistency confidence within the window, expressed as:
  • w 3 and w 4 are empirical values, which can be obtained through machine learning algorithm training.
  • in this way, the depth information of the depth image and the color image can be comprehensively considered, making the depth information confidence of the image more accurate.
  • FIG. 3 is a flowchart of a method for acquiring a first window area in a second depth image and a second window area in a second color image according to an exemplary embodiment. As shown in FIG. 3, it includes the following steps:
  • Step S301 select a first pixel point in the second depth image, select a second pixel point in the second color image, and the first pixel point and the second pixel point have a second corresponding relationship;
  • Step S302 Determine the first window area in the second depth image based on the first pixel points, and determine the second window area in the second color image based on the second pixel points.
  • the first pixel point can be any pixel point in the image.
  • the first pixel point and the second pixel point have a second corresponding relationship.
  • the second correspondence relationship is such that the first pixel point and the second pixel point are corresponding pixel points with the same coordinate in the image.
  • the first pixel point Pt3 is a pixel point with coordinates (8, 8) in the second depth image, and the second pixel point Pt4 is a pixel point with coordinates (8, 8) in the second color image.
  • the second window area in the second color image is determined according to the second pixel points, and the second window area is determined in the same manner as the first window area.
  • the first window area and the second window area may be square areas or other symmetrically shaped areas.
  • both the first window area and the second window area are square areas.
  • the first pixel point Pt3 is the center point of the first window area.
  • the size of the window area is 8×8.
  • the window area is recorded as area C, denoted W3(w3, h3, R_Pt3, G_Pt3, B_Pt3); taking the second pixel point Pt4 as the center point of the second window area, with a window size of 8×8, the second window area is recorded as area D, denoted W4(w4, h4, R_Pt4, G_Pt4, B_Pt4).
  • R_Pt3, G_Pt3 and B_Pt3 are the RGB values of the first pixel point Pt3, and R_Pt4, G_Pt4 and B_Pt4 are the RGB values of the second pixel point Pt4.
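Cutting out the two 8×8 areas reduces to simple slicing once the images are rectified, since both windows use the same center coordinate. The variable names in the usage note (`depth2`, `color2` for the rectified images) are hypothetical.

```python
def window_area(img, cx, cy, size=8):
    """Return the size x size window whose center point is (cx, cy);
    img is a row-major list of rows, so img[y][x] addresses pixel (x, y)."""
    half = size // 2
    return [row[cx - half:cx + half] for row in img[cy - half:cy + half]]
```

For example, area C would be `window_area(depth2, 8, 8)` and area D would be `window_area(color2, 8, 8)`.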
  • determining the texture consistency confidence within the window according to the first texture consistency confidence and the second texture consistency confidence includes: combining the first texture consistency confidence with a third preset coefficient The sum of the product of and the product of the second texture consistency confidence and the fourth preset coefficient is used as the texture consistency confidence within the window.
  • the intra-window texture consistency confidence is recorded as C4, the first texture consistency confidence as tex_Pt3, the second texture consistency confidence as tex_Pt4, the third preset coefficient as w3, and the fourth preset coefficient as w4; that is, C4 = w3·tex_Pt3 + w4·tex_Pt4.
  • w3 and w4 are empirical values, which can be obtained through machine learning algorithm training.
  • correcting the first depth image and the first color image to obtain the second depth image and the second color image includes: under the same coordinate system, the coordinates of the target object in the first depth image are made the same as the coordinates of the target object in the first color image, and a second depth image and a second color image are obtained.
  • since the intra-window texture consistency confidence needs to be calculated for the first depth image and the first color image, it is necessary to make the target object in the first depth image and the first color image have the same coordinates in the same coordinate system, that is, the first depth image and the first color image are corrected.
  • the first depth image is acquired by a depth camera; when the depth camera can acquire the first infrared grayscale image, the method for determining the depth information confidence of the image further includes: determining the window matching degree confidence based on the first infrared grayscale image and the first color image.
  • relevant confidence information is calculated based on the first depth image and the first color image to determine the depth information confidence of the image.
  • the window matching degree confidence can also be determined based on the first infrared grayscale image and the first color image.
  • the initial depth information confidence and the window matching confidence can be considered at the same time.
  • the initial depth information confidence and the texture consistency confidence within the window can also be considered at the same time.
  • the initial depth information confidence, the window matching degree confidence and the intra-window texture consistency confidence can also be considered at the same time.
  • determining the depth information confidence of the depth image based on the initial depth information confidence and the texture consistency confidence within the window includes: based on the initial depth information confidence, the texture consistency confidence within the window and the window Matching degree confidence determines the depth information confidence of the depth image.
  • when the depth camera can also acquire the first infrared grayscale image, in order to ensure that the depth information confidence is more accurate, the initial depth information confidence, the window matching degree confidence and the intra-window texture consistency confidence are simultaneously considered to determine the depth information confidence of the depth image.
  • determining the depth information confidence of the depth image based on the initial depth information confidence, the intra-window texture consistency confidence, and the window matching confidence includes: using the sum of the product of the initial depth information confidence and the first preset coefficient, the product of the intra-window texture consistency confidence and the second preset coefficient, and the product of the window matching degree confidence and the fifth preset coefficient as the depth information confidence of the image.
  • the initial depth information confidence is recorded as C21, the intra-window texture consistency confidence as C22, the window matching degree confidence as C23, the first preset coefficient as w21, the second preset coefficient as w22, and the fifth preset coefficient as w23; that is, the depth information confidence of the image is w21·C21 + w22·C22 + w23·C23.
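The three-term fusion reads the same way as the two-term case; the default weight values below are placeholders for the empirical w21, w22, w23.

```python
def fuse_confidence_3(c21, c22, c23, w21=0.4, w22=0.3, w23=0.3):
    """Depth information confidence as w21*C21 + w22*C22 + w23*C23
    (weights are placeholder values, not from the patent)."""
    return w21 * c21 + w22 * c22 + w23 * c23
```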
  • FIG. 5 is a flowchart of a method for determining the window matching degree confidence based on the first infrared grayscale image and the first color image according to an exemplary embodiment. As shown in FIG. 5, it includes the following steps:
  • Step S501 correct the first infrared grayscale image and the first color image to obtain the second infrared grayscale image and the third color image;
  • Step S502 Obtain the third window area in the second infrared grayscale image and the fourth window area in the third color image; wherein, the third window area and the fourth window area have a third corresponding relationship;
  • Step S503 Determine the window matching degree confidence based on the third window area and the fourth window area.
  • step S501 the first infrared grayscale image and the first color image are corrected through a stereoscopic correction algorithm to obtain a second infrared grayscale image and a third color image.
  • the stereoscopic correction algorithm uses the intrinsic and extrinsic parameters determined by binocular calibration to correct the left and right images.
  • the two image planes are transformed to be coplanar, reducing the computational complexity of stereo matching; that is, the two images that are not actually coplanar are corrected into coplanar alignment.
  • the same pixel in the first infrared grayscale image and the first color image can be positioned in the same row of the pixel coordinate system through the stereo correction algorithm, to facilitate the calculation of the window matching degree confidence.
  • in step S502, after obtaining the second infrared grayscale image and the third color image through the stereo correction algorithm, any window area in the image is selected as the third window area in the second infrared grayscale image, and the fourth window area is selected in the third color image; the fourth window area and the third window area have a third corresponding relationship.
  • the third correspondence relationship includes that the relative position of the fourth window area in the third color image is the same as the relative position of the third window area in the second infrared grayscale image.
  • step S503 when the third window area is determined by the third pixel point Pt1 in the second infrared grayscale image and the fourth window area is determined by the fourth pixel point Pt2 in the third color image, according to the third window area and the fourth window area, determine the window matching degree confidence C 3 through the following formula:
  • w is the width of the window area
  • h is the length of the window area
  • the coordinates of the third pixel point are (i, j)
  • the RGB values of the fourth pixel point Pt2 are R_Pt2, G_Pt2 and B_Pt2
  • the coordinates of the fourth pixel point are (i, j)
  • the third pixel point and the fourth pixel point are pixel points with the same coordinates in the second infrared grayscale image and the third color image .
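The text does not reproduce the matching formula C3 itself, so the sketch below substitutes one plausible choice: a mean absolute difference between the infrared window and a luma conversion of the RGB window, mapped into (0, 1]. Both the luma coefficients and the 1/(1 + d) mapping are assumptions, not the patent's formula.

```python
def window_matching_confidence(win_ir, win_rgb):
    """Match an infrared grayscale window against an RGB window of the
    same w x h size; higher values mean the windows agree more closely."""
    h, w = len(win_ir), len(win_ir[0])
    total = 0.0
    for y in range(h):
        for x in range(w):
            r, g, b = win_rgb[y][x]
            gray = 0.299 * r + 0.587 * g + 0.114 * b  # Rec. 601 luma (assumed)
            total += abs(win_ir[y][x] - gray)
    return 1.0 / (1.0 + total / (w * h))
```

Identical windows score 1.0; the score decays as the per-pixel differences grow.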
  • the window matching degree confidence is determined by the window matching degree of the second infrared grayscale image and the third color image; considering the initial depth information confidence together with the window matching degree confidence can improve the accuracy of the depth information confidence.
  • FIG. 6 is a flowchart of a method for acquiring the third window area in the second infrared grayscale image and the fourth window area in the third color image according to an exemplary embodiment. As shown in FIG. 6, it includes the following steps:
  • Step S601 select a third pixel point in the second infrared grayscale image, select a fourth pixel point in the third color image, and the third pixel point and the fourth pixel point have a fourth corresponding relationship;
  • Step S602 Determine a third window area in the second infrared grayscale image based on the third pixel point, and determine a fourth window area in the third color image based on the fourth pixel point.
  • the third pixel point can be any pixel point in the image.
  • the fourth pixel point and the third pixel point have the fourth correspondence relationship, namely that they are the pixel points at the same coordinates in their respective images.
  • the third pixel point Pt1 is the pixel point with coordinates (8, 8) in the second infrared grayscale image, and the fourth pixel point Pt2 is the pixel point with coordinates (8, 8) in the third color image.
  • the third window area in the second infrared grayscale image is determined from the third pixel point: for example, the third pixel point is used as the center point of the window area and the third window area is determined with a preset window size (smaller than the image size), or the third pixel point is used as a vertex of the window area and the third window area is determined with the preset window size.
  • the fourth window area in the third color image is determined according to the fourth pixel point, and the fourth window area is determined in the same manner as the third window area.
  • the fourth window area and the third window area may be square areas or other symmetrically shaped areas.
  • both the third window area and the fourth window area are square areas.
  • the third pixel point Pt1 is the center point of the third window area.
  • the size of the window area is 8×8.
  • the third window area is recorded as area A, denoted W1(w1, h1, R_Pt1, G_Pt1, B_Pt1); the fourth pixel point Pt2 is the center point of the fourth window area, and its size is also 8×8.
  • the fourth window area is recorded as area B, denoted W2(w2, h2, R_Pt2, G_Pt2, B_Pt2).
  • w1, w2, h1, and h2 are all 8.
  • R_Pt1, G_Pt1, and B_Pt1 are the RGB values of the third pixel point Pt1, and R_Pt2, G_Pt2, and B_Pt2 are the RGB values of the fourth pixel point Pt2.
  • correcting the first infrared grayscale image and the first color image to obtain the second infrared grayscale image and the third color image includes: under the same coordinate system, making the coordinates of the target object in the first infrared grayscale image the same as the coordinates of the target object in the first color image.
  • because the window matching degree confidence is calculated from the first infrared grayscale image and the first color image, the target object must have the same coordinates in both images under the same coordinate system; that is, the first infrared grayscale image and the first color image must be corrected.
  • when the first depth image, the first infrared grayscale image and the first color image can all be acquired, the initial depth information confidence is determined from the first depth image, the intra-window texture consistency confidence is determined from the first depth image and the first color image, and the window matching degree confidence is determined from the first infrared grayscale image and the first color image; combining the different depth information features of the three images makes the determined depth information confidence of the image more accurate.
  • a device for determining the depth information confidence of an image includes:
  • the first acquisition module 801 is configured to acquire a first depth image
  • the second acquisition module 802 is configured to acquire the first color image
  • the first determination module 803 is configured to determine the initial depth information confidence based on the first depth image
  • the second determination module 804 is configured to determine the texture consistency confidence within the window according to the first depth image and the first color image;
  • the third determination module 805 is configured to determine the depth information confidence of the depth image according to the initial depth information confidence and the texture consistency confidence within the window.
  • the third determining module 805 is further configured to:
  • the sum of the product of the initial depth information confidence level and the first preset coefficient and the product of the intra-window texture consistency confidence level and the second preset coefficient is used as the depth information confidence level of the image.
  • the second determining module 804 is further configured to:
  • the texture consistency confidence within the window is determined based on the first texture consistency confidence and the second texture consistency confidence.
  • the second determining module 804 is further configured to:
  • the first window area in the second depth image is determined based on the first pixel points
  • the second window area in the third color image is determined based on the second pixel points.
  • the second determining module 804 is further configured to:
  • the sum of the product of the first texture consistency confidence and the third preset coefficient and the product of the second texture consistency confidence and the fourth preset coefficient is used as the texture consistency confidence within the window.
  • the second determining module 804 is further configured to:
  • under the same coordinate system, the coordinates of the target object in the first depth image are made the same as the coordinates of the target object in the first color image, and the second depth image and the second color image are obtained.
  • the first depth image is acquired by a depth camera; when the depth camera can acquire the first infrared grayscale image, the determining device further includes:
  • the fourth determination module 806 is configured to determine the window matching degree confidence based on the first infrared grayscale image and the first color image.
  • the third determining module 805 is further configured to:
  • the third determining module 805 is further configured to:
  • the sum of the product of the initial depth information confidence and the first preset coefficient, the product of the intra-window texture consistency confidence and the second preset coefficient, and the product of the window matching degree confidence and the fifth preset coefficient is used as the depth information confidence of the image.
  • the fourth determining module 806 is further configured to:
  • the window matching degree confidence is determined according to the third window area and the fourth window area.
  • the fourth determining module 806 is further configured to:
  • according to the third pixel point, a third window area in the second infrared grayscale image is determined, and according to the fourth pixel point, a fourth window area in the third color image is determined.
  • the fourth determining module 806 is further configured to:
  • FIG. 9 is a block diagram of a mobile terminal 900 according to an exemplary embodiment.
  • the device 900 may include one or more of the following components: a processing component 902, a memory 904, a power supply component 906, a multimedia component 908, an audio component 910, an input/output (I/O) interface 912, a sensor component 914, and communications component 916.
  • Processing component 902 generally controls the overall operations of device 900, such as operations associated with display, phone calls, data communications, camera operations, and recording operations.
  • the processing component 902 may include one or more processors 920 to execute instructions to complete all or part of the steps of the above method.
  • processing component 902 may include one or more modules that facilitate interaction between processing component 902 and other components.
  • processing component 902 may include a multimedia module to facilitate interaction between multimedia component 908 and processing component 902.
  • Memory 904 is configured to store various types of data to support operations at device 900 . Examples of such data include instructions for any application or method operating on device 900, contact data, phonebook data, messages, pictures, videos, etc.
  • Memory 904 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or a magnetic or optical disk.
  • Power supply component 906 provides power to various components of device 900.
  • Power supply components 906 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power to device 900 .
  • Multimedia component 908 includes a screen that provides an output interface between the device 900 and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide action.
  • multimedia component 908 includes a front-facing camera and/or a rear-facing camera.
  • the front camera and/or the rear camera may receive external multimedia data.
  • Each front-facing camera and rear-facing camera may be a fixed optical lens system or have focusing and optical zoom capability.
  • Audio component 910 is configured to output and/or input audio signals.
  • audio component 910 includes a microphone (MIC) configured to receive external audio signals when device 900 is in operating modes, such as call mode, recording mode, and speech recognition mode. The received audio signals may be further stored in memory 904 or sent via communications component 916 .
  • audio component 910 also includes a speaker for outputting audio signals.
  • the I/O interface 912 provides an interface between the processing component 902 and a peripheral interface module, which may be a keyboard, a click wheel, a button, etc. These buttons may include, but are not limited to: Home button, Volume buttons, Start button, and Lock button.
  • Sensor component 914 includes one or more sensors for providing various aspects of status assessment for device 900 .
  • the sensor component 914 can detect the open/closed state of the device 900 and the relative positioning of components (for example, the display and keypad of the device 900), and can also detect a change in position of the device 900 or of a component of the device 900, the presence or absence of user contact with the device 900, the orientation or acceleration/deceleration of the device 900, and temperature changes of the device 900.
  • Sensor assembly 914 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
  • Sensor assembly 914 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor component 914 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • Communication component 916 is configured to facilitate wired or wireless communication between apparatus 900 and other devices.
  • Device 900 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof.
  • the communication component 916 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communications component 916 also includes a near field communications (NFC) module to facilitate short-range communications.
  • the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
  • apparatus 900 may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for executing the above method.
  • a non-transitory computer-readable storage medium including instructions, such as the memory 904 including instructions executable by the processor 920 of the apparatus 900 to complete the above method, is also provided.
  • the non-transitory computer-readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
  • a non-transitory computer-readable storage medium whose instructions, when executed by a processor of a device, enable the device to perform a method for determining the depth information confidence of an image, including any of the above methods for determining the depth information confidence of an image.
  • the depth information confidence of the image is determined based on the initial depth information confidence and the texture consistency confidence within the window.
  • the depth information features in different images can be comprehensively considered to accurately determine the depth information confidence of the image.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure relates to a method, an apparatus and a storage medium for determining the depth information confidence of an image. The method for determining the depth information confidence of an image includes: acquiring a first depth image; acquiring a first color image; determining an initial depth information confidence according to the first depth image; determining an intra-window texture consistency confidence according to the first depth image and the first color image; and determining the depth information confidence of the depth image according to the initial depth information confidence and the intra-window texture consistency confidence.

Description

Method, apparatus and storage medium for determining the depth information confidence of an image — Technical Field
The present disclosure relates to the technical field of image processing, and in particular to a method, an apparatus and a storage medium for determining the depth information confidence of an image.
Background
Depth imaging based on depth cameras is widely used in technical fields such as autonomous driving and three-dimensional reconstruction. The reliability of the depth values of the pixels in a depth image, that is, the reliability of the depth information, is called the depth information confidence of the image.
In the related art, the confidence of a depth image is calculated from the energy received by the depth sensor. This way of calculating the confidence relies on the output of the camera system itself and correlates poorly with the scene; in scenes with special textures or special reflectivity, it does not apply.
Summary
To overcome the problems in the related art, the present disclosure provides a method, an apparatus and a storage medium for determining the depth information confidence of an image.
According to a first aspect of the embodiments of the present disclosure, a method for determining the depth information confidence of an image is provided, the determining method including:
acquiring a first depth image;
acquiring a first color image;
determining an initial depth information confidence according to the first depth image;
determining an intra-window texture consistency confidence according to the first depth image and the first color image;
determining the depth information confidence of the depth image according to the initial depth information confidence and the intra-window texture consistency confidence.
In an exemplary embodiment, determining the depth information confidence of the depth image according to the initial depth information confidence and the intra-window texture consistency confidence includes:
taking the sum of the product of the initial depth information confidence and a first preset coefficient and the product of the intra-window texture consistency confidence and a second preset coefficient as the depth information confidence of the image.
In an exemplary embodiment, determining the intra-window texture consistency confidence according to the first depth image and the first color image includes:
correcting the first depth image and the first color image to obtain a second depth image and a second color image;
acquiring a first window area in the second depth image and a second window area in the second color image, wherein the first window area and the second window area have a first correspondence relationship;
determining a first texture consistency confidence of the first window area;
determining a second texture consistency confidence of the second window area;
determining the intra-window texture consistency confidence according to the first texture consistency confidence and the second texture consistency confidence.
In an exemplary embodiment, acquiring the first window area in the second depth image and the second window area in the second color image includes:
selecting a first pixel point in the second depth image and a second pixel point in the second color image, the first pixel point and the second pixel point having a second correspondence relationship;
determining the first window area in the second depth image according to the first pixel point, and determining the second window area in the second color image according to the second pixel point.
In an exemplary embodiment, determining the intra-window texture consistency confidence according to the first texture consistency confidence and the second texture consistency confidence includes:
taking the sum of the product of the first texture consistency confidence and a third preset coefficient and the product of the second texture consistency confidence and a fourth preset coefficient as the intra-window texture consistency confidence.
In an exemplary embodiment, correcting the first depth image and the first color image to obtain the second depth image and the second color image includes:
under the same coordinate system, making the coordinates of the target object in the first depth image the same as the coordinates of the target object in the first color image, obtaining the second depth image and the second color image.
In an exemplary embodiment, the first depth image is acquired by a depth camera; when the depth camera can acquire a first infrared grayscale image, the determining method further includes:
determining a window matching degree confidence according to the first infrared grayscale image and the first color image.
In an exemplary embodiment, determining the depth information confidence of the depth image according to the initial depth information confidence and the intra-window texture consistency confidence includes:
determining the depth information confidence of the depth image according to the initial depth information confidence, the intra-window texture consistency confidence and the window matching degree confidence.
In an exemplary embodiment, determining the depth information confidence of the depth image according to the initial depth information confidence, the intra-window texture consistency confidence and the window matching degree confidence includes:
taking the sum of the product of the initial depth information confidence and the first preset coefficient, the product of the intra-window texture consistency confidence and the second preset coefficient, and the product of the window matching degree confidence and a fifth preset coefficient as the depth information confidence of the image.
In an exemplary embodiment, determining the window matching degree confidence according to the first infrared grayscale image and the first color image includes:
correcting the first infrared grayscale image and the first color image to obtain a second infrared grayscale image and a third color image;
acquiring a third window area in the second infrared grayscale image and a fourth window area in the third color image, wherein the third window area and the fourth window area have a third correspondence relationship;
determining the window matching degree confidence according to the third window area and the fourth window area.
In an exemplary embodiment, acquiring the third window area in the second infrared grayscale image and the fourth window area in the third color image includes:
selecting a third pixel point in the second infrared grayscale image and a fourth pixel point in the third color image, the third pixel point and the fourth pixel point having a fourth correspondence relationship;
determining the third window area in the second infrared grayscale image according to the third pixel point, and determining the fourth window area in the third color image according to the fourth pixel point.
In an exemplary embodiment, correcting the first infrared grayscale image and the first color image to obtain the second infrared grayscale image and the third color image includes:
under the same coordinate system, making the coordinates of the target object in the first infrared grayscale image the same as the coordinates of the target object in the first color image, obtaining the second infrared grayscale image and the third color image.
According to a second aspect of the embodiments of the present disclosure, an apparatus for determining the depth information confidence of an image is provided, the determining apparatus including:
a first acquisition module configured to acquire a first depth image;
a second acquisition module configured to acquire a first color image;
a first determination module configured to determine an initial depth information confidence according to the first depth image;
a second determination module configured to determine an intra-window texture consistency confidence according to the first depth image and the first color image;
a third determination module configured to determine the depth information confidence of the depth image according to the initial depth information confidence and the intra-window texture consistency confidence.
In an exemplary embodiment, the third determination module is further configured to:
take the sum of the product of the initial depth information confidence and a first preset coefficient and the product of the intra-window texture consistency confidence and a second preset coefficient as the depth information confidence of the image.
In an exemplary embodiment, the second determination module is further configured to:
correct the first depth image and the first color image to obtain a second depth image and a second color image;
acquire a first window area in the second depth image and a second window area in the second color image, wherein the first window area and the second window area have a first correspondence relationship;
determine a first texture consistency confidence of the first window area;
determine a second texture consistency confidence of the second window area;
determine the intra-window texture consistency confidence according to the first texture consistency confidence and the second texture consistency confidence.
In an exemplary embodiment, the second determination module is further configured to:
select a first pixel point in the second depth image and a second pixel point in the second color image, the first pixel point and the second pixel point having a second correspondence relationship;
determine the first window area in the second depth image according to the first pixel point, and determine the second window area in the second color image according to the second pixel point.
In an exemplary embodiment, the second determination module is further configured to:
take the sum of the product of the first texture consistency confidence and a third preset coefficient and the product of the second texture consistency confidence and a fourth preset coefficient as the intra-window texture consistency confidence.
In an exemplary embodiment, the second determination module is further configured to:
under the same coordinate system, make the coordinates of the target object in the first depth image the same as the coordinates of the target object in the first color image, obtaining the second depth image and the second color image.
In an exemplary embodiment, the first depth image is acquired by a depth camera; when the depth camera can acquire a first infrared grayscale image, the determining apparatus further includes:
a fourth determination module configured to determine a window matching degree confidence according to the first infrared grayscale image and the first color image.
In an exemplary embodiment, the third determination module is further configured to:
determine the depth information confidence of the depth image according to the initial depth information confidence, the intra-window texture consistency confidence and the window matching degree confidence.
In an exemplary embodiment, the third determination module is further configured to:
take the sum of the product of the initial depth information confidence and the first preset coefficient, the product of the intra-window texture consistency confidence and the second preset coefficient, and the product of the window matching degree confidence and a fifth preset coefficient as the depth information confidence of the image.
In an exemplary embodiment, the fourth determination module is further configured to:
correct the first infrared grayscale image and the first color image to obtain a second infrared grayscale image and a third color image;
acquire a third window area in the second infrared grayscale image and a fourth window area in the third color image, wherein the third window area and the fourth window area have a third correspondence relationship;
determine the window matching degree confidence according to the third window area and the fourth window area.
In an exemplary embodiment, the fourth determination module is further configured to:
select a third pixel point in the second infrared grayscale image and a fourth pixel point in the third color image, the third pixel point and the fourth pixel point having a fourth correspondence relationship;
determine the third window area in the second infrared grayscale image according to the third pixel point, and determine the fourth window area in the third color image according to the fourth pixel point.
In an exemplary embodiment, the fourth determination module is further configured to:
under the same coordinate system, make the coordinates of the target object in the first infrared grayscale image the same as the coordinates of the target object in the first color image, obtaining the second infrared grayscale image and the third color image.
According to a third aspect of the embodiments of the present disclosure, a mobile terminal is provided, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the method according to any one of the first aspect of the embodiments of the present disclosure.
According to a fourth aspect of the embodiments of the present disclosure, a non-transitory computer-readable storage medium is provided; when instructions in the storage medium are executed by a processor of a device, the device is enabled to perform the method according to any one of the first aspect of the embodiments of the present disclosure.
The above method of the present disclosure has the following beneficial effect: determining the depth information confidence of the image according to the initial depth information confidence and the intra-window texture consistency confidence makes it possible to comprehensively consider the depth information feature quantities in different images, so as to accurately determine the depth information confidence of the image.
It should be understood that the above general description and the following detailed description are exemplary and explanatory only and do not limit the present disclosure.
Brief Description of the Drawings
The accompanying drawings herein are incorporated into and constitute a part of this specification, illustrate embodiments consistent with the present disclosure, and together with the description serve to explain the principles of the present disclosure.
FIG. 1 is a flowchart of a method for determining the depth information confidence of an image according to an exemplary embodiment;
FIG. 2 is a flowchart of determining the intra-window texture consistency confidence according to the first depth image and the first color image in step S103, according to an exemplary embodiment;
FIG. 3 is a flowchart of a method for acquiring the first window area in the second depth image and the second window area in the second color image according to an exemplary embodiment;
FIG. 4 is a schematic diagram of the first window area and the second window area according to an exemplary embodiment;
FIG. 5 is a flowchart of a method for determining the window matching degree confidence according to the first infrared grayscale image and the first color image, according to an exemplary embodiment;
FIG. 6 is a flowchart of a method for acquiring the third window area in the second infrared grayscale image and the fourth window area in the third color image according to an exemplary embodiment;
FIG. 7 is a schematic diagram of the third window area and the fourth window area according to an exemplary embodiment;
FIG. 8 is a block diagram of an apparatus for determining the depth information confidence of an image according to an exemplary embodiment;
FIG. 9 is a block diagram of a mobile terminal according to an exemplary embodiment.
Detailed Description
Exemplary embodiments will be described in detail here, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
In the exemplary embodiments of the present disclosure, a method for determining the depth information confidence of an image is provided. FIG. 1 is a flowchart of such a method according to an exemplary embodiment; as shown in FIG. 1, the determining method includes the following steps:
Step S101: acquire a first depth image;
Step S102: acquire a first color image;
Step S103: determine an initial depth information confidence according to the first depth image;
Step S104: determine an intra-window texture consistency confidence according to the first depth image and the first color image;
Step S105: determine the depth information confidence of the depth image according to the initial depth information confidence and the intra-window texture consistency confidence.
In the exemplary embodiments of the present disclosure, to overcome the problem in the prior art that a depth information confidence determined from the depth image alone does not apply to special scenes, a method for determining the depth information confidence based on the image is proposed: a first depth image and a first color image are acquired; an initial depth information confidence is determined from the first depth image; an intra-window texture consistency confidence is determined from the first depth image and the first color image; and the depth information confidence of the image is determined from the initial depth information confidence and the intra-window texture consistency confidence.
In the exemplary embodiments of the present disclosure, the initial depth information confidence is determined from the depth values of the pixels of the image. Taking different application scenarios into account, the intra-window texture consistency confidence is determined with the help of the color image, and the depth information confidence of the image is finally determined from the initial depth information confidence and the intra-window texture consistency confidence, which improves the accuracy of the depth information confidence of the image.
In step S101, the first depth image may be acquired by a TOF (Time of Flight) depth camera.
In step S102, the first color image may be acquired by an RGB camera.
In step S103, the initial depth information confidence is determined from the first depth image, that is, from the depth values of the pixels of the image. For example, any pixel point Pt0 in the depth image may be taken, a window area determined with that pixel as its center point, the entire depth image traversed with that window area, the degree of matching between the depth values of all pixels in the window area and a preset depth value determined, and the matching degree of the best matching window taken as the initial depth information confidence. The preset depth value is the actual depth value.
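Step S103 only names a "match degree" between window depth values and a preset depth value without giving a formula. One hypothetical realisation of such a match degree, assuming a relative tolerance, is the fraction of pixels in the window whose depth lies close to the expected value:

```python
def initial_depth_confidence(depth_window, expected_depth, tol=0.05):
    """Hypothetical sketch of the 'match degree' of step S103.

    depth_window: list of rows of depth values.
    expected_depth: the preset (actual) depth value.
    tol: relative tolerance; 0.05 is a placeholder, not from the patent.
    Returns the fraction of pixels within tolerance (1.0 = all match).
    """
    flat = [d for row in depth_window for d in row]
    hits = sum(1 for d in flat if abs(d - expected_depth) <= tol * expected_depth)
    return hits / len(flat)
```

The window with the highest such score while traversing the depth image would then serve as the initial depth information confidence.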
In step S104, the intra-window texture consistency confidence is determined from the first depth image and the first color image. Since the feature quantities of the depth information in images acquired by different cameras differ, that is, the depth information feature quantities in the first depth image and the first color image are different, combining the depth information feature quantities of the two images to determine the intra-window texture consistency confidence makes it possible to obtain the depth information confidence of the image from multiple perspectives.
In step S105, when determining the depth information confidence of the image from the initial depth information confidence and the intra-window texture consistency confidence, the two may, for example, be added directly, or combined in a weighted sum.
In the exemplary embodiments of the present disclosure, when determining the depth information confidence of an image, different application scenarios are taken into account: the first depth image and the first color image are acquired separately, the initial depth information confidence is determined from the first depth image, and the intra-window texture consistency confidence is determined from the first color image. Determining the depth information confidence of the image from the initial depth information confidence and the intra-window texture consistency confidence makes it possible to comprehensively consider the depth information feature quantities in different images and accurately determine the depth information confidence of the image.
In an exemplary embodiment, determining the depth information confidence of the depth image from the initial depth information confidence and the intra-window texture consistency confidence includes: taking the sum of the product of the initial depth information confidence and a first preset coefficient and the product of the intra-window texture consistency confidence and a second preset coefficient as the depth information confidence of the image.
The first preset coefficient and the second preset coefficient are both empirical values; they may be obtained by training a machine learning algorithm, or by analysing data on the initial depth information confidence and the intra-window texture consistency confidence.
For example, with w11 denoting the first preset coefficient, w12 the second preset coefficient, C11 the initial depth information confidence and C12 the intra-window texture consistency confidence, the depth information confidence C of the image is expressed as: C = w11 × C11 + w12 × C12.
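The weighted sum C = w11 × C11 + w12 × C12 can be sketched as follows; since the patent only says the coefficients are empirical, the 0.6/0.4 defaults here are placeholder values, not values from the publication:

```python
def depth_confidence(c_init, c_tex, w1=0.6, w2=0.4):
    """Weighted fusion C = w1 * C11 + w2 * C12.

    c_init: initial depth information confidence (C11).
    c_tex:  intra-window texture consistency confidence (C12).
    w1, w2: empirical preset coefficients (placeholder defaults).
    """
    return w1 * c_init + w2 * c_tex
```

The same shape extends to the three-term fusion C = w21 × C21 + w22 × C22 + w23 × C23 used when a window matching degree confidence is also available.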
In an exemplary embodiment, FIG. 2 is a flowchart of determining the intra-window texture consistency confidence according to the first depth image and the first color image in step S103, according to an exemplary embodiment. As shown in FIG. 2, it includes the following steps:
Step S201: correct the first depth image and the first color image to obtain a second depth image and a second color image;
Step S202: acquire a first window area in the second depth image and a second window area in the second color image, where the first window area and the second window area have a first correspondence relationship;
Step S203: determine a first texture consistency confidence of the first window area;
Step S204: determine a second texture consistency confidence of the second window area;
Step S205: determine the intra-window texture consistency confidence according to the first texture consistency confidence and the second texture consistency confidence.
In step S201, the first depth image and the first color image are corrected by a stereo rectification algorithm, so that rows that are not coplanar-aligned in the first depth image and the first color image become coplanar-aligned, obtaining the second depth image and the second color image.
In step S202, after the second depth image and the second color image are obtained by the stereo rectification algorithm, any window area in the second depth image is selected as the first window area, and a second window area is selected in the second color image; the first window area and the second window area have the first correspondence relationship. The first correspondence relationship includes that the relative position of the first window area in the second depth image is the same as the relative position of the second window area in the second color image.
In step S203, the pixel at the center point of the first window area is denoted Pt3, and the first texture consistency confidence tex_Pt3 of the first window area is calculated by means of a gray-level co-occurrence matrix:
[formula image in the original publication: tex_Pt3 as a function of Con, Asm, Ent and Corr]
where Con is the contrast feature value of each pixel in the first window area, reflecting the sharpness of the image and the depth of the texture grooves; Asm is the energy feature value of each pixel in the first window area, reflecting the uniformity of the gray-level distribution and the coarseness of the texture; Ent is the entropy feature value of each pixel in the first window area, reflecting the complexity of the gray-level distribution; and Corr is the correlation feature value of each pixel in the first window area, reflecting the local gray-level correlation of the image. These feature values are calculated by the following formulas:
[formula images in the original publication: the gray-level co-occurrence matrix definitions of Con, Asm, Ent and Corr]
In step S204, the pixel at the center point of the second window area is denoted Pt4, and the second texture consistency confidence tex_Pt4 of the second window area is calculated in the same way with a gray-level co-occurrence matrix:
[formula image in the original publication]
where Con, Asm, Ent and Corr are the contrast, energy, entropy and correlation feature values of each pixel in the second window area.
In step S205, when determining the intra-window texture consistency confidence from the first texture consistency confidence and the second texture consistency confidence, the absolute difference between the two may be used as the intra-window texture consistency confidence, expressed as:
C4 = |tex_Pt3 − tex_Pt4|
or their weighted sum may be used, expressed as:
C4 = w3 × tex_Pt3 + w4 × tex_Pt4
where w3 and w4 are empirical values that may be obtained by training a machine learning algorithm.
In the exemplary embodiments of the present disclosure, calculating the intra-window texture consistency confidences of the second depth image and the second color image separately and then determining the final texture consistency confidence makes it possible to consider the depth information of both the depth image and the color image, making the depth information confidence of the image more accurate.
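The publication shows its formulas for Con, Asm, Ent and Corr only as images; the feature names it uses correspond to the textbook gray-level co-occurrence matrix (GLCM) definitions, which are sketched below under that assumption (the exact weighting used in the patent may differ). The sketch builds the co-occurrence matrix from horizontally adjacent pixel pairs:

```python
import math

def glcm_features(window, levels=8):
    """Contrast, energy (ASM), entropy and correlation of a GLCM built
    from horizontal neighbour pairs of a quantised grayscale window.
    Standard textbook definitions, assumed rather than taken verbatim
    from the patent, whose formulas appear only as images."""
    # co-occurrence counts for pairs (a, b) of horizontally adjacent pixels
    counts = [[0.0] * levels for _ in range(levels)]
    n = 0
    for row in window:
        for a, b in zip(row, row[1:]):
            counts[a][b] += 1
            n += 1
    p = [[c / n for c in row] for row in counts]  # normalised probabilities
    cells = [(i, j) for i in range(levels) for j in range(levels)]
    con = sum((i - j) ** 2 * p[i][j] for i, j in cells)
    asm = sum(p[i][j] ** 2 for i, j in cells)
    ent = -sum(p[i][j] * math.log(p[i][j]) for i, j in cells if p[i][j] > 0)
    mu_i = sum(i * p[i][j] for i, j in cells)
    mu_j = sum(j * p[i][j] for i, j in cells)
    var_i = sum((i - mu_i) ** 2 * p[i][j] for i, j in cells)
    var_j = sum((j - mu_j) ** 2 * p[i][j] for i, j in cells)
    if var_i > 0 and var_j > 0:
        corr = sum((i - mu_i) * (j - mu_j) * p[i][j]
                   for i, j in cells) / math.sqrt(var_i * var_j)
    else:
        corr = 0.0
    return con, asm, ent, corr
```

A texture consistency score tex_Pt3 could then be formed from these four values, for example as a weighted combination.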
In an exemplary embodiment, FIG. 3 is a flowchart of a method for acquiring the first window area in the second depth image and the second window area in the second color image according to an exemplary embodiment. As shown in FIG. 3, it includes the following steps:
Step S301: select a first pixel point in the second depth image and a second pixel point in the second color image, the first pixel point and the second pixel point having a second correspondence relationship;
Step S302: determine the first window area in the second depth image according to the first pixel point, and determine the second window area in the second color image according to the second pixel point.
A first pixel point is selected in the second depth image; the first pixel point may be any pixel point in the image. A second pixel point is selected in the second color image; the first pixel point and the second pixel point have the second correspondence relationship, namely that they are the corresponding pixel points at the same coordinates in the two images.
In an example, as shown in FIG. 4, the first pixel point Pt3 is the pixel with coordinates (8, 8) in the second depth image, and the second pixel point Pt4 is the pixel with coordinates (8, 8) in the second color image.
The first window area in the second depth image is determined from the first pixel point: for example, the first pixel point is used as the center point of the window area and the first window area is determined with a preset window size (smaller than the image size), or the first pixel point is used as a corner vertex of the window area and the first window area is determined with the preset window size. The second window area in the second color image is determined from the second pixel point in the same manner. The first window area and the second window area may be square areas or other symmetrically shaped areas.
In an example, as shown in FIG. 4, both window areas are squares: with the first pixel point Pt3 as the center point and a window size of 8×8, the first window area is recorded as area C, denoted W3(w3, h3, R_Pt3, G_Pt3, B_Pt3); with the second pixel point Pt4 as the center point and a window size of 8×8, the second window area is recorded as area D, denoted W4(w4, h4, R_Pt4, G_Pt4, B_Pt4). Here w3, w4, h3 and h4 are all 8, R_Pt3, G_Pt3, B_Pt3 are the RGB values of the first pixel point Pt3, and R_Pt4, G_Pt4, B_Pt4 are the RGB values of the second pixel point Pt4.
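The 8×8 window construction around a selected center pixel such as Pt3 at (8, 8) can be sketched as follows; the clamping at the image borders is an added assumption, since the publication does not say how borders are handled:

```python
def extract_window(image, center, size=8):
    """Return a size x size window around `center` = (row, col).

    `image` is a list of rows (any pixel type). For an even `size` the
    window starts size // 2 pixels before the center, clamped so it
    never leaves the image (a border-handling assumption, not from the
    patent). The image must be at least size x size.
    """
    half = size // 2
    rows, cols = len(image), len(image[0])
    top = min(max(center[0] - half, 0), rows - size)
    left = min(max(center[1] - half, 0), cols - size)
    return [row[left:left + size] for row in image[top:top + size]]
```

The corresponding window in the other rectified image is taken at the same coordinates, which is exactly the correspondence relationship described above.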
In an exemplary embodiment, determining the intra-window texture consistency confidence from the first texture consistency confidence and the second texture consistency confidence includes: taking the sum of the product of the first texture consistency confidence and a third preset coefficient and the product of the second texture consistency confidence and a fourth preset coefficient as the intra-window texture consistency confidence.
With the intra-window texture consistency confidence denoted C4, the first texture consistency confidence tex_Pt3, the second texture consistency confidence tex_Pt4, the third preset coefficient w3 and the fourth preset coefficient w4:
C4 = w3 × tex_Pt3 + w4 × tex_Pt4
where w3 and w4 are empirical values that may be obtained by training a machine learning algorithm.
In an exemplary embodiment, correcting the first depth image and the first color image to obtain the second depth image and the second color image includes: under the same coordinate system, making the coordinates of the target object in the first depth image the same as the coordinates of the target object in the first color image, obtaining the second depth image and the second color image.
Because the intra-window texture consistency confidence is calculated from the first depth image and the first color image, the target object must have the same coordinates in both images under the same coordinate system; that is, the first depth image and the first color image are corrected.
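As a toy illustration of this alignment requirement (not the actual stereo rectification algorithm, which uses calibrated camera parameters), an image can be translated so that a target pixel lands on the same coordinates it has in the other image:

```python
def align_by_translation(image, src, dst, fill=0):
    """Hypothetical sketch: shift `image` so that the pixel at
    `src` = (row, col) moves to `dst`, approximating 'making the
    coordinates of the target object the same in both images'.
    Pixels shifted outside the frame are dropped; vacated cells get
    `fill`. Real systems rectify with intrinsic/extrinsic parameters."""
    dr, dc = dst[0] - src[0], dst[1] - src[1]
    rows, cols = len(image), len(image[0])
    out = [[fill] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                out[nr][nc] = image[r][c]
    return out
```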
In an exemplary embodiment, the first depth image is acquired by a depth camera; when the depth camera can also acquire a first infrared grayscale image, the method for determining the depth information confidence of an image further includes: determining a window matching degree confidence according to the first infrared grayscale image and the first color image.
When the depth camera can only acquire the first depth image, the relevant confidence information is calculated from the first depth image and the first color image to determine the depth information confidence of the image. When the depth camera can also acquire the first infrared grayscale image, the window matching degree confidence may additionally be determined from the first infrared grayscale image and the first color image.
When finally determining the depth information confidence of the image, the initial depth information confidence may be considered together with the window matching degree confidence, or together with the intra-window texture consistency confidence, or together with both the window matching degree confidence and the intra-window texture consistency confidence.
In an exemplary embodiment, determining the depth information confidence of the depth image from the initial depth information confidence and the intra-window texture consistency confidence includes: determining the depth information confidence of the depth image from the initial depth information confidence, the intra-window texture consistency confidence and the window matching degree confidence.
When the depth camera can also acquire the first infrared grayscale image, to make the depth information confidence more accurate, the initial depth information confidence, the window matching degree confidence and the intra-window texture consistency confidence are all considered, determining the depth information confidence of the depth image from more sources of confidence information.
In an exemplary embodiment, determining the depth information confidence of the depth image from the initial depth information confidence, the intra-window texture consistency confidence and the window matching degree confidence includes: taking the sum of the product of the initial depth information confidence and the first preset coefficient, the product of the intra-window texture consistency confidence and the second preset coefficient, and the product of the window matching degree confidence and a fifth preset coefficient as the depth information confidence of the image.
With the initial depth information confidence denoted C21, the intra-window texture consistency confidence C22, the window matching degree confidence C23, the first preset coefficient w21, the second preset coefficient w22 and the fifth preset coefficient w23, the depth information confidence C of the image is expressed as: C = w21 × C21 + w22 × C22 + w23 × C23.
In an exemplary embodiment, FIG. 5 is a flowchart of a method for determining the window matching degree confidence according to the first infrared grayscale image and the first color image, according to an exemplary embodiment. As shown in FIG. 5, it includes the following steps:
Step S501: correct the first infrared grayscale image and the first color image to obtain a second infrared grayscale image and a third color image;
Step S502: acquire a third window area in the second infrared grayscale image and a fourth window area in the third color image, where the third window area and the fourth window area have a third correspondence relationship;
Step S503: determine the window matching degree confidence according to the third window area and the fourth window area.
In step S501, the first infrared grayscale image and the first color image are corrected by a stereo rectification algorithm to obtain the second infrared grayscale image and the third color image. The stereo rectification algorithm uses the intrinsic and extrinsic parameters from binocular calibration to transform the two image planes so that corresponding rows become coplanar-aligned, reducing the computational complexity of stereo matching; that is, two images whose rows are not coplanar-aligned in practice are rectified into coplanar row alignment. Since the first infrared grayscale image is acquired by the depth camera and the first color image by the RGB camera, the stereo rectification algorithm places the same pixel point of the two images on the same row of the pixel coordinate system, which facilitates the window matching degree confidence calculation.
In step S502, after the second infrared grayscale image and the third color image are obtained by the stereo rectification algorithm, any window area in the second infrared grayscale image is selected as the third window area, and a fourth window area is selected in the third color image; the fourth window area and the third window area have the second correspondence relationship, which includes that the relative position of the fourth window area in the third color image is the same as the relative position of the third window area in the second infrared grayscale image. When selecting the third and fourth window areas, the window areas may be determined from pixel points in the second infrared grayscale image and the third color image.
In step S503, when the third window area is determined from the third pixel point Pt1 in the second infrared grayscale image and the fourth window area is determined from the fourth pixel point Pt2 in the third color image, the window matching degree confidence C3 is determined from the third window area and the fourth window area by the following formula:
[formula image in the original publication]
or:
[formula image in the original publication]
where w is the width and h the length of the window area, [formula image] is the RGB value of the third pixel point Pt1, whose coordinates are (i, j), and [formula image] is the RGB value of the fourth pixel point Pt2, whose coordinates are (i, j); the third pixel point and the fourth pixel point are the pixel points with the same coordinates in the second infrared grayscale image and the third color image.
In the exemplary embodiments of the present disclosure, the window matching degree confidence is determined from the degree of matching between windows of the second infrared grayscale image and the third color image; considering the initial depth information confidence together with the window matching degree confidence can improve the accuracy of the depth information confidence.
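The matching formula itself is published only as an image. A common realisation of such a window matching score is the mean absolute difference of the RGB values over the two windows, which is what this sketch assumes (note that here a smaller value means a better match, so a real system would still map it to a confidence, e.g. by normalisation):

```python
def window_match_score(win_ir, win_rgb):
    """Mean absolute RGB difference between two equally sized windows.

    win_ir, win_rgb: lists of rows, each row a list of (R, G, B) tuples,
    taken at the same coordinates of the two rectified images.
    Returns the average per-pixel absolute difference; 0 means a
    perfect match, larger values mean a worse match. An assumed SAD
    realisation, not the patent's exact formula.
    """
    h = len(win_ir)
    w = len(win_ir[0])
    total = 0
    for i in range(h):
        for j in range(w):
            r1, g1, b1 = win_ir[i][j]
            r2, g2, b2 = win_rgb[i][j]
            total += abs(r1 - r2) + abs(g1 - g2) + abs(b1 - b2)
    return total / (w * h)
```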
In an exemplary embodiment, FIG. 6 is a flowchart of a method for acquiring the third window area in the second infrared grayscale image and the fourth window area in the third color image according to an exemplary embodiment. As shown in FIG. 6, it includes the following steps:
Step S601: select a third pixel point in the second infrared grayscale image and a fourth pixel point in the third color image, the third pixel point and the fourth pixel point having a fourth correspondence relationship;
Step S602: determine the third window area in the second infrared grayscale image according to the third pixel point, and determine the fourth window area in the third color image according to the fourth pixel point.
A third pixel point is selected in the second infrared grayscale image; the third pixel point may be any pixel point in the image. A fourth pixel point is selected in the third color image; the fourth pixel point and the third pixel point have the fourth correspondence relationship, namely that they are the corresponding pixel points at the same coordinates in the two images.
In an example, as shown in FIG. 7, the third pixel point Pt1 is the pixel with coordinates (8, 8) in the second infrared grayscale image, and the fourth pixel point Pt2 is the pixel with coordinates (8, 8) in the third color image.
The third window area in the second infrared grayscale image is determined from the third pixel point: for example, the third pixel point is used as the center point of the window area and the third window area is determined with a preset window size (smaller than the image size), or the third pixel point is used as a vertex of the window area and the third window area is determined with the preset window size. The fourth window area in the third color image is determined from the fourth pixel point in the same manner. The fourth window area and the third window area may be square areas or other symmetrically shaped areas.
In an example, as shown in FIG. 7, both window areas are squares: with the third pixel point Pt1 as the center point and a window size of 8×8, the third window area is recorded as area A, denoted W1(w1, h1, R_Pt1, G_Pt1, B_Pt1); with the fourth pixel point Pt2 as the center point and a window size of 8×8, the fourth window area is recorded as area B, denoted W2(w2, h2, R_Pt2, G_Pt2, B_Pt2). Here w1, w2, h1 and h2 are all 8, R_Pt1, G_Pt1, B_Pt1 are the RGB values of the third pixel point Pt1, and R_Pt2, G_Pt2, B_Pt2 are the RGB values of the fourth pixel point Pt2.
In an exemplary embodiment, correcting the first infrared grayscale image and the first color image to obtain the second infrared grayscale image and the third color image includes: under the same coordinate system, making the coordinates of the target object in the first infrared grayscale image the same as the coordinates of the target object in the first color image, obtaining the corrected images.
Because the window matching degree confidence is calculated from the first infrared grayscale image and the first color image, the target object must have the same coordinates in both images under the same coordinate system; that is, the first infrared grayscale image and the first color image are corrected.
In the exemplary embodiments of the present disclosure, when the first depth image, the first infrared grayscale image and the first color image can all be acquired, the initial depth information confidence is determined from the first depth image, the intra-window texture consistency confidence is determined from the first depth image and the first color image, and the window matching degree confidence is determined from the first infrared grayscale image and the first color image; combining the different depth information features of the three images makes the determined depth information confidence of the image more accurate.
In the exemplary embodiments of the present disclosure, an apparatus for determining the depth information confidence of an image is provided. As shown in FIG. 8, the determining apparatus includes:
a first acquisition module 801 configured to acquire a first depth image;
a second acquisition module 802 configured to acquire a first color image;
a first determination module 803 configured to determine an initial depth information confidence according to the first depth image;
a second determination module 804 configured to determine an intra-window texture consistency confidence according to the first depth image and the first color image;
a third determination module 805 configured to determine the depth information confidence of the depth image according to the initial depth information confidence and the intra-window texture consistency confidence.
In an exemplary embodiment, the third determination module 805 is further configured to:
take the sum of the product of the initial depth information confidence and a first preset coefficient and the product of the intra-window texture consistency confidence and a second preset coefficient as the depth information confidence of the image.
In an exemplary embodiment, the second determination module 804 is further configured to:
correct the first depth image and the first color image to obtain a second depth image and a second color image;
acquire a first window area in the second depth image and a second window area in the second color image, wherein the first window area and the second window area have a first correspondence relationship;
determine a first texture consistency confidence of the first window area;
determine a second texture consistency confidence of the second window area;
determine the intra-window texture consistency confidence according to the first texture consistency confidence and the second texture consistency confidence.
In an exemplary embodiment, the second determination module 804 is further configured to:
select a first pixel point in the second depth image and a second pixel point in the second color image, the first pixel point and the second pixel point having a second correspondence relationship;
determine the first window area in the second depth image according to the first pixel point, and determine the second window area in the second color image according to the second pixel point.
In an exemplary embodiment, the second determination module 804 is further configured to:
take the sum of the product of the first texture consistency confidence and a third preset coefficient and the product of the second texture consistency confidence and a fourth preset coefficient as the intra-window texture consistency confidence.
In an exemplary embodiment, the second determination module 804 is further configured to:
under the same coordinate system, make the coordinates of the target object in the first depth image the same as the coordinates of the target object in the first color image, obtaining the second depth image and the second color image.
In an exemplary embodiment, the first depth image is acquired by a depth camera; when the depth camera can acquire a first infrared grayscale image, the determining apparatus further includes:
a fourth determination module 806 configured to determine a window matching degree confidence according to the first infrared grayscale image and the first color image.
In an exemplary embodiment, the third determination module 805 is further configured to:
determine the depth information confidence of the depth image according to the initial depth information confidence, the intra-window texture consistency confidence and the window matching degree confidence.
In an exemplary embodiment, the third determination module 805 is further configured to:
take the sum of the product of the initial depth information confidence and the first preset coefficient, the product of the intra-window texture consistency confidence and the second preset coefficient, and the product of the window matching degree confidence and a fifth preset coefficient as the depth information confidence of the image.
In an exemplary embodiment, the fourth determination module 806 is further configured to:
correct the first infrared grayscale image and the first color image to obtain a second infrared grayscale image and a third color image;
acquire a third window area in the second infrared grayscale image and a fourth window area in the third color image, wherein the third window area and the fourth window area have a third correspondence relationship;
determine the window matching degree confidence according to the third window area and the fourth window area.
In an exemplary embodiment, the fourth determination module 806 is further configured to:
select a third pixel point in the second infrared grayscale image and a fourth pixel point in the third color image, the third pixel point and the fourth pixel point having a fourth correspondence relationship;
determine the third window area in the second infrared grayscale image according to the third pixel point, and determine the fourth window area in the third color image according to the fourth pixel point.
In an exemplary embodiment, the fourth determination module 806 is further configured to:
under the same coordinate system, make the coordinates of the target object in the first infrared grayscale image the same as the coordinates of the target object in the first color image, obtaining the second infrared grayscale image and the third color image.
With regard to the apparatus in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the method, and will not be elaborated here.
When the apparatus is a terminal, FIG. 9 is a block diagram of a mobile terminal 900 according to an exemplary embodiment.
Referring to FIG. 9, the device 900 may include one or more of the following components: a processing component 902, a memory 904, a power supply component 906, a multimedia component 908, an audio component 910, an input/output (I/O) interface 912, a sensor component 914, and a communication component 916.
The processing component 902 generally controls the overall operations of the device 900, such as operations associated with display, phone calls, data communication, camera operation and recording. The processing component 902 may include one or more processors 920 to execute instructions to complete all or part of the steps of the above method. In addition, the processing component 902 may include one or more modules that facilitate interaction between the processing component 902 and the other components; for example, it may include a multimedia module to facilitate interaction between the multimedia component 908 and the processing component 902.
The memory 904 is configured to store various types of data to support operation at the device 900. Examples of such data include instructions for any application or method operating on the device 900, contact data, phonebook data, messages, pictures, videos, and so on. The memory 904 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or a magnetic or optical disk.
The power supply component 906 provides power to the various components of the device 900. The power supply component 906 may include a power management system, one or more power supplies, and other components associated with generating, managing and distributing power for the device 900.
The multimedia component 908 includes a screen providing an output interface between the device 900 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP); if the screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes and gestures on the touch panel; the touch sensors may sense not only the boundary of a touch or swipe action but also the duration and pressure associated with it. In some embodiments, the multimedia component 908 includes a front camera and/or a rear camera, which may receive external multimedia data when the device 900 is in an operating mode such as a shooting mode or a video mode. Each front and rear camera may be a fixed optical lens system or have focusing and optical zoom capability.
The audio component 910 is configured to output and/or input audio signals. For example, the audio component 910 includes a microphone (MIC) configured to receive external audio signals when the device 900 is in an operating mode such as a call mode, a recording mode or a speech recognition mode; the received audio signals may be further stored in the memory 904 or sent via the communication component 916. In some embodiments, the audio component 910 also includes a speaker for outputting audio signals.
The I/O interface 912 provides an interface between the processing component 902 and peripheral interface modules such as a keyboard, a click wheel or buttons, which may include but are not limited to a home button, volume buttons, a start button and a lock button.
The sensor component 914 includes one or more sensors for providing status assessments of various aspects of the device 900. For example, the sensor component 914 may detect the open/closed state of the device 900 and the relative positioning of components such as its display and keypad, and may also detect a change in position of the device 900 or of one of its components, the presence or absence of user contact with the device 900, the orientation or acceleration/deceleration of the device 900, and temperature changes of the device 900. The sensor component 914 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact, and may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 914 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor.
The communication component 916 is configured to facilitate wired or wireless communication between the device 900 and other devices. The device 900 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In one exemplary embodiment, the communication component 916 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 916 also includes a near field communication (NFC) module to facilitate short-range communication; for example, the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
In an exemplary embodiment, the device 900 may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors or other electronic components, for executing the above method.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, for example the memory 904 including instructions executable by the processor 920 of the device 900 to complete the above method. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and so on.
A non-transitory computer-readable storage medium is provided; when the instructions in the storage medium are executed by a processor of a device, the device is enabled to perform a method for determining the depth information confidence of an image, including any of the above methods for determining the depth information confidence of an image.
Other embodiments of the invention will readily occur to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses or adaptations of the invention that follow its general principles and include common general knowledge or customary technical means in the art not disclosed in the present disclosure. The specification and the embodiments are to be regarded as exemplary only, with the true scope and spirit of the invention being indicated by the following claims.
It should be understood that the invention is not limited to the precise structures described above and shown in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the invention is limited only by the appended claims.
Industrial Applicability
Herein, the depth information confidence of an image is determined according to the initial depth information confidence and the intra-window texture consistency confidence, so that the depth information feature quantities in different images can be comprehensively considered to accurately determine the depth information confidence of the image.

Claims (15)

  1. A method for determining the depth information confidence of an image, wherein the method comprises:
    acquiring a first depth image;
    acquiring a first color image;
    determining an initial depth information confidence according to the first depth image;
    determining an in-window texture consistency confidence according to the first depth image and the first color image;
    determining the depth information confidence of the depth image according to the initial depth information confidence and the in-window texture consistency confidence.
  2. The method for determining the depth information confidence of an image according to claim 1, wherein the determining the depth information confidence of the depth image according to the initial depth information confidence and the in-window texture consistency confidence comprises:
    taking the sum of the product of the initial depth information confidence and a first preset coefficient and the product of the in-window texture consistency confidence and a second preset coefficient as the depth information confidence of the image.
  3. The method for determining the depth information confidence of an image according to claim 1, wherein the determining the in-window texture consistency confidence according to the first depth image and the first color image comprises:
    rectifying the first depth image and the first color image to obtain a second depth image and a second color image;
    acquiring a first window region in the second depth image and a second window region in the second color image, wherein the first window region and the second window region are in a first correspondence;
    determining a first texture consistency confidence of the first window region;
    determining a second texture consistency confidence of the second window region;
    determining the in-window texture consistency confidence according to the first texture consistency confidence and the second texture consistency confidence.
  4. The method for determining the depth information confidence of an image according to claim 3, wherein the acquiring the first window region in the second depth image and the second window region in the second color image comprises:
    selecting a first pixel in the second depth image and selecting a second pixel in the second color image, the first pixel and the second pixel being in a second correspondence;
    determining the first window region in the second depth image according to the first pixel, and determining the second window region in the second color image according to the second pixel.
  5. The method for determining the depth information confidence of an image according to claim 3, wherein the determining the in-window texture consistency confidence according to the first texture consistency confidence and the second texture consistency confidence comprises:
    taking the sum of the product of the first texture consistency confidence and a third preset coefficient and the product of the second texture consistency confidence and a fourth preset coefficient as the in-window texture consistency confidence.
  6. The method for determining the depth information confidence of an image according to claim 3, wherein the rectifying the first depth image and the first color image to obtain the second depth image and the second color image comprises:
    making, in the same coordinate system, the coordinates of a target object in the first depth image identical to the coordinates of the target object in the first color image, to obtain the second depth image and the second color image.
  7. The method for determining the depth information confidence of an image according to any one of claims 1-6, wherein the first depth image is acquired by a depth camera; and when the depth camera is capable of acquiring a first infrared grayscale image, the method further comprises:
    determining a window matching degree confidence according to the first infrared grayscale image and the first color image.
  8. The method for determining the depth information confidence of an image according to claim 7, wherein the determining the depth information confidence of the depth image according to the initial depth information confidence and the in-window texture consistency confidence comprises:
    determining the depth information confidence of the depth image according to the initial depth information confidence, the in-window texture consistency confidence, and the window matching degree confidence.
  9. The method for determining the depth information confidence of an image according to claim 8, wherein the determining the depth information confidence of the depth image according to the initial depth information confidence, the in-window texture consistency confidence, and the window matching degree confidence comprises:
    taking the sum of the product of the initial depth information confidence and a first preset coefficient, the product of the in-window texture consistency confidence and a second preset coefficient, and the product of the window matching degree confidence and a fifth preset coefficient as the depth information confidence of the image.
  10. The method for determining the depth information confidence of an image according to claim 8, wherein the determining the window matching degree confidence according to the first infrared grayscale image and the first color image comprises:
    rectifying the first infrared grayscale image and the first color image to obtain a second infrared grayscale image and a third color image;
    acquiring a third window region in the second infrared grayscale image and a fourth window region in the third color image, wherein the third window region and the fourth window region are in a third correspondence;
    determining the window matching degree confidence according to the third window region and the fourth window region.
  11. The method for determining the depth information confidence of an image according to claim 10, wherein the acquiring the third window region in the second infrared grayscale image and the fourth window region in the third color image comprises:
    selecting a third pixel in the second infrared grayscale image and selecting a fourth pixel in the third color image, the third pixel and the fourth pixel being in a fourth correspondence;
    determining the third window region in the second infrared grayscale image according to the third pixel, and determining the fourth window region in the third color image according to the fourth pixel.
  12. The method for determining the depth information confidence of an image according to claim 10, wherein the rectifying the first infrared grayscale image and the first color image to obtain the second infrared grayscale image and the third color image comprises:
    making, in the same coordinate system, the coordinates of the target object in the first infrared grayscale image identical to the coordinates of the target object in the first color image, to obtain the second infrared grayscale image and the third color image.
  13. A device for determining the depth information confidence of an image, wherein the device comprises:
    a first acquisition module configured to acquire a first depth image;
    a second acquisition module configured to acquire a first color image;
    a first determination module configured to determine an initial depth information confidence according to the first depth image;
    a second determination module configured to determine an in-window texture consistency confidence according to the first depth image and the first color image;
    a third determination module configured to determine the depth information confidence of the depth image according to the initial depth information confidence and the in-window texture consistency confidence.
  14. A mobile terminal, comprising:
    a processor; and
    a memory for storing instructions executable by the processor;
    wherein the processor is configured to perform the method according to any one of claims 1-12.
  15. A non-transitory computer-readable storage medium, wherein when instructions in the storage medium are executed by a processor of a device, the device is enabled to perform the method according to any one of claims 1-12.
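Claims 3-5 can be read as a concrete pipeline: rectify the two images, extract corresponding window regions around corresponding pixels, score the texture consistency of each window, and blend the two scores with preset coefficients. The sketch below illustrates that flow under stated assumptions: it uses local standard deviation as the texture-consistency measure and equal coefficients `c3`/`c4`, neither of which the claims specify; the window size and all function names are likewise hypothetical.

```python
import numpy as np

def window_region(img, cx, cy, half=2):
    """Window region centered on a selected pixel (cf. claims 4 and 11);
    clipped at the image border."""
    return img[max(cy - half, 0):cy + half + 1,
               max(cx - half, 0):cx + half + 1]

def texture_consistency(window):
    """Hypothetical texture-consistency measure: approaches 1.0 when the
    window is locally uniform, decreases with texture variation.
    The disclosure does not specify the metric."""
    return 1.0 / (1.0 + float(np.std(window)))

def in_window_texture_confidence(depth_img, color_gray, cx, cy, c3=0.5, c4=0.5):
    """Claim 5: weighted sum of the per-window confidences of the rectified
    depth image and (grayscale) color image. c3/c4 are placeholder values
    for the third and fourth preset coefficients."""
    t1 = texture_consistency(window_region(depth_img, cx, cy))
    t2 = texture_consistency(window_region(color_gray, cx, cy))
    return c3 * t1 + c4 * t2
```

On a perfectly uniform pair of windows both terms equal 1.0, so the combined confidence is maximal; any texture in either window pulls it below 1.0.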
PCT/CN2022/099948 2022-06-20 2022-06-20 Method and device for determining depth information confidence of an image, and storage medium WO2023245378A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202280004380.9A CN117616456A (zh) 2022-06-20 2022-06-20 Method and device for determining depth information confidence of an image, and storage medium
PCT/CN2022/099948 WO2023245378A1 (zh) 2022-06-20 2022-06-20 Method and device for determining depth information confidence of an image, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/099948 WO2023245378A1 (zh) 2022-06-20 2022-06-20 Method and device for determining depth information confidence of an image, and storage medium

Publications (1)

Publication Number Publication Date
WO2023245378A1 true WO2023245378A1 (zh) 2023-12-28

Family

ID=89378961

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/099948 WO2023245378A1 (zh) 2022-06-20 2022-06-20 Method and device for determining depth information confidence of an image, and storage medium

Country Status (2)

Country Link
CN (1) CN117616456A (zh)
WO (1) WO2023245378A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105139401A (zh) * 2015-08-31 2015-12-09 山东中金融仕文化科技股份有限公司 Method for evaluating the credibility of depth in a depth map
CN109410259A (zh) * 2018-08-27 2019-03-01 中国科学院自动化研究所 Confidence-based structured binocular depth map upsampling method
CN112150528A (zh) * 2019-06-27 2020-12-29 Oppo广东移动通信有限公司 Depth image acquisition method, terminal, and computer-readable storage medium
US20220111839A1 (en) * 2020-01-22 2022-04-14 Nodar Inc. Methods and systems for providing depth maps with confidence estimates
CN114581504A (zh) * 2022-03-30 2022-06-03 努比亚技术有限公司 Depth image confidence calculation method, device, and computer-readable storage medium


Also Published As

Publication number Publication date
CN117616456A (zh) 2024-02-27


Legal Events

Date Code Title Description
WWE — WIPO information: entry into national phase (Ref document number: 202280004380.9; Country of ref document: CN)
121 — EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 22947159; Country of ref document: EP; Kind code of ref document: A1)