CN117616456A - Method, device and storage medium for determining depth information confidence of image

Info

Publication number
CN117616456A
Authority
CN
China
Prior art keywords
image
confidence
window
determining
depth
Prior art date
Legal status
Pending
Application number
CN202280004380.9A
Other languages
Chinese (zh)
Inventor
Zhang Chao (张超)
Current Assignee
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Publication of CN117616456A

Abstract

The disclosure relates to a method, a device and a storage medium for determining the depth information confidence of an image. The method for determining the depth information confidence of an image includes: acquiring a first depth image; acquiring a first color image; determining an initial depth information confidence according to the first depth image; determining a texture consistency confidence in the window according to the first depth image and the first color image; and determining the depth information confidence of the depth image according to the initial depth information confidence and the texture consistency confidence in the window.

Description

Method, device and storage medium for determining depth information confidence of image

Technical Field
The disclosure relates to the technical field of image processing, and in particular to a method, a device and a storage medium for determining the depth information confidence of an image.
Background
Depth imaging based on a depth camera is widely applied in technical fields such as automatic driving and three-dimensional reconstruction. The reliability of the depth value of a pixel in the depth image, i.e. the reliability of the depth information, is referred to as the depth information confidence of the image.
In the related art, the confidence of the depth image is calculated based on the energy received by the depth sensor. This way of calculating the confidence depends on the output of the camera system, has low correlation with the scene, and is not applicable to scenes with special textures or special reflectivity.
Disclosure of Invention
In order to overcome the problems in the related art, the present disclosure provides a method, apparatus, and storage medium for determining a depth information confidence of an image.
According to a first aspect of embodiments of the present disclosure, there is provided a method for determining a depth information confidence of an image, the method comprising:
acquiring a first depth image;
acquiring a first color image;
determining initial depth information confidence according to the first depth image;
determining a confidence level of texture consistency in a window according to the first depth image and the first color image;
and determining the depth information confidence of the depth image according to the initial depth information confidence and the texture consistency confidence in the window.
In an exemplary embodiment, the determining the depth information confidence of the depth image according to the initial depth information confidence and the texture consistency confidence in the window includes:
and taking the sum of the product of the initial depth information confidence and the first preset coefficient and the product of the texture consistency confidence in the window and the second preset coefficient as the depth information confidence of the image.
In an exemplary embodiment, the determining the texture consistency confidence in the window according to the first depth image and the first color image includes:
correcting the first depth image and the first color image to obtain a second depth image and a second color image;
acquiring a first window area in the second depth image and a second window area in the second color image, wherein the first window area and the second window area are in a first corresponding relationship;
determining a first texture consistency confidence of the first window region;
determining a second texture consistency confidence of the second window region;
and determining the texture consistency confidence in the window according to the first texture consistency confidence and the second texture consistency confidence.
In an exemplary embodiment, the acquiring a first window area in the second depth image and a second window area in the second color image includes:
selecting a first pixel point in the second depth image, and selecting a second pixel point in the second color image, wherein the first pixel point and the second pixel point are in a second corresponding relation;
and determining the first window area in the second depth image according to the first pixel point, and determining the second window area in the second color image according to the second pixel point.
In an exemplary embodiment, the determining the texture consistency confidence in the window according to the first texture consistency confidence and the second texture consistency confidence includes:
and taking the sum of the product of the first texture consistency confidence and the third preset coefficient and the product of the second texture consistency confidence and the fourth preset coefficient as the texture consistency confidence in the window.
In an exemplary embodiment, the correcting the first depth image and the first color image to obtain a second depth image and a second color image includes:
and correcting the images so that, in the same coordinate system, the coordinates of the target object in the first depth image are the same as the coordinates of the target object in the first color image, thereby obtaining the second depth image and the second color image.
In an exemplary embodiment, the first depth image is acquired by a depth camera; when the depth camera can acquire the first infrared gray scale image, the determining method further comprises the following steps:
and determining window matching confidence according to the first infrared gray level image and the first color image.
In an exemplary embodiment, the determining the depth information confidence of the depth image according to the initial depth information confidence and the texture consistency confidence in the window includes:
and determining the depth information confidence of the depth image according to the initial depth information confidence, the texture consistency confidence in the window and the window matching confidence.
In an exemplary embodiment, the determining the depth information confidence of the depth image according to the initial depth information confidence, the intra-window texture consistency confidence, and the window matching confidence includes:
and taking the sum of the product of the initial depth information confidence and a first preset coefficient, the product of the texture consistency confidence in the window and a second preset coefficient, and the product of the window matching confidence and a fifth preset coefficient as the depth information confidence of the image.
In an exemplary embodiment, the determining the window matching confidence level according to the first infrared gray scale image and the first color image includes:
correcting the first infrared gray scale image and the first color image to obtain a second infrared gray scale image and a third color image;
acquiring a third window area in the second infrared gray scale image and a fourth window area in the third color image; wherein the third window area and the fourth window area are in a third corresponding relationship;
and determining the window matching confidence according to the third window area and the fourth window area.
In an exemplary embodiment, the acquiring a third window area in the second infrared grayscale image and a fourth window area in the third color image includes:
selecting a third pixel point from the second infrared gray level image, and selecting a fourth pixel point from the third color image, wherein the third pixel point and the fourth pixel point are in a fourth corresponding relation;
and determining a third window area in the second infrared gray scale image according to the third pixel point, and determining a fourth window area in the third color image according to the fourth pixel point.
In an exemplary embodiment, said correcting said first infrared grayscale image and said first color image to obtain a second infrared grayscale image and a third color image comprises:
and correcting the images so that, in the same coordinate system, the coordinates of the target object in the first infrared gray scale image are the same as the coordinates of the target object in the first color image, thereby obtaining the second infrared gray scale image and the third color image.
According to a second aspect of the embodiments of the present disclosure, there is provided a determining apparatus for depth information confidence of an image, the determining apparatus including:
a first acquisition module configured to acquire a first depth image;
a second acquisition module configured to acquire a first color image;
a first determination module configured to determine an initial depth information confidence level from the first depth image;
a second determination module configured to determine a texture consistency confidence within a window from the first depth image and the first color image;
and a third determining module configured to determine a depth information confidence of the depth image according to the initial depth information confidence and the intra-window texture consistency confidence.
In an exemplary embodiment, the third determination module is further configured to:
and taking the sum of the product of the initial depth information confidence coefficient and the first preset coefficient and the product of the texture consistency confidence coefficient in the window and the second preset coefficient as the depth information confidence coefficient of the image.
In an exemplary embodiment, the second determination module is further configured to:
correcting the first depth image and the first color image to obtain a second depth image and a second color image;
acquiring a first window area in the second depth image and a second window area in the second color image, wherein the first window area and the second window area are in a first corresponding relationship;
determining a first texture consistency confidence of the first window region;
determining a second texture consistency confidence of the second window region;
and determining the texture consistency confidence in the window according to the first texture consistency confidence and the second texture consistency confidence.
In an exemplary embodiment, the second determination module is further configured to:
selecting a first pixel point from the second depth image, and selecting a second pixel point from the second color image, wherein the first pixel point and the second pixel point are in a second corresponding relation;
and determining the first window area in the second depth image according to the first pixel point, and determining the second window area in the second color image according to the second pixel point.
In an exemplary embodiment, the second determination module is further configured to:
and taking the sum of the product of the first texture consistency confidence and the third preset coefficient and the product of the second texture consistency confidence and the fourth preset coefficient as the texture consistency confidence in the window.
In an exemplary embodiment, the second determination module is further configured to:
and correcting the images so that, in the same coordinate system, the coordinates of the target object in the first depth image are the same as the coordinates of the target object in the first color image, thereby obtaining the second depth image and the second color image.
In an exemplary embodiment, the first depth image is acquired by a depth camera; when the depth camera can acquire a first infrared gray scale image, the determining device further comprises:
and a fourth determining module configured to determine a window matching confidence level based on the first infrared grayscale image and the first color image.
In an exemplary embodiment, the third determination module is further configured to:
and determining the depth information confidence of the depth image according to the initial depth information confidence, the texture consistency confidence in the window and the window matching confidence.
In an exemplary embodiment, the third determination module is further configured to:
and taking the sum of the product of the initial depth information confidence and a first preset coefficient, the product of the texture consistency confidence in the window and a second preset coefficient, and the product of the window matching confidence and a fifth preset coefficient as the depth information confidence of the image.
In an exemplary embodiment, the fourth determination module is further configured to:
correcting the first infrared gray scale image and the first color image to obtain a second infrared gray scale image and a third color image;
acquiring a third window area in the second infrared gray scale image and a fourth window area in the third color image; wherein the third window area and the fourth window area are in a third corresponding relationship;
and determining the window matching confidence according to the third window area and the fourth window area.
In an exemplary embodiment, the fourth determination module is further configured to:
selecting a third pixel point from the second infrared gray level image, and selecting a fourth pixel point from the third color image, wherein the third pixel point and the fourth pixel point are in a fourth corresponding relation;
and determining a third window area in the second infrared gray scale image according to the third pixel point, and determining a fourth window area in the third color image according to the fourth pixel point.
In an exemplary embodiment, the fourth determination module is further configured to:
and correcting the images so that, in the same coordinate system, the coordinates of the target object in the first infrared gray scale image are the same as the coordinates of the target object in the first color image, thereby obtaining the second infrared gray scale image and the third color image.
According to a third aspect of embodiments of the present disclosure, there is provided a mobile terminal comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the method according to any of the first aspects of the embodiments of the present disclosure.
According to a fourth aspect of embodiments of the present disclosure, there is provided a non-transitory computer readable storage medium storing instructions that, when executed by a processor of an apparatus, cause the apparatus to perform the method according to any one of the first aspects of embodiments of the present disclosure.
The method has the following beneficial effects: the depth information confidence of the image is determined according to the initial depth information confidence and the texture consistency confidence in the window, so that depth information feature quantities in different images are comprehensively considered and the depth information confidence of the image is determined accurately.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow chart illustrating a method of determining depth information confidence for an image according to an exemplary embodiment;
FIG. 2 is a flowchart illustrating a method of determining the texture consistency confidence in the window from the first depth image and the first color image in step S104, according to an exemplary embodiment;
FIG. 3 is a flowchart illustrating a method of acquiring a first window region in a second depth image, a second window region in a second color image, according to an example embodiment;
FIG. 4 is a schematic diagram of a first window region and a second window region shown according to an exemplary embodiment;
FIG. 5 is a flowchart illustrating a method of determining window matching confidence from a first infrared grayscale image and a first color image, according to an exemplary embodiment;
FIG. 6 is a flowchart illustrating a method of acquiring a third window region in a second infrared grayscale image and a fourth window region in a third color image according to an exemplary embodiment;
fig. 7 is a schematic diagram of a third window area and a fourth window area shown according to an exemplary embodiment.
FIG. 8 is a block diagram of a device for determining depth information confidence of an image, according to an exemplary embodiment;
fig. 9 is a block diagram of a mobile terminal, according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
In an exemplary embodiment of the present disclosure, a method for determining a depth information confidence of an image is provided, and fig. 1 is a flowchart illustrating a method for determining a depth information confidence of an image according to an exemplary embodiment, and as shown in fig. 1, the method for determining a depth information confidence of an image includes the steps of:
step S101: acquiring a first depth image;
step S102: acquiring a first color image;
step S103: determining initial depth information confidence according to the first depth image;
step S104: determining the confidence level of texture consistency in the window according to the first depth image and the first color image;
step S105: and determining the depth information confidence of the depth image according to the initial depth information confidence and the texture consistency confidence in the window.
In an exemplary embodiment of the present disclosure, in order to overcome the problem in the prior art that the depth information confidence of an image is determined only from its depth image, so that the determined confidence is not applicable to special scenes, a method for determining the depth information confidence based on the image is provided: a first depth image and a first color image are acquired respectively; an initial depth information confidence is determined according to the first depth image; a texture consistency confidence in the window is determined according to the first depth image and the first color image; and the depth information confidence of the image is determined according to the initial depth information confidence and the texture consistency confidence in the window.
In an exemplary embodiment of the present disclosure, the initial depth information confidence is determined based on the depth values of the pixels of the image. And considering different application scenes, combining the color images, determining the confidence coefficient of texture consistency in the window, and finally determining the confidence coefficient of depth information of the image according to the initial confidence coefficient of depth information and the confidence coefficient of texture consistency in the window, thereby improving the accuracy of the confidence coefficient of depth information of the image.
In step S101, a first depth image may be acquired by a TOF (Time of Flight) depth camera.
In step S102, a first color image may be acquired by an RGB camera.
In step S103, the initial depth information confidence is determined from the first depth image, i.e. from the depth values of the pixels of the image. For example, any pixel Pt0 in the depth image may be selected, a window area determined with this pixel as the center point, the window slid across the entire depth image, the matching degree between the depth values of all pixels in each window area and a preset depth value determined, and the matching degree of the best matching window taken as the initial depth information confidence. The preset depth value is the actual depth value.
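As an illustration only, a minimal sketch of this step in Python follows; the function name initial_depth_confidence and the normalized absolute-difference matching measure are assumptions, since the text does not specify the exact matching formula:

    import numpy as np

    def initial_depth_confidence(depth, preset_depth, win=8):
        # Sketch of step S103: slide a win x win window over the depth image,
        # score how well the depth values in each window match the preset
        # (actual) depth value, and keep the score of the best matching window.
        half = win // 2
        h, w = depth.shape
        best = 0.0
        for y in range(half, h - half):
            for x in range(half, w - half):
                patch = depth[y - half:y + half, x - half:x + half]
                # Assumed measure: 1 minus the mean relative deviation from
                # the preset depth value.
                score = 1.0 - np.mean(np.abs(patch - preset_depth)) / (abs(preset_depth) + 1e-6)
                best = max(best, score)
        return best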
In step S104, the texture consistency confidence in the window is determined from the first depth image and the first color image. Because the feature quantities of the depth information in images acquired by different cameras are different, i.e. the feature quantities of the depth information in the first depth image and the first color image are different, the feature quantities of the depth information in the two images are combined to determine the texture consistency confidence in the window, so that the depth information confidence of the image can be obtained from multiple angles.
In step S105, when determining the depth information confidence of the image according to the initial depth information confidence and the intra-window texture consistency confidence, for example, the initial depth information confidence and the intra-window texture consistency confidence may be directly added, or the initial depth information confidence and the intra-window texture consistency confidence may be weighted and summed.
In an exemplary embodiment of the disclosure, when determining the confidence level of depth information of an image, a first depth image and a first color image are acquired respectively in consideration of different application scenes, an initial confidence level of depth information is determined according to the first depth image, and a confidence level of texture consistency in a window is determined according to the first color image. And determining the depth information confidence of the image according to the initial depth information confidence and the texture consistency confidence in the window, and comprehensively considering the depth information feature quantities in different images to accurately determine the depth information confidence of the image.
In an exemplary embodiment, determining the depth information confidence of the depth image based on the initial depth information confidence and the texture consistency confidence within the window comprises: and taking the sum of the product of the initial depth information confidence and the first preset coefficient and the product of the texture consistency confidence in the window and the second preset coefficient as the depth information confidence of the image.
The first preset coefficient and the second preset coefficient are empirical values, and can be obtained through training of a machine learning algorithm, and can also be obtained through data analysis of initial depth information confidence and texture consistency confidence in a window.
For example, w11 may be used to represent the first preset coefficient, w12 the second preset coefficient, C11 the initial depth information confidence, and C12 the texture consistency confidence in the window; the depth information confidence C of the image is then represented as: C = w11 × C11 + w12 × C12.
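A minimal sketch of this weighted fusion; the weight values below are placeholders, since w11 and w12 are empirical values in the text:

    def fuse_confidence(c11, c12, w11=0.5, w12=0.5):
        # C = w11 * C11 + w12 * C12 (placeholder weights, not patent values)
        return w11 * c11 + w12 * c12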
In an exemplary embodiment, fig. 2 is a flowchart of the method for determining the texture consistency confidence in the window according to the first depth image and the first color image in step S104. As shown in fig. 2, the method includes the following steps:
step S201: correcting the first depth image and the first color image to obtain a second depth image and a second color image;
step S202: acquiring a first window area in the second depth image and a second window area in the second color image, wherein the first window area and the second window area are in a first corresponding relation;
step S203: determining a first texture consistency confidence of a first window region;
step S204: determining a second texture consistency confidence for the second window region;
step S205: and determining the texture consistency confidence in the window according to the first texture consistency confidence and the second texture consistency confidence.
In step S201, the first depth image and the first color image are corrected by a stereo correction algorithm, so that rows that are not coplanar in the first depth image and the first color image are corrected into coplanar row alignment, obtaining the second depth image and the second color image.
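For illustration, a sketch of such a correction using OpenCV's stereo rectification is given below; the calibration inputs (K_depth, K_rgb, the distortion vectors, R and T) are assumed to come from a prior binocular calibration and are not specified by the text:

    import cv2

    def rectify_pair(depth_img, color_img, K_depth, dist_depth, K_rgb, dist_rgb, R, T):
        # Sketch of step S201: remap both images so that corresponding rows
        # become coplanar (row-aligned), yielding the second depth image and
        # the second color image.
        size = (color_img.shape[1], color_img.shape[0])  # (width, height)
        R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(
            K_depth, dist_depth, K_rgb, dist_rgb, size, R, T)
        map1x, map1y = cv2.initUndistortRectifyMap(K_depth, dist_depth, R1, P1, size, cv2.CV_32FC1)
        map2x, map2y = cv2.initUndistortRectifyMap(K_rgb, dist_rgb, R2, P2, size, cv2.CV_32FC1)
        second_depth = cv2.remap(depth_img, map1x, map1y, cv2.INTER_NEAREST)
        second_color = cv2.remap(color_img, map2x, map2y, cv2.INTER_LINEAR)
        return second_depth, second_color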
In step S202, after the second depth image and the second color image are obtained by the stereo correction algorithm, any window area in the image is selected as the first window area in the second depth image, the second window area is selected in the second color image, and the first window area and the second window area are in the first corresponding relationship. The first correspondence includes a relative position of the first window region in the second depth image being the same as a relative position of the second window region in the second color image.
In step S203, the pixel at the center point of the first window area is denoted as Pt3, and the first texture consistency confidence tex_Pt3 of the first window area is calculated by means of the gray level co-occurrence matrix.
Con is the contrast feature value of each pixel in the first window area and reflects the sharpness of the image and the depth of the texture grooves; Asm is the energy feature value and reflects the uniformity of the image gray level distribution and the coarseness of the texture; Ent is the entropy feature value and reflects the complexity of the image gray level distribution; Corr is the correlation feature value and reflects the local gray level correlation of the image. These feature values follow the standard gray level co-occurrence matrix definitions, with p(i, j) the normalized co-occurrence entry, μi and μj the row and column means, and σi and σj the row and column standard deviations: Con = Σi Σj (i - j)² p(i, j); Asm = Σi Σj p(i, j)²; Ent = -Σi Σj p(i, j) log p(i, j); Corr = Σi Σj (i - μi)(j - μj) p(i, j) / (σi σj).
In step S204, the pixel at the center point of the second window area is denoted as Pt4, and the second texture consistency confidence tex_Pt4 of the second window area is calculated by means of the gray level co-occurrence matrix in the same way, where Con, Asm, Ent and Corr are the contrast, energy, entropy and correlation feature values of each pixel in the second window area.
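A sketch of these texture measures using scikit-image's gray level co-occurrence matrix utilities; since the text does not specify how the four feature values are folded into a single tex value, an unweighted sum is assumed here:

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

    def texture_confidence(window):
        # Sketch of steps S203/S204: contrast, energy (ASM), entropy and
        # correlation of the window's gray level co-occurrence matrix.
        window = np.asarray(window, dtype=np.uint8)
        glcm = graycomatrix(window, distances=[1], angles=[0],
                            levels=256, symmetric=True, normed=True)
        con = graycoprops(glcm, 'contrast')[0, 0]
        asm = graycoprops(glcm, 'ASM')[0, 0]
        corr = graycoprops(glcm, 'correlation')[0, 0]
        p = glcm[:, :, 0, 0]
        ent = -np.sum(p[p > 0] * np.log2(p[p > 0]))  # entropy; not offered by graycoprops
        return con + asm + ent + corr  # assumed aggregation into a single tex value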
In step S205, when determining the texture consistency confidence in the window according to the first texture consistency confidence and the second texture consistency confidence, the absolute difference between the two may be taken as the texture consistency confidence in the window:
C4 = |tex_Pt3 - tex_Pt4|
Alternatively, a weighted sum of the first texture consistency confidence and the second texture consistency confidence may be taken as the texture consistency confidence in the window:
C4 = w3 × tex_Pt3 + w4 × tex_Pt4
where w3 and w4 are empirical values that can be obtained through training of a machine learning algorithm.
In the exemplary embodiment of the disclosure, the final texture consistency confidence is determined by respectively calculating the texture consistency confidence in the windows of the second depth image and the second color image, so that the depth information of the depth image and the color image can be comprehensively considered, and the depth information confidence of the image is more accurate.
In an exemplary embodiment, fig. 3 is a flowchart illustrating a method for acquiring a first window area in a second depth image and a second window area in a second color image according to an exemplary embodiment, as shown in fig. 3, including the steps of:
step S301, selecting a first pixel point in a second depth image, selecting a second pixel point in a second color image, wherein the first pixel point and the second pixel point are in a second corresponding relation;
step S302, a first window area in the second depth image is determined according to the first pixel point, and a second window area in the second color image is determined according to the second pixel point.
Selecting a first pixel point in the second depth image, wherein the first pixel point can be any pixel point in the image, selecting a second pixel point in the second color image, and the first pixel point and the second pixel point are in a second corresponding relation. The second corresponding relation is that the first pixel point and the second pixel point are corresponding pixel points with the same coordinates in the image.
In an example, as shown in fig. 4, the first pixel Pt3 is a pixel with coordinates (8, 8) in the second depth image, and the second pixel Pt4 is a pixel with coordinates (8, 8) in the second color image.
And determining a first window area in the second depth image according to the first pixel point, for example, determining the first window area with a preset window size by taking the first pixel point as a central point of the window area, wherein the size of the window area is smaller than that of the image, or determining the first window area with a preset window size by taking the first pixel point as a vertex angle of the window area. And determining a second window area in the second color image according to the second pixel points, wherein the determination mode of the second window area is the same as that of the first window area. The first window area and the second window area may be square areas or other symmetrical areas.
In an example, as shown in fig. 4, the first window area and the second window area are square areas. With the first pixel Pt3 as the center point of the first window area and a window size of 8×8, the first window area is denoted as area C, i.e. W3(w3, h3, R_Pt3, G_Pt3, B_Pt3); with the second pixel Pt4 as the center point of the second window area and a window size of 8×8, the second window area is denoted as area D, i.e. W4(w4, h4, R_Pt4, G_Pt4, B_Pt4). Here w3, w4, h3 and h4 are all 8; R_Pt3, G_Pt3 and B_Pt3 are the RGB values of the first pixel Pt3, and R_Pt4, G_Pt4 and B_Pt4 are the RGB values of the second pixel Pt4.
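A minimal sketch of this window selection; the helper window_at and its slicing convention are illustrative assumptions:

    def window_at(img, cx, cy, size=8):
        # Sketch of steps S301/S302: take the pixel at (cx, cy) as the center
        # point of a size x size window; the same coordinates are used in both
        # rectified images, as the second correspondence requires.
        half = size // 2
        return img[cy - half:cy + half, cx - half:cx + half]

    # Region C around Pt3 = (8, 8) and region D around Pt4 = (8, 8):
    # region_c = window_at(second_depth, 8, 8)
    # region_d = window_at(second_color, 8, 8)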
In an exemplary embodiment, determining the texture consistency confidence within the window based on the first texture consistency confidence and the second texture consistency confidence comprises: and taking the sum of the product of the first texture consistency confidence and the third preset coefficient and the product of the second texture consistency confidence and the fourth preset coefficient as the texture consistency confidence in the window.
Denoting the texture consistency confidence in the window as C4, the first texture consistency confidence as tex_Pt3, the second texture consistency confidence as tex_Pt4, the third preset coefficient as w3 and the fourth preset coefficient as w4, then:
C4 = w3 × tex_Pt3 + w4 × tex_Pt4
where w3 and w4 are empirical values that can be obtained through training of a machine learning algorithm.
In an exemplary embodiment, correcting the first depth image and the first color image to obtain a second depth image and a second color image includes: correcting the images so that, in the same coordinate system, the coordinates of the target object in the first depth image are the same as those of the target object in the first color image, thereby obtaining the second depth image and the second color image.
Since the texture consistency confidence in the window needs to be calculated from the first depth image and the first color image, the coordinates of the target object in the first depth image and the first color image need to be the same in the same coordinate system; that is, the first depth image and the first color image are corrected.
In an exemplary embodiment, the first depth image is acquired by a depth camera; when the depth camera can acquire a first infrared gray scale image, the method for determining the depth information confidence of the image further includes: determining a window matching confidence according to the first infrared gray scale image and the first color image.
When the depth camera can only acquire the first depth image, the relevant confidence information is calculated according to the first depth image and the first color image, and the depth information confidence of the image is determined. When the depth camera can also acquire the first infrared gray scale image, the window matching confidence can be determined according to the first infrared gray scale image and the first color image.
When the depth information confidence of the image is finally determined, the initial depth information confidence and the window matching confidence can be considered at the same time, the initial depth information confidence and the intra-window texture consistency confidence can be considered at the same time, and the initial depth information confidence, the window matching confidence and the intra-window texture consistency confidence can be considered at the same time.
In an exemplary embodiment, determining the depth information confidence of the depth image based on the initial depth information confidence and the texture consistency confidence within the window comprises: and determining the depth information confidence of the depth image according to the initial depth information confidence, the texture consistency confidence in the window and the window matching confidence.
When the depth camera can also acquire the first infrared gray scale image, in order to make the depth information confidence more accurate, the initial depth information confidence, the window matching confidence and the texture consistency confidence in the window are all considered, and the depth information confidence of the depth image is determined from more aspects of confidence.
In an exemplary embodiment, determining the depth information confidence of the depth image based on the initial depth information confidence, the intra-window texture consistency confidence, and the window matching confidence comprises: and taking the sum of the product of the initial depth information confidence coefficient and the first preset coefficient, the product of the texture consistency confidence coefficient in the window and the second preset coefficient, and the product of the window matching confidence coefficient and the fifth preset coefficient as the depth information confidence coefficient of the image.
The initial depth information confidence is denoted C21, the texture consistency confidence in the window is denoted C22, the window matching confidence is denoted C23, the first preset coefficient is denoted w21, the second preset coefficient is denoted w22, and the fifth preset coefficient is denoted w23; the depth information confidence C of the image is then expressed as: C = w21 × C21 + w22 × C22 + w23 × C23.
In an exemplary embodiment, fig. 5 is a flowchart illustrating a method for determining window matching confidence from a first infrared grayscale image and a first color image, according to an exemplary embodiment, as shown in fig. 5, comprising the steps of:
step S501, correcting the first infrared gray level image and the first color image to obtain a second infrared gray level image and a third color image;
step S502, a third window area in the second infrared gray level image and a fourth window area in the third color image are obtained; the third window area and the fourth window area are in a third corresponding relation;
step S503, determining the confidence of the window matching degree according to the third window area and the fourth window area.
In step S501, the first infrared gray scale image and the first color image are corrected by a stereo correction algorithm to obtain the second infrared gray scale image and the third color image. The stereo correction algorithm uses the intrinsic and extrinsic parameters from binocular calibration to transform the two image planes so that corresponding rows become coplanar, which reduces the computational complexity of stereo matching; in practice, two images whose rows are not coplanar are corrected into coplanar row alignment. Because the first infrared gray scale image is acquired by the depth camera and the first color image is acquired by the RGB camera, the stereo correction algorithm places the same pixel point in the first infrared gray scale image and the first color image on the same row of the pixel coordinate system, which facilitates the window matching confidence calculation.
In step S502, after the second infrared gray scale image and the third color image are obtained through the stereo correction algorithm, any window area in the image is selected as the third window area in the second infrared gray scale image, and the fourth window area is selected in the third color image; the fourth window area and the third window area are in the third corresponding relationship. The third correspondence includes that the relative position of the fourth window area in the third color image is the same as the relative position of the third window area in the second infrared gray scale image. When the third window area and the fourth window area are selected, the window areas can be determined through pixel points in the second infrared gray scale image and the third color image.
In step S503, when the third window area is determined by the third pixel point Pt1 in the second infrared gray scale image and the fourth window area is determined by the fourth pixel point Pt2 in the third color image, the window matching confidence C3 is determined from the third window area and the fourth window area by one of two alternative formulas defined over the RGB values of the two windows, where w is the width of the window area, h is the length of the window area, R_Pt1(i, j), G_Pt1(i, j) and B_Pt1(i, j) are the RGB values of the pixel at coordinates (i, j) in the window around the third pixel Pt1, R_Pt2(i, j), G_Pt2(i, j) and B_Pt2(i, j) are the RGB values of the pixel at coordinates (i, j) in the window around the fourth pixel Pt2, and the third pixel point and the fourth pixel point are the pixels at the same position in the second infrared gray scale image and the third color image.
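Since the two formulas themselves are not reproduced above, the sketch below assumes one plausible matching measure, the mean per-channel absolute RGB difference over the w × h window, mapped to [0, 1]:

    import numpy as np

    def window_match_confidence(region_a, region_b):
        # Sketch of step S503: compare the RGB values of the third window area
        # (around Pt1 in the second infrared gray scale image) and the fourth
        # window area (around Pt2 in the third color image).
        a = region_a.astype(np.float32)
        b = region_b.astype(np.float32)
        h, w = a.shape[:2]
        # Assumed measure: 1 minus the mean absolute difference, normalized
        # by the 8-bit value range; higher means better-matched windows.
        return 1.0 - float(np.abs(a - b).sum()) / (w * h * 3 * 255.0)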
In the exemplary embodiment of the disclosure, the window matching confidence is determined through the degree of matching between windows of the second infrared gray scale image and the third color image, and the initial depth information confidence and the window matching confidence are considered at the same time, so that the accuracy of the depth information confidence can be improved.
In an exemplary embodiment, fig. 6 is a flowchart illustrating a method for acquiring a third window area in a second infrared grayscale image and a fourth window area in a third color image according to an exemplary embodiment, as shown in fig. 6, including the steps of:
step S601, selecting a third pixel point from the second infrared gray level image, and selecting a fourth pixel point from the third color image, wherein the third pixel point and the fourth pixel point are in a fourth corresponding relation;
step S602, determining a third window area in the second infrared gray scale image according to the third pixel point, and determining a fourth window area in the third color image according to the fourth pixel point.
Selecting a third pixel point from the second infrared gray level image, wherein the third pixel point can be any pixel point in the image, and selecting a fourth pixel point from the third color image, wherein the fourth pixel point and the third pixel point are in a fourth corresponding relation. The fourth corresponding relation is that the fourth pixel point and the third pixel point are the pixel points corresponding to the same coordinate in the image.
In an example, as shown in fig. 7, the third pixel Pt1 is a pixel with coordinates (8, 8) in the second infrared gray scale image, and the fourth pixel Pt2 is a pixel with coordinates (8, 8) in the third color image.
And determining a third window area in the second infrared gray scale image according to the third pixel point, for example, determining the third window area with the third pixel point as the center point of the window area and the preset window size, wherein the size of the window area is smaller than the size of the image, or determining the third window area with the third pixel point as one vertex of the window area and the preset window size. And determining a fourth window area in the third color image according to the fourth pixel point, wherein the determination mode of the fourth window area is the same as that of the third window area. The fourth window region and the third window region may be square regions or regions of other symmetrical shapes.
In an example, as shown in fig. 7, the third window area and the fourth window area are square areas. With the third pixel point Pt1 as the center point of the third window area and a window size of 8×8, the third window area is denoted as area A, i.e. W1(w1, h1, R_Pt1, G_Pt1, B_Pt1); with the fourth pixel Pt2 as the center point of the fourth window area and a window size of 8×8, the fourth window area is denoted as area B, i.e. W2(w2, h2, R_Pt2, G_Pt2, B_Pt2). Here w1, w2, h1 and h2 are all 8; R_Pt1, G_Pt1 and B_Pt1 are the RGB values of the third pixel Pt1, and R_Pt2, G_Pt2 and B_Pt2 are the RGB values of the fourth pixel Pt2.
In an exemplary embodiment, correcting the first infrared gray scale image and the first color image to obtain a second infrared gray scale image and a third color image includes: correcting the images so that, in the same coordinate system, the coordinates of the target object in the first infrared gray scale image are the same as the coordinates of the target object in the first color image, thereby obtaining the second infrared gray scale image and the third color image.
Because the window matching confidence needs to be calculated from the first infrared gray scale image and the first color image, the coordinates of the target object in the first infrared gray scale image and the first color image need to be the same in the same coordinate system; that is, the first infrared gray scale image and the first color image are corrected.
In an exemplary embodiment of the present disclosure, when a first depth image, a first infrared grayscale image, and a first color image can be simultaneously acquired, an initial depth information confidence is determined based on the first depth image, an intra-window texture consistency confidence is determined based on the first depth image and the first color image, a window matching confidence is determined based on the first infrared grayscale image and the first color image, and a confidence of depth information is determined in combination with different depth information features in the first depth image, the first infrared grayscale image, and the first color image, so that the confidence of depth information of the determined image is more accurate.
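Putting the pieces together, a sketch of the three-term fusion described above; the helper functions are the illustrative ones sketched earlier, and the weight values are placeholders for the trained coefficients w21, w22 and w23:

    def depth_info_confidence(c_initial, c_texture, c_match,
                              w21=0.4, w22=0.3, w23=0.3):
        # C = w21 * C21 + w22 * C22 + w23 * C23 (placeholder weights)
        return w21 * c_initial + w22 * c_texture + w23 * c_match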
In an exemplary embodiment of the present disclosure, there is provided a determining apparatus for depth information confidence of an image, as shown in fig. 8, the determining apparatus including:
a first acquisition module 801 configured to acquire a first depth image;
a second acquisition module 802 configured to acquire a first color image;
a first determining module 803 configured to determine an initial depth information confidence level from the first depth image;
a second determining module 804 configured to determine a texture consistency confidence within a window from the first depth image and the first color image;
a third determining module 805 is configured to determine a depth information confidence level of the depth image based on the initial depth information confidence level and the intra-window texture consistency confidence level.
In an exemplary embodiment, the third determining module 805 is further configured to:
and taking the sum of the product of the initial depth information confidence and the first preset coefficient and the product of the texture consistency confidence in the window and the second preset coefficient as the depth information confidence of the image.
In an exemplary embodiment, the second determining module 804 is further configured to:
correcting the first depth image and the first color image to obtain a second depth image and a second color image;
acquiring a first window area in the second depth image and a second window area in the second color image, wherein the first window area and the second window area are in a first corresponding relationship;
determining a first texture consistency confidence of the first window region;
determining a second texture consistency confidence of the second window region;
and determining the texture consistency confidence in the window according to the first texture consistency confidence and the second texture consistency confidence.
In an exemplary embodiment, the second determining module 804 is further configured to:
selecting a first pixel point from the second depth image, and selecting a second pixel point from the second color image, wherein the first pixel point and the second pixel point are in a second corresponding relation;
and determining the first window area in the second depth image according to the first pixel point, and determining the second window area in the second color image according to the second pixel point.
In an exemplary embodiment, the second determining module 804 is further configured to:
and taking the sum of the product of the first texture consistency confidence and the third preset coefficient and the product of the second texture consistency confidence and the fourth preset coefficient as the texture consistency confidence in the window.
In an exemplary embodiment, the second determining module 804 is further configured to:
and correcting the images so that, in the same coordinate system, the coordinates of the target object in the first depth image are the same as the coordinates of the target object in the first color image, thereby obtaining the second depth image and the second color image.
In an exemplary embodiment, the first depth image is acquired by a depth camera; when the depth camera can acquire a first infrared gray scale image, the determining device further comprises:
a fourth determination module 806 is configured to determine a window matching confidence level from the first infrared grayscale image and the first color image.
In an exemplary embodiment, the third determining module 805 is further configured to:
and determining the depth information confidence of the depth image according to the initial depth information confidence, the texture consistency confidence in the window and the window matching confidence.
In an exemplary embodiment, the third determining module 805 is further configured to:
and taking the sum of the product of the initial depth information confidence and a first preset coefficient, the product of the texture consistency confidence in the window and a second preset coefficient, and the product of the window matching confidence and a fifth preset coefficient as the depth information confidence of the image.
In an exemplary embodiment, the fourth determination module 806 is further configured to:
correcting the first infrared gray scale image and the first color image to obtain a second infrared gray scale image and a third color image;
acquiring a third window area in the second infrared gray scale image and a fourth window area in the third color image; wherein the third window area and the fourth window area are in a third corresponding relationship;
and determining the window matching confidence according to the third window area and the fourth window area.
In an exemplary embodiment, the fourth determination module 806 is further configured to:
selecting a third pixel point from the second infrared gray level image, and selecting a fourth pixel point from the third color image, wherein the third pixel point and the fourth pixel point are in a fourth corresponding relation;
and determining a third window area in the second infrared gray scale image according to the third pixel point, and determining a fourth window area in the third color image according to the fourth pixel point.
In an exemplary embodiment, the fourth determination module 806 is further configured to:
and correcting the images so that, in the same coordinate system, the coordinates of the target object in the first infrared gray scale image are the same as the coordinates of the target object in the first color image, thereby obtaining the second infrared gray scale image and the third color image.
The specific manner in which the various modules perform the operations in the apparatus of the above embodiments have been described in detail in connection with the embodiments of the method, and will not be described in detail herein.
Fig. 9 is a block diagram of a mobile terminal 900 according to an exemplary embodiment, in which the apparatus is implemented as a terminal.
Referring to fig. 9, apparatus 900 may include one or more of the following components: a processing component 902, a memory 904, a power component 906, a multimedia component 908, an audio component 910, an input/output (I/O) interface 912, a sensor component 914, and a communication component 916.
The processing component 902 generally controls overall operations of the apparatus 900, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 902 may include one or more processors 920 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 902 can include one or more modules that facilitate interaction between the processing component 902 and other components. For example, the processing component 902 can include a multimedia module to facilitate interaction between the multimedia component 908 and the processing component 902.
The memory 904 is configured to store various types of data to support operations at the apparatus 900. Examples of such data include instructions for any application or method operating on the device 900, contact data, phonebook data, messages, pictures, videos, and the like. The memory 904 may be implemented by any type of volatile or nonvolatile memory device or combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 906 provides power to the various components of the apparatus 900. Power supply components 906 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for device 900.
The multimedia component 908 comprises a screen between the device 900 and the user that provides an output interface. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 908 includes a front-facing camera and/or a rear-facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the apparatus 900 is in an operational mode, such as a photographing mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 910 is configured to output and/or input audio signals. For example, the audio component 910 includes a Microphone (MIC) configured to receive external audio signals when the device 900 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 904 or transmitted via the communication component 916. In some embodiments, the audio component 910 further includes a speaker for outputting audio signals.
The I/O interface 912 provides an interface between the processing component 902 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 914 includes one or more sensors for providing status assessment of various aspects of the apparatus 900. For example, the sensor assembly 914 may detect the on/off state of the device 900 and the relative positioning of components, such as the display and keypad of the device 900; the sensor assembly 914 may also detect a change in position of the device 900 or of one component of the device 900, the presence or absence of user contact with the device 900, the orientation or acceleration/deceleration of the device 900, and a change in temperature of the device 900. The sensor assembly 914 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 914 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 914 may also include an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 916 is configured to facilitate communication between the apparatus 900 and other devices in a wired or wireless manner. The device 900 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 916 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 916 further includes a Near Field Communication (NFC) module to facilitate short range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 900 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as the memory 904 including instructions executable by the processor 920 of the apparatus 900 to perform the above-described method. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
A non-transitory computer readable storage medium stores instructions that, when executed by a processor of an apparatus, cause the apparatus to perform a method of determining a depth information confidence of an image, including any of the methods of determining a depth information confidence of an image described above.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It is to be understood that the invention is not limited to the precise arrangements and instrumentalities described above and shown in the drawings, and that various modifications and changes may be made without departing from its scope. The scope of the invention is limited only by the appended claims.
Industrial applicability
Determining the depth information confidence of an image from both the initial depth information confidence and the intra-window texture consistency confidence allows depth information features from different images to be considered jointly, so that the depth information confidence of the image can be determined accurately.
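For illustration only, this fusion can be read as a per-pixel weighted sum of confidence maps. The following sketch is not the claimed implementation: the function names, the default weight values, and the clipping of the result to [0, 1] are assumptions introduced here for clarity.

    import numpy as np
    from typing import Optional

    def fuse_confidence(initial_conf: np.ndarray,
                        texture_conf: np.ndarray,
                        matching_conf: Optional[np.ndarray] = None,
                        w1: float = 0.4, w2: float = 0.3,
                        w5: float = 0.3) -> np.ndarray:
        # Per-pixel depth information confidence as a weighted sum.
        #   initial_conf  - initial depth information confidence (from the depth image)
        #   texture_conf  - intra-window texture consistency confidence
        #   matching_conf - optional window matching confidence (infrared branch)
        #   w1, w2, w5    - first, second, and fifth preset coefficients; the default
        #                   values here are placeholders, not from the disclosure
        conf = w1 * initial_conf + w2 * texture_conf
        if matching_conf is not None:
            conf = conf + w5 * matching_conf
        return np.clip(conf, 0.0, 1.0)  # clipping to [0, 1] is an added assumption

A companion sketch of the intra-window texture consistency and window matching computations is given after the claims.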

Claims (15)

  1. A method for determining a depth information confidence of an image, the method comprising:
    acquiring a first depth image;
    acquiring a first color image;
    determining an initial depth information confidence according to the first depth image;
    determining an intra-window texture consistency confidence according to the first depth image and the first color image;
    and determining the depth information confidence of the depth image according to the initial depth information confidence and the intra-window texture consistency confidence.
  2. The method for determining a depth information confidence of an image according to claim 1, wherein the determining the depth information confidence of the depth image according to the initial depth information confidence and the intra-window texture consistency confidence comprises:
    taking, as the depth information confidence of the image, a sum of a product of the initial depth information confidence and a first preset coefficient and a product of the intra-window texture consistency confidence and a second preset coefficient.
  3. The method for determining a depth information confidence of an image according to claim 1, wherein the determining an intra-window texture consistency confidence according to the first depth image and the first color image comprises:
    correcting the first depth image and the first color image to obtain a second depth image and a second color image;
    acquiring a first window area in the second depth image and a second window area in the second color image, wherein the first window area and the second window area are in a first corresponding relationship;
    determining a first texture consistency confidence of the first window area;
    determining a second texture consistency confidence of the second window area;
    and determining the intra-window texture consistency confidence according to the first texture consistency confidence and the second texture consistency confidence.
  4. The method for determining a depth information confidence of an image according to claim 3, wherein the acquiring a first window area in the second depth image and a second window area in the second color image comprises:
    selecting a first pixel point from the second depth image, and selecting a second pixel point from the second color image, wherein the first pixel point and the second pixel point are in a second corresponding relationship;
    and determining the first window area in the second depth image according to the first pixel point, and determining the second window area in the second color image according to the second pixel point.
  5. The method for determining a depth information confidence of an image according to claim 3, wherein the determining the intra-window texture consistency confidence according to the first texture consistency confidence and the second texture consistency confidence comprises:
    taking, as the intra-window texture consistency confidence, a sum of a product of the first texture consistency confidence and a third preset coefficient and a product of the second texture consistency confidence and a fourth preset coefficient.
  6. The method for determining a depth information confidence of an image according to claim 3, wherein the correcting the first depth image and the first color image to obtain a second depth image and a second color image comprises:
    transforming the first depth image and the first color image into a same coordinate system, such that coordinates of a target object in the first depth image are the same as coordinates of the target object in the first color image, to obtain the second depth image and the second color image.
  7. The method for determining a depth information confidence of an image according to any one of claims 1 to 6, wherein the first depth image is acquired by a depth camera; and when the depth camera is capable of acquiring a first infrared grayscale image, the method further comprises:
    determining a window matching confidence according to the first infrared grayscale image and the first color image.
  8. The method for determining a depth information confidence of an image according to claim 7, wherein the determining the depth information confidence of the depth image according to the initial depth information confidence and the intra-window texture consistency confidence comprises:
    determining the depth information confidence of the depth image according to the initial depth information confidence, the intra-window texture consistency confidence, and the window matching confidence.
  9. The method for determining a depth information confidence of an image according to claim 8, wherein the determining the depth information confidence of the depth image according to the initial depth information confidence, the intra-window texture consistency confidence, and the window matching confidence comprises:
    taking, as the depth information confidence of the image, a sum of a product of the initial depth information confidence and a first preset coefficient, a product of the intra-window texture consistency confidence and a second preset coefficient, and a product of the window matching confidence and a fifth preset coefficient.
  10. The method for determining a depth information confidence of an image according to claim 8, wherein the determining a window matching confidence according to the first infrared grayscale image and the first color image comprises:
    correcting the first infrared grayscale image and the first color image to obtain a second infrared grayscale image and a third color image;
    acquiring a third window area in the second infrared grayscale image and a fourth window area in the third color image, wherein the third window area and the fourth window area are in a third corresponding relationship;
    and determining the window matching confidence according to the third window area and the fourth window area.
  11. The method for determining a depth information confidence of an image according to claim 10, wherein the acquiring a third window area in the second infrared grayscale image and a fourth window area in the third color image comprises:
    selecting a third pixel point from the second infrared grayscale image, and selecting a fourth pixel point from the third color image, wherein the third pixel point and the fourth pixel point are in a fourth corresponding relationship;
    and determining the third window area in the second infrared grayscale image according to the third pixel point, and determining the fourth window area in the third color image according to the fourth pixel point.
  12. The method for determining a depth information confidence of an image according to claim 10, wherein the correcting the first infrared grayscale image and the first color image to obtain a second infrared grayscale image and a third color image comprises:
    transforming the first infrared grayscale image and the first color image into a same coordinate system, such that coordinates of a target object in the first infrared grayscale image are the same as coordinates of the target object in the first color image, to obtain the second infrared grayscale image and the third color image.
  13. A device for determining a depth information confidence of an image, the device comprising:
    a first acquisition module configured to acquire a first depth image;
    a second acquisition module configured to acquire a first color image;
    a first determination module configured to determine an initial depth information confidence level from the first depth image;
    a second determination module configured to determine an intra-window texture consistency confidence according to the first depth image and the first color image;
    and a third determination module configured to determine the depth information confidence of the depth image according to the initial depth information confidence and the intra-window texture consistency confidence.
  14. A mobile terminal, comprising:
    a processor;
    a memory for storing processor-executable instructions;
    wherein the processor is configured to perform the method of any of claims 1-12.
  15. A non-transitory computer readable storage medium having stored thereon instructions that, when executed by a processor of an apparatus, cause the apparatus to perform the method of any of claims 1-12.
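For illustration only (the following sketch is not part of the claims), one plausible reading of the intra-window texture consistency computation of claims 3 to 5 and the window matching computation of claim 10 is sketched below. The variance-based texture measure, the normalized cross-correlation matching measure, the window size, and all names are assumptions introduced here; the input images are assumed to have already been corrected into the same coordinate system, as in claims 6 and 12.

    import numpy as np

    def window_texture_confidence(window: np.ndarray) -> float:
        # Texture confidence of a single window; the 1 / (1 + variance) mapping
        # (high confidence for flat, low-texture windows) is an illustrative choice.
        return 1.0 / (1.0 + float(np.var(window)))

    def intra_window_texture_consistency(depth_img: np.ndarray,
                                         color_gray: np.ndarray,
                                         row: int, col: int, half: int = 2,
                                         w3: float = 0.5, w4: float = 0.5) -> float:
        # First and second window areas around corresponding pixel points;
        # image borders are ignored here for brevity.
        first_window = depth_img[row - half:row + half + 1, col - half:col + half + 1]
        second_window = color_gray[row - half:row + half + 1, col - half:col + half + 1]
        c_first = window_texture_confidence(first_window)    # first texture consistency confidence
        c_second = window_texture_confidence(second_window)  # second texture consistency confidence
        # Weighted sum with the third (w3) and fourth (w4) preset coefficients, as in claim 5.
        return w3 * c_first + w4 * c_second

    def window_matching_confidence(ir_window: np.ndarray,
                                   color_window: np.ndarray) -> float:
        # Window matching confidence between the third (infrared) and fourth (color)
        # window areas; normalized cross-correlation is an illustrative choice.
        a = ir_window - ir_window.mean()
        b = color_window - color_window.mean()
        denom = float(np.sqrt((a * a).sum() * (b * b).sum())) + 1e-12
        return float((a * b).sum() / denom)

Negative correlation values would indicate mismatched windows; how such values map to a confidence is left open here, since the disclosure does not specify a particular matching measure.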