WO2021087812A1 - Method for determining depth value of image, image processor and module - Google Patents
- Publication number
- WO2021087812A1 (PCT application PCT/CN2019/116021; also published as CN2019116021W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords: image, depth, value, depth value, accuracy
Classifications
- G06T3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting (under G06T3/00, geometric image transformations in the plane of the image)
- G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction (under G06T5/00, image enhancement or restoration)
- G06T7/33: Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods (under G06T7/00, image analysis)
- G06T7/593: Depth or shape recovery from stereo images (under G06T7/50, depth or shape recovery, and G06T7/55, recovery from multiple images)
All classifications fall under G (Physics) > G06 (Computing; calculating or counting) > G06T (Image data processing or generation, in general).
Definitions
- the embodiments of the present application relate to depth image synthesis technology, and in particular, to a method for determining the depth value of an image, an image processor, and a module.
- the Time of Flight (TOF) module includes a modulated light transmitter, a modulated light receiver, and a depth image synthesis processor.
- the modulated light transmitter continuously sends modulated light signals to the target, and the receiver receives the modulated light signal returned from the object;
- the distance of the target object is obtained by detecting the round-trip time of the modulated light signal in the depth image synthesis processor, thereby obtaining the depth signal of the object.
- the TOF module has good depth perception characteristics for short-to-medium-distance objects, but it cannot obtain accurate depth values for long-distance and ultra-long-distance objects, and when the TOF module is affected by external noise and ambient light, its depth image imaging accuracy is greatly reduced.
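The round-trip time measurement described above implies that distance is half the travel time multiplied by the speed of light. The sketch below is only an illustration of that principle; the function name and units are assumptions, not from the patent.

```python
# Minimal sketch of the time-of-flight principle: distance is half the
# round-trip time of the modulated light signal times the speed of light.
SPEED_OF_LIGHT_M_S = 299_792_458.0

def tof_distance_m(round_trip_time_s: float) -> float:
    """Depth value (metres) from a measured round-trip time (seconds)."""
    return SPEED_OF_LIGHT_M_S * round_trip_time_s / 2.0
```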
- the binocular vision module imitates the characteristics of the human eye, by matching the feature points of the object in the left and right images, using the parallax caused by the dual viewpoint imaging of the same object, and obtaining the depth signal of the object through geometric calculation.
- inaccurate matching of the object's feature points is the biggest obstacle to the binocular vision module obtaining high-precision depth values.
- in summary, the TOF module is affected by ambient light or noise, the accuracy of the depth image it obtains is not high, and it is only suitable for short and medium distances, while the binocular vision module suffers from inaccurate matching of feature points, resulting in low-accuracy depth images. In other words, the accuracy of the depth image obtained by an existing TOF module or binocular vision module is low.
- the embodiments of the present application provide a method, an image processor, and a module for determining the depth value of an image, which can improve the accuracy of the depth value of the image.
- an embodiment of the present application provides a method for determining the depth value of an image, including:
- for the object in the target image, the first depth image is obtained from the TOF module and the second depth image is obtained from the binocular vision module; wherein the first depth image and the second depth image each further include an accuracy value corresponding to their respective depth values, where the accuracy value is used to characterize the accuracy of the corresponding depth value;
- according to the depth value of the first depth image, the accuracy value corresponding to the depth value of the first depth image, the depth value of the second depth image, and the accuracy value corresponding to the depth value of the second depth image, performing a weighted calculation using a preset weighting algorithm to obtain the depth value of the target image.
- an embodiment of the present application provides a method for determining the depth value of an image, including:
- according to the depth value of the target image, finding the accuracy value corresponding to the depth value of the target image from the preset corresponding relationship between depth values and accuracy values;
- the first depth image is sent to an image processor, so that the image processor determines the depth value of the target image.
- an embodiment of the present application provides a method for determining the depth value of an image, including:
- the feature points of the first vision image in the binocular vision image and the feature points of the second vision image in the binocular vision image are respectively determined;
- the accuracy value corresponding to the matching result of the target image is found from the preset corresponding relationship between the matching result and the accuracy value;
- the second depth image is sent to an image processor, so that the image processor determines the depth value of the target image.
- an image processor including:
- the first acquisition module is configured to acquire, for the object in the target image, a first depth image from the TOF module and a second depth image from the binocular vision module; wherein the first depth image and the second depth image also respectively include accuracy values corresponding to their respective depth values, and the accuracy values are used to characterize the accuracy of the corresponding depth values;
- a weighting module configured to be based on the depth value of the first depth image, the accuracy value corresponding to the depth value of the first depth image, the depth value of the second depth image, and the depth value of the second depth image The corresponding accuracy value is weighted and calculated using a preset weighting algorithm to obtain the depth value of the target image.
- an embodiment of the present application provides a TOF module, including:
- the second acquiring module is configured to acquire the depth value of the target image
- the first searching module is configured to find the accuracy value corresponding to the depth value of the target image from the preset corresponding relationship between the depth value and the accuracy value according to the depth value of the target image;
- a first forming module configured to use the depth value of the target image and the accuracy value corresponding to the depth value of the target image to form a first depth image
- the first sending module is configured to send the first depth image to an image processor, so that the image processor determines the depth value of the target image.
- an embodiment of the present application provides a binocular vision module, including:
- the determining module is configured to, after acquiring the depth value of the target image and the binocular vision image of the target image, respectively determine the feature points of the first vision image in the binocular vision image and the feature points of the second vision image in the binocular vision image;
- a matching module configured to match the feature points of the first visual image with the feature points of the second visual image to obtain a matching result of the target image
- the second searching module is configured to find the accuracy value corresponding to the matching result of the target image from the preset corresponding relationship between the matching result and the accuracy value according to the matching result of the target image;
- a second forming module configured to use the depth value of the target image and the accuracy value corresponding to the depth value of the target image to form a second depth image
- the second sending module is configured to send the second depth image to an image processor, so that the image processor determines the depth value of the target image.
- an embodiment of the present application provides an image processor, and the image processor includes:
- An example is the method for determining the depth value of the image executed by the image processor.
- an embodiment of the present application provides a TOF module, and the TOF module includes:
- An example is the method for determining the depth value of the image executed by the TOF module.
- an embodiment of the present application provides a binocular vision module, and the binocular vision module includes:
- An example is the method for determining the depth value of the image executed by the binocular vision module.
- an embodiment of the present application provides a computer-readable storage medium in which executable instructions are stored; when the executable instructions are executed by one or more processors, the processors execute the method for determining the depth value of the image executed by the image processor in the above embodiments, or the method for determining the depth value of the image executed by the TOF module, or the method for determining the depth value of the image executed by the binocular vision module.
- the embodiment of the application provides a method for determining the depth value of an image, an image processor, and a module. For the object in the target image, a first depth image is obtained from the TOF module, and a second depth image is obtained from the binocular vision module.
- the first depth image and the second depth image also include accuracy values corresponding to their respective depth values, and the accuracy values are used to characterize the accuracy of the corresponding depth values; according to the preset resolution of the target image, a preset weighting algorithm is used to perform a weighted calculation to obtain the depth value of the target image. That is to say, in the embodiment of the present application, the first depth image and the second depth image are obtained from the TOF module and the binocular vision module, and then the depth value of the first depth image with its corresponding accuracy value and the depth value of the second depth image with its corresponding accuracy value are weighted together, so that in the process of determining the depth value of the target image, the accuracy of the depth value of the first depth image and the accuracy of the depth value of the second depth image are both taken into account, making the determined depth value of the target image more accurate.
- FIG. 1 is a schematic structural diagram of an optional image processing system provided by an embodiment of the application.
- FIG. 2 is a schematic diagram of flow interaction of an optional method for determining a depth value of an image provided by an embodiment of the application;
- FIG. 3 is a schematic flowchart of an example of an optional method for determining a depth value of an image provided by an embodiment of the application;
- FIG. 4 is a schematic flowchart of an optional method for determining the depth value of an image provided by an embodiment of the application
- FIG. 5 is a schematic flowchart of another optional method for determining the depth value of an image according to an embodiment of the application.
- FIG. 6 is a schematic flowchart of yet another optional method for determining the depth value of an image provided by an embodiment of the application;
- FIG. 7 is a schematic structural diagram of an optional image processor provided by an embodiment of the application.
- FIG. 8 is a schematic structural diagram of an optional TOF module provided by an embodiment of the application.
- FIG. 9 is a schematic structural diagram of an optional binocular vision module provided by an embodiment of the application.
- FIG. 10 is a schematic structural diagram of another optional image processor provided by an embodiment of this application.
- FIG. 11 is a schematic structural diagram of another optional TOF module provided by an embodiment of the application.
- FIG. 12 is a schematic structural diagram of another optional binocular vision module provided by an embodiment of the application.
- FIG. 1 is a schematic structural diagram of an optional image processing system provided by an embodiment of the application, as shown in FIG. 1.
- the image processing system includes a TOF module 11, a binocular vision module 12, and an image processor 13.
- a communication connection is established between the TOF module 11 and the image processor 13, and the binocular vision module 12 is connected to the A communication connection is established between the image processors 13.
- the TOF module 11 can obtain the distance between itself and the object in the target image, that is, the depth value, by detecting the round-trip time of the modulated light signal. This method has good depth perception characteristics for short-to-medium-distance objects, but it cannot obtain an accurate depth value for long-distance or ultra-long-distance objects, and the TOF module 11 is easily affected by external noise and ambient light, which affects the accuracy of the depth value.
- the binocular vision module 12 can match the feature points of the objects in the left and right images and, using the parallax caused by dual-viewpoint imaging of the same object, obtain the distance between itself and the object in the target image, that is, the depth value, through geometric calculation. In practical applications, however, if the feature point matching is not accurate, the accuracy of the depth value will be affected.
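The geometric calculation mentioned above can be sketched with the standard rectified-stereo relation, where depth equals focal length times baseline divided by disparity. This is a general stereo-vision identity, not a formula quoted from the patent; the parameter names are assumptions.

```python
# Sketch of the parallax-based geometric calculation: for a rectified
# stereo pair, depth = focal_length (pixels) * baseline (metres) / disparity (pixels).
def stereo_depth_m(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth value (metres) of a point from its stereo disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

A larger disparity (nearer object) yields a smaller depth, matching the parallax behaviour the passage describes.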
- the above-mentioned image processor may be used to collect depth values from the TOF module 11 and the binocular vision module 12 respectively, and process the multiple collected depth values to determine a more accurate depth value of the target image.
- an embodiment of the present application provides a method for determining the depth value of an image.
- the method is applied to the above-mentioned image processing system.
- FIG. 2 is a schematic diagram of the flow interaction of an optional method for determining the depth value of an image provided by an embodiment of the application. As shown in FIG. 2, the method may include:
- the TOF module 11 determines the first depth image of the target image
- the TOF module 11 is used to determine the first depth image of the object in the target image;
- the binocular vision module 12 is used to determine the second depth image of the target image
- the image processor 13 can combine the first depth image and the second depth image to determine the depth value of the target image .
- S201, in which the TOF module 11 determines the first depth image of the target image, may include:
- the TOF module 11 obtains the depth value of the target image
- the TOF module 11 obtains the distance between itself and the object in the target image by detecting the round-trip time of the modulated light signal, that is, the depth value of the target image can be obtained.
- the depth value of the target image is the depth value of each pixel position of the target image.
- the TOF module 11 finds the accuracy value corresponding to the depth value of the target image from the preset corresponding relationship between the depth value and the accuracy value according to the depth value of the target image;
- after the TOF module 11 obtains the depth value of the target image in S2011, since the corresponding relationship between preset depth values and accuracy values is pre-stored in the TOF module 11, the accuracy value corresponding to the depth value of the target image can be found from the corresponding relationship once the depth value is determined.
- the effective distance of the TOF module 11 is used for judgment, and the travel time of the signal is calculated from the transmission and reception time points of the laser modulation signal to obtain the distance between the TOF module 11 and the object in the target image, that is, the depth value.
- within the effective distance, the accuracy can be set to the highest, or it can decrease gradually from near to far, among other schemes; outside the effective distance, the accuracy is the lowest.
- the range of the accuracy value can be between 0 and 1. In this way, the corresponding relationship between the depth value and the accuracy value is established to determine the accuracy value.
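As one hedged illustration of such a depth-to-accuracy correspondence, the sketch below assigns accuracy 1.0 at zero distance, decreasing linearly to a floor at the edge of the effective distance and staying at that floor beyond it. The effective distance and the floor value are illustrative assumptions, not values from the patent.

```python
import numpy as np

# Hypothetical mapping from TOF depth values (metres) to accuracy values
# in [0, 1]. EFFECTIVE_DISTANCE_M and MIN_ACCURACY are assumed constants.
EFFECTIVE_DISTANCE_M = 4.0   # assumed effective range of the TOF module
MIN_ACCURACY = 0.1           # assumed lowest accuracy, used beyond that range

def tof_accuracy(depth_m: np.ndarray) -> np.ndarray:
    """Accuracy falls linearly from 1.0 at 0 m to MIN_ACCURACY at the
    effective distance, and stays at MIN_ACCURACY beyond it."""
    slope = (1.0 - MIN_ACCURACY) / EFFECTIVE_DISTANCE_M
    acc = 1.0 - slope * np.asarray(depth_m, dtype=float)
    return np.clip(acc, MIN_ACCURACY, 1.0)
```

Other schemes the passage permits, such as uniform highest accuracy inside the effective distance, would only change the shape of this mapping, not the lookup idea.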
- the TOF module 11 uses the depth value of the target image and the accuracy value corresponding to the depth value of the target image to form a first depth image.
- the accuracy value is used to characterize the accuracy of the depth value.
- the first depth image determined by the TOF module 11 not only includes the depth value of the target image, but also includes the accuracy value corresponding to the depth value of the target image.
- the meaning of the accuracy value is the credibility of the depth value corresponding to that region.
- the TOF module 11 sends the first depth image to the image processor
- the TOF module 11 sends the first depth image including the depth value and the accuracy value to the image processor, so that the image processor 13 can obtain the depth value and the accuracy information of the depth value collected from the TOF module 11, which helps the image processor 13 to more accurately determine the depth value of the target image.
- the binocular vision module 12 determines the second depth image of the target image
- S203, in which the binocular vision module 12 determines the second depth image of the target image, may include:
- after the binocular vision module 12 obtains the depth value of the target image and the binocular vision image of the target image, it respectively determines the feature points of the first vision image in the binocular vision image and the feature points of the second vision image in the binocular vision image;
- the binocular vision module 12 matches the feature points of the objects in the left and right images and uses the parallax caused by dual-viewpoint imaging of the same object to obtain, through geometric calculation, the distance between itself and the object in the target image, that is, the depth value of the target image.
- after the binocular vision module 12 obtains the depth value of the target image, it needs to determine the feature points of the first visual image and the feature points of the second visual image in the binocular image. Feature points reflect the essential characteristics of an image and can identify the target object in the image; the matching of images can be completed through the matching of feature points.
- the binocular vision module 12 matches the feature points of the first vision image with the feature points of the second vision image to obtain a matching result of the target image;
- the binocular vision module 12 matches the feature points of the first vision image with the feature points of the second vision image, and the obtained matching results include matching success and matching failure.
- a successful match indicates that the feature points of the first vision image and the second vision image are the same or similar, which can be represented by the degree of matching; in practical applications, the degree of matching may be the sum of the absolute values of the differences of the same feature point.
- Matching failure means that the feature points of the first visual image and the second visual image are completely different, that is, there is no feature point that can be matched.
- the binocular vision module 12 finds the accuracy value corresponding to the matching result of the target image from the preset corresponding relationship between the matching result and the accuracy value;
- after the binocular vision module 12 obtains the matching result of the target image in S2032, since the preset correspondence between matching results and accuracy values is pre-stored in the binocular vision module 12, the accuracy value corresponding to the determined matching result can be found from the corresponding relationship once the matching result is determined.
- pixel-level traversal is performed with a specified step length, and the feature points of the two images are searched for matching. If there is no feature point that can be matched in a certain area, the accuracy at that position is the lowest; if there are feature points that can be matched, the accuracy is proportional to the matching degree: the higher the matching degree, the higher the accuracy.
- the matching degree can be represented by the sum of the absolute values of the differences of the same feature points in the two images, or in other forms. For example, the range of the accuracy value can be between 0 and 1. In this way, the corresponding relationship between the matching degree and the accuracy value is established to determine the accuracy value.
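The sum-of-absolute-differences idea above can be sketched as follows. This is only an illustration: the patch-based comparison, the normalisation constant, and the linear mapping to [0, 1] are assumptions, not details from the patent.

```python
import numpy as np

# Hypothetical sketch: compare a patch around a feature point in the left
# image with a candidate patch in the right image using the sum of absolute
# differences (SAD), then map the per-pixel SAD to an accuracy in [0, 1].
SAD_WORST = 255.0  # assumed per-pixel SAD at which accuracy reaches 0

def sad(patch_left: np.ndarray, patch_right: np.ndarray) -> float:
    """Sum of absolute differences between two equally sized patches."""
    return float(np.abs(patch_left.astype(int) - patch_right.astype(int)).sum())

def match_accuracy(patch_left: np.ndarray, patch_right: np.ndarray) -> float:
    """Accuracy is 1.0 for a perfect match and decreases as SAD grows."""
    per_pixel = sad(patch_left, patch_right) / patch_left.size
    return max(0.0, 1.0 - per_pixel / SAD_WORST)
```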
- the binocular vision module 12 uses the depth value of the target image and the accuracy value corresponding to the matching result of the target image to form a second depth image.
- the second depth image determined by the binocular vision module 12 not only includes the depth value of the target image, but also includes the accuracy value corresponding to the matching result of the target image.
- the accuracy value of the first depth image and the accuracy value of the second depth image may be different in size and in different units, and may be pixel level, block level, row level or frame level.
- the binocular vision module 12 sends the second depth image to the image processor 13;
- the binocular vision module 12 sends the second depth image including the depth value and the accuracy value to the image processor 13, so that the image processor 13 can obtain the depth value and the accuracy information of the depth value collected from the binocular vision module 12, which helps the image processor 13 to more accurately determine the depth value of the target image.
- the process of determining the first depth image by the TOF module 11 and the process of determining the second depth image by the binocular vision module 12 may be executed in parallel or sequentially.
- the process of the TOF module 11 determining the first depth image may precede the process of the binocular vision module 12 determining the second depth image, or the process of the binocular vision module 12 determining the second depth image may precede the process of the TOF module 11 determining the first depth image, which is not specifically limited in the embodiment of the present application.
- S205 The image processor 13 determines the depth value of the target image.
- S205 may include:
- the image processor 13, according to the depth value of the first depth image, the accuracy value corresponding to the depth value of the first depth image, the depth value of the second depth image, and the accuracy value corresponding to the depth value of the second depth image, performs a weighted calculation using a preset weighting algorithm to obtain the depth value of the target image.
- S205 may include:
- the image processor 13 respectively performs scaling processing on the first depth image and the second depth image according to the preset resolution of the target image, to obtain the processed first depth image and the processed second depth image;
- S2052: the image processor 13, according to the depth value of the processed first depth image, the accuracy value corresponding to the depth value of the processed first depth image, the depth value of the processed second depth image, and the accuracy value corresponding to the depth value of the processed second depth image, performs a weighted calculation using a preset weighting algorithm to obtain the depth value of the target image.
- since the resolutions of the first depth image and the second depth image may differ from the preset resolution of the target image, the first depth image and the second depth image need to be scaled so that the resolution of the first depth image and the resolution of the second depth image are the same as the preset resolution of the target image, thereby obtaining the processed first depth image and the processed second depth image.
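The scaling step can be sketched as a simple resampling of a depth map (and its accuracy map) to the target resolution. The patent does not specify a resampling method; nearest-neighbour interpolation is used here purely as an assumed, minimal choice.

```python
import numpy as np

# Illustrative sketch only: scale a single-channel depth (or accuracy)
# image to the preset target resolution using nearest-neighbour sampling.
def resize_nearest(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Resample a 2-D array to (out_h, out_w) by nearest-neighbour lookup."""
    in_h, in_w = img.shape
    rows = np.arange(out_h) * in_h // out_h  # source row for each output row
    cols = np.arange(out_w) * in_w // out_w  # source column for each output column
    return img[rows[:, None], cols]
```

The same function would be applied to both the first and second depth images, and to their accuracy maps, so that all four arrays share the target resolution before weighting.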
- the depth value of the processed first depth image, the accuracy value corresponding to the depth value of the processed first depth image, the depth value of the processed second depth image, and the accuracy value corresponding to the depth value of the processed second depth image are input into a preset weighting algorithm for calculation, and the depth value of the target image can then be obtained.
- the depth value of the first depth image generated by the TOF module 11 is represented by P_TOF
- the accuracy value of the first depth image is represented by A_TOF
- the depth value of the second depth image generated by the binocular vision module 12 is represented by P_BV
- the accuracy value of the second depth image is represented by A_BV.
- the depth value TMP_{i,j} of the target image generated by the image processor 13 at the i-th row and j-th column is calculated according to formula (1), where the parameters are defined as follows:
- K is the pixel compensation value of the accuracy value of the TOF module 11
- T is the pixel compensation value of the accuracy value of the binocular vision module 12
- K and T can be set according to actual debugging
- F is the overall compensation value
- a and b are trustworthiness values, which can be set to different values according to the depth values at different distances.
- the depth value of the processed first depth image, the accuracy value of the processed first depth image, the depth value of the processed second depth image, and the accuracy value of the processed second depth image are substituted into the above formula (1) to calculate the depth value of the target image.
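The text of formula (1) itself is not reproduced above, so the sketch below is only one plausible reading of an accuracy-weighted combination that uses the listed parameters; it is an assumption, not the patent's actual formula. K and T are pixel compensation values for the two accuracy maps, F is the overall compensation value, and a and b are trustworthiness values.

```python
import numpy as np

# Hypothetical per-pixel fusion consistent with the listed parameters.
# All numeric defaults are illustrative assumptions, not patent values.
def fuse_depth(p_tof, a_tof, p_bv, a_bv, K=0.0, T=0.0, F=0.0, a=1.0, b=1.0):
    """Combine the TOF and binocular depth maps, weighting each pixel by
    its (compensated, trust-scaled) accuracy value."""
    w_tof = a * (np.asarray(a_tof, dtype=float) + K)
    w_bv = b * (np.asarray(a_bv, dtype=float) + T)
    total = w_tof + w_bv
    total = np.where(total == 0, 1.0, total)  # guard against zero weight
    return (w_tof * np.asarray(p_tof, float) + w_bv * np.asarray(p_bv, float)) / total + F
```

Under this reading, a pixel where A_TOF dominates follows P_TOF, and a pixel where A_BV dominates follows P_BV, which matches the stated goal of accounting for both accuracy maps.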
- the image processor 13 may perform filtering processing on the depth images after the scaling process, or may perform filtering processing on the depth value of the target image after it has been determined; the embodiment of the application does not specifically restrict this.
- S2051 may include:
- the image processor 13 respectively performs scaling processing on the first depth image and the second depth image according to the preset resolution of the target image, to obtain the scaled first depth image and the scaled second depth image;
- the image processor 13 respectively performs filtering processing on the scaled first depth image and the scaled second depth image to obtain the processed first depth image and the processed second depth image.
- the depth image is filtered after the scaling process.
- the image processor 13 performs the scaling process on the first depth image and the second depth image, and then respectively performs filtering processing on the scaled first depth image and the scaled second depth image; the filtering may be Gaussian filtering, smoothing filtering, or filtering with configurable coefficients, which is not specifically limited in the embodiment of the present application.
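As a hedged illustration of the smoothing-filter option, the sketch below applies a 3x3 mean filter to a depth image. The patent allows Gaussian filtering, smoothing filtering, or filtering with configurable coefficients; the uniform 3x3 kernel here is just one assumed configuration.

```python
import numpy as np

# Illustrative smoothing filter: a 3x3 mean filter with edge replication,
# one possible instance of the "smoothing filtering" the text mentions.
def smooth_3x3(depth: np.ndarray) -> np.ndarray:
    """Replace each pixel with the mean of its 3x3 neighbourhood."""
    padded = np.pad(depth.astype(float), 1, mode="edge")
    out = np.zeros(depth.shape, dtype=float)
    for di in range(3):
        for dj in range(3):
            out += padded[di:di + depth.shape[0], dj:dj + depth.shape[1]]
    return out / 9.0
```

A Gaussian or configurable-coefficient variant would only replace the uniform 1/9 weights with a different kernel.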
- the method may further include:
- the image processor 13 performs filtering processing on the depth value of the target image to obtain the depth value after the filtering processing of the target image.
- the depth value of the target image is filtered.
- after the image processor 13 determines the depth value of the target image, it performs filtering processing on the depth value of the target image to obtain the filtered depth value, which can improve the accuracy of the depth value; the filtering may be Gaussian filtering, smoothing filtering, or filtering with configurable coefficients, which is not specifically limited in the embodiment of the present application.
- Example 1: First, the image processor scales the first depth image and the second depth image obtained from the TOF module and the binocular vision module to generate a new first depth image and a new second depth image with the same resolution as the final output target image. The depth value of the new first depth image is P_TOF and its corresponding accuracy value is A_TOF; the depth value of the new second depth image is P_BV and its corresponding accuracy value is A_BV. The depth value of the target image is calculated using the above formula (1).
- the image processor performs Gaussian, smoothing or configurable coefficient filtering on the depth value of the target image to obtain the final filtered depth value.
- Example 2: The image processor scales the first depth image and the second depth image obtained from the TOF module and the binocular vision module to the same resolution as the final output target image, generating a new first depth image and a new second depth image. The image processor then performs smoothing or configurable-coefficient filtering on the new first depth image and the new second depth image respectively, obtaining the depth value P_TOF of the new first depth image with its corresponding accuracy value A_TOF, and the depth value P_BV of the new second depth image with its corresponding accuracy value A_BV. With the row and column coordinates set as i and j, the depth value of the target image is calculated using the above formula (1).
- Example 3: First, the image processor scales the first depth image obtained from the TOF module and the second depth image obtained from the binocular vision module, generating a new first depth image and a new second depth image at the same resolution as the final output target image.
- the image processor performs smoothing or configurable-coefficient filtering on the new first depth image and the new second depth image respectively, obtaining the depth value P_TOF of the new first depth image and its corresponding accuracy value A_TOF, as well as the depth value P_BV of the new second depth image and its corresponding accuracy value A_BV; with the row and column coordinates set as i and j, the depth value of the target image is calculated using the above formula (1).
- finally, the image processor performs Gaussian, smoothing, or configurable-coefficient filtering on the depth value of the target image to obtain the final filtered depth value.
- FIG. 3 is a schematic flowchart of an example of an optional method for determining the depth value of an image provided by an embodiment of the application.
- object A is a close-range object
- object B is a long-range object.
- for object A, the depth judgments of the TOF module and the binocular vision module are reconciled with the TOF module taken as the reference, and the depth of object B in the binocular vision module is corrected from b1 to b2; in the final synthesis, the depth value of the TOF module is adopted for the near object, and the depth value of the binocular vision module is adopted for the distant object.
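The near/far selection rule in this example might be sketched as follows; the effective-range threshold and the function name are hypothetical, since the application gives no numeric cutoff:

```python
def select_depth(p_tof, p_bv, distance, tof_effective_range):
    """Final-synthesis rule from the example above: within the TOF
    module's effective range the TOF depth value is adopted, beyond
    it the binocular vision depth value is adopted."""
    return p_tof if distance <= tof_effective_range else p_bv
```

With a hypothetical 3 m effective range, a near object at 0.5 m keeps its TOF depth while a distant object at 12 m keeps its binocular depth.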
- the embodiment of the present application uses the binocular vision module to compensate for this limitation: the binocular vision module determines the depth values of objects at long and ultra-long distances, so the depth value of the final synthesized image has higher accuracy and richer depth details, which is suitable for taking pictures with depth-of-field blurring.
- by synthesizing the first depth image obtained by the TOF module and the second depth image obtained by the binocular vision module, the respective shortcomings of the two technologies can be effectively avoided; the robustness and accuracy of the depth image of close objects are improved, the two modules mutually enhance each other, and more accurate depth images can be obtained regardless of the distance of the object.
- for example, when shooting objects within the effective distance of the TOF module, an accurate low-resolution depth image is first obtained through the TOF module, and binocular vision is then used to expand the depth details on the basis of that depth image, achieving a higher-resolution depth image.
- the method for determining the depth value of the image provided by the embodiment of the application adopts TOF technology and binocular vision technology for joint processing.
- the depth values and corresponding accuracy values of the two depth images are combined by weighting and filtering to obtain a more accurate depth value; this not only exploits the respective advantages of TOF technology and binocular vision technology to obtain high-precision depth information at medium and short distances, but also obtains depth values at long and ultra-long distances, finally achieving perceptual acquisition of depth values at any viewing distance.
- the embodiment of the present application provides a method for determining the depth value of an image.
- a first depth image is obtained from a TOF module
- a second depth image is obtained from a binocular vision module
- the accuracy value is used to represent the accuracy of the corresponding depth value; a weighted calculation is performed with a preset weighting algorithm according to the depth value of the first depth image, the accuracy value corresponding to the depth value of the first depth image, the depth value of the second depth image, and the accuracy value corresponding to the depth value of the second depth image, to obtain the depth value of the target image
- that is, in the present application, the first depth image and the second depth image are obtained from the TOF module and the binocular vision module, and the depth value of the target image is then obtained by weighting the depth value of the first depth image and its corresponding accuracy value together with the depth value of the second depth image and its corresponding accuracy value
- in this way, both the accuracy of the depth value of the first depth image and the accuracy of the depth value of the second depth image are taken into account, so that the determined depth value of the target image is more accurate, which is conducive to capturing a more realistic image and thereby improves the user experience.
- the method for determining the depth value of the image is described on the image processor side.
- FIG. 4 is a schematic flowchart of an optional method for determining the depth value of an image provided by an embodiment of the application. As shown in FIG. 4, the method for determining the depth value of the image may include:
- S401 For the object in the target image, obtain a first depth image from the TOF module, and obtain a second depth image from the binocular vision module;
- the first depth image and the second depth image respectively further include an accuracy value corresponding to the respective depth value, and the accuracy value is used to represent the accuracy of the corresponding depth value;
- S402 may include:
- a weighted calculation is performed with a preset weighting algorithm according to the depth value of the processed first depth image, the accuracy value corresponding to the depth value of the processed first depth image, the depth value of the processed second depth image, and the accuracy value corresponding to the depth value of the processed second depth image, to obtain the depth value of the target image.
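The preset weighting algorithm is referred to elsewhere in the description as formula (1), which this excerpt does not reproduce; one plausible minimal form, assumed here rather than confirmed by the application, is a per-pixel accuracy-weighted average:

```python
def weighted_depth(p_tof, a_tof, p_bv, a_bv):
    """Assumed form of formula (1): the depth value at a pixel is the
    average of the two depth values weighted by their accuracy values."""
    return (a_tof * p_tof + a_bv * p_bv) / (a_tof + a_bv)
```

A depth of 1 with accuracy 3 and a depth of 5 with accuracy 1 combine to (3·1 + 1·5)/(3 + 1) = 2, pulling the result toward the more trusted measurement.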
- the method may include:
- the depth value of the target image is filtered to obtain the depth value of the target image after filtering.
- scaling the first depth image and the second depth image respectively according to the preset resolution of the target image to obtain the processed first depth image and the processed second depth image includes:
- filtering the scaled first depth image and the scaled second depth image respectively to obtain the processed first depth image and the processed second depth image.
- FIG. 5 is a schematic flowchart of another optional method for determining the depth value of an image provided by an embodiment of the application. As shown in FIG. 5, the method may further include:
- S501 Obtain the depth value of the target image;
- S502 According to the depth value of the target image, find the accuracy value corresponding to the depth value of the target image from the preset correspondence between the depth value and the accuracy value;
- S503 Use the depth value of the target image and the accuracy value corresponding to the depth value of the target image to form a first depth image
- S504 Send the first depth image to the image processor, so that the image processor determines the depth value of the target image.
- FIG. 6 is a schematic flowchart of another optional method for determining the depth value of an image provided by an embodiment of the application. As shown in FIG. 6, the method may further include:
- S601 After acquiring the depth value of the target image and the binocular vision image of the target image, respectively determine the feature points of the first vision image in the binocular vision image and the feature points of the second vision image in the binocular vision image;
- S602 Match the feature points of the first visual image with the feature points of the second visual image to obtain a matching result of the target image
- S603 According to the matching result of the target image, find the accuracy value corresponding to the matching result of the target image from the preset correspondence between the matching result and the accuracy value;
- S604 Use the depth value of the target image and the accuracy value corresponding to the matching result of the target image to form a second depth image
- S605 Send the second depth image to the image processor, so that the image processor determines the depth value of the target image.
- FIG. 7 is a schematic structural diagram of an optional image processor provided by an embodiment of the application.
- the image processor may include: a first acquisition module 71 and a weighting module 72; wherein,
- the first acquisition module 71 is configured to acquire a first depth image from the TOF module and a second depth image from the binocular vision module for the object in the target image; wherein, the first depth image and the second depth image are respectively It also includes the accuracy value corresponding to the respective depth value, and the accuracy value is used to characterize the accuracy of the corresponding depth value;
- the weighting module 72 is configured to use the depth value of the first depth image, the accuracy value corresponding to the depth value of the first depth image, the depth value of the second depth image, and the accuracy value corresponding to the depth value of the second depth image, using The preset weighting algorithm performs weighting calculation to obtain the depth value of the target image.
- the weighting module 72 includes:
- the scaling sub-module is configured to perform scaling processing on the first depth image and the second depth image respectively according to the preset resolution of the target image, to obtain the processed first depth image and the processed second depth image;
- the weighting sub-module is configured to perform a weighted calculation with a preset weighting algorithm according to the depth value of the processed first depth image, the accuracy value corresponding to the depth value of the processed first depth image, the depth value of the processed second depth image, and the accuracy value corresponding to the depth value of the processed second depth image, to obtain the depth value of the target image.
- the image processor is further configured to:
- after the weighted calculation is performed with the preset weighting algorithm according to the depth value of the processed first depth image, the accuracy value corresponding to the depth value of the processed first depth image, the depth value of the processed second depth image, and the accuracy value corresponding to the depth value of the processed second depth image to obtain the depth value of the target image, filter the depth value of the target image to obtain the filtered depth value of the target image.
- the scaling sub-module is specifically configured as follows:
- Filtering is performed on the scaled first depth image and the scaled second depth image respectively to obtain the processed first depth image and the processed second depth image.
- the aforementioned first acquisition module 71, weighting module 72, scaling sub-module, and weighting sub-module can be implemented by a processor located on the image processor, specifically a Central Processing Unit (CPU), a Microprocessor Unit (MPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), or the like.
- FIG. 8 is a schematic structural diagram of an optional TOF module provided by an embodiment of the application.
- the TOF module may include: a second acquiring module 81, a first searching module 82, a first forming module 83, and a first sending module 84; wherein,
- the second obtaining module 81 is configured to obtain the depth value of the target image
- the first searching module 82 is configured to find the accuracy value corresponding to the depth value of the target image from the preset corresponding relationship between the depth value and the accuracy value according to the depth value of the target image;
- the first forming module 83 is configured to use the depth value of the target image and the accuracy value corresponding to the depth value of the target image to form a first depth image;
- the first sending module 84 is configured to send the first depth image to the image processor, so that the image processor determines the depth value of the target image.
- the above-mentioned second acquisition module 81, first search module 82, first formation module 83, and first sending module 84 can be implemented by a processor located on the TOF module, specifically a CPU, MPU, DSP, or FPGA.
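The lookup performed by the first searching module 82 might look like the following sketch; the table entries and function names are hypothetical, since the application states only that a preset correspondence between depth values and accuracy values exists (with TOF accuracy typically falling off with distance):

```python
def tof_accuracy(depth, table):
    """Find the accuracy value for a depth value in a preset table of
    (max_depth, accuracy) entries sorted by increasing max_depth."""
    for max_depth, accuracy in table:
        if depth <= max_depth:
            return accuracy
    return 0.0  # beyond the last entry: no confidence in the TOF depth

def form_first_depth_image(depth_map, table):
    """Pair every depth value with its looked-up accuracy value, giving
    the first depth image that is sent to the image processor."""
    return [[(d, tof_accuracy(d, table)) for d in row] for row in depth_map]
```

The falling accuracy values in the hypothetical table reflect the application's observation that TOF depth is reliable at short and medium distances but not at long ones.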
- FIG. 9 is a schematic structural diagram of an optional binocular vision module provided by an embodiment of the application.
- the binocular vision module may include: a determining module 91, a matching module 92, a second searching module 93, a second forming module 94, and a second sending module 95; wherein,
- the determining module 91 is configured to, after acquiring the depth value of the target image and the binocular vision image of the target image, respectively determine the feature points of the first visual image in the binocular vision image and the feature points of the second visual image in the binocular vision image;
- the matching module 92 is configured to match the feature points of the first visual image with the feature points of the second visual image to obtain a matching result of the target image;
- the second searching module 93 is configured to find the accuracy value corresponding to the matching result of the target image from the preset correspondence between the matching result and the accuracy value according to the matching result of the target image;
- the second forming module 94 is configured to use the depth value of the target image and the accuracy value corresponding to the matching result of the target image to form a second depth image;
- the second sending module 95 is configured to send the second depth image to the image processor, so that the image processor determines the depth value of the target image.
- the determination module 91, the matching module 92, the second search module 93, the second formation module 94, and the second sending module 95 can be implemented by a processor located on the binocular vision module, specifically a CPU, MPU, DSP, or FPGA.
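The matching and lookup performed by modules 92 and 93 could be sketched as follows; the similarity score, the thresholds, and the names are all hypothetical, since the application specifies only that a preset correspondence between matching results and accuracy values exists:

```python
def match_score(desc_left, desc_right):
    """Toy matching result for one feature-point pair: a similarity in
    [0, 1] from absolute descriptor differences (a stand-in for the
    module's actual matcher)."""
    diff = sum(abs(a - b) for a, b in zip(desc_left, desc_right))
    return max(0.0, 1.0 - diff / len(desc_left))

def bv_accuracy(score, table):
    """Find the accuracy value for a matching result in a preset table
    of (min_score, accuracy) entries sorted by decreasing min_score:
    better matches earn higher accuracy values."""
    for min_score, accuracy in table:
        if score >= min_score:
            return accuracy
    return 0.0  # match too poor: no confidence in the binocular depth
```

Tying the accuracy value to match quality encodes the application's point that inaccurate feature-point matching is the main source of binocular depth error.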
- FIG. 10 is a schematic structural diagram of another optional image processor provided by an embodiment of the application. As shown in FIG. 10, an embodiment of the application provides an image processor 1000, including:
- a processor 101 and a storage medium 102 storing instructions executable by the processor 101, the storage medium 102 depending on the processor 101 to perform operations through the communication bus 103;
- when the instructions are executed by the processor 101, the method for determining the depth value of the image executed by the above-mentioned image processor is performed.
- the communication bus 103 is used to implement connection and communication between these components.
- the communication bus 103 also includes a power bus, a control bus, and a status signal bus.
- various buses are marked as the communication bus 103 in FIG. 10.
- FIG. 11 is a schematic structural diagram of another optional TOF module provided by an embodiment of the application. As shown in FIG. 11, an embodiment of the present application provides a TOF module 1100, including:
- a processor 111 and a storage medium 112 storing instructions executable by the processor 111, the storage medium 112 depending on the processor 111 to perform operations through the communication bus 113;
- when the instructions are executed by the processor 111, the method for determining the depth value of the image executed by the TOF module is performed.
- the communication bus 113 is used to implement connection and communication between these components.
- the communication bus 113 also includes a power bus, a control bus, and a status signal bus.
- various buses are marked as the communication bus 113 in FIG. 11.
- FIG. 12 is a schematic structural diagram of another optional binocular vision module provided by an embodiment of the application. As shown in FIG. 12, an embodiment of the present application provides a binocular vision module 1200, including:
- a processor 121 and a storage medium 122 storing instructions executable by the processor 121, the storage medium 122 depending on the processor 121 to perform operations through the communication bus 123;
- when the instructions are executed by the processor 121, the method for determining the depth value of the image executed by the binocular vision module is performed.
- the communication bus 123 is used to implement connection and communication between these components.
- the communication bus 123 also includes a power bus, a control bus, and a status signal bus.
- various buses are marked as the communication bus 123 in FIG. 12.
- An embodiment of the present application provides a computer storage medium that stores executable instructions.
- when the executable instructions are executed by one or more processors, the processors perform the method for determining the depth value of the image executed by the image processor in one or more of the above embodiments, or the method for determining the depth value of the image executed by the TOF module in one or more embodiments, or the method for determining the depth value of the image executed by the binocular vision module in one or more embodiments.
- the memory in the embodiments of the present application may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory.
- the non-volatile memory can be Read-Only Memory (ROM), Programmable ROM (PROM), Erasable PROM (EPROM), Electrically Erasable PROM (EEPROM), or flash memory.
- the volatile memory may be random access memory (Random Access Memory, RAM), which is used as an external cache.
- SRAM: static random access memory
- DRAM: dynamic random access memory
- SDRAM: synchronous dynamic random access memory
- DDR SDRAM: double data rate synchronous dynamic random access memory
- ESDRAM: enhanced synchronous dynamic random access memory
- SLDRAM: synchlink dynamic random access memory
- DR RAM: direct Rambus random access memory
- the processor may be an integrated circuit chip with signal processing capabilities.
- each step of the above method can be completed by an integrated logic circuit of hardware in the processor or instructions in the form of software.
- the above-mentioned processor may be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
- the methods, steps, and logical block diagrams disclosed in the embodiments of the present application can be implemented or executed.
- the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
- the steps of the method disclosed in the embodiments of the present application can be directly embodied as being executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software modules in the decoding processor.
- the software module can be located in a storage medium mature in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or registers.
- the storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
- the embodiments described herein can be implemented by hardware, software, firmware, middleware, microcode, or a combination thereof.
- the processing unit can be implemented in one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field-Programmable Gate Arrays (FPGAs), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units for performing the functions described in this application, or a combination thereof.
- the technology described herein can be implemented by modules (such as procedures, functions, etc.) that perform the functions described herein.
- the software codes can be stored in the memory and executed by the processor.
- the memory can be implemented in the processor or external to the processor.
- the technical solution of this application, in essence or the part contributing to the existing technology, can be embodied in the form of a software product; the computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions to enable a terminal (which may be a mobile phone, a computer, a server, or a network device, etc.) to execute the method described in each embodiment of the present application.
- the embodiment of the application provides a method for determining the depth value of an image, an image processor, and a module.
- for the object in the target image, a first depth image is obtained from a TOF module and a second depth image is obtained from a binocular vision module
- the first depth image and the second depth image respectively include accuracy values corresponding to their respective depth values, and the accuracy values are used to characterize the accuracy of the corresponding depth values
- according to the depth value of the first depth image, the accuracy value corresponding to the depth value of the first depth image, the depth value of the second depth image, and the accuracy value corresponding to the depth value of the second depth image, a weighted calculation is performed using a preset weighting algorithm to obtain the depth value of the target image, which can improve the accuracy of the depth value of the image.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
Abstract
A method for determining a depth value of an image, an image processor (1000) and a module. The method for determining a depth value of an image comprises: for an object of a target image, acquiring a first depth image from a TOF module (11), and acquiring a second depth image from a binocular vision module (12) (S401); and using, according to a depth value of the first depth image, an accuracy value corresponding to the depth value of the first depth image, a depth value of the second depth image, and an accuracy value corresponding to the depth value of the second depth image, a preset weighting algorithm to perform weighted calculation to obtain a depth value of a target image (S402). The accuracy of a depth value of an image can thus be improved.
Description
The embodiments of the present application relate to depth image synthesis technology, and in particular to a method for determining the depth value of an image, an image processor, and a module.
The Time of Flight (TOF) module includes a modulated light transmitter, a modulated light receiver, and a depth image synthesis processor. The modulated light transmitter continuously sends modulated light signals to the target, and the receiver receives the modulated light signals returned from the object; in the depth image synthesis processor, the distance of the target object is obtained by detecting the round-trip time of the modulated light signal, thereby obtaining the depth signal of the object. The TOF module has good depth perception characteristics for objects at short and medium distances, but cannot obtain accurate depth values for objects at long and ultra-long distances; moreover, when the TOF module is affected by external noise and ambient light, its depth imaging accuracy is greatly degraded.
The binocular vision module imitates the characteristics of the human eye: by matching the feature points of an object in the left and right images and using the parallax caused by imaging the same object from two viewpoints, the depth signal of the object is obtained through geometric calculation. In practical applications, inaccurate matching of the object's feature points is the biggest obstacle to the binocular vision module obtaining high-precision depth values.
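The geometric calculation mentioned above is, in the standard two-view setup, triangulation from disparity; the following sketch uses the textbook relation Z = f·B/d rather than any formula quoted from the application:

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth of a matched feature point from two-view geometry:
    Z = f * B / d, with f the focal length in pixels, B the baseline
    between the two cameras in meters, and d the disparity in pixels.
    A mismatched feature point gives a wrong d and hence a wrong
    depth, which is the error mode described above."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px
```

Because depth is inversely proportional to disparity, small matching errors at distant objects (small d) cause large depth errors, consistent with the accuracy concerns raised here.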
It can be seen that the TOF module is affected by ambient light or noise, so the accuracy of the depth image it obtains is not high, and the TOF module is only suitable for short and medium distances, while the binocular vision module suffers from inaccurate feature-point matching, resulting in erroneous depth image calculations; that is, the accuracy of the depth image obtained by an existing TOF module or binocular vision module is low.
Summary of the invention
The embodiments of the present application provide a method for determining the depth value of an image, an image processor, and a module, which can improve the accuracy of the depth value of an image.
The technical solutions of the embodiments of the present application can be implemented as follows:
In the first aspect, an embodiment of the present application provides a method for determining the depth value of an image, including:
For the object in the target image, obtaining a first depth image from a TOF module and a second depth image from a binocular vision module; wherein the first depth image and the second depth image each further include an accuracy value corresponding to the respective depth value, and the accuracy value is used to characterize the accuracy of the corresponding depth value;
Performing a weighted calculation with a preset weighting algorithm according to the depth value of the first depth image, the accuracy value corresponding to the depth value of the first depth image, the depth value of the second depth image, and the accuracy value corresponding to the depth value of the second depth image, to obtain the depth value of the target image.
In a second aspect, an embodiment of the present application provides a method for determining the depth value of an image, including:
Obtaining the depth value of a target image;
According to the depth value of the target image, finding the accuracy value corresponding to the depth value of the target image from a preset correspondence between depth values and accuracy values;
Forming a first depth image using the depth value of the target image and the accuracy value corresponding to the depth value of the target image;
Sending the first depth image to an image processor, so that the image processor determines the depth value of the target image.
In a third aspect, an embodiment of the present application provides a method for determining the depth value of an image, including:
After acquiring the depth value of a target image and the binocular vision image of the target image, respectively determining the feature points of the first visual image in the binocular vision image and the feature points of the second visual image in the binocular vision image;
Matching the feature points of the first visual image with the feature points of the second visual image to obtain a matching result of the target image;
According to the matching result of the target image, finding the accuracy value corresponding to the matching result of the target image from a preset correspondence between matching results and accuracy values;
Forming a second depth image using the depth value of the target image and the accuracy value corresponding to the matching result of the target image;
Sending the second depth image to an image processor, so that the image processor determines the depth value of the target image.
In a fourth aspect, an embodiment of the present application provides an image processor, including:
A first acquisition module, configured to acquire, for the object in the target image, a first depth image from a TOF module and a second depth image from a binocular vision module; wherein the first depth image and the second depth image each further include an accuracy value corresponding to the respective depth value, and the accuracy value is used to characterize the accuracy of the corresponding depth value;
A weighting module, configured to perform a weighted calculation with a preset weighting algorithm according to the depth value of the first depth image, the accuracy value corresponding to the depth value of the first depth image, the depth value of the second depth image, and the accuracy value corresponding to the depth value of the second depth image, to obtain the depth value of the target image.
In a fifth aspect, an embodiment of the present application provides a TOF module, including:
A second acquisition module, configured to acquire the depth value of a target image;
A first search module, configured to find, according to the depth value of the target image, the accuracy value corresponding to the depth value of the target image from a preset correspondence between depth values and accuracy values;
A first forming module, configured to form a first depth image using the depth value of the target image and the accuracy value corresponding to the depth value of the target image;
A first sending module, configured to send the first depth image to an image processor, so that the image processor determines the depth value of the target image.
In a sixth aspect, an embodiment of the present application provides a binocular vision module, including:
A determining module, configured to, after acquiring the depth value of a target image and the binocular vision image of the target image, respectively determine the feature points of the first visual image in the binocular vision image and the feature points of the second visual image in the binocular image;
A matching module, configured to match the feature points of the first visual image with the feature points of the second visual image to obtain a matching result of the target image;
A second search module, configured to find, according to the matching result of the target image, the accuracy value corresponding to the matching result of the target image from a preset correspondence between matching results and accuracy values;
A second forming module, configured to form a second depth image using the depth value of the target image and the accuracy value corresponding to the depth value of the target image;
A second sending module, configured to send the second depth image to an image processor, so that the image processor determines the depth value of the target image.
In a seventh aspect, an embodiment of the present application provides an image processor, including:
a processor and a storage medium storing instructions executable by the processor, the storage medium relying on the processor, via a communication bus, to perform operations; when the instructions are executed by the processor, the method for determining the depth value of an image performed by the image processor in one or more of the above embodiments is carried out.
In an eighth aspect, an embodiment of the present application provides a TOF module, including:
a processor and a storage medium storing instructions executable by the processor, the storage medium relying on the processor, via a communication bus, to perform operations; when the instructions are executed by the processor, the method for determining the depth value of an image performed by the TOF module in one or more of the above embodiments is carried out.
In a ninth aspect, an embodiment of the present application provides a binocular vision module, including:
a processor and a storage medium storing instructions executable by the processor, the storage medium relying on the processor, via a communication bus, to perform operations; when the instructions are executed by the processor, the method for determining the depth value of an image performed by the binocular vision module in one or more of the above embodiments is carried out.
In a tenth aspect, an embodiment of the present application provides a computer-readable storage medium storing executable instructions; when the executable instructions are executed by one or more processors, the processors perform the method for determining the depth value of an image performed by the image processor in one or more of the above embodiments, or the method performed by the TOF module, or the method performed by the binocular vision module.
Embodiments of the present application provide a method for determining the depth value of an image, an image processor, and modules. For an object in a target image, a first depth image is acquired from a TOF module and a second depth image is acquired from a binocular vision module, where the first depth image and the second depth image further include accuracy values corresponding to their respective depth values, the accuracy values characterizing how accurate the corresponding depth values are. According to a preset resolution of the target image, a weighted calculation is performed with a preset weighting algorithm based on the depth value of the first depth image, the accuracy value corresponding to the depth value of the first depth image, the depth value of the second depth image, and the accuracy value corresponding to the depth value of the second depth image, yielding the depth value of the target image. In other words, in the embodiments of the present application, the first depth image and the second depth image are acquired from the TOF module and the binocular vision module, and a weighted calculation is then performed over the depth value and corresponding accuracy value of the first depth image and the depth value and corresponding accuracy value of the second depth image. Because the accuracy of the depth values of both depth images is taken into account when determining the depth value of the target image, the determined depth value is more precise, which helps capture a more realistic image and thereby improves the user experience.
FIG. 1 is a schematic structural diagram of an optional image processing system provided by an embodiment of the application;
FIG. 2 is a schematic flow interaction diagram of an optional method for determining the depth value of an image provided by an embodiment of the application;
FIG. 3 is a schematic flowchart of an example of an optional method for determining the depth value of an image provided by an embodiment of the application;
FIG. 4 is a schematic flowchart of an optional method for determining the depth value of an image provided by an embodiment of the application;
FIG. 5 is a schematic flowchart of another optional method for determining the depth value of an image provided by an embodiment of the application;
FIG. 6 is a schematic flowchart of yet another optional method for determining the depth value of an image provided by an embodiment of the application;
FIG. 7 is a schematic structural diagram of an optional image processor provided by an embodiment of the application;
FIG. 8 is a schematic structural diagram of an optional TOF module provided by an embodiment of the application;
FIG. 9 is a schematic structural diagram of an optional binocular vision module provided by an embodiment of the application;
FIG. 10 is a schematic structural diagram of another optional image processor provided by an embodiment of the application;
FIG. 11 is a schematic structural diagram of another optional TOF module provided by an embodiment of the application;
FIG. 12 is a schematic structural diagram of another optional binocular vision module provided by an embodiment of the application.
The technical solutions in the embodiments of the present application will be described clearly and completely below in conjunction with the accompanying drawings. It should be understood that the specific embodiments described here are intended only to explain the related application, not to limit it. It should also be noted that, for ease of description, only the parts related to the relevant application are shown in the drawings.
An embodiment of the application provides a method for determining the depth value of an image, applied in an image processing system. FIG. 1 is a schematic structural diagram of an optional image processing system provided by an embodiment of the application. As shown in FIG. 1, the image processing system includes a TOF module 11, a binocular vision module 12, and an image processor 13, where a communication connection is established between the TOF module 11 and the image processor 13, and a communication connection is established between the binocular vision module 12 and the image processor 13.
In the image processing system, the TOF module 11 obtains its distance to an object in the target image, i.e. the depth value, by measuring the round-trip time of a modulated light signal. This approach gives good depth perception for objects at short to medium distances, but it cannot produce accurate depth values for distant or very distant objects, and the TOF module 11 is easily affected by external noise and ambient light, which degrades the accuracy of the depth values.
In the image processing system, the binocular vision module 12 matches the feature points of an object in the left and right images, exploits the parallax produced when two viewpoints image the same object, and obtains its distance to the object in the target image, i.e. the depth value, through geometric calculation. In practice, however, inaccurate feature-point matching degrades the accuracy of the depth value.
Here, the above image processor can collect depth values from the TOF module 11 and the binocular vision module 12 respectively and process the multiple collected depth values to determine a more accurate depth value for the target image.
To overcome the errors of the TOF module 11 or the binocular vision module 12 in determining depth values, an embodiment of the present application provides a method for determining the depth value of an image, applied in the above image processing system. FIG. 2 is a schematic flow interaction diagram of an optional method for determining the depth value of an image provided by an embodiment of the application. As shown in FIG. 2, the method may include:
S201: the TOF module 11 determines a first depth image of the target image.
To overcome the inability of the TOF module 11 to obtain accurate depth values for distant or very distant objects, and the effect of inaccurate feature-point matching in the binocular vision module 12 on depth accuracy, the TOF module 11 is used here to determine a first depth image of the object in the target image, the binocular vision module 12 is used to determine a second depth image of the target image, and the image processor 13 can then combine the first depth image and the second depth image to determine the depth value of the target image.
In a specific implementation, for the TOF module 11 to determine the first depth image of the target image, S201 may include:
S2011: the TOF module 11 acquires the depth value of the target image.
Specifically, the TOF module 11 obtains its distance to the object in the target image, i.e. the depth value of the target image, by measuring the round-trip time of a modulated light signal. Here, the depth value of the target image means the depth value at each pixel position of the target image.
S2012: according to the depth value of the target image, the TOF module 11 looks up the accuracy value corresponding to the depth value of the target image from a preset correspondence between depth values and accuracy values.
After the TOF module 11 acquires the depth value of the target image in S2011, and since the preset correspondence between depth values and accuracy values is pre-stored in the TOF module 11, the accuracy value corresponding to the depth value of the target image can be looked up from that correspondence once the depth value has been determined.
In practice, the judgment is made against the effective range of the TOF module 11. Based on the emission and reception times of the modulated laser signal, the signal's travel time is computed to obtain the distance between the TOF module 11 and the object in the target image, i.e. the depth value. Within the effective range the accuracy may be set to the maximum, or set to decrease gradually from near to far, among other policies; outside the effective range, the accuracy is lowest. For example, the accuracy value may range between 0 and 1. In this way a correspondence between depth values and accuracy values is established, from which the accuracy value can be determined.
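A minimal sketch of one such depth-to-accuracy correspondence, assuming the "gradually lower from near to far" policy and the 0-to-1 accuracy range mentioned above; the 4 m effective range and the linear fall-off are illustrative assumptions, not values from the text.

```python
def tof_accuracy(depth_m: float, effective_range_m: float = 4.0) -> float:
    """Map a ToF depth reading to an accuracy value in [0, 1].

    Inside the (assumed) effective range, accuracy falls off linearly
    from near to far; outside it, accuracy is lowest (0.0).
    """
    if depth_m <= 0.0 or depth_m >= effective_range_m:
        return 0.0
    return 1.0 - depth_m / effective_range_m
```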
S2013: the TOF module 11 forms a first depth image from the depth value of the target image and the accuracy value corresponding to the depth value of the target image.
Here, the accuracy value is used to characterize how accurate the depth value is.
That is, the first depth image determined by the TOF module 11 includes not only the depth value of the target image but also the accuracy value corresponding to the depth value of the target image; the accuracy value expresses the credibility of the depth value in the corresponding region.
S202: the TOF module 11 sends the first depth image to the image processor.
In S202, the TOF module 11 sends the first depth image, which includes both depth values and accuracy values, to the image processor, so that the image processor 13 obtains both the depth values collected by the TOF module 11 and information about how accurate they are, which helps the image processor 13 determine the depth value of the target image more accurately.
S203: the binocular vision module 12 determines a second depth image of the target image.
In a specific implementation, for the binocular vision module 12 to determine the second depth image of the target image, S203 may include:
S2031: after acquiring the depth value of the target image and the binocular vision image of the target image, the binocular vision module 12 determines the feature points of the first-view image in the binocular vision image and the feature points of the second-view image in the binocular image.
Specifically, the binocular vision module 12 matches the feature points of the object in the left and right images, exploits the parallax produced when two viewpoints image the same object, and obtains its distance to the object in the target image through geometric calculation, i.e. the depth value of the target image.
After acquiring the depth value of the target image, the binocular vision module 12 first needs to determine the feature points of each view image in the binocular image. Feature points reflect the essential characteristics of an image and can identify the target object in it; image matching is accomplished by matching feature points.
S2032: the binocular vision module 12 matches the feature points of the first-view image against the feature points of the second-view image to obtain a matching result for the target image.
Here, the binocular vision module 12 matches the feature points of the first-view image against those of the second-view image. The resulting matching result is either a successful match or a failed match. A successful match means the feature points of the first-view image and the second-view image are identical or similar, which can be expressed as a matching degree; in practice, the matching degree may be the sum of absolute differences for the same feature point, which this embodiment does not specifically limit. A failed match means the feature points of the first-view image and the second-view image are completely different, i.e. there are no feature points that can be matched.
S2033: according to the matching result, the binocular vision module 12 looks up the accuracy value corresponding to the matching result of the target image from a preset correspondence between matching results and accuracy values.
After the binocular vision module 12 obtains the matching result of the target image in S2032, and since a preset correspondence between matching results and accuracy values is pre-stored in the binocular vision module 12, the accuracy value corresponding to the determined matching result can be looked up from that correspondence.
In practice, based on feature-point matching, the left and right images are traversed at pixel level with a specified step size, looking for matching feature points between the two images. If a region has no feature point that can be matched, the accuracy at that position is lowest; if matching feature points exist, the accuracy is proportional to the matching degree: the higher the matching degree, the higher the accuracy. The matching degree can be expressed as the sum of absolute differences of the same feature point in the two images, or in other forms. For example, the accuracy value may range between 0 and 1. In this way a correspondence between matching degree and accuracy value is established, from which the accuracy value can be determined.
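The sum-of-absolute-differences (SAD) matching degree and its mapping to an accuracy in [0, 1] might look like the following sketch; the normalization by a maximum-SAD threshold is an assumption, since the text leaves the exact mapping open.

```python
def sad(patch_a, patch_b):
    """Sum of absolute differences between two equally sized patches
    (flattened to 1-D sequences of pixel values): lower means more similar."""
    return sum(abs(a - b) for a, b in zip(patch_a, patch_b))

def match_accuracy(patch_a, patch_b, max_sad: float) -> float:
    """Map a SAD score to an accuracy value: identical patches give 1.0,
    scores at or above the max_sad threshold give the lowest accuracy, 0.0."""
    score = sad(patch_a, patch_b)
    if score >= max_sad:
        return 0.0
    return 1.0 - score / max_sad
```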
S2034: the binocular vision module 12 forms a second depth image from the depth value of the target image and the accuracy value corresponding to the matching result of the target image.
That is, the second depth image determined by the binocular vision module 12 includes not only the depth value of the target image but also the accuracy value corresponding to the matching result of the target image.
The accuracy values of the first depth image and the accuracy values of the second depth image may differ in magnitude and in granularity: they may be at pixel level, block level, row level, or frame level.
S204: the binocular vision module 12 sends the second depth image to the image processor 13.
In S204, the binocular vision module 12 sends the second depth image, which includes both depth values and accuracy values, to the image processor 13, so that the image processor 13 obtains both the depth values collected by the binocular vision module 12 and information about how accurate they are, which helps the image processor 13 determine the depth value of the target image more accurately.
It should be noted here that the TOF module 11's determination of the first depth image and the binocular vision module 12's determination of the second depth image may be executed in parallel or sequentially; when executed sequentially, either the TOF module 11 may determine the first depth image before the binocular vision module 12 determines the second depth image, or the binocular vision module 12 may determine the second depth image first, which this embodiment does not specifically limit.
S205: the image processor 13 determines the depth value of the target image.
It should be noted here that when the resolution of the target image is the same as both the resolution of the first depth image and the resolution of the second depth image, S205 may include:
the image processor 13 performs a weighted calculation with a preset weighting algorithm, based on the depth value of the first depth image, the accuracy value corresponding to the depth value of the first depth image, the depth value of the second depth image, and the accuracy value corresponding to the depth value of the second depth image, to obtain the depth value of the target image.
However, when the resolution of the target image differs from the resolution of the first depth image and the resolution of the second depth image, S205 may include:
S2051: the image processor 13 scales the first depth image and the second depth image to the preset resolution of the target image, obtaining a processed first depth image and a processed second depth image.
S2052: the image processor 13 performs a weighted calculation with a preset weighting algorithm, based on the depth value of the processed first depth image, the accuracy value corresponding to the depth value of the processed first depth image, the depth value of the processed second depth image, and the accuracy value corresponding to the depth value of the processed second depth image, to obtain the depth value of the target image.
That is, when the resolution of the acquired first depth image differs from the preset resolution of the target image, and the resolution of the second depth image differs from the preset resolution of the target image, the first depth image and the second depth image need to be scaled so that their resolutions match the preset resolution of the target image, yielding the processed first depth image and the processed second depth image.
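Scaling both depth maps (and their accuracy maps) to the target resolution can be done with any standard resampling method; a nearest-neighbour sketch over plain nested lists, purely for illustration:

```python
def resize_nearest(grid, out_h: int, out_w: int):
    """Nearest-neighbour resampling of a 2-D grid (list of rows) to
    out_h x out_w. Depth maps and accuracy maps can be scaled alike."""
    in_h, in_w = len(grid), len(grid[0])
    return [
        [grid[r * in_h // out_h][c * in_w // out_w] for c in range(out_w)]
        for r in range(out_h)
    ]
```

Nearest-neighbour is a deliberately simple choice here; an interpolating resampler would blur depth discontinuities at object edges, which may or may not be desirable before the weighted fusion.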
Here, in order for the image processor 13 to determine a more accurate depth value for the target image, the depth value of the processed first depth image, the accuracy value corresponding to the depth value of the processed first depth image, the depth value of the processed second depth image, and the accuracy value corresponding to the depth value of the processed second depth image are fed into the preset weighting algorithm, which computes the depth value of the target image.
Specifically, let the row and column coordinates be i and j, denote the depth value of the first depth image generated by the TOF module 11 by P_TOF and its accuracy value by A_TOF, and denote the depth value of the second depth image generated by the binocular vision module 12 by P_BV and its accuracy value by A_BV. The depth value TMP_{i,j} of the target image generated by the image processor 13 at row i, column j is then computed according to formula (1), where K is the pixel compensation value for the accuracy of the TOF module 11, T is the pixel compensation value for the accuracy of the binocular vision module 12 (K and T can be set according to actual tuning), F is an overall compensation value that can be configured per device and scene, and a and b are trust values that can be set differently for depth values at different distances.
In practice, the depth value of the processed first depth image, the accuracy value of the processed first depth image, the depth value of the second depth image, and the accuracy value of the processed second depth image are substituted into formula (1) above to compute the depth value of the target image.
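Since the equation image for formula (1) is not reproduced in this text, the sketch below shows only one plausible per-pixel fusion consistent with the variables described above (compensated accuracies as weights, trust values a and b, and an overall offset F); it is an assumption, not the patent's actual formula (1).

```python
def fuse_depth(p_tof, a_tof, p_bv, a_bv,
               k=0.0, t=0.0, a=1.0, b=1.0, f=0.0):
    """Hypothetical per-pixel weighted fusion of the two depth values.

    Weights are the trust-scaled, compensated accuracies a*(A_TOF + K)
    and b*(A_BV + T); F is an overall compensation added to the result.
    """
    w_tof = a * (a_tof + k)
    w_bv = b * (a_bv + t)
    total = w_tof + w_bv
    if total == 0.0:
        return f  # neither source is trusted; fall back to the offset
    return (w_tof * p_tof + w_bv * p_bv) / total + f
```

With equal accuracies the result is the plain average of the two depths; as one source's accuracy drops toward zero, the fused depth converges to the other source's value.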
It should also be noted that, in determining the depth value of the target image, the image processor 13 may filter the depth images after the scaling process, or filter the depth value of the target image after it has been determined, or do both, which this embodiment does not specifically limit.
In an optional embodiment, S2051 may include:
the image processor 13 scales the first depth image and the second depth image to the preset resolution of the target image, obtaining a scaled first depth image and a scaled second depth image; and
the image processor 13 filters the scaled first depth image and the scaled second depth image respectively, obtaining the processed first depth image and the processed second depth image.
Here, the depth images are filtered after the scaling process. Specifically, after scaling the first depth image and the second depth image, the image processor 13 filters the scaled first depth image and the scaled second depth image respectively; the filtering may be Gaussian filtering, smoothing filtering, or filtering with configurable coefficients, which this embodiment does not specifically limit.
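As one concrete possibility for the smoothing step, a 3x3 box (mean) filter with edge replication — a simple stand-in for the Gaussian or configurable-coefficient filters mentioned above, not the patent's specific filter:

```python
def box_smooth_3x3(depth):
    """3x3 mean filter over a 2-D depth map (list of rows), replicating
    edge pixels at the borders so the output keeps the same size."""
    h, w = len(depth), len(depth[0])
    out = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            acc = 0.0
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr = min(max(r + dr, 0), h - 1)  # clamp to borders
                    cc = min(max(c + dc, 0), w - 1)
                    acc += depth[rr][cc]
            out[r][c] = acc / 9.0
    return out
```

A Gaussian kernel would weight the centre pixel more heavily; the box filter simply averages, which suffices to illustrate where the filtering sits in the pipeline.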
In an optional embodiment, after S205, the method may further include:
the image processor 13 filters the depth value of the target image to obtain the filtered depth value of the target image.
Here, the depth value of the target image is filtered after it has been determined. Specifically, after the image processor 13 determines the depth value of the target image, it filters that depth value to obtain the filtered depth value, which improves the precision of the depth value; the filtering may be Gaussian filtering, smoothing filtering, or filtering with configurable coefficients, which this embodiment does not specifically limit.
下面举实例来对上述一个或多个实施例中所述的图像的深度值的确定方法进行说明。The following examples are given to illustrate the method for determining the depth value of the image described in one or more embodiments above.
实例1：首先，图像处理器将从TOF模组和双目视觉模组得到的第一深度图像和第二深度图像进行缩放，与最终输出的目标图像同分辨率，生成新的第一深度图像和新的第二深度图像。Example 1: First, the image processor scales the first depth image and the second depth image obtained from the TOF module and the binocular vision module to the same resolution as the final output target image, generating a new first depth image and a new second depth image.
其次，设行列坐标为i和j，生成的新的第一深度图像的深度值为P_TOF，新的第一深度图像的深度值对应的准确度值为A_TOF，新的第二深度图像的深度值为P_BV，新的第二深度图像的深度值对应的准确度值为A_BV，采用上述公式(1)计算得到目标图像的深度值。Secondly, let the row and column coordinates be i and j; the depth value of the generated new first depth image is P_TOF, the accuracy value corresponding to the depth value of the new first depth image is A_TOF, the depth value of the new second depth image is P_BV, and the accuracy value corresponding to the depth value of the new second depth image is A_BV; the depth value of the target image is then calculated using the above formula (1).
最后，图像处理器对目标图像的深度值进行高斯，平滑或可配系数的滤波，得到最终滤波后的深度值。Finally, the image processor applies Gaussian, smoothing, or configurable-coefficient filtering to the depth value of the target image to obtain the final filtered depth value.
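Formula (1) itself is not reproduced in this excerpt. The sketch below assumes it is a per-pixel accuracy-weighted average of the two depth maps, which is consistent with the description above; all names (`p_tof`, `a_tof`, `p_bv`, `a_bv`) follow the text, but the functional form and the numeric values are illustrative assumptions:

```python
def fuse_depth(p_tof, a_tof, p_bv, a_bv, eps=1e-9):
    """Per-pixel fusion of two aligned depth maps (lists of rows), assuming
    formula (1) has the accuracy-weighted-average form
        P(i, j) = (A_TOF*P_TOF + A_BV*P_BV) / (A_TOF + A_BV).
    eps guards against a zero total accuracy at a pixel."""
    h, w = len(p_tof), len(p_tof[0])
    return [[(a_tof[i][j] * p_tof[i][j] + a_bv[i][j] * p_bv[i][j])
             / (a_tof[i][j] + a_bv[i][j] + eps)
             for j in range(w)] for i in range(h)]

# One near pixel where TOF is confident, one distant pixel where the
# binocular module is confident (invented values, 1x2 depth maps).
p_tof, a_tof = [[1.0, 30.0]], [[0.9, 0.1]]
p_bv,  a_bv  = [[1.2, 28.0]], [[0.3, 0.8]]
fused = fuse_depth(p_tof, a_tof, p_bv, a_bv)
```

The fused depth leans toward the TOF value at the near pixel and toward the binocular value at the distant pixel, mirroring Example 4 below where near objects follow the TOF module and distant objects follow the binocular vision module.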
实例2：图像处理器将从TOF模组和双目视觉模组得到的第一深度图像和第二深度图像进行缩放，与最终输出的目标图像同分辨率，生成新的第一深度图像和新的第二深度图像。Example 2: The image processor scales the first depth image and the second depth image obtained from the TOF module and the binocular vision module to the same resolution as the final output target image, generating a new first depth image and a new second depth image.
图像处理器对新的第一深度图像和新的第二深度图像分别进行平滑或可配系数的滤波后，得到新的第一深度图像的深度值P_TOF和新的第一深度图像的深度值对应的准确度值A_TOF，新的第二深度图像的深度值P_BV和新的第二深度图像的深度值对应的准确度值A_BV；设行列坐标为i和j，采用上述公式(1)计算得到目标图像的深度值。After the image processor respectively applies smoothing or configurable-coefficient filtering to the new first depth image and the new second depth image, it obtains the depth value P_TOF of the new first depth image and its corresponding accuracy value A_TOF, and the depth value P_BV of the new second depth image and its corresponding accuracy value A_BV; letting the row and column coordinates be i and j, the depth value of the target image is calculated using the above formula (1).
实例3：首先，图像处理器将从TOF模组和双目视觉模组得到的第一深度图像和第二深度图像进行缩放，与最终输出的目标图像同分辨率，生成新的第一深度图像和新的第二深度图像。Example 3: First, the image processor scales the first depth image and the second depth image obtained from the TOF module and the binocular vision module to the same resolution as the final output target image, generating a new first depth image and a new second depth image.
其次，图像处理器对新的第一深度图像和新的第二深度图像分别进行平滑或可配系数的滤波后，得到新的第一深度图像的深度值P_TOF和新的第一深度图像的深度值对应的准确度值A_TOF，新的第二深度图像的深度值P_BV和新的第二深度图像的深度值对应的准确度值A_BV；设行列坐标为i和j，采用上述公式(1)计算得到目标图像的深度值。Secondly, after the image processor respectively applies smoothing or configurable-coefficient filtering to the new first depth image and the new second depth image, it obtains the depth value P_TOF of the new first depth image and its corresponding accuracy value A_TOF, and the depth value P_BV of the new second depth image and its corresponding accuracy value A_BV; letting the row and column coordinates be i and j, the depth value of the target image is calculated using the above formula (1).
最后，图像处理器对目标图像的深度值进行高斯，平滑或可配系数的滤波，得到最终滤波后的深度值。Finally, the image processor applies Gaussian, smoothing, or configurable-coefficient filtering to the depth value of the target image to obtain the final filtered depth value.
实例4：图3为本申请实施例提供的一种可选的图像的深度值的确定方法的实例的流程示意图，如图3所示，A物体为近景物体，B物体为远景物体，根据TOF模组和双目视觉模组对A物体的深度判断，以TOF模组为根据，校正双目视觉模组中B物体的深度从b1到b2；最终合成时，近景物体采用TOF模组的深度值，远景物体采用双目视觉模组的深度值。Example 4: FIG. 3 is a schematic flowchart of an example of an optional method for determining the depth value of an image provided by an embodiment of the present application. As shown in FIG. 3, object A is a close-range object and object B is a distant object. Based on the depth judgments of the TOF module and the binocular vision module for object A, and taking the TOF module as the reference, the depth of object B in the binocular vision module is corrected from b1 to b2; in the final synthesis, the depth value from the TOF module is used for close-range objects, and the depth value from the binocular vision module is used for distant objects.
由图3可知，虚化景深拍照中，TOF模组的有效距离有限，并不能获取到远距离或超远距离的物体深度信息。本申请实施例采用双目视觉模组可以弥补这一特性，根据双目视觉模组判断远距离和超远距离的物体深度值，最终合成的图像的深度值可以获取精度更高，深度细节更丰富的深度值，以用于虚化景深的拍照。It can be seen from FIG. 3 that, in depth-of-field blurring (bokeh) photography, the effective distance of the TOF module is limited, and it cannot obtain depth information for long-range or ultra-long-range objects. The embodiments of the present application use the binocular vision module to compensate for this limitation: the depth values of long-range and ultra-long-range objects are determined from the binocular vision module, so that the depth values of the finally synthesized image are more accurate and richer in depth detail, for use in depth-of-field blurring photography.
通过上述实例，将TOF模组和双目视觉模组获得的第一深度图像和第二深度图像，进行合成处理，不仅可以有效的避免两种技术各自的缺点，对于近距离物体的深度图像提升其鲁棒性和精度，而且可以相互增强，无论物体的远近，都可以获得更精准的深度图像。Through the above examples, synthesizing the first depth image and the second depth image obtained by the TOF module and the binocular vision module not only effectively avoids the respective shortcomings of the two technologies, improving the robustness and accuracy of depth images of close-range objects, but also allows the two to reinforce each other, so that a more accurate depth image can be obtained regardless of an object's distance.
例如，雨雾天气拍摄人像，可以通过TOF模组和双目视觉模组的配合，对于近景人像，有效的避免噪声导致的深度图像获取精度不高，对于远景，依赖双目视觉模组获取得到更多的深度信息。For example, when shooting portraits in rainy or foggy weather, the TOF module and the binocular vision module can cooperate: for close-range portraits, the low depth-image accuracy caused by noise is effectively avoided, while for distant scenery, the binocular vision module is relied upon to obtain more depth information.
例如，TOF模组有效距离范围内的物体拍摄，先通过TOF模组获取准确的低分辨率的深度图像，然后基于该深度图像，采用双目视觉进行深度细节的扩充，达到更高分辨率的深度图像。For example, when shooting objects within the effective distance range of the TOF module, an accurate low-resolution depth image is first obtained through the TOF module, and then, based on that depth image, binocular vision is used to expand the depth details, achieving a higher-resolution depth image.
本申请实施例提供的图像的深度值的确定方法，采用TOF技术和双目视觉技术进行联合处理，通过两种深度图像的深度值和对应的准确度值，利用权重和滤波等手段，获取得到更加精确的深度值，不但利用TOF技术和双目视觉技术各自的优点，可以在中近距离获得更高精度的深度信息，而且也可以获取远距离及超远距离的深度值，最终实现了任何视距的深度值感知获取。The method for determining the depth value of an image provided by the embodiments of the present application performs joint processing using TOF technology and binocular vision technology. By means of the depth values and corresponding accuracy values of the two depth images, together with weighting and filtering, more accurate depth values are obtained: the respective advantages of TOF technology and binocular vision technology not only yield higher-precision depth information at short and medium distances, but also provide depth values at long and ultra-long distances, finally achieving depth-value perception and acquisition at any viewing distance.
本申请实施例提供了一种图像的深度值的确定方法，针对目标图像的物体，从TOF模组获取第一深度图像，从双目视觉模组获取第二深度图像，其中，第一深度图像和第二深度图像中还包括与各自的深度值对应的准确度值，准确度值用于表征对应的深度值的准确程度，根据第一深度图像的深度值、第一深度图像的深度值对应的准确度值、第二深度图像的深度值和第二深度图像的深度值对应的准确度值，利用预设的加权算法进行加权计算，得到目标图像的深度值；也就是说，在本申请实施例中，从TOF模组和双目视觉模组获取第一深度图像和第二深度图像，再根据第一深度图像的深度值和对应的准确度值，第二深度图像的深度值和对应的准确度值进行加权计算，这样，在确定目标图像的深度值的过程中，考虑到了第一深度图像的深度值的准确程度和第二深度图像的深度值的准确程度，使得确定出的目标图像的深度值更加精确，有利于捕捉到更为真实的图像，从而提高了用户的体验度。The embodiments of the present application provide a method for determining the depth value of an image: for an object in a target image, a first depth image is obtained from a TOF module and a second depth image is obtained from a binocular vision module, where the first depth image and the second depth image further include accuracy values corresponding to their respective depth values, the accuracy value being used to characterize how accurate the corresponding depth value is; according to the depth value of the first depth image, the accuracy value corresponding to the depth value of the first depth image, the depth value of the second depth image, and the accuracy value corresponding to the depth value of the second depth image, weighted calculation is performed using a preset weighting algorithm to obtain the depth value of the target image. That is, in the embodiments of the present application, the first depth image and the second depth image are obtained from the TOF module and the binocular vision module, and weighted calculation is then performed according to the depth value of the first depth image and its corresponding accuracy value and the depth value of the second depth image and its corresponding accuracy value. In this way, in the process of determining the depth value of the target image, the accuracy of the depth value of the first depth image and the accuracy of the depth value of the second depth image are taken into account, so that the determined depth value of the target image is more accurate, which helps capture a more realistic image and thereby improves the user experience.
下面以图像处理系统中所部署的各个设备侧对上述图像的深度值的确定方法进行说明。The method for determining the depth value of the above image is described below from the side of each device deployed in the image processing system.
首先,以图像处理器侧对图像的深度值的确定方法进行描述。First, the method for determining the depth value of the image is described on the image processor side.
图4为本申请实施例提供的一种可选的图像的深度值的确定方法的流程示意图,如图4所示,该图像的深度值的确定方法可以包括:FIG. 4 is a schematic flowchart of an optional method for determining the depth value of an image provided by an embodiment of the application. As shown in FIG. 4, the method for determining the depth value of the image may include:
S401:针对目标图像的物体,从TOF模组获取第一深度图像,以及从双目视觉模组获取第二深度图像;S401: For the object in the target image, obtain a first depth image from the TOF module, and obtain a second depth image from the binocular vision module;
其中,第一深度图像和第二深度图像中分别还包括与各自的深度值对应的准确度值,准确度值用于表征对应的深度值的准确程度;Wherein, the first depth image and the second depth image respectively further include an accuracy value corresponding to the respective depth value, and the accuracy value is used to represent the accuracy of the corresponding depth value;
S402：根据第一深度图像的深度值、第一深度图像的深度值对应的准确度值、第二深度图像的深度值和第二深度图像的深度值对应的准确度值，利用预设的加权算法进行加权计算，得到目标图像的深度值。S402: According to the depth value of the first depth image, the accuracy value corresponding to the depth value of the first depth image, the depth value of the second depth image, and the accuracy value corresponding to the depth value of the second depth image, perform weighted calculation using a preset weighting algorithm to obtain the depth value of the target image.
在一种可选的实施例中,S402可以包括:In an optional embodiment, S402 may include:
按照预设的目标图像的分辨率,分别对第一深度图像和第二深度图像进行缩放处理,得到处理后的第一深度图像和处理后的第二深度图像;Respectively performing scaling processing on the first depth image and the second depth image according to the preset resolution of the target image, to obtain the processed first depth image and the processed second depth image;
根据处理后的第一深度图像的深度值、处理后的第一深度图像的深度值对应的准确度值、处理后的第二深度图像的深度值和处理后的第二深度图像的深度值对应的准确度值，利用预设的加权算法进行加权计算，得到目标图像的深度值。According to the depth value of the processed first depth image, the accuracy value corresponding to the depth value of the processed first depth image, the depth value of the processed second depth image, and the accuracy value corresponding to the depth value of the processed second depth image, perform weighted calculation using a preset weighting algorithm to obtain the depth value of the target image.
在一种可选的实施例中，在根据处理后的第一深度图像的深度值、处理后的第一深度图像的深度值对应的准确度值、处理后的第二深度图像的深度值和处理后的第二深度图像的深度值对应的准确度值，利用预设的加权算法进行加权计算，得到目标图像的深度值之后，该方法可以包括：In an optional embodiment, after the weighted calculation is performed using the preset weighting algorithm according to the depth value of the processed first depth image, the accuracy value corresponding to the depth value of the processed first depth image, the depth value of the processed second depth image, and the accuracy value corresponding to the depth value of the processed second depth image, to obtain the depth value of the target image, the method may include:
对目标图像的深度值进行滤波处理,得到目标图像滤波处理后的深度值。The depth value of the target image is filtered to obtain the depth value of the target image after filtering.
在一种可选的实施例中，按照预设的目标图像的分辨率，分别对第一深度图像和第二深度图像进行缩放处理，得到处理后的第一深度图像和处理后的第二深度图像，包括：In an optional embodiment, performing scaling processing on the first depth image and the second depth image respectively according to the preset resolution of the target image, to obtain the processed first depth image and the processed second depth image, includes:
按照预设的目标图像的分辨率,分别对第一深度图像和第二深度图像进行缩放处理,得到缩放后的第一深度图像和缩放后的第二深度图像;Respectively performing scaling processing on the first depth image and the second depth image according to the preset resolution of the target image, to obtain a scaled first depth image and a scaled second depth image;
分别对缩放后的第一深度图像和缩放后的第二深度图像进行滤波处理,得到处理后的第一深度图像和处理后的第二深度图像。Filtering is performed on the scaled first depth image and the scaled second depth image respectively to obtain the processed first depth image and the processed second depth image.
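The scale-then-filter decomposition above can be illustrated with a minimal sketch of the scaling step. The embodiments do not prescribe an interpolation method, so nearest-neighbour interpolation and the function name below are assumptions for illustration only:

```python
def scale_to(depth, out_h, out_w):
    """Resize a depth map (list of rows) to the preset target-image
    resolution. Nearest-neighbour interpolation is used here for brevity
    (bilinear would be a common alternative in practice)."""
    h, w = len(depth), len(depth[0])
    return [[depth[i * h // out_h][j * w // out_w] for j in range(out_w)]
            for i in range(out_h)]

# Upscale a 2x2 depth map to a 4x4 target resolution; the scaled map
# would then be passed to the filtering step described above.
scaled = scale_to([[1.0, 2.0], [3.0, 4.0]], 4, 4)
```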
其次,以TOF模组侧对上述图像的深度值的确定方法进行描述。Secondly, the method for determining the depth value of the above image is described on the TOF module side.
图5为本申请实施例提供的另一种可选的图像的深度值的确定方法的流程示意图,如图5所示,该方法还可以包括:FIG. 5 is a schematic flowchart of another optional method for determining the depth value of an image provided by an embodiment of the application. As shown in FIG. 5, the method may further include:
S501:获取目标图像的深度值;S501: Obtain the depth value of the target image;
S502:根据目标图像的深度值,从预设的深度值与准确度值之间的对应关系中,查找到目标图像的深度值对应的准确度值;S502: According to the depth value of the target image, find the accuracy value corresponding to the depth value of the target image from the preset corresponding relationship between the depth value and the accuracy value;
S503:利用目标图像的深度值和目标图像的深度值对应的准确度值形成第一深度图像;S503: Use the depth value of the target image and the accuracy value corresponding to the depth value of the target image to form a first depth image;
S504:将第一深度图像发送至图像处理器,以使得图像处理器确定目标图像的深度值。S504: Send the first depth image to the image processor, so that the image processor determines the depth value of the target image.
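Steps S501 to S504 can be sketched as follows. The preset depth-to-accuracy correspondence of S502 is not detailed in this excerpt, so the table, its thresholds, and the function names below are invented purely for illustration:

```python
# Invented depth->accuracy correspondence: (upper depth bound in metres,
# accuracy value). TOF confidence drops beyond the module's effective range.
TOF_TABLE = ((3.0, 0.9), (5.0, 0.5), (8.0, 0.2))

def tof_accuracy(depth_m, table=TOF_TABLE):
    """S502: look up the accuracy value for a TOF depth value."""
    for bound, acc in table:
        if depth_m <= bound:
            return acc
    return 0.05  # beyond the effective range

def form_first_depth_image(depth_map):
    """S503: pair every depth value with its looked-up accuracy value;
    the result is the first depth image sent to the image processor (S504)."""
    return [[(d, tof_accuracy(d)) for d in row] for row in depth_map]

# A 1x2 depth map: one near pixel inside the effective range, one beyond it.
first_depth_image = form_first_depth_image([[0.8, 6.0]])
```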
再次,以双目视觉模组侧对上述图像的深度值的确定方法进行描述。Once again, the method for determining the depth value of the above-mentioned image is described on the side of the binocular vision module.
图6为本申请实施例提供的再一种可选的图像的深度值的确定方法的流程示意图,如图6所示,该方法还可以包括:FIG. 6 is a schematic flowchart of another optional method for determining the depth value of an image provided by an embodiment of the application. As shown in FIG. 6, the method may further include:
S601:获取目标图像的深度值和目标图像的双目视觉图像之后,分别确定双目视觉图像中第一目视觉图像的特征点和双目视觉图像中第二目视觉图像的特征点;S601: After acquiring the depth value of the target image and the binocular vision image of the target image, respectively determine the feature points of the first vision image in the binocular vision image and the feature points of the second vision image in the binocular vision image;
S602:将第一目视觉图像的特征点与第二目视觉图像的特征点进行匹配,得到目标图像的匹配结果;S602: Match the feature points of the first visual image with the feature points of the second visual image to obtain a matching result of the target image;
S603:根据目标图像的匹配结果,从预设的匹配结果与准确度值之间的对应关系中,查找到目标图像的匹配结果对应的准确度值;S603: According to the matching result of the target image, find the accuracy value corresponding to the matching result of the target image from the preset corresponding relationship between the matching result and the accuracy value;
S604:利用目标图像的深度值和目标图像的匹配结果对应的准确度值形成第二深度图像;S604: Use the depth value of the target image and the accuracy value corresponding to the matching result of the target image to form a second depth image;
S605:将第二深度图像发送至图像处理器,以使得图像处理器确定目标图像的深度值。S605: Send the second depth image to the image processor, so that the image processor determines the depth value of the target image.
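Steps S601 to S605 can likewise be sketched. The concrete matching result and the preset matching-result-to-accuracy correspondence of S603 are not detailed here; the sketch below uses a normalised descriptor-matching distance as an illustrative stand-in for the matching result, and all thresholds and names are invented:

```python
# Invented matching-result->accuracy correspondence: a smaller descriptor
# distance (a better match between the two visual images) maps to a higher
# accuracy value.
MATCH_TABLE = ((0.1, 0.9), (0.3, 0.6), (0.6, 0.3))

def match_accuracy(distance, table=MATCH_TABLE):
    """S603: look up the accuracy value for one matching result."""
    for bound, acc in table:
        if distance <= bound:
            return acc
    return 0.1  # effectively unmatched feature point

def form_second_depth_image(depths, match_distances):
    """S604: pair each binocular depth value with the accuracy of its match;
    the result is the second depth image sent to the image processor (S605)."""
    return [(d, match_accuracy(m)) for d, m in zip(depths, match_distances)]

# Two feature points: one well matched, one poorly matched.
second_depth_image = form_second_depth_image([12.0, 25.0], [0.05, 0.7])
```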
基于同一发明构思,本申请实施例提供一种图像处理器,图7为本申请实施例提供的一种可选的图像处理器的结构示意图,如图7所示,该图像处理器可以包括:第一获取模块71和加权模块72;其中,Based on the same inventive concept, an embodiment of the application provides an image processor. FIG. 7 is a schematic structural diagram of an optional image processor provided by an embodiment of the application. As shown in FIG. 7, the image processor may include: The first acquisition module 71 and the weighting module 72; among them,
第一获取模块71，配置为针对目标图像的物体，从TOF模组获取第一深度图像，以及从双目视觉模组获取第二深度图像；其中，第一深度图像和第二深度图像中分别还包括与各自的深度值对应的准确度值，准确度值用于表征对应的深度值的准确程度；The first acquisition module 71 is configured to, for an object in a target image, acquire a first depth image from the TOF module and a second depth image from the binocular vision module, where the first depth image and the second depth image each further include an accuracy value corresponding to the respective depth value, the accuracy value being used to characterize the accuracy of the corresponding depth value;
加权模块72，配置为根据第一深度图像的深度值、第一深度图像的深度值对应的准确度值、第二深度图像的深度值和第二深度图像的深度值对应的准确度值，利用预设的加权算法进行加权计算，得到目标图像的深度值。The weighting module 72 is configured to perform weighted calculation using a preset weighting algorithm according to the depth value of the first depth image, the accuracy value corresponding to the depth value of the first depth image, the depth value of the second depth image, and the accuracy value corresponding to the depth value of the second depth image, to obtain the depth value of the target image.
在一种可选的实施例中,加权模块72,包括:In an optional embodiment, the weighting module 72 includes:
缩放子模块,配置为按照预设的目标图像的分辨率,分别对第一深度图像和第二深度图像进行缩放处理,得到处理后的第一深度图像和处理后的第二深度图像;The scaling sub-module is configured to perform scaling processing on the first depth image and the second depth image respectively according to the preset resolution of the target image, to obtain the processed first depth image and the processed second depth image;
加权子模块，配置为根据处理后的第一深度图像的深度值、处理后的第一深度图像的深度值对应的准确度值、处理后的第二深度图像的深度值和处理后的第二深度图像的深度值对应的准确度值，利用预设的加权算法进行加权计算，得到目标图像的深度值。The weighting sub-module is configured to perform weighted calculation using a preset weighting algorithm according to the depth value of the processed first depth image, the accuracy value corresponding to the depth value of the processed first depth image, the depth value of the processed second depth image, and the accuracy value corresponding to the depth value of the processed second depth image, to obtain the depth value of the target image.
在一种可选的实施例中,该图像处理器,还配置为:In an optional embodiment, the image processor is further configured to:
在根据处理后的第一深度图像的深度值、处理后的第一深度图像的深度值对应的准确度值、处理后的第二深度图像的深度值和处理后的第二深度图像的深度值对应的准确度值，利用预设的加权算法进行加权计算，得到目标图像的深度值之后，对目标图像的深度值进行滤波处理，得到目标图像滤波处理后的深度值。After the weighted calculation is performed using the preset weighting algorithm according to the depth value of the processed first depth image, the accuracy value corresponding to the depth value of the processed first depth image, the depth value of the processed second depth image, and the accuracy value corresponding to the depth value of the processed second depth image, to obtain the depth value of the target image, the image processor performs filtering processing on the depth value of the target image to obtain the filtered depth value of the target image.
在一种可选的实施例中,缩放子模块具体配置为:In an optional embodiment, the zoom sub-module is specifically configured as follows:
按照预设的目标图像的分辨率,分别对第一深度图像和第二深度图像进行缩放处理,得到缩放后的第一深度图像和缩放后的第二深度图像;Respectively performing scaling processing on the first depth image and the second depth image according to the preset resolution of the target image, to obtain a scaled first depth image and a scaled second depth image;
分别对缩放后的第一深度图像和缩放后的第二深度图像进行滤波处理,得到处理后的第一深度图像和处理后的第二深度图像。Filtering is performed on the scaled first depth image and the scaled second depth image respectively to obtain the processed first depth image and the processed second depth image.
在实际应用中，上述第一获取模块71、加权模块72、缩放子模块和加权子模块可由位于图像处理器上的处理器实现，具体为中央处理器（CPU，Central Processing Unit）、微处理器（MPU，Microprocessor Unit）、数字信号处理器（DSP，Digital Signal Processing）或现场可编程门阵列（FPGA，Field Programmable Gate Array）等实现。In practical applications, the above first acquisition module 71, weighting module 72, scaling sub-module and weighting sub-module may be implemented by a processor located on the image processor, specifically a Central Processing Unit (CPU), a Microprocessor Unit (MPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), or the like.
基于同一发明构思,本申请实施例提供一种TOF模组,图8为本申请实施例提供的一种可选的TOF模组的结构示意图,如图8所示,该TOF模组可以包括:第二获取模块81,第一查找模块82,第一形成模块83和第一发送模块84;其中,Based on the same inventive concept, an embodiment of the application provides a TOF module. FIG. 8 is a schematic structural diagram of an optional TOF module provided by an embodiment of the application. As shown in FIG. 8, the TOF module may include: The second acquiring module 81, the first searching module 82, the first forming module 83 and the first sending module 84; among them,
第二获取模块81,配置为获取目标图像的深度值;The second obtaining module 81 is configured to obtain the depth value of the target image;
第一查找模块82,配置为根据目标图像的深度值,从预设的深度值与准确度值之间的对应关系中,查找到目标图像的深度值对应的准确度值;The first searching module 82 is configured to find the accuracy value corresponding to the depth value of the target image from the preset corresponding relationship between the depth value and the accuracy value according to the depth value of the target image;
第一形成模块83,配置为利用目标图像的深度值和目标图像的深度值对应的准确度值形成第一深度图像;The first forming module 83 is configured to use the depth value of the target image and the accuracy value corresponding to the depth value of the target image to form a first depth image;
第一发送模块84,配置为将第一深度图像发送至图像处理器,以使得图像处理器确定目标图像的深度值。The first sending module 84 is configured to send the first depth image to the image processor, so that the image processor determines the depth value of the target image.
在实际应用中，上述第二获取模块81、第一查找模块82、第一形成模块83和第一发送模块84可由位于TOF模组上的处理器实现，具体为CPU、MPU、DSP或FPGA等实现。In practical applications, the above second acquisition module 81, first search module 82, first forming module 83 and first sending module 84 may be implemented by a processor located on the TOF module, specifically a CPU, MPU, DSP, FPGA, or the like.
基于同一发明构思，本申请实施例提供一种双目视觉模组，图9为本申请实施例提供的一种可选的双目视觉模组的结构示意图，如图9所示，该双目视觉模组可以包括：确定模块91，匹配模块92，第二查找模块93，第二形成模块94和第二发送模块95；其中，Based on the same inventive concept, an embodiment of the present application provides a binocular vision module. FIG. 9 is a schematic structural diagram of an optional binocular vision module provided by an embodiment of the present application. As shown in FIG. 9, the binocular vision module may include: a determining module 91, a matching module 92, a second searching module 93, a second forming module 94 and a second sending module 95, wherein:
确定模块91,配置为获取目标图像的深度值和目标图像的双目视觉图像之后,分别确定双目视觉图像中第一目视觉图像的特征点和双目图像中第二目视觉图像的特征点;The determining module 91 is configured to, after acquiring the depth value of the target image and the binocular vision image of the target image, respectively determine the feature points of the first vision image in the binocular vision image and the feature points of the second vision image in the binocular image ;
匹配模块92,配置为将第一目视觉图像的特征点与第二目视觉图像的特征点进行匹配,得到目标图像的匹配结果;The matching module 92 is configured to match the feature points of the first visual image with the feature points of the second visual image to obtain a matching result of the target image;
第二查找模块93,配置为根据目标图像的匹配结果,从预设的匹配结果与准确度值之间的对应关系中,查找到目标图像的匹配结果对应的准确度值;The second searching module 93 is configured to find the accuracy value corresponding to the matching result of the target image from the preset correspondence between the matching result and the accuracy value according to the matching result of the target image;
第二形成模块94,配置为利用目标图像的深度值和目标图像的匹配结果对应的准确度值形成第二深度图像;The second forming module 94 is configured to use the depth value of the target image and the accuracy value corresponding to the matching result of the target image to form a second depth image;
第二发送模块95,配置为将第二深度图像发送至图像处理器,以使得图像处理器确定目标图像的深度值。The second sending module 95 is configured to send the second depth image to the image processor, so that the image processor determines the depth value of the target image.
在实际应用中，上述确定模块91、匹配模块92、第二查找模块93、第二形成模块94和第二发送模块95可由位于双目视觉模组上的处理器实现，具体为CPU、MPU、DSP或FPGA等实现。In practical applications, the above determining module 91, matching module 92, second searching module 93, second forming module 94 and second sending module 95 may be implemented by a processor located on the binocular vision module, specifically a CPU, MPU, DSP, FPGA, or the like.
图10为本申请实施例提供的另一种可选的图像处理器的结构示意图,如图10所示,本申请实施例提供了一种图像处理器1000,包括:FIG. 10 is a schematic structural diagram of another optional image processor provided by an embodiment of the application. As shown in FIG. 10, an embodiment of the application provides an image processor 1000, including:
处理器101以及存储有所述处理器101可执行指令的存储介质102，所述存储介质102通过通信总线103依赖所述处理器101执行操作，当所述指令被所述处理器101执行时，执行上述图像处理器执行的所述的图像的深度值的确定方法。a processor 101 and a storage medium 102 storing instructions executable by the processor 101, where the storage medium 102 depends on the processor 101 to perform operations through a communication bus 103; when the instructions are executed by the processor 101, the method for determining the depth value of an image executed by the above image processor is performed.
需要说明的是,实际应用时,终端中的各个组件通过通信总线103耦合在一起。可理解,通信总线103用于实现这些组件之间的连接通信。通信总线103除包括数据总线之外,还包括电源总线、控制总线和状态信号总线。但是为了清楚说明起见,在图10中将各种总线都标为通信总线103。It should be noted that in actual applications, various components in the terminal are coupled together through the communication bus 103. It can be understood that the communication bus 103 is used to implement connection and communication between these components. In addition to the data bus, the communication bus 103 also includes a power bus, a control bus, and a status signal bus. However, for the sake of clear description, various buses are marked as the communication bus 103 in FIG. 10.
图11为本申请实施例提供的另一种可选的TOF模组的结构示意图,如图11所示,本申请实施例提供了一种TOF模组1100,包括:FIG. 11 is a schematic structural diagram of another optional TOF module provided by an embodiment of the application. As shown in FIG. 11, an embodiment of the present application provides a TOF module 1100, including:
处理器111以及存储有所述处理器111可执行指令的存储介质112，所述存储介质112通过通信总线113依赖所述处理器111执行操作，当所述指令被所述处理器111执行时，执行上述TOF模组执行的所述的图像的深度值的确定方法。a processor 111 and a storage medium 112 storing instructions executable by the processor 111, where the storage medium 112 depends on the processor 111 to perform operations through a communication bus 113; when the instructions are executed by the processor 111, the method for determining the depth value of an image executed by the above TOF module is performed.
需要说明的是,实际应用时,终端中的各个组件通过通信总线113耦合在一起。可理解,通信总线113用于实现这些组件之间的连接通信。通 信总线113除包括数据总线之外,还包括电源总线、控制总线和状态信号总线。但是为了清楚说明起见,在图11中将各种总线都标为通信总线113。It should be noted that in actual applications, various components in the terminal are coupled together through the communication bus 113. It can be understood that the communication bus 113 is used to implement connection and communication between these components. In addition to the data bus, the communication bus 113 also includes a power bus, a control bus, and a status signal bus. However, for clarity of description, various buses are marked as the communication bus 113 in FIG. 11.
图12为本申请实施例提供的另一种可选的双目视觉模组的结构示意图,如图12所示,本申请实施例提供了一种双目视觉模组1200,包括:FIG. 12 is a schematic structural diagram of another optional binocular vision module provided by an embodiment of the application. As shown in FIG. 12, an embodiment of the present application provides a binocular vision module 1200, including:
处理器121以及存储有所述处理器121可执行指令的存储介质122，所述存储介质122通过通信总线123依赖所述处理器121执行操作，当所述指令被所述处理器121执行时，执行上述双目视觉模组执行的所述的图像的深度值的确定方法。a processor 121 and a storage medium 122 storing instructions executable by the processor 121, where the storage medium 122 depends on the processor 121 to perform operations through a communication bus 123; when the instructions are executed by the processor 121, the method for determining the depth value of an image executed by the above binocular vision module is performed.
需要说明的是,实际应用时,终端中的各个组件通过通信总线123耦合在一起。可理解,通信总线123用于实现这些组件之间的连接通信。通信总线123除包括数据总线之外,还包括电源总线、控制总线和状态信号总线。但是为了清楚说明起见,在图12中将各种总线都标为通信总线123。It should be noted that in actual applications, various components in the terminal are coupled together through the communication bus 123. It can be understood that the communication bus 123 is used to implement connection and communication between these components. In addition to the data bus, the communication bus 123 also includes a power bus, a control bus, and a status signal bus. However, for clarity of description, various buses are marked as the communication bus 123 in FIG. 12.
本申请实施例提供了一种计算机存储介质，存储有可执行指令，当所述可执行指令被一个或多个处理器执行的时候，所述处理器执行上述一个或多个实施例中图像处理器所执行的图像的深度值的确定方法，或者一个或多个实施例中TOF模组所执行的图像的深度值的确定方法，或者一个或多个实施例中双目视觉模组所执行的图像的深度值的确定方法。An embodiment of the present application provides a computer storage medium storing executable instructions; when the executable instructions are executed by one or more processors, the processors perform the method for determining the depth value of an image executed by the image processor in one or more of the above embodiments, or the method for determining the depth value of an image executed by the TOF module in one or more embodiments, or the method for determining the depth value of an image executed by the binocular vision module in one or more embodiments.
可以理解,本申请实施例中的存储器可以是易失性存储器或非易失性存储器,或可包括易失性和非易失性存储器两者。其中,非易失性存储器可以是只读存储器(Read-Only Memory,ROM)、可编程只读存储器(Programmable ROM,PROM)、可擦除可编程只读存储器(Erasable PROM,EPROM)、电可擦除可编程只读存储器(Electrically EPROM,EEPROM)或闪存。易失性存储器可以是随机存取存储器(Random Access Memory,RAM),其用作外部高速缓存。通过示例性但不是限制性说明,许多形式的RAM可用,例如静态随机存取存储器(Static RAM,SRAM)、动态随机存取存储器(Dynamic RAM,DRAM)、同步动态随机存取存储器(Synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(Double Data Rate SDRAM,DDRSDRAM)、增强型同步动态随机存取存储器(Enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(Synchlink DRAM,SLDRAM)和直接内存总线随机存取存储器(Direct Rambus RAM,DRRAM)。本文描述的系统和方法的存储器旨在包括但不限于这些和任意其它适合类型的存储器。It can be understood that the memory in the embodiments of the present application may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory. Among them, the non-volatile memory can be read-only memory (Read-Only Memory, ROM), programmable read-only memory (Programmable ROM, PROM), erasable programmable read-only memory (Erasable PROM, EPROM), and electrically available Erase programmable read-only memory (Electrically EPROM, EEPROM) or flash memory. The volatile memory may be random access memory (Random Access Memory, RAM), which is used as an external cache. By way of exemplary but not restrictive description, many forms of RAM are available, such as static random access memory (Static RAM, SRAM), dynamic random access memory (Dynamic RAM, DRAM), synchronous dynamic random access memory (Synchronous DRAM, SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (Double Data Rate SDRAM, DDRSDRAM), Enhanced Synchronous Dynamic Random Access Memory (Enhanced SDRAM, ESDRAM), Synchronous Link Dynamic Random Access Memory (Synchlink DRAM, SLDRAM) And Direct Rambus RAM (DRRAM). The memories of the systems and methods described herein are intended to include, but are not limited to, these and any other suitable types of memories.
The processor may be an integrated circuit chip with signal processing capability. During implementation, each step of the above method may be completed by an integrated logic circuit of hardware in the processor or by instructions in the form of software. The above processor may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and can implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present application. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the methods disclosed in the embodiments of the present application may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware.
It can be understood that the embodiments described herein may be implemented by hardware, software, firmware, middleware, microcode, or a combination thereof. For a hardware implementation, the processing unit may be implemented in one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units designed to perform the functions described in this application, or a combination thereof.
For a software implementation, the techniques described herein may be implemented by modules (for example, procedures or functions) that perform the functions described herein. The software code may be stored in a memory and executed by a processor. The memory may be implemented inside the processor or external to the processor.
It should be noted that, herein, the terms "comprise", "include", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that includes a series of elements includes not only those elements, but also other elements not explicitly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or apparatus that includes that element.
The serial numbers of the foregoing embodiments of the present application are for description only and do not represent the relative merits of the embodiments.
From the description of the above implementations, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, or of course by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions that cause a terminal (which may be a mobile phone, a computer, a server, a network device, or the like) to execute the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the foregoing specific implementations, which are merely illustrative rather than restrictive. Inspired by the present application, those of ordinary skill in the art may devise many other forms without departing from the purpose of the present application and the scope protected by the claims, all of which fall within the protection of the present application.
The embodiments of the present application provide a method for determining the depth value of an image, an image processor, and modules. For an object in a target image, a first depth image is acquired from a TOF (time-of-flight) module and a second depth image is acquired from a binocular vision module. The first depth image and the second depth image each further include accuracy values corresponding to their respective depth values, where an accuracy value characterizes how accurate the corresponding depth value is. A weighted calculation is then performed with a preset weighting algorithm according to the depth value of the first depth image, the accuracy value corresponding to the depth value of the first depth image, the depth value of the second depth image, and the accuracy value corresponding to the depth value of the second depth image, to obtain the depth value of the target image, which can improve the precision of the depth value of the image.
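The weighted fusion summarized above can be sketched in a few lines. This is a minimal, hypothetical illustration: the publication does not fix a concrete weighting formula, so accuracy-normalized averaging is used here as one plausible instance of the "preset weighting algorithm", and the function name and flat-list depth representation are assumptions made for the example.

```python
def fuse_depth_maps(tof_depth, tof_acc, stereo_depth, stereo_acc):
    """Fuse two depth maps pixel by pixel, weighting each depth value
    by its accompanying accuracy value (higher accuracy -> more weight).

    All four arguments are flat lists of equal length: depth in
    millimetres, accuracy in [0, 1]. The normalized weighted average
    used here is only one plausible choice of weighting algorithm.
    """
    fused = []
    for d1, a1, d2, a2 in zip(tof_depth, tof_acc, stereo_depth, stereo_acc):
        total = a1 + a2
        if total == 0:
            fused.append(0.0)  # neither source trusts this pixel
        else:
            fused.append((d1 * a1 + d2 * a2) / total)
    return fused
```

With this scheme a pixel where the TOF module reports accuracy 0.9 and the binocular module reports 0.1 is dominated by the TOF depth, matching the intent of weighting each depth value by its accuracy.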
Claims (13)
- A method for determining the depth value of an image, comprising: for an object in a target image, acquiring a first depth image from a TOF module, and acquiring a second depth image from a binocular vision module, wherein the first depth image and the second depth image each further include accuracy values corresponding to their respective depth values, the accuracy values being used to characterize the accuracy of the corresponding depth values; and performing a weighted calculation, using a preset weighting algorithm, according to the depth value of the first depth image, the accuracy value corresponding to the depth value of the first depth image, the depth value of the second depth image, and the accuracy value corresponding to the depth value of the second depth image, to obtain the depth value of the target image.
- The method according to claim 1, wherein performing the weighted calculation, using the preset weighting algorithm, according to the depth value of the first depth image, the accuracy value corresponding to the depth value of the first depth image, the depth value of the second depth image, and the accuracy value corresponding to the depth value of the second depth image, to obtain the depth value of the target image, comprises: scaling the first depth image and the second depth image respectively according to a preset resolution of the target image, to obtain a processed first depth image and a processed second depth image; and performing the weighted calculation, using the preset weighting algorithm, according to the depth value of the processed first depth image, the accuracy value corresponding to the depth value of the processed first depth image, the depth value of the processed second depth image, and the accuracy value corresponding to the depth value of the processed second depth image, to obtain the depth value of the target image.
- The method according to claim 2, wherein, after the depth value of the target image is obtained by the weighted calculation on the processed first and second depth images and their corresponding accuracy values, the method further comprises: performing filtering processing on the depth value of the target image, to obtain a filtered depth value of the target image.
- The method according to claim 2 or 3, wherein scaling the first depth image and the second depth image respectively according to the preset resolution of the target image, to obtain the processed first depth image and the processed second depth image, comprises: scaling the first depth image and the second depth image respectively according to the preset resolution of the target image, to obtain a scaled first depth image and a scaled second depth image; and performing filtering processing on the scaled first depth image and the scaled second depth image respectively, to obtain the processed first depth image and the processed second depth image.
- A method for determining the depth value of an image, comprising: acquiring the depth value of a target image; finding, according to the depth value of the target image, the accuracy value corresponding to the depth value of the target image from a preset correspondence between depth values and accuracy values; forming a first depth image using the depth value of the target image and the accuracy value corresponding to the depth value of the target image; and sending the first depth image to an image processor, so that the image processor determines the depth value of the target image.
- A method for determining the depth value of an image, comprising: after acquiring the depth value of a target image and a binocular vision image of the target image, respectively determining feature points of a first-eye visual image in the binocular vision image and feature points of a second-eye visual image in the binocular vision image; matching the feature points of the first-eye visual image with the feature points of the second-eye visual image, to obtain a matching result of the target image; finding, according to the matching result of the target image, the accuracy value corresponding to the matching result of the target image from a preset correspondence between matching results and accuracy values; forming a second depth image using the depth value of the target image and the accuracy value corresponding to the matching result of the target image; and sending the second depth image to an image processor, so that the image processor determines the depth value of the target image.
- An image processor, comprising: a first acquisition module, configured to, for an object in a target image, acquire a first depth image from a TOF module and a second depth image from a binocular vision module, wherein the first depth image and the second depth image each further include accuracy values corresponding to their respective depth values, the accuracy values being used to characterize the accuracy of the corresponding depth values; and a weighting module, configured to perform a weighted calculation, using a preset weighting algorithm, according to the depth value of the first depth image, the accuracy value corresponding to the depth value of the first depth image, the depth value of the second depth image, and the accuracy value corresponding to the depth value of the second depth image, to obtain the depth value of the target image.
- A TOF module, comprising: a second acquisition module, configured to acquire the depth value of a target image; a first search module, configured to find, according to the depth value of the target image, the accuracy value corresponding to the depth value of the target image from a preset correspondence between depth values and accuracy values; a first forming module, configured to form a first depth image using the depth value of the target image and the accuracy value corresponding to the depth value of the target image; and a first sending module, configured to send the first depth image to an image processor, so that the image processor determines the depth value of the target image.
- A binocular vision module, comprising: a determining module, configured to, after acquiring the depth value of a target image and a binocular vision image of the target image, respectively determine feature points of a first-eye visual image in the binocular vision image and feature points of a second-eye visual image in the binocular vision image; a matching module, configured to match the feature points of the first-eye visual image with the feature points of the second-eye visual image, to obtain a matching result of the target image; a second search module, configured to find, according to the matching result of the target image, the accuracy value corresponding to the matching result of the target image from a preset correspondence between matching results and accuracy values; a second forming module, configured to form a second depth image using the depth value of the target image and the accuracy value corresponding to the matching result of the target image; and a second sending module, configured to send the second depth image to an image processor, so that the image processor determines the depth value of the target image.
- An image processor, comprising: a processor and a storage medium storing instructions executable by the processor, the storage medium relying on the processor, through a communication bus, to perform operations, wherein, when the instructions are executed by the processor, the method for determining the depth value of an image according to any one of claims 1 to 4 is performed.
- A TOF module, comprising: a processor and a storage medium storing instructions executable by the processor, the storage medium relying on the processor, through a communication bus, to perform operations, wherein, when the instructions are executed by the processor, the method for determining the depth value of an image according to claim 5 is performed.
- A binocular vision module, comprising: a processor and a storage medium storing instructions executable by the processor, the storage medium relying on the processor, through a communication bus, to perform operations, wherein, when the instructions are executed by the processor, the method for determining the depth value of an image according to claim 6 is performed.
- A computer-readable storage medium storing executable instructions that, when executed by one or more processors, cause the processors to perform the method for determining the depth value of an image according to any one of claims 1 to 4, the method for determining the depth value of an image according to claim 5, or the method for determining the depth value of an image according to claim 6.
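As a hedged illustration of the scaling step in claim 2 — resampling both depth images to the preset target resolution before fusion — a nearest-neighbour resize over a row-major depth buffer might look like the following. The function name and buffer layout are assumptions for the example; a real module would typically use a hardware or library resampler.

```python
def resize_nearest(depth, src_w, src_h, dst_w, dst_h):
    """Resample a row-major depth buffer to (dst_w, dst_h) by
    nearest-neighbour sampling. An accuracy map can be resized the
    same way so each depth value keeps a matching accuracy value."""
    out = []
    for y in range(dst_h):
        sy = y * src_h // dst_h  # nearest source row
        for x in range(dst_w):
            sx = x * src_w // dst_w  # nearest source column
            out.append(depth[sy * src_w + sx])
    return out
```

Nearest-neighbour sampling has the advantage of never inventing depth values that were not measured, which keeps the scaled depths consistent with their looked-up accuracy values.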
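Claims 3 and 4 leave the filtering method open. A sliding-window median, sketched below under that assumption on a one-dimensional slice of depth values, is a common choice for suppressing outlier depths while preserving edges; the function name and window radius are illustrative only.

```python
def median_filter_1d(depth, radius=1):
    """Replace each depth value with the median of a window of
    2 * radius + 1 neighbouring values (clamped at the ends),
    suppressing isolated outliers without blurring depth edges."""
    out = []
    for i in range(len(depth)):
        lo = max(0, i - radius)
        hi = min(len(depth), i + radius + 1)
        window = sorted(depth[lo:hi])
        out.append(window[len(window) // 2])
    return out
```

Applied to a fused depth row such as `[5, 5, 100, 5, 5]`, the single spurious reading is removed while the surrounding depths are untouched.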
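Claims 5 and 6 both rely on a preset correspondence that maps a measurement (a TOF depth value, or a stereo matching result) to an accuracy value. The publication does not specify how that correspondence is stored; one hypothetical encoding is a threshold table scanned from best to worst, as sketched here with made-up names and values.

```python
def lookup_accuracy(score, table):
    """Return the accuracy for `score` from a preset correspondence,
    given as (threshold, accuracy) pairs sorted by descending
    threshold; the first threshold that `score` reaches wins."""
    for threshold, accuracy in table:
        if score >= threshold:
            return accuracy
    return 0.0  # below every threshold: treat the value as unreliable

# Hypothetical table: stronger stereo matches map to higher accuracy.
MATCH_ACCURACY = [(0.9, 1.0), (0.7, 0.8), (0.5, 0.5)]
```

The same function works for the TOF case of claim 5 by using a table keyed on depth ranges instead of matching scores.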
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201980100275.3A CN114365191A (en) | 2019-11-06 | 2019-11-06 | Image depth value determination method, image processor and module |
PCT/CN2019/116021 WO2021087812A1 (en) | 2019-11-06 | 2019-11-06 | Method for determining depth value of image, image processor and module |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2019/116021 WO2021087812A1 (en) | 2019-11-06 | 2019-11-06 | Method for determining depth value of image, image processor and module |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021087812A1 true WO2021087812A1 (en) | 2021-05-14 |
Family
ID=75848143
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/116021 WO2021087812A1 (en) | 2019-11-06 | 2019-11-06 | Method for determining depth value of image, image processor and module |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN114365191A (en) |
WO (1) | WO2021087812A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115393224A (en) * | 2022-09-02 | 2022-11-25 | 点昀技术(南通)有限公司 | Depth image filtering method and device |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130222550A1 (en) * | 2012-02-29 | 2013-08-29 | Samsung Electronics Co., Ltd. | Synthesis system of time-of-flight camera and stereo camera for reliable wide range depth acquisition and method therefor |
CN107025660A (en) * | 2016-02-01 | 2017-08-08 | 北京三星通信技术研究有限公司 | A kind of method and apparatus for determining binocular dynamic visual sensor image parallactic |
CN109213138A (en) * | 2017-07-07 | 2019-01-15 | 北京臻迪科技股份有限公司 | A kind of barrier-avoiding method, apparatus and system |
CN109544616A (en) * | 2018-12-11 | 2019-03-29 | 维沃移动通信有限公司 | A kind of depth information determines method and terminal |
CN110162085A (en) * | 2018-02-13 | 2019-08-23 | 霍尼韦尔国际公司 | Environment self-adaption perception and avoidance system for unmanned vehicle |
CN110335211A (en) * | 2019-06-24 | 2019-10-15 | Oppo广东移动通信有限公司 | Bearing calibration, terminal device and the computer storage medium of depth image |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110376602A (en) * | 2019-07-12 | 2019-10-25 | 深圳奥比中光科技有限公司 | Multi-mode depth calculation processor and 3D rendering equipment |
- 2019
  - 2019-11-06 CN CN201980100275.3A patent/CN114365191A/en active Pending
  - 2019-11-06 WO PCT/CN2019/116021 patent/WO2021087812A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN114365191A (en) | 2022-04-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR102581429B1 (en) | Method and apparatus for detecting obstacle, electronic device, storage medium and program | |
WO2021057474A1 (en) | Method and apparatus for focusing on subject, and electronic device, and storage medium | |
CN110793544B (en) | Method, device and equipment for calibrating parameters of roadside sensing sensor and storage medium | |
WO2021196548A1 (en) | Distance determination method, apparatus and system | |
CN111402170B (en) | Image enhancement method, device, terminal and computer readable storage medium | |
CN110874852A (en) | Method for determining depth image, image processor and storage medium | |
CN111368717B (en) | Line-of-sight determination method, line-of-sight determination device, electronic apparatus, and computer-readable storage medium | |
JP2016029564A (en) | Target detection method and target detector | |
CN105069804B (en) | Threedimensional model scan rebuilding method based on smart mobile phone | |
JP2019510234A (en) | Depth information acquisition method and apparatus, and image acquisition device | |
WO2020119467A1 (en) | High-precision dense depth image generation method and device | |
CN111080784B (en) | Ground three-dimensional reconstruction method and device based on ground image texture | |
CN111160232B (en) | Front face reconstruction method, device and system | |
WO2022213632A1 (en) | Millimeter-wave radar calibration method and apparatus, and electronic device and roadside device | |
CN109520480B (en) | Distance measurement method and distance measurement system based on binocular stereo vision | |
CN110209184A (en) | A kind of unmanned plane barrier-avoiding method based on binocular vision system | |
CN113965742B (en) | Dense disparity map extraction method and system based on multi-sensor fusion and intelligent terminal | |
CN111383264B (en) | Positioning method, positioning device, terminal and computer storage medium | |
CN114494013A (en) | Image splicing method, device, equipment and medium | |
US11741671B2 (en) | Three-dimensional scene recreation using depth fusion | |
WO2021087812A1 (en) | Method for determining depth value of image, image processor and module | |
CN114821280A (en) | Sliding window-based SLAM local real-time relocation method | |
CN117745845A (en) | Method, device, equipment and storage medium for determining external parameter information | |
CN117522803A (en) | Bridge component accurate positioning method based on binocular vision and target detection | |
CN115965961B (en) | Local-global multi-mode fusion method, system, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19952002; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 19952002; Country of ref document: EP; Kind code of ref document: A1 |