CN110874852A - Method for determining depth image, image processor and storage medium - Google Patents
- Publication number
- CN110874852A CN110874852A CN201911075635.7A CN201911075635A CN110874852A CN 110874852 A CN110874852 A CN 110874852A CN 201911075635 A CN201911075635 A CN 201911075635A CN 110874852 A CN110874852 A CN 110874852A
- Authority: CN (China)
- Prior art keywords: depth, information, depth image, image, accuracy
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/593—Image analysis; depth or shape recovery from multiple images; from stereo images
- G06T3/40—Geometric image transformations in the plane of the image; scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods
Abstract
Embodiments of the present application disclose a method for determining a depth image, an image processor, and a storage medium. The method includes: for an object to be photographed, acquiring a first depth image from a structured light module and a second depth image from a binocular vision module, where the first depth image carries first depth information and first accuracy information, the second depth image carries second depth information and second accuracy information, the first accuracy information characterizes the accuracy of the first depth information, and the second accuracy information characterizes the accuracy of the second depth information; and fusing the first depth image and the second depth image according to the first depth information, the first accuracy information, the second depth information, and the second accuracy information to obtain a target depth image of the object to be photographed.
Description
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a depth image determining method, an image processor, and a storage medium.
Background
Three-dimensional (3D) structured light modules have good depth perception for nearby objects but cannot obtain accurate depth information for distant ones. Moreover, when the 3D structured light is affected by strong light, its depth imaging precision degrades considerably.
It can be seen that, in current solutions, the 3D structured light module has drawbacks when imaging depth: it may be affected by strong light and is not suited to distant objects, so the accuracy of the resulting depth image is low.
Disclosure of Invention
Embodiments of the present application provide a method for determining a depth image, an image processor, and a storage medium, which can improve both the accuracy of the depth image and the imaging quality of the target image.
The technical solutions of the embodiments of the present application are implemented as follows:
in a first aspect, an embodiment of the present application provides a method for determining a depth image, where the method includes:
for an object to be photographed, acquiring a first depth image from a structured light module and a second depth image from a binocular vision module, where the first depth image carries first depth information and first accuracy information, the second depth image carries second depth information and second accuracy information, the first accuracy information characterizes the accuracy of the first depth information, and the second accuracy information characterizes the accuracy of the second depth information;
and fusing the first depth image and the second depth image according to the first depth information, the first accuracy information, the second depth information, and the second accuracy information to obtain a target depth image of the object to be photographed.
In a second aspect, embodiments of the present application provide an image processor, which includes a first acquisition unit and a fusion unit, wherein,
the first acquisition unit is configured to acquire, for an object to be photographed, a first depth image from the structured light module and a second depth image from the binocular vision module, where the first depth image carries first depth information and first accuracy information, the second depth image carries second depth information and second accuracy information, the first accuracy information characterizes the accuracy of the first depth information, and the second accuracy information characterizes the accuracy of the second depth information;
and the fusion unit is configured to fuse the first depth image and the second depth image according to the first depth information, the first accuracy information, the second depth information, and the second accuracy information to obtain a target depth image of the object to be photographed.
In a third aspect, an embodiment of the present application provides an image processor, including a first memory and a first processor; wherein,
a first memory for storing a computer program operable on the first processor;
a first processor for performing the method according to the first aspect when running the computer program.
In a fourth aspect, the present application provides a computer storage medium storing a computer program, which when executed by a first processor implements the method according to the first aspect.
According to the method for determining a depth image, the image processor, and the storage medium provided by the embodiments of the present application, for an object to be photographed, a first depth image is acquired from a structured light module and a second depth image is acquired from a binocular vision module; the first depth image carries first depth information and first accuracy information, the second depth image carries second depth information and second accuracy information, the first accuracy information characterizes the accuracy of the first depth information, and the second accuracy information characterizes the accuracy of the second depth information; the first depth image and the second depth image are then fused according to the first depth information, the first accuracy information, the second depth information, and the second accuracy information to obtain a target depth image of the object to be photographed. Because the two depth images are first acquired from the structured light module and the binocular vision module respectively and then fused, the resulting target depth image covers a wider range of distances. Moreover, since the second depth image obtained by the binocular vision module is also taken into account, the loss of precision that occurs when the structured light module is affected by strong light can be avoided. The accuracy of the depth image is thereby improved, which helps capture a more faithful target image and raises the imaging quality of the target image.
Drawings
Fig. 1 is a schematic structural diagram of an image processing system according to an embodiment of the present application;
fig. 2 is a detailed schematic flowchart of a method for determining a depth image according to an embodiment of the present application;
fig. 3 is a schematic view of an application scenario of a method for determining a depth image according to an embodiment of the present application;
fig. 4 is a schematic flowchart of a method for determining a depth image according to an embodiment of the present application;
fig. 5 is a schematic flowchart of another method for determining a depth image according to an embodiment of the present application;
fig. 6 is a schematic flowchart of yet another method for determining a depth image according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an image processor according to an embodiment of the present application;
fig. 8 is a schematic diagram of a specific hardware structure of an image processor according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a structured light module according to an embodiment of the present application;
fig. 10 is a schematic diagram of a specific hardware structure of a structured light module according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of a binocular vision module according to an embodiment of the present application;
fig. 12 is a schematic diagram of a specific hardware structure of a binocular vision module according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. It is to be understood that the specific embodiments described herein merely illustrate the present application and do not limit it. It should also be noted that, for ease of description, only the parts relevant to the present application are shown in the drawings.
A three-dimensional (3D) structured light module includes a coded spot transmitter, a spot receiver, and a spot decoder. The coded spot transmitter continuously transmits coded light spot signals to a target object, the spot receiver receives the spot signals returned from the target object, and the received spot signals are matched against a spot template in the spot decoder and decoded to yield a depth signal of the target object. The 3D structured light module has good depth perception for short-range objects but cannot obtain accurate depth information for objects farther away. Furthermore, when the 3D structured light is disturbed by external noise and ambient light, its depth imaging precision suffers greatly.
The binocular vision module mimics human binocular vision: it matches feature points between the left and right images and uses the parallax produced when the same object is imaged from two viewpoints to obtain the object's depth signal by geometric calculation. In practice, inaccurate matching of object feature points is the biggest obstacle to obtaining high-precision depth values with a binocular vision module.
As can be seen, 3D structured light may be affected by strong light, so the accuracy of its depth image is limited, while the binocular vision module may produce depth calculation errors due to inaccurate feature point matching. In addition, the 3D structured light module is only suitable for short distances, whereas the binocular vision module suits long distances. That is to say, in current solutions, neither the 3D structured light module nor the binocular vision module alone can cover a wide range of distances when imaging depth, and the accuracy of the resulting depth image is low.
To improve the accuracy of the depth image, an embodiment of the present application provides a depth image fusion method: a first depth image and a second depth image are acquired from a structured light module and a binocular vision module respectively, and the two depth images are then fused. The resulting target depth image comprehensively considers the first depth information and first accuracy information from the structured light module together with the second depth information and second accuracy information from the binocular vision module, which not only widens the distance range of the depth image but also improves its accuracy, helping capture a more faithful target image and raising the imaging quality of the target image.
Embodiments of the present application will be described in detail below with reference to the accompanying drawings.
Referring to fig. 1, a schematic structural diagram of an image processing system provided in an embodiment of the present application is shown. As shown in fig. 1, the image processing system includes a structured light module 11, a binocular vision module 12, and an image processor 13, where a communication connection is established between the structured light module 11 and the image processor 13, and between the binocular vision module 12 and the image processor 13.
In the image processing system, the structured light module 11 is typically a 3D structured light module: it continuously transmits a coded light spot signal to the object to be photographed, receives the spot signal returned from the object, matches the received signal against a spot template, and decodes it to obtain depth information of the object. However, the structured light module 11 only perceives depth well for short-range objects; for long-range or very long-range objects it cannot obtain accurate depth values, and the structured light is also easily disturbed by strong external light, which affects the accuracy of the depth image. The binocular vision module 12 matches the feature points of the left and right images and uses the parallax produced by imaging the same object from two viewpoints to obtain, by geometric calculation, the depth information of the object to be photographed; in practice, however, feature point matching is prone to inaccuracy, which affects the accuracy of the depth image.
In this embodiment, the image processor 13 may be configured to collect depth values from the structured light module 11 and the binocular vision module 12, respectively, and process the collected depth values to determine a more accurate depth value of the target depth image. Here, the image processor 13 may also be referred to as an image processing device or an image fusion device, and may be a hardware entity device or a virtual device, and the embodiment of the present application is not particularly limited.
Based on the image processing system shown in fig. 1, referring to fig. 2, a detailed flowchart of a depth image determining method provided by an embodiment of the present application is shown. As shown in fig. 2, the method may include:
S201: the structured light module 11 determines a first depth image of an object to be photographed;
to overcome the drawbacks that the structured light module 11 cannot obtain an accurate depth value for a long-range or very long-range object and that the binocular vision module 12 suffers from inaccurate feature point matching that affects depth accuracy, the structured light module 11 can here be used to determine a first depth image of the object to be photographed, the binocular vision module 12 can be used to determine a second depth image of the same object, and the image processor 13 can then determine a target depth image by combining the two depth images.
In a specific implementation process, to determine the first depth image, S201 may include:
S2011: the structured light module 11 acquires first depth information of the object to be photographed;
specifically, the structured light module 11 continuously sends a coded light spot signal to the object to be photographed, receives the spot signal returned from the object, matches the received signal against the spot template, and decodes it to obtain the first depth information of the object to be photographed.
S2012: the structured light module 11 determines first accuracy information corresponding to the first depth information according to the first depth information;
it should be noted that after the structured light module 11 acquires the first depth information, first accuracy information corresponding to the first depth information may be obtained through calculation according to the first depth information; the first accuracy information corresponding to the first depth information may also be obtained by querying from a pre-stored correspondence between the depth information and the accuracy information, which is not specifically limited in the embodiment of the present application.
In practical applications, the accuracy information characterizes the reliability of the depth image in the corresponding target area; its unit may be at the pixel, block, line, or frame level, which is not specifically limited in the embodiments of the present application. Since the structured light module 11 produces highly accurate depth images for short-range objects, after the first depth information is acquired, a smaller depth value in the first depth information indicates that the object to be photographed is closer to the structured light module 11 and the first accuracy information is therefore higher; a larger depth value indicates that the object is farther from the structured light module 11 and the first accuracy information is therefore lower. In this way, the structured light module 11 can determine, from the acquired first depth information, the first accuracy information corresponding to it.
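As an illustration of this step, below is a minimal sketch of such a depth-to-accuracy mapping; the linear ramp, the near/far cutoff distances, and the function name are assumptions for illustration, not a mapping specified by the application.

```python
import numpy as np

def structured_light_accuracy(depth_mm: np.ndarray,
                              near_mm: float = 300.0,
                              far_mm: float = 4000.0) -> np.ndarray:
    """Map a structured-light depth map to per-pixel accuracy in [0, 1].

    Smaller depth values (closer objects) yield higher accuracy and
    larger depth values (farther objects) yield lower accuracy, matching
    the behavior described for the structured light module 11. The
    linear ramp between near_mm and far_mm is an assumed example.
    """
    ramp = (far_mm - depth_mm) / (far_mm - near_mm)
    return np.clip(ramp, 0.0, 1.0)
```

For the binocular vision module 12, described later, the analogous mapping would run in the opposite direction: larger depth values yield higher accuracy.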
S2013: the structured light module 11 forms a first depth image using the first depth information and the first accuracy information.
The first accuracy information characterizes the accuracy of the first depth information. In this way, the first depth image determined by the structured light module 11 carries not only the first depth information but also the corresponding first accuracy information, where the accuracy information expresses the reliability of the depth information in the corresponding area.
S202: the structured light module 11 sends the first depth image to the image processor 13;
in S202, the structured light module 11 sends the first depth image carrying the first depth information and the first accuracy information to the image processor 13, so that the image processor 13 obtains the first depth information and first accuracy information collected by the structured light module 11, which helps the image processor 13 determine the target depth image more accurately.
S203: the binocular vision module 12 determines a second depth image of the object to be photographed;
in a specific implementation process, to determine the second depth image of the object to be photographed, S203 may include:
S2031: the binocular vision module 12 acquires a first visual image and a second visual image of the object to be photographed, and determines the feature points of the first visual image and the feature points of the second visual image;
specifically, the binocular vision module 12 matches feature points of objects in the left and right images, uses parallax caused by imaging of the two viewpoints on the same object, and obtains a distance between itself and the object to be photographed through geometric calculation, so as to obtain second depth information of the object to be photographed.
After the binocular vision module 12 acquires the first visual image and the second visual image, it needs to determine the feature points of each of the two images. Feature points reflect the essential characteristics of an image, allow the target object in the image to be identified, and let the matching of the images be completed through the matching of the feature points.
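As one possible implementation of this feature-point step (the application does not prescribe a particular detector), the sketch below uses OpenCV's ORB detector with a brute-force matcher:

```python
import cv2

def match_feature_points(left_gray, right_gray, max_matches=200):
    """Detect ORB feature points in the first (left) and second (right)
    visual images and match them, returning matched pixel coordinates."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp_l, des_l = orb.detectAndCompute(left_gray, None)
    kp_r, des_r = orb.detectAndCompute(right_gray, None)

    # Hamming distance suits ORB's binary descriptors; crossCheck keeps
    # only mutually-best matches, reducing the mismatches identified in
    # the text as the main obstacle to accurate binocular depth.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_l, des_r), key=lambda m: m.distance)

    pts_l = [kp_l[m.queryIdx].pt for m in matches[:max_matches]]
    pts_r = [kp_r[m.trainIdx].pt for m in matches[:max_matches]]
    return pts_l, pts_r
```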
S2032: performing parallax matching on the feature points of the first visual image and the feature points of the second visual image to determine a disparity value, and performing depth calculation on the disparity value to obtain second depth information of the object to be photographed;
here, the binocular vision module 12 matches the feature points of the first visual image against those of the second visual image to determine a disparity value, then performs depth calculation on the disparity value with a preset algorithm, which may be a triangulation ranging algorithm, thereby obtaining the second depth information of the object to be photographed.
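A sketch of the disparity-to-depth step for a rectified stereo pair, using the classic triangulation relation Z = f * B / d (the focal length and baseline are placeholder values supplied by the caller):

```python
import numpy as np

def depth_from_disparity(pts_left, pts_right,
                         focal_px: float, baseline_mm: float):
    """Triangulate depth for matched feature points of a rectified pair.

    Z = f * B / d, where f is the focal length in pixels, B the baseline
    between the two cameras in mm, and d = x_left - x_right the disparity
    in pixels.
    """
    xl = np.array([p[0] for p in pts_left])
    xr = np.array([p[0] for p in pts_right])
    disparity = xl - xr
    valid = disparity > 0  # zero/negative disparity cannot be triangulated
    depth_mm = np.full(disparity.shape, np.nan)
    depth_mm[valid] = focal_px * baseline_mm / disparity[valid]
    return depth_mm
```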
S2033: the binocular vision module 12 determines second accuracy information corresponding to the second depth information according to the second depth information;
it should be noted that after the binocular vision module 12 acquires the second depth information, second accuracy information corresponding to the second depth information may be obtained through calculation according to the second depth information; second accuracy information corresponding to the second depth information may also be obtained by querying from a correspondence between the depth information and the accuracy information stored in advance, which is not specifically limited in the embodiment of the present application.
In practical applications, the accuracy information characterizes the reliability of the depth image in the corresponding target area; its unit may be at the pixel, block, line, or frame level, which is not specifically limited in the embodiments of the present application. Since the binocular vision module 12 produces highly accurate depth images for distant objects, after the second depth information is acquired, a larger depth value in the second depth information indicates that the object to be photographed is farther from the binocular vision module 12 and the second accuracy information is therefore higher; a smaller depth value indicates that the object is closer to the binocular vision module 12 and the second accuracy information is therefore lower. In this way, the binocular vision module 12 can determine, from the acquired second depth information, the second accuracy information corresponding to it.
S2034: the binocular vision module 12 forms a second depth image using the second depth information and the second accuracy information.
It should be noted that the second accuracy information characterizes the accuracy of the second depth information. Thus, the second depth image determined by the binocular vision module 12 carries not only the second depth information but also the corresponding second accuracy information, where the accuracy information expresses the reliability of the depth information in the corresponding area.
The first accuracy information and the second accuracy information may have different sizes and different representation units, and the units may be pixel level, block level, line level, frame level, or the like.
S204: the binocular vision module 12 sends the second depth image to the image processor 13;
in S204, the binocular vision module 12 sends the second depth image carrying the second depth information and the second accuracy information to the image processor 13, so that the image processor 13 obtains the second depth information collected by the binocular vision module 12 together with its reliability, which helps the image processor 13 determine the target depth image more accurately.
It should be further noted that the process in which the structured light module 11 determines the first depth image and the process in which the binocular vision module 12 determines the second depth image may be executed in parallel or sequentially; when executed sequentially, either process may come first, which is not specifically limited in the embodiments of the present application.
S205: the image processor 13 performs scaling processing on the first depth image and the second depth image, respectively;
in a specific implementation process, S205 may include:
the image processor 13 scales the first depth image and the second depth image respectively according to a preset resolution to obtain a processed first depth image and a processed second depth image, so that both have the same resolution as the finally output target depth image.
That is to say, when the resolution of the acquired first depth image differs from the preset resolution and the resolution of the acquired second depth image differs from the preset resolution, the first depth image and the second depth image need to be scaled so that both resolutions match the preset resolution, yielding the processed first depth image and the processed second depth image; the processed first depth image carries the processed first depth information and the processed first accuracy information, and the processed second depth image carries the processed second depth information and the processed second accuracy information.
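As an illustration of the scaling step, the sketch below resizes a depth map and its accuracy map to the preset resolution with nearest-neighbor sampling (interpolating across object boundaries would invent depths belonging to neither surface); the function names are assumptions:

```python
import numpy as np

def resize_nearest(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Nearest-neighbor resize, suitable for depth and accuracy maps."""
    in_h, in_w = img.shape
    rows = np.arange(out_h) * in_h // out_h
    cols = np.arange(out_w) * in_w // out_w
    return img[rows[:, None], cols]

def scale_depth_image(depth, accuracy, preset_hw):
    """Scale a depth image and its accuracy map to the preset resolution
    so both match the resolution of the target depth image."""
    h, w = preset_hw
    return resize_nearest(depth, h, w), resize_nearest(accuracy, h, w)
```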
S206: the image processor 13 determines a target depth image of an object to be photographed.
In a specific implementation, S206 may include:
the image processor 13 performs fusion processing on the first depth image and the second depth image according to the processed first depth information, the processed first accuracy information, the processed second depth information, and the processed second accuracy information, so as to obtain a target depth image of the object to be photographed.
It should be noted that fusing the first depth image with the second depth image may specifically be: inputting the processed first depth information, processed first accuracy information, processed second depth information, and processed second accuracy information into a preset calculation model, outputting from it the depth value corresponding to each pixel position in the target depth image, and then obtaining the final target depth image from the depth values at all pixel positions.
It should be further noted that, if the resolution of the acquired first depth image and the resolution of the second depth image both already equal the preset resolution, step S205 may be omitted in this embodiment of the application. In that case, the fusion inputs the first depth information, first accuracy information, second depth information, and second accuracy information directly into the preset calculation model, outputs the depth value corresponding to each pixel position in the target depth image, and then obtains the final target depth image from those depth values.
Here, in order to determine a more accurate depth value, the image processor 13 may input the processed first depth information, the processed first accuracy information, the processed second depth information, and the processed second accuracy information to a preset calculation model, from which a final target depth image may be obtained. The preset calculation model may be a weighted algorithm model, or may be another algorithm model capable of implementing fusion processing, and the embodiment of the present application is not particularly limited.
When the preset calculation model is a weighted algorithm model, the processed first depth information, the processed first accuracy information, the processed second depth information and the processed second accuracy information are input into the preset calculation model, and the depth value corresponding to each pixel position in the target depth image can be obtained through weighted calculation of the preset calculation model.
Specifically, let the row coordinate be i and the column coordinate be j. In the first depth image generated by the structured light module 11, the first depth information is denoted P_3D and the first accuracy information A_3D; in the second depth image generated by the binocular vision module 12, the second depth information is denoted P_BV and the second accuracy information A_BV. The depth value $O_{i,j}$ at row i, column j of the target depth image generated by the image processor 13 is then calculated as follows:
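The formula image for equation (1) does not survive in this text. Writing $P^{3D}_{i,j}$, $A^{3D}_{i,j}$, $P^{BV}_{i,j}$, $A^{BV}_{i,j}$ for the per-pixel values of P_3D, A_3D, P_BV, A_BV, one weighted-average form consistent with the parameter definitions below would be (the exact arrangement of the terms is an assumption):

$$O_{i,j} = \frac{a\,(A^{3D}_{i,j} + K)\,P^{3D}_{i,j} + b\,(A^{BV}_{i,j} + T)\,P^{BV}_{i,j}}{a\,(A^{3D}_{i,j} + K) + b\,(A^{BV}_{i,j} + T)} + F \tag{1}$$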
In formula (1), which serves as the preset calculation model, K is the pixel compensation value for the first accuracy information of the structured light module 11 and T is the pixel compensation value for the second accuracy information of the binocular vision module 12, and both can be set according to actual conditions; F is an overall compensation value that can be configured per device and scene; and a and b are confidence values that can be set to different values for different depth ranges.
In this way, the first depth image generated by the structured light module 11 (including the first depth information and first accuracy information) and the second depth image generated by the binocular vision module 12 (including the second depth information and second accuracy information) are first scaled, for example enlarged, so that both processed depth images have the same resolution as the finally output target depth image, yielding the processed first depth information and first accuracy information and the processed second depth information and second accuracy information. These are then input into the preset calculation model, that is, the weighted calculation of formula (1), to obtain the depth value corresponding to each pixel position in the target depth image, from which the final target depth image is obtained.
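Under the same assumption about the form of equation (1), a runnable sketch of this weighted fusion follows; the default parameter values are placeholders to be configured per device and scene, as the text notes:

```python
import numpy as np

def fuse_depth_weighted(p_3d, a_3d, p_bv, a_bv,
                        K=0.1, T=0.1, F=0.0, a=1.0, b=1.0):
    """Fuse structured-light and binocular depth maps pixel by pixel.

    p_3d, a_3d: depth and accuracy maps from the structured light module.
    p_bv, a_bv: depth and accuracy maps from the binocular vision module.
    K, T: per-module accuracy compensation; F: overall compensation;
    a, b: confidence weights. All maps are HxW arrays already scaled to
    the preset resolution.
    """
    w_3d = a * (a_3d + K)
    w_bv = b * (a_bv + T)
    return (w_3d * p_3d + w_bv * p_bv) / (w_3d + w_bv) + F
```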
In addition, the image processor 13 may also compare the first accuracy information obtained by the structured light module 11 with a preset threshold, and determine the target depth image according to the comparison result.
In some embodiments, after S204, the method may further include:
the image processor 13 obtains the accuracy value corresponding to each pixel position from the first accuracy information, compares the accuracy value at each pixel position with a preset threshold, determines from the comparison result the depth value corresponding to each pixel position in the target depth image, and obtains the target depth image from the determined depth values.
Specifically, for a given pixel position, if its accuracy value is greater than the preset threshold, the image processor 13 takes the depth value at that position in the first depth information as the depth value at that position in the target depth image; if its accuracy value is less than or equal to the preset threshold, the image processor 13 takes the depth value at that position in the second depth information instead. The comparison result at every pixel position thus yields the depth value at every pixel position of the target depth image.
It should be noted that, after the first depth image generated by the structured light module 11 (including the first depth information and first accuracy information) and the second depth image generated by the binocular vision module 12 (including the second depth information and second accuracy information) are obtained, the accuracy information of the structured light module 11 in different areas or at different distances, that is, the accuracy value at each pixel position, can be compared with the preset threshold. If the accuracy value at a pixel position exceeds the threshold, the first depth information obtained by the structured light module is accurate there, and its depth value at that position can serve as the depth value at that position in the target depth image; if the accuracy value is less than or equal to the threshold, the first depth information is inaccurate there, and the depth value at that position in the second depth information obtained by the binocular vision module is used instead. In this way, the final target depth image can be obtained without complicated fusion processing of the two depth images (such as weighting calculation with a preset calculation model).
Specifically, denote by $C_{i,j,z}$ the accuracy information of the structured light module 11 in different areas or at different distances, where i is the row coordinate, j the column coordinate, and z the distance; in the first depth image generated by the structured light module 11 the first depth information is denoted P_3D, and in the second depth image generated by the binocular vision module 12 the second depth information is denoted P_BV. The final target depth image can then be obtained directly: the depth value $O_{i,j}$ at row i, column j of the target depth image generated by the image processor 13 is calculated as follows:

$$O_{i,j} = \begin{cases} P^{3D}_{i,j}, & C_{i,j,z} > thr \\ P^{BV}_{i,j}, & C_{i,j,z} \le thr \end{cases} \tag{2}$$
here, thr is a preset threshold, which may be set according to actual conditions, and the embodiment of the present application is not particularly limited.
In this way, after the first depth image generated by the structured light module 11 (including the first depth information and first accuracy information) and the second depth image generated by the binocular vision module 12 (including the second depth information and second accuracy information) are acquired, it suffices to compare the accuracy value at each pixel position in the first accuracy information with the preset threshold, without complicated fusion processing of the two depth images (such as weighting calculation with a preset calculation model). If the accuracy value at a pixel position is greater than the preset threshold, the depth value at that position in the first depth information is used as the depth value at that position in the target depth image; if it is less than or equal to the preset threshold, the depth value at that position in the second depth information is used instead, as shown in formula (2). The depth value at each pixel position of the target depth image is thereby obtained, and the final target depth image follows from those depth values.
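This per-pixel selection needs no weighted model; a minimal sketch with NumPy, where thr is the preset threshold of formula (2):

```python
import numpy as np

def fuse_depth_threshold(p_3d, c_3d, p_bv, thr=0.5):
    """Per-pixel selection following formula (2): where the structured
    light accuracy value exceeds the preset threshold keep the structured
    light depth, otherwise fall back to the binocular depth."""
    return np.where(c_3d > thr, p_3d, p_bv)
```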
Furthermore, since the structured light module suits short distances and yields higher-precision depth images there, after the distance between the object to be photographed and the structured light module is obtained, the measured distance value can be compared with a preset distance threshold. If the distance value is smaller than the preset distance threshold, the first depth information obtained by the structured light module is accurate, and its depth value at a given pixel position can serve as the depth value at that position in the target depth image; if the distance value is greater than or equal to the preset distance threshold, the first depth information is inaccurate, and the depth value at that position in the second depth information obtained by the binocular vision module is used instead. In this way, too, the final target depth image can be obtained without complicated fusion processing of the two depth images (such as weighting calculation with a preset calculation model).
The method for determining a depth image described in one or more embodiments above will be described below by way of example.
Example 1: first, the image processor scales the first depth image and the second depth image obtained from the structured light module and the binocular vision module so that they have the same resolution as the finally output target depth image, generating a new first depth image and a new second depth image;
secondly, let the row and column coordinates be i and j. In the new first depth image generated by the structured light module 11, the first depth information is denoted P_3D and the first accuracy information A_3D; in the new second depth image generated by the binocular vision module 12, the second depth information is denoted P_BV and the second accuracy information A_BV. The image processor can then use the preset calculation model (formula (1)) to calculate the depth value $O_{i,j}$ at row i, column j of the target depth image and thereby obtain the final target depth image.
Example 2: after the image processor obtains the first depth image and the second depth image from the structured light module and the binocular vision module, it compares the accuracy information of the structured light module 11 in different areas or at different distances, denoted $C_{i,j,z}$, that is, the accuracy value at each pixel position, with a preset threshold thr. The image processor can then use the preset comparison model (formula (2) above) to obtain the depth value $O_{i,j}$ at row i, column j of the target depth image and thereby the final target depth image.
Example 3: fig. 3 is a schematic view of an application scenario of a method for determining a depth image according to an embodiment of the present application. As shown in fig. 3, object A is a close-range object and object B is a distant object. The depth of object A is determined jointly from the structured light module and the binocular vision module, and the depth of object B in the binocular vision module is corrected from B1 to B2 according to the structured light module. When the final depth image is synthesized, close-range objects use the depth information of the structured light module and distant objects use the depth information of the binocular vision module.
In the application scenario shown in fig. 3, for virtual depth-of-field photographing, the effective distance of the structured light module is limited and it cannot acquire the depth information of objects at long or very long range. The embodiments of the present application compensate for this by adopting the binocular vision module to judge the depth information of long-range or very long-range objects; the finally synthesized target depth image is more accurate and richer in depth detail, and can be used for depth-based bokeh photographing.
As the above examples show, fusing (which may also be called jointly processing) the first depth image and the second depth image obtained from the structured light module and the binocular vision module not only avoids the respective shortcomings of the two techniques but also yields depth images over a wider range of distances: the depth image of a close object can be synthesized from both modules, while at very long range the depth image of the binocular vision module can be used, so a more accurate target depth image is obtained.
For example, when a portrait is shot under strong light, the structured light module and the binocular vision module can cooperate: for a close-range portrait, the low precision of close-range depth images caused by strong light is effectively avoided, and for a distant portrait, the binocular vision module supplies more depth information.
As another example, when the structured light module shoots an object within its effective distance range, an accurate low-resolution depth image is obtained from the structured light module, and the binocular vision module is then used to expand the depth detail on that basis, achieving a higher-resolution depth image.
In the embodiments of the present application, the structured light technique and the binocular vision technique are processed jointly, and a more accurate target depth image can be obtained from the depth information and corresponding accuracy information of the two depth images through means such as weighting. Exploiting the respective advantages of the two techniques, higher-precision depth information can be obtained at short and medium range while depth information at long and very long range can still be acquired, finally realizing the perception and acquisition of depth information at any viewing distance.
This embodiment provides a method for determining a depth image: for an object to be photographed, a first depth image is acquired from the structured light module and a second depth image from the binocular vision module, where the first depth image carries first depth information and first accuracy information, the second depth image carries second depth information and second accuracy information, the first accuracy information characterizes the accuracy of the first depth information, and the second accuracy information characterizes the accuracy of the second depth information; the two depth images are then fused according to the first depth information, the first accuracy information, the second depth information, and the second accuracy information to obtain a target depth image of the object to be photographed. In this way, the distance range of the depth image is widened and depth images over more distances can be obtained; moreover, because the second depth image obtained by the binocular vision module is also considered, the low precision that affects the depth image when the structured light module is disturbed by strong light can be avoided. The accuracy of the depth image is thereby improved, which helps capture a more faithful target image, raises the imaging quality of the target image, and improves the user experience.
The method for determining a depth image is described below from the side of each device in the image processing system.
First, the method is described from the image processor side.
Referring to fig. 4, a flowchart of a depth image determining method provided in an embodiment of the present application is shown. As shown in fig. 4, the method may include:
S401: for an object to be photographed, acquiring a first depth image from the structured light module and a second depth image from the binocular vision module, where the first depth image carries first depth information and first accuracy information, and the second depth image carries second depth information and second accuracy information;
the first accuracy information characterizes the accuracy of the first depth information, and the second accuracy information characterizes the accuracy of the second depth information.
It should be noted that the first depth image is obtained from the structured light module; that is, the first depth image formed from the first depth information and the first accuracy information can be obtained in the structured light module. Thus, in some embodiments, for S401, acquiring the first depth image from the structured light module for the object to be photographed may include:
determining first depth information of the object to be photographed and first accuracy information corresponding to the first depth information through the structured light module;
and after the structured light module forms a first depth image by using the first depth information and the first accuracy information, receiving the first depth image sent by the structured light module.
Here, the first depth information of the object to be photographed and the first accuracy information corresponding to it can be determined through the structured light module; the specific determination process is detailed in the description of the structured light module below. In this way, after the first depth information and the first accuracy information are obtained, a first depth image can be formed by the structured light module, and the image processor can then receive the first depth image sent by the structured light module.
It should be further noted that the second depth image is obtained from the binocular vision module; that is, the second depth image formed from the second depth information and the second accuracy information can be obtained in the binocular vision module. Therefore, in some embodiments, for S401, acquiring the second depth image from the binocular vision module for the object to be photographed includes:
determining second depth information of the object to be photographed and second accuracy information corresponding to the second depth information through the binocular vision module;
and after the binocular vision module forms a second depth image by using the second depth information and the second accuracy information, receiving the second depth image sent by the binocular vision module.
Here, the second depth information of the object to be photographed and the second accuracy information corresponding to it can be determined through the binocular vision module; the specific determination process is detailed in the description of the binocular vision module below. In this way, after the second depth information and the second accuracy information are obtained, a second depth image can be formed by the binocular vision module, and the image processor can then receive the second depth image sent by the binocular vision module.
S402: fusing the first depth image and the second depth image according to the first depth information, the first accuracy information, the second depth information, and the second accuracy information to obtain a target depth image of the object to be photographed.
In some embodiments, S402 may include:
inputting the first depth information, the first accuracy information, the second depth information and the second accuracy information into a preset calculation model, and outputting a depth value corresponding to each pixel position in a target depth image according to the preset calculation model;
and obtaining the target depth image according to the depth value corresponding to each pixel position.
It should be noted that the preset calculation model may be a weighted algorithm model or other algorithm models capable of implementing fusion processing, and the embodiment of the present application is not particularly limited.
It should be further noted that the image processor may fuse the first depth image and the second depth image specifically by inputting the first depth information, the first accuracy information, the second depth information, and the second accuracy information into a preset calculation model (as shown in formula (1)), obtaining the depth value (denoted $O_{i,j}$) corresponding to each pixel position in the target depth image, and then obtaining the final target depth image from the depth value at each pixel position.
Further, if the resolutions of the first depth image and the second depth image acquired by the image processor differ from the preset resolution, the two depth images also need to be scaled so that both resolutions match the preset resolution. Thus, in some embodiments, after S401, the method may further comprise:
scaling the first depth image and the second depth image respectively according to a preset resolution to obtain a processed first depth image and a processed second depth image, where the processed first depth image carries processed first depth information and processed first accuracy information, and the processed second depth image carries processed second depth information and processed second accuracy information;
accordingly, S402 may include:
inputting the processed first depth information, the processed first accuracy information, the processed second depth information and the processed second accuracy information into a preset calculation model, and outputting a depth value corresponding to each pixel position in a target depth image according to the preset calculation model;
and obtaining the target depth image according to the depth value corresponding to each pixel position.
It should be noted that, when the resolution of the acquired first depth image differs from the preset resolution and the resolution of the second depth image differs from the preset resolution, the first depth image and the second depth image need to be scaled so that both resolutions match the preset resolution, yielding the processed first depth image and the processed second depth image; the processed first depth image carries the processed first depth information and processed first accuracy information, and the processed second depth image carries the processed second depth information and processed second accuracy information. The processed first depth information, processed first accuracy information, processed second depth information, and processed second accuracy information are then input into the preset calculation model, and the final target depth image is obtained from it.
Further, the image processor may compare the obtained first accuracy information with a preset threshold, and determine the target depth image according to the comparison result. Thus, in some embodiments, after S401, the method may further comprise:
acquiring an accuracy value corresponding to each pixel position from the first accuracy information;
comparing the accuracy value corresponding to each pixel position with a preset threshold value;
according to the comparison result, determining the depth value corresponding to each pixel position in the target depth image;
and obtaining the target depth image according to the depth value corresponding to each determined pixel position.
Further, the determining a depth value corresponding to each pixel position in the target depth image according to the comparison result may include:
for a certain pixel position in each pixel position, if the accuracy value corresponding to the certain pixel position is greater than a preset threshold, determining that the depth value corresponding to the certain pixel position in the target depth image is the depth value corresponding to the certain pixel position in the first depth information;
if the accuracy value corresponding to the certain pixel position is smaller than or equal to a preset threshold, determining that the depth value corresponding to the certain pixel position in the target depth image is the depth value corresponding to the certain pixel position in the second depth information;
and obtaining the depth value corresponding to each pixel position in the target depth image according to the comparison result of each pixel position.
It should be noted that, after the first depth image (including the first depth information and the first accuracy information) is acquired from the structured light module and the second depth image (including the second depth information and the second accuracy information) is acquired from the binocular vision module, the accuracy information of the structured light module at different areas or different distances, that is, the accuracy value corresponding to each pixel position, may be compared with a preset threshold. If the accuracy value corresponding to a pixel position is greater than the preset threshold, the depth value corresponding to that pixel position in the first depth information may be used as the depth value corresponding to that pixel position in the target depth image; if the accuracy value corresponding to the pixel position is less than or equal to the preset threshold, the depth value corresponding to that pixel position in the second depth information may be used instead. In this way, the final target depth image can be obtained without performing fusion processing on the first depth image and the second depth image.
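This per-pixel selection is a simple mask operation; a sketch in Python follows. The names are illustrative, and the threshold value itself is device-dependent and not specified by the embodiment.

```python
import numpy as np

def select_depth_by_threshold(d1, c1, d2, threshold):
    """Per-pixel selection instead of fusion: take the structured light
    depth wherever its accuracy exceeds the preset threshold, and the
    binocular vision depth everywhere else.

    d1, c1: depth map and accuracy map from the structured light module
    d2:     depth map from the binocular vision module
    """
    return np.where(np.asarray(c1) > threshold, d1, d2)
```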
Next, the method for determining the depth image will be described with reference to the structured light module.
Referring to fig. 5, a schematic flow chart of another depth image determination method provided in the embodiment of the present application is shown. As shown in fig. 5, the method may include:
S501: acquiring first depth information of an object to be shot;
S502: determining first accuracy information corresponding to the first depth information according to the first depth information;
S503: forming a first depth image using the first depth information and the first accuracy information;
S504: sending the first depth image to an image processor, so that the image processor determines a target depth image of the object to be shot.
It should be noted that the structured light module continuously sends a coded light spot signal to the object to be photographed, receives the light spot signal returned from the object, matches the received signal against the template of the light spots, and decodes it to obtain the first depth information of the object to be photographed.
In this way, after the structured light module acquires the first depth information, the first accuracy information corresponding to the first depth information can be obtained through calculation according to the first depth information; the first accuracy information may also be obtained by querying a pre-stored correspondence between depth information and accuracy information, which is not specifically limited in the embodiments of the present application. After the first depth information and the first accuracy information are obtained, the structured light module can send the first depth image carrying the first depth information and the first accuracy information to the image processor, so that the image processor can acquire the first depth information collected by the structured light module together with the credibility of that information, which enables the image processor to determine the target depth image more accurately.
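Where the pre-stored correspondence is used, the query reduces to a table lookup. The sketch below assumes a piecewise-constant table keyed by depth break points; the break points and accuracy values shown are purely illustrative, as the embodiment publishes no concrete table.

```python
import bisect

def lookup_accuracy(depth_mm, table):
    """Query a pre-stored depth-to-accuracy correspondence.

    table: list of (max_depth_mm, accuracy) break points sorted by
    depth, e.g. [(1000, 0.95), (3000, 0.80), (6000, 0.40)] -- these
    values are illustrative only.
    Depths beyond the last break point receive the last accuracy value.
    """
    keys = [k for k, _ in table]
    i = min(bisect.bisect_left(keys, depth_mm), len(table) - 1)
    return table[i][1]
```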
Thirdly, the method for determining the depth image will be described with reference to the binocular vision module.
Referring to fig. 6, a flowchart of a further depth image determination method provided in an embodiment of the present application is shown. As shown in fig. 6, the method may include:
S601: acquiring a first visual image and a second visual image of an object to be shot, and determining feature points of the first visual image and feature points of the second visual image;
S602: performing parallax matching on the feature points of the first visual image and the feature points of the second visual image to determine a parallax value, and performing depth calculation according to the parallax value to obtain second depth information of the object to be shot;
S603: determining second accuracy information corresponding to the second depth information according to the second depth information;
S604: forming a second depth image using the second depth information and the second accuracy information;
S605: sending the second depth image to an image processor, so that the image processor determines a target depth image of the object to be shot.
It should be noted that, after the binocular vision module acquires the first visual image and the second visual image, it matches the feature points in the two images, exploits the disparity produced when the same object is imaged from the two viewpoints, and calculates the distance between the binocular vision module and the object to be photographed with a preset algorithm, thereby obtaining the second depth information of the object to be photographed.
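The embodiment does not fix the preset algorithm. For a rectified stereo pair, the classical triangulation relation Z = f·B/d is one standard choice, sketched below; the function and parameter names are assumptions of this sketch.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Classical stereo triangulation: Z = f * B / d.

    disparity_px: per-pixel disparity map, in pixels
    focal_px:     focal length of the rectified cameras, in pixels
    baseline_m:   distance between the two camera centres, in metres
    Pixels with non-positive disparity (no match) are given depth 0.
    """
    disparity = np.asarray(disparity_px, dtype=np.float64)
    depth = np.zeros_like(disparity)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth
```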
In this way, after the binocular vision module acquires the second depth information, the second accuracy information corresponding to the second depth information can be obtained through calculation according to the second depth information; the second accuracy information may also be obtained by querying a pre-stored correspondence between depth information and accuracy information, which is not specifically limited in the embodiments of the present application. After the second depth information and the second accuracy information are obtained, the binocular vision module can send the second depth image carrying the second depth information and the second accuracy information to the image processor, so that the image processor can acquire the second depth information collected by the binocular vision module together with the credibility of that information, which enables the image processor to determine the target depth image more accurately.
Through the above embodiments, the specific implementation of the foregoing embodiments is explained in detail. It can be seen that, according to the technical solution of the foregoing embodiments, a structured light technology and a binocular vision technology are combined: first, a first depth image (including first depth information and first accuracy information) is obtained through a structured light module, and a second depth image (including second depth information and second accuracy information) is obtained through a binocular vision module; then the two depth images are sent to an image processor, which performs fusion processing on them. The resulting target depth image exploits the respective advantages of the structured light technology and the binocular vision technology, widening the distance range of the depth image and improving its accuracy. This is beneficial to capturing a more realistic target image, thereby improving the imaging quality of the target image and the experience of the user.
Based on the same inventive concept of the foregoing embodiment, refer to fig. 7, which shows a schematic structural diagram of an image processor according to an embodiment of the present application. As shown in fig. 7, the image processor 70 may include: a first acquisition unit 701 and a fusion unit 702, wherein,
a first obtaining unit 701 configured to obtain, for an object to be photographed, a first depth image from the structured light module and a second depth image from the binocular vision module; the first depth image carries first depth information and first accuracy information, the second depth image carries second depth information and second accuracy information, the first accuracy information is used for representing the accuracy of the first depth information, and the second accuracy information is used for representing the accuracy of the second depth information;
a fusion unit 702, configured to perform fusion processing on the first depth image and the second depth image according to the first depth information, the first accuracy information, the second depth information, and the second accuracy information, so as to obtain a target depth image of the object to be photographed.
In the foregoing solution, the fusion unit 702 is specifically configured to input the first depth information, the first accuracy information, the second depth information, and the second accuracy information into a preset calculation model, and output a depth value corresponding to each pixel position in the target depth image according to the preset calculation model;
the first obtaining unit 701 is further configured to obtain the target depth image according to the depth value corresponding to each pixel position.
In the foregoing scheme, referring to fig. 7, the image processor 70 may further include a scaling unit 703 configured to perform scaling processing on the first depth image and the second depth image according to a preset resolution, respectively, to obtain a processed first depth image and a processed second depth image; the processed first depth image carries processed first depth information and processed first accuracy information, and the processed second depth image carries processed second depth information and processed second accuracy information;
a fusion unit 702, further configured to input the processed first depth information, the processed first accuracy information, the processed second depth information, and the processed second accuracy information into a preset calculation model, and output a depth value corresponding to each pixel position in the target depth image according to the preset calculation model;
the first obtaining unit 701 is further configured to obtain the target depth image according to the depth value corresponding to each pixel position.
In the above scheme, referring to fig. 7, the image processor 70 may further include a comparison unit 704, wherein,
a first obtaining unit 701, further configured to obtain, from the first accuracy information, an accuracy value corresponding to each pixel position;
a comparing unit 704 configured to compare the accuracy value corresponding to each pixel position with a preset threshold; determining a depth value corresponding to each pixel position in the target depth image according to the comparison result;
the first obtaining unit 701 is further configured to obtain the target depth image according to the determined depth value corresponding to each pixel position.
In the foregoing solution, the comparing unit 704 is specifically configured to, for a certain pixel position in each pixel position, determine that a depth value corresponding to the certain pixel position in the target depth image is a depth value corresponding to the certain pixel position in the first depth information if the accuracy value corresponding to the certain pixel position is greater than a preset threshold; if the accuracy value corresponding to the certain pixel position is smaller than or equal to a preset threshold value, determining that the depth value corresponding to the certain pixel position in the target depth image is the depth value corresponding to the certain pixel position in the second depth information;
the first obtaining unit 701 is further configured to obtain a depth value corresponding to each pixel position in the target depth image according to the comparison result of each pixel position.
In the foregoing solution, the first obtaining unit 701 is specifically configured to determine, by the structured light module, first depth information of the object to be photographed and first accuracy information corresponding to the first depth information; and after the structured light module forms a first depth image by using the first depth information and the first accuracy information, receiving the first depth image sent by the structured light module.
In the above scheme, the first obtaining unit 701 is specifically configured to determine, through the binocular vision module, second depth information of the object to be photographed and second accuracy information corresponding to the second depth information; and after the binocular vision module forms a second depth image by using the second depth information and the second accuracy information, receiving the second depth image sent by the binocular vision module.
It is understood that in the embodiments of the present application, a "unit" may be a part of a circuit, a part of a processor, a part of a program or software, and the like, and may also be a module, and may also be non-modular. Moreover, each component in the embodiment may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware or a form of a software functional module.
Based on such understanding, the technical solution of this embodiment, in essence, or the part thereof contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the method of this embodiment. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disc.
Accordingly, embodiments of the present application provide a computer storage medium storing a computer program, which when executed by a first processor implements the method of any one of the preceding embodiments.
Based on the composition of the image processor 70 and the computer storage medium, referring to fig. 8, a specific hardware structure of the image processor 70 provided by the embodiment of the present application is shown, which may include: a first communication interface 801, a first memory 802, and a first processor 803; the various components are coupled together by a first bus system 804. It is understood that the first bus system 804 is used to enable connection communications between these components. The first bus system 804 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as the first bus system 804 in fig. 8.
Wherein,
a first communication interface 801, which is used for receiving and sending signals during the process of sending and receiving information with other external network elements (including a structured light module and a binocular vision module);
a first memory 802 for storing a computer program capable of running on the first processor 803;
a first processor 803, configured to execute, when running the computer program:
aiming at an object to be shot, acquiring a first depth image from the structured light module and acquiring a second depth image from the binocular vision module; the first depth image carries first depth information and first accuracy information, the second depth image carries second depth information and second accuracy information, the first accuracy information is used for representing the accuracy of the first depth information, and the second accuracy information is used for representing the accuracy of the second depth information;
and according to the first depth information, the first accuracy information, the second depth information and the second accuracy information, carrying out fusion processing on the first depth image and the second depth image to obtain a target depth image of the object to be shot.
It will be appreciated that the first memory 802 described herein may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), which acts as an external cache. By way of example and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), SyncLink DRAM (SLDRAM), and Direct Rambus RAM (DRRAM). The first memory 802 of the systems and methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
The first processor 803 may be an integrated circuit chip having signal processing capability. In implementation, the steps of the above method may be performed by integrated logic circuits in hardware or by instructions in the form of software in the first processor 803. The first processor 803 may be a general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the methods disclosed in connection with the embodiments of the present application may be directly executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as a RAM, a flash memory, a ROM, a PROM, an EEPROM, or a register. The storage medium is located in the first memory 802, and the first processor 803 reads the information in the first memory 802 and completes the steps of the above method in combination with its hardware.
It is to be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or any combination thereof. For a hardware implementation, the processing units may be implemented within one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), general purpose processors, controllers, micro-controllers, microprocessors, other electronic units configured to perform the functions described herein, or a combination thereof. For a software implementation, the techniques described herein may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. The software codes may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
Optionally, as another embodiment, the first processor 803 is further configured to execute the method of any one of the previous embodiments when running the computer program.
The present embodiment provides an image processor, which may include a first acquisition unit and a fusion unit. A first depth image (including first depth information and first accuracy information) is obtained through a structured light module, a second depth image (including second depth information and second accuracy information) is obtained through a binocular vision module, the two depth images are sent to the image processor, and the image processor fuses the first depth image and the second depth image to obtain the target depth image of the object to be shot.
Based on the same inventive concept of the foregoing embodiments, refer to fig. 9, which shows a schematic structural diagram of a structured light module according to an embodiment of the present application. As shown in fig. 9, the structured light module 90 may include: a second acquisition unit 901, a first determination unit 902, a first formation unit 903, and a first transmission unit 904, wherein,
a second acquisition unit 901 configured to acquire first depth information of an object to be photographed;
a first determining unit 902 configured to determine, according to the first depth information, first accuracy information corresponding to the first depth information;
a first forming unit 903 configured to form a first depth image using the first depth information and the first accuracy information;
a first sending unit 904 configured to send the first depth image to an image processor so that the image processor determines a target depth image of the object to be photographed.
It is understood that in this embodiment, a "unit" may be a part of a circuit, a part of a processor, a part of a program or software, etc., and may also be a module, or may also be non-modular. Moreover, each component in the embodiment may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware or a form of a software functional module.
The integrated unit, if implemented in the form of a software functional module and not sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such an understanding, the present embodiment provides a computer storage medium storing a computer program which, when executed by a second processor, implements the method of any of the preceding embodiments.
Based on the composition of the structured light module 90 and the computer storage medium, referring to fig. 10, a specific hardware structure of the structured light module 90 provided in the embodiment of the present application is shown, which may include: a second communication interface 1001, a second memory 1002, and a second processor 1003; the various components are coupled together by a second bus system 1004. It is understood that the second bus system 1004 is used to enable connection communications between these components. The second bus system 1004 includes a power bus, a control bus, and a status signal bus in addition to the data bus. But for clarity of illustration the various buses are labeled as the second bus system 1004 in figure 10. Wherein,
a second communication interface 1001 for receiving and transmitting signals during information transmission and reception with the image processor;
a second memory 1002 for storing a computer program capable of running on the second processor 1003;
a second processor 1003 configured to, when running the computer program, perform:
acquiring first depth information of an object to be shot;
determining first accuracy information corresponding to the first depth information according to the first depth information;
forming a first depth image using the first depth information and the first accuracy information;
sending the first depth image to an image processor so that the image processor determines a target depth image of the object to be photographed.
It is to be understood that the second memory 1002 is similar in hardware functionality to the first memory 802, and the second processor 1003 is similar in hardware functionality to the first processor 803; and will not be described in detail herein.
The present embodiment provides a structured light module, which may include a second acquisition unit, a first determination unit, a first formation unit, and a first transmission unit. After the structured light module obtains the first depth information and the first accuracy information, it can send the first depth image carrying the first depth information and the first accuracy information to the image processor, so that the image processor can acquire the first depth information collected by the structured light module together with the credibility of that information, which enables the image processor to determine the target depth image more accurately.
Based on the same inventive concept of the foregoing embodiment, refer to fig. 11, which shows a schematic structural diagram of a binocular vision module provided in an embodiment of the present application. As shown in fig. 11, the binocular vision module 110 may include: a third acquisition unit 1101, a second determination unit 1102, a matching unit 1103, a second formation unit 1104, and a second transmission unit 1105, wherein,
a third acquisition unit 1101 configured to acquire a first visual image and a second visual image of an object to be photographed;
a second determination unit 1102 configured to determine feature points of the first visual image and feature points of the second visual image;
a matching unit 1103 configured to perform parallax matching on the feature points of the first visual image and the feature points of the second visual image, and determine a parallax value; performing depth calculation according to the parallax value to obtain second depth information of the object to be shot;
a second determining unit 1102 further configured to determine second accuracy information corresponding to the second depth information according to the second depth information;
a second forming unit 1104 configured to form a second depth image using the second depth information and the second accuracy information;
a second sending unit 1105 configured to send the second depth image to an image processor so that the image processor determines a target depth image of the object to be photographed.
It is understood that in this embodiment, a "unit" may be a part of a circuit, a part of a processor, a part of a program or software, etc., and may also be a module, or may also be non-modular. Moreover, each component in the embodiment may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware or a form of a software functional module.
The integrated unit, if implemented in the form of a software functional module and not sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such an understanding, the present embodiment provides a computer storage medium storing a computer program which, when executed by a third processor, implements the method of any of the preceding embodiments.
Based on the above-mentioned composition of the binocular vision module 110 and the computer storage medium, referring to fig. 12, a specific hardware structure of the binocular vision module 110 provided in the embodiment of the present application is shown, which may include: a third communication interface 1201, a third memory 1202, and a third processor 1203; the various components are coupled together by a third bus system 1204. It is understood that the third bus system 1204 is used to enable connective communication between these components. The third bus system 1204 includes a power bus, a control bus, and a status signal bus, in addition to the data bus. For clarity of illustration, however, the various buses are labeled as the third bus system 1204 in fig. 12. Wherein,
a third communication interface 1201 for receiving and transmitting signals during information transmission and reception with the image processor;
a third memory 1202 for storing a computer program operable on the third processor 1203;
a third processor 1203, configured to, when executing the computer program, perform:
acquiring a first visual image and a second visual image of an object to be shot, and determining a characteristic point of the first visual image and a characteristic point of the second visual image;
carrying out parallax matching on the characteristic points of the first visual image and the characteristic points of the second visual image to determine a parallax value; performing depth calculation according to the parallax value to obtain second depth information of the object to be shot;
determining second accuracy information corresponding to the second depth information according to the second depth information;
forming a second depth image using the second depth information and the second accuracy information;
sending the second depth image to an image processor so that the image processor determines a target depth image of the object to be photographed.
It is to be understood that the third memory 1202 is similar in hardware functionality to the first memory 802, and the third processor 1203 is similar in hardware functionality to the first processor 803; and will not be described in detail herein.
The present embodiment provides a binocular vision module, which may include a third acquisition unit, a second determination unit, a matching unit, a second formation unit, and a second transmission unit. After the binocular vision module obtains the second depth information and the second accuracy information, it can send the second depth image carrying the second depth information and the second accuracy information to the image processor, so that the image processor can acquire the second depth information collected by the binocular vision module together with the credibility of that information, which enables the image processor to determine the target depth image more accurately.
It should be noted that, in the present application, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
The methods disclosed in the several method embodiments provided in the present application may be combined arbitrarily without conflict to obtain new method embodiments.
Features disclosed in several of the product embodiments provided in the present application may be combined in any combination to yield new product embodiments without conflict.
The features disclosed in the several method or apparatus embodiments provided in the present application may be combined arbitrarily, without conflict, to arrive at new method embodiments or apparatus embodiments.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (10)
1. A method for determining a depth image, the method comprising:
aiming at an object to be shot, acquiring a first depth image from the structured light module and acquiring a second depth image from the binocular vision module; the first depth image carries first depth information and first accuracy information, the second depth image carries second depth information and second accuracy information, the first accuracy information is used for representing the accuracy of the first depth information, and the second accuracy information is used for representing the accuracy of the second depth information;
and according to the first depth information, the first accuracy information, the second depth information and the second accuracy information, carrying out fusion processing on the first depth image and the second depth image to obtain a target depth image of the object to be shot.
2. The method according to claim 1, wherein the fusing the first depth image and the second depth image according to the first depth information, the first accuracy information, the second depth information, and the second accuracy information to obtain a target depth image of the object to be shot comprises:
inputting the first depth information, the first accuracy information, the second depth information and the second accuracy information into a preset calculation model, and outputting a depth value corresponding to each pixel position in a target depth image according to the preset calculation model;
and obtaining the target depth image according to the depth value corresponding to each pixel position.
3. The method of claim 1, wherein after the acquiring the first depth image from the structured light module and the second depth image from the binocular vision module, the method further comprises:
respectively carrying out scaling processing on the first depth image and the second depth image according to a preset resolution to obtain a processed first depth image and a processed second depth image; the processed first depth image carries processed first depth information and processed first accuracy information, and the processed second depth image carries processed second depth information and processed second accuracy information;
correspondingly, the performing fusion processing on the first depth image and the second depth image according to the first depth information, the first accuracy information, the second depth information, and the second accuracy information to obtain a target depth image of the object to be photographed includes:
inputting the processed first depth information, the processed first accuracy information, the processed second depth information and the processed second accuracy information into a preset calculation model, and outputting a depth value corresponding to each pixel position in a target depth image according to the preset calculation model;
and obtaining the target depth image according to the depth value corresponding to each pixel position.
4. The method of claim 1, wherein after the acquiring the first depth image from the structured light module and the second depth image from the binocular vision module, the method further comprises:
acquiring an accuracy value corresponding to each pixel position from the first accuracy information;
comparing the accuracy value corresponding to each pixel position with a preset threshold value;
according to the comparison result, determining the depth value corresponding to each pixel position in the target depth image;
and obtaining the target depth image according to the depth value corresponding to each determined pixel position.
5. The method of claim 4, wherein determining the depth value corresponding to each pixel position in the target depth image according to the comparison result comprises:
for a certain pixel position in each pixel position, if the accuracy value corresponding to the certain pixel position is greater than a preset threshold, determining that the depth value corresponding to the certain pixel position in the target depth image is the depth value corresponding to the certain pixel position in the first depth information;
if the accuracy value corresponding to the certain pixel position is smaller than or equal to a preset threshold, determining that the depth value corresponding to the certain pixel position in the target depth image is the depth value corresponding to the certain pixel position in the second depth information;
and obtaining the depth value corresponding to each pixel position in the target depth image according to the comparison result of each pixel position.
6. The method of claim 1, wherein acquiring the first depth image from the structured light module for the object to be photographed comprises:
determining first depth information of the object to be photographed and first accuracy information corresponding to the first depth information through the structured light module;
and after the structured light module forms a first depth image by using the first depth information and the first accuracy information, receiving the first depth image sent by the structured light module.
7. The method of claim 1, wherein the acquiring a second depth image from a binocular vision module for the object to be photographed comprises:
determining second depth information of the object to be shot and second accuracy information corresponding to the second depth information through the binocular vision module;
and after the binocular vision module forms a second depth image by using the second depth information and the second accuracy information, receiving the second depth image sent by the binocular vision module.
8. An image processor, characterized in that the image processor comprises a first acquisition unit and a fusion unit, wherein,
the first acquisition unit is configured to acquire a first depth image from the structured light module and a second depth image from the binocular vision module for an object to be photographed; the first depth image carries first depth information and first accuracy information, the second depth image carries second depth information and second accuracy information, the first accuracy information is used for representing the accuracy of the first depth information, and the second accuracy information is used for representing the accuracy of the second depth information;
the fusion unit is configured to perform fusion processing on the first depth image and the second depth image according to the first depth information, the first accuracy information, the second depth information, and the second accuracy information to obtain a target depth image of the object to be photographed.
9. An image processor, comprising a first memory and a first processor; wherein,
the first memory for storing a computer program operable on the first processor;
the first processor, when executing the computer program, is configured to perform the method of any of claims 1 to 7.
10. A computer storage medium, characterized in that it stores a computer program which, when executed by a first processor, implements the method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911075635.7A CN110874852A (en) | 2019-11-06 | 2019-11-06 | Method for determining depth image, image processor and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110874852A true CN110874852A (en) | 2020-03-10 |
Family
ID=69717231
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911075635.7A Pending CN110874852A (en) | 2019-11-06 | 2019-11-06 | Method for determining depth image, image processor and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110874852A (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111611425A (en) * | 2020-04-13 | 2020-09-01 | 四川深瑞视科技有限公司 | Raw material processing method and device based on depth information, electronic equipment and system |
CN111866490A (en) * | 2020-07-27 | 2020-10-30 | 支付宝(杭州)信息技术有限公司 | Depth image imaging system and method |
CN112312113A (en) * | 2020-10-29 | 2021-02-02 | 贝壳技术有限公司 | Method, device and system for generating three-dimensional model |
CN113139998A (en) * | 2021-04-23 | 2021-07-20 | 北京华捷艾米科技有限公司 | Depth image generation method and device, electronic equipment and computer storage medium |
CN113269823A (en) * | 2021-05-18 | 2021-08-17 | Oppo广东移动通信有限公司 | Depth data acquisition method and device, storage medium and electronic equipment |
WO2021218201A1 (en) * | 2020-04-27 | 2021-11-04 | 北京达佳互联信息技术有限公司 | Image processing method and apparatus |
CN113766284A (en) * | 2020-06-02 | 2021-12-07 | 云米互联科技(广东)有限公司 | Volume adjusting method, television and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3273387A1 (en) * | 2016-07-19 | 2018-01-24 | Siemens Healthcare GmbH | Medical image segmentation with a multi-task neural network system |
CN107635129A (en) * | 2017-09-29 | 2018-01-26 | 周艇 | Three-dimensional three mesh camera devices and depth integration method |
CN108881717A (en) * | 2018-06-15 | 2018-11-23 | 深圳奥比中光科技有限公司 | A kind of Depth Imaging method and system |
CN110335211A (en) * | 2019-06-24 | 2019-10-15 | Oppo广东移动通信有限公司 | Bearing calibration, terminal device and the computer storage medium of depth image |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110874852A (en) | Method for determining depth image, image processor and storage medium | |
CN109887087B (en) | SLAM mapping method and system for vehicle | |
US10455141B2 (en) | Auto-focus method and apparatus and electronic device | |
KR102278776B1 (en) | Image processing method, apparatus, and apparatus | |
CN110335211B (en) | Method for correcting depth image, terminal device and computer storage medium | |
US11010924B2 (en) | Method and device for determining external parameter of stereoscopic camera | |
KR102143456B1 (en) | Depth information acquisition method and apparatus, and image collection device | |
TWI567693B (en) | Method and system for generating depth information | |
JP6245885B2 (en) | Imaging apparatus and control method thereof | |
CN107945105B (en) | Background blurring processing method, device and equipment | |
US9619886B2 (en) | Image processing apparatus, imaging apparatus, image processing method and program | |
CN109712192B (en) | Camera module calibration method and device, electronic equipment and computer readable storage medium | |
US20110249117A1 (en) | Imaging device, distance measuring method, and non-transitory computer-readable recording medium storing a program | |
CN105069804B (en) | Threedimensional model scan rebuilding method based on smart mobile phone | |
CN113343745B (en) | Remote target detection method and system based on binocular camera and intelligent terminal | |
CN112106111A (en) | Calibration method, calibration equipment, movable platform and storage medium | |
US10904512B2 (en) | Combined stereoscopic and phase detection depth mapping in a dual aperture camera | |
CN113965742B (en) | Dense disparity map extraction method and system based on multi-sensor fusion and intelligent terminal | |
CN111882655A (en) | Method, apparatus, system, computer device and storage medium for three-dimensional reconstruction | |
JP6395429B2 (en) | Image processing apparatus, control method thereof, and storage medium | |
CN109031333B (en) | Distance measuring method and device, storage medium, and electronic device | |
KR102503976B1 (en) | Apparatus and method for correcting augmented reality image | |
CN113269823A (en) | Depth data acquisition method and device, storage medium and electronic equipment | |
CN109658459B (en) | Camera calibration method, device, electronic equipment and computer-readable storage medium | |
CN114365191A (en) | Image depth value determination method, image processor and module |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20200310 |