WO2021003807A1 - Image depth estimation method and apparatus, electronic device, and storage medium - Google Patents

Image depth estimation method and apparatus, electronic device, and storage medium (Download PDF)

Info

Publication number
WO2021003807A1
Authority
WO
WIPO (PCT)
Prior art keywords
layer
inverse depth
sampling point
image
inverse
Prior art date
Application number
PCT/CN2019/101778
Other languages
English (en)
French (fr)
Inventor
齐勇
项骁骏
姜翰青
章国锋
Original Assignee
浙江商汤科技开发有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 浙江商汤科技开发有限公司 (Zhejiang SenseTime Technology Development Co., Ltd.)
Priority to JP2021537988A (granted as JP7116262B2)
Priority to SG11202108201RA
Priority to KR1020217017780A (KR20210089737A)
Publication of WO2021003807A1
Priority to US17/382,819 (published as US20210350559A1)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/50: Depth or shape recovery
    • G06T 7/55: Depth or shape recovery from multiple images
    • G06T 7/593: Depth or shape recovery from multiple images from stereo images
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/22: Matching criteria, e.g. proximity measures
    • G06T 3/00: Geometric image transformation in the plane of the image
    • G06T 3/40: Scaling the whole image or part thereof
    • G06T 3/4007: Interpolation-based scaling, e.g. bilinear interpolation
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10016: Video; Image sequence
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20016: Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • G06T 2207/20021: Dividing image into blocks, subimages or windows

Definitions

  • the present disclosure relates to the field of computer vision technology, in particular to an image depth estimation method and device, electronic equipment, and storage medium.
  • Image depth estimation is an important issue in the field of computer vision.
  • 3D reconstruction of a scene can be completed only by means of depth estimation, and the results can serve applications such as augmented reality and games.
  • depth estimation methods based on computer vision can be divided into active vision methods and passive vision methods.
  • the active vision method projects a controllable beam onto the measured object, captures the image formed by the beam on the object's surface, and calculates the distance to the measured object through geometric relations.
  • the passive vision methods include stereo vision, the focusing method, the defocus method, and so on; they mainly determine depth information from two-dimensional image information obtained by one or more camera devices.
  • the embodiments of the present disclosure expect to provide an image depth estimation method and device, electronic equipment, and storage medium.
  • the embodiment of the present disclosure provides an image depth estimation method, and the method includes:
  • Pyramid down-sampling processing is performed on the current frame and the reference frame respectively to obtain k-layer current images corresponding to the current frame and k-layer reference images corresponding to the reference frame; k is a natural number greater than or equal to 2;
  • In this way, the current frame and the reference frame corresponding to the current frame are down-sampled, and inverse depth estimation iterative processing is performed on the obtained multi-layer current images combined with the multi-layer reference images, to determine the inverse depth estimation result of the current frame.
  • the inverse depth search space is reduced layer by layer, thereby reducing the calculation amount of the inverse depth estimation, improving the estimation speed, and obtaining the inverse depth estimation result in real time.
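As a sketch of the pyramid down-sampling step, the helper below (hypothetical; the text does not fix the down-sampling kernel, so 2x2 average pooling is assumed) builds k layers with the first layer coarsest, matching the "top layer with the fewest pixels" convention used later in the text:

```python
def build_pyramid(image, k):
    """Build a k-layer pyramid: result[0] is layer 1 (coarsest, fewest pixels),
    result[k-1] is layer k (the original resolution)."""
    levels = [image]                      # start from the full-resolution frame
    for _ in range(k - 1):
        prev = levels[0]
        h, w = len(prev) // 2, len(prev[0]) // 2
        down = [[(prev[2 * r][2 * c] + prev[2 * r][2 * c + 1] +
                  prev[2 * r + 1][2 * c] + prev[2 * r + 1][2 * c + 1]) / 4.0
                 for c in range(w)] for r in range(h)]
        levels.insert(0, down)            # coarser layers go to the front
    return levels

frame = [[float(r * 8 + c) for c in range(8)] for r in range(8)]
pyramid = build_pyramid(frame, 3)         # layers of size 2x2, 4x4, 8x8
```

Both the current frame and each reference frame would be processed this way, giving the k-layer current images and the k-layer reference images.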
  • the obtaining a reference frame corresponding to the current frame includes:
  • At least one frame that satisfies a preset angle constraint with the current frame is selected, and the at least one frame is used as the reference frame.
  • the preset angle constraint condition includes:
  • the angle formed by the lines from the pose center corresponding to the current frame and the pose center corresponding to the reference frame to the target point is within a first preset angle range; the target point is the midpoint between the average depth point corresponding to the current frame and the average depth point corresponding to the reference frame;
  • the angle between the optical axes corresponding to the current frame and the reference frame is within a second preset angle range;
  • the angle between the vertical axes corresponding to the current frame and the reference frame is within a third preset angle range.
  • The first angle condition constrains the distance between the current scene and the two cameras. If the angle is too large, the scene is too close and the overlap between the two frames will be low; if the angle is too small, the scene is too far and the parallax is small, so the error will be relatively large. When the two camera positions are very close, the angle may also be too small, and the error is likewise relatively large.
  • the second angle condition is to ensure that the two cameras have a sufficient common viewing area.
  • the third angle condition is to prevent the camera from rotating around the optical axis and affecting the subsequent depth estimation calculation process. The frame that meets the above three angle conditions at the same time is used as the reference frame to improve the accuracy of the current frame depth estimation.
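The three angle conditions can be checked with elementary vector geometry. The sketch below is an illustration under stated assumptions: the function names are hypothetical, and the concrete ranges are the example values ([5°, 45°], [0°, 45°], [0°, 30°]) given later in the text, not something mandated by the claims:

```python
import math

def angle_deg(v1, v2):
    """Angle in degrees between two 3D vectors."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))

def is_valid_reference(c_cur, c_ref, target, axis_cur, axis_ref, y_cur, y_ref):
    """c_cur/c_ref: optical centers of the two poses; target: midpoint P of the
    average depth points; axis_*: optical axes; y_*: vertical (Y) axes."""
    view = angle_deg([a - b for a, b in zip(c_cur, target)],
                     [a - b for a, b in zip(c_ref, target)])
    return (5.0 <= view <= 45.0                    # first angle condition
            and angle_deg(axis_cur, axis_ref) <= 45.0   # second angle condition
            and angle_deg(y_cur, y_ref) <= 30.0)   # third angle condition
```

A frame to be screened would be kept as a reference frame only when `is_valid_reference` returns `True` for its pose relative to the current frame's pose.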
  • In some embodiments, performing inverse depth estimation iterative processing on the k-layer current images based on the k-layer reference images and the inverse depth space range, to obtain the inverse depth estimation result of the current frame, includes:
  • determining the inverse depth candidate value corresponding to each sampling point in the i-th layer sampling points; the i-th layer sampling points are the pixel points obtained by sampling the i-th layer current image in the k-layer current images, and i is a natural number greater than or equal to 1 and less than or equal to k;
  • the k-th layer inverse depth values are determined as the inverse depth estimation result.
  • In this way, the k-layer current images are subjected to inverse depth estimation iterative processing.
  • The processing proceeds sequentially from the top layer (the first layer, the image with the fewest pixels) to the bottom layer, and the inverse depth search space is reduced layer by layer, thereby effectively reducing the amount of computation of inverse depth estimation.
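The layer-by-layer iteration can be summarized in a small skeleton. The helper callables (`initial_candidates`, `refine_candidates`, `pick_best`) are hypothetical stand-ins for the candidate-generation and block-matching steps described below:

```python
def coarse_to_fine(k, initial_candidates, refine_candidates, pick_best):
    """Layer 1 (coarsest) searches the full discretized inverse depth range;
    every later layer only searches candidates narrowed by the previous layer."""
    result = None
    for i in range(1, k + 1):
        cands = initial_candidates() if i == 1 else refine_candidates(result)
        result = pick_best(cands)   # in the patent: block matching + minimum penalty
    return result                   # the layer-k result is the final estimate

# Toy demo: "best" is simply the candidate closest to a hidden true value.
TRUE_D = 0.37
estimate = coarse_to_fine(
    3,
    lambda: [0.05 + 0.1 * j for j in range(10)],             # coarse grid over [0, 1]
    lambda d: [d - 0.025, d, d + 0.025],                     # narrowed search window
    lambda cands: min(cands, key=lambda c: abs(c - TRUE_D)),
)
```

The demo illustrates the shrinking search: ten candidates at the coarsest layer, then only three per sampling point at each finer layer.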
  • the determining the inverse depth candidate value corresponding to each sampling point in the i-th layer sampling point based on the k-layer current image and the inverse depth space range includes:
  • In the case that i is not equal to 1, the i-1-th layer sampling points and the i-1-th layer inverse depth values are obtained; based on the i-1-th layer inverse depth values, the i-1-th layer sampling points, and the multiple initial inverse depth values, the inverse depth candidate value corresponding to each sampling point in the i-th layer sampling points is determined.
  • The inverse depth space range is divided into intervals, and an inverse depth value is selected in each interval, so that each interval contributes one inverse depth candidate value. That is, each sampling point has an inverse depth candidate value in each of the different inverse depth intervals, and the inverse depth value of the sampling point is determined later; this ensures that the estimation process covers the entire inverse depth space range, so that an accurate inverse depth value can eventually be estimated.
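A minimal version of this interval division is sketched below; picking each interval's midpoint is one possible choice (an assumption here), since the text only requires one inverse depth value per interval:

```python
def initial_inverse_depth_candidates(d_min, d_max, n):
    """Divide [d_min, d_max] into n equal intervals and take each interval's
    midpoint as one initial inverse depth candidate."""
    step = (d_max - d_min) / n
    return [d_min + (j + 0.5) * step for j in range(n)]

# e.g. [0, 1] split into 4 intervals gives candidates 0.125, 0.375, 0.625, 0.875
candidates = initial_inverse_depth_candidates(0.0, 1.0, 4)
```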
  • In some embodiments, determining the inverse depth candidate value corresponding to each sampling point in the i-th layer sampling points, based on the i-1-th layer inverse depth values, the i-1-th layer sampling points, and the multiple initial inverse depth values, includes:
  • determining, from the i-1-th layer sampling points, the second sampling point closest to the first sampling point, and at least two third sampling points adjacent to the second sampling point; the first sampling point is any sampling point in the i-th layer sampling points;
  • obtaining the inverse depth value of each of the at least two third sampling points and the inverse depth value of the second sampling point, so as to obtain at least three inverse depth values;
  • determining, from the multiple initial inverse depth values, the inverse depth values within the range of the maximum and minimum of the at least three inverse depth values as the inverse depth candidate values corresponding to the first sampling point.
  • In this way, the inverse depth values corresponding to the i-1-th layer sampling points are used to determine the inverse depth candidate values of the i-th layer sampling points from the multiple initial inverse depth values, which makes it possible to obtain the inverse depth candidate values of the i-th layer sampling points more accurately and to reduce the number of inverse depth candidates, correspondingly reducing the amount of computation of inverse depth estimation.
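The narrowing step can be sketched as follows (hypothetical helper; `nearest_idx` and `neighbor_idxs` stand for the second sampling point and the adjacent third sampling points found in the previous layer):

```python
def narrowed_candidates(initial_values, prev_depths, nearest_idx, neighbor_idxs):
    """Keep only the initial inverse depth values lying between the minimum and
    maximum of the inverse depths of the nearest previous-layer sampling point
    and its adjacent sampling points."""
    refs = [prev_depths[nearest_idx]] + [prev_depths[j] for j in neighbor_idxs]
    lo, hi = min(refs), max(refs)
    return [d for d in initial_values if lo <= d <= hi]
```

For example, if the previous layer's nearest sample and its two neighbors have inverse depths 0.5, 0.3, and 0.7, only initial candidates inside [0.3, 0.7] survive for this sampling point.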
  • In some embodiments, determining the inverse depth value of each sampling point in the i-th layer sampling points, according to the inverse depth candidate value corresponding to each sampling point in the i-th layer sampling points and the i-th layer reference image in the k-layer reference images, to obtain the i-th layer inverse depth values, includes:
  • for each sampling point in the i-th layer sampling points, projecting the sampling point into the i-th layer reference image according to each inverse depth value in the corresponding inverse depth candidate values;
  • determining an inverse depth value for each sampling point in the i-th layer sampling points, to obtain the i-th layer inverse depth values.
  • In this way, the i-th layer sampling points are matched with their corresponding i-th layer projection points respectively, so that the degree of difference from the projection points projected with different inverse depth values can be determined, and the inverse depth value of each i-th layer sampling point can be accurately selected.
  • In some embodiments, performing block matching according to the i-th layer sampling points and the i-th layer projection points, to obtain the i-th layer matching result corresponding to each sampling point in the i-th layer sampling points, includes:
  • using a preset window to select, from the i-th layer current image, a first image block centered on the sampling point to be matched, and selecting, from the i-th layer reference image, multiple second image blocks each centered on one of the i-th layer projection points corresponding to the sampling point to be matched; the sampling point to be matched is any one of the i-th layer sampling points;
  • comparing the first image block with each of the multiple second image blocks to obtain multiple matching results, and determining the multiple matching results as the i-th layer matching result corresponding to the sampling point to be matched.
  • In this way, the sampling points and the projection points are matched by block matching; the matching result obtained is in fact a matching penalty value, which represents the difference between the projection point and the sampling point.
  • This result can be used to select the inverse depth value of the sampling point more accurately.
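The block comparison can be implemented, for example, as a sum of absolute differences (SAD); the metric is an assumption here, since the text only speaks of a matching penalty:

```python
def block_match_penalty(img_cur, img_ref, p, q, half):
    """SAD penalty between the window of radius `half` around sampling point p in
    the current image and the window around projection point q in the reference
    image; a smaller penalty means a smaller difference."""
    return sum(abs(img_cur[p[0] + dr][p[1] + dc] - img_ref[q[0] + dr][q[1] + dc])
               for dr in range(-half, half + 1)
               for dc in range(-half, half + 1))
```

Here `half` plays the role of the preset window: `half = 1` compares 3x3 first and second image blocks.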
  • In some embodiments, determining the inverse depth value of each sampling point in the i-th layer according to the i-th layer matching result, to obtain the i-th layer inverse depth values, includes:
  • the target sampling point is any sampling point in the i-th layer sampling points.
  • The above matching process in fact determines, for one sampling point, the degree of difference from the projection points projected with different inverse depth values.
  • The minimum matching result is selected, which indicates that the difference between the corresponding projection point and the sampling point is the smallest; therefore, the inverse depth value used by that projection point can be determined as the inverse depth value of the sampling point, so that an accurate inverse depth value is obtained for the sampling point.
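Choosing the inverse depth with the smallest matching penalty then reduces to an argmin; the sketch below uses hypothetical names:

```python
def select_inverse_depth(candidates, penalties):
    """Return the candidate inverse depth with the smallest matching penalty,
    together with that penalty (smaller penalty = smaller difference)."""
    best = min(range(len(candidates)), key=lambda j: penalties[j])
    return candidates[best], penalties[best]
```

For instance, with candidates `[0.1, 0.2, 0.3]` and penalties `[5, 2, 9]`, the sampling point is assigned inverse depth 0.2.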
  • the method further includes:
  • the optimized k-th layer inverse depth value is determined as the inverse depth estimation result.
  • The inverse depth estimated in the above process is a discrete value. Therefore, quadratic interpolation can additionally be performed to adjust the inverse depth of each sampling point and obtain a more accurate inverse depth value.
  • In some embodiments, performing interpolation optimization on the k-th layer inverse depth values, to obtain the optimized k-th layer inverse depth values, includes:
  • for each inverse depth value in the k-th layer inverse depth values, selecting the adjacent inverse depth values from the candidate inverse depth values of the corresponding sampling point in the k-th layer sampling points; the k-th layer sampling points are the pixel points obtained by sampling the k-th layer current image in the k-layer current images;
  • performing interpolation according to the adjacent inverse depth values and the matching results corresponding to the adjacent inverse depth values, so that the inverse depth value of each sampling point can be adjusted more accurately; the adjustment method is simple and fast.
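One common way to realize this quadratic interpolation is a parabola fit through the discrete minimum and its two adjacent candidates. Equal candidate spacing is assumed below, and the exact scheme is not fixed by the text:

```python
def parabolic_refine(d0, step, c_prev, c0, c_next):
    """Refine the discrete minimum d0 using the matching costs of its neighbors
    d0 - step and d0 + step via the vertex of the fitted parabola."""
    denom = c_prev - 2.0 * c0 + c_next
    if denom <= 0:            # degenerate (no strict minimum): keep the discrete value
        return d0
    return d0 + 0.5 * step * (c_prev - c_next) / denom
```

With costs sampled from a true quadratic `c(d) = (d - 0.33)**2` around `d0 = 0.3` with `step = 0.1`, the refined value recovers 0.33 up to floating point error, illustrating the sub-interval accuracy the interpolation provides.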
  • the embodiment of the present disclosure provides an image depth estimation device, including:
  • the acquiring part is configured to acquire the reference frame corresponding to the current frame and the inverse depth space range of the current frame;
  • the down-sampling part is configured to perform pyramid down-sampling processing on the current frame and the reference frame respectively, to obtain the k-layer current images corresponding to the current frame and the k-layer reference images corresponding to the reference frame; k is a natural number greater than or equal to 2;
  • the estimation part is configured to perform inverse depth estimation iterative processing on the k-layer current image based on the k-layer reference image and the inverse depth space range to obtain the inverse depth estimation result of the current frame.
  • In some embodiments, the acquiring part is specifically configured to acquire at least two frames to be screened, and to select, from the at least two frames to be screened, at least one frame that meets the preset angle constraint condition with the current frame, using the at least one frame as the reference frame.
  • the preset angle constraint condition includes:
  • the angle formed by the lines from the pose center corresponding to the current frame and the pose center corresponding to the reference frame to the target point is within a first preset angle range; the target point is the midpoint between the average depth point corresponding to the current frame and the average depth point corresponding to the reference frame;
  • the angle between the optical axes corresponding to the current frame and the reference frame is within a second preset angle range;
  • the angle between the vertical axes corresponding to the current frame and the reference frame is within a third preset angle range.
  • the estimation part is specifically configured to determine the inverse depth candidate value corresponding to each sampling point in the i-th layer sampling points based on the k-layer current images and the inverse depth space range;
  • the i-th layer sampling points are the pixel points obtained by sampling the i-th layer current image in the k-layer current images, and i is a natural number greater than or equal to 1 and less than or equal to k; determine, according to the inverse depth candidate value corresponding to each sampling point in the i-th layer sampling points and the i-th layer reference image in the k-layer reference images, the inverse depth value of each sampling point in the i-th layer sampling points to obtain the i-th layer inverse depth values; and determine the k-th layer inverse depth values as the inverse depth estimation result.
  • the estimation part is specifically configured to divide the inverse depth space range and select an inverse depth value in each divided interval to obtain multiple initial inverse depth values;
  • the multiple initial inverse depth values are determined as the inverse depth candidate values corresponding to each sampling point in the first layer sampling points; in the case that i is not equal to 1, the i-1-th layer sampling points and the i-1-th layer inverse depth values are obtained from the k-layer current images; and based on the i-1-th layer inverse depth values, the i-1-th layer sampling points, and the multiple initial inverse depth values, the inverse depth candidate value corresponding to each sampling point in the i-th layer sampling points is determined.
  • the estimation part is specifically configured to determine, from the i-1-th layer sampling points, the second sampling point closest to the first sampling point and at least two third sampling points adjacent to the second sampling point, the first sampling point being any one of the i-th layer sampling points; obtain, according to the i-1-th layer inverse depth values, the inverse depth value of each of the at least two third sampling points and the inverse depth value of the second sampling point, so as to obtain at least three inverse depth values; determine the maximum inverse depth value and the minimum inverse depth value from the at least three inverse depth values; select, from the multiple initial inverse depth values, the inverse depth values within the range of the maximum and minimum inverse depth values, and determine the selected inverse depth values as the inverse depth candidate values corresponding to the first sampling point; and continue to determine the inverse depth candidate values corresponding to the sampling points other than the first sampling point in the i-th layer sampling points, until the inverse depth candidate value corresponding to each sampling point in the i-th layer sampling points is determined.
  • the estimation part is specifically configured to, for each sampling point in the i-th layer sampling points, project the sampling point into the i-th layer reference image according to each inverse depth value in the corresponding inverse depth candidate values, to obtain the i-th layer projection points corresponding to each sampling point in the i-th layer sampling points; perform block matching according to the i-th layer sampling points and the i-th layer projection points, to obtain the i-th layer matching result corresponding to each sampling point in the i-th layer sampling points; and determine, according to the i-th layer matching result, the inverse depth value of each sampling point in the i-th layer sampling points, to obtain the i-th layer inverse depth values.
  • the estimation part is specifically configured to use a preset window to select, from the i-th layer current image, a first image block centered on the sampling point to be matched, and select, from the i-th layer reference image, multiple second image blocks each centered on one of the i-th layer projection points corresponding to the sampling point to be matched, the sampling point to be matched being any one of the i-th layer sampling points; compare the first image block with each of the multiple second image blocks to obtain multiple matching results, and determine the multiple matching results as the i-th layer matching result corresponding to the sampling point to be matched; and continue to determine the i-th layer matching result corresponding to the sampling points different from the sampling point to be matched in the i-th layer sampling points, until the i-th layer matching result corresponding to each sampling point in the i-th layer sampling points is obtained.
  • the estimation part is specifically configured to select the target matching result from the i-th layer matching result corresponding to the target sampling point, the target sampling point being any one of the i-th layer sampling points; determine, among the i-th layer projection points corresponding to the target sampling point, the projection point corresponding to the target matching result as the target projection point; determine, among the candidate inverse depth values, the inverse depth value corresponding to the target projection point as the inverse depth value of the target sampling point; and continue to determine the inverse depth values of the sampling points different from the target sampling point in the i-th layer sampling points, until the inverse depth value of each sampling point in the i-th layer sampling points is determined, so as to obtain the i-th layer inverse depth values.
  • the estimation part is further configured to perform interpolation optimization on the k-th layer inverse depth values to obtain the optimized k-th layer inverse depth values, and to determine the optimized k-th layer inverse depth values as the inverse depth estimation result.
  • the estimation part is specifically configured to, for each inverse depth value in the k-th layer inverse depth values, select the adjacent inverse depth values from the candidate inverse depth values of the corresponding sampling point in the k-th layer sampling points, the k-th layer sampling points being the pixel points obtained by sampling the k-th layer current image in the k-layer current images; obtain the matching results corresponding to the adjacent inverse depth values; and perform interpolation optimization on each inverse depth value in the k-th layer inverse depth values according to the adjacent inverse depth values and their corresponding matching results, to obtain the optimized k-th layer inverse depth values.
  • the embodiment of the present disclosure provides an electronic device, the electronic device includes: a processor, a memory, and a communication bus; wherein,
  • the communication bus is configured to implement connection and communication between the processor and the memory;
  • the processor is configured to execute an image depth estimation program stored in the memory to implement the above image depth estimation method.
  • the electronic equipment is a mobile phone or a tablet computer.
  • the embodiments of the present disclosure provide a computer-readable storage medium that stores one or more programs, and the one or more programs can be executed by one or more processors to implement the above image depth estimation method.
  • the embodiments of the present disclosure provide a computer program, including computer-readable code, which when executed by a processor, implements the steps corresponding to the above-mentioned image depth estimation method.
  • In the embodiments of the present disclosure, the reference frame corresponding to the current frame and the inverse depth space range of the current frame are acquired; pyramid down-sampling processing is performed on the current frame and the reference frame respectively, to obtain the k-layer current images corresponding to the current frame and the k-layer reference images corresponding to the reference frame, where k is a natural number greater than or equal to 2; and inverse depth estimation iterative processing is performed on the k-layer current images based on the k-layer reference images and the inverse depth space range, to obtain the inverse depth estimation result of the current frame.
  • the technical solution provided by the present disclosure adopts the iterative process of inverse depth estimation on the multi-layer current image combined with the multi-layer reference image to reduce the inverse depth search space layer by layer, and determine the inverse depth estimation result of the current frame.
  • The inverse depth estimation result is the reciprocal of the z-axis coordinate value of each pixel of the current frame in the camera coordinate system, so no additional coordinate transformation is required. Reducing the inverse depth search space layer by layer helps to reduce the amount of computation of inverse depth estimation and improve the estimation speed, so that the depth estimation result of the image can be obtained in real time, and the accuracy of the depth estimation result is high.
  • FIG. 1 is a schematic flowchart of an image depth estimation method provided by an embodiment of the disclosure
  • FIG. 2 is a schematic diagram of an exemplary camera pose angle provided by an embodiment of the disclosure
  • FIG. 3 is a first schematic diagram of a flow of inverse depth estimation iterative processing provided by an embodiment of the disclosure
  • FIG. 4 is a schematic diagram of an exemplary 3-layer current image provided by an embodiment of the disclosure.
  • FIG. 5 is a schematic diagram of a process for determining an inverse depth candidate value provided by an embodiment of the disclosure
  • FIG. 6 is a schematic diagram of an exemplary projection of sampling points provided by an embodiment of the disclosure.
  • FIG. 7 is a second schematic diagram of a flow of inverse depth estimation iterative processing provided by an embodiment of the disclosure.
  • FIG. 8 is a schematic structural diagram of an image depth estimation apparatus provided by an embodiment of the disclosure.
  • FIG. 9 is a schematic structural diagram of an electronic device provided by an embodiment of the disclosure.
  • the embodiments of the present disclosure provide an image depth estimation method, the execution subject of which may be an image depth estimation device.
  • the image depth estimation method may be executed by a terminal device, a server, or another electronic device, where the terminal device may be a user equipment (UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, etc.
  • FIG. 1 is a schematic flowchart of an image depth estimation method provided by an embodiment of the disclosure. As shown in Figure 1, it mainly includes the following steps:
  • The following description takes the image depth estimation device as an example of the execution subject.
  • Before the image depth estimation device performs depth estimation on the current frame, it needs to first obtain the reference frame corresponding to the current frame and the inverse depth space range of the current frame.
  • the current frame is an image that needs depth estimation
  • the reference frame is an image used for reference matching when depth estimation is performed on the current frame.
  • There may be multiple reference frames. Considering the balance between the speed and robustness of depth estimation, selecting about five reference frames is appropriate.
  • the specific reference frames of the current frame are not limited in the embodiment of the present disclosure.
  • In some embodiments, the image depth estimation apparatus obtaining the reference frame corresponding to the current frame includes the following steps: obtaining at least two frames to be screened; and selecting, from the at least two frames to be screened, at least one frame that satisfies the preset angle constraint condition with the current frame, using the at least one frame as the reference frame.
  • the image depth estimation apparatus may also obtain reference frames in other ways, for example, by receiving a selection instruction sent by a user for at least two frames to be screened, and using at least one frame indicated by the selection instruction as the reference frame.
  • the specific reference frame acquisition method is not limited in this embodiment of the application.
  • In practice, the image depth estimation device may select multiple reference frames corresponding to the current frame from at least two frames to be screened, and each reference frame satisfies the preset angle constraint conditions with the current frame.
  • the frame to be filtered is an image acquired from the same scene as the current frame but at a different angle.
  • the image depth estimation device can be equipped with a camera module, through which the frame to be screened can be obtained.
  • the frames to be screened can also be captured by other independent camera equipment, with the image depth estimation device then obtaining the frames to be screened from that camera equipment.
  • The specific preset angle constraint conditions can be preset in the image depth estimation device according to the actual depth estimation requirements, can be stored in other devices and obtained from them when depth estimation is needed, or can be obtained by receiving angle constraint conditions input by the user; this is not limited in the embodiments of the present disclosure.
  • the preset angle constraint conditions include: the angle formed by the lines from the pose center corresponding to the current frame and the pose center corresponding to the reference frame to the target point is within a first preset angle range, the target point being the midpoint between the average depth point corresponding to the current frame and the average depth point corresponding to the reference frame; the angle between the optical axes corresponding to the current frame and the reference frame is within a second preset angle range; and the angle between the vertical axes corresponding to the current frame and the reference frame is within a third preset angle range.
  • the vertical axis is the Y axis of the camera coordinate system in the three-dimensional space.
  • the pose center corresponding to the current frame is actually the center (optical center) of the camera when the camera is in the position and attitude when the current frame is acquired.
  • the pose center corresponding to the reference frame is actually the center (optical center) of the camera in the position and attitude when the reference frame is acquired.
  • the pose of the camera when acquiring the current frame is pose 1
  • the pose of the camera when acquiring the reference frame is pose 2
  • the average depth point of the scene corresponding to the camera center (optical center) in pose 1 is point P1;
  • the average depth point of the scene corresponding to the camera center (optical center) in pose 2 is point P2;
  • the midpoint of the line connecting P1 and P2 is point P.
  • the preset angle constraint conditions specifically include three angle conditions: the first angle condition is that the viewing angle formed by the lines from the camera centers in pose 1 and pose 2 to point P is between [5°, 45°];
  • the second angle condition is that the angle between the optical axes when the camera is in pose 1 and pose 2 is between [0°, 45°];
  • the third angle condition is that the angle between the Y axes when the camera is in pose 1 and pose 2 is between [0°, 30°]; only frames that meet all three angle conditions can be used as reference frames.
  • the camera that captures the current frame and the reference frame may be equipped with a positioning device, so that the corresponding pose is obtained directly when each frame is captured, and the image depth estimation device can read the relevant pose from the positioning device; alternatively, the image depth estimation device may compute the corresponding pose with a pose estimation algorithm, combining the captured current frame with some feature points in the reference frame.
  • the first angle condition constrains the distance between the scene and the two cameras: if the angle is too large, the scene is too close and the overlap between the two frames is low; if the angle is too small, the scene is too far away, the parallax is small, and the error is relatively large. When the two camera positions are very close, the angle may also be too small, which likewise produces a relatively large error.
  • the second angle condition is to ensure that the two cameras have a sufficient common viewing area.
  • the third angle condition is to prevent the camera from rotating around the optical axis and affecting the subsequent depth estimation calculation process. A frame that meets all three angle conditions simultaneously is used as the reference frame, improving the accuracy of the depth estimation for the current frame.
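  • as an illustrative sketch only (not part of the disclosure), the three angle conditions above could be checked as follows; the function `is_valid_reference` and all parameter names are hypothetical, and the poses are assumed to be given as NumPy vectors (optical centers, unit optical-axis directions, unit Y-axis directions, and the midpoint P):

```python
import numpy as np

def is_valid_reference(C1, C2, axis1, axis2, y1, y2, P,
                       view_range=(5.0, 45.0), optical_range=(0.0, 45.0),
                       y_range=(0.0, 30.0)):
    """Check the three angle conditions for a candidate reference frame.

    C1, C2     : camera (optical) centers in pose 1 / pose 2
    axis1/axis2: optical-axis direction vectors of the two poses
    y1, y2     : Y-axis direction vectors of the two poses
    P          : midpoint of the two average-depth points
    """
    def angle_deg(a, b):
        cos = np.clip(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)),
                      -1.0, 1.0)
        return np.degrees(np.arccos(cos))

    view_angle = angle_deg(C1 - P, C2 - P)    # angle formed at P by the two centers
    optical_angle = angle_deg(axis1, axis2)   # optical-axis included angle
    y_angle = angle_deg(y1, y2)               # Y-axis included angle

    return (view_range[0] <= view_angle <= view_range[1]
            and optical_range[0] <= optical_angle <= optical_range[1]
            and y_range[0] <= y_angle <= y_range[1])
```

  • a frame whose pose fails any one of the three checks is simply skipped as a reference-frame candidate.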
  • the image depth estimation device can directly obtain the corresponding inverse depth space range according to the current frame; the inverse depth space range is the range of values that the inverse depth of a pixel in the current frame can take.
  • the image depth estimation device may also receive a setting instruction from the user, and obtain the inverse depth space range indicated by the user according to the setting instruction.
  • the specific inverse depth space range is not limited in the embodiments of the present disclosure.
  • the inverse depth space range is [dmin, dmax], dmin is the smallest inverse depth value in the inverse depth space range, and dmax is the largest inverse depth value in the inverse depth space range.
  • S102 Pyramid downsampling is performed on the current frame and the reference frame, respectively, to obtain the k-layer current image corresponding to the current frame and the k-layer reference image corresponding to the reference frame; k is a natural number greater than or equal to 2.
  • the image depth estimation device may perform pyramid down-sampling processing on the current frame and the reference frame respectively, so as to obtain the k-layer current image corresponding to the current frame and the k-layer reference image corresponding to the reference frame.
  • since there may be multiple reference frames, the image depth estimation apparatus performs pyramid down-sampling for each reference frame, so that multiple groups of k-layer reference images are actually obtained; the specific number of k-layer reference images is not limited in the embodiments of the present disclosure.
  • the image depth estimation device performs pyramid down-sampling on the current frame and the reference frame respectively; the resulting current image pyramid and reference image pyramid have the same number of layers and use the same scale factor.
  • the image depth estimation device performs down-sampling of the current frame and the reference frame with a scale factor of 2 to form a three-layer current image and a three-layer reference image.
  • the top-layer image has the lowest resolution, the resolution of the middle-layer image is higher than that of the top-layer image, and the resolution of the bottom-layer image is the highest;
  • the bottom-layer image is the original image, that is, the corresponding current frame or reference frame.
  • the specific number of image layers k and the scale factor of downsampling can be preset according to actual requirements, which are not limited in the embodiment of the present disclosure.
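  • the pyramid construction described above can be sketched as follows; this is a minimal illustration using plain subsampling with scale factor 2 (a practical implementation would typically low-pass filter before down-sampling), and `build_pyramid` is a hypothetical name:

```python
import numpy as np

def build_pyramid(image, levels=3, scale=2):
    """Build an image pyramid: index 0 is the top layer (lowest resolution),
    index levels-1 is the bottom layer (the original image)."""
    pyramid = [image]
    for _ in range(levels - 1):
        image = image[::scale, ::scale]  # subsample rows and columns by `scale`
        pyramid.append(image)
    pyramid.reverse()                    # coarsest (top) layer first
    return pyramid
```

  • applying the same function to the current frame and to each of the m reference frames yields one current image pyramid and m reference image pyramids with matching layer counts and scale factors.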
  • the image depth estimation apparatus has acquired 5 reference frames corresponding to the current frame It, which are: reference frame I1, reference frame I2, reference frame I3, reference frame I4, and reference frame I5.
  • the image depth estimation device down-samples these frames with a scale factor of 2, so as to obtain the 3-layer current image corresponding to the current frame It, as well as the 3-layer reference image corresponding to each of the reference frames I1, I2, I3, I4, and I5.
  • the image depth estimation device may perform iterative inverse depth estimation on the k-layer current image based on the k-layer reference image and the inverse depth space range; for example, it can start from the top-layer (layer 1) current image, the one with the fewest pixels, and iterate layer by layer toward the bottom, reducing the inverse depth search space at each layer, until the bottom layer k is reached and the inverse depth estimation result corresponding to the current frame is obtained.
  • FIG. 3 is a first schematic diagram of a flow of inverse depth estimation iterative processing provided by an embodiment of the disclosure.
  • the image depth estimation device performs inverse depth estimation iterative processing on the k-layer current image based on the k-layer reference image and the inverse depth space range to obtain the inverse depth estimation result corresponding to the current frame, including the following steps:
  • the k-layer current image includes, from low to high resolution: the first-layer current image, the second-layer current image, the third-layer current image, ..., the k-th layer current image; the first-layer current image is the top image of the current image pyramid, and the k-th layer current image is the bottom image of the current image pyramid;
  • the k-layer reference image includes, from low to high resolution: the first-layer reference image, the second-layer reference image, the third-layer reference image, ..., the k-th layer reference image; the first-layer reference image is the top image of the reference image pyramid, and the k-th layer reference image is the bottom image of the reference image pyramid.
  • the image depth estimation device can sample the pixel points of the i-th layer current image in the k-layer current image; the pixel points obtained by sampling are the i-th layer sampling points;
  • the value of i is a natural number greater than or equal to 1 and less than or equal to k, which is not limited in the embodiments of the present disclosure.
  • the image depth estimation device performs pixel sampling on the current image of the i-th layer, which can be implemented according to a preset sampling step.
  • the specific sampling step length may be determined according to actual requirements, which is not limited in the embodiment of the present disclosure.
  • Fig. 4 is a schematic diagram of an exemplary 3-layer current image provided by an embodiment of the disclosure.
  • the image depth estimation device can sample the current frame along the x-axis and y-axis coordinates with a sampling step of 2 to obtain 3 layers of current images in total;
  • the first-layer current image has the lowest resolution, the second-layer current image has a higher resolution than the first-layer current image, and the third-layer current image has a higher resolution than the second-layer current image; the third-layer current image is actually the original image of the current frame.
  • the image depth estimation device determines the i-th layer inverse depth candidate values corresponding to each sampling point among the i-th layer sampling points based on the k-layer current image and the inverse depth space range, including: when i is equal to 1, dividing the inverse depth space range into equal intervals to obtain multiple equally divided inverse depth values at the division points, and determining the multiple equally divided inverse depth values as the inverse depth candidate values corresponding to each sampling point among the first-layer sampling points;
  • when i is not equal to 1, obtaining the i-1-th layer sampling points and the i-1-th layer inverse depth estimation values from the k-layer current image, and determining the inverse depth candidate values corresponding to each sampling point among the i-th layer sampling points based on the i-1-th layer inverse depth estimation values, the i-1-th layer sampling points, and the multiple equally divided inverse depth values.
  • the image depth estimation device divides the inverse depth space range into intervals and selects an inverse depth value in each interval as an inverse depth candidate value;
  • that is to say, each sampling point has an inverse depth candidate value in each of the different inverse depth intervals, so that when the inverse depth value of the sampling point is determined later, inverse depth values from every interval can be evaluated, ensuring that the estimation process covers the entire inverse depth space range and that an accurate inverse depth value can eventually be estimated.
  • when i is equal to 1, the image depth estimation device needs to determine the inverse depth candidate values corresponding to each sampling point among the first-layer sampling points, where the first-layer sampling points are the sampling points in the first-layer current image, the image with the lowest resolution in the k-layer current image. Having obtained the inverse depth space range [dmin, dmax] corresponding to the current frame, the device can divide it equally to obtain the q equally divided inverse depth values d1, d2, ..., dq at the division points, and determine them as the initial inverse depth values, that is, the inverse depth candidate values corresponding to each sampling point among the first-layer sampling points; of course, the inverse depth candidate values may also include dmin and dmax. That is, the inverse depth candidate values are exactly the same for every sampling point among the first-layer sampling points.
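  • the equal division of the inverse depth space range [dmin, dmax] into initial candidates, including the two endpoints, can be sketched as follows (the helper name is hypothetical):

```python
import numpy as np

def initial_candidates(d_min, d_max, q):
    """Equally divide [d_min, d_max] into q + 1 intervals and return the q
    interior division values d1..dq together with the endpoints dmin, dmax,
    as described for the first-layer sampling points."""
    return np.linspace(d_min, d_max, q + 2)
```

  • every first-layer sampling point shares this same candidate set; deeper layers then narrow it per sampling point.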
  • the image depth estimation device can set the number of equal partitions of the inverse depth space range according to actual requirements, which is not limited in the embodiments of the present disclosure.
  • since the image depth estimation apparatus divides the inverse depth space range in the above equal-division manner and uses the inverse depth values at the division points as the inverse depth candidate values, it can guarantee that the inverse depth candidate values uniformly cover the entire inverse depth space range, ensuring that the inverse depth value subsequently determined from the candidates is more accurate.
  • alternatively, the inverse depth space range may be divided sequentially using a plurality of different preset intervals, or, starting from a preset initial division interval and following an interval-change rule, the interval may be adjusted after each division and the adjusted interval used for the next division.
  • the initial inverse depth values can also be selected by randomly picking one inverse depth value within each divided interval, or by taking the inverse depth value at the middle of each divided interval.
  • the specific interval division method and the initial inverse depth value selection method are not limited in the embodiment of the present disclosure.
  • when i is not equal to 1, the image depth estimation device needs to obtain the i-1-th layer sampling points, that is, the pixel points obtained by sampling the i-1-th layer current image, from the k-layer current image, and also needs to obtain the i-1-th layer inverse depth values.
  • the current image of each layer can be sampled with different sampling steps.
  • the image depth estimation device can directly obtain the i-1-th layer inverse depth values, and further determine the inverse depth candidate values corresponding to each sampling point among the i-th layer sampling points according to the i-1-th layer inverse depth values, the i-1-th layer sampling points, and the multiple equally divided inverse depth values.
  • FIG. 5 is a schematic diagram of a process for determining an inverse depth candidate value provided by an embodiment of the disclosure.
  • the image depth estimation device determines the inverse depth candidate values corresponding to each sampling point among the i-th layer sampling points based on the i-1-th layer inverse depth estimation values, the i-1-th layer sampling points, and the multiple initial inverse depth values, including:
  • S501: Determine, from the i-1-th layer sampling points, the second sampling point closest to the first sampling point, and at least two third sampling points adjacent to the second sampling point; the first sampling point is any one of the i-th layer sampling points.
  • S502: Obtain, from the i-1-th layer inverse depth values, the inverse depth value of each of the at least two third sampling points and the inverse depth value of the second sampling point, to obtain at least three inverse depth values.
  • S503: Determine a maximum inverse depth value and a minimum inverse depth value from the at least three inverse depth values.
  • S504: Select, from the multiple equally divided inverse depth values, the inverse depth values between the minimum and maximum inverse depth values (inclusive), and determine them as the inverse depth candidate values corresponding to the first sampling point.
  • S505: Continue to determine the inverse depth candidate values corresponding to the sampling points other than the first sampling point among the i-th layer sampling points, until the inverse depth candidate values corresponding to each sampling point among the i-th layer sampling points are determined.
  • when i is equal to 1, the inverse depth candidate values corresponding to each sampling point among the i-th layer sampling points, that is, the first-layer sampling points, are all the same.
  • when i is not equal to 1, the inverse depth candidate values corresponding to each sampling point among the i-th layer sampling points can be selected from the multiple initial inverse depth values based on the i-1-th layer sampling points and the i-1-th layer inverse depth values, yielding a candidate set with a smaller range, and the inverse depth candidate values corresponding to different sampling points among the i-th layer sampling points may differ.
  • for any first sampling point among the i-th layer sampling points, the image depth estimation device can find, among the i-1-th layer sampling points, the sampling point closest to it; then, taking that i-1-th layer sampling point as the center, it determines its multiple (for example, 8) neighboring sampling points; next, according to the i-1-th layer inverse depth values, it obtains the inverse depth value of the center point and of each of the 8 neighboring sampling points, that is, 9 inverse depth values in total; further, taking the largest inverse depth value d1 and the smallest inverse depth value d2 among the 9 inverse depth values as limits, it selects from the multiple initial inverse depth values those between d1 and d2, including d1 and d2, and determines them as the inverse depth candidate values corresponding to the first sampling point.
  • when the image depth estimation device determines the third sampling points adjacent to the second sampling point from the i-1-th layer sampling points, the 8 sampling points surrounding it may all be determined as third sampling points;
  • alternatively, the 2 sampling points adjacent on the left and right, or the 2 sampling points adjacent above and below, may be determined as third sampling points, or the 4 adjacent sampling points may all be determined as third sampling points; the specific number of third sampling points is not limited in the embodiments of the present disclosure.
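  • steps S501 to S505 can be sketched as follows; this is an assumption-laden illustration in which the i-1-th layer sampling points are given as an array of coordinates already scaled into layer-i pixel units, the "third sampling points" are taken as the 8 spatially closest points, and all names are hypothetical:

```python
import numpy as np

def narrowed_candidates(x, y, coarse_points, coarse_inv_depths, initial_values):
    """Narrow the candidate set for a layer-i sampling point (x, y).

    coarse_points     : (N, 2) array of layer i-1 sampling-point coordinates
    coarse_inv_depths : (N,) inverse depths estimated at layer i-1
    initial_values    : equally divided inverse depths of the full range
    """
    # S501: nearest layer-(i-1) sampling point (the "second sampling point").
    d2 = np.sum((coarse_points - np.array([x, y])) ** 2, axis=1)
    nearest = np.argmin(d2)

    # Its neighbours (the "third sampling points"): the point itself plus
    # its 8 spatially closest points, assuming a dense grid.
    order = np.argsort(np.sum((coarse_points - coarse_points[nearest]) ** 2,
                              axis=1))
    neighbourhood = order[:9]

    # S502/S503: min and max inverse depth over the neighbourhood.
    lo = coarse_inv_depths[neighbourhood].min()
    hi = coarse_inv_depths[neighbourhood].max()

    # S504: keep the initial values inside [lo, hi], plus the bounds.
    kept = initial_values[(initial_values >= lo) & (initial_values <= hi)]
    return np.unique(np.concatenate([[lo, hi], kept]))
```

  • running this per layer-i sampling point (S505) gives each point its own, smaller, candidate set.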
  • the image depth estimation apparatus may also determine the inverse depth candidate values corresponding to each sampling point among the i-th layer sampling points according to other rules; for example, it may receive inverse depth candidate values set by the user for the sampling points of different layers, with the inverse depth candidate values being the same for every sampling point within the same layer.
  • the specific inverse depth candidate value determination method is not limited in the embodiment of the present disclosure.
  • the image depth estimation device determines the i-th layer inverse depth values according to the inverse depth candidate values corresponding to each sampling point among the i-th layer sampling points and the i-th layer reference image in the k-layer reference image, including: for each sampling point among the i-th layer sampling points, projecting the sampling point into the i-th layer reference image according to each inverse depth value among its corresponding inverse depth candidate values, to obtain the i-th layer projection points corresponding to each sampling point; performing block matching based on the i-th layer sampling points and the i-th layer projection points, to obtain the i-th layer matching result corresponding to each sampling point; and determining, according to the i-th layer matching results, the inverse depth value of each sampling point among the i-th layer sampling points, to obtain the i-th layer inverse depth values.
  • the image depth estimation apparatus projects each sampling point among the i-th layer sampling points into the i-th layer reference image according to each inverse depth value among the corresponding inverse depth candidate values;
  • when there are multiple reference frames, the image depth estimation device projects each sampling point among the i-th layer sampling points, according to each inverse depth value among the corresponding inverse depth candidate values, into each of the i-th layer reference images respectively.
  • for any sampling point among the i-th layer sampling points, with u and v being the x-axis and y-axis coordinates of the sampling point, and for any inverse depth value d_z among the corresponding inverse depth candidate values, the image depth estimation device projects the sampling point into the i-th layer reference image according to the following formula (1) and formula (2):
  • K is the camera intrinsic parameter matrix corresponding to the camera that captures the current frame t and the reference frame r; it contains the pixel-based scale factors of the focal length on the x-axis and y-axis for the i-th layer current image, that is, the focal length described in pixels along the x-axis direction and along the y-axis direction, as well as the principal point position of the i-th layer current image; R_r is a 3×3 rotation matrix, and T_r is a 3×1 translation vector.
  • the X_r finally obtained by formula (1) is a 3×1 matrix, in which the first-row element is X_r(0), the second-row element is X_r(1), and the third-row element is X_r(2); further calculating according to formula (2), the projection point of the sampling point in the i-th layer reference image of the reference frame r, projected according to the inverse depth value d_z among the corresponding inverse depth candidate values, can be obtained.
  • formula (1) and formula (2) can be used to project the sampling point into the i-th layer reference image according to each inverse depth value among the corresponding inverse depth candidate values; if there are multiple i-th layer reference images, the projection is performed repeatedly for each of them.
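  • since the bodies of formulas (1) and (2) are not reproduced here, the following sketch shows the standard pinhole projection they describe (back-project the pixel with depth 1/d_z, transform by R_r and T_r, and apply the intrinsics K); the function name is hypothetical and R_r, T_r are assumed to map the current camera frame into the reference camera frame:

```python
import numpy as np

def project_to_reference(u, v, d_z, K, R_r, T_r):
    """Project pixel (u, v) of the current image, hypothesised to have
    inverse depth d_z, into the reference image (pinhole model sketch)."""
    # Back-project the pixel to a 3D point in the current frame: depth = 1/d_z.
    X_t = np.linalg.inv(K) @ np.array([u, v, 1.0]) / d_z
    # Formula (1) analogue: transform into the reference frame (3x1 vector X_r).
    X_r = R_r @ X_t + T_r
    # Formula (2) analogue: apply intrinsics, then perspective division.
    x = K @ X_r
    return x[0] / x[2], x[1] / x[2]
```

  • with identity rotation and zero translation the pixel projects onto itself, which is a convenient sanity check for the intrinsic matrix.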
  • the image depth estimation device may perform block matching according to the i-th layer sampling point and the i-th layer projection point, specifically sampling the i-th layer Each sampling point in the point is block-matched with each projection point in the corresponding i-th layer projection point, so as to obtain the i-th layer matching result corresponding to each sampling point.
  • the image depth estimation device performs block matching according to the i-th layer sampling points and the i-th layer projection points to obtain the i-th layer matching result corresponding to each sampling point, including: taking a first image block centered on the sampling point to be matched, and a plurality of second image blocks each centered on one of the corresponding i-th layer projection points, where the sampling point to be matched is any one of the i-th layer sampling points; comparing the first image block with each of the plurality of second image blocks to obtain multiple matching results, and determining the multiple matching results as the i-th layer matching result corresponding to the sampling point to be matched; and continuing to determine the i-th layer matching results of the sampling points other than the sampling point to be matched.
  • for each inverse depth value, each i-th layer reference image yields one penalty value; when there are multiple i-th layer reference images, the obtained penalty values are merged (for example, averaged), so that the matching result corresponding to that inverse depth value is obtained for each sampling point;
  • in this way, a penalty value corresponding to each inverse depth value can be obtained, that is, the i-th layer matching result corresponding to each sampling point can be obtained.
  • m is a natural number greater than or equal to 1, denoting the number of i-th layer reference images.
  • for any sampling point among the i-th layer sampling points, the image depth estimation device performs block matching with the projection point obtained by projecting with the inverse depth value d_z among the corresponding i-th layer projection points, to obtain the matching result for the inverse depth value d_z in the i-th layer matching result, as in formula (3);
  • the comparison function can be the zero-mean normalized cross correlation (ZNCC) of the gray values of the neighborhoods of the two points, or the sum of absolute differences (SAD), or the sum of squared differences (SSD); the value obtained is the matching result for the inverse depth value d_z in the i-th layer matching result corresponding to the sampling point.
  • the i-th layer matching result corresponding to each sampling point includes the matching results of the different inverse depth values among its own corresponding inverse depth candidate values.
  • the corresponding inverse depth candidate values include d1, d2, ..., dq, and the obtained i-th layer matching result includes the matching result of each inverse depth value.
  • the specific i-th layer matching result is not limited in the embodiment of the present disclosure.
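  • the block-matching penalties mentioned above (SAD, SSD, and ZNCC) and the averaging over m reference images can be sketched as follows; `patch_cost` and `merged_cost` are hypothetical names, and ZNCC, which measures similarity, is negated so that a smaller value always means a better match:

```python
import numpy as np

def patch_cost(patch_a, patch_b, method="sad"):
    """Block-matching penalty between two equally sized image patches."""
    a = patch_a.astype(np.float64).ravel()
    b = patch_b.astype(np.float64).ravel()
    if method == "sad":                       # sum of absolute differences
        return np.abs(a - b).sum()
    if method == "ssd":                       # sum of squared differences
        return ((a - b) ** 2).sum()
    if method == "zncc":                      # zero-mean normalized cross corr.
        a = a - a.mean()
        b = b - b.mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return -(a @ b) / denom if denom > 0 else 0.0
    raise ValueError(method)

def merged_cost(patch, ref_patches, method="sad"):
    """Average the penalty over the m reference images, as described for
    merging multiple penalty values into one matching result."""
    return np.mean([patch_cost(patch, p, method) for p in ref_patches])
```

  • evaluating `merged_cost` once per candidate inverse depth produces the full i-th layer matching result for a sampling point.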
  • suppose the reference frames corresponding to the current frame include 2 frames, and each frame corresponds to a set of 2-layer reference images, that is, there are two first-layer reference images; the image depth estimation device projects a sampling point of the first-layer current image of the current frame, according to its corresponding inverse depth candidate values d_1, d_2, and d_3, into the two first-layer reference images respectively, obtaining three projection points in each of the two first-layer reference images, 6 projection points in total, as its corresponding first-layer projection points;
  • the projection point projected into one first-layer reference image according to d_1 and the projection point projected into the other first-layer reference image according to d_1 can be substituted into formula (3), with m equal to 2, to obtain the matching result for the inverse depth value d_1; similarly, the matching results for the inverse depth candidate values d_2 and d_3 can also be obtained, forming the corresponding i-th layer matching result.
  • the image depth estimation device determines the inverse depth value of each sampling point among the i-th layer sampling points according to the i-th layer matching results to obtain the i-th layer inverse depth values, including:
  • selecting a target matching result from the i-th layer matching result corresponding to a target sampling point, where the target sampling point is any sampling point among the i-th layer sampling points;
  • determining, among the i-th layer projection points corresponding to the target sampling point, the projection point corresponding to the target matching result as the target projection point; determining, among the inverse depth candidate values, the inverse depth value corresponding to the target projection point as the inverse depth value of the target sampling point; and continuing to determine the inverse depth values of the sampling points other than the target sampling point among the i-th layer sampling points, until the inverse depth value of each sampling point is determined, so as to obtain the i-th layer inverse depth values.
  • the image depth estimation device may determine the inverse depth value of any sampling point among the i-th layer sampling points according to formula (4), which selects the candidate whose matching result value is the smallest; therefore, the corresponding inverse depth value d_z is actually determined as the inverse depth value of the sampling point.
  • the above matching process for a sampling point in fact determines, for that one sampling point, the degree of difference from the projection points obtained with different inverse depth values; determining the inverse depth value with formula (4) actually selects the minimum matching result value, which represents the smallest difference between the corresponding projection point and the sampling point; therefore, the inverse depth value used by that projection point can be determined as the inverse depth value of the sampling point, yielding an accurate inverse depth value for the sampling point.
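  • this winner-take-all selection expressed by formula (4) amounts to picking the candidate with the smallest matching result value, which can be sketched as (hypothetical name):

```python
import numpy as np

def best_inverse_depth(candidates, costs):
    """Formula (4) sketch: return the candidate inverse depth whose matching
    penalty is smallest, together with its index z."""
    z = int(np.argmin(costs))
    return candidates[z], z
```

  • the returned index z is also what the interpolation step later uses to look up the two neighboring candidates and their penalties.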
  • the image depth estimation device may also determine the inverse depth value of each sampling point among the i-th layer sampling points in other ways; for example, it may select, from the matching results corresponding to each sampling point, the partial results within a specific range, then randomly select one matching result from those partial results, and determine the inverse depth value used by the projection point corresponding to the randomly selected matching result as the inverse depth value of the sampling point.
  • the process is the same as obtaining the inverse depth value of the i-th layer, which will not be repeated here.
  • the image depth estimation device obtains the k-th layer inverse depth values, that is, the inverse depth value of each sampling point of the k-th layer current image in the k-layer current image;
  • the image depth estimation device may determine the k-th layer inverse depth values as the inverse depth estimation result.
  • the inverse depth estimated in the foregoing process is a discrete value;
  • quadratic interpolation may therefore be performed to adjust the inverse depth of each sampling point; specifically, as shown in FIG. 7, after step S303, S305 to S306 may further be included:
  • the k-th layer inverse depth values include the inverse depth value corresponding to each sampling point among the k-th layer sampling points; in order to obtain a more accurate result, interpolation optimization can be performed on the k-th layer inverse depth values, that is, the inverse depth value of each sampling point among the k-th layer sampling points is adjusted and optimized separately, to obtain the optimized k-th layer inverse depth values.
  • the image depth estimation device performs interpolation optimization on the k-th layer inverse depth values to obtain the optimized k-th layer inverse depth values, including: for each inverse depth value among the k-th layer inverse depth values, selecting the adjacent inverse depth values of that inverse depth value from the inverse depth candidate values of the corresponding sampling point among the k-th layer sampling points, where the k-th layer sampling points are the pixel points obtained by sampling the k-th layer current image in the k-layer current image; obtaining the matching results corresponding to the adjacent inverse depth values; and performing interpolation optimization on each inverse depth value among the k-th layer inverse depth values based on the adjacent inverse depth values and their corresponding matching results, to obtain the optimized k-th layer inverse depth values.
  • the k-th layer inverse depth values include the inverse depth value corresponding to each sampling point among the k-th layer sampling points, and the image depth estimation device needs to perform interpolation optimization on the inverse depth value corresponding to each sampling point, so as to obtain the interpolation optimization result as the inverse depth estimation result of the current frame.
  • for any sampling point among the k-th layer sampling points whose corresponding inverse depth value is d_z, interpolation optimization can be performed according to formula (5):
  • d_opt = d_z + 0.5 × (d_z − d_{z−1}) × (C_{z+1} − C_{z−1}) / (C_{z+1} + C_{z−1} − 2 × C_z)    (5)
  • d_{z−1} is the previous inverse depth value adjacent to d_z among the inverse depth candidate values corresponding to the sampling point, and d_{z+1} is the next inverse depth value adjacent to d_z;
  • C_{z−1}, C_z, and C_{z+1} are the matching result values corresponding to d_{z−1}, d_z, and d_{z+1} respectively, which can be calculated as described above and will not be repeated here.
  • the image depth estimation device performs interpolation optimization on the k-th layer inverse depth values according to formula (5); since, in the k-layer current image, the k-th layer current image is actually the current frame, this means that after the inverse depth value of each sampling point in the current frame is obtained, it is further optimized, so that a more accurate inverse depth value is obtained for each sampling point in the current frame, that is, the inverse depth estimation result of the current frame is obtained.
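  • formula (5) can be sketched as follows, where `step` stands for d_z − d_{z−1} (the candidate spacing) and the penalties at the three neighboring candidates are passed in; the function name is hypothetical and a zero denominator is guarded:

```python
def refine_inverse_depth(d, step, c_prev, c_best, c_next):
    """Formula (5) sketch: sub-candidate refinement of the winning inverse
    depth d, given the penalties C_{z-1}, C_z, C_{z+1} at the adjacent
    candidates d_{z-1}, d_z, d_{z+1}."""
    denom = c_next + c_prev - 2.0 * c_best
    if denom == 0:
        return d  # flat cost neighbourhood: keep the discrete value
    return d + 0.5 * step * (c_next - c_prev) / denom
```

  • with a symmetric penalty profile (c_prev == c_next) the refinement leaves d unchanged, which matches the intuition that the discrete winner already sits at the minimum.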
  • the image depth estimation device can also obtain three or more adjacent inverse depth values and their corresponding matching results, and use a polynomial similar to formula (5) to perform interpolation optimization.
  • for the inverse depth value of each sampling point among the k-th layer sampling points, the image depth estimation device can also obtain the two inverse depth values adjacent to the determined inverse depth value among the corresponding inverse depth candidate values, and use the average of these three inverse depth values as the final inverse depth value of the sampling point, thereby realizing optimization of the inverse depth value.
  • after the image depth estimation device obtains the optimized k-th layer inverse depth values, it can determine the optimized k-th layer inverse depth values as the inverse depth estimation result.
  • the image depth estimation apparatus may further perform the following steps:
  • after the image depth estimation device obtains the inverse depth estimation result of the current frame, it can determine the depth estimation result of the current frame according to the inverse depth estimation result; the depth estimation result can be used to implement three-dimensional scene construction based on the current frame.
  • after obtaining the inverse depth estimation result of the current frame, that is, the interpolation-optimized inverse depth value of each sampling point in the current frame, the image depth estimation device can obtain the corresponding depth value by taking the reciprocal for each sampling point, thereby obtaining the depth estimation result of the current frame; for example, if the interpolation-optimized inverse depth value of a certain sampling point in the current frame is A, its depth value is 1/A.
  • The final depth determined by the above image depth estimation method is the z-axis coordinate value of each sampling point of the current frame in the camera coordinate system, so no additional coordinate transformation is required.
  • The above image depth estimation method may be applied in constructing a three-dimensional scene based on the current frame. For example, when a user shoots a scene with a mobile device camera, the above method can be used to obtain the depth estimation result of the current frame and then reconstruct the 3D structure of the video scene; when the user clicks on a certain position in the current frame of the video on the mobile device,
  • the depth estimation result of the current frame determined by the above method can be used to find, through line-of-sight intersection at the click position, the anchor point at which to place a virtual object, so as to achieve an augmented reality effect in which the virtual object and the real scene are fused with geometric consistency;
  • the above image depth estimation method can also be used to recover the three-dimensional scene structure in the video and to calculate the occlusion relationship between the real scene and virtual objects, so as to achieve an augmented reality effect of consistent fusion.
  • The above step S104 may also be skipped, and the inverse depth estimation result may be used for other image processing not aimed at three-dimensional scene construction; for example, the change in the depth information of the image sampling points may be output directly to other devices for data processing such as target recognition or three-dimensional point distance calculation.
  • The embodiments of the present disclosure provide an image depth estimation method: obtain the reference frame corresponding to the current frame and the inverse depth space range of the current frame; perform pyramid down-sampling on the current frame and the reference frame, respectively, to obtain the k-layer current image corresponding to the current frame and the k-layer reference image corresponding to the reference frame, where k is a natural number greater than or equal to 2; and, based on the k-layer reference image and the inverse depth space range, perform iterative inverse depth estimation on the k-layer current image to obtain the inverse depth estimation result of the current frame.
  • The technical solution provided by the present disclosure performs iterative inverse depth estimation on the multi-layer current image combined with the multi-layer reference image, reducing the inverse depth search space layer by layer, to determine the depth estimation result of the current frame.
  • The depth estimation result is the z-axis coordinate value of each pixel of the current frame in the camera coordinate system, requiring no additional coordinate transformation, so the depth estimation result of the image can be obtained in real time with high accuracy.
  • FIG. 8 is a schematic structural diagram of an image depth estimation device provided by an embodiment of the disclosure. As shown in FIG. 8, the device includes:
  • the acquiring part 801 is configured to acquire the reference frame corresponding to the current frame and the inverse depth space range of the current frame;
  • the down-sampling part 802 is configured to perform pyramid down-sampling on the current frame and the reference frame, respectively, to obtain the k-layer current image corresponding to the current frame and the k-layer reference image corresponding to the reference frame; k is a natural number greater than or equal to 2;
  • the estimation part 803 is configured to perform inverse depth estimation iterative processing on the k-layer current image based on the k-layer reference image and the inverse depth space range to obtain the inverse depth estimation result of the current frame;
  • the image depth estimation apparatus of the embodiment of the present disclosure may further include: a determining part 804, configured to determine the depth estimation result of the current frame according to the inverse depth estimation result; the depth estimation result may be used to implement three-dimensional scene construction based on the current frame.
  • the acquiring part 801 is specifically configured to acquire at least two frames to be screened; select, from the at least two frames to be screened, at least one frame that satisfies a preset angle constraint condition with the current frame; and use the at least one frame as the reference frame.
  • the preset angle constraint conditions include:
  • the angle formed by the lines from the pose center corresponding to the current frame and the pose center corresponding to the reference frame to a target point is within a first preset angle range;
  • the target point is the midpoint of the line connecting the average depth point corresponding to the current frame and the average depth point corresponding to the reference frame;
  • the included angle between the optical axes corresponding to the current frame and the reference frame is within a second preset angle range;
  • the included angle between the vertical axes corresponding to the current frame and the reference frame is within a third preset angle range.
  • the estimation part 803 is specifically configured to: determine, based on the k-layer current image and the inverse depth space range, the inverse depth candidate values corresponding to each sampling point in the i-th layer sampling points, the i-th layer sampling points being pixel points obtained by sampling the i-th layer current image among the k-layer current images, i being a natural number greater than or equal to 1 and less than or equal to k; determine, according to the inverse depth candidate values corresponding to each sampling point in the i-th layer sampling points and the i-th layer reference image among the k-layer reference images, the inverse depth value of each sampling point, obtaining the i-th layer inverse depth values; let i = i + 1 and continue to perform inverse depth estimation on the (i+1)-th layer current image, whose resolution is higher than that of the i-th layer current image, until i = k, obtaining the k-th layer inverse depth values; and determine the k-th layer inverse depth values as the inverse depth estimation result.
  • the estimation part 803 is specifically configured to: perform interval division on the inverse depth space range and select one inverse depth value in each division interval, obtaining multiple initial inverse depth values; determine the multiple initial inverse depth values as the inverse depth candidate values corresponding to each sampling point in the first layer sampling points; in the case that i is not equal to 1, obtain the (i-1)-th layer sampling points and the (i-1)-th layer inverse depth values from the k-layer current images; and determine, based on the (i-1)-th layer inverse depth values, the (i-1)-th layer sampling points, and the multiple initial inverse depth values, the inverse depth candidate values corresponding to each sampling point in the i-th layer sampling points.
  • the estimation part 803 is specifically configured to: determine, from the (i-1)-th layer sampling points, a second sampling point closest to a first sampling point, and at least two third sampling points adjacent to the second sampling point, the first sampling point being any one of the i-th layer sampling points; obtain, according to the (i-1)-th layer inverse depth values, the inverse depth value of each of the at least two third sampling points and the inverse depth value of the second sampling point, resulting in at least three inverse depth values; determine the maximum inverse depth value and the minimum inverse depth value among the at least three inverse depth values; select, from the multiple initial inverse depth values, the inverse depth values lying within the range between the minimum and maximum inverse depth values, and determine the selected inverse depth values as the inverse depth candidate values corresponding to the first sampling point; and continue to determine the inverse depth candidate values corresponding to the sampling points other than the first sampling point, until the inverse depth candidate values corresponding to each sampling point in the i-th layer sampling points are determined.
  • the estimation part 803 is specifically configured to: for each sampling point in the i-th layer sampling points, project the sampling point into the i-th layer reference image according to each inverse depth value among its corresponding inverse depth candidate values, obtaining the i-th layer projection points corresponding to each sampling point; perform block matching according to the i-th layer sampling points and the i-th layer projection points to obtain the i-th layer matching result corresponding to each sampling point; and determine, according to the i-th layer matching results, the inverse depth value of each sampling point in the i-th layer sampling points, obtaining the i-th layer inverse depth values.
  • the estimation part 803 is specifically configured to: using a preset window, select from the i-th layer current image a first image block centered on a sampling point to be matched, and select from the i-th layer reference image multiple second image blocks, each centered on one of the i-th layer projection points corresponding to the sampling point to be matched, the sampling point to be matched being any one of the i-th layer sampling points; compare the first image block with each of the multiple second image blocks to obtain multiple matching results, and determine the multiple matching results as the i-th layer matching result corresponding to the sampling point to be matched; and continue to determine the i-th layer matching results corresponding to the sampling points other than the sampling point to be matched, until the i-th layer matching result corresponding to each sampling point in the i-th layer sampling points is obtained.
  • the estimation part 803 is specifically configured to: select a target matching result from the i-th layer matching result corresponding to a target sampling point, the target sampling point being any sampling point in the i-th layer sampling points; determine, among the i-th layer projection points corresponding to the target sampling point, the projection point corresponding to the target matching result as a target projection point; determine, among the candidate inverse depth values, the inverse depth value corresponding to the target projection point as the inverse depth value of the target sampling point; and continue to determine the inverse depth values of the sampling points other than the target sampling point, until the inverse depth value of each sampling point in the i-th layer sampling points is determined, obtaining the i-th layer inverse depth values.
  • the estimation part 803 is further configured to perform interpolation optimization on the k-th layer inverse depth values to obtain optimized k-th layer inverse depth values, and determine the optimized k-th layer inverse depth values as the inverse depth estimation result.
  • the estimation part 803 is specifically configured to: for each inverse depth value among the k-th layer inverse depth values, select the adjacent inverse depth values from the candidate inverse depth values of the corresponding sampling point in the k-th layer sampling points, the k-th layer sampling points being the pixel points obtained by sampling the k-th layer current image among the k-layer current images; obtain the matching results corresponding to the adjacent inverse depth values; and perform interpolation optimization on each inverse depth value among the k-th layer inverse depth values based on the adjacent inverse depth values and their corresponding matching results, obtaining the optimized k-th layer inverse depth values.
  • The embodiment of the present disclosure provides an image depth estimation device, which obtains the reference frame corresponding to the current frame and the inverse depth space range of the current frame; performs pyramid down-sampling on the current frame and the reference frame, respectively, to obtain the k-layer current image corresponding to the current frame and the k-layer reference image corresponding to the reference frame, where k is a natural number greater than or equal to 2; and, based on the k-layer reference image and the inverse depth space range, performs iterative inverse depth estimation on the k-layer current image to obtain the inverse depth estimation result of the current frame.
  • The image depth estimation device performs iterative inverse depth estimation on the multi-layer current image combined with the multi-layer reference image, reducing the inverse depth search space layer by layer, to determine the depth estimation result of the current frame.
  • The final depth estimation result is the z-axis coordinate value of each pixel of the current frame in the camera coordinate system, requiring no additional coordinate transformation, so the depth estimation result of the image can be obtained in real time with high accuracy.
  • FIG. 9 is a schematic structural diagram of an electronic device provided by an embodiment of the disclosure. As shown in FIG. 9, the electronic device includes: a processor 901, a memory 902, and a communication bus 903; wherein,
  • the communication bus 903 is configured to implement connection and communication between the processor 901 and the memory 902;
  • the processor 901 is configured to execute the image depth estimation program stored in the memory 902, so as to implement the above image depth estimation method.
  • the electronic device is a mobile phone or a tablet computer. Of course, it may also be other types of devices, which is not limited in the embodiment of the present disclosure.
  • the embodiments of the present disclosure also provide a computer-readable storage medium, the computer-readable storage medium stores one or more programs, and the one or more programs can be executed by one or more processors to realize the above Image depth estimation method.
  • the computer-readable storage medium may be a volatile memory, such as random-access memory (Random-Access Memory, RAM); or a non-volatile memory, such as read-only memory (Read-Only Memory, ROM), flash memory, hard disk drive (Hard Disk Drive, HDD) or solid-state drive (Solid-State Drive, SSD); it may also be a device including one or any combination of the above memories, such as a mobile phone, computer, tablet device, or personal digital assistant.
  • the embodiments of the present disclosure also provide a computer program, including computer-readable code, which when executed by a processor, implements the steps corresponding to the above-mentioned image depth estimation method.
  • the embodiments of the present disclosure can be provided as methods, systems, or computer program products. Therefore, the present disclosure may adopt the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware. Moreover, the present disclosure may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, optical storage, etc.) containing computer-usable program codes.
  • These computer program instructions can also be stored in a computer-readable memory capable of guiding a computer or other programmable signal processing equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions can also be loaded onto a computer or other programmable signal processing equipment, so that a series of operational steps are executed on the computer or other programmable equipment to produce computer-implemented processing; the instructions executed on the computer or other programmable equipment thus provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • The technical solution provided by the present disclosure performs iterative inverse depth estimation on the multi-layer current image combined with the multi-layer reference image, reducing the inverse depth search space layer by layer, to determine the inverse depth estimation result of the current frame.
  • The inverse depth estimation result is the reciprocal of the z-axis coordinate value of each pixel of the current frame in the camera coordinate system, so no additional coordinate transformation is required; reducing the inverse depth search space layer by layer helps reduce the computation of inverse depth estimation and improve estimation speed, so that the depth estimation result of the image can be obtained in real time with high accuracy.

Abstract

An image depth estimation method, comprising: obtaining reference frames (I1, I2, I3, I4, I5) corresponding to a current frame (It) and an inverse depth space range of the current frame (It) (S101); performing pyramid down-sampling on the current frame (It) and the reference frames (I1, I2, I3, I4, I5), respectively, to obtain a k-layer current image corresponding to the current frame (It) and k-layer reference images corresponding to the reference frames (I1, I2, I3, I4, I5), k being a natural number greater than or equal to 2 (S102); and, based on the k-layer reference images and the inverse depth space range, performing iterative inverse depth estimation on the k-layer current image to obtain an inverse depth estimation result of the current frame (It) (S103). The image depth estimation method can obtain the depth estimation result of an image in real time, and the accuracy of the depth estimation result is high.

Description

Image Depth Estimation Method and Device, Electronic Equipment, and Storage Medium
Cross-Reference to Related Applications
This application is based on, and claims priority to, the Chinese patent application with application number 201910621318.4 filed on July 10, 2019, entitled "Image Depth Estimation Method and Device, Electronic Equipment, and Storage Medium", the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of computer vision technology, and in particular to an image depth estimation method and device, electronic equipment, and a storage medium.
Background
Depth estimation of images is an important problem in the field of computer vision. When the depth information of an image cannot be obtained directly, only a depth estimation method makes the three-dimensional reconstruction of the scene possible, which in turn serves applications such as augmented reality and games.
At present, depth estimation methods based on computer vision can be divided into active vision methods and passive vision methods. An active vision method emits a controllable light beam toward the measured object, captures the image formed by the beam on the object surface, and calculates the distance of the measured object through geometric relations. Passive vision methods include stereo vision, focus-based methods, defocus-based methods, and the like, and mainly determine depth information from the two-dimensional image information acquired by one or more cameras.
Summary
The embodiments of the present disclosure are intended to provide an image depth estimation method and device, electronic equipment, and a storage medium.
The technical solutions of the embodiments of the present disclosure are implemented as follows.
An embodiment of the present disclosure provides an image depth estimation method, the method comprising:
obtaining a reference frame corresponding to a current frame and an inverse depth space range of the current frame;
performing pyramid down-sampling on the current frame and the reference frame, respectively, to obtain a k-layer current image corresponding to the current frame and a k-layer reference image corresponding to the reference frame, k being a natural number greater than or equal to 2;
based on the k-layer reference image and the inverse depth space range, performing iterative inverse depth estimation on the k-layer current image to obtain an inverse depth estimation result of the current frame.
It can be understood that, in the embodiments of the present disclosure, the current frame and its corresponding reference frame are down-sampled, and the resulting multi-layer current image is combined with the multi-layer reference image for iterative inverse depth estimation to determine the inverse depth estimation result of the current frame. Since the inverse depth search space is reduced layer by layer while determining the inverse depth estimation result, the computation of inverse depth estimation is reduced and the estimation speed is improved, so the inverse depth estimation result can be obtained in real time.
In the above image depth estimation method, obtaining the reference frame corresponding to the current frame comprises:
obtaining at least two frames to be screened;
selecting, from the at least two frames to be screened, at least one frame that satisfies a preset angle constraint condition with the current frame, and using the at least one frame as the reference frame.
It can be understood that, in the embodiments of the present disclosure, selecting the reference frame from at least two frames to be screened according to the preset angle constraint condition makes it possible to select, to a certain extent, frames of better quality that are suitable for matching with the current frame, thereby improving the accuracy of estimation in the subsequent depth estimation process.
In the above image depth estimation method, the preset angle constraint condition comprises:
the angle formed by the lines connecting the pose center corresponding to the current frame and the pose center corresponding to the reference frame with a target point is within a first preset angle range, the target point being the midpoint of the line connecting the average depth point corresponding to the current frame and the average depth point corresponding to the reference frame;
the included angle between the optical axes corresponding to the current frame and the reference frame is within a second preset angle range;
the included angle between the vertical axes corresponding to the current frame and the reference frame is within a third preset angle range.
It can be understood that, in the embodiments of the present disclosure, the first angle condition constrains the distance from the current scene to the two cameras: if the angle is too large, the scene is too close and the overlap between the two frames will be low; if the angle is too small, the scene is too far away, the parallax is small, and the error will be large (an overly small angle can also occur when the cameras are particularly close, in which case the error is likewise large). The second angle condition ensures that the two cameras have a sufficient common viewing area. The third angle condition prevents the camera from rotating around the optical axis, which would affect the subsequent depth estimation computation. Using frames that satisfy all three angle conditions as reference frames helps improve the accuracy of the depth estimation of the current frame.
In the above image depth estimation method, performing iterative inverse depth estimation on the k-layer current image based on the k-layer reference image and the inverse depth space range to obtain the inverse depth estimation result of the current frame comprises:
determining, based on the k-layer current image and the inverse depth space range, the inverse depth candidate values corresponding to each sampling point in the i-th layer sampling points, the i-th layer sampling points being pixel points obtained by sampling the i-th layer current image among the k-layer current images, i being a natural number greater than or equal to 1 and less than or equal to k;
determining, according to the inverse depth candidate values corresponding to each sampling point in the i-th layer sampling points and the i-th layer reference image among the k-layer reference images, the inverse depth value of each sampling point in the i-th layer sampling points, obtaining the i-th layer inverse depth values;
letting i = i + 1, and continuing to perform inverse depth estimation on the (i+1)-th layer current image, whose resolution is higher than that of the i-th layer current image, until i = k, obtaining the k-th layer inverse depth values;
determining the k-th layer inverse depth values as the inverse depth estimation result.
It can be understood that, in the embodiments of the present disclosure, iterative inverse depth estimation is performed on the k-layer current image based on the k-layer reference image and the inverse depth space range; for example, the iteration may start from the top-layer (layer 1) current image (the image with the fewest pixels) and proceed layer by layer toward the bottom layer, shrinking the inverse depth search space layer by layer, thereby effectively reducing the computation of inverse depth estimation.
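The coarse-to-fine control flow described above can be sketched as a small self-contained toy. All names are ours, and the real block-matching penalty is replaced by a synthetic cost (distance to a known ground-truth inverse depth), so this mirrors only the layer-by-layer shrinking of the candidate set, not the actual matching:

```python
import numpy as np

def coarse_to_fine(truth_pyramid, init_candidates, halo=1):
    """Toy coarse-to-fine sweep (layer 1 first): at the coarsest layer every
    pixel tries all initial candidates; at each finer layer a pixel keeps
    only the candidates lying between the min and max inverse depth of its
    neighbourhood in the previous layer, mirroring the shrinking search
    space. The 'matching cost' is synthetic: |candidate - ground truth|."""
    prev = None
    for truth in truth_pyramid:                    # coarsest -> finest layer
        h, w = truth.shape
        est = np.empty((h, w))
        for y in range(h):
            for x in range(w):
                if prev is None:
                    cands = init_candidates
                else:
                    py = min(y // 2, prev.shape[0] - 1)
                    px = min(x // 2, prev.shape[1] - 1)
                    nb = prev[max(py - halo, 0):py + halo + 1,
                              max(px - halo, 0):px + halo + 1]
                    sel = init_candidates[(init_candidates >= nb.min())
                                          & (init_candidates <= nb.max())]
                    cands = sel if sel.size else init_candidates
                est[y, x] = cands[np.argmin(np.abs(cands - truth[y, x]))]
        prev = est
    return prev
```

At the finer layer each pixel evaluates only the candidates surviving the coarser layer's neighbourhood bounds, which is the source of the computational saving the text describes.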
In the above image depth estimation method, determining the inverse depth candidate values corresponding to each sampling point in the i-th layer sampling points based on the k-layer current image and the inverse depth space range comprises:
performing interval division on the inverse depth space range and selecting one inverse depth value in each division interval, obtaining multiple initial inverse depth values;
determining the multiple initial inverse depth values as the inverse depth candidate values corresponding to each sampling point in the first layer sampling points;
in the case that i is not equal to 1, obtaining the (i-1)-th layer sampling points and the (i-1)-th layer inverse depth values from the k-layer current images;
determining, based on the (i-1)-th layer inverse depth values, the (i-1)-th layer sampling points, and the multiple initial inverse depth values, the inverse depth candidate values corresponding to each sampling point in the i-th layer sampling points.
It can be understood that, in the embodiments of the present disclosure, the inverse depth space range is divided into intervals and inverse depth values are selected within different intervals, so that each interval contains one inverse depth value serving as an inverse depth candidate value. That is, each sampling point has one inverse depth candidate value in each inverse depth sub-range; when the inverse depth value of a sampling point is subsequently determined, inverse depth values from all sub-ranges can take part in the estimation, ensuring that the estimation process covers the entire inverse depth space range, so that an accurate inverse depth value can finally be estimated.
In the above image depth estimation method, determining the inverse depth candidate values corresponding to each sampling point in the i-th layer sampling points based on the (i-1)-th layer inverse depth values, the (i-1)-th layer sampling points, and the multiple initial inverse depth values comprises:
determining, from the (i-1)-th layer sampling points, a second sampling point closest to a first sampling point, and at least two third sampling points adjacent to the second sampling point, the first sampling point being any sampling point in the i-th layer sampling points;
obtaining, according to the (i-1)-th layer inverse depth values, the inverse depth value of each of the at least two third sampling points and the inverse depth value of the second sampling point, resulting in at least three inverse depth values;
determining the maximum inverse depth value and the minimum inverse depth value among the at least three inverse depth values;
selecting, from the multiple initial inverse depth values, the inverse depth values lying within the range between the maximum inverse depth value and the minimum inverse depth value, and determining the selected inverse depth values as the inverse depth candidate values corresponding to the first sampling point;
continuing to determine the inverse depth candidate values corresponding to the sampling points in the i-th layer sampling points other than the first sampling point, until the inverse depth candidate values corresponding to each sampling point in the i-th layer sampling points are determined.
It can be understood that, in the embodiments of the present disclosure, using the inverse depth values of the (i-1)-th layer sampling points to select the inverse depth candidate values of the i-th layer sampling points from the multiple initial inverse depth values yields more accurate candidate values for the i-th layer sampling points and, at the same time, reduces the number of candidate values and, correspondingly, the computation of inverse depth estimation.
In the above image depth estimation method, determining the inverse depth value of each sampling point in the i-th layer sampling points according to the inverse depth candidate values corresponding to each sampling point and the i-th layer reference image among the k-layer reference images, obtaining the i-th layer inverse depth values, comprises:
for each sampling point in the i-th layer sampling points, projecting the sampling point into the i-th layer reference image according to each inverse depth value among its corresponding inverse depth candidate values, obtaining the i-th layer projection points corresponding to each sampling point in the i-th layer sampling points;
performing block matching according to the i-th layer sampling points and the i-th layer projection points, obtaining the i-th layer matching result corresponding to each sampling point in the i-th layer sampling points;
determining, according to the i-th layer matching results, the inverse depth value of each sampling point in the i-th layer sampling points, obtaining the i-th layer inverse depth values.
It can be understood that, in the embodiments of the present disclosure, each i-th layer sampling point is matched against its corresponding i-th layer projection points, so as to determine how much the sampling point differs from the projection points obtained with different inverse depth values; the inverse depth value of each i-th layer sampling point can therefore be selected accurately.
In the above image depth estimation method, performing block matching according to the i-th layer sampling points and the i-th layer projection points to obtain the i-th layer matching result corresponding to each sampling point comprises:
using a preset window, selecting from the i-th layer current image a first image block centered on a sampling point to be matched, and selecting from the i-th layer reference image multiple second image blocks, each centered on one of the i-th layer projection points corresponding to the sampling point to be matched, the sampling point to be matched being any sampling point in the i-th layer sampling points;
comparing the first image block with each of the multiple second image blocks to obtain multiple matching results, and determining the multiple matching results as the i-th layer matching result corresponding to the sampling point to be matched;
continuing to determine the i-th layer matching results corresponding to the sampling points in the i-th layer sampling points other than the sampling point to be matched, until the i-th layer matching result corresponding to each sampling point in the i-th layer sampling points is obtained.
It can be understood that, in the embodiments of the present disclosure, block matching is used to match sampling points with projection points; the resulting matching result is in fact a matching penalty, which characterizes the degree of difference between the projection point and the sampling point and, correspondingly, reflects the extent to which the inverse depth value used to project that projection point can serve as the inverse depth value of the sampling point. The result can therefore be used to subsequently select the inverse depth value of the sampling point fairly accurately.
In the above image depth estimation method, determining the inverse depth value of each sampling point in the i-th layer sampling points according to the i-th layer matching results, obtaining the i-th layer inverse depth values, comprises:
selecting a target matching result from the i-th layer matching result corresponding to a target sampling point, the target sampling point being any sampling point in the i-th layer sampling points;
determining, among the i-th layer projection points corresponding to the target sampling point, the projection point corresponding to the target matching result as a target projection point;
determining, among the inverse depth candidate values, the inverse depth value corresponding to the target projection point as the inverse depth value of the target sampling point;
continuing to determine the inverse depth values of the sampling points in the i-th layer sampling points other than the target sampling point, until the inverse depth value of each sampling point in the i-th layer sampling points is determined, obtaining the i-th layer inverse depth values.
It can be understood that, in the embodiments of the present disclosure, the above matching process for a sampling point in fact determines, for one sampling point, the degree of difference from the projection points obtained with different inverse depth values. Selecting the smallest matching result indicates that the corresponding projection point differs least from the sampling point; the inverse depth value used for that projection point can therefore be determined as the inverse depth value of the sampling point, yielding an accurate inverse depth value for the sampling point.
In the above image depth estimation method, after obtaining the k-th layer inverse depth values, the method further comprises:
performing interpolation optimization on the k-th layer inverse depth values to obtain optimized k-th layer inverse depth values;
determining the optimized k-th layer inverse depth values as the inverse depth estimation result.
It can be understood that, in the embodiments of the present disclosure, the depths estimated in the above process are discrete values; therefore, quadratic interpolation may additionally be performed to adjust the inverse depth of each sampling point, so as to obtain more accurate inverse depth values.
In the above image depth estimation method, performing interpolation optimization on the k-th layer inverse depth values to obtain the optimized k-th layer inverse depth values comprises:
for each inverse depth value among the k-th layer inverse depth values, selecting the adjacent inverse depth values from the candidate inverse depth values of the corresponding sampling point in the k-th layer sampling points, the k-th layer sampling points being pixel points obtained by sampling the k-th layer current image among the k-layer current images;
obtaining the matching results corresponding to the adjacent inverse depth values;
performing interpolation optimization on each inverse depth value among the k-th layer inverse depth values based on the adjacent inverse depth values and their corresponding matching results, obtaining the optimized k-th layer inverse depth values.
It can be understood that, in the embodiments of the present disclosure, using the determined inverse depth value of a sampling point, its adjacent inverse depth values, and the matching results corresponding to those adjacent values, the inverse depth value of the sampling point can be adjusted by interpolation more precisely, and the adjustment is simple and fast.
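The patent's own formula (5) is not reproduced in this excerpt; one common choice for this kind of interpolation optimization is a three-point parabolic fit through the selected inverse depth, its two adjacent candidates, and their matching penalties, sketched here under the assumption of evenly spaced candidates (names are ours):

```python
def refine_inverse_depth(d_m, d_0, d_p, c_m, c_0, c_p):
    """Sub-sample refinement of a discrete minimum: given the selected
    inverse depth d_0, its adjacent candidates d_m < d_0 < d_p (assumed
    evenly spaced) and their matching penalties c_m, c_0, c_p, fit a
    parabola to the three (depth, penalty) pairs and return its vertex."""
    denom = c_m - 2.0 * c_0 + c_p
    if denom <= 0:                      # degenerate: no strict local minimum
        return d_0
    h = 0.5 * (d_p - d_m)               # candidate spacing
    return d_0 + 0.5 * h * (c_m - c_p) / denom
```

With an exactly parabolic penalty the vertex is recovered exactly, which is why this adjustment is both simple and fast.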
An embodiment of the present disclosure provides an image depth estimation device, comprising:
an acquiring part, configured to acquire a reference frame corresponding to a current frame and an inverse depth space range of the current frame;
a down-sampling part, configured to perform pyramid down-sampling on the current frame and the reference frame, respectively, to obtain a k-layer current image corresponding to the current frame and a k-layer reference image corresponding to the reference frame, k being a natural number greater than or equal to 2;
an estimation part, configured to perform iterative inverse depth estimation on the k-layer current image based on the k-layer reference image and the inverse depth space range, to obtain an inverse depth estimation result of the current frame.
In the above image depth estimation device, the acquiring part is specifically configured to acquire at least two frames to be screened; select, from the at least two frames to be screened, at least one frame that satisfies a preset angle constraint condition with the current frame; and use the at least one frame as the reference frame.
In the above image depth estimation device, the preset angle constraint condition comprises:
the angle formed by the lines connecting the pose center corresponding to the current frame and the pose center corresponding to the reference frame with a target point is within a first preset angle range, the target point being the midpoint of the line connecting the average depth point corresponding to the current frame and the average depth point corresponding to the reference frame;
the included angle between the optical axes corresponding to the current frame and the reference frame is within a second preset angle range;
the included angle between the vertical axes corresponding to the current frame and the reference frame is within a third preset angle range.
In the above image depth estimation device, the estimation part is specifically configured to: determine, based on the k-layer current image and the inverse depth space range, the inverse depth candidate values corresponding to each sampling point in the i-th layer sampling points, the i-th layer sampling points being pixel points obtained by sampling the i-th layer current image among the k-layer current images, i being a natural number greater than or equal to 1 and less than or equal to k; determine, according to the inverse depth candidate values corresponding to each sampling point in the i-th layer sampling points and the i-th layer reference image among the k-layer reference images, the inverse depth value of each sampling point in the i-th layer sampling points, obtaining the i-th layer inverse depth values; let i = i + 1 and continue to perform inverse depth estimation on the (i+1)-th layer current image, whose resolution is higher than that of the i-th layer current image, until i = k, obtaining the k-th layer inverse depth values; and determine the k-th layer inverse depth values as the inverse depth estimation result.
In the above image depth estimation device, the estimation part is specifically configured to: perform interval division on the inverse depth space range and select one inverse depth value in each division interval, obtaining multiple initial inverse depth values; determine the multiple initial inverse depth values as the inverse depth candidate values corresponding to each sampling point in the first layer sampling points; in the case that i is not equal to 1, obtain the (i-1)-th layer sampling points and the (i-1)-th layer inverse depth values from the k-layer current images; and determine, based on the (i-1)-th layer inverse depth values, the (i-1)-th layer sampling points, and the multiple initial inverse depth values, the inverse depth candidate values corresponding to each sampling point in the i-th layer sampling points.
In the above image depth estimation device, the estimation part is specifically configured to: determine, from the (i-1)-th layer sampling points, a second sampling point closest to a first sampling point, and at least two third sampling points adjacent to the second sampling point, the first sampling point being any sampling point in the i-th layer sampling points; obtain, according to the (i-1)-th layer inverse depth values, the inverse depth value of each of the at least two third sampling points and the inverse depth value of the second sampling point, resulting in at least three inverse depth values; determine the maximum inverse depth value and the minimum inverse depth value among the at least three inverse depth values; select, from the multiple initial inverse depth values, the inverse depth values lying within the range between the maximum inverse depth value and the minimum inverse depth value, and determine the selected inverse depth values as the inverse depth candidate values corresponding to the first sampling point; and continue to determine the inverse depth candidate values corresponding to the sampling points in the i-th layer sampling points other than the first sampling point, until the inverse depth candidate values corresponding to each sampling point in the i-th layer sampling points are determined.
In the above image depth estimation device, the estimation part is specifically configured to: for each sampling point in the i-th layer sampling points, project the sampling point into the i-th layer reference image according to each inverse depth value among its corresponding inverse depth candidate values, obtaining the i-th layer projection points corresponding to each sampling point in the i-th layer sampling points; perform block matching according to the i-th layer sampling points and the i-th layer projection points, obtaining the i-th layer matching result corresponding to each sampling point in the i-th layer sampling points; and determine, according to the i-th layer matching results, the inverse depth value of each sampling point in the i-th layer sampling points, obtaining the i-th layer inverse depth values.
In the above image depth estimation device, the estimation part is specifically configured to: using a preset window, select from the i-th layer current image a first image block centered on a sampling point to be matched, and select from the i-th layer reference image multiple second image blocks, each centered on one of the i-th layer projection points corresponding to the sampling point to be matched, the sampling point to be matched being any sampling point in the i-th layer sampling points; compare the first image block with each of the multiple second image blocks to obtain multiple matching results, and determine the multiple matching results as the i-th layer matching result corresponding to the sampling point to be matched; and continue to determine the i-th layer matching results corresponding to the sampling points in the i-th layer sampling points other than the sampling point to be matched, until the i-th layer matching result corresponding to each sampling point in the i-th layer sampling points is obtained.
In the above image depth estimation device, the estimation part is specifically configured to: select a target matching result from the i-th layer matching result corresponding to a target sampling point, the target sampling point being any sampling point in the i-th layer sampling points; determine, among the i-th layer projection points corresponding to the target sampling point, the projection point corresponding to the target matching result as a target projection point; determine, among the inverse depth candidate values, the inverse depth value corresponding to the target projection point as the inverse depth value of the target sampling point; and continue to determine the inverse depth values of the sampling points in the i-th layer sampling points other than the target sampling point, until the inverse depth value of each sampling point in the i-th layer sampling points is determined, obtaining the i-th layer inverse depth values.
In the above image depth estimation device, the estimation part is further configured to perform interpolation optimization on the k-th layer inverse depth values to obtain optimized k-th layer inverse depth values, and determine the optimized k-th layer inverse depth values as the inverse depth estimation result.
In the above image depth estimation device, the estimation part is specifically configured to: for each inverse depth value among the k-th layer inverse depth values, select the adjacent inverse depth values from the candidate inverse depth values of the corresponding sampling point in the k-th layer sampling points, the k-th layer sampling points being pixel points obtained by sampling the k-th layer current image among the k-layer current images; obtain the matching results corresponding to the adjacent inverse depth values; and perform interpolation optimization on each inverse depth value among the k-th layer inverse depth values based on the adjacent inverse depth values and their corresponding matching results, obtaining the optimized k-th layer inverse depth values.
An embodiment of the present disclosure provides an electronic device, the electronic device comprising a processor, a memory, and a communication bus, wherein:
the communication bus is configured to implement connection and communication between the processor and the memory;
the processor is configured to execute an image depth estimation program stored in the memory, so as to implement the above image depth estimation method.
In the above electronic device, the electronic device is a mobile phone or a tablet computer.
An embodiment of the present disclosure provides a computer-readable storage medium storing one or more programs, which can be executed by one or more processors to implement the above image depth estimation method.
An embodiment of the present disclosure provides a computer program comprising computer-readable code which, when executed by a processor, implements the steps corresponding to the above image depth estimation method.
It can thus be seen that, in the technical solutions of the embodiments of the present disclosure, the reference frame corresponding to the current frame and the inverse depth space range of the current frame are obtained; pyramid down-sampling is performed on the current frame and the reference frame, respectively, to obtain the k-layer current image corresponding to the current frame and the k-layer reference image corresponding to the reference frame, k being a natural number greater than or equal to 2; and, based on the k-layer reference image and the inverse depth space range, iterative inverse depth estimation is performed on the k-layer current image to obtain the inverse depth estimation result of the current frame. That is, the technical solution provided by the present disclosure performs iterative inverse depth estimation on the multi-layer current image combined with the multi-layer reference image, reducing the inverse depth search space layer by layer, to determine the inverse depth estimation result of the current frame. The inverse depth estimation result is the reciprocal of the z-axis coordinate value of each pixel of the current frame in the camera coordinate system, so no additional coordinate transformation is required; reducing the inverse depth search space layer by layer helps reduce the computation of inverse depth estimation and improve estimation speed, so that the depth estimation result of the image can be obtained in real time with high accuracy.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of an image depth estimation method provided by an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of exemplary camera pose angles provided by an embodiment of the present disclosure;
FIG. 3 is a first schematic flowchart of iterative inverse depth estimation provided by an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of an exemplary 3-layer current image provided by an embodiment of the present disclosure;
FIG. 5 is a schematic flowchart of determining inverse depth candidate values provided by an embodiment of the present disclosure;
FIG. 6 is an exemplary schematic diagram of sampling point projection provided by an embodiment of the present disclosure;
FIG. 7 is a second schematic flowchart of iterative inverse depth estimation provided by an embodiment of the present disclosure;
FIG. 8 is a schematic structural diagram of an image depth estimation device provided by an embodiment of the present disclosure;
FIG. 9 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be described clearly and completely below with reference to the accompanying drawings.
An embodiment of the present disclosure provides an image depth estimation method whose execution subject may be an image depth estimation device. For example, the image depth estimation method may be executed by a terminal device, a server, or other electronic equipment, where the terminal device may be user equipment (User Equipment, UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (Personal Digital Assistant, PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. In some possible implementations, the image depth estimation method may be implemented by a processor invoking computer-readable instructions stored in a memory. FIG. 1 is a schematic flowchart of an image depth estimation method provided by an embodiment of the present disclosure. As shown in FIG. 1, the method mainly includes the following steps.
S101. Obtain the reference frame corresponding to the current frame and the inverse depth space range of the current frame.
In the embodiments of the present disclosure, the execution subject is described using the image depth estimation device as an example. First, when the image depth estimation device performs depth estimation on the current frame, it needs to obtain the reference frame corresponding to the current frame and the inverse depth space range of the current frame.
It should be noted that, in the embodiments of the present disclosure, the current frame is the image on which depth estimation is to be performed, and the reference frame is the image used for reference matching when depth estimation is performed on the current frame. There may be multiple reference frames; considering the balance between the speed and robustness of depth estimation, selecting about 5 reference frames is appropriate, and the specific reference frames of the current frame are not limited in the embodiments of the present disclosure.
Specifically, in the embodiments of the present disclosure, the image depth estimation device obtains the reference frame corresponding to the current frame through the following steps: obtaining at least two frames to be screened; selecting, from the at least two frames to be screened, at least one frame that satisfies a preset angle constraint condition with the current frame, and using the at least one frame as the reference frame.
It should be noted that, in the embodiments of the present disclosure, the image depth estimation device may also obtain the reference frame in other ways, for example, by receiving a selection instruction sent by a user for the at least two frames to be screened and using the at least one frame indicated by the selection instruction as the reference frame. The specific way of obtaining the reference frame is not limited in the embodiments of the present application.
It should be noted that, in the embodiments of the present disclosure, there may be multiple reference frames selected for the current frame from the at least two frames to be screened, and each reference frame satisfies the preset angle constraint condition with the current frame. The frames to be screened are images of the same scene as the current frame but acquired from different angles. The image depth estimation device may be equipped with a camera module through which the frames to be screened can be acquired; of course, the frames to be screened may also first be acquired by another independent camera device, from which the image depth estimation device then obtains them. The specific preset angle constraint condition may be preset in the image depth estimation device according to actual depth estimation requirements, may be stored in another device and obtained from it when depth estimation is needed, or may be obtained by receiving an angle constraint condition input by a user, which is not limited in the embodiments of the present disclosure.
Specifically, in the embodiments of the present disclosure, the preset angle constraint condition comprises: the angle formed by the lines connecting the pose center corresponding to the current frame and the pose center corresponding to the reference frame with a target point is within a first preset angle range, the target point being the midpoint of the line connecting the average depth point corresponding to the current frame and the average depth point corresponding to the reference frame; the included angle between the optical axes corresponding to the current frame and the reference frame is within a second preset angle range; the included angle between the vertical axes corresponding to the current frame and the reference frame is within a third preset angle range. The vertical axis is the Y axis of the camera coordinate system in three-dimensional space.
In some embodiments of the present disclosure, the pose center corresponding to the current frame is in fact the center (optical center) of the camera in the position and attitude at which the current frame was acquired; the pose center corresponding to the reference frame is the center (optical center) of the camera in the position and attitude at which the reference frame was acquired.
Exemplarily, in the embodiments of the present disclosure, as shown in FIG. 2, the camera pose when acquiring the current frame is defined as pose 1 and the camera pose when acquiring the reference frame as pose 2; the average depth point of the scene from the camera center (optical center) at pose 1 is point P1, the average depth point of the scene from the camera center (optical center) at pose 2 is point P2, and the midpoint of the line connecting P1 and P2 is point P. The preset angle constraint condition specifically includes three angle conditions: first, the viewing angle α formed by the lines connecting the camera centers at pose 1 and pose 2 with point P is within [5°, 45°]; second, the included angle between the optical axes of the camera at pose 1 and pose 2 is within [0°, 45°]; third, the included angle between the Y axes of the camera at pose 1 and pose 2 is within [0°, 30°]. Only frames satisfying all three angle conditions can serve as reference frames. In practice, all the above angle intervals can be adjusted.
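A sketch of checking the three angle conditions for a candidate reference frame. We assume world-from-camera rotation matrices whose columns are the camera axes (column 2 the optical/z axis, column 1 the Y axis); the function names and the default angle ranges, taken from the example above, are ours:

```python
import numpy as np

def angle_deg(v1, v2):
    # Angle between two vectors, in degrees.
    c = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def satisfies_angle_constraints(c1, r1, c2, r2, p,
                                view=(5.0, 45.0), optical=(0.0, 45.0),
                                vertical=(0.0, 30.0)):
    """Check the three preset angle conditions for camera centers c1/c2,
    world-from-camera rotations r1/r2 and target point p (the midpoint of
    the two mean-depth points). Ranges are in degrees."""
    a1 = angle_deg(c1 - p, c2 - p)      # viewing angle at the target point
    a2 = angle_deg(r1[:, 2], r2[:, 2])  # optical axes = camera z axes
    a3 = angle_deg(r1[:, 1], r2[:, 1])  # vertical axes = camera y axes
    return bool(view[0] <= a1 <= view[1]
                and optical[0] <= a2 <= optical[1]
                and vertical[0] <= a3 <= vertical[1])
```

For instance, two cameras one unit apart looking at a point five units away subtend a viewing angle of roughly 11°, which falls inside the [5°, 45°] interval.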
It should be noted that, in the embodiments of the present disclosure, the camera acquiring the current frame and the reference frame may be equipped with a positioning device, so that the corresponding poses are obtained directly when the current frame and the reference frame are acquired, and the image depth estimation device may obtain the poses from the positioning device; of course, the image depth estimation device may also compute the corresponding poses according to a pose estimation algorithm, combined with some feature points in the obtained current frame and reference frame.
It can be understood that, in the embodiments of the present disclosure, the first angle condition constrains the distance from the current scene to the two cameras: if the angle is too large, the scene is too close and the overlap between the two frames will be low; if the angle is too small, the scene is too far away, the parallax is small, and the error will be large (an overly small angle can also occur when the cameras are particularly close, in which case the error is likewise large). The second angle condition ensures that the two cameras have a sufficient common viewing area. The third angle condition prevents the camera from rotating around the optical axis, which would affect the subsequent depth estimation computation. Using frames that satisfy all three angle conditions as reference frames helps improve the accuracy of the depth estimation of the current frame.
It should be noted that, in the embodiments of the present disclosure, the image depth estimation device may directly obtain the corresponding inverse depth space range from the current frame, the inverse depth space range being the range of values the inverse depths of the pixels in the current frame may take; of course, the image depth estimation device may also receive a setting instruction from the user and obtain the inverse depth space range indicated by the user. The specific inverse depth space range is not limited in the embodiments of the present disclosure. For example, the inverse depth space range is [dmin, dmax], where dmin is the smallest inverse depth value in the range and dmax is the largest.
S102. Perform pyramid down-sampling on the current frame and the reference frame, respectively, to obtain the k-layer current image corresponding to the current frame and the k-layer reference image corresponding to the reference frame, k being a natural number greater than or equal to 2.
In the embodiments of the present disclosure, after obtaining the reference frame corresponding to the current frame, the image depth estimation device can perform pyramid down-sampling on the current frame and the reference frame, respectively, thereby obtaining the k-layer current image corresponding to the current frame and the k-layer reference image corresponding to the reference frame.
It should be noted that, in the embodiments of the present disclosure, since there may be multiple reference frames, the image depth estimation device performs pyramid down-sampling on each reference frame image separately, so the obtained k-layer reference images are in fact multiple groups; the specific number of k-layer reference images is not limited in the embodiments of the present disclosure.
It should be noted that, in the embodiments of the present disclosure, when the image depth estimation device performs pyramid down-sampling on the current frame and the reference frame, the resulting current image pyramid and reference image pyramid have the same number of layers and use the same scale factor. For example, the image depth estimation device down-samples the current frame and the reference frame with a scale factor of 2, forming a three-layer current image and a three-layer reference image; in these two groups of three-layer images, the top-layer image has the lowest resolution, the middle-layer image has a higher resolution than the top-layer image, and the bottom-layer image has the highest resolution. In fact, the bottom-layer image is the original image, i.e., the corresponding current frame or reference frame. The specific number of image layers k and the down-sampling scale factor can be preset according to actual requirements and are not limited in the embodiments of the present disclosure.
Exemplarily, in the embodiments of the present disclosure, the image depth estimation device obtains 5 reference frames corresponding to the current frame It, namely reference frame I1, reference frame I2, reference frame I3, reference frame I4, and reference frame I5, and down-samples each of these frames with a scale factor of 2, thereby obtaining the 3-layer current image corresponding to the current frame It and the three-layer reference images corresponding to each of the reference frames I1, I2, I3, I4, and I5.
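The pyramid construction of step S102 can be sketched as follows. The patent fixes only the layer count and scale factor, not the down-sampling filter; the 2x2 block mean used here is our choice:

```python
import numpy as np

def build_pyramid(img, k=3, scale=2):
    """Build a k-layer image pyramid; in the returned list, index 0 is
    layer 1 (the coarsest, top-layer image) and index k-1 is layer k
    (the original, bottom-layer image)."""
    levels = [np.asarray(img, dtype=np.float64)]
    for _ in range(k - 1):
        p = levels[0]
        h = p.shape[0] // scale * scale
        w = p.shape[1] // scale * scale
        # Factor-`scale` down-sampling by block averaging.
        p = p[:h, :w].reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))
        levels.insert(0, p)
    return levels
```

The same routine would be applied once to the current frame and once to each reference frame, so both pyramids share the layer count and scale factor, as the text requires.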
S103. Based on the k-layer reference image and the inverse depth space range, perform iterative inverse depth estimation on the k-layer current image to obtain the inverse depth estimation result corresponding to the current frame.
In the embodiments of the present disclosure, after obtaining the k-layer current image and the k-layer reference image, the image depth estimation device can perform iterative inverse depth estimation on the k-layer current image based on the k-layer reference image and the inverse depth space range; for example, the iteration may start from the top-layer (layer 1) current image (the image with the fewest pixels) and proceed layer by layer toward the bottom layer, shrinking the inverse depth search space layer by layer until the bottom k-th layer, obtaining the inverse depth estimation result corresponding to the current frame.
FIG. 3 is a first schematic flowchart of iterative inverse depth estimation provided by an embodiment of the present disclosure. As shown in FIG. 3, the image depth estimation device performs iterative inverse depth estimation on the k-layer current image based on the k-layer reference image and the inverse depth space range, obtaining the inverse depth estimation result corresponding to the current frame, which includes the following steps.
S301. Determine, based on the k-layer current image and the inverse depth space range, the inverse depth candidate values corresponding to each sampling point in the i-th layer sampling points; the i-th layer sampling points are pixel points obtained by sampling the i-th layer current image among the k-layer current images, i being a natural number greater than or equal to 1 and less than or equal to k.
In the embodiments of the present disclosure, the k-layer current images include, in order of resolution from low to high: the layer-1 current image, the layer-2 current image, the layer-3 current image, ..., the layer-k current image; the layer-1 current image is the top-layer image of the k-layer current images and the layer-k current image is the bottom-layer image of the current image pyramid. Likewise, the k-layer reference images include, in order of resolution from low to high: the layer-1 reference image, the layer-2 reference image, the layer-3 reference image, ..., the layer-k reference image; the layer-1 reference image is the top-layer image of the reference image pyramid and the layer-k reference image is the bottom-layer image of the reference image pyramid.
It should be noted that, in the embodiments of the present disclosure, the image depth estimation device may sample pixel points from the i-th layer current image among the k-layer current images; the sampled pixel points are the i-th layer sampling points. The specific value of i is a natural number greater than or equal to 1 and less than or equal to k, which is not limited in the embodiments of the present disclosure.
It should be noted that, in the embodiments of the present disclosure, the image depth estimation device may sample pixel points from the i-th layer current image according to a preset sampling step. The specific sampling step can be determined according to actual requirements and is not limited in the embodiments of the present disclosure.
FIG. 4 is a schematic diagram of an exemplary 3-layer current image provided by an embodiment of the present disclosure. As shown in FIG. 4, the image depth estimation device may sample the current frame at the x-axis and y-axis coordinates with a sampling step of 2, obtaining 3 layers of current images in total, where the layer-1 current image has the lowest resolution, the layer-2 current image has a higher resolution than the layer-1 current image, and the layer-3 current image has a higher resolution than the layer-2 current image; the layer-3 current image is in fact the original current frame.
Specifically, in the embodiments of the present disclosure, the image depth estimation device determines, based on the k-layer current image and the inverse depth space range, the i-th layer inverse depth candidate values corresponding to each sampling point in the i-th layer sampling points, as follows: when i equals 1, the inverse depth space range is divided into equal intervals, obtaining multiple equally divided inverse depth values; the multiple equally divided inverse depth values are determined as the inverse depth candidate values corresponding to each sampling point in the layer-1 sampling points; when i does not equal 1, the (i-1)-th layer sampling points and the (i-1)-th layer inverse depth estimation values are obtained from the k-layer current images, and the inverse depth candidate values corresponding to each sampling point in the i-th layer sampling points are determined based on the (i-1)-th layer inverse depth estimation values, the (i-1)-th layer sampling points, and the multiple equally divided inverse depth values.
It can be understood that, in the embodiments of the present disclosure, the image depth estimation device divides the inverse depth space range into intervals and selects inverse depth values within different intervals, so that each interval contains one inverse depth value serving as an inverse depth candidate value. That is, each sampling point has one inverse depth candidate value in each inverse depth sub-range; when the inverse depth value of a sampling point is subsequently determined, inverse depth values from all sub-ranges can take part in the estimation, ensuring that the estimation process covers the entire inverse depth space range, so that an accurate inverse depth value can finally be estimated.
It can be understood that, in the embodiments of the present disclosure, when i equals 1, the image depth estimation device needs to determine the inverse depth candidate values corresponding to each sampling point in the layer-1 sampling points, the layer-1 sampling points being the sampling points in the layer-1 current image, which has the lowest resolution among the k-layer current images. The image depth estimation device obtains the inverse depth space range [dmin, dmax] corresponding to the current frame and may divide it equally, obtaining q equally divided inverse depth values d1, d2, ..., dq for the division intervals; these q values may all be determined as the initial inverse depth values, i.e., the inverse depth candidate values corresponding to each sampling point in the layer-1 sampling points; of course, the inverse depth candidate values may also include dmin and dmax. That is, for every sampling point in the layer-1 sampling points, the corresponding inverse depth candidate values are exactly the same. The equal division of the inverse depth space range by the image depth estimation device can be set according to actual requirements and is not limited in the embodiments of the present disclosure.
It should be noted that, in the embodiments of the present disclosure, if the image depth estimation device divides the inverse depth space range into intervals in the above equal-division manner and uses the dividing inverse depth values as the inverse depth candidate values, it can ensure that the candidates uniformly cover the entire inverse depth space range, making the inverse depth values subsequently determined from the candidates more accurate.
It should be noted that, in the embodiments of the present disclosure, when i equals 1, besides dividing the inverse depth space range equally, the division may also be non-equal. For example, the inverse depth space range may be divided successively at multiple preset different intervals; or, based on a preset initial division interval combined with an interval variation rule, the interval is adjusted after each division and the adjusted interval is used for the next division. Of course, the initial inverse depth value may also be selected randomly within each division interval, or the middle inverse depth value of each division interval may be selected. The specific interval division manner and initial inverse depth value selection manner are not limited in the embodiments of the present disclosure.
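A sketch of forming the initial inverse depth candidates by equal interval division; choosing interval midpoints is one of the options mentioned above, and the function name is ours:

```python
import numpy as np

def initial_candidates(d_min, d_max, q):
    """Divide the inverse depth space range [d_min, d_max] into q equal
    intervals and return the midpoint of each interval as an initial
    inverse depth value (one candidate per interval)."""
    edges = np.linspace(d_min, d_max, q + 1)
    return 0.5 * (edges[:-1] + edges[1:])
```

Random per-interval picks or the interval edges (including d_min and d_max) would be equally valid variants, per the text above.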
It should be noted that, in the embodiments of the present disclosure, when i does not equal 1, the image depth estimation device needs to obtain from the k-layer current images the (i-1)-th layer sampling points, i.e., the pixel points obtained by sampling the (i-1)-th layer current image, and also needs to obtain the (i-1)-th layer inverse depth values. Each layer of the current image can be sampled with a different sampling step. Before the inverse depth candidate values corresponding to each sampling point in the i-th layer sampling points are determined, the image depth estimation device has, at the previous iteration (i = i-1), already obtained the (i-1)-th layer inverse depth values according to the above inverse depth estimation steps, i.e., the inverse depth value of each sampling point in the (i-1)-th layer sampling points. Therefore, the image depth estimation device can directly obtain the (i-1)-th layer inverse depth values and further determine, based on the (i-1)-th layer inverse depth values, the (i-1)-th layer sampling points, and the multiple equally divided inverse depth values, the inverse depth candidate values corresponding to each sampling point in the i-th layer sampling points.
FIG. 5 is a schematic flowchart of determining inverse depth candidate values provided by an embodiment of the present disclosure. As shown in FIG. 5, the image depth estimation device determines the inverse depth candidate values corresponding to each sampling point in the i-th layer sampling points based on the (i-1)-th layer inverse depth estimation values, the (i-1)-th layer sampling points, and the multiple initial inverse depth values, as follows.
S501. Determine, from the (i-1)-th layer sampling points, a second sampling point closest to a first sampling point, and at least two third sampling points adjacent to the second sampling point; the first sampling point is any sampling point in the i-th layer sampling points.
S502. Obtain, according to the (i-1)-th layer inverse depth values, the inverse depth value of each of the at least two third sampling points and the inverse depth value of the second sampling point, resulting in at least three inverse depth values.
S503. Determine the maximum inverse depth value and the minimum inverse depth value among the at least three inverse depth values.
S504. Select, from the multiple initial inverse depth values, the inverse depth values lying within the range between the maximum inverse depth value and the minimum inverse depth value, and determine the selected inverse depth values as the inverse depth candidate values corresponding to the first sampling point.
S505. Continue to determine the inverse depth candidate values corresponding to the sampling points in the i-th layer sampling points other than the first sampling point, until the inverse depth candidate values corresponding to each sampling point in the i-th layer sampling points are determined.
It should be noted that, in the embodiments of the present disclosure, when i equals 1, the inverse depth candidate values corresponding to each sampling point in the layer-1 sampling points are all the same; when i does not equal 1, the i-th layer inverse depth candidate values corresponding to each sampling point in the i-th layer sampling points can be selected from the multiple initial inverse depth values according to the (i-1)-th layer sampling points and the (i-1)-th layer inverse depth values, determining a smaller range of candidates, and the inverse depth candidate values corresponding to each sampling point in the i-th layer sampling points may all be different.
Exemplarily, in the embodiments of the present disclosure, let x_t^i be any sampling point in the i-th layer sampling points. The image depth estimation device can find, among the (i-1)-th layer sampling points, the sampling point x_t^(i-1) closest to x_t^i, and then, centered on x_t^(i-1), determine its multiple (for example, 8) adjacent sampling points among the (i-1)-th layer sampling points. Afterwards, according to the (i-1)-th layer inverse depth values, the device obtains the inverse depth value of x_t^(i-1) and of each of its 8 adjacent sampling points, i.e., 9 inverse depth values. Further, taking the largest inverse depth value d1 and the smallest inverse depth value d2 among these 9 inverse depth values as bounds, the inverse depth values lying between d2 and d1, including d1 and d2, are selected from the multiple initial inverse depth values and are all determined as the candidate inverse depth values corresponding to x_t^i.
It should be noted that, in the embodiments of the present disclosure, when the image depth estimation device determines, from the (i-1)-th layer sampling points, the third sampling points adjacent to the second sampling point, it may determine all 8 surrounding sampling points as third sampling points; of course, it may also determine the 2 sampling points adjacent on the left and right, or the 2 sampling points adjacent above and below, as the third sampling points, or determine the 4 sampling points adjacent above, below, left, and right as the third sampling points. The specific number of third sampling points is not limited in the embodiments of the present disclosure.
It should be noted that, in the embodiments of the present disclosure, the image depth estimation device may also determine the inverse depth candidate values corresponding to each sampling point in the i-th layer sampling points according to other rules, for example, by receiving different inverse depth candidate values set by the user for different layers of sampling points, with each sampling point in the same layer having the same candidates. The specific way of determining the inverse depth candidate values is not limited in the embodiments of the present disclosure.
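Steps S501 to S504 for a single layer-i sampling point can be sketched as follows, assuming the coarser layer's inverse depths are stored as a 2-D array aligned with its sampling grid and that the nearest layer-(i-1) sampling point is found by integer division by the sampling step (an assumption of ours):

```python
import numpy as np

def candidates_for_point(x, y, prev_inv, init_cands, step=2):
    """For one layer-i sampling point (x, y): locate the nearest layer-(i-1)
    sampling point (the second sampling point), take it together with its up
    to 8 neighbours (the third sampling points), and keep only the initial
    candidates between their minimum and maximum inverse depth, inclusive."""
    px = min(x // step, prev_inv.shape[1] - 1)
    py = min(y // step, prev_inv.shape[0] - 1)
    nb = prev_inv[max(py - 1, 0):py + 2, max(px - 1, 0):px + 2]
    kept = init_cands[(init_cands >= nb.min()) & (init_cands <= nb.max())]
    return kept if kept.size else init_cands
```

Shrinking the candidate list this way is what reduces the per-point matching work at the finer layers.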
S302、根据第i层采样点中每一个采样点对应的逆深度候选值和k层参考图像中第i层参考图像,确定第i层采样点中每一个采样点的逆深度值,获得第i层逆深度值。
具体的,在本公开的实施例中,图像深度估计装置根据第i层采样点中每一个采样点对应的逆深度候选值和k层参考图像中第i层参考图像,确定第i层采样点中每一个采样点的逆深度值,获得第i层逆深度值,包括:对第i层采样点中每一个采样点,分别按照对应的逆深度候选值中的每一个逆深度值,将第i层采样点中每一个采样点投影到第i层参考图像中,获得第i层采样点中每一个采样点对应的第i层投影点;根据第i层采样点和第i层投影点进行块匹配,获得第i层采样点中每一个采样点对应的第i层匹配结果;根据第i层匹配结果,确定第i层采样点中每一个采样点的逆深度值,获得第i层逆深度值。
需要说明的是，在本公开的实施例中，图像深度估计装置对第i层采样点中的每一个采样点，均按照对应的逆深度候选值中的每一个逆深度值投影到第i层参考图像中。当然，如果有多个参考帧，相应的，有多个第i层参考图像，那么图像深度估计装置是将第i层采样点中的每一个采样点，分别按照对应的逆深度候选值中的每一个逆深度值，分别投影到每一个第i层参考图像中。
具体的，在本公开的实施例中，对于当前帧t和参考帧r，图像深度估计装置对于第i层采样点中的任意一个采样点x_t^i=(u, v)，u和v为该采样点的x轴和y轴坐标，对于x_t^i对应的逆深度候选值中的任意一个逆深度值d_z，按照以下公式(1)和公式(2)投影到第i层参考图像中：

X_r = R_r·K^{-1}·(u, v, 1)^T/d_z + T_r    (1)

x_{r,z}^i = (f_x·X_r(0)/X_r(2) + c_x, f_y·X_r(1)/X_r(2) + c_y)    (2)
需要说明的是，K为获取当前帧t和参考帧r的相机对应的相机内参矩阵，f_x和f_y为第i层当前图像对应的焦距在x轴和y轴上的基于像素度量的尺度因子，f_x为使用像素来描述的x轴方向焦距的长度，f_y为使用像素来描述的y轴方向焦距的长度，(c_x, c_y)为第i层当前图像的主点位置，R_r为3×3的旋转矩阵，T_r为3×1的平移向量。公式(1)最终获得的X_r是一个3×1的矩阵，其中，第一行元素为X_r(0)，第二行元素为X_r(1)，第三行元素为X_r(2)，按照公式(2)进一步计算，即可获得采样点x_t^i按照对应的逆深度候选值中的逆深度值d_z，投影到参考帧r中第i层参考图像中的投影点x_{r,z}^i。
可以理解的是，在本公开的实施例中，对于第i层采样点中的每一个采样点，均可以通过公式(1)和公式(2)，按照对应逆深度候选值中的每一个逆深度值投影到第i层参考图像中，如果是多个第i层参考图像，重复执行即可。
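公式(1)和公式(2)所述的投影过程可以用如下Python草图示意（该写法是依据上下文中K、R_r、T_r、d_z的定义重构的常见针孔投影形式，仅作示意，并非本公开的正式实现）：

```python
import numpy as np

def project_to_reference(u, v, d_z, K, R_r, T_r):
    # 公式(1)的思路: 以逆深度d_z将像素(u, v)反投影到当前相机坐标系,
    # 再经旋转R_r、平移T_r变换到参考相机坐标系, 得到3x1的X_r
    X_r = R_r @ (np.linalg.inv(K) @ np.array([u, v, 1.0]) / d_z) + T_r
    # 公式(2)的思路: 透视除法后用内参(f_x, f_y, c_x, c_y)得到参考图像中的投影点
    f_x, f_y = K[0, 0], K[1, 1]
    c_x, c_y = K[0, 2], K[1, 2]
    return (f_x * X_r[0] / X_r[2] + c_x, f_y * X_r[1] / X_r[2] + c_y)
```

当R_r为单位阵、T_r为零向量时，投影点应与原像素坐标一致，可作为实现的自检。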
需要说明的是,在本公开的实施例中,图像深度估计装置在获得第i层投影点之后,可以根据第i层采样点和第i层投影点进行块匹配,具体是对第i层采样点中的每一个采样点,与对应的第i层投影点中的每一个投影点分别进行块匹配,从而获得每一个采样点对应的第i层匹配结果。
具体的，在本公开的实施例中，图像深度估计装置根据第i层采样点和第i层投影点进行块匹配，获得第i层采样点中每一个采样点对应的第i层匹配结果，包括：利用预设窗口，从第i层当前图像中选取以待匹配采样点为中心的第一图像块，并从第i层参考图像中选取以待匹配采样点对应的第i层投影点中的每一个投影点分别为中心的多个第二图像块；待匹配采样点为第i层采样点中任意一个采样点；将第一图像块分别与多个第二图像块中每一个图像块进行比较，获得多个匹配结果，并将多个匹配结果确定为待匹配采样点对应的第i层匹配结果；继续确定第i层采样点中与待匹配采样点不同的采样点对应的第i层匹配结果，直至获得第i层采样点中每一个采样点对应的第i层匹配结果。例如，采用一个3×3的窗口，在第i层当前图像和第i层参考图像中，分别以第i层采样点中的每一个采样点和其对应的投影点为中心，获取采样点和投影点的邻域点，得到两个图像块，然后对获取的图像块中对应位置的像素点的像素值进行比较，得到两个图像块匹配的惩罚值（如像素差值的绝对值之和）。针对同一逆深度值，每个第i层参考图像可以得到一个惩罚值；存在多个第i层参考图像时，对得到的多个惩罚值进行融合（例如多个惩罚值取平均），即可以得到每个采样点对应一个逆深度值的第i层匹配结果。针对每个采样点的多个逆深度值，均可以得到每个逆深度值对应的一个惩罚值，即得到每个采样点对应的第i层匹配结果。
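上述以像素差绝对值之和（SAD）作为惩罚值、并对多个参考图像的惩罚值取平均的块匹配过程，可以用如下Python草图示意（窗口大小、融合方式均为示例假设）：

```python
import numpy as np

def sad_cost(img_t, img_r, pt, pr, radius=1):
    # 以预设窗口(边长2*radius+1, 默认3x3)取图像块, 计算SAD惩罚值; 值越小越相似
    x_t, y_t = pt
    x_r, y_r = pr
    patch_t = img_t[y_t - radius:y_t + radius + 1, x_t - radius:x_t + radius + 1]
    patch_r = img_r[y_r - radius:y_r + radius + 1, x_r - radius:x_r + radius + 1]
    return float(np.abs(patch_t.astype(np.int64) - patch_r.astype(np.int64)).sum())

def fused_cost(img_t, ref_imgs, pt, proj_pts, radius=1):
    # 同一逆深度值在m个第i层参考图像上的惩罚值取平均, 对应文中融合方式之一
    costs = [sad_cost(img_t, img_r, pt, pr, radius)
             for img_r, pr in zip(ref_imgs, proj_pts)]
    return sum(costs) / len(costs)
```

比较函数也可以换成文中提到的ZNCC或SSD，惩罚值的含义（越小越匹配）保持一致即可。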
具体的，在本公开的实施例中，如图6所示，对于当前帧t和m个参考帧，m为大于等于1的自然数，图像深度估计装置对于第i层采样点中的任意一个采样点x_t^i，按照以下公式(3)所示，与对应的第i层投影点中以逆深度值d_z投影获得的投影点x_{r,z}^i进行块匹配，从而获得第i层匹配结果中逆深度值为d_z的匹配结果：

C(x_t^i, d_z) = (1/m)·Σ_{r=1}^{m} ρ(x_t^i, x_{r,z}^i)    (3)
其中，x_{r,z}^i为x_t^i按照自身对应的逆深度候选值中的逆深度值d_z，分别投影到m个参考帧中每一帧分别对应的第i层参考图像中的投影点，共计m个。ρ(·,·)为x_t^i和x_{r,z}^i的邻域像素值比较函数，该比较函数可以是x_t^i和x_{r,z}^i的邻域灰度值的零均值归一化协方差（Zero-mean Normalized Cross Correlation，ZNCC），也可以使用绝对差之和（Sum of Absolute Differences，SAD）或差方和（Sum of Squared Differences，SSD）等方法。C(x_t^i, d_z)即为x_t^i对应的第i层匹配结果中，逆深度值为d_z的匹配结果。
需要说明的是，在本公开的实施例中，第i层采样点中，每一个采样点对应的第i层匹配结果均包括了自身对应的逆深度候选值中不同逆深度值的匹配结果，例如，对于第i层采样点中的任意一个采样点x_t^i，其对应的逆深度候选值包括d_1、d_2、……、d_q，获得的第i层匹配结果包括每一个逆深度值的匹配结果，具体的第i层匹配结果本公开实施例不作限定。
示例性的，在本公开的实施例中，当前帧对应的参考帧包括2个帧，每一个帧对应有一组2层参考图像，即有两个第1层参考图像，图像深度估计装置将当前帧中第1层当前图像的一个采样点x_t^1，按照其对应的逆深度候选值d_1、d_2和d_3分别投影到两个第1层参考图像中，分别在两个第1层参考图像中获得三个投影点，共6个投影点，作为其对应的第1层投影点。其中，按照d_1投影到一个第1层参考图像的投影点为x_{1,1}^1，按照d_1投影到另一个第1层参考图像的投影点为x_{2,1}^1。因此，可以将x_t^1、x_{1,1}^1和x_{2,1}^1代入公式(3)中，即m等于2，获得x_t^1对逆深度值d_1的匹配结果，同样的，也可以获得逆深度候选值为d_2和d_3的匹配结果，组成x_t^1对应的第1层匹配结果。
具体的,在本公开的实施例中,图像深度估计装置根据第i层匹配结果,确定第i层采样点中每一个采样点的逆深度值,获得第i层逆深度值,包括:从目标采样点对应的第i层匹配结果中选取出目标匹配结果;目标采样点为第i层采样点中任意一个采样点;将目标采样点对应的第i层投影点中,目标匹配结果对应的投影点确定为目标投影点;将逆深度候选值中,目标投影点对应的逆深度值确定为目标采样点的逆深度值;继续确定第i层采样点中与目标采样点不同的采样点的逆深度值,直至确定出第i层采样点中每一个采样点的逆深度值,获得第i层逆深度值。
需要说明的是，在本公开的实施例中，图像深度估计装置在获得第i层采样点中每一个采样点对应的第i层匹配结果之后，可以按照以下公式(4)确定第i层采样点中任意一个采样点x_t^i的逆深度值：

d(x_t^i) = argmin_{d_z} C(x_t^i, d_z)    (4)

其中，由于x_t^i对应的第i层匹配结果中逆深度值为d_z的匹配结果C(x_t^i, d_z)相比于其它逆深度值的匹配结果值最小，因此，将对应的逆深度值d_z确定为x_t^i的逆深度值。
可以理解的是，在本公开的实施例中，上述针对采样点匹配的过程，实际上就是针对一个采样点，分别确定其与采用不同逆深度值投影的投影点的差异程度，而采用公式(4)进行逆深度值的确定，实际上就是选取出匹配结果值最小的结果，表征对应的投影点与采样点差异度最小，因此，可以将该投影点采用的逆深度值确定为采样点的逆深度值，从而得到采样点准确的逆深度值。
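公式(4)中"选取匹配惩罚值最小的候选逆深度值"的过程，可以用如下Python草图示意（仅作示意）：

```python
def select_inverse_depth(candidates, costs):
    # 公式(4)的示意: candidates为某采样点的逆深度候选值,
    # costs为各候选值对应的匹配惩罚值, 返回惩罚值最小者对应的逆深度值
    z = min(range(len(costs)), key=lambda i: costs[i])
    return candidates[z]
```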
需要说明的是，在本公开的实施例中，图像深度估计方法还可以以其它方式确定第i层采样点中每一个采样点的逆深度值。例如，从每一个采样点对应的匹配结果中选取处于特定范围的部分结果，之后，从部分结果中随机选取一个匹配结果，将随机选取出的匹配结果对应的投影点采用的逆深度值确定为采样点的逆深度值。
S303、令i=i+1,继续对k层当前图像中分辨率高于第i层当前图像的第i+1层当前图像进行逆深度估计,直至i=k为止,获得第k层逆深度值。
在本公开的实施例中，图像深度估计装置获得第i层逆深度值之后，令i=i+1，从而进一步继续对第i层当前图像的下一层，即第i+1层当前图像进行逆深度估计，其过程与获取第i层逆深度值相同，在此不再赘述。在不断迭代估计的过程中，直至i=k时，图像深度估计装置获得了第k层逆深度值，即k层当前图像中分辨率最高的图像（实际就是当前帧原图）中每一个采样点的逆深度值，则停止令i=i+1。
S304、将第k层逆深度值确定为逆深度估计结果。
在本公开的实施例中,图像深度估计装置在获得第k层逆深度值之后,即可将第k层逆深度值确定为逆深度估计结果。
可选地，上述过程中估计的逆深度为离散值，为获得更为准确的逆深度，还可以进行二次插值，调整每个采样点的逆深度。具体地，如图7所示，在步骤S303之后还可以包括S305~S306：
S305、对第k层逆深度值进行插值优化,获得逆深度估计结果。
在本公开的实施例中,图像深度估计装置在获得第k层逆深度值之后,第k层逆深度值包括第k层采样点中每一个采样点对应的逆深度值,而为了获得更准确的第k层逆深度值,可以对第k层逆深度值进行插值优化,也就是将第k层采样点中每一个采样点的逆深度值分别进行调整优化,从而获得优化后的第k层逆深度值。
具体的,在本公开的实施例中,图像深度估计装置对第k层逆深度值进行插值优化,获得优化后的第k层逆深度值,包括:对第k层逆深度值中每一个逆深度值,分别从第k层采样点中对应的采样点的候选逆深度值中,选取逆深度值的相邻逆深度值;第k层采样点为对k层当前图像中第k层当前图像采样获得的像素点;获取相邻逆深度值对应的匹配结果;基于相邻逆深度值和相邻逆深度值对应的匹配结果,对第k层逆深度值中的每一个逆深度值进行插值优化,获得优化后的第k层逆深度值。
具体的，在本公开的实施例中，第k层逆深度值包括第k层采样点中每一个采样点对应的逆深度值，图像深度估计装置需要对第k层采样点中每一个采样点对应的逆深度值进行插值优化，从而获得插值优化结果，作为当前帧的逆深度估计结果。其中，对第k层采样点中任意一个采样点x_t^k，若其对应的逆深度值为d_z，可以按照公式(5)进行插值优化：

d_opt = d_z + 0.5×(d_z-d_{z-1})×(C_{z+1}-C_{z-1})/(C_{z+1}+C_{z-1}-2×C_z)    (5)
其中，d_{z-1}为采样点x_t^k对应的逆深度候选值中，与d_z相邻的前一个逆深度值；d_{z+1}和d_{z-1}为x_t^k对应的逆深度候选值中与d_z相邻的两个逆深度值；C_{z+1}为C(x_t^k, d_{z+1})，C_{z-1}为C(x_t^k, d_{z-1})，C_z为C(x_t^k, d_z)，均可在计算x_t^k的逆深度值时，通过公式(3)计算得到，在此不再赘述。
可以理解的是，在本公开的实施例中，图像深度估计装置按照公式(5)对第k层逆深度值进行插值优化，由于k层当前图像中，第k层当前图像实际上就是当前帧，即实际上在获得了当前帧中每一个采样点的逆深度值之后，进一步对其进行了优化，从而获得了当前帧中每一个采样点更为准确的逆深度值，即获得了当前帧的逆深度估计结果。在本公开的实施例中，图像深度估计装置还可以获取三个或者更多个相邻逆深度值及其对应的匹配结果，利用与公式(5)类似的多项式进行插值优化。此外，图像深度估计装置还可以针对第k层采样点中每一个采样点的逆深度值，获取其对应逆深度候选值中与确定的逆深度值相邻的两个逆深度值，并将这三个逆深度值的均值作为采样点最终的逆深度值，实现逆深度值的优化。
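公式(5)的二次插值优化可以用如下Python草图示意（各项与原文公式逐项对应；分母接近0时的退化处理为本示例新增的假设）：

```python
def refine_inverse_depth(d_z, d_prev, c_prev, c_z, c_next):
    # 公式(5)的示意实现: 用d_z及其相邻候选逆深度的惩罚值做二次插值,
    # 将离散的逆深度值细化; d_prev对应d_{z-1}, c_*对应C_{z-1}, C_z, C_{z+1}
    denom = c_next + c_prev - 2.0 * c_z
    if abs(denom) < 1e-12:
        # 三点几乎共线时无法拟合抛物线, 退化为返回原值(示例假设)
        return d_z
    return d_z + 0.5 * (d_z - d_prev) * (c_next - c_prev) / denom
```

当左右惩罚值对称（C_{z+1}=C_{z-1}）时，插值不改变d_z，符合d_z已处于极小值的直觉。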
S306、将优化后的第k层逆深度值确定为逆深度估计结果。
在本公开的实施例中,图像深度估计装置在获得优化后的第k层逆深度值之后,即可将优化后的第k层逆深度值确定为逆深度估计结果。
可选的,在本公开的实施例中,图像深度估计装置在确定出逆深度估计结果之后,即步骤S103之后,还可以执行以下步骤:
S104、根据逆深度估计结果,确定当前帧的深度估计结果。
在本公开的实施例中,图像深度估计装置在获得当前帧的逆深度估计结果之后,即可根据逆深度估计结果,确定当前帧的深度估计结果;该深度估计结果可用于实现基于当前帧的三维场景构建。
需要说明的是,在本公开的实施例中,对于一个采样点而言,其逆深度值和深度值互为倒数,因此,图像深度估计装置在获得当前帧的逆深度估计结果,即当前帧中每一个采样点插值优化后的逆深度值之后,分别取其倒数即可获得对应的深度值,从而获得当前帧的深度估计结果。例如,当前帧中某一个采样点插值优化后的逆深度值为A,则其深度值为1/A。
需要说明的是,在本公开的实施例中,相比于现有技术中需要进行三角化反求解等计算才能获得相机坐标系下的z轴坐标值,上述图像深度估计方法所确定最终的深度估计结果为当前帧的采样点在相机坐标系下的z轴坐标值,不需要额外进行坐标变换。
需要说明的是，在本公开的实施例中，上述图像深度估计方法可以应用在实现基于当前帧的三维场景构建过程中。例如，用户利用移动设备摄像头拍摄某个场景时，可以利用上述图像深度估计方法获得当前帧的深度估计结果，进而重建视频场景的3D结构；用户点击移动设备中视频的当前帧中的某个位置时，可以利用上述图像深度估计方法确定的当前帧的深度估计结果，进行点击位置的视线求交，找到锚点摆放虚拟物体，从而实现虚拟物体和真实场景几何一致性融合的增强现实效果；单目视频中可以利用上述图像深度估计方法恢复出三维场景结构，计算真实场景和虚拟物体之间的遮挡关系，从而实现虚拟物体和真实场景遮挡一致性融合的增强现实效果；单目视频中可以利用上述图像深度估计方法恢复出场景三维结构，获得具有真实感的阴影效果，从而实现虚拟物体和真实场景光照一致性融合的增强现实效果；单目视频中还可以利用上述图像深度估计方法恢复出场景三维结构，计算真实场景与虚拟动画角色之间的物理碰撞，从而实现虚拟动画角色和真实场景物理一致性融合的真实感动画效果。
此外，本公开实施例中，也可以不执行上述步骤S104，该逆深度估计结果可以用于非三维场景建立的其它图像处理。例如，直接输出图像采样点的深度信息变化值至其它设备，进行目标识别或三维点距离计算等数据处理。
本公开实施例提供了一种图像深度估计方法,获取当前帧对应的参考帧和当前帧的逆深度空间范围;对当前帧和参考帧分别进行金字塔降采样处理,获得当前帧对应的k层当前图像,以及参考帧对应的k层参考图像;k为大于等于2的自然数;基于k层参考图像和逆深度空间范围,对k层当前图像进行逆深度估计迭代处理,获得当前帧的逆深度估计结果。也就是说,本公开提供的技术方案,采取了对多层当前图像结合多层参考图像进行逆深度估计迭代处理,以逐层减少逆深度搜索空间,确定当前帧的深度估计结果,且最终的深度估计结果为当前帧的像素点在相机坐标系下的z轴坐标值,不需要额外进行坐标变换,从而能够实时获得图像的深度估计结果,且深度估计结果的精确度较高。
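上述对当前帧和参考帧进行的金字塔降采样处理，可以用如下Python草图示意（2×2均值降采样仅为一种假设的实现方式，本公开并未限定具体的降采样步长与滤波核）：

```python
import numpy as np

def build_pyramid(image, k):
    # 金字塔降采样示意: 每层做2x2均值降采样, 共得到k层图像;
    # 返回列表中第1层分辨率最低, 第k层为原始分辨率
    layers = [np.asarray(image, dtype=np.float64)]
    for _ in range(k - 1):
        img = layers[-1]
        # 裁剪到偶数尺寸, 保证2x2分块对齐
        h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
        img = img[:h, :w]
        down = (img[0::2, 0::2] + img[1::2, 0::2] +
                img[0::2, 1::2] + img[1::2, 1::2]) / 4.0
        layers.append(down)
    layers.reverse()
    return layers
```

逆深度估计从第1层（分辨率最低）开始迭代到第k层（原图），与文中S301~S304的逐层流程一致。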
本公开实施例还提供了一种图像深度估计装置,图8为本公开实施例提供的一种图像深度估计装置的结构示意图。如图8所示,包括:
获取部分801,配置为获取当前帧对应的参考帧和所述当前帧的逆深度空间范围;
降采样部分802,配置为对所述当前帧和所述参考帧分别进行金字塔降采样处理,获得所述当前帧对应的k层当前图像,以及所述参考帧对应的k层参考图像;k为大于等于2的自然数;
估计部分803,配置为基于所述k层参考图像和所述逆深度空间范围,对所述k层当前图像进行逆深度估计迭代处理,获得所述当前帧的逆深度估计结果;
可选地,本公开实施例的图像深度估计装置还可以包括:确定部分804,配置为根据所述逆深度估计结果,确定所述当前帧的深度估计结果;所述深度估计结果可用于实现基于所述当前帧的三维场景构建。
可选的,所述获取部分801,具体配置为获取至少两个待筛选帧;从所述至少两个待筛选帧中,选取与所述当前帧之间满足预设角度约束条件的至少一帧,将所述至少一帧作为所述参考帧。
可选的,所述预设角度约束条件包括:
所述当前帧对应的位姿中心和所述参考帧对应的位姿中心,与目标点的连线形成的夹角处于第一预设角度范围;所述目标点为所述当前帧对应的平均深度点与所述参考帧对应的平均深度点连线的中点;
所述当前帧和所述参考帧对应的光轴夹角处于第二预设角度范围;
所述当前帧和所述参考帧对应的纵轴夹角处于第三预设角度范围。
可选的,所述估计部分803,具体配置为基于所述k层当前图像和所述逆深度空间范围,确定第i层采样点中每一个采样点对应的逆深度候选值;所述第i层采样点为对所述k层当前图像中第i层当前图像采样获得的像素点,i为大于等于1且小于等于k的自然数;根据所述第i层采样点中每一个采样点对应的逆深度候选值和所述k层参考图像中第i层参考图像,确定所述第i层采样点中每一个采样点的逆深度值,获得第i层逆深度值;令i=i+1,继续对所述k层当前图像中分辨率高于所述第i层当前图像的第i+1层当前图像进行逆深度估计,直至i=k为止,获得第k层逆深度值;将所述第k层逆深度值确定为所述逆深度估计结果。
可选的，所述估计部分803，具体配置为对所述逆深度空间范围进行区间划分，并在每个划分区间中选择一个逆深度值，得到多个初始逆深度值；将所述多个初始逆深度值确定为第1层采样点中每一个采样点对应的逆深度候选值；在i不等于1的情况下，从所述k层当前图像中获取第i-1层采样点，以及第i-1层逆深度值；基于所述第i-1层逆深度值、第i-1层采样点，以及所述多个初始逆深度值，确定所述第i层采样点中每一个采样点对应的逆深度候选值。
可选的，所述估计部分803，具体配置为从所述第i-1层采样点中确定与第一采样点距离最近的第二采样点，以及与所述第二采样点相邻的至少两个第三采样点；所述第一采样点为所述第i层采样点中任意一个采样点；根据所述第i-1层逆深度值，获取所述至少两个第三采样点中每一个采样点的逆深度值，以及所述第二采样点的逆深度值，得到至少三个逆深度值；从所述至少三个逆深度值中，确定最大逆深度值和最小逆深度值；从所述多个初始逆深度值中，选取处于所述最大逆深度值和所述最小逆深度值范围内的逆深度值，将选取出的逆深度值确定为所述第一采样点对应的逆深度候选值；继续确定所述第i层采样点中非所述第一采样点的采样点对应的逆深度候选值，直至确定出所述第i层采样点中每一个采样点对应的逆深度候选值。
可选的,所述估计部分803,具体配置为对所述第i层采样点中每一个采样点,分别按照对应的逆深度候选值中的每一个逆深度值,将所述第i层采样点中每一个采样点投影到所述第i层参考图像中,获得所述第i层采样点中每一个采样点对应的第i层投影点;根据所述第i层采样点和所述第i层投影点进行块匹配,获得所述第i层采样点中每一个采样点对应的第i层匹配结果;根据所述第i层匹配结果,确定所述第i层采样点中每一个采样点的逆深度值,获得所述第i层逆深度值。
可选的,所述估计部分803,具体配置为利用预设窗口,从所述第i层当前图像中选取以待匹配采样点为中心的第一图像块,并从所述第i层参考图像中选取以所述待匹配采样点对应的第i层投影点中的每一个投影点分别为中心的多个第二图像块;所述待匹配采样点为所述第i层采样点中任意一个采样点;将所述第一图像块分别与所述多个第二图像块中每一个图像块进行比较,获得多个匹配结果,并将所述多个匹配结果确定为所述待匹配采样点对应的第i层匹配结果;继续确定所述第i层采样点中与所述待匹配采样点不同的采样点对应的第i层匹配结果,直至获得所述第i层采样点中每一个采样点对应的第i层匹配结果。
可选的,所述估计部分803,具体配置为从目标采样点对应的第i层匹配结果中选取出目标匹配结果;所述目标采样点为所述第i层采样点中任意一个采样点;将所述目标采样点对应的第i层投影点中,所述目标匹配结果对应的投影点确定为目标投影点;将所述逆深度候选值中,所述目标投影点对应的逆深度值确定为所述目标采样点的逆深度值;继续确定所述第i层采样点中与所述目标采样点不同的采样点的逆深度值,直至确定出所述第i层采样点中每一个采样点的逆深度值,获得所述第i层逆深度值。
可选的,所述估计部分803,还配置为对所述第k层逆深度值进行插值优化,获得优化后的第k层逆深度值;将所述优化后的第k层逆深度值确定为所述逆深度估计结果。
可选的,所述估计部分803,具体配置为对所述第k层逆深度值中每一个逆深度值,分别从第k层采样点中对应的采样点的候选逆深度值中,选取相邻逆深度值;所述第k层采样点为对所述k层当前图像中第k层当前图像采样获得的像素点;获取所述相邻逆深度值对应的匹配结果;基于所述相邻逆深度值和所述相邻逆深度值对应的匹配结果,对所述第k层逆深度值中的每一个逆深度值进行插值优化,获得所述优化后的第k层逆深度值。
本公开实施例提供了一种图像深度估计装置,获取当前帧对应的参考帧和当前帧的逆深度空间范围;对当前帧和参考帧分别进行金字塔降采样处理,获得当前帧对应的k层当前图像,以及参考帧对应的k层参考图像;k为大于等于2的自然数;基于k层参考图像和逆深度空间范围,对k层当前图像进行逆深度估计迭代处理,获得当前帧的逆深度估计结果。也就是说,本公开提供的图像深度估计装置,采取了对多层当前图像结合多层参考图像进行逆深度估计迭代处理,以逐层减少逆深度搜索空间,确定当前帧的深度估计结果,且最终的深度估计结果为当前帧的像素点在相机坐标系下的z轴坐标值,不需要额外进行坐标变换,从而能够实时获得图像的深度估计结果,且深度估计结果的精确度较高。
本公开实施例还提供了一种电子设备,图9为本公开实施例提供的一种电子设备的结构示意图。如图9所示,所述电子设备包括:处理器901、存储器902和通信总线903;其中,
所述通信总线903,配置为实现所述处理器901和所述存储器902之间的连接通信;
所述处理器901,配置为执行所述存储器902中存储的图像深度估计程序,以实现上述图像深度估计方法。
需要说明的是,在本公开的实施例中,所述电子设备为手机或平板电脑,当然,也可以为其它类型设备,本公开实施例不作限定。
本公开实施例还提供了一种计算机可读存储介质，所述计算机可读存储介质存储有一个或者多个程序，所述一个或者多个程序可以被一个或者多个处理器执行，以实现上述图像深度估计方法。计算机可读存储介质可以是易失性存储器（volatile memory），例如随机存取存储器（Random-Access Memory，RAM）；或者非易失性存储器（non-volatile memory），例如只读存储器（Read-Only Memory，ROM），快闪存储器（flash memory），硬盘（Hard Disk Drive，HDD）或固态硬盘（Solid-State Drive，SSD）；也可以是包括上述存储器之一或任意组合的各种设备，如移动电话、计算机、平板设备、个人数字助理等。
本公开实施例还提供了一种计算机程序,包括计算机可读代码,所述计算机可读代码被处理器执行时,实现上述图像深度估计方法对应的步骤。
本领域内的技术人员应明白，本公开的实施例可提供为方法、系统、或计算机程序产品。因此，本公开可采用硬件实施例、软件实施例、或结合软件和硬件方面的实施例的形式。而且，本公开可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质（包括但不限于磁盘存储器和光学存储器等）上实施的计算机程序产品的形式。
本公开是参照根据本公开实施例的方法、设备(系统)、和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程信号处理设备的处理器以产生一个机器,使得通过计算机或其他可编程信号处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
这些计算机程序指令也可存储在能引导计算机或其他可编程信号处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。
这些计算机程序指令也可装载到计算机或其他可编程信号处理设备上,使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
以上所述，仅为本公开的一些实施例而已，并非用于限定本公开的保护范围。在不违背逻辑的情况下，本申请不同实施例之间可以相互结合，不同实施例描述有所侧重，未侧重描述的部分可以参见其他实施例的记载。
工业实用性
在本公开实施例的技术方案中,获取当前帧对应的参考帧和当前帧的逆深度空间范围;对当前帧和参考帧分别进行金字塔降采样处理,获得当前帧对应的k层当前图像,以及参考帧对应的k层参考图像;k为大于等于2的自然数;基于k层参考图像和逆深度空间范围,对k层当前图像进行逆深度估计迭代处理,获得当前帧的逆深度估计结果。也就是说,本公开提供的技术方案,采取了对多层当前图像结合多层参考图像进行逆深度估计迭代处理,以逐层减少逆深度搜索空间,确定当前帧的逆深度估计结果,该逆深度估计结果为当前帧的像素点在相机坐标系下的z轴坐标值的倒数,不需要额外进行坐标变换,且逐层减少逆深度搜索空间有助于减少逆深度估计的计算量,提升估计速度,从而能够实时获得图像的深度估计结果,且深度估计结果的精确度较高。

Claims (26)

  1. 一种图像深度估计方法,所述方法包括:
    获取当前帧对应的参考帧和所述当前帧的逆深度空间范围;
    对所述当前帧和所述参考帧分别进行金字塔降采样处理,获得所述当前帧对应的k层当前图像,以及所述参考帧对应的k层参考图像;k为大于等于2的自然数;
    基于所述k层参考图像和所述逆深度空间范围,对所述k层当前图像进行逆深度估计迭代处理,获得所述当前帧的逆深度估计结果。
  2. 根据权利要求1所述的图像深度估计方法,其中,所述获取当前帧对应的参考帧,包括:
    获取至少两个待筛选帧;
    从所述至少两个待筛选帧中,选取与所述当前帧之间满足预设角度约束条件的至少一帧,将所述至少一帧作为所述参考帧。
  3. 根据权利要求2所述的图像深度估计方法,其中,所述预设角度约束条件包括:
    所述当前帧对应的位姿中心和所述参考帧对应的位姿中心,与目标点的连线形成的夹角处于第一预设角度范围;所述目标点为所述当前帧对应的平均深度点与所述参考帧对应的平均深度点连线的中点;
    所述当前帧和所述参考帧对应的光轴夹角处于第二预设角度范围;
    所述当前帧和所述参考帧对应的纵轴夹角处于第三预设角度范围。
  4. 根据权利要求1-3任一项所述的图像深度估计方法,其中,所述基于所述k层参考图像和所述逆深度空间范围,对所述k层当前图像进行逆深度估计迭代处理,获得所述当前帧的逆深度估计结果,包括:
    基于所述k层当前图像和所述逆深度空间范围,确定第i层采样点中每一个采样点对应的逆深度候选值;所述第i层采样点为对所述k层当前图像中第i层当前图像采样获得的像素点,i为大于等于1且小于等于k的自然数;
    根据所述第i层采样点中每一个采样点对应的逆深度候选值和所述k层参考图像中第i层参考图像,确定所述第i层采样点中每一个采样点的逆深度值,获得第i层逆深度值;
    令i=i+1,继续对所述k层当前图像中分辨率高于所述第i层当前图像的第i+1层当前图像进行逆深度估计,直至i=k为止,获得第k层逆深度值;
    将所述第k层逆深度值确定为所述逆深度估计结果。
  5. 根据权利要求4所述的图像深度估计方法,其中,所述基于所述k层当前图像和所述逆深度空间范围,确定第i层采样点中每一个采样点对应的逆深度候选值,包括:
    对所述逆深度空间范围进行区间划分,并在每个划分区间中选择一个逆深度值,得到多个初始逆深度值;
    将所述多个初始逆深度值确定为第1层采样点中每一个采样点对应的逆深度候选值;
    在i不等于1的情况下,从所述k层当前图像中获取第i-1层采样点,以及第i-1层逆深度值;
    基于所述第i-1层逆深度值、第i-1层采样点,以及所述多个初始逆深度值,确定所述第i层采样点中每一个采样点对应的逆深度候选值。
  6. 根据权利要求5所述的图像深度估计方法，其中，所述基于所述第i-1层逆深度值、第i-1层采样点，以及所述多个初始逆深度值，确定所述第i层采样点中每一个采样点对应的逆深度候选值，包括：
    从所述第i-1层采样点中确定与第一采样点距离最近的第二采样点,以及与所述第二采样点相邻的至少两个第三采样点;所述第一采样点为所述第i层采样点中任意一个采样点;
    根据所述第i-1层逆深度值,获取所述至少两个第三采样点中每一个采样点的逆深度值,以及所述第二采样点的逆深度值,得到至少三个逆深度值;
    从所述至少三个逆深度值中,确定最大逆深度值和最小逆深度值;
    从所述多个初始逆深度值中,选取处于所述最大逆深度值和所述最小逆深度值范围内的逆深度值,将选取出的逆深度值确定为所述第一采样点对应的逆深度候选值;
    继续确定所述第i层采样点中非所述第一采样点的采样点对应的逆深度候选值,直至确定出所述第i层采样点中每一个采样点对应的逆深度候选值。
  7. 根据权利要求4所述的图像深度估计方法，其中，所述根据所述第i层采样点中每一个采样点对应的逆深度候选值和所述k层参考图像中第i层参考图像，确定所述第i层采样点中每一个采样点的逆深度值，获得第i层逆深度值，包括：
    对所述第i层采样点中每一个采样点,分别按照对应的逆深度候选值中的每一个逆深度值,将所述第i层采样点中每一个采样点投影到所述第i层参考图像中,获得所述第i层采样点中每一个采样点对应的第i层投影点;
    根据所述第i层采样点和所述第i层投影点进行块匹配,获得所述第i层采样点中每一个采样点对应的第i层匹配结果;
    根据所述第i层匹配结果,确定所述第i层采样点中每一个采样点的逆深度值,获得所述第i层逆深度值。
  8. 根据权利要求7所述的图像深度估计方法,其中,所述根据所述第i层采样点和所述第i层投影点进行块匹配,获得所述第i层采样点中每一个采样点对应的第i层匹配结果,包括:
    利用预设窗口,从所述第i层当前图像中选取以待匹配采样点为中心的第一图像块,并从所述第i层参考图像中选取以所述待匹配采样点对应的第i层投影点中的每一个投影点分别为中心的多个第二图像块;所述待匹配采样点为所述第i层采样点中任意一个采样点;
    将所述第一图像块分别与所述多个第二图像块中每一个图像块进行比较,获得多个匹配结果,并将所述多个匹配结果确定为所述待匹配采样点对应的第i层匹配结果;
    继续确定所述第i层采样点中与所述待匹配采样点不同的采样点对应的第i层匹配结果,直至获得所述第i层采样点中每一个采样点对应的第i层匹配结果。
  9. 根据权利要求7所述的图像深度估计方法,其中,所述根据所述第i层匹配结果,确定所述第i层采样点中每一个采样点的逆深度值,获得所述第i层逆深度值,包括:
    从目标采样点对应的第i层匹配结果中选取出目标匹配结果;所述目标采样点为所述第i层采样点中任意一个采样点;
    将所述目标采样点对应的第i层投影点中,所述目标匹配结果对应的投影点确定为目标投影点;
    将所述逆深度候选值中,所述目标投影点对应的逆深度值确定为所述目标采样点的逆深度值;
    继续确定所述第i层采样点中与所述目标采样点不同的采样点的逆深度值,直至确定出所述第i层采样点中每一个采样点的逆深度值,获得所述第i层逆深度值。
  10. 根据权利要求4-9任一项所述的图像深度估计方法,其中,所述获得第k层逆深度值之后,所述方法还包括:
    对所述第k层逆深度值进行插值优化,获得优化后的第k层逆深度值;
    将所述优化后的第k层逆深度值确定为所述逆深度估计结果。
  11. 根据权利要求10所述的图像深度估计方法,其中,所述对所述第k层逆深度值进行插值优化,获得优化后的第k层逆深度值,包括:
    对所述第k层逆深度值中每一个逆深度值,分别从第k层采样点中对应的采样点的候选逆深度值中,选取所述逆深度值的相邻逆深度值;所述第k层采样点为对所述k层当前图像中第k层当前图像采样获得的像素点;
    获取所述相邻逆深度值对应的匹配结果;
    基于所述相邻逆深度值和所述相邻逆深度值对应的匹配结果,对所述第k层逆深度值中的每一个逆深度值进行插值优化,获得所述优化后的第k层逆深度值。
  12. 一种图像深度估计装置,包括:
    获取部分,配置为获取当前帧对应的参考帧和所述当前帧的逆深度空间范围;
    降采样部分,配置为对所述当前帧和所述参考帧分别进行金字塔降采样处理,获得所述当前帧对应的k层当前图像,以及所述参考帧对应的k层参考图像;k为大于等于2的自然数;
    估计部分,配置为基于所述k层参考图像和所述逆深度空间范围,对所述k层当前图像进行逆深度估计迭代处理,获得所述当前帧的逆深度估计结果。
  13. 根据权利要求12所述的图像深度估计装置,其中,
    所述获取部分,具体配置为获取至少两个待筛选帧;从所述至少两个待筛选帧中,选取与所述当前帧之间满足预设角度约束条件的至少一帧,将所述至少一帧作为所述参考帧。
  14. 根据权利要求13所述的图像深度估计装置,其中,所述预设角度约束条件包括:
    所述当前帧对应的位姿中心和所述参考帧对应的位姿中心,与目标点的连线形成的夹角处于第一预设角度范围;所述目标点为所述当前帧对应的平均深度点与所述参考帧对应的平均深度点连线的中点;
    所述当前帧和所述参考帧对应的光轴夹角处于第二预设角度范围;
    所述当前帧和所述参考帧对应的纵轴夹角处于第三预设角度范围。
  15. 根据权利要求12-14任一项所述的图像深度估计装置,其中,
    所述估计部分,具体配置为基于所述k层当前图像和所述逆深度空间范围,确定第i层采样点中每一个采样点对应的逆深度候选值;所述第i层采样点为对所述k层当前图像中第i层当前图像采样获得的像素点,i为大于等于1且小于等于k的自然数;根据所述第i层采样点中每一个采样点对应的逆深度候选值和所述k层参考图像中第i层参考图像,确定所述第i层采样点中每一个采样点的逆深度值,获得第i层逆深度值;令i=i+1,继续对所述k层当前图像中分辨率高于所述第i层当前图像的第i+1层当前图像进行逆深度估计,直至i=k为止,获得第k层逆深度值;将所述第k层逆深度值确定为所述逆深度估计结果。
  16. 根据权利要求15所述的图像深度估计装置,其中,
    所述估计部分，具体配置为对所述逆深度空间范围进行区间划分，并在每个划分区间中选择一个逆深度值，得到多个初始逆深度值；将所述多个初始逆深度值确定为第1层采样点中每一个采样点对应的逆深度候选值；在i不等于1的情况下，从所述k层当前图像中获取第i-1层采样点，以及第i-1层逆深度值；基于所述第i-1层逆深度值、第i-1层采样点，以及所述多个初始逆深度值，确定所述第i层采样点中每一个采样点对应的逆深度候选值。
  17. 根据权利要求16所述的图像深度估计装置,其中,
    所述估计部分,具体配置为从所述第i-1层采样点中确定与第一采样点距离最近的第二采样点,以及与所述第二采样点相邻的至少两个第三采样点;所述第一采样点为所述第i层采样点中任意一个采样点;根据所述第i-1层逆深度值,获取所述至少两个第三采样点中每一个采样点的逆深度值,以及所述第二采样点的逆深度值,得到至少三个逆深度值;从所述至少三个逆深度值中,确定最大逆深度值和最小逆深度值;从所述多个初始逆深度值中,选取处于所述最大逆深度值和所述最小逆深度值范围内的逆深度值,将选取出的逆深度值确定为所述第一采样点对应的逆深度候选值;继续确定所述第i层采样点中非所述第一采样点的采样点对应的逆深度候选值,直至确定出所述第i层采样点中每一个采样点对应的逆深度候选值。
  18. 根据权利要求15所述的图像深度估计装置,其中,
    所述估计部分,具体配置为对所述第i层采样点中每一个采样点,分别按照对应的逆深度候选值中的每一个逆深度值,将所述第i层采样点中每一个采样点投影到所述第i层参考图像中,获得所述第i层采样点中每一个采样点对应的第i层投影点;根据所述第i层采样点和所述第i层投影点进行块匹配,获得所述第i层采样点中每一个采样点对应的第i层匹配结果;根据所述第i层匹配结果,确定所述第i层采样点中每一个采样点的逆深度值,获得所述第i层逆深度值。
  19. 根据权利要求18所述的图像深度估计装置，其中，
    所述估计部分,具体配置为利用预设窗口,从所述第i层当前图像中选取以待匹配采样点为中心的第一图像块,并从所述第i层参考图像中选取以所述待匹配采样点对应的第i层投影点中的每一个投影点分别为中心的多个第二图像块;所述待匹配采样点为所述第i层采样点中任意一个采样点;将所述第一图像块分别与所述多个第二图像块中每一个图像块进行比较,获得多个匹配结果,并将所述多个匹配结果确定为所述待匹配采样点对应的第i层匹配结果;继续确定所述第i层采样点中与所述待匹配采样点不同的采样点对应的第i层匹配结果,直至获得所述第i层采样点中每一个采样点对应的第i层匹配结果。
  20. 根据权利要求18所述的图像深度估计装置,其中,
    所述估计部分,具体配置为从目标采样点对应的第i层匹配结果中选取出目标匹配结果;所述目标采样点为所述第i层采样点中任意一个采样点;将所述目标采样点对应的第i层投影点中,所述目标匹配结果对应的投影点确定为目标投影点;将所述逆深度候选值中,所述目标投影点对应的逆深度值确定为所述目标采样点的逆深度值;继续确定所述第i层采样点中与所述目标采样点不同的采样点的逆深度值,直至确定出所述第i层采样点中每一个采样点的逆深度值,获得所述第i层逆深度值。
  21. 根据权利要求15-20任一项所述的图像深度估计装置,其中,
    所述估计部分,还配置为对所述第k层逆深度值进行插值优化,获得优化后的第k层逆深度值;将所述优化后的第k层逆深度值确定为所述逆深度估计结果。
  22. 根据权利要求21所述的图像深度估计装置,其中,
    所述估计部分，具体配置为对所述第k层逆深度值中每一个逆深度值，分别从第k层采样点中对应的采样点的候选逆深度值中，选取相邻逆深度值；所述第k层采样点为对所述k层当前图像中第k层当前图像采样获得的像素点；获取所述相邻逆深度值对应的匹配结果；基于所述相邻逆深度值和所述相邻逆深度值对应的匹配结果，对所述第k层逆深度值中的每一个逆深度值进行插值优化，获得所述优化后的第k层逆深度值。
  23. 一种电子设备,所述电子设备包括:处理器、存储器和通信总线;其中,
    所述通信总线,配置为实现所述处理器和所述存储器之间的连接通信;
    所述处理器,配置为执行所述存储器中存储的图像深度估计程序,以实现权利要求1-11任一项所述的图像深度估计方法。
  24. 根据权利要求23所述的电子设备,其中,所述电子设备为手机或平板电脑。
  25. 一种计算机可读存储介质,所述计算机可读存储介质存储有一个或者多个程序,所述一个或者多个程序可以被一个或者多个处理器执行,以实现权利要求1-11任一项所述的图像深度估计方法。
  26. 一种计算机程序,包括计算机可读代码,所述计算机可读代码被处理器执行时,实现权利要求1-11任一项所述的图像深度估计方法对应的步骤。
PCT/CN2019/101778 2019-07-10 2019-08-21 一种图像深度估计方法及装置、电子设备、存储介质 WO2021003807A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
JP2021537988A JP7116262B2 (ja) 2019-07-10 2019-08-21 画像深度推定方法および装置、電子機器、ならびに記憶媒体
SG11202108201RA SG11202108201RA (en) 2019-07-10 2019-08-21 Image depth estimation method and apparatus, electronic device, and storage medium
KR1020217017780A KR20210089737A (ko) 2019-07-10 2019-08-21 이미지 깊이 추정 방법 및 장치, 전자 기기, 저장 매체
US17/382,819 US20210350559A1 (en) 2019-07-10 2021-07-22 Image depth estimation method and apparatus, electronic device, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910621318.4A CN112215880B (zh) 2019-07-10 2019-07-10 一种图像深度估计方法及装置、电子设备、存储介质
CN201910621318.4 2019-07-10

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/382,819 Continuation US20210350559A1 (en) 2019-07-10 2021-07-22 Image depth estimation method and apparatus, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
WO2021003807A1 true WO2021003807A1 (zh) 2021-01-14

Family

ID=74047542

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/101778 WO2021003807A1 (zh) 2019-07-10 2019-08-21 一种图像深度估计方法及装置、电子设备、存储介质

Country Status (7)

Country Link
US (1) US20210350559A1 (zh)
JP (1) JP7116262B2 (zh)
KR (1) KR20210089737A (zh)
CN (1) CN112215880B (zh)
SG (1) SG11202108201RA (zh)
TW (1) TWI738196B (zh)
WO (1) WO2021003807A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11727589B2 (en) 2021-03-16 2023-08-15 Toyota Research Institute, Inc. System and method to improve multi-camera monocular depth estimation using pose averaging

Families Citing this family (3)

Publication number Priority date Publication date Assignee Title
CN113313742A (zh) * 2021-05-06 2021-08-27 Oppo广东移动通信有限公司 图像深度估计方法、装置、电子设备及计算机存储介质
TWI817594B (zh) * 2022-07-04 2023-10-01 鴻海精密工業股份有限公司 圖像深度識別方法、電腦設備及儲存介質
CN116129036B (zh) * 2022-12-02 2023-08-29 中国传媒大学 一种深度信息引导的全方向图像三维结构自动恢复方法

Citations (6)

Publication number Priority date Publication date Assignee Title
US6487304B1 (en) * 1999-06-16 2002-11-26 Microsoft Corporation Multi-view approach to motion and stereo
US20140267243A1 (en) * 2013-03-13 2014-09-18 Pelican Imaging Corporation Systems and Methods for Synthesizing Images from Image Data Captured by an Array Camera Using Restricted Depth of Field Depth Maps in which Depth Estimation Precision Varies
CN105007495A (zh) * 2015-08-20 2015-10-28 上海玮舟微电子科技有限公司 一种基于多层3drs的差异估计方法及装置
US20160292867A1 (en) * 2012-07-30 2016-10-06 Sony Computer Entertainment Europe Limited Localisation and mapping
CN108648274A (zh) * 2018-05-10 2018-10-12 华南理工大学 一种视觉slam的认知点云地图创建系统
CN109993113A (zh) * 2019-03-29 2019-07-09 东北大学 一种基于rgb-d和imu信息融合的位姿估计方法

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
US7889905B2 (en) * 2005-05-23 2011-02-15 The Penn State Research Foundation Fast 3D-2D image registration method with application to continuously guided endoscopy
US9576183B2 (en) * 2012-11-02 2017-02-21 Qualcomm Incorporated Fast initialization for monocular visual SLAM
EP3501009A4 (en) * 2016-08-19 2020-02-12 Movidius Ltd. DYNAMIC SORTING OF MATRIX OPERATIONS
TWI756365B (zh) * 2017-02-15 2022-03-01 美商脫其泰有限責任公司 圖像分析系統及相關方法
CN108010081B (zh) * 2017-12-01 2021-12-17 中山大学 一种基于Census变换和局部图优化的RGB-D视觉里程计方法
CN108520554B (zh) * 2018-04-12 2022-05-10 无锡信捷电气股份有限公司 一种基于orb-slam2的双目三维稠密建图方法

Patent Citations (6)

Publication number Priority date Publication date Assignee Title
US6487304B1 (en) * 1999-06-16 2002-11-26 Microsoft Corporation Multi-view approach to motion and stereo
US20160292867A1 (en) * 2012-07-30 2016-10-06 Sony Computer Entertainment Europe Limited Localisation and mapping
US20140267243A1 (en) * 2013-03-13 2014-09-18 Pelican Imaging Corporation Systems and Methods for Synthesizing Images from Image Data Captured by an Array Camera Using Restricted Depth of Field Depth Maps in which Depth Estimation Precision Varies
CN105007495A (zh) * 2015-08-20 2015-10-28 上海玮舟微电子科技有限公司 一种基于多层3drs的差异估计方法及装置
CN108648274A (zh) * 2018-05-10 2018-10-12 华南理工大学 一种视觉slam的认知点云地图创建系统
CN109993113A (zh) * 2019-03-29 2019-07-09 东北大学 一种基于rgb-d和imu信息融合的位姿估计方法

Also Published As

Publication number Publication date
SG11202108201RA (en) 2021-09-29
TW202103106A (zh) 2021-01-16
JP2022515517A (ja) 2022-02-18
CN112215880B (zh) 2022-05-06
TWI738196B (zh) 2021-09-01
JP7116262B2 (ja) 2022-08-09
KR20210089737A (ko) 2021-07-16
CN112215880A (zh) 2021-01-12
US20210350559A1 (en) 2021-11-11

Similar Documents

Publication Publication Date Title
TWI738196B (zh) 一種圖像深度估計方法、電子設備、儲存介質
EP2992508B1 (en) Diminished and mediated reality effects from reconstruction
US20200334842A1 (en) Methods, devices and computer program products for global bundle adjustment of 3d images
US9338437B2 (en) Apparatus and method for reconstructing high density three-dimensional image
CN113689578B (zh) 一种人体数据集生成方法及装置
CN111080776B (zh) 人体动作三维数据采集和复现的处理方法及系统
CN113643414B (zh) 一种三维图像生成方法、装置、电子设备及存储介质
TWI669683B (zh) 三維影像重建方法、裝置及其非暫態電腦可讀取儲存媒體
CN108028904B (zh) 移动设备上光场增强现实/虚拟现实的方法和系统
US10154241B2 (en) Depth map based perspective correction in digital photos
CN115690382A (zh) 深度学习模型的训练方法、生成全景图的方法和装置
CN114332125A (zh) 点云重建方法、装置、电子设备和存储介质
US10354399B2 (en) Multi-view back-projection to a light-field
US8340399B2 (en) Method for determining a depth map from images, device for determining a depth map
US20120093393A1 (en) Camera translation using rotation from device
US20120038785A1 (en) Method for producing high resolution image
CN112907657A (zh) 一种机器人重定位方法、装置、设备及存储介质
CN115294280A (zh) 三维重建方法、装置、设备、存储介质和程序产品
CN115086625A (zh) 投影画面的校正方法、装置、系统、校正设备和投影设备
CN114882106A (zh) 位姿确定方法和装置、设备、介质
CN114119701A (zh) 图像处理方法及其装置
CN113034345B (zh) 一种基于sfm重建的人脸识别方法及系统
WO2020146965A1 (zh) 图像重新聚焦的控制方法及系统
CN114494612A (zh) 构建点云地图的方法、装置和设备
WO2020118565A1 (en) Keyframe selection for texture mapping wien generating 3d model

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19936833

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 20217017780

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2021537988

Country of ref document: JP

Kind code of ref document: A

122 Ep: pct application non-entry in european phase

Ref document number: 19936833

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 23.09.2022)
