CN113994382A - Depth map generation method, electronic device, calculation processing device, and storage medium


Info

Publication number
CN113994382A
Authority
CN
China
Prior art keywords
target image
image
pixel point
target
depth
Prior art date
Legal status
Pending
Application number
CN202080044087.6A
Other languages
Chinese (zh)
Inventor
周游
刘洁
徐彪
Current Assignee
SZ DJI Technology Co Ltd
Original Assignee
SZ DJI Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by SZ DJI Technology Co Ltd
Publication of CN113994382A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G06T 7/55 Depth or shape recovery from multiple images

Abstract

A depth map generation method, an electronic device, a computing processing device, and a storage medium. The method comprises the following steps: acquiring at least three images shot by at least three shooting devices (101); calculating at least two matching degrees between a target pixel point on the target image and the matched pixel points on the other images (102); and either fusing the at least two matching degrees and generating a first depth map based on the fused matching degree of each target pixel point, or determining depth values according to the at least two matching degrees respectively to obtain at least two depth values for each pixel point, and generating a second depth map (103) from the at least two depth values of each pixel point. Because the at least two matching degrees are not merely the result of matching along the baseline direction of a single pair of shooting devices, and the depth map is generated by fusing the matching degrees or by combining the at least two depth values, at least two calculation results are combined, the problem of mismatching along the baseline direction between two shooting devices is mitigated, and the accuracy of the depth map is improved.

Description

Depth map generation method, electronic device, calculation processing device, and storage medium
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a depth map generation method, an electronic device, a computing processing device, and a computer-readable storage medium.
Background
Computer vision relies on an imaging system, most commonly a camera, to serve in place of the visual organs as its input sensor; a basic visual system can be formed by two cameras, which is called stereo vision (Stereo Vision).
A binocular camera system (Stereo Vision System) captures two pictures of the same scene at the same time from different angles with two cameras; using the differences between the two pictures together with the known position and angle relationship between the cameras, the distance between the scene and the cameras is computed by triangulation, and the resulting distances are drawn on an image, namely a depth map (Depth Map).
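For reference, the triangulation relationship mentioned here can be written as the standard rectified-stereo depth formula (a textbook relation given for illustration, not quoted from this application):

```latex
Z = \frac{f \cdot B}{d}
```

where Z is the depth of a scene point, f the focal length in pixels, B the baseline (the distance between the two cameras), and d the disparity, i.e. the offset along the baseline direction between the projections of the same point in the two pictures.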
A common depth calculation method is the stereo matching (Stereo Matching) algorithm, in which the best matching block is searched for along the baseline direction of the two cameras in the two pictures. Research shows that when repeated textures or weak textures exist in a picture, the description vectors of highly similar feature points are close to each other, so it is difficult to decide which feature point is the correct correspondence during scanning; the search along the baseline direction is therefore prone to errors, and an accurate depth map is difficult to obtain.
Disclosure of Invention
In view of the above, the present application is made to provide a depth map generation method, an electronic device, a computing processing device, and a computer-readable storage medium that overcome or at least partially solve the above-mentioned problems.
In accordance with an aspect of the present application, there is provided a depth map generating method, including:
acquiring at least three images shot by at least three shooting devices;
calculating at least two matching degrees between a target pixel point on a target image in the at least three images and matched pixel points on other images;
for the same target pixel point, fusing the at least two matching degrees corresponding to the same target pixel point, and generating a first depth map corresponding to the target image based on the fused matching degrees corresponding to the target pixel points; or determining depth values according to the at least two matching degrees respectively to obtain at least two depth values corresponding to each pixel point on the target image, and generating a second depth map corresponding to the target image according to the at least two depth values corresponding to each pixel point on the target image.
Optionally, the fusing the at least two matching degrees corresponding to the same target pixel point includes:
and for the same target pixel point, performing summation operation on the at least two matching degrees corresponding to the same target pixel point to obtain the fused matching degree corresponding to the target pixel point.
Optionally, the generating a second depth map corresponding to the target image according to at least two depth values corresponding to each of the target pixel points includes:
selecting the smallest depth value from the at least two depth values corresponding to the same pixel point as the first depth value of the pixel point;
and generating a second depth map corresponding to the target image based on the first depth value corresponding to each target pixel point.
Optionally, the generating a second depth map corresponding to the target image based on the first depth values corresponding to the respective pixel points on the target image includes:
filtering the first depth value corresponding to each pixel point on the target image;
and generating a second depth map corresponding to the target image according to the filtered first depth value corresponding to each pixel point on the target image.
Optionally, before the calculating at least two matching degrees between the target pixel point on the target image in the at least three images and the matched pixel points on other respective images, the method further includes:
and respectively correcting the target image and the image to be matched to obtain a corrected target image and an image to be matched and a mapping relation between the corrected target image and the target image.
Optionally, the calculating at least two matching degrees between the target pixel point on the target image in the at least three images and the matched pixel points on other respective images includes:
matching the target pixel points on the corrected target image with the pixel points on the corrected image to be matched of the target image, and respectively calculating the matching degree between each target pixel point on the corrected target image and the matched pixel points on the corrected image to be matched.
Optionally, the mapping relationship includes a first mapping relationship obtained by correcting the target image and the first image and a second mapping relationship obtained by correcting the target image and the second image; before the fusing the at least two matching degrees corresponding to the same target pixel point, the method further includes:
determining a third mapping relation between the corrected target image corresponding to the first image and the corrected target image corresponding to the second image according to the first mapping relation and the second mapping relation;
and converting the matching degree corresponding to the corrected target image corresponding to the second image into the matching degree corresponding to the corrected target image corresponding to the first image according to the third mapping relation.
Optionally, the mapping relationship includes a fourth mapping relationship obtained by correcting the target image and the third image; before the fusing the at least two matching degrees corresponding to the same target pixel point, the method further includes:
and converting the matching degree corresponding to the corrected target image corresponding to the third image into the matching degree corresponding to the target image according to the fourth mapping relation.
Optionally, the determining the depth values according to the at least two matching degrees respectively, and obtaining the at least two depth values corresponding to each pixel point on the target image includes:
and respectively calculating depth values according to the matching degree between each target pixel point on the corrected target image and the matched pixel point on the corrected image matched with the target image, so as to obtain at least two depth values corresponding to each pixel point on the target image.
Optionally, the mapping relationship includes a first mapping relationship obtained by correcting the target image and the first image and a second mapping relationship obtained by correcting the target image and the second image; before the generating a second depth map corresponding to the target image according to at least two depth values corresponding to each pixel point on the target image, the method further includes:
determining a third mapping relation between the corrected target image corresponding to the first image and the corrected target image corresponding to the second image according to the first mapping relation and the second mapping relation;
and converting the depth value corresponding to the corrected target image corresponding to the second image into the depth value corresponding to the corrected target image corresponding to the first image according to the third mapping relation.
Optionally, the mapping relationship includes a fourth mapping relationship obtained by correcting the target image and the third image; before the generating a second depth map corresponding to the target image according to at least two depth values corresponding to each pixel point on the target image, the method further includes:
and converting the depth value corresponding to the corrected target image corresponding to the third image into the depth value corresponding to the target image according to the fourth mapping relation.
Optionally, the number of the shooting devices is three, and the position relationship between the shooting devices includes any one of a delta shape and an L shape.
Optionally, the at least three cameras are disposed on an electronic device, and the electronic device includes any one of a movable platform, a mobile terminal, a virtual reality terminal, or an augmented reality terminal.
Optionally, the at least three cameras are disposed on a movable platform, the method further comprising:
and determining the motion track of the movable platform or the operation track of the mechanical arm on the movable platform according to the first depth map or the second depth map.
In accordance with another aspect of the present application, there is provided an electronic device including a processor, a memory, and at least three cameras;
the processor is configured to: acquiring at least three images shot by at least three shooting devices; calculating at least two matching degrees between a target pixel point on a target image in the at least three images and matched pixel points on other images; for the same target pixel point, fusing the at least two matching degrees corresponding to the same target pixel point, and generating a first depth map corresponding to the target image based on the fused matching degrees corresponding to the target pixel points; or determining depth values according to the at least two matching degrees respectively to obtain at least two depth values corresponding to each pixel point on the target image, and generating a second depth map corresponding to the target image according to the at least two depth values corresponding to each pixel point on the target image.
Optionally, when the processor merges the at least two matching degrees corresponding to the same target pixel point, the processor is configured to:
and for the same target pixel point, performing summation operation on the at least two matching degrees corresponding to the same target pixel point to obtain the fused matching degree corresponding to the target pixel point.
Optionally, when the processor generates the second depth map corresponding to the target image according to at least two depth values corresponding to each pixel point on the target image, the processor is configured to:
selecting the smallest depth value from the at least two depth values corresponding to the same pixel point as the first depth value of the pixel point;
and generating a second depth map corresponding to the target image based on the first depth values corresponding to the pixel points on the target image.
Optionally, when the processor generates the second depth map corresponding to the target image based on the first depth values corresponding to the respective pixel points on the target image, the processor is configured to:
filtering the first depth value corresponding to each pixel point on the target image;
and generating a second depth map corresponding to the target image according to the filtered first depth value corresponding to each pixel point on the target image.
Optionally, before the processor calculates at least two matching degrees between a target pixel point on a target image in the at least three images and matched pixel points on other respective images, the processor is further configured to:
and respectively correcting the target image and the image to be matched to obtain a corrected target image and an image to be matched and a mapping relation between the corrected target image and the target image.
Optionally, when calculating at least two matching degrees between a target pixel point on a target image in the at least three images and a matched pixel point on each of the other images, the processor is configured to:
matching the target pixel points on the corrected target image with the pixel points on the corrected image to be matched of the target image, and respectively calculating the matching degree between each target pixel point on the corrected target image and the matched pixel points on the corrected image to be matched.
Optionally, the mapping relationship includes a first mapping relationship obtained by correcting the target image and the first image and a second mapping relationship obtained by correcting the target image and the second image; before the processor fuses the at least two matching degrees corresponding to the same target pixel point, the processor is further configured to:
determining a third mapping relation between the corrected target image corresponding to the first image and the corrected target image corresponding to the second image according to the first mapping relation and the second mapping relation;
and converting the matching degree corresponding to the corrected target image corresponding to the second image into the matching degree corresponding to the corrected target image corresponding to the first image according to the third mapping relation.
Optionally, the mapping relationship includes a fourth mapping relationship obtained by correcting the target image and the third image; before the processor fuses the at least two matching degrees corresponding to the same target pixel point, the processor is further configured to:
and converting the matching degree corresponding to the corrected target image corresponding to the third image into the matching degree corresponding to the target image according to the fourth mapping relation.
Optionally, the processor determines depth values according to the at least two matching degrees, and when obtaining at least two depth values corresponding to each pixel point on the target image, the processor is configured to:
and respectively calculating depth values according to the matching degree between each target pixel point on the corrected target image and the matched pixel point on the image matched with the target image, so as to obtain at least two depth values corresponding to each pixel point on the target image.
Optionally, the mapping relationship includes a first mapping relationship obtained by correcting the target image and the first image and a second mapping relationship obtained by correcting the target image and the second image; before the processor generates the second depth map corresponding to the target image according to the at least two depth values corresponding to each pixel point on the target image, the processor is further configured to:
determining a third mapping relation between the corrected target image corresponding to the first image and the corrected target image corresponding to the second image according to the first mapping relation and the second mapping relation;
and converting the depth value corresponding to the corrected target image corresponding to the second image into the depth value corresponding to the corrected target image corresponding to the first image according to the third mapping relation.
Optionally, the mapping relationship includes a fourth mapping relationship obtained by correcting the target image and the third image; before the processor generates the second depth map corresponding to the target image according to the at least two depth values corresponding to each pixel point on the target image, the processor is further configured to:
and converting the depth value corresponding to the corrected target image corresponding to the third image into the depth value corresponding to the target image according to the fourth mapping relation.
Optionally, the number of the shooting devices is three, and the position relationship between the shooting devices includes any one of a delta shape and an L shape.
Optionally, the electronic device includes any one of a movable platform, a mobile terminal, a virtual reality terminal, or an augmented reality terminal.
Optionally, the electronic device is a movable platform, and the processor is further configured to:
and determining the motion track of the movable platform or the operation track of the mechanical arm on the movable platform according to the first depth map or the second depth map.
Optionally, the electronic device comprises a display, the display being configured to:
displaying the first depth map or the second depth map.
Optionally, the processor is further configured to:
and sending the first depth map or the second depth map to a control device of the electronic device, so that the control device can display the first depth map or the second depth map or generate a control instruction for the electronic device according to the first depth map or the second depth map.
According to another aspect of the present application, there is provided a computer program comprising computer readable code which, when run on a computing processing device, causes the computing processing device to perform the above-described depth map generation method.
According to another aspect of the application, a computer-readable medium is provided, in which a computer program as described above is stored.
According to the embodiment of the invention, at least three images shot by at least three shooting devices are acquired, at least two matching degrees between a target pixel point on a target image in the at least three images and the matched pixel points on the other images are calculated, the at least two matching degrees corresponding to the same target pixel point are fused, and a first depth map corresponding to the target image is generated based on the fused matching degrees of the target pixel points; or depth values are determined according to the at least two matching degrees respectively to obtain at least two depth values for each pixel point on the target image, and a second depth map corresponding to the target image is generated from the at least two depth values of each pixel point. In this way, the at least two matching degrees are not merely the result of matching along the baseline direction between a single pair of shooting devices; the depth map is generated after the matching degrees are fused, or by combining the depth values determined from the matching degrees, so that at least two calculation results are combined, the problem of mismatching along the baseline direction between two shooting devices is mitigated, and the accuracy of the depth map is improved.
Furthermore, the smallest of the at least two depth values corresponding to the same pixel point is selected as the first depth value of that pixel point, and the second depth map corresponding to the target image is generated based on the first depth value of each pixel point on the target image. During autonomous obstacle avoidance, this avoids unnecessary collisions or other accidents that could be caused by choosing a larger depth value; selecting the minimum depth value is safer and improves the success rate of autonomous obstacle avoidance.
Furthermore, the first depth value of each pixel point on the target image is filtered, and the second depth map corresponding to the target image is generated from the filtered first depth values, so that some erroneous depth values can be filtered out and a more accurate depth map is obtained.
Furthermore, the motion track of the movable platform or the operation track of the mechanical arm on the movable platform is determined according to the first depth map or the second depth map, so that the movable platform is prevented from colliding with surrounding objects or falling off. When a mechanical arm is arranged on the movable platform, the distance between the mechanical arm and surrounding objects can be determined, the operation track of the mechanical arm is generated, the mechanical arm is prevented from colliding with the surrounding objects, or the mechanical arm is used for operating the surrounding target objects.
Furthermore, when at least three shooting devices are arranged on an aircraft, a vehicle, a robot, or the like, the depth map obtained with the technical solution of the embodiments of the invention is more accurate. An aircraft can therefore perform better autonomous obstacle avoidance; rotorcraft in particular require accurate and agile obstacle avoidance, so crashes, collisions, and similar accidents are avoided and the safety of the aircraft is improved. A vehicle can likewise avoid obstacles autonomously and avoid colliding with other objects or people, giving the vehicle higher safety. A robot can also avoid obstacles better, and its mechanical arm can manipulate objects more accurately; a sweeping robot, in particular, can reduce collisions with furniture and moving objects and clean every reachable position without leaving blind spots, improving the working capacity of the robot.
Further, the target image and the image to be matched are corrected respectively to obtain a corrected target image and a corrected image to be matched, so that a more accurate result can be obtained when searching the corrected images along the baseline direction.
Further, by displaying the first depth map or the second depth map, the operator can intuitively and promptly learn the environmental information around the electronic device, for example, which objects are around the electronic device, which objects are closer, and which objects are farther, so that the movement, operation, or other actions of the electronic device can be controlled in time. It should be noted that "the electronic device comprises a display" does not require the display to be physically disposed on the electronic device; the display may also be communicatively connected to the electronic device, for example via Bluetooth, a mobile network, or WiFi, and may be disposed on a remote controller or on a remote computer, which is not limited by the present invention.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a flow chart illustrating the steps of a method for generating a depth map in accordance with one embodiment of the present invention;
FIG. 2 shows a schematic representation of three images taken by a camera in a delta configuration;
FIG. 3 shows a schematic view of three images taken by another camera in a delta-shaped configuration;
FIG. 4 shows a schematic representation of three images taken by a camera in an L-shaped configuration;
FIG. 5 shows a schematic diagram of the same point in both the left and right images;
FIG. 6 is a flow chart illustrating the steps of a method for depth map generation according to another embodiment of the present invention;
FIG. 7 illustrates a schematic diagram of an image rectification technique;
FIG. 8 is a flow chart illustrating the steps of a method for depth map generation in accordance with yet another embodiment of the present invention;
FIG. 9 illustrates a schematic diagram of depth value generation;
FIG. 10 shows a schematic diagram of depth map integration;
FIG. 11 shows a schematic view of an electronic device according to a further embodiment of the invention;
FIG. 12 schematically shows a block diagram of a computing processing device for executing a method according to the invention; and
fig. 13 schematically shows a storage unit for holding or carrying program code implementing the method according to the invention.
DETAILED DESCRIPTION OF EMBODIMENT(S) OF THE INVENTION
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In order to make the present invention better understood by those skilled in the art, the following description is made of the concept related to the present invention:
the shooting device includes but is not limited to a camera, a basic visual system can be formed by double shooting devices generally, and at least three shooting devices are needed for realizing the technical scheme of the invention.
The target image may be any one of at least three images, and the target image may be an image captured by a pre-selected certain capturing device, or may be a randomly determined image, which is not limited in this embodiment of the present invention.
A plurality of pixel points are selected from the target image and recorded as target pixel points. In image processing, a feature point refers to a point where the image grayscale value changes drastically, or a point of large curvature on an image edge (i.e., an intersection of two edges). Feature points reflect the essential characteristics of an image and can identify the target object in the image, so image matching can be completed through the matching of feature points. The extracted feature points of the image may be used as the target pixel points, or any other suitable pixel points may be used, which is not limited in the embodiments of the present invention. For example, the feature points of an image are usually chosen as corner points, and optional corner detection algorithms include: FAST (Features from Accelerated Segment Test), SUSAN (a corner detection algorithm), and the Harris operator (a corner detection algorithm).
The depth value corresponding to a pixel point may be a distance between the camera and an object corresponding to the pixel point, and the depth map corresponding to the image may be an image for representing the depth value of each pixel point in the image, for example, each pixel point in the depth map represents the depth value of the pixel point by using a different color or gray value. And establishing a corresponding relation between pixel points between the target image and the other image, and calculating a depth value according to the parallax of the corresponding point.
For example, when the Stereo Matching algorithm is adopted, the SGM (Semi-Global Matching) algorithm may be used directly to generate the depth map after feature point matching. When the Plane-Sweeping algorithm is adopted, after the feature points are matched, a BA (Bundle Adjustment) algorithm is needed to estimate the pose of the shooting device of each image; the Plane-Sweeping algorithm is then used to calculate the relative distances of the pixel points, a further semi-global optimization adjustment is performed, and the SGM algorithm is used to generate the depth map.
When the pixel points on the other images that match a target pixel point on the target image are determined, the matching degree between the target pixel point and the matched pixel point on each other image can be obtained; the matching degree represents the similarity between the two pixel points. A single pixel has poor robustness and is easily affected by illumination changes and different viewing angles, so a common approach is sliding-window matching: for the window centered on the target pixel point in the target image, cost values (Cost Values) are computed against sliding windows of the same size in the other image from left to right. The more similar the two windows are, the smaller the cost value and the higher the matching degree, so the reciprocal of the cost value can be used as the matching degree. The pixel point at the position with the maximum matching degree is the best matching result, i.e., the pixel point on the other image that matches the target pixel point. It should be noted that, in practical applications, the cost value may also be used directly as the matching degree; in that case the calculation result varies inversely with the matching degree, and the matching degree can be adjusted accordingly in the other steps that use it. Which specific parameter is selected as the matching degree is not limited in the present application, and any parameter capable of representing the matching degree falls within the protection scope of the present application.
For example, when calculating the matching degree between each target pixel point and a pixel point on another image, an MAD (Mean Absolute Difference) algorithm, an SSD (Sum of Squared Differences) algorithm, an SAD (Sum of Absolute Differences) algorithm, an NCC (Normalized Cross Correlation) algorithm, an SSDA (Sequential Similarity Detection Algorithm), an SATD (Sum of Absolute Transformed Differences) algorithm, or the like may be used; the specific algorithm can be selected according to actual needs, which is not limited in the embodiments of the present invention.
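As an illustrative sketch only (the use of SAD, the window size, and the search range are assumptions of the example, and the window is assumed to stay inside the image bounds; the application does not prescribe a specific algorithm), the sliding-window cost computation and the cost-to-matching-degree conversion described above might look as follows:

```python
import numpy as np

def sad_cost(target, other, y, x, d, win=5):
    # Sum of Absolute Differences between the window centered at (y, x) in the
    # target image and the window shifted by disparity d along the baseline in
    # the other image; a smaller cost value means a higher matching degree.
    r = win // 2
    patch_t = target[y - r:y + r + 1, x - r:x + r + 1].astype(np.float32)
    patch_o = other[y - r:y + r + 1, x - d - r:x - d + r + 1].astype(np.float32)
    return float(np.abs(patch_t - patch_o).sum())

def best_match(target, other, y, x, max_disp=64, win=5):
    # Scan along the (horizontal) baseline direction, keep the disparity with
    # the lowest cost, and use the reciprocal of that cost as the matching degree.
    costs = [sad_cost(target, other, y, x, d, win) for d in range(max_disp)]
    d_best = int(np.argmin(costs))
    return d_best, 1.0 / (costs[d_best] + 1e-6)
```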
According to an embodiment of the invention, the problem that an accurate depth map is difficult to obtain in binocular stereo matching when repeated textures or weak textures exist in the picture is avoided. The invention provides a depth map generation mechanism: at least three images shot by at least three shooting devices are acquired, at least two matching degrees between a target pixel point on a target image in the at least three images and the matched pixel points on the other images are calculated, the at least two matching degrees corresponding to the same target pixel point are fused, and a first depth map corresponding to the target image is generated based on the fused matching degrees of the target pixel points; or depth values are determined according to the at least two matching degrees respectively to obtain at least two depth values for each pixel point on the target image, and a second depth map corresponding to the target image is generated from the at least two depth values of each pixel point. In this way, the at least two matching degrees are not merely the result of matching along the baseline direction between a single pair of shooting devices; the depth map is generated after the matching degrees are fused, or by combining the depth values determined from the matching degrees, so that at least two calculation results are combined, the problem of mismatching along the baseline direction between two shooting devices is mitigated, and the accuracy of the depth map is improved.
Referring to fig. 1, a flowchart illustrating steps of a depth map generation method according to an embodiment of the present invention is shown, which may specifically include the following steps:
step 101, at least three images shot by at least three shooting devices are obtained.
In the embodiment of the invention, when shooting is performed, each shooting device respectively shoots to obtain one image, namely at least three images are corresponding to at least three shooting devices. The position relationship among the shooting devices may be set according to actual needs, for example, the position relationship among four shooting devices includes a parallelogram, a trapezoid, and the like, the position relationship among five shooting devices includes a regular pentagon, and the like, and at least three shooting devices with different position relationships may all apply the technical solution of the present invention, which is not limited in this embodiment of the present invention.
In the embodiment of the present invention, the sides of different images captured by different capturing devices may be parallel or perpendicular to each other, or may form any angle with each other, which is not limited in the embodiment of the present invention.
Alternatively, when the number of the photographing devices is three, the positional relationship between the photographing devices includes any one of a delta shape or an L shape.
For example, fig. 2 is a schematic diagram of three images captured by a capturing device in a delta configuration, the upper diagram being located on a perpendicular bisector of a line connecting the left and right diagrams, and the long sides of the upper, left and right diagrams being parallel to each other. Fig. 3 is a schematic diagram of three images captured by another capturing device in a delta configuration, in which the positions of the upper, left and right images are symmetrical triangles, and the long sides of different images form an angle of 60 degrees. Fig. 4 is a schematic diagram of three images taken by a photographing device in an L-shaped configuration, in which an upper drawing is positioned above a left drawing, a right drawing is positioned at the right side of the left drawing, and long sides of the upper drawing, the left drawing and the right drawing are parallel to each other.
And 102, calculating at least two matching degrees between a target pixel point on a target image in the at least three images and matched pixel points on other images.
In the embodiment of the present invention, a plurality of target pixel points are determined on the target image among the at least three images, for example by extracting feature points. Then, for each target pixel point, the pixel point that best matches it is searched for on each of the other images, and the matching degree between the target pixel point and the matched pixel point on that image is calculated. Because pixel matching is performed between the target image and each of the other images, the number of matching degrees corresponding to each target pixel point is one less than the number of images; for example, if the number of images is three, two matching degrees are obtained. For at least three images, at least two matching degrees are accordingly obtained.
In the embodiment of the invention, because of installation deviation or insufficient manufacturing precision, two cameras may not be perfectly parallel. For two horizontally installed cameras, the same point may not lie on the same horizontal line in the left and right images; for two vertically installed cameras, the same point may not lie on the same vertical line in the upper and lower images. As shown in fig. 5, which is a schematic diagram of the same point in the left and right images, the five-pointed star does not lie on the same horizontal line in the two images.
In the embodiment of the invention, some pixel matching algorithms require the same point to lie on the same horizontal or vertical line in the two images, so that the pixel point matching the target pixel point can be found by searching along that horizontal or vertical line, for example the Stereo Matching algorithm; other pixel matching algorithms, for example the Plane-Sweeping algorithm, do not have this requirement. Therefore, not every target image and image to be matched needs to be corrected; correction is required only when the adopted pixel matching algorithm requires it.
For example, as shown in fig. 2, in three images captured by three cameras in a delta configuration, a Stereo Matching algorithm is adopted between the left image and the right image for pixel Matching, the left image and the right image need to be corrected, a Plane-Sweeping algorithm is adopted between the upper image and the left image for pixel Matching, and the upper image and the left image do not need to be corrected. As shown in fig. 3, another schematic diagram of three images captured by a capturing device in a delta-shaped configuration is shown, a Plane-Sweeping algorithm is adopted between an upper image and a left image, and between the left image and a right image for pixel point matching, and no correction is required for the upper image, the left image and the right image. As shown in fig. 4, in the schematic diagram of three images captured by the L-shaped capturing device, a Stereo Matching algorithm is adopted between the upper image and the left image, and between the left image and the right image for pixel Matching, and the upper image and the left image, and the left image and the right image need to be respectively corrected.
103, fusing the at least two matching degrees corresponding to the same target pixel point, and generating a first depth map corresponding to the target image based on the fused matching degrees corresponding to the target pixel points; or determining depth values according to the at least two matching degrees respectively to obtain at least two depth values corresponding to each pixel point on the target image, and generating a second depth map corresponding to the target image according to the at least two depth values corresponding to each pixel point on the target image.
In the embodiment of the invention, two strategies are combined for a plurality of calculation results, wherein one strategy is a strategy for fusing matching degrees in the process, and the other strategy is a strategy for combining depth values. Either of the two strategies can combine multiple calculation results, thereby obtaining a more accurate depth map.
First, the strategy of fusing matching degrees during the process is introduced. Each target pixel point has at least two corresponding matching degrees; the at least two matching degrees corresponding to the same target pixel point are fused to obtain the fused matching degree of that target pixel point, and finally a fused matching degree is obtained for every target pixel point on the target image. The at least two matching degrees may be fused in multiple ways, for example by summing them, or by multiplying them by respective coefficients and then summing, or in any other suitable manner, which is not limited in the embodiments of the present invention. Because the at least two matching degrees corresponding to the same target pixel point are fused and the first depth map corresponding to the target image is generated based on the fused matching degrees, the matching degrees are not merely the result of matching along the baseline direction between a single pair of shooting devices; the depth map is generated after fusion, so at least two calculation results are combined, the problem of mismatching along the baseline direction between two shooting devices is mitigated, and the accuracy of the depth map is improved.
Optionally, for the same target pixel point, an implementation manner of fusing the at least two matching degrees corresponding to the same target pixel point may include: and for the same target pixel point, performing summation operation on the at least two matching degrees corresponding to the same target pixel point to obtain the fused matching degree corresponding to the target pixel point. For example, in the Stereo Matching algorithm or the Plane-Sweeping algorithm, each feature point has a Cost Value (Cost Value), and at least two Cost values of each feature point are added to obtain a total Cost Value.
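A minimal sketch of this summation-based fusion, assuming the two cost volumes have already been converted into the same target-image coordinates (the names and the H x W x D layout are illustrative; the optional weights correspond to the "multiply by respective coefficients, then sum" variant mentioned above):

```python
import numpy as np

def fuse_cost_volumes(cost_a, cost_b, w_a=1.0, w_b=1.0):
    # Per-pixel, per-disparity summation of two cost volumes; the fused volume
    # is then fed to the semi-global aggregation step.
    return w_a * np.asarray(cost_a) + w_b * np.asarray(cost_b)
```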
After the matching degrees are fused, the depth map corresponding to the target image is generated based on the fused matching degree of each target pixel point and recorded as the first depth map. For example, the SGM algorithm is used to generate the first depth map, and the total cost value obtained by the addition above is used in the cost function constructed according to the SGBM (Semi-Global Block Matching) algorithm. The SGBM algorithm performs block-wise calculation of each cost value and then uses the SGM algorithm for disparity optimization. In the SGM algorithm, the disparity calculation adopts a winner-takes-all strategy: each target pixel point selects the disparity value corresponding to the minimum aggregated cost value as its final disparity. The result of the disparity calculation is a disparity map of the same size as the target image, storing the disparity value of each pixel point; when the intrinsic and extrinsic parameters of the images are known, the disparity map can be converted into a depth map, which stores the depth value of each pixel point and represents the position of each pixel point in space.
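A hedged sketch of this step, using OpenCV's semi-global block matcher as a stand-in (the parameter values, the OpenCV dependency, and the assumption of rectified single-channel input images are choices of the example, not requirements of the application):

```python
import cv2
import numpy as np

def depth_from_pair(left_rect, right_rect, focal_px, baseline, num_disp=64, block=5):
    # Semi-global block matching yields a fixed-point disparity map (scaled by 16);
    # with known intrinsic/extrinsic parameters the disparity converts to depth.
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=num_disp,
                                 blockSize=block)
    disp = sgbm.compute(left_rect, right_rect).astype(np.float32) / 16.0
    depth = np.zeros_like(disp)
    valid = disp > 0
    depth[valid] = focal_px * baseline / disp[valid]  # triangulation: Z = f * B / d
    return depth
```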
Next, the strategy of combining depth values is introduced. Depth values are determined respectively according to the at least two matching degrees corresponding to each target pixel point, so that at least two depth values are obtained for each pixel point on the target image. That is, after the matching degrees between the target image and one other image are obtained, a depth map is generated directly from those matching degrees, and the same is done for each of the other images. For example, with the SGM algorithm, a depth map is generated from each set of matching degrees of the target pixel points, i.e., a depth value is determined for each pixel point on the target image. In this way, each pixel point on the target image corresponds to at least two depth values.
A depth map corresponding to the target image is then generated from the at least two depth values of each pixel point on the target image and recorded as the second depth map. For example, for the same pixel point, the smallest of the at least two depth values is selected as the first depth value of that pixel point, and the first depth values of all pixel points are aggregated to obtain the second depth map; or, for the same pixel point, the average of its at least two depth values is calculated and used to generate the second depth map, or any other suitable manner may be used, which is not limited in the embodiments of the present invention. Because the depth values are determined respectively according to the at least two matching degrees and the second depth map is generated by combining them, the matching degrees are not merely the result of matching along the baseline direction between a single pair of shooting devices; at least two calculation results are combined, the problem of occasional mismatching along the baseline direction between two shooting devices is mitigated, and the accuracy of the depth map is improved.
Optionally, an implementation manner of generating a second depth map corresponding to the target image according to at least two depth values corresponding to each pixel point on the target image may include: and selecting the minimum depth value from at least two depth values corresponding to the same pixel point as the first depth value of the pixel point, and generating a second depth map corresponding to the target image based on the first depth values corresponding to the pixel points on the target image.
At least two depth values correspond to each pixel point; the smallest one is selected as the first depth value of the pixel point, so that a first depth value is obtained for each pixel point on the target image, and these form the second depth map corresponding to the target image. The benefits of selecting the smallest depth value as the first depth value include at least: during autonomous obstacle avoidance, unnecessary collisions or other accidents caused by choosing a larger depth value are avoided; selecting the minimum depth value is safer and improves the success rate of autonomous obstacle avoidance.
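A minimal sketch of the per-pixel minimum selection (the convention that a zero value marks an invalid depth is an assumption of the example):

```python
import numpy as np

def merge_min_depth(depth_a, depth_b):
    # Take the smaller of the two candidate depth values where both are valid;
    # where only one is valid, keep that one. The conservative (smaller) value
    # is preferred for obstacle avoidance.
    both = (depth_a > 0) & (depth_b > 0)
    return np.where(both, np.minimum(depth_a, depth_b),
                    np.maximum(depth_a, depth_b))
```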
Optionally, an implementation manner of generating a second depth map corresponding to the target image based on the first depth value corresponding to each pixel point on the target image may include: the first depth values corresponding to the pixel points on the target image are filtered, the second depth map corresponding to the target image is generated according to the filtered first depth values corresponding to the pixel points on the target image, the first depth values corresponding to the pixel points on the target image are filtered, and therefore some wrong depth values can be filtered, and a more accurate depth map can be obtained.
In order to filter out the erroneous depth values of some pixel points, the first depth value of each pixel point on the target image needs to be filtered, for example with a speckle filter (Speckles Filter). The second depth map corresponding to the target image is then generated from the filtered first depth value of each pixel point on the target image.
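As a sketch of this filtering step (OpenCV's speckle filter operates on a 16-bit disparity map, so it is shown here applied before the disparity-to-depth conversion; the thresholds are illustrative assumptions, not values from the application):

```python
import cv2
import numpy as np

def filter_speckles(disparity, max_speckle_size=200, max_diff=1):
    # Small, isolated regions whose disparity differs from their surroundings by
    # more than max_diff are reset to 0 (treated as invalid) in place.
    disp16 = disparity.astype(np.int16).copy()
    cv2.filterSpeckles(disp16, 0, max_speckle_size, max_diff)
    return disp16
```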
Optionally, when at least three cameras are disposed on the movable platform, after generating the first depth map or the second depth map, the method may further include: and determining the motion track of the movable platform or the operation track of the mechanical arm on the movable platform according to the first depth map or the second depth map. The distance between the object around the movable platform and the shooting device on the movable platform is stored in the first depth map or the second depth map, and then the distance between each part of the movable platform and the object around the movable platform can be determined according to the position of the shooting device on the movable platform, so that the motion track of the movable platform can be determined according to the first depth map or the second depth map, and the movable platform is prevented from colliding with the object around the movable platform or falling off. When a mechanical arm is arranged on the movable platform, the distance between the mechanical arm and surrounding objects can be determined, the operation track of the mechanical arm is generated, the mechanical arm is prevented from colliding with the surrounding objects, or the mechanical arm is used for operating the surrounding target objects.
According to the embodiment of the invention, at least three images shot by at least three shooting devices are acquired, at least two matching degrees between a target pixel point on a target image in the at least three images and the matched pixel points on the other images are calculated, the at least two matching degrees corresponding to the same target pixel point are fused, and a first depth map corresponding to the target image is generated based on the fused matching degrees of the target pixel points; or depth values are determined according to the at least two matching degrees respectively to obtain at least two depth values for each pixel point on the target image, and a second depth map corresponding to the target image is generated from the at least two depth values of each pixel point. In this way, the at least two matching degrees are not merely the result of matching along the baseline direction between a single pair of shooting devices; the depth map is generated after the matching degrees are fused, or by combining the depth values determined from the matching degrees, so that at least two calculation results are combined, the problem of mismatching along the baseline direction between two shooting devices is mitigated, and the accuracy of the depth map is improved.
Optionally, at least three cameras are disposed on the electronic device, and the disposing includes mounting, installing, connecting, or other disposing manners. The electronic device includes a movable platform, a mobile terminal, a virtual reality terminal, an augmented reality terminal, or any other suitable electronic device, which is not limited in this embodiment of the present invention.
Wherein the movable platform includes, but is not limited to, an aircraft, a vehicle, a robot, and the like. The aircraft includes an unmanned aircraft, such as a rotary-wing aircraft or a fixed-wing aircraft, or any other suitable aircraft, which is not limited in the embodiments of the present invention. The vehicle includes a manned vehicle, an unmanned vehicle, a remote-control vehicle, or any other suitable vehicle, which is not limited in the embodiments of the present invention. The robot includes a sweeping robot, a cargo robot for transportation, a monitoring robot, or any other suitable robot, which is not limited in the embodiments of the present invention. When at least three shooting devices are arranged on an aircraft, a vehicle, a robot, or the like, the depth map obtained with the technical solution of the embodiments of the invention is more accurate. An aircraft can therefore perform better autonomous obstacle avoidance; rotorcraft in particular require accurate and agile obstacle avoidance, so crashes, collisions, and similar accidents are avoided and the safety of the aircraft is improved. A vehicle can likewise avoid obstacles autonomously and avoid colliding with other objects or people, giving the vehicle higher safety. A robot can also avoid obstacles better, and its mechanical arm can manipulate objects more accurately; a sweeping robot, in particular, can reduce collisions with furniture and moving objects and clean every reachable position without leaving blind spots, improving the working capacity of the robot.
The mobile terminal includes a mobile phone, a tablet computer, a notebook computer, etc., or any other suitable mobile terminal, which is not limited in this embodiment of the present invention. A Virtual Reality (VR) terminal includes an external head-mounted device, an integrated head display, or any other suitable Virtual Reality terminal, which is not limited in this embodiment of the present invention. The Augmented Reality (AR) terminal includes a see-through helmet, Augmented Reality glasses, or any other suitable Augmented Reality terminal, which is not limited in this embodiment of the present invention.
Referring to fig. 6, a flowchart illustrating steps of a depth map generating method according to another embodiment of the present invention is shown, which may specifically include the following steps:
step 201, at least three images shot by at least three shooting devices are obtained.
Step 202, the target image and the image to be matched are respectively corrected to obtain a corrected target image, a corrected image to be matched, and a mapping relation between the corrected target image and the target image.
In the embodiment of the invention, the target image and the image to be matched are corrected so that the same pixel point lies on the same horizontal or vertical line in both images, and a more accurate result can be obtained when searching the corrected images along the baseline direction. As shown in fig. 7, a schematic diagram of the image rectification technique: in (1), the left and right cameras capture their respective images; rectifying the two images transforms them from (1) to (2), after which corresponding points (Corresponding Points) lie on parallel epipolar lines (Epipolar Lines). For example, the point at the top of the tree in (2) lies on the same horizontal line in both images, so only that horizontal line needs to be searched to find the matched pixel point. Correcting the target image and the image to be matched yields the corrected target image, the corrected image to be matched, the mapping relation between the corrected target image and the target image, and the mapping relation between the corrected image to be matched and the image to be matched.
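A hedged sketch of this rectification step, using OpenCV's calibrated stereo rectification as a stand-in (the camera intrinsics K_a, K_b, distortion coefficients dist_a, dist_b, and the relative rotation R and translation T are assumed to be known from calibration; none of these names appear in the application text):

```python
import cv2

def rectify_pair(img_a, img_b, K_a, dist_a, K_b, dist_b, R, T):
    # Warp both images so that corresponding points fall on the same scan line.
    # The rectification rotations/projections (R1, P1) and (R2, P2) also encode
    # the mapping relation between each rectified image and its original image.
    size = (img_a.shape[1], img_a.shape[0])
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K_a, dist_a, K_b, dist_b, size, R, T)
    map_ax, map_ay = cv2.initUndistortRectifyMap(K_a, dist_a, R1, P1, size, cv2.CV_32FC1)
    map_bx, map_by = cv2.initUndistortRectifyMap(K_b, dist_b, R2, P2, size, cv2.CV_32FC1)
    rect_a = cv2.remap(img_a, map_ax, map_ay, cv2.INTER_LINEAR)
    rect_b = cv2.remap(img_b, map_bx, map_by, cv2.INTER_LINEAR)
    return rect_a, rect_b, (R1, P1), (R2, P2)
```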
Step 203, matching the target pixel points on the corrected target image with the pixel points on the corrected image to be matched of the target image, and respectively calculating the matching degree between each target pixel point on the corrected target image and the matched pixel points on the corrected image to be matched.
In the embodiment of the invention, after the target image and the image to be matched are corrected, the pixel points in the corrected images are matched; that is, the target pixel points on the corrected target image are matched with the pixel points on the corrected image to be matched. For each target pixel point, the pixel point that best matches it is searched for on the other image, and the matching degree between the target pixel point and that matched pixel point is calculated.
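For illustration, the matching degree could be computed with a simple block cost along the horizontal epipolar line of the corrected pair; the window size, disparity range, and sum-of-absolute-differences cost below are assumptions for the sketch, not the specific measure of this disclosure (a lower cost means a better match).

```python
import cv2
import numpy as np

def matching_costs(corrected_target, corrected_other, max_disp=64, win=5):
    """Return an (H, W, max_disp) cost volume for a corrected grayscale pair."""
    h, w = corrected_target.shape
    t = corrected_target.astype(np.float32)
    o = corrected_other.astype(np.float32)
    costs = np.full((h, w, max_disp), np.inf, dtype=np.float32)
    for d in range(max_disp):
        # Compare the target pixel at column x with the other image at column x - d,
        # i.e. search only along the horizontal line of the corrected pair.
        shifted = np.zeros_like(o)
        shifted[:, d:] = o[:, :w - d] if d > 0 else o
        sad = cv2.blur(np.abs(t - shifted), (win, win))   # windowed mean of |differences|
        costs[:, d:, d] = sad[:, d:]
    return costs

# The best-matching pixel for each target pixel is the disparity with the lowest cost:
# best_disparity = np.argmin(matching_costs(corrected_target, corrected_other), axis=2)
```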
Step 204, determining a third mapping relation between the corrected target image corresponding to the first image and the corrected target image corresponding to the second image according to the first mapping relation and the second mapping relation.
In the embodiment of the present invention, taking three images as an example, the target image is subjected to pixel matching with the first image and the second image, and therefore, the mapping relationship between the corrected target image and the target image includes a first mapping relationship obtained by correcting the target image and the first image and a second mapping relationship obtained by correcting the target image and the second image.
Since the corrected target image corresponding to the first image is not the same image as the corrected target image corresponding to the second image, and neither of them is the original target image, the matching degree based on the corrected target image corresponding to the first image and the matching degree based on the corrected target image corresponding to the second image cannot be fused directly; the two matching degrees must first be brought onto the same image through the mapping relations. Since both the first mapping relation and the second mapping relation are mapping relations with the original target image, a third mapping relation between the corrected target image corresponding to the first image and the corrected target image corresponding to the second image can be calculated from the first mapping relation and the second mapping relation.
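For illustration, if each correction of the target image is modeled as a 3x3 homography acting on the original target image (an assumption made only for this sketch; H1 and H2 stand for the first and second mapping relations), the third mapping relation is obtained by composing the inverse of the second mapping with the first:

```python
import numpy as np

def third_mapping(H1, H2):
    """H1: original target image -> target image corrected with the first image.
    H2: original target image -> target image corrected with the second image.
    Returns H3: corrected-with-second -> corrected-with-first."""
    return H1 @ np.linalg.inv(H2)

# A homogeneous point p2 on the corrected target image corresponding to the second
# image maps (up to scale) to p1 = H3 @ p2 on the one corresponding to the first image.
```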
Step 205, converting the matching degree corresponding to the corrected target image corresponding to the second image into the matching degree corresponding to the corrected target image corresponding to the first image according to the third mapping relationship.
In the prior art there are only two shooting devices, so the two images shot by the two shooting devices can be processed directly to obtain a matching degree and, from it, a depth value. In the embodiment of the invention there are at least three shooting devices, which shoot at least three images. When one target image is selected and matched against at least two other images, at least two matching degrees are therefore obtained for the same pixel point on the target image, and if a fusion operation is to be performed on these matching degrees, they must correspond to the same image. As described above, in order to obtain a more accurate result, a correction operation is performed on the at least three shot images, and the corrected images to which the calculated matching degrees correspond are not the same image. Therefore, in the embodiment of the present invention, the matching degree corresponding to the corrected target image corresponding to the second image may be converted, through the third mapping relation, into the matching degree corresponding to the corrected target image corresponding to the first image, so that the matching degrees of the target pixel points are all based on the same image and can subsequently be fused.
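Continuing the same homography assumption, a matching-degree (cost) volume computed on the corrected target image corresponding to the second image could be resampled into the frame of the corrected target image corresponding to the first image as sketched below; the sketch further assumes both volumes were built over the same set of depth hypotheses, so only the spatial coordinates are remapped.

```python
import cv2
import numpy as np

def convert_costs(costs2, H3, out_size):
    """costs2: (H, W, D) cost volume on the corrected target image for the second image.
    H3: mapping from that image's coordinates to the first image's corrected frame.
    out_size: (width, height) of the corrected target image for the first image."""
    w, h = out_size
    slices = []
    for d in range(costs2.shape[2]):
        slice2 = np.ascontiguousarray(costs2[:, :, d])
        # Resample this cost slice into the first corrected frame; border pixels get a
        # large cost so they never win during fusion.
        slices.append(cv2.warpPerspective(slice2, H3, (w, h),
                                          borderMode=cv2.BORDER_CONSTANT,
                                          borderValue=1e9))
    return np.stack(slices, axis=2)
```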
Optionally, the target image and the image to be matched are corrected only when the adopted pixel point matching algorithm requires correction. The mapping relation comprises a fourth mapping relation obtained by correcting the target image and the third image; before the fusing of the at least two matching degrees corresponding to the same target pixel point, the method may further include: converting, according to the fourth mapping relation, the matching degree corresponding to the corrected target image corresponding to the third image into the matching degree corresponding to the target image.
For example, the target image and the third image are matched using a Stereo Matching algorithm, so the target image and the third image need to be corrected, and the matching degree obtained is the matching degree corresponding to the corrected target image corresponding to the third image; the target image and the other images are matched using a Plane-Sweeping algorithm, which requires no correction, so the matching degrees obtained correspond to the target image itself. The matching degrees that do not correspond to the target image are therefore all converted into matching degrees corresponding to the target image.
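As one possible reading of the Plane-Sweeping branch of this example, matching degrees can be computed directly in the coordinates of the uncorrected target image by warping the other image onto a series of fronto-parallel depth planes. The camera parameters (K_t, K_o, R, t with X_other = R·X_target + t), the depth hypotheses, and the cost measure below are illustrative assumptions, not the specific algorithm of this disclosure.

```python
import cv2
import numpy as np

def plane_sweep_costs(img_target, img_other, K_t, K_o, R, t, depths, win=5):
    """Return an (H, W, len(depths)) cost volume in (uncorrected) target-image coordinates."""
    h, w = img_target.shape
    tgt = img_target.astype(np.float32)
    oth = img_other.astype(np.float32)
    n = np.array([[0.0, 0.0, 1.0]])                 # fronto-parallel plane normal
    costs = np.empty((h, w, len(depths)), dtype=np.float32)
    for i, depth in enumerate(depths):
        # Homography induced by the plane Z = depth (target frame), mapping target
        # pixels to pixels of the other image.
        H = K_o @ (R + (t.reshape(3, 1) @ n) / depth) @ np.linalg.inv(K_t)
        # Warp the other image back into target coordinates under this depth hypothesis.
        warped = cv2.warpPerspective(oth, np.linalg.inv(H), (w, h))
        costs[:, :, i] = cv2.blur(np.abs(tgt - warped), (win, win))
    return costs
```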
Step 206, fusing the at least two matching degrees corresponding to the same target pixel point, and generating a first depth map corresponding to the target image based on the fused matching degrees corresponding to the target pixel points.
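A minimal sketch of this step, under the same assumptions as the earlier sketches (cost volumes already expressed on the same target image, the hypothesis index used directly as disparity, and hypothetical focal length and baseline values); the summation below matches the fusion by summation described later for the processor.

```python
import numpy as np

def first_depth_map(costs_a, costs_b, focal, baseline):
    """costs_a, costs_b: (H, W, D) matching costs on the same target image (lower = better).
    If the matching degree is a similarity rather than a cost, replace argmin by argmax."""
    fused = costs_a + costs_b                         # summation fusion of matching degrees
    best = np.argmin(fused, axis=2).astype(np.float32)
    depth = np.zeros_like(best)                       # 0 marks pixels without a usable match
    valid = best >= 1
    depth[valid] = focal * baseline / best[valid]     # hypothesis index treated as disparity
    return depth
```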
According to the embodiment of the invention, at least three images shot by at least three shooting devices are acquired. The target image and the image to be matched are corrected respectively to obtain the corrected target image, the corrected image to be matched, and the mapping relation between the corrected target image and the target image. The target pixel points on the corrected target image are matched with the pixel points on the corrected image to be matched, and the matching degree between each target pixel point on the corrected target image and the matched pixel point on the corrected image to be matched is calculated respectively. A third mapping relation between the corrected target image corresponding to the first image and the corrected target image corresponding to the second image is determined according to the first mapping relation and the second mapping relation, and the matching degree corresponding to the corrected target image corresponding to the second image is converted, according to the third mapping relation, into the matching degree corresponding to the corrected target image corresponding to the first image. The at least two matching degrees corresponding to the same target pixel point are then fused, and a first depth map corresponding to the target image is generated based on the fused matching degree corresponding to each target pixel point. In this way, the at least two matching degrees are no longer merely the result of matching along the baseline direction between two shooting devices; they are fused before the depth map is generated, so that at least two calculation results are combined, the problem of mismatches along the baseline direction between two shooting devices is alleviated, and the accuracy of the depth map is improved.
Referring to fig. 8, a flowchart illustrating steps of a depth map generating method according to another embodiment of the present invention is shown, which may specifically include the following steps:
Step 301, at least three images shot by at least three shooting devices are obtained.
Step 302, the target image and the image to be matched are respectively corrected, so as to obtain a corrected target image and an image to be matched, and a mapping relationship between the corrected target image and the target image.
Step 303, matching the target pixel points on the corrected target image with the pixel points on the corrected image to be matched of the target image, and respectively calculating the matching degree between each target pixel point on the corrected target image and the matched pixel point on the corrected image to be matched.
Step 304, respectively calculating depth values according to the matching degree between each target pixel point on the corrected target image and the matched pixel point on the corrected image matched with the target image, and obtaining at least two depth values corresponding to each pixel point on the target image.
In the embodiment of the present invention, if the target image and the image to be matched do not need to be corrected, the depth values are determined respectively according to the at least two matching degrees, so as to obtain at least two depth values corresponding to each pixel point on the target image. If the target image and the image to be matched do need to be corrected, the depth values are calculated respectively according to the matching degree between each target pixel point on the corrected target image and the matched pixel point on the corrected image to be matched, so as to obtain at least two depth values corresponding to each pixel point on the target image. The depth value is calculated in the same manner as when no correction is required, and the calculation is not repeated here.
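As an illustrative sketch, once the best match is known the depth can be recovered from the usual corrected-pair triangulation relation depth = focal × baseline / disparity; the focal length (in pixels) and baseline length are assumed calibration values, not values taken from this disclosure.

```python
import numpy as np

def depth_from_costs(costs, focal, baseline):
    """costs: (H, W, D) cost volume for one corrected pair (lower = better match).
    Returns one depth value per pixel; 0 marks pixels without a usable match."""
    disparity = np.argmin(costs, axis=2).astype(np.float32)
    depth = np.zeros_like(disparity)
    valid = disparity >= 1                # disparity 0 would correspond to infinite depth
    depth[valid] = focal * baseline / disparity[valid]
    return depth
```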
For example, fig. 9 is a schematic diagram of depth value generation. When the left image and the upper image are corrected together, the left image is corrected into left map 1 and the upper image into upper map 1; when the left image and the right image are corrected together, the left image is corrected into left map 2 and the right image into right map 1. Depth map 1 is based on left map 1, while depth map 2 is based on left map 2. Depth map 1 and depth map 2 therefore cannot be combined directly; the mapping relation between them must be found first, and the depth values then combined in one-to-one correspondence.
Step 305, determining a third mapping relationship between the corrected target image corresponding to the first image and the corrected target image corresponding to the second image according to the first mapping relationship and the second mapping relationship.
Step 306, converting the depth value corresponding to the corrected target image corresponding to the second image into the depth value corresponding to the corrected target image corresponding to the first image according to the third mapping relationship.
In the embodiment of the present invention, by using the third mapping relation, the depth value corresponding to the corrected target image corresponding to the second image may be converted into the depth value corresponding to the corrected target image corresponding to the first image, so that the depth values of the pixel points are all based on the same image and can subsequently be combined.
For example, in the schematic diagram of depth map combination shown in fig. 10, map 1 denotes the mapping relation (i.e., the first mapping relation) between the left map and left map 1, and map 2 denotes the mapping relation (i.e., the second mapping relation) between the left map and left map 2. From map 1 and map 2, the mapping relation map 3 (i.e., the third mapping relation) from left map 2 to left map 1 can be calculated. Using map 3, the depth values of depth map 2 can be mapped onto left map 1, so that depth map 1 and depth map 2 are both based on left map 1; the final depth map 4 is then obtained by combining them according to the depth value combination strategy described above.
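A sketch of this combination, assuming the third mapping relation is available as a homography H3 from left map 2 to left map 1 and that a depth of 0 marks pixels without a value; keeping the smaller depth where both maps are valid is one plausible merging rule, consistent with the selection of the smallest depth value described below for the processor.

```python
import cv2
import numpy as np

def combine_depth_maps(depth1, depth2, H3):
    """depth1: depth map based on left map 1; depth2: depth map based on left map 2;
    H3: mapping from left map 2 coordinates to left map 1 coordinates."""
    h, w = depth1.shape
    depth2_on_1 = cv2.warpPerspective(depth2, H3, (w, h), flags=cv2.INTER_NEAREST)
    valid1 = depth1 > 0
    valid2 = depth2_on_1 > 0
    # Where both pairings give a depth, keep the smaller (nearer) value; otherwise keep
    # whichever value exists.
    merged = np.where(valid1 & valid2, np.minimum(depth1, depth2_on_1),
                      np.where(valid2, depth2_on_1, depth1))
    return merged
```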
Optionally, the target image and the image to be matched are corrected only when the adopted pixel point matching algorithm requires correction. The mapping relation comprises a fourth mapping relation obtained by correcting the target image and the third image; before generating the second depth map corresponding to the target image according to the at least two depth values corresponding to each pixel point on the target image, the method may further include: converting, according to the fourth mapping relation, the depth value corresponding to the corrected target image corresponding to the third image into the depth value corresponding to the target image.
For example, the target image and the third image are matched using a Stereo Matching algorithm, so the target image and the third image need to be corrected, and the depth value obtained is the depth value corresponding to the corrected target image corresponding to the third image; the target image and the other images are matched using a Plane-Sweeping algorithm, which requires no correction, so the depth values obtained correspond to the target image itself. The depth values that do not correspond to the target image are therefore all converted into depth values corresponding to the target image.
Step 307, generating a second depth map corresponding to the target image according to at least two depth values corresponding to each pixel point on the target image.
According to the embodiment of the invention, at least three images shot by at least three shooting devices are acquired. The target image and the image to be matched are corrected respectively to obtain the corrected target image, the corrected image to be matched, and the mapping relation between the corrected target image and the target image. The target pixel points on the corrected target image are matched with the pixel points on the corrected image to be matched, and the matching degree between each target pixel point on the corrected target image and the matched pixel point on the corrected image to be matched is calculated respectively. Depth values are then calculated respectively according to these matching degrees, so that at least two depth values corresponding to each pixel point on the target image are obtained. A third mapping relation between the corrected target image corresponding to the first image and the corrected target image corresponding to the second image is determined according to the first mapping relation and the second mapping relation, the depth value corresponding to the corrected target image corresponding to the second image is converted, according to the third mapping relation, into the depth value corresponding to the corrected target image corresponding to the first image, and a second depth map corresponding to the target image is generated according to the at least two depth values corresponding to each pixel point on the target image. In this way, the at least two matching degrees are no longer merely the result of matching along the baseline direction between two shooting devices, and the depth map is generated by combining at least two depth values, so that at least two calculation results are combined, the problem of mismatches along the baseline direction between two shooting devices is alleviated, and the accuracy of the depth map is improved.
Referring to fig. 11, there is shown a schematic diagram of an electronic apparatus according to still another embodiment of the present invention, the electronic apparatus including a processor 401, a memory 402, and at least three cameras 403;
the processor is configured to: acquiring at least three images shot by at least three shooting devices; calculating at least two matching degrees between a target pixel point on a target image in the at least three images and matched pixel points on other images; for the same target pixel point, fusing the at least two matching degrees corresponding to the same target pixel point, and generating a first depth map corresponding to the target image based on the fused matching degrees corresponding to the target pixel points; or determining depth values according to the at least two matching degrees respectively to obtain at least two depth values corresponding to each pixel point on the target image, and generating a second depth map corresponding to the target image according to the at least two depth values corresponding to each pixel point on the target image.
When the processor fuses the at least two matching degrees corresponding to the same target pixel point, the processor is configured to:
and for the same target pixel point, performing summation operation on the at least two matching degrees corresponding to the same target pixel point to obtain the fused matching degree corresponding to the target pixel point.
When the processor generates the second depth map corresponding to the target image according to the at least two depth values corresponding to each pixel point on the target image, the processor is configured to:
selecting the smallest depth value from the at least two depth values corresponding to the same pixel point as the first depth value of the pixel point;
and generating a second depth map corresponding to the target image based on the first depth values corresponding to the pixel points on the target image.
The processor, when generating a second depth map corresponding to the target image based on the first depth values corresponding to the respective pixel points on the target image, is configured to:
filtering the first depth value corresponding to each pixel point on the target image;
and generating a second depth map corresponding to the target image according to the filtered first depth value corresponding to each pixel point on the target image.
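A sketch of this behaviour; the median filter below is one plausible choice of filtering, since the disclosure does not fix a specific filter, and a depth of 0 is used to mark pixels for which no pairing produced a value.

```python
import cv2
import numpy as np

def second_depth_map(depth_candidates):
    """depth_candidates: (H, W, N) stack of per-pixel depth values from N pairings,
    expressed on the same target image, with 0 marking missing values."""
    candidates = np.where(depth_candidates > 0, depth_candidates, np.inf)
    first_depth = np.min(candidates, axis=2)            # smallest depth value per pixel
    first_depth[~np.isfinite(first_depth)] = 0          # no pairing produced a depth here
    second_depth = cv2.medianBlur(first_depth.astype(np.float32), 5)
    return second_depth
```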
Before the processor calculates at least two matching degrees between a target pixel point on a target image and matched pixel points on other respective images in the at least three images, the processor is further configured to:
and respectively correcting the target image and the image to be matched to obtain a corrected target image and an image to be matched and a mapping relation between the corrected target image and the target image.
The processor is configured to, when calculating at least two matching degrees between a target pixel point on a target image in the at least three images and a pixel point matched with each of the other images:
matching the target pixel points on the corrected target image with the pixel points on the corrected image to be matched of the target image, and respectively calculating the matching degree between each target pixel point on the corrected target image and the matched pixel points on the corrected image to be matched.
The mapping relation comprises a first mapping relation obtained by correcting the target image and the first image and a second mapping relation obtained by correcting the target image and the second image; before the processor fuses the at least two matching degrees corresponding to the same target pixel point, the processor is further configured to:
determining a third mapping relation between the corrected target image corresponding to the first image and the corrected target image corresponding to the second image according to the first mapping relation and the second mapping relation;
and converting the matching degree corresponding to the corrected target image corresponding to the second image into the matching degree corresponding to the corrected target image corresponding to the first image according to the third mapping relation.
The mapping relation comprises a fourth mapping relation obtained by correcting the target image and the third image; before the processor fuses the at least two matching degrees corresponding to the same target pixel point, the processor is further configured to:
and converting the matching degree corresponding to the corrected target image corresponding to the third image into the matching degree corresponding to the target image according to the fourth mapping relation.
When the processor determines depth values respectively according to the at least two matching degrees to obtain at least two depth values corresponding to each pixel point on the target image, the processor is configured to:
and respectively calculating depth values according to the matching degree between each target pixel point on the corrected target image and the matched pixel point on the image matched with the target image, so as to obtain at least two depth values corresponding to each pixel point on the target image.
The mapping relation comprises a first mapping relation obtained by correcting the target image and the first image and a second mapping relation obtained by correcting the target image and the second image; before the processor generates the second depth map corresponding to the target image according to the at least two depth values corresponding to each pixel point on the target image, the processor is further configured to:
determining a third mapping relation between the corrected target image corresponding to the first image and the corrected target image corresponding to the second image according to the first mapping relation and the second mapping relation;
and converting the depth value corresponding to the corrected target image corresponding to the second image into the depth value corresponding to the corrected target image corresponding to the first image according to the third mapping relation.
The mapping relation comprises a fourth mapping relation obtained by correcting the target image and the third image; before the processor generates the second depth map corresponding to the target image according to the at least two depth values corresponding to each pixel point on the target image, the processor is further configured to:
and converting the depth value corresponding to the corrected target image corresponding to the third image into the depth value corresponding to the target image according to the fourth mapping relation.
The number of the shooting devices is three, and the position relation among the shooting devices comprises any one of a triangle shape and an L shape.
The electronic equipment comprises any one of a movable platform, a mobile terminal, a virtual reality terminal, or an augmented reality terminal. The movable platform comprises an aircraft, a vehicle, a robot, and the like. The aircraft includes an unmanned aircraft, such as a rotary-wing aircraft, a fixed-wing aircraft, or any other suitable aircraft, and embodiments of the present invention are not limited thereto. The vehicle includes a manned vehicle, an unmanned vehicle, a remote control vehicle, or any other suitable vehicle, which is not limited in this embodiment of the present invention. The robot includes a sweeping robot, a cargo robot for transportation, a monitoring robot, and the like, or any other suitable robot, which is not limited in this embodiment of the present invention.
The processor is further configured to:
and determining the motion track of the movable platform or the operation track of the mechanical arm on the movable platform according to the first depth map or the second depth map.
The electronic device includes a display to:
the first depth map or the second depth map is displayed, and the depth map is displayed, so that an operator can intuitively know the environmental information around the electronic equipment in time, for example, which objects are around the electronic equipment, which objects are closer to the electronic equipment, and which objects are farther from the electronic equipment, so that the timely movement, operation or other operations of the electronic equipment are controlled. It should be noted that the electronic device includes a display and is not limited to the display being necessarily disposed on the electronic device, and the display may also be communicatively connected to the electronic device, for example, the display may be connected to the movable platform through bluetooth, a mobile network, or WiFi, the display may be disposed on a remote controller, or may be disposed on a remote computer, and the present invention is not limited to this.
The processor is further configured to:
and sending the first depth map or the second depth map to a control device of the movable platform, so that the control device can display the first depth map or the second depth map or generate a control instruction for the movable platform according to the first depth map or the second depth map.
The control device may communicate with the movable platform through Bluetooth, a wireless network, a 5G network, and the like. The control device may display the first depth map or the second depth map in real time, or may generate a control instruction according to the first depth map or the second depth map; for example, the control device determines the motion trajectory of the movable platform or the operation trajectory of a mechanical arm on the movable platform according to the first depth map or the second depth map, generates a corresponding control instruction according to the motion trajectory or the operation trajectory, and sends the control instruction to the movable platform.
According to the embodiment of the invention, at least three images shot by at least three shooting devices are acquired, and at least two matching degrees between a target pixel point on a target image among the at least three images and the matched pixel points on the other images are calculated. The at least two matching degrees corresponding to the same target pixel point are fused, and a first depth map corresponding to the target image is generated based on the fused matching degree corresponding to each target pixel point; or depth values are determined respectively according to the at least two matching degrees to obtain at least two depth values corresponding to each pixel point on the target image, and a second depth map corresponding to the target image is generated according to the at least two depth values corresponding to each pixel point on the target image. In this way, the at least two matching degrees are no longer merely the result of matching along the baseline direction between two shooting devices; the depth map is generated after fusing the at least two matching degrees, or by combining the at least two depth values determined from the matching degrees, so that at least two calculation results are combined, the problem of mismatches along the baseline direction between two shooting devices is alleviated, and the accuracy of the depth map is improved.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functionality of some or all of the components in a computing processing device according to embodiments of the present invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
For example, FIG. 12 illustrates a computing processing device in which a method in accordance with the present invention may be implemented. The computing processing device conventionally includes a processor 1010 and a computer program product or computer-readable medium in the form of a memory 1020. The memory 1020 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read only memory), an EPROM, a hard disk, or a ROM. The memory 1020 has a storage space 1030 for program code 1031 for performing any of the method steps of the above-described method. For example, the storage space 1030 for program code may include respective program code 1031 for implementing various steps in the above method, respectively. The program code can be read from or written to one or more computer program products. These computer program products comprise a program code carrier such as a hard disk, a Compact Disc (CD), a memory card or a floppy disk. Such a computer program product is typically a portable or fixed storage unit as described with reference to fig. 13. The memory unit may have memory segments, memory spaces, etc. arranged similarly to the memory 1020 in the computing processing device of fig. 12. The program code may be compressed, for example, in a suitable form. Typically, the memory unit comprises computer readable code 1031', i.e. code that can be read by a processor, such as 1010, for example, which when executed by a computing processing device causes the computing processing device to perform the steps of the method described above.
Reference herein to "one embodiment," "an embodiment," or "one or more embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. Moreover, it is noted that instances of the word "in one embodiment" are not necessarily all referring to the same embodiment.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (32)

  1. A depth map generation method, comprising:
    acquiring at least three images shot by at least three shooting devices;
    calculating at least two matching degrees between a target pixel point on a target image in the at least three images and matched pixel points on other images;
    for the same target pixel point, fusing the at least two matching degrees corresponding to the same target pixel point, and generating a first depth map corresponding to the target image based on the fused matching degrees corresponding to the target pixel points; or determining depth values according to the at least two matching degrees respectively to obtain at least two depth values corresponding to each pixel point on the target image, and generating a second depth map corresponding to the target image according to the at least two depth values corresponding to each pixel point on the target image.
  2. The method according to claim 1, wherein the fusing the at least two matching degrees corresponding to the same target pixel point comprises:
    and for the same target pixel point, performing summation operation on the at least two matching degrees corresponding to the same target pixel point to obtain the fused matching degree corresponding to the target pixel point.
  3. The method according to claim 1, wherein the generating a second depth map corresponding to the target image according to the at least two depth values corresponding to each of the target pixel points comprises:
    selecting the smallest depth value from the at least two depth values corresponding to the same pixel point as the first depth value of the pixel point;
    and generating a second depth map corresponding to the target image based on the first depth value corresponding to each target pixel point.
  4. The method of claim 3, wherein generating the second depth map corresponding to the target image based on the first depth values corresponding to the respective pixel points on the target image comprises:
    filtering the first depth value corresponding to each pixel point on the target image;
    and generating a second depth map corresponding to the target image according to the filtered first depth value corresponding to each pixel point on the target image.
  5. The method of claim 1, wherein prior to said calculating at least two degrees of match between a target pixel point on a target image and matched pixel points on other respective images in said at least three images, said method further comprises:
    and respectively correcting the target image and the image to be matched to obtain a corrected target image and an image to be matched and a mapping relation between the corrected target image and the target image.
  6. The method of claim 5, wherein calculating at least two degrees of matching between a target pixel point on a target image and matched pixel points on other respective images in the at least three images comprises:
    matching the target pixel points on the corrected target image with the pixel points on the corrected image to be matched of the target image, and respectively calculating the matching degree between each target pixel point on the corrected target image and the matched pixel points on the corrected image to be matched.
  7. The method of claim 6, wherein the mapping relationship comprises a first mapping relationship obtained by rectifying the target image and the first image and a second mapping relationship obtained by rectifying the target image and the second image; before the fusing the at least two matching degrees corresponding to the same target pixel point, the method further includes:
    determining a third mapping relation between the corrected target image corresponding to the first image and the corrected target image corresponding to the second image according to the first mapping relation and the second mapping relation;
    and converting the matching degree corresponding to the corrected target image corresponding to the second image into the matching degree corresponding to the corrected target image corresponding to the first image according to the third mapping relation.
  8. The method of claim 6, wherein the mapping comprises a fourth mapping obtained by rectifying the target image and the third image; before the fusing the at least two matching degrees corresponding to the same target pixel point, the method further includes:
    and converting the matching degree corresponding to the corrected target image corresponding to the third image into the matching degree corresponding to the target image according to the fourth mapping relation.
  9. The method of claim 6, wherein the determining the depth values according to the at least two matching degrees respectively to obtain the at least two depth values corresponding to each pixel point on the target image comprises:
    and respectively calculating depth values according to the matching degree between each target pixel point on the corrected target image and the matched pixel point on the corrected image matched with the target image, so as to obtain at least two depth values corresponding to each pixel point on the target image.
  10. The method of claim 9, wherein the mapping relationship comprises a first mapping relationship obtained by rectifying the target image and the first image and a second mapping relationship obtained by rectifying the target image and the second image; before the generating a second depth map corresponding to the target image according to at least two depth values corresponding to each pixel point on the target image, the method further includes:
    determining a third mapping relation between the corrected target image corresponding to the first image and the corrected target image corresponding to the second image according to the first mapping relation and the second mapping relation;
    and converting the depth value corresponding to the corrected target image corresponding to the second image into the depth value corresponding to the corrected target image corresponding to the first image according to the third mapping relation.
  11. The method of claim 9, wherein the mapping comprises a fourth mapping obtained by rectifying the target image and the third image; before the generating a second depth map corresponding to the target image according to at least two depth values corresponding to each pixel point on the target image, the method further includes:
    and converting the depth value corresponding to the corrected target image corresponding to the third image into the depth value corresponding to the target image according to the fourth mapping relation.
  12. The method according to claim 1, wherein the number of the cameras is three, and the positional relationship between the cameras includes any one of a delta shape or an L shape.
  13. The method of claim 1, wherein the at least three cameras are disposed on an electronic device, the electronic device comprising any one of a movable platform, a mobile terminal, a virtual reality terminal, or an augmented reality terminal.
  14. The method of claim 1, wherein the at least three cameras are disposed on a movable platform, the method further comprising:
    and determining the motion track of the movable platform or the operation track of the mechanical arm on the movable platform according to the first depth map or the second depth map.
  15. An electronic device, comprising a processor, a memory, and at least three cameras;
    the processor is configured to: acquiring at least three images shot by at least three shooting devices; calculating at least two matching degrees between a target pixel point on a target image in the at least three images and matched pixel points on other images; for the same target pixel point, fusing the at least two matching degrees corresponding to the same target pixel point, and generating a first depth map corresponding to the target image based on the fused matching degrees corresponding to the target pixel points; or determining depth values according to the at least two matching degrees respectively to obtain at least two depth values corresponding to each pixel point on the target image, and generating a second depth map corresponding to the target image according to the at least two depth values corresponding to each pixel point on the target image.
  16. The electronic device of claim 15, wherein the processor, when fusing the at least two matching degrees corresponding to a same target pixel point, is configured to:
    and for the same target pixel point, performing summation operation on the at least two matching degrees corresponding to the same target pixel point to obtain the fused matching degree corresponding to the target pixel point.
  17. The electronic device of claim 15, wherein the processor, when generating the second depth map corresponding to the target image according to at least two depth values corresponding to each pixel point on the target image, is configured to:
    selecting the smallest depth value from the at least two depth values corresponding to the same pixel point as the first depth value of the pixel point;
    and generating a second depth map corresponding to the target image based on the first depth values corresponding to the pixel points on the target image.
  18. The electronic device of claim 17, wherein the processor, when generating the second depth map corresponding to the target image based on the first depth values corresponding to the respective pixel points on the target image, is configured to:
    filtering the first depth value corresponding to each pixel point on the target image;
    and generating a second depth map corresponding to the target image according to the filtered first depth value corresponding to each pixel point on the target image.
  19. The electronic device of claim 15, wherein before the processor calculates at least two degrees of match between a target pixel point on a target image and matched pixel points on other respective images in the at least three images, the processor is further configured to:
    and respectively correcting the target image and the image to be matched to obtain a corrected target image and an image to be matched and a mapping relation between the corrected target image and the target image.
  20. The electronic device of claim 19, wherein the processor, in calculating at least two degrees of match between a target pixel point on a target image and matched pixel points on other respective images in the at least three images, is configured to:
    matching the target pixel points on the corrected target image with the pixel points on the corrected image to be matched of the target image, and respectively calculating the matching degree between each target pixel point on the corrected target image and the matched pixel points on the corrected image to be matched.
  21. The electronic device of claim 20, wherein the mapping relationship comprises a first mapping relationship obtained by rectifying the target image and a first image and a second mapping relationship obtained by rectifying the target image and a second image; before the processor fuses the at least two matching degrees corresponding to the same target pixel point, the processor is further configured to:
    determining a third mapping relation between the corrected target image corresponding to the first image and the corrected target image corresponding to the second image according to the first mapping relation and the second mapping relation;
    and converting the matching degree corresponding to the corrected target image corresponding to the second image into the matching degree corresponding to the corrected target image corresponding to the first image according to the third mapping relation.
  22. The electronic device of claim 20, wherein the mapping relationship comprises a fourth mapping relationship obtained by rectifying the target image and the third image; before the processor fuses the at least two matching degrees corresponding to the same target pixel point, the processor is further configured to:
    and converting the matching degree corresponding to the corrected target image corresponding to the third image into the matching degree corresponding to the target image according to the fourth mapping relation.
  23. The electronic device of claim 20, wherein the processor determines the depth values according to the at least two matching degrees, and when obtaining the at least two depth values corresponding to each pixel point on the target image, the processor is configured to:
    and respectively calculating depth values according to the matching degree between each target pixel point on the corrected target image and the matched pixel point on the image matched with the target image, so as to obtain at least two depth values corresponding to each pixel point on the target image.
  24. The electronic device of claim 23, wherein the mapping relationship comprises a first mapping relationship obtained by rectifying the target image and a first image and a second mapping relationship obtained by rectifying the target image and a second image; before the processor generates the second depth map corresponding to the target image according to the at least two depth values corresponding to each pixel point on the target image, the processor is further configured to:
    determining a third mapping relation between the corrected target image corresponding to the first image and the corrected target image corresponding to the second image according to the first mapping relation and the second mapping relation;
    and converting the depth value corresponding to the corrected target image corresponding to the second image into the depth value corresponding to the corrected target image corresponding to the first image according to the third mapping relation.
  25. The electronic device of claim 23, wherein the mapping relationship comprises a fourth mapping relationship obtained by rectifying the target image and the third image; before the processor generates the second depth map corresponding to the target image according to the at least two depth values corresponding to each pixel point on the target image, the processor is further configured to:
    and converting the depth value corresponding to the corrected target image corresponding to the third image into the depth value corresponding to the target image according to the fourth mapping relation.
  26. The electronic device according to claim 15, wherein the number of the cameras is three, and the positional relationship between the cameras includes any one of a delta shape and an L shape.
  27. The electronic device of claim 15, wherein the electronic device comprises any one of a movable platform, a mobile terminal, a virtual reality terminal, or an augmented reality terminal.
  28. The electronic device of claim 15, wherein the electronic device is a movable platform, and wherein the processor is further configured to:
    and determining the motion track of the movable platform or the operation track of the mechanical arm on the movable platform according to the first depth map or the second depth map.
  29. The electronic device of claim 15, wherein the electronic device comprises a display configured to:
    displaying the first depth map or the second depth map.
  30. The electronic device of claim 15, wherein the processor is further configured to:
    and sending the first depth map or the second depth map to a control device of the electronic device, so that the control device can display the first depth map or the second depth map or generate a control instruction for the electronic device according to the first depth map or the second depth map.
  31. A computer program comprising computer readable code which, when run on a computing processing device, causes the computing processing device to perform a depth map generation method according to any of claims 1-14.
  32. A computer-readable medium, in which a computer program according to claim 31 is stored.
CN202080044087.6A 2020-04-28 2020-04-28 Depth map generation method, electronic device, calculation processing device, and storage medium Pending CN113994382A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/087569 WO2021217444A1 (en) 2020-04-28 2020-04-28 Depth map generation method, electronic device, computer processing device and storage medium

Publications (1)

Publication Number Publication Date
CN113994382A true CN113994382A (en) 2022-01-28

Family

ID=78331582

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080044087.6A Pending CN113994382A (en) 2020-04-28 2020-04-28 Depth map generation method, electronic device, calculation processing device, and storage medium

Country Status (2)

Country Link
CN (1) CN113994382A (en)
WO (1) WO2021217444A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115297249B (en) * 2022-09-28 2023-01-06 深圳慧源创新科技有限公司 Binocular camera and binocular obstacle avoidance method

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8855846B2 (en) * 2005-10-20 2014-10-07 Jason W. Grzywna System and method for onboard vision processing
CN106127788B (en) * 2016-07-04 2019-10-25 触景无限科技(北京)有限公司 A kind of vision barrier-avoiding method and device
WO2018086050A1 (en) * 2016-11-11 2018-05-17 深圳市大疆创新科技有限公司 Depth map generation method and unmanned aerial vehicle based on this method
CN106960454B (en) * 2017-03-02 2021-02-12 武汉星巡智能科技有限公司 Depth of field obstacle avoidance method and equipment and unmanned aerial vehicle
CN110570468A (en) * 2019-08-16 2019-12-13 苏州禾昆智能科技有限公司 Binocular vision depth estimation method and system based on depth learning

Also Published As

Publication number Publication date
WO2021217444A1 (en) 2021-11-04


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination