WO2020001149A1 - Method, apparatus, and computer-readable storage medium for extracting edges of objects in depth images - Google Patents
Method, apparatus, and computer-readable storage medium for extracting edges of objects in depth images
- Publication number
- WO2020001149A1 (PCT/CN2019/084392)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- edge pixel
- edge
- pixel set
- depth image
- final
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Definitions
- the present disclosure relates to image processing technology, and in particular to a method, apparatus, storage medium, and computer device for extracting edges of objects in depth images.
- the depth camera obtains depth information by emitting infrared structured light, receiving the beam reflected by the object, and calculating from it the distance from the object surface to the camera.
- Related depth image segmentation methods are based on the random Hough transform, on normal-component edge fusion, on the morphological waterline region, or on mathematical morphology.
- embodiments of the present disclosure provide a method, an apparatus, a storage medium, and a computer device for extracting an edge of a depth image object.
- a method for extracting an edge of an object in a depth image includes: calculating at least two edge pixel sets of a first depth image by using at least two edge detection algorithms; and obtaining a final edge pixel set based on the at least two edge pixel sets.
- before the at least two edge pixel sets of the first depth image are calculated using the at least two edge detection algorithms, the method further includes: determining at least one frame of related depth image related to the depth image to be segmented; and fusing the determined related depth image with the depth image to be segmented one or more times to obtain the first depth image.
- the at least two edge pixel sets include a base edge pixel set and at least one other edge pixel set.
- the step of obtaining a final edge pixel set based on the at least two edge pixel sets includes: compensating a basic edge pixel set with at least one other edge pixel set to obtain the final edge pixel set, where the basic edge pixel set is calculated using a first edge detection algorithm and the at least one other edge pixel set is calculated using at least one other edge detection algorithm.
- the step of obtaining a final edge pixel set based on the at least two edge pixel sets includes: taking the union of the at least two edge pixel sets as the final edge pixel set.
- the step of obtaining a final edge pixel set based on the at least two edge pixel sets includes performing the following processing for each pixel: when the number of edge pixel sets containing the pixel exceeds a first preset threshold, the pixel is counted into the final edge pixel set.
- the step of compensating the basic edge pixel set with at least one other edge pixel set includes: adding the basic edge pixel set to the final edge pixel set; and performing the following processing on the at least one other edge pixel set: selecting, from the other edge pixel set, points directly connected to points in the final edge pixel set and adding them to the final edge pixel set.
- the step of calculating at least two edge pixel sets of the first depth image using at least two edge detection algorithms includes: determining at least two edge detection algorithms; determining the number of frames to be processed by each edge detection algorithm according to a weight value preset for each algorithm; and using the determined edge detection algorithms to process, according to the determined frame counts, related depth images related to the first depth image to obtain edge pixel sets.
- the step of obtaining a final edge pixel set based on the at least two edge pixel sets includes performing the following processing for each pixel in the first image: when the number of edge pixel sets containing the pixel exceeds a second preset threshold, the pixel is counted into the final edge pixel set.
- the first edge detection algorithm is: extracting pixels with a depth value of zero.
- the method further includes: performing a connected domain analysis on the final edge pixel set to obtain a segmentation result.
- the method further includes: performing denoising processing on the segmentation result according to a priori information.
- an apparatus for extracting edges of objects in a depth image includes: a processor; and a memory storing instructions that, when executed by the processor, cause the processor to: calculate at least two edge pixel sets of a first depth image using at least two edge detection algorithms; and obtain a final edge pixel set based on the at least two edge pixel sets.
- the instructions, when executed by the processor, further cause the processor to: determine at least one frame of related depth image related to the depth image to be segmented; and fuse the determined related depth image with the depth image to be segmented one or more times to obtain the first depth image.
- the at least two edge pixel sets include a base edge pixel set and at least one other edge pixel set.
- the instructions, when executed by the processor, further cause the processor to: compensate a basic edge pixel set with at least one other edge pixel set to obtain the final edge pixel set, where the basic edge pixel set is calculated using a first edge detection algorithm and the at least one other edge pixel set is calculated using at least one other edge detection algorithm.
- the instructions when executed by the processor, further cause the processor to: take a union of at least two edge pixel sets as the final edge pixel set.
- the instructions, when executed by the processor, further cause the processor to perform the following processing for each pixel: when the number of edge pixel sets containing the pixel exceeds a first preset threshold, the pixel is counted into the final edge pixel set.
- the instructions, when executed by the processor, further cause the processor to: add the basic edge pixel set to the final edge pixel set; and perform the following processing on the at least one other edge pixel set: select, from the other edge pixel set, points directly connected to points in the final edge pixel set and add them to the final edge pixel set.
- the instructions, when executed by the processor, further cause the processor to: determine at least two edge detection algorithms; determine the number of frames to be processed by each edge detection algorithm according to a weight value preset for each algorithm; use the determined edge detection algorithms to process, according to the determined frame counts, related depth images related to the first depth image to obtain edge pixel sets; and perform the following processing for each pixel in the first image: when the number of edge pixel sets containing the pixel exceeds a second preset threshold, the pixel is counted into the final edge pixel set.
- the instructions when executed by the processor, further cause the processor to perform a connected domain analysis on the final edge pixel set to obtain a segmentation result.
- the instructions when executed by the processor, further cause the processor to perform denoising processing on the segmentation result according to a priori information.
- a computer-readable storage medium having stored thereon a computer program that, when executed by a processor, implements the steps of the foregoing method.
- FIG. 1 is a flowchart of a method for extracting an edge of a depth image object according to an embodiment of the present disclosure
- FIG. 2 is a schematic structural diagram of a device for extracting an edge of a depth image object according to an embodiment of the present disclosure
- FIG. 3 is a schematic structural diagram of a depth image object edge extraction device according to another embodiment of the present disclosure.
- FIG. 4 is a schematic structural diagram of a depth image object edge extraction device according to still another embodiment of the present disclosure.
- FIG. 6A is the original image of the image to be segmented in an application example of the present disclosure
- FIG. 6B is a depth image of an image to be segmented in an application example of the present disclosure.
- FIG. 6C is a partial view of a depth image of an image to be segmented in an application example of the present disclosure
- FIG. 8 is a binarization result diagram of a connected domain analysis of the final edge pixel set according to an application example of the present disclosure
- FIG. 9 is a segmentation result containing noise in an application example of the present disclosure.
- FIG. 10A is a final segmentation result of an application example depth image
- FIG. 10B is a correspondence diagram between the final segmentation result of the application example of the present disclosure and the original image.
- FIG. 11 is a hardware layout diagram of a depth image object edge extraction device according to an embodiment of the present disclosure.
- FIG. 1 is a flowchart of a method for extracting an edge of a depth image object according to an embodiment of the present disclosure. As shown in FIG. 1, the method includes:
- Step 11 Use at least two edge detection algorithms to calculate and obtain at least two edge pixel sets of the first depth image.
- Step 12 Obtain a final edge pixel set based on the at least two edge pixel sets.
- the embodiments of the present disclosure propose to obtain at least two edge pixel sets using at least two edge detection algorithms and to obtain a more accurate final edge pixel set by combining them, thereby achieving stable and accurate pixel-level object segmentation.
- the method may further include the following steps: determining at least one frame of related depth image related to the depth image to be segmented, and fusing the determined related depth image with the depth image to be segmented one or more times to obtain the first depth image.
- because the first depth image is obtained by fusing multiple depth images, the edge pixel sets based on it are more accurate, contain less noise, and yield better segmentation results.
- the related depth image related to the depth image to be segmented is an image having substantially the same content as the depth image to be segmented; it may be a depth image at least one frame before the depth image to be segmented, a depth image at least one frame after it, or a combination of the two.
- the first depth image may be the depth image to be segmented itself, or a depth image obtained by fusing the depth image to be segmented with related depth images as described above.
- calculating the at least two edge pixel sets of the first depth image in step 11 above includes: obtaining a basic edge pixel set of the first depth image using the first edge detection algorithm, and calculating at least one other edge pixel set of the first depth image using at least one other edge detection algorithm; obtaining the final edge pixel set based on the at least two edge pixel sets in step 12 above then includes: compensating the basic edge pixel set with the at least one other edge pixel set to obtain the final edge pixel set.
- the basic edge pixel set may first be added to the final edge pixel set, and then the at least one other edge pixel set is processed as follows: points directly connected to points in the final edge pixel set are selected from the other edge pixel set and added to the final edge pixel set, until no other edge pixel set contains a point directly connected to a point in the final edge pixel set.
- the direct connection may be a four-connected or an eight-connected neighbourhood: for any point x, the four points immediately above, below, left, and right of it may be considered directly connected, or those four points together with the four diagonally adjacent points may be considered directly connected.
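As a minimal sketch (not code from the patent; the function name and coordinate tuples are assumptions), the two neighbourhood definitions can be written as:

```python
# Four axial neighbours (4-connectivity) and the same plus the four
# diagonal neighbours (8-connectivity), as (row, column) offsets.
N4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]
N8 = N4 + [(-1, -1), (-1, 1), (1, -1), (1, 1)]

def directly_connected(p, q, connectivity=8):
    """Return True if pixels p and q are adjacent under the chosen connectivity."""
    offsets = N8 if connectivity == 8 else N4
    return (q[0] - p[0], q[1] - p[1]) in offsets
```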
- This embodiment provides a simple and reliable compensation algorithm; other related compensation algorithms are not excluded. Segmentation based on the compensated edge pixel set can achieve pixel-level segmentation.
- another method is to use the determined edge detection algorithms to process related depth images related to the first depth image to obtain at least two edge pixel sets, and to combine the obtained sets to obtain the final edge pixel set.
- the number of the obtained edge pixel sets is the same as the number of the related depth images, that is, each frame of the depth image obtains a corresponding edge pixel set.
- At least two edge detection algorithms are determined, the number of frames to be processed by each algorithm is determined according to a weight value preset for it, and the determined algorithms are used to process the related depth images according to the determined frame counts.
- the second preset threshold here may be the same as or different from the first preset threshold.
- the first depth image may be a determined depth image to be segmented.
- the related depth image related to the first depth image includes the depth image to be segmented and the related depth image related to the depth image to be segmented.
- the related depth image related to the first depth image may be a depth image at least one frame before the depth image to be segmented, a depth image at least one frame after it, or a combination of the two.
- the corresponding depth image may be directly selected as the related depth image related to the first depth image according to the total number of frames.
- the first depth image may also be a fused depth image.
- the related depth images related to the first depth image may include: the first depth image, the depth image to be segmented, and the related depth images related to the depth image to be segmented. Since the depth images processed by the edge detection algorithms are all related to the first depth image, the edge pixel sets obtained in this way are still referred to as edge pixel sets of the first depth image.
- this method determines the number of frames of the depth image processed by each edge detection algorithm according to a weight value preset for each edge detection algorithm. For example, one of the following methods may be used:
- In the first scheme, after the edge detection algorithms and their weight values are determined, the weight value of each algorithm is multiplied by a preset parameter to obtain the number of depth image frames it processes. For example, if two edge detection algorithms are determined with weights w1 and w2, and the preset parameter a is a positive integer not less than 1, the first algorithm processes w1 * a image frames and the second processes w2 * a image frames;
- In the second scheme, after the edge detection algorithms are determined, the related depth images related to the first depth image are determined to total n frames, and the number of frames processed by each algorithm is set in proportion to its weight. For example, if two edge detection algorithms are determined, the first with weight w1 processing n1 frames and the second with weight w2 processing n2 frames, then w1 / w2 = n1 / n2 and n1 + n2 = n.
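The second scheme (w1 / w2 = n1 / n2 with n1 + n2 = n) can be sketched as follows; the rounding rule for non-integer splits is an assumption, since the text only fixes the ratio and the sum:

```python
def frames_by_ratio(weights, n):
    """Split n frames across algorithms in proportion to their weights,
    keeping the total equal to n (largest-remainder rounding is assumed)."""
    total = sum(weights)
    raw = [w * n / total for w in weights]
    counts = [int(r) for r in raw]
    # hand the remaining frames to the largest fractional parts
    remainder = n - sum(counts)
    order = sorted(range(len(raw)), key=lambda i: raw[i] - counts[i], reverse=True)
    for i in order[:remainder]:
        counts[i] += 1
    return counts
```

For two algorithms with weights 2 and 1 over 9 frames this yields 6 and 3 frames respectively.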
- the edge detection algorithm that performs better or gives more reliable results is assigned a higher weight than the other edge detection algorithms.
- when the number of edge pixel sets containing a pixel exceeds the second preset threshold, the pixel is counted into the final edge pixel set. The second preset threshold may be set to, for example, half the number of edge pixel sets: if more than half of the sets contain a pixel A, pixel A is counted into the final edge pixel set; otherwise it is not.
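The voting rule can be sketched as follows (a hypothetical helper; the default threshold of half the number of sets matches the example above):

```python
from collections import Counter

def vote_edges(edge_sets, threshold=None):
    """edge_sets: list of sets of (row, col) pixels.
    A pixel joins the result when it appears in more than `threshold` sets."""
    if threshold is None:
        threshold = len(edge_sets) / 2  # "more than half"
    counts = Counter(p for s in edge_sets for p in s)
    return {p for p, c in counts.items() if c > threshold}
```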
- the edge detection algorithms that can be used include, for example, the zero-value pixel acquisition method and the Canny, Roberts, Prewitt, Sobel, Laplacian, and LoG edge detection algorithms.
- the first edge detection algorithm is a zero-valued pixel acquisition method, that is, pixels with a depth value of zero are extracted.
- the method further includes: performing a connected domain analysis on the final edge pixel set to obtain a segmentation result, and performing denoising processing on the segmentation result according to prior information.
- the prior information includes, for example, but is not limited to, one or more of the following: position, size, and shape information of the object. Denoising the segmentation result according to the prior information removes segmentation results that do not come from the object (i.e., noise).
- At least two edge pixel sets of the first depth image are calculated using at least two edge detection algorithms, and a final edge pixel set is obtained based on them; combining the sets yields a more accurate final edge pixel set, thereby achieving stable and accurate pixel-level object segmentation.
- FIG. 2 is a schematic structural diagram of a device that can implement the method of the foregoing embodiment.
- the depth image object edge extraction device includes an initial pixel set acquisition module 21 and a final pixel set acquisition module 22, where:
- the initial pixel set acquisition module 21 is configured to calculate at least two edge pixel sets of the first depth image by using at least two edge detection algorithms;
- the final pixel set obtaining module 22 is configured to obtain a final edge pixel set based on the at least two edge pixel sets.
- the apparatus may further include a first depth image acquisition module 20, configured to determine at least one frame of related depth image related to the depth image to be segmented and to fuse the determined related depth image with the depth image to be segmented one or more times to obtain the first depth image.
- the final pixel set obtaining module 22 obtains a final edge pixel set based on the at least two edge pixel sets, including:
- the final pixel set acquisition module 22 compensates a basic edge pixel set with at least one other edge pixel set to obtain the final edge pixel set, wherein the basic edge pixel set is calculated by using a first edge detection algorithm, and the at least one Other edge pixel sets are calculated using at least one other edge detection algorithm; or
- the final pixel set acquisition module 22 takes a union of at least two edge pixel sets as the final edge pixel set; or
- the final pixel set acquisition module 22 performs the following processing for each pixel: when the number of edge pixel sets containing the pixel exceeds a first preset threshold, the pixel is counted into the final edge pixel set.
- An optional compensation method is: the final pixel set acquisition module 22 adds the basic edge pixel set to the final edge pixel set, and performs the following processing on the at least one other edge pixel set: points directly connected to points in the final edge pixel set are selected from the other edge pixel set and added to the final edge pixel set.
- the initial pixel set acquisition module 21 uses at least two edge detection algorithms to calculate and obtain at least two edge pixel sets of the first depth image, including:
- the initial pixel set acquisition module 21 determines at least two edge detection algorithms, determines the number of frames to be processed by each algorithm according to a weight value preset for it, and uses the determined algorithms to process, according to the determined frame counts, related depth images related to the first depth image to obtain edge pixel sets;
- the final pixel set acquisition module 22 obtains a final edge pixel set based on the at least two edge pixel sets, including: performing the following processing for each pixel in the first image: when the number of edge pixel sets containing the pixel exceeds a second preset threshold, the pixel is counted into the final edge pixel set.
- the device may further include a segmentation module 23 configured to perform connected-domain analysis on the final edge pixel set to obtain a segmentation result, and to denoise the segmentation result according to prior information.
- multiple edge pixel sets are obtained using multiple different edge detection algorithms, and a more accurate final edge pixel set is obtained by combining them, thereby achieving stable and accurate pixel-level object segmentation.
- This example describes the method of the foregoing embodiment in detail, using depth image fusion and compensation of the basic edge pixel set with one other edge pixel set; other implementations may refer to this example.
- a zero-value pixel acquisition method is used to obtain a base edge pixel set, however, the present disclosure is not limited thereto.
- the edge pixel is a pixel at an edge position of an object in an image. Accurately obtaining the edge pixels of the object in the image can achieve stable and accurate object segmentation.
- Step 51 Extract multiple frames of depth images of an object in a still scene, determine one of the frames as the depth image to be segmented, and take the other frames as related depth images related to the depth image to be segmented;
- the depth image described in this embodiment is an image containing depth information obtained by a depth camera: the camera emits infrared structured light, receives the beam reflected by the object, and calculates from the reflected beam the distance from the object surface to the camera, which is the depth information.
- the related depth image related to the depth image to be segmented is an image within a preset time window around it; for example, it may be a depth image at least one frame before the depth image to be segmented, a depth image at least one frame after it, or a combination of the two.
- Step 52 Fuse the extracted multi-frame depth images to obtain a first depth image;
- At the edge positions of an object, the imaging unit used to receive the reflected light may fail to receive it, so the depth value of some pixels is 0; these are called 0-value pixels. The set of 0-value pixels at the edge of a single image may not be stable: for example, a point in the image may be 0 at time t1 but non-zero at time t2.
- Multi-frame depth images of an object in a still scene, including the depth image to be segmented and related depth images, are first extracted and then fused into one image, namely the first depth image.
- a depth image to be segmented may be directly selected as the first depth image.
- the fusion method can be one of the following: the union method, the intersection method, the threshold method, or the continuous-interval method; for non-zero-value (ordinary) pixels in the depth image, mean filtering may be used. For N consecutive frames, the union method takes the union of the 0-value pixel sets of all frames; the intersection method takes their intersection; the threshold method counts, for each pixel, the number of frames N0 in which its value is 0, and considers the pixel a 0-value pixel when N0 is greater than a preset threshold Nth1 (for example, with 9 frames, a pixel that is 0 in 5 or more frames is considered a 0-value pixel, otherwise not); the continuous-interval method counts, for each pixel, the length Nlength of the longest run of consecutive frames in which its value is 0, and considers the pixel a 0-value pixel when Nlength is greater than another preset threshold Nth2.
- Different depth sensors have different noise models. In practical applications, these fusion methods can be tried separately to find the one that works best, and the results of different fusion methods can themselves be re-fused, i.e., fused multiple times. Multi-frame fusion improves the stability of the 0-value pixel set.
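Three of the fusion rules above (union, intersection, threshold) can be sketched on per-frame 0-value masks; the function and parameter names are assumptions, and each depth frame is represented as a 2-D list of booleans (True = depth value 0):

```python
def fuse_zero_masks(masks, method="threshold", n_th=None):
    """masks: list of equally sized 2-D lists of bools (True = depth 0).
    Returns the fused 0-value mask under the chosen rule."""
    rows, cols = len(masks[0]), len(masks[0][0])
    if n_th is None:
        n_th = len(masks) // 2  # assumed default for the threshold N_th1
    fused = [[False] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            count = sum(m[r][c] for m in masks)  # frames where this pixel is 0
            if method == "union":
                fused[r][c] = count > 0
            elif method == "intersection":
                fused[r][c] = count == len(masks)
            else:  # threshold method
                fused[r][c] = count > n_th
    return fused
```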
- Step 53 Use a first edge detection algorithm to extract a pixel point set (P 0 ) with zero depth data from the first depth image as a basic edge pixel set;
- an algorithm with a more reliable edge effect is selected as the first edge detection algorithm.
- the first edge detection algorithm in this example is the zero-value pixel extraction method, that is, extracting pixels with zero depth data to form a basic edge pixel set.
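The zero-value pixel extraction itself is simple; a hypothetical sketch, with the depth image as a 2-D list of values:

```python
def zero_value_pixels(depth_rows):
    """Return the set of (row, col) coordinates whose depth value is zero."""
    return {(r, c)
            for r, row in enumerate(depth_rows)
            for c, v in enumerate(row) if v == 0}
```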
- Refer to FIGS. 6A and 6B: FIG. 6A is the original image, FIG. 6B is the depth image corresponding to the extracted basic edge pixel set, and the dark points in the figure are pixels with a depth value of zero.
- Step 54 Use the second edge detection algorithm to extract the edge pixel point set (P c ) of the depth map as the second edge pixel set.
- FIG. 6C is a partial view of FIG. 6B. Points A and B on the edge are disconnected, indicating that edge pixels are missing from the first edge pixel set. Therefore, other edge detection algorithms can be used to extract edge pixels from the first depth image to obtain other edge pixel sets, and these sets are used to compensate the basic edge pixel set.
- the second edge detection algorithm is used to obtain the second edge pixel set.
- If needed, a third edge detection algorithm obtains a third edge pixel set, which is also used for compensation, and so on, until a final edge pixel set with closed edges is obtained.
- Any related edge detection algorithm may be used; in this example, the Canny edge detection algorithm is used.
- First, Gaussian filtering is used to smooth the image and remove noise.
- Then the intensity gradient of the image is found. The idea is to locate the positions where the gray intensity changes most strongly, the strongest change being along the gradient direction. Two kinds of data are needed: the magnitude of the gradient and the direction of the gradient.
- Before edge extraction, the first depth image obtained in step 52 may first be mapped to a grayscale image: its depth information is converted into the range 0-255, for example by multiplying the depth values by a conversion coefficient (or scaling factor) that brings them into that range.
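A minimal sketch of this mapping; deriving the scaling factor from the observed maximum depth is an assumption, since the text only requires that the converted values fall in 0-255:

```python
def depth_to_gray(depth_rows):
    """depth_rows: 2-D list of non-negative depth values; returns 0-255 ints."""
    peak = max(max(row) for row in depth_rows) or 1  # avoid division by zero
    scale = 255.0 / peak  # the conversion coefficient (scaling factor)
    return [[int(v * scale) for v in row] for row in depth_rows]
```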
- Applying the Canny edge detection algorithm to the converted first depth image gives the result shown in FIG. 7.
- Step 55 Use the second edge pixel set to compensate the basic edge pixel set to obtain the object edge pixel candidate set (PE), that is, the final edge pixel set.
- PE object edge pixel candidate set
- The basic edge pixel set (P0) is combined with the second edge pixel set (Pc) to obtain a more complete object edge (PE).
- Specifically, all points in P0 are first entered into PE. A point p in Pc is entered into PE if and only if p is directly or indirectly connected to some point p0 in P0, i.e., there exists a path from p to p0 such that every point on the path belongs to P0 or Pc. Equivalently: first enter all points of P0 into PE; then, for each point p in Pc, determine whether p is connected to any point already in PE, and if so, add p to PE; repeat until all points in Pc have been judged.
- Here, connection refers to direct connection, which can be four-connectivity or eight-connectivity. Under four-connectivity, for any point x, the four points immediately above, below, to the left, and to the right of x are considered directly connected to it. Under eight-connectivity, those four points plus the four diagonally adjacent points (upper left, upper right, lower left, and lower right) are all considered directly connected to x.
- Alternatively, a union-based fusion method can be adopted: take the union of the at least two edge pixel sets as the final edge pixel set. That is, as long as any one edge detection algorithm considers a pixel to be on the edge, the pixel is recorded in the final edge pixel set.
- As another alternative, an odd number of edge detection algorithms can be used to obtain an odd number of edge pixel sets of the first depth image. For each pixel, if more than half of the edge pixel sets contain the pixel, i.e., more than half of the edge detection algorithms consider the pixel to be on the edge, the pixel is counted into the final edge pixel set; otherwise it is not.
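The two fusion rules just described — union, and majority voting over an odd number of detectors — can be sketched directly on sets of pixel coordinates (the helper names are illustrative):

```python
def union_fusion(edge_sets):
    """A pixel is an edge if ANY detector says so."""
    final = set()
    for s in edge_sets:
        final |= s
    return final

def majority_fusion(edge_sets):
    """A pixel is an edge if MORE THAN HALF of the detectors say so."""
    assert len(edge_sets) % 2 == 1    # an odd number of detectors
    votes = {}
    for s in edge_sets:
        for p in s:
            votes[p] = votes.get(p, 0) + 1
    half = len(edge_sets) / 2
    return {p for p, v in votes.items() if v > half}

# Three detectors' outputs on a toy image.
sets = [{(0, 0), (0, 1)}, {(0, 1), (1, 1)}, {(0, 1), (0, 0)}]
u = union_fusion(sets)        # any single detector suffices
m = majority_fusion(sets)     # at least 2 of 3 must agree
```

Union maximizes recall at the cost of extra noise; majority voting trades some recall for robustness, which is why the text reserves it for the case of an odd number of algorithms.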
- In this example, a closed edge can be obtained by compensating the basic edge pixel set with the second edge pixel set alone, so no further edge pixel set is needed for compensation. If the final edge pixel set obtained in this way still had missing edges, another edge detection algorithm could be selected to obtain a third edge pixel set, which would then be used to compensate the aforementioned final edge pixel set, and so on until a final edge pixel set with closed edges is obtained.
- Step 56: Perform connected-domain analysis on the final edge pixel set to obtain the corresponding contour of the object.
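Connected-domain (connected-component) analysis groups the final edge pixels into separate contours. A minimal flood-fill sketch over 8-connected pixel sets — an assumed implementation for illustration, not the disclosed one:

```python
def connected_domains(points):
    """Partition a set of (row, col) pixels into 8-connected domains."""
    points = set(points)              # work on a copy
    domains = []
    while points:
        seed = points.pop()
        stack, domain = [seed], {seed}
        while stack:                  # flood-fill from the seed
            r, c = stack.pop()
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    q = (r + dr, c + dc)
                    if q in points:
                        points.remove(q)
                        domain.add(q)
                        stack.append(q)
        domains.append(domain)
    return domains

doms = connected_domains({(0, 0), (0, 1), (3, 3)})
# two domains: {(0, 0), (0, 1)} and {(3, 3)}
```

Each returned domain corresponds to one candidate object contour; isolated domains are exactly what the later post-processing step can discard as noise.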
- Figure 8 shows the result of the connected-domain analysis. It can be seen that compensating the basic edge pixel set (P0) with the second edge pixel set (Pc) yields a more accurate final edge pixel set, and the segmentation result obtained from this final edge pixel set can achieve pixel-level accuracy.
- Step 57: Post-processing is performed to obtain a segmentation mask.
- The segmentation results circled in Figure 9 are artifacts produced by noise.
- To remove such noise, an operator can enter a priori information in advance, including one or more of the following: position information, size information, and shape information of the object. After the segmentation result is obtained, segmentation results that do not satisfy these conditions are automatically removed according to the previously entered prior information.
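The prior-based cleanup can be sketched as filtering the connected domains against the entered priors. Here only a size prior is shown; the `min_size` threshold is a hypothetical parameter, and position or shape priors would be applied the same way:

```python
def filter_by_priors(domains, min_size=5):
    """Keep only domains consistent with the a priori size information.

    min_size is an assumed illustrative threshold; real priors could also
    constrain the domain's position or shape.
    """
    return [d for d in domains if len(d) >= min_size]

# One plausible object contour and one noise speck.
domains = [set((0, c) for c in range(10)), {(7, 7)}]
kept = filter_by_priors(domains)
```

After this filtering, only the domains matching the operator's prior information remain, which corresponds to the single connected region kept as the final segmentation result in the text.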
- After this post-processing, the only remaining connected region, shown in FIG. 10A, is the final segmentation result.
- The final segmentation result obtained with this example method is exactly the object in the foreground of the original image, as shown in FIG. 10B.
- An embodiment of the present disclosure further provides a computer storage medium storing a computer program; when the computer program is executed, the segmentation method provided by one or more of the foregoing embodiments can be implemented, for example, the method shown in FIG. 1.
- An embodiment of the present disclosure further provides a computer device including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, can implement the segmentation methods provided by one or more of the foregoing embodiments.
- FIG. 11 is a hardware layout diagram of a depth image object edge extraction device 1100 according to an embodiment of the present disclosure.
- The hardware arrangement 1100 includes a processor 1106 (e.g., a digital signal processor (DSP), a central processing unit (CPU), etc.).
- the processor 1106 may be a single processing unit or multiple processing units for performing different actions of the processes described herein.
- the arrangement 1100 may also include an input unit 1102 for receiving signals from other entities, and an output unit 1104 for providing signals to other entities.
- the input unit 1102 and the output unit 1104 may be arranged as a single entity or separate entities.
- The arrangement 1100 may include at least one readable storage medium 1108 in the form of a non-volatile or volatile memory, such as an electrically erasable programmable read-only memory (EEPROM), a flash memory, and/or a hard drive.
- The readable storage medium 1108 includes a computer program 1110, which includes code/computer-readable instructions that, when executed by the processor 1106 in the arrangement 1100, enable the hardware arrangement 1100 and/or devices including the hardware arrangement 1100 to perform, for example, the processes described above in connection with FIG. 1 or FIG. 5 and any variations thereof.
- The computer program 1110 may be configured as computer program code having, for example, an architecture of computer program modules 1110A-1110B. Accordingly, the code in the computer program for the arrangement 1100 may include: a module 1110A for calculating at least two edge pixel sets of the first depth image using at least two edge detection algorithms; and a module 1110B for obtaining the final edge pixel set based on the at least two edge pixel sets.
- The computer program modules can substantially perform the various actions in the flows shown in FIG. 1 or FIG. 5 to simulate the device 800.
- When different computer program modules are executed in the processor 1106, they may correspond to different units or modules in the aforementioned depth image object edge extraction device described in conjunction with FIGS. 2 to 4.
- Although the code means in the embodiment disclosed above in connection with FIG. 11 is implemented as computer program modules which, when executed in the processor 1106, cause the hardware arrangement 1100 to perform the actions described above in connection with FIG. 1 or FIG. 5, in alternative embodiments at least one of the code means may be implemented at least partially as a hardware circuit.
- the processor may be a single CPU (Central Processing Unit), but may also include two or more processing units.
- the processor may include a general-purpose microprocessor, an instruction set processor and / or an associated chipset and / or a special-purpose microprocessor (eg, an application-specific integrated circuit (ASIC)).
- the processor may also include on-board memory for caching purposes.
- the computer program may be carried by a computer program product connected to a processor.
- the computer program product may include a computer-readable medium having a computer program stored thereon.
- The computer program product may be a flash memory, a random access memory (RAM), a read-only memory (ROM), or an EEPROM, and in an alternative embodiment the above-mentioned computer program modules may be distributed to different computer program products in the form of memory within the device.
- Embodiments of the present disclosure provide a method, an apparatus, a storage medium, and a computer device for extracting an edge of an object in a depth image. At least two edge detection algorithms are used to calculate at least two edge pixel sets of a first depth image, and the final edge pixel set is obtained based on the at least two edge pixel sets. Compared with the related art, the solution of the embodiments of the present disclosure is simple to implement and can obtain a more accurate set of edge pixels, thereby producing a pixel-level segmentation result.
- The solution of the embodiments of the present disclosure can further fuse multiple depth images to remove noise, thereby obtaining a more stable and accurate segmentation result.
- The solutions of the embodiments of the present disclosure can be used for sample collection, automatic labeling, machine learning and deep learning, GroundTruth acquisition, and training data generation.
- Computer storage media include volatile and nonvolatile media implemented in any method or technology for the storage of information such as computer-readable instructions, data structures, program modules, or other data.
- Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technologies, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer.
- In contrast, a communication medium typically embodies computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery medium.
Claims (22)
- 1. A method for extracting an edge of an object in a depth image, comprising: calculating at least two edge pixel sets of a first depth image using at least two edge detection algorithms; and obtaining a final edge pixel set based on the at least two edge pixel sets.
- 2. The method according to claim 1, wherein before the at least two edge pixel sets of the first depth image are calculated using the at least two edge detection algorithms, the method further comprises: determining at least one frame of related depth image that is related to a depth image to be segmented; and fusing the determined related depth image with the depth image to be segmented one or more times to obtain the first depth image.
- 3. The method according to claim 1, wherein the at least two edge pixel sets comprise a basic edge pixel set and at least one other edge pixel set.
- 4. The method according to claim 3, wherein obtaining the final edge pixel set based on the at least two edge pixel sets comprises: compensating the basic edge pixel set with the at least one other edge pixel set to obtain the final edge pixel set, wherein the basic edge pixel set is calculated using a first edge detection algorithm, and the at least one other edge pixel set is calculated using at least one other edge detection algorithm.
- 5. The method according to claim 1, wherein obtaining the final edge pixel set based on the at least two edge pixel sets comprises: taking a union of the at least two edge pixel sets as the final edge pixel set.
- 6. The method according to claim 1, wherein obtaining the final edge pixel set based on the at least two edge pixel sets comprises performing the following processing for each pixel: when the number of edge pixel sets containing the pixel exceeds a first preset threshold, counting the pixel into the final edge pixel set.
- 7. The method according to claim 4, wherein compensating the basic edge pixel set with the at least one other edge pixel set comprises: adding the basic edge pixel set to the final edge pixel set; and performing the following processing on the at least one other edge pixel set: selecting, from the other edge pixel set, points that are directly connected to points in the final edge pixel set, and adding them to the final edge pixel set.
- 8. The method according to claim 1, wherein calculating the at least two edge pixel sets of the first depth image using the at least two edge detection algorithms comprises: determining at least two edge detection algorithms; determining, according to a weight value preset for each edge detection algorithm, the number of frames to be processed by each edge detection algorithm; and calculating, with each determined edge detection algorithm and according to the corresponding determined number of frames, the related depth images related to the first depth image to obtain edge pixel sets, wherein obtaining the final edge pixel set based on the at least two edge pixel sets comprises performing the following processing for each pixel in the first image: when the number of edge pixel sets containing the pixel exceeds a second preset threshold, counting the pixel into the final edge pixel set.
- 9. The method according to claim 4, wherein the first edge detection algorithm is: extracting pixels whose depth value is zero.
- 10. The method according to any one of claims 1-9, further comprising: performing connected-domain analysis on the final edge pixel set to obtain a segmentation result.
- 11. The method according to claim 10, further comprising: performing denoising processing on the segmentation result according to a priori information.
- 12. An apparatus for extracting an edge of an object in a depth image, comprising: a processor; and a memory storing instructions that, when executed by the processor, cause the processor to: calculate at least two edge pixel sets of a first depth image using at least two edge detection algorithms; and obtain a final edge pixel set based on the at least two edge pixel sets.
- 13. The apparatus according to claim 12, wherein the instructions, when executed by the processor, further cause the processor to: determine at least one frame of related depth image that is related to a depth image to be segmented; and fuse the determined related depth image with the depth image to be segmented one or more times to obtain the first depth image.
- 14. The apparatus according to claim 12, wherein the at least two edge pixel sets comprise a basic edge pixel set and at least one other edge pixel set.
- 15. The apparatus according to claim 14, wherein the instructions, when executed by the processor, further cause the processor to: compensate the basic edge pixel set with the at least one other edge pixel set to obtain the final edge pixel set, wherein the basic edge pixel set is calculated using a first edge detection algorithm, and the at least one other edge pixel set is calculated using at least one other edge detection algorithm.
- 16. The apparatus according to claim 12, wherein the instructions, when executed by the processor, further cause the processor to: take a union of the at least two edge pixel sets as the final edge pixel set.
- 17. The apparatus according to claim 12, wherein the instructions, when executed by the processor, further cause the processor to perform the following processing for each pixel: when the number of edge pixel sets containing the pixel exceeds a first preset threshold, count the pixel into the final edge pixel set.
- 18. The apparatus according to claim 12, wherein the instructions, when executed by the processor, further cause the processor to: add the basic edge pixel set to the final edge pixel set; and perform the following processing on the at least one other edge pixel set: select, from the other edge pixel set, points that are directly connected to points in the final edge pixel set, and add them to the final edge pixel set.
- 19. The apparatus according to claim 12, wherein the instructions, when executed by the processor, further cause the processor to: determine at least two edge detection algorithms; determine, according to a weight value preset for each edge detection algorithm, the number of frames to be processed by each edge detection algorithm; calculate, with each determined edge detection algorithm and according to the corresponding determined number of frames, the related depth images related to the first depth image to obtain edge pixel sets; and perform the following processing for each pixel in the first image: when the number of edge pixel sets containing the pixel exceeds a second preset threshold, count the pixel into the final edge pixel set.
- 20. The apparatus according to any one of claims 12-19, wherein the instructions, when executed by the processor, further cause the processor to: perform connected-domain analysis on the final edge pixel set to obtain a segmentation result.
- 21. The apparatus according to claim 20, wherein the instructions, when executed by the processor, further cause the processor to: perform denoising processing on the segmentation result according to a priori information.
- 22. A computer-readable storage medium having stored thereon a computer program that, when executed by a processor, implements the steps of the method according to any one of claims 1-11.
Priority Applications (1)
- US 16/615,644 (US11379988B2), priority date 2018-06-29, filed 2019-04-25: Method and apparatus for extracting edge of object in depth image and computer readable storage medium
Applications Claiming Priority (2)
- CN201810701449.9, priority date 2018-06-29
- CN201810701449.9A (CN108830873B), priority date 2018-06-29, filed 2018-06-29: Depth image object edge extraction method, apparatus, medium and computer device
Publications (1)
- WO2020001149A1, published 2020-01-02
Family
ID=64134528
Family Applications (1)
- PCT/CN2019/084392 (WO2020001149A1), filed 2019-04-25: Method and apparatus for extracting the edge of an object in a depth image, and computer-readable storage medium
Country Status (3)
- US: US11379988B2
- CN: CN108830873B
- WO: WO2020001149A1
Cited By (2)
- CN111445513A (priority 2020-02-24, published 2020-07-24, 浙江科技学院): Depth-image-based plant canopy volume acquisition method, apparatus, computer device and storage medium
- CN112560779A (priority 2020-12-25, published 2021-03-26, 中科云谷科技有限公司): Feed-inlet overflow identification method and device, and batching plant feed control system
Families Citing this family (9)
- CN108830873B (priority 2018-06-29, published 2022-02-01, 京东方科技集团股份有限公司): Depth image object edge extraction method, apparatus, medium and computer device
- CN109452941B (2018-11-23, 2021-04-23, 中国科学院自动化研究所): Limb circumference measurement method and system based on image orthodontic correction and boundary extraction
- US20220108435A1 (2020-10-02, 2022-04-07, Baker Hughes Oilfield Operations Llc): Automated turbine blade to shroud gap measurement
- CN112697042B (2020-12-07, 2023-12-05, 深圳市繁维科技有限公司): Handheld TOF camera and strongly adaptive parcel volume measurement method
- CN112634302B (2020-12-28, 2023-11-28, 航天科技控股集团股份有限公司): Deep-learning-based edge detection method for rectangle-like objects on mobile terminals
- CN114187267B (2021-12-13, 2023-07-21, 沭阳县苏鑫冲压件有限公司): Machine-vision-based stamping part defect detection method
- CN115423829B (2022-07-29, 2024-03-01, 江苏省水利科学研究院): Rapid water body extraction method and system for single-band remote sensing images
- CN115682941B (2022-12-27, 2023-03-07, 广东技术师范大学): Structured-light-camera-based packing box geometric dimension measurement method
- CN115797750B (2023-02-02, 2023-04-25, 天津滨海迅腾科技集团有限公司): Deep-learning-based rapid transmission method for large-size images
Citations (7)
- CN103927717A (priority 2014-03-28, published 2014-07-16, 上海交通大学): Depth image restoration method based on improved bilateral filtering
- US20140219547A1 (2013-02-01, 2014-08-07, Mitsubishi Electric Research Laboratories, Inc.): Method for Increasing Resolutions of Depth Images
- CN104272323A (2013-02-14, 2015-01-07, LSI公司): Image enhancement and edge verification method and apparatus using at least one additional image
- CN104954770A (2014-03-31, 2015-09-30, 联咏科技股份有限公司): Image processing apparatus and method
- CN105825494A (2015-08-31, 2016-08-03, 维沃移动通信有限公司): Image processing method and mobile terminal
- CN107578053A (2017-09-25, 2018-01-12, 重庆虚拟实境科技有限公司): Contour extraction method and apparatus, computer apparatus, and readable storage medium
- CN108830873A (2018-06-29, 2018-11-16, 京东方科技集团股份有限公司): Depth image object edge extraction method, apparatus, medium and computer device
Family Cites Families (6)
- CN100565554C (priority 2007-12-19, published 2009-12-02, 重庆大学): Human ear image edge extraction method combining multiple methods
- US8396269B2 (2010-04-08, 2013-03-12, Digital Pathco LLC): Image quality assessment including comparison of overlapped margins
- CN101859385A (2010-06-29, 2010-10-13, 上海大学): Image-based blind detection method for local blur tampering
- CN102521802A (2011-11-28, 2012-06-27, 广东省科学院自动化工程研制中心): Edge detection algorithm combining mathematical morphology and the LoG operator
- CN103440662B (2013-09-04, 2016-03-09, 清华大学深圳研究生院): Kinect depth image acquisition method and apparatus
- JP6283419B2 (2013-12-12, 2018-02-21, ゼネラル・エレクトリック・カンパニイ): Defect index detection method

Application timeline
- 2018-06-29: CN application CN201810701449.9 filed (granted as CN108830873B, active)
- 2019-04-25: PCT application PCT/CN2019/084392 filed (WO2020001149A1)
- 2019-04-25: US application US16/615,644 filed (granted as US11379988B2, active)
Also Published As
- CN108830873A, published 2018-11-16
- CN108830873B, published 2022-02-01
- US20210358132A1, published 2021-11-18
- US11379988B2, published 2022-07-05
Legal Events
- 121 — EP: the EPO has been informed by WIPO that EP was designated in this application (ref document number 19827017, country EP, kind code A1)
- NENP — Non-entry into the national phase (ref country code: DE)
- 122 — EP: PCT application non-entry into the European phase (ref document number 19827017, country EP, kind code A1)
- 32PN — EP: public notification in the EP bulletin as the address of the addressee cannot be established (free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 12.05.2021))