CN111986203A - Depth image segmentation method and device - Google Patents
- Publication number: CN111986203A
- Application number: CN202010655269.9A
- Authority
- CN
- China
- Prior art keywords
- depth image
- island
- target
- depth
- horizontal plane
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/60—Analysis of geometric attributes
- G06T7/62—Analysis of geometric attributes of area, perimeter, diameter or volume
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/40—Filling a planar surface by adding surface attributes, e.g. colour or texture
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Geometry (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to a depth image segmentation method and device, and belongs to the technical field of image processing. In the method, the background in a depth image is regarded as the sea bottom and each target is regarded as an isolated island; the lower limit of the depth values of the depth image is taken as the initial water level, and the water level is then raised step by step, gradually submerging the background and low-lying targets. With the water level as the boundary, a series of puddle maps is generated, and island regions are identified in each puddle map. Based on the principle that, as the water level rises, the area of an island region formed by a target hardly changes within a certain water-level interval, whereas an island region formed by a raised part of the background gradually shrinks and is quickly submerged, the targets are identified from the island maps, and the targets in the depth image are thereby segmented. The invention fully accounts for large depth differences across the surface of the target background and can accurately segment the targets in the depth image.
Description
Technical Field
The invention relates to a depth image segmentation method and device, and belongs to the technical field of image processing.
Background
With the development of artificial intelligence and machine vision, the demand of various industries for image data grows year by year, and the requirements on image segmentation technology keep rising, which in turn drives the development of automation and intelligent technologies. Faced with complex image data, accurately finding and segmenting the required target has become an important bottleneck restricting the development of image processing technology; accurate image segmentation can be regarded as the most basic and most important problem in the image processing field and needs to be solved urgently. In recent years, with the development of binocular vision technology, depth images have been used more and more widely in fields such as target detection and three-dimensional reconstruction. A depth image, also known as a range image, takes as pixel values the distances (depths) from the image collector to the points in the scene, and thus directly reflects the geometry of the visible surfaces of objects. In the field of target detection, traditional depth image segmentation mainly relies on thresholding, which can be roughly divided into fixed-threshold segmentation and dynamic-threshold segmentation. For example, the paper 'In-situ measurement technology for brown mushrooms based on an SR300 depth camera' (Wang Lin, Xu Wei, Du Kaiwei, et al.) proposes a brown mushroom depth image segmentation method: to deal with the interfering background, it uses the mode of the substrate-surface depth values in the depth image, combined with the stipe height of the mushroom, to adaptively select a dynamic threshold, segment the background and extract a binary image of the mushroom cap contour, thereby segmenting the brown mushroom depth image. Although this method can segment brown mushroom depth images, the depth of the substrate surface on which brown mushrooms grow actually varies considerably, and using only the mode of the substrate-surface depth values as the dynamic segmentation threshold makes the segmentation inaccurate. For depth image segmentation against complex backgrounds, neither fixed-threshold nor dynamic-threshold methods achieve a good segmentation result. A depth image segmentation method with a wider application range and a better segmentation effect is therefore urgently needed.
The invention fully exploits the rich three-dimensional structural information provided by depth images and, based on the depth structure characteristics of the target, develops a depth image target segmentation method and device.
Disclosure of Invention
The invention aims to provide a depth image segmentation method and a depth image segmentation device to solve the problem that current depth image segmentation is inaccurate.
To solve the above technical problem, the present invention provides a depth image segmentation method comprising the following steps:
1) acquiring a depth image containing a target;
2) performing depth value conversion on the acquired depth image, converting the depth values of the depth image into distances between the target object and a reference surface;
3) taking the lower limit of the depth values of the converted depth image as the initial water level, and raising the water level step by step by a set step length until it reaches a set height; each time the water level is raised, binarizing the depth image with the water level as the boundary to generate a corresponding puddle map, and extracting the island regions delimited by the water level from each puddle map;
4) analyzing the island regions extracted from each puddle map, and selecting as targets those island regions whose area change rate during the rise of the water level is smaller than a set threshold.
The invention also provides a depth image segmentation device, which comprises a processor and a memory, wherein the processor executes a computer program stored by the memory to realize the depth image segmentation method.
In the method, the background in a depth image is regarded as the sea bottom and each target is regarded as an isolated island; the lower limit of the depth values of the depth image is taken as the initial water level, and the water level is then raised step by step, gradually submerging the background and low-lying targets. With the water level as the boundary, a series of puddle maps is generated, and island regions are identified in each puddle map. Based on the principle that, as the water level rises, the area of an island region formed by a target hardly changes within a certain water-level interval, whereas an island region formed by a raised part of the background gradually shrinks and is quickly submerged, the targets are identified from the island maps, and the targets in the depth image are thereby segmented. The method fully accounts for large depth differences across the surface of the substrate on which the target sits, and can accurately segment the depth image.
Further, in order to obtain the island regions more accurately, the island regions in step 3) are formed as follows: the generated puddle map is filled to form a corresponding fill map; the puddle map is then subtracted from the corresponding fill map to obtain an island map, and the island regions are obtained from the island map.
Further, in order to screen out the targets more accurately, the target in step 4) is selected according to a criterion defined in terms of the following quantities:
IS_j is an island region at water level j, A_{IS_j} is the area of island region IS_j, k is the water-level backtracking depth, R is the area-change-rate index, T is the island-area threshold, and IS_{j-k} is the precursor of IS_j when the water level is traced back to j - k.
Further, in order to avoid repeated detection, the method further comprises, once a target has been determined, deleting that target from the next island map.
Further, in order to improve the accuracy of segmentation, the step 1) further includes a step of removing abnormal points from the acquired depth image.
Drawings
FIG. 1 is a flow chart of a method of segmentation of a depth image of the present invention;
FIG. 2 is a flow chart of the flooding method in the depth image segmentation method of the present invention;
FIG. 3 is a schematic illustration of the flooding method of the present invention;
FIG. 4 is a schematic diagram illustrating the evolution of a puddle map in accordance with an embodiment of the method of the present invention;
FIG. 5 is a schematic diagram of the evolution process of a puddle map, a fill map and an island map in a second embodiment of the method of the invention;
FIG. 6-a is the Agaricus bisporus depth image acquired in the first embodiment of the method of the present invention;
FIG. 6-b is a schematic diagram of the segmentation result of the Agaricus bisporus depth image in the first embodiment of the method of the present invention;
FIG. 7 is a schematic structural diagram of a depth image segmentation apparatus according to the present invention;
wherein 1 is the background substrate, 2 is the shadow, 3 is the target, 4 is a protrusion, and 5 is the water level.
Detailed Description
The following further describes embodiments of the present invention with reference to the drawings.
Method embodiment one
In the method, the background in the depth image is regarded as the sea bottom and each target is regarded as an isolated island. Water is continuously poured into the dry sea area; as the water level rises, the background and low-lying targets are gradually submerged, generating a series of puddle maps, and island regions are identified in each puddle map. Based on the principle that the island regions formed by the targets hardly change in area within a certain water-level interval, whereas the island regions formed by raised parts of the background on which the targets sit gradually shrink and are quickly submerged as the water level rises, the targets are identified from the island regions, and the targets in the depth image are thereby segmented. The method is suitable for accurate segmentation when the background in the depth image is approximately planar and the targets stand at a certain height above it, for example the detection of Agaricus bisporus, brown mushrooms and similar objects growing on a mushroom bed. The image segmentation method is described in detail below using an Agaricus bisporus depth image as an example; the implementation flow is shown in FIG. 1, and the specific implementation process is as follows:
1. and obtaining a depth image of the agaricus bisporus.
In this embodiment, a RealSense SR300 RGBD camera is used to capture the Agaricus bisporus images. Because the method is aimed at an Agaricus bisporus growing room, the depth camera is mounted on the rack-type Agaricus bisporus picking robot and can move linearly with the manipulator in the plane of the mushroom bed. The clearance above the mushroom rack is 300-400 mm, and the effective shooting distance of the RealSense SR300 camera is 100-1500 mm, which meets the requirements of image acquisition. To guarantee the positioning accuracy of the travelling gantry, a servo motor drives a 0.02 mm sliding-table module; the gantry moves to the specified position and the camera captures an image of the specified area.
2. Preprocess the acquired depth image.
There are two kinds of abnormal points in the depth image: points whose depth value is 0 because the structured light is occluded, and pixels that deviate obviously from normal values because of imaging-system errors or the influence of lighting. The main purpose of preprocessing is to eliminate these two types of points. First, the lower quartile (Q1) and the upper quartile (Q3) of the depth values of the non-zero pixels are computed; then the difference ΔQ = Q3 - Q1 is calculated; finally, only the pixels whose depth values lie within the interval [Q1 - ΔQ, Q3 + ΔQ] are kept, which removes the abnormal points.
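A minimal sketch of this outlier-removal step, assuming the depth image is held as a NumPy array in millimeters; the quartile computation and the retention interval [Q1 - ΔQ, Q3 + ΔQ] follow the description above, while the function name and the choice to zero out discarded pixels are illustrative:

```python
import numpy as np

def remove_outliers(depth: np.ndarray) -> np.ndarray:
    """Keep only depth values inside [Q1 - dQ, Q3 + dQ]; discarded pixels are set to 0."""
    valid = depth[depth > 0]                 # ignore points occluded by the structured light
    q1, q3 = np.percentile(valid, [25, 75])  # lower and upper quartiles of the non-zero depths
    dq = q3 - q1
    keep = (depth > 0) & (depth >= q1 - dq) & (depth <= q3 + dq)
    return np.where(keep, depth, 0)          # abnormal points are removed by zeroing them
```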
3. Convert the depth values of the preprocessed depth image.
The origin of the depth values of the depth image is at the center of the camera sensor, so the depth value of each point on the image represents the distance between that point and the sensor. To make the image convenient to process with the 'flooding method', a uniform offset h is applied to the depth values of the remaining pixels, converting them into distances between the object surface and a reference surface. Here the supporting surface of the current layer of the mushroom rack is taken as the reference surface; combining the 300-400 mm clearance above the rack with the 200 mm thickness of the mushroom-bed substrate, h is set to 550 mm, which gives the new depth value of each pixel.
4. Flood the depth image to obtain island regions.
When the Agaricus bisporus depth image is captured, as shown in FIG. 3, no information can be obtained for the shadow 2 under the mushroom cap, so in the top view the mushroom in the depth image is compressed into an approximately cylindrical shape. The invention therefore regards the background substrate 1 (the mushroom-bed substrate in this embodiment) in the Agaricus bisporus depth image as the sea bottom and the target 3 (an Agaricus bisporus in this embodiment) as an isolated island, as shown in FIG. 3, and proposes a 'flooding method': as water is continuously poured into the dry sea area, the hills on the sea bottom (protrusion 4 in FIG. 3) are submerged, and only the isolated islands remain stable while the water level keeps rising, thereby achieving segmentation of the targets. The specific process is as follows:
The water level 5 is raised step by step from the lower limit of the depth values of the image (i.e. the sea bottom), each time by a set step length (1 mm in this embodiment). With the water level as the reference, pixels below the water level are set to 1 and pixels above it are set to 0, generating a series of puddle maps; as shown in FIG. 4, each rise of the water level produces a corresponding puddle map. The regions whose pixel value is 0 in each puddle map are identified to determine the island regions in that puddle map. The island regions can be identified from the puddle maps with, for example, the regionprops command in Matlab.
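A minimal sketch of this flooding loop, assuming a converted depth image held as a NumPy array in millimeters and using scikit-image's label/regionprops in place of Matlab's regionprops; the 1 mm step and the binarization rule (below the water level set to 1, above set to 0) follow the description above, while the function name, the stop level and the labeling details are illustrative:

```python
import numpy as np
from skimage.measure import label, regionprops

def flood(depth: np.ndarray, step: float = 1.0, max_level=None):
    """Raise the water level step by step and collect the island regions at each level."""
    level = float(depth[depth > 0].min())        # lower limit of the depth values (sea bottom)
    if max_level is None:
        max_level = float(depth.max())           # set height at which the flooding stops
    islands_per_level = {}
    while level <= max_level:
        puddle = (depth <= level).astype(np.uint8)  # below the water level -> 1, above -> 0
        land = 1 - puddle                           # island candidates are the 0-valued pixels
        islands_per_level[level] = regionprops(label(land, connectivity=2))
        level += step                               # raise the water level by the set step
    return islands_per_level
```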
5. Analyze the island regions to determine the target images.
To distinguish the Agaricus bisporus targets from the other islands, island analysis is performed on every island map. Because the stipe of Agaricus bisporus has a certain height and the cap has a certain thickness, the area of an island region formed by an Agaricus bisporus target hardly changes within a certain water-level interval as the water level rises, whereas an island formed by a raised part of the substrate gradually shrinks and is quickly submerged; this distinguishes the mushroom targets from the protrusions in the substrate. When an island satisfies the following condition during the rise of the water level, it is judged to be a mushroom target:
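The condition itself is not reproduced in this text (it appears as a formula image in the original patent); a plausible reconstruction from the symbol definitions below and from the parameter values chosen later, which may differ from the exact expression in the original, is:

|A_{IS_j} / A_{IS_{j-k}} - 1| ≤ R and A_{IS_j} > T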
In the formula, IS_j is an island region at water level j, A_{IS_j} is the area of island region IS_j, k is the water-level backtracking depth, R is the area-change-rate index, T is the island-area threshold, and IS_{j-k} is the precursor of IS_j when the water level is traced back to j - k.
The accuracy of the island analysis is determined by the water-level backtracking depth k, the area-change-rate index R and the island-area threshold T. Statistical analysis shows that the stipe length of mature Agaricus bisporus is 8-13 mm; considering the influence of the unevenness of the mushroom-bed substrate on the island analysis, the backtracking depth k is set to 5 mm. Within this interval the island area change rate lies between 0.8 and 1.1, so R is set to 0.2. Since the cap diameter of mature Agaricus bisporus is larger than 15 mm and therefore covers more than 1600 pixels in the depth image, T is set to 1500 to filter out interfering targets.
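A minimal sketch of this island-analysis test with the parameter values chosen above (k = 5 mm, R = 0.2, T = 1500 pixels), assuming the reconstructed condition given earlier and that the area of each island and of its precursor k millimeters earlier are already available; the function and argument names are illustrative:

```python
def is_mushroom_island(area_now: float, area_precursor: float,
                       R: float = 0.2, T: int = 1500) -> bool:
    """Judge whether an island region is an Agaricus bisporus target.

    area_now:       area (in pixels) of the island at the current water level j
    area_precursor: area of its precursor when the water level is traced back to j - k
    """
    if area_now <= T:                      # small islands are treated as interference
        return False
    change = area_now / area_precursor     # area change rate over the backtracking window
    return abs(change - 1.0) <= R          # area hardly changes -> mushroom target
```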
6. Extract the contours of the targets to complete the segmentation.
The coordinates of the Agaricus bisporus contour points found in the island analysis are stored in an array. To avoid finding the same Agaricus bisporus target repeatedly, when a target is found for the first time it is subtracted from the island map generated at the next water level; this process is repeated in turn, finding and storing the contour point coordinates of the mushrooms in each island map. Finally, all the contour coordinate points are used to draw the segmentation result of the Agaricus bisporus and the mushroom-bed substrate.
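A minimal sketch of how a target found at one water level could be removed from the island map of the next level so that it is not detected twice, assuming the island maps and the target are binary NumPy masks; the function name and the masking-by-zeroing detail are illustrative:

```python
import numpy as np

def delete_found_target(island_map: np.ndarray, target_mask: np.ndarray) -> np.ndarray:
    """Delete an already-found target region from the next island map."""
    return np.where(target_mask > 0, 0, island_map)  # zero out the pixels of the known target
```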
The original Agaricus bisporus depth image used in this embodiment is shown in FIG. 6-a, and the segmentation result obtained with the above method is shown in FIG. 6-b.
Method embodiment two
This embodiment is basically the same as method embodiment one; the difference lies in how the island map is obtained from the puddle map in the flooding method. The flow is shown in FIG. 2.
Although the island regions can be identified from a puddle map with the regionprops command in Matlab, the pixel values of the islands and of the not-yet-submerged area are both 0 (i.e. displayed as black), so the large black area in FIG. 4 is also identified as an island region.
Therefore, in this embodiment, after the puddle map is obtained it is filled to produce a fill map. Filling means that every enclosed region is filled, i.e. the pixel values of the enclosed region are changed from 0 to 1 (from black to white). A target circled by water in the puddle map is an isolated enclosed region and is therefore filled directly during the filling operation, as shown in FIG. 5. The puddle map is then subtracted from the fill map to obtain an island map; the white targets in the island map are exactly the enclosed regions that disappeared by being filled (these regions are provisionally regarded as Agaricus bisporus targets and are further screened afterwards).
Obtaining the island regions by means of the fill map has the following advantage: in the island map the pixels of each island target have value 1 (i.e. displayed as white), so the position and area of each island target can be extracted more easily.
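A minimal sketch of this fill-and-subtract step, assuming SciPy's binary_fill_holes is used for the filling operation and that the puddle map is a binary NumPy array with water pixels set to 1 and land pixels set to 0 as in the description; the function name is illustrative:

```python
import numpy as np
from scipy.ndimage import binary_fill_holes

def island_map_from_puddle(puddle: np.ndarray) -> np.ndarray:
    """Build an island map in which enclosed islands are 1 and everything else is 0."""
    fill_map = binary_fill_holes(puddle).astype(np.uint8)  # enclosed 0-regions become 1
    return fill_map - puddle                               # subtract the puddle map from the fill map
```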
Device embodiment
The apparatus proposed in this embodiment, as shown in FIG. 7, includes a processor and a memory; the memory stores a computer program operable on the processor, and the processor implements the method of the above method embodiments when executing the computer program. In other words, the flow of the depth image segmentation method described in the above method embodiments can be implemented by computer program instructions. These computer program instructions may be provided to a processor so that execution of the instructions by the processor implements the functions specified in the method flow described above.
The processor referred to in this embodiment is a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA). The memory referred to in this embodiment includes a physical device for storing information; generally, the information is digitized and then stored in a medium that works electrically, magnetically, optically, and so on. For example: memories that store information electrically, such as RAM and ROM; memories that store information magnetically, such as hard disks, floppy disks, magnetic tapes, magnetic-core memories, bubble memories and USB flash drives; and memories that store information optically, such as CDs and DVDs. Of course, there are other kinds of memory as well, such as quantum memories and graphene memories.
The apparatus comprising the memory, the processor and the computer program is realized by the processor executing the corresponding program instructions, and the processor can run various operating systems, such as Windows, Linux, Android and iOS. As a further embodiment, the apparatus may also comprise a display for showing the segmentation result for the operators' reference.
Claims (6)
1. A method for segmenting a depth image, the method comprising the steps of:
1) acquiring a depth image containing a target;
2) performing depth value conversion on the acquired depth image, converting the depth values of the depth image into distances between the target object and a reference surface;
3) taking the lower limit of the depth values of the converted depth image as the initial water level, and raising the water level step by step by a set step length until it reaches a set height; each time the water level is raised, binarizing the depth image with the water level as the boundary to generate a corresponding puddle map, and extracting the island regions delimited by the water level from each puddle map;
4) analyzing the island regions extracted from each puddle map, and selecting as targets those island regions whose area change rate during the rise of the water level is smaller than a set threshold.
2. The method for segmenting a depth image according to claim 1, wherein the island regions in step 3) are formed as follows: filling the generated puddle map to form a corresponding fill map; subtracting the puddle map from the corresponding fill map to obtain an island map, and obtaining the island regions from the island map.
3. The method for segmenting a depth image according to claim 1 or 2, wherein the target in step 4) is selected according to a criterion defined in terms of the following quantities:
IS_j is an island region at water level j, A_{IS_j} is the area of island region IS_j, k is the water-level backtracking depth, R is the area-change-rate index, T is the island-area threshold, and IS_{j-k} is the precursor of IS_j when the water level is traced back to j - k.
4. The method for segmenting a depth image according to claim 2, further comprising, once a target has been determined, deleting that target from the next island map.
5. The method for segmenting the depth image according to claim 1, wherein the step 1) further comprises a step of removing abnormal points from the acquired depth image.
6. A depth image segmentation apparatus comprising a processor and a memory, the processor executing a computer program stored by the memory to implement the depth image segmentation method as claimed in any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010655269.9A CN111986203B (en) | 2020-07-09 | 2020-07-09 | Depth image segmentation method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111986203A (en) | 2020-11-24 |
CN111986203B CN111986203B (en) | 2022-10-11 |
Family
ID=73438661
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010655269.9A Active CN111986203B (en) | Depth image segmentation method and device | 2020-07-09 | 2020-07-09 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111986203B (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170039704A1 (en) * | 2015-06-17 | 2017-02-09 | Stoecker & Associates, LLC | Detection of Borders of Benign and Malignant Lesions Including Melanoma and Basal Cell Carcinoma Using a Geodesic Active Contour (GAC) Technique |
CN105069808A (en) * | 2015-08-31 | 2015-11-18 | 四川虹微技术有限公司 | Video image depth estimation method based on image segmentation |
WO2017071160A1 (en) * | 2015-10-28 | 2017-05-04 | 深圳大学 | Sea-land segmentation method and system for large-size remote-sensing image |
CN110414411A (en) * | 2019-07-24 | 2019-11-05 | 中国人民解放军战略支援部队航天工程大学 | The sea ship candidate region detection method of view-based access control model conspicuousness |
Non-Patent Citations (2)
Title |
---|
STEFANIA MATTEOLI等: ""POSEIDON: An Analytical End-to-End Performance Prediction Model for Submerged Object Detection and Recognition by Lidar Fluorosensors in the Marine Environment"", 《IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING》 * |
闫士举等: ""基于Hough变换和连通体分析的混合圆形体检测算法"", 《自动化学报》 * |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113284137A (en) * | 2021-06-24 | 2021-08-20 | 中国平安人寿保险股份有限公司 | Paper wrinkle detection method, device, equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN111986203B (en) | 2022-10-11 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||