CN111986203B - Depth image segmentation method and device - Google Patents
Depth image segmentation method and device
- Publication number
- CN111986203B (application CN202010655269.9A)
- Authority
- CN
- China
- Prior art keywords
- depth image
- island
- depth
- target
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/40—Filling a planar surface by adding surface attributes, e.g. colour or texture
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/62—Analysis of geometric attributes of area, perimeter, diameter or volume
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Geometry (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to a depth image segmentation method and device, and belongs to the technical field of image processing. In the method, the background in a depth image is regarded as the sea bottom and the targets are regarded as towering islands. The lower limit of the depth values of the depth image is taken as the initial horizontal plane, and the horizontal plane is then raised step by step; as it rises, the background and low-lying targets are gradually submerged, a series of puddle maps is generated with the horizontal plane as the threshold, and island regions are obtained by analyzing each puddle map. Based on the principle that, as the water level rises, the area of an island region formed by a target hardly changes over a certain water-level interval, whereas the area of an island region formed by a raised part of the background shrinks steadily and is quickly submerged, the targets are identified from the island maps, so that the targets in the depth image are segmented. The invention fully accounts for large depth variations of the surface on which the targets sit, and can segment targets from a depth image accurately.
Description
Technical Field
The invention relates to a depth image segmentation method and device, and belongs to the technical field of image processing.
Background
With the development of artificial intelligence and machine vision, the demand for image data grows year by year across many industries, and the requirements placed on image segmentation keep rising, driving the development of automation and intelligent technologies. Faced with complex image data, accurately finding and segmenting the required target has become an important bottleneck for image processing; accurate image segmentation can be said to be the most fundamental and most pressing problem in the field. In recent years, with the development of binocular vision, depth images have been used more and more widely in target detection, three-dimensional reconstruction and related fields. A depth image, also called a range image, stores the distance (depth) from the imaging device to each point in the scene as the pixel value, and thus directly reflects the geometry of the visible surface of an object.
In target detection, traditional depth image segmentation mainly relies on thresholding, which can be roughly divided into fixed-threshold and dynamic-threshold segmentation. For example, "In-situ measurement technology for brown mushrooms based on an SR300 depth camera" (Wang Lin, Xu Wei, Du Kaiwei, et al.) proposes a brown-mushroom depth image segmentation method: to handle background interference, a dynamic threshold is selected adaptively in the depth image from the mode of the substrate-surface depth values combined with the stipe height, the background is segmented away to extract a binary image of the cap contour, and the brown-mushroom depth image is thereby segmented. Although this method can segment brown-mushroom depth images, in practice the depth of the substrate surface on which the mushrooms grow varies considerably, and using only the mode of the substrate-surface depth values as the dynamic segmentation threshold leads to inaccurate segmentation.
For depth images with complex backgrounds, neither a fixed-threshold nor a dynamic-threshold method yields a good segmentation result. A depth image segmentation method with a wider range of application and a better segmentation effect is therefore urgently needed.
The invention makes full use of the rich three-dimensional structural information provided by a depth image and, starting from the depth structure of the target, develops a depth image target segmentation method and device.
Disclosure of Invention
The object of the invention is to provide a depth image segmentation method and device, so as to solve the problem that current depth image segmentation is inaccurate.
The present invention provides a depth image segmentation method for solving the above-mentioned technical problems, the segmentation method comprising the steps of:
1) Acquiring a depth image containing a target;
2) Carrying out depth value conversion on the obtained depth image, and converting the depth value of the depth image into the distance between the target object and the reference surface;
3) Taking the lower limit of the depth values of the converted depth image as the horizontal plane, and gradually raising the horizontal plane by a set step length until it reaches a set height; each time the horizontal plane is raised, binarizing the depth image with the horizontal plane as the threshold to generate a corresponding puddle map, and extracting from each puddle map the island regions separated from the boundary;
4) Analyzing the island regions extracted from each puddle map, and selecting as targets the island regions whose area change rate during the rise of the horizontal plane is smaller than a set threshold.
The invention also provides a depth image segmentation device, which comprises a processor and a memory, wherein the processor executes a computer program stored in the memory to implement the above depth image segmentation method.
In the method, the background in the depth image is regarded as the sea bottom and the targets are regarded as towering islands. The lower limit of the depth values of the depth image is taken as the initial horizontal plane, and the horizontal plane is raised step by step; as it rises, the background and low-lying targets are gradually submerged, a series of puddle maps is generated with the horizontal plane as the threshold, and island regions are obtained by analyzing each puddle map. Based on the principle that, as the water level rises, the area of an island region formed by a target hardly changes over a certain water-level interval, whereas the area of an island region formed by a raised part of the background shrinks steadily and is quickly submerged, the targets are identified from the island maps, so that the targets in the depth image are segmented. The method fully accounts for large depth variations of the surface of the substrate on which the targets sit, and can segment the depth image accurately.
Further, in order to obtain the island regions more accurately, the island regions in step 3) are formed as follows: the generated puddle map is filled to form a corresponding fill map; the puddle map is subtracted from the corresponding fill map to obtain an island map, and the island regions are obtained from the island map.
Further, in order to screen out the targets more accurately, the criterion for selecting a target in step 4) is as follows:
|AIS_j / AIS_(j-k) - 1| ≤ R and AIS_j > T, where IS_j is an island region at water level j, AIS_j is the area of the island region IS_j, k is the water-level backtracking depth, R is the area-change-rate threshold, T is the island area threshold, and IS_(j-k) is the precursor of IS_j when the water level is backtracked to j - k.
Further, in order to avoid repeated judgments, the method further comprises deleting a target from the subsequent island maps once that target has been determined.
Further, in order to improve the accuracy of segmentation, the step 1) further includes a step of removing abnormal points from the acquired depth image.
Drawings
FIG. 1 is a flow chart of a method of segmenting a depth image according to the present invention;
FIG. 2 is a flow chart of the flooding method in the depth image segmentation method of the present invention;
FIG. 3 is a schematic diagram of the flooding method of the present invention;
FIG. 4 is a schematic diagram of the evolution of the puddle maps in the first embodiment of the method of the present invention;
FIG. 5 is a schematic diagram of the evolution of the puddle map, fill map and island map in the second embodiment of the method of the present invention;
FIG. 6-a is the Agaricus bisporus depth image acquired in the first embodiment of the method of the present invention;
FIG. 6-b is a schematic diagram of the segmentation result of the Agaricus bisporus depth image in the first embodiment of the method of the present invention;
FIG. 7 is a schematic structural diagram of a depth image segmentation apparatus according to the present invention;
where 1 is the background substrate, 2 is the shadow, 3 is the target, 4 is the protrusion, and 5 is the horizontal plane.
Detailed Description
The following further describes embodiments of the present invention with reference to the drawings.
In the method, the background in the depth image is regarded as the sea bottom and the targets are regarded as towering islands. Water is continuously poured into this initially dry sea area; as the horizontal plane rises, the background and low-lying targets are gradually submerged, a series of puddle maps is generated, and island regions are obtained by analyzing each puddle map. Based on the principle that, as the horizontal plane rises, the area of an island region formed by a target hardly changes over a certain water-level interval, whereas the area of an island region formed by a raised part of the background where the target sits shrinks steadily and is quickly submerged, the targets are identified from the island regions, so that the targets in the depth image are segmented. The method is suitable for accurately segmenting the background of a depth image when the background is roughly planar and the targets stand at a certain height above it, for example for detecting targets such as Agaricus bisporus or brown mushrooms growing on a mushroom bed. The segmentation method is described in detail below using an Agaricus bisporus depth image as an example; the implementation flow is shown in FIG. 1, and the specific implementation process is as follows:
1. Acquire the Agaricus bisporus depth image.
In this embodiment, which is aimed at an Agaricus bisporus growing room, a RealSense SR300 RGBD camera is used to capture the Agaricus bisporus images. The depth camera is mounted on the shelf-type Agaricus bisporus picking robot and can move with the manipulator in the plane of the mushroom bed. The space above the mushroom shelf is 300-400 mm, while the effective shooting distance of the RealSense SR300 camera is 100-1500 mm, which meets the requirements of image acquisition. To guarantee the positioning accuracy of the truss platform, a servo motor drives a sliding-table module with 0.02 mm precision; the truss platform moves to the designated position and the camera acquires an image of the designated area.
2. Preprocess the acquired depth image.
There are two kinds of abnormal points in the depth image: points whose depth value is 0 because the structured light is occluded, and pixels that deviate obviously from normal values because of imaging-system errors or the influence of lighting. The main purpose of preprocessing is to eliminate these two kinds of data points. First, the lower quartile (Q1) and upper quartile (Q3) of the depth values of the non-zero pixels are computed; then the difference ΔQ = Q3 - Q1 is calculated; finally, only pixels whose depth lies within the interval [Q1 - ΔQ, Q3 + ΔQ] are kept, which removes the abnormal points.
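For illustration, a minimal Python sketch of this quartile rule (not part of the original disclosure) could look as follows; the function name and the use of NumPy are assumptions of this sketch.

```python
# Illustrative sketch, not from the patent: quartile-based removal of abnormal depth points.
import numpy as np

def remove_outliers(depth):
    """Keep only pixels whose depth lies in [Q1 - dQ, Q3 + dQ]; everything else is set to 0."""
    valid = depth[depth > 0]                  # ignore points already 0 due to structured-light occlusion
    q1, q3 = np.percentile(valid, [25, 75])   # lower and upper quartiles of the non-zero depth values
    dq = q3 - q1                              # difference between Q3 and Q1
    keep = (depth >= q1 - dq) & (depth <= q3 + dq)
    return np.where(keep, depth, 0)           # abnormal points removed
```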
3. Convert the depth values of the preprocessed depth image.
The origin of the depth values of the depth image is the centre of the camera sensor, so the depth value of each point on the image represents the distance from that point to the sensor. To make the image convenient to process with the flooding method, the depth values of the remaining pixels are uniformly converted using an offset h, so that the depth value of the depth image becomes the distance between the target object and a reference surface (a pixel at sensor distance d is assigned the new value h - d). Here the supporting surface of the mushroom shelf is used as the reference surface; combining the 300-400 mm space above the shelf with the 200 mm thickness of the mushroom-bed substrate, h is taken as 550 mm to obtain the new depth value of each pixel.
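The conversion itself can be sketched as follows (again an illustrative Python sketch, not from the patent); the sign convention, namely that a pixel at sensor distance d becomes h - d millimeters above the reference surface, is an inference from the flooding metaphor, in which targets must rise above the background.

```python
# Illustrative sketch, not from the patent: convert sensor-centred depths to heights above the reference surface.
import numpy as np

def to_height_above_reference(depth_mm, h=550.0):
    """h is the assumed sensor-to-reference-surface distance in mm (550 mm in the embodiment above)."""
    height = np.zeros_like(depth_mm, dtype=float)
    valid = depth_mm > 0                 # pixels removed during preprocessing stay at 0
    height[valid] = h - depth_mm[valid]  # larger value = higher above the reference surface
    return height
```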
4. Flood the depth image to obtain the island regions.
When the Agaricus bisporus depth image is collected, as shown in FIG. 3, the information below the cap (shadow 2) cannot be obtained, so in the top view the Agaricus bisporus in the depth image is compressed into an approximate cylinder. Therefore, the invention regards the background substrate 1 in the Agaricus bisporus depth image (the mushroom-bed substrate in this embodiment) as the sea bottom and the targets 3 (the Agaricus bisporus in this embodiment) as towering islands, as shown in FIG. 3, and proposes a flooding method: while water is continuously poured into the dry sea area, the hills on the sea bottom (protrusions 4 in FIG. 3) are submerged, and only the towering islands remain stable as the water level keeps rising, thereby achieving segmentation of the targets. The specific process is as follows:
The horizontal plane 5 is raised step by step from the lower limit of the depth values of the image (i.e. the sea bottom), each step raising it by a set length (1 mm in this embodiment). Taking the horizontal plane (water-level value) as the reference, pixels below the horizontal plane are set to 1 and pixels above it are set to 0, so that a series of puddle maps is generated; as shown in FIG. 4, each rise of the horizontal plane produces a corresponding puddle map. The regions of each puddle map whose pixel value is 0 are then identified to determine the island regions of that puddle map. The invention uses the regionprops command in Matlab to identify the island regions from the puddle maps.
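One flooding step can be sketched as follows; Python with scikit-image is used here only as a stand-in for the Matlab regionprops call mentioned above, and the function name, the 1 mm step loop and the use of clear_border to discard regions connected to the image border are assumptions of this sketch.

```python
# Illustrative sketch, not from the patent: binarize at one water level and extract island regions.
import numpy as np
from skimage.measure import label, regionprops
from skimage.segmentation import clear_border

def puddle_and_islands(height, water_level):
    """Return the puddle map at this water level and the island regions not touching the image border."""
    puddle = (height <= water_level).astype(np.uint8)  # submerged pixels -> 1, dry pixels -> 0
    dry = label(1 - puddle)                            # connected regions of dry (0-valued) pixels
    islands = clear_border(dry)                        # drop dry regions connected to the image border
    return puddle, regionprops(islands)

# Example use: raise the water level in 1 mm steps up to an assumed maximum of 100 mm.
# for wl in np.arange(0.0, 100.0, 1.0):
#     puddle, islands = puddle_and_islands(height, wl)
```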
5. Analyze the island regions to determine the target images.
To distinguish the Agaricus bisporus targets from the other island targets, island analysis has to be carried out on every island map. Because the stipe of Agaricus bisporus has a certain height and the cap has a certain thickness, the area of an island region formed by an Agaricus bisporus target hardly changes over a certain water-level interval as the water level rises, whereas the island regions formed by raised parts of the substrate shrink gradually and are quickly submerged; this distinguishes the mushroom targets from the protrusions in the substrate. When an island satisfies the following condition during the rise of the water level, it is judged to be a mushroom target:
|AIS_j / AIS_(j-k) - 1| ≤ R and AIS_j > T
In the formula, IS_j is an island region at water level j, AIS_j is the area of the island region IS_j, k is the water-level backtracking depth, R is the area-change-rate threshold, T is the island area threshold, and IS_(j-k) is the precursor of IS_j when the water level is backtracked to j - k.
The accuracy of the island analysis is determined by the water-level backtracking depth k, the area-change-rate threshold R and the island area threshold T. Statistical analysis shows that the stipe length of mature Agaricus bisporus is 8-13 mm; considering the influence of the unevenness of the mushroom-bed substrate on the island analysis, the backtracking value k is chosen as 5 mm. Over this interval the island area change rate lies between 0.8 and 1.1, so R is taken as 0.2. Considering that the cap diameter of mature Agaricus bisporus is larger than 15 mm, which corresponds to more than 1600 pixels in the depth image, T is taken as 1500 to filter out interfering targets.
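With these parameter values, the island-analysis rule can be sketched as below; the exact form of the inequality is reconstructed from the definitions of k, R and T given above, and how an island is matched to its precursor across water levels is left out of the sketch.

```python
# Illustrative sketch, not from the patent: judge one island against its precursor k mm earlier.
def is_mushroom_target(area_j, area_j_minus_k, R=0.2, T=1500):
    """area_j and area_j_minus_k are island areas (in pixels) at water levels j and j - k (k = 5 mm here)."""
    if area_j_minus_k == 0:
        return False
    change_rate = area_j / area_j_minus_k            # island area change rate over the backtracking depth
    return abs(change_rate - 1.0) <= R and area_j > T
```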
6. Extract the contours of the target images to achieve segmentation.
The coordinates of the Agaricus bisporus contour points found in the island analysis are stored in an array. When an Agaricus bisporus target is found for the first time, the known target is subtracted before the next island map is generated; this is repeated in turn, so that the Agaricus bisporus contour coordinates in every island map are found and stored. Finally, the segmentation result of the Agaricus bisporus and the mushroom-bed substrate is drawn from all the stored contour coordinate points.
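One way this bookkeeping could look is sketched below; it builds on the scikit-image stand-in used earlier and is an assumption about the implementation, not the patent's own code. Once an island is confirmed as a target, its contour is stored and its pixels are masked out so that later island maps do not judge it again.

```python
# Illustrative sketch, not from the patent: store a confirmed target's contour and remove its pixels.
import numpy as np
from skimage.measure import find_contours

def record_and_remove(image_shape, region, contours, removed_mask):
    """region is a confirmed island region (e.g. from regionprops); removed_mask is a boolean image."""
    target = np.zeros(image_shape, dtype=bool)
    target[tuple(region.coords.T)] = True                       # pixels of the confirmed island region
    contours.extend(find_contours(target.astype(float), 0.5))   # outline coordinates of the target
    removed_mask |= target                                       # exclude these pixels from later island maps
```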
The original agaricus bisporus depth image in the embodiment is shown in fig. 6-a, and the segmentation result obtained by adopting the method is shown in fig. 6-b.
Method embodiment two
The segmentation method of this embodiment is basically the same as that of the first method embodiment; the difference lies in how the island map is obtained from the puddle map in the flooding method, and the flow is shown in FIG. 2.
Although the island regions can be identified from the puddle map using the regionprops command in Matlab, the pixel values of both the islands and the not-yet-submerged areas are 0 (i.e. displayed as black), which causes the large black areas in FIG. 4 to be identified as island regions as well.
Therefore, in this embodiment, after the puddle map is obtained, a fill map is used to extract the island regions. Filling means that every closed region is filled (the pixel values of the closed region change from 0 to 1, i.e. from black to white); since the targets circled by water in the puddle map are isolated closed regions, they are filled directly when the filling operation is performed, as shown in FIG. 5. The puddle map is then subtracted from the fill map to obtain the island map, in which the white targets are the closed regions that were filled and disappeared (these closed regions are provisionally regarded as Agaricus bisporus targets and are screened further afterwards).
The effect of using the fill map to obtain the island regions is that, in the island map, the pixel value of an island target is 1 (i.e. displayed as white), so the position and area of the island targets are extracted more reliably.
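A minimal sketch of this second embodiment is given below, assuming SciPy's hole filling as the "filling" operation (the patent itself does not name a library): filling the puddle map turns the closed regions circled by water white, and subtracting the original puddle map leaves only those closed regions as the island map.

```python
# Illustrative sketch, not from the patent: obtain the island map from the puddle map via a fill map.
import numpy as np
from scipy.ndimage import binary_fill_holes

def island_map(puddle):
    """puddle: uint8 map with submerged pixels = 1 and dry pixels = 0."""
    fill = binary_fill_holes(puddle).astype(np.uint8)  # fill map: closed dry regions become 1 (white)
    return fill - puddle                                # island map: only the filled closed regions stay white
```

In this sketch the dry background connected to the image border is not changed by the filling, so it cancels out in the subtraction, which matches the motivation described above.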
Device embodiment
The apparatus proposed in this embodiment, as shown in FIG. 7, comprises a processor and a memory; a computer program operable on the processor is stored in the memory, and the processor implements the method of the above method embodiments when executing the computer program. In other words, the flow of the depth image segmentation method in the above method embodiments is implemented by computer program instructions: these instructions are provided to a processor, so that executing them on the processor implements the functions specified in the method flow described above.
The processor referred to in this embodiment is a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA). The memory referred to in this embodiment includes physical devices for storing information, in which the information is usually digitised and then stored in a medium using electrical, magnetic or optical means. For example: memories that store information electrically, such as RAM and ROM; memories that store information magnetically, such as hard disks, floppy disks, magnetic tapes, magnetic core memories, bubble memories and USB flash drives; and memories that store information optically, such as CDs and DVDs. Of course, other kinds of memory exist as well, such as quantum memory and graphene memory.
The apparatus comprising the memory, the processor and the computer program is realised by the processor executing the corresponding program instructions, and the processor can run various operating systems, such as Windows, Linux, Android and iOS. As a further embodiment, the apparatus may also comprise a display for showing the segmentation result for reference by the operator.
Claims (5)
1. A method for segmenting a depth image, the method comprising the steps of:
1) Acquiring a depth image containing a target;
2) Carrying out depth value conversion on the obtained depth image, and converting the depth value of the depth image into the distance between the target object and the reference surface;
3) Taking the lower limit of the depth values of the converted depth image as the horizontal plane, and gradually raising the horizontal plane by a set step length until it reaches a set height; each time the horizontal plane is raised, binarizing the depth image with the horizontal plane as the threshold to generate a corresponding puddle map, and extracting from each puddle map the island regions separated from the boundary;
4) Analyzing the island regions extracted from each puddle map, and selecting as targets the island regions whose area change rate during the rise of the horizontal plane is smaller than a set threshold, the targets being selected according to the following condition:
|AIS_j / AIS_(j-k) - 1| ≤ R and AIS_j > T, where IS_j is an island region at water level j, AIS_j is the area of the island region IS_j, k is the water-level backtracking depth, R is the area-change-rate threshold, T is the island area threshold, and IS_(j-k) is the precursor of IS_j when the water level is backtracked to j - k.
2. The depth image segmentation method according to claim 1, wherein the island regions in step 3) are formed as follows: filling the generated puddle map to form a corresponding fill map; subtracting the puddle map from the corresponding fill map to obtain an island map, and obtaining the island regions from the island map.
3. The depth image segmentation method according to claim 2, further comprising deleting a target from the next island map once the target has been determined.
4. The method for segmenting the depth image according to claim 1, wherein the step 1) further comprises a step of removing abnormal points from the acquired depth image.
5. A depth image segmentation apparatus comprising a processor and a memory, the processor executing a computer program stored by the memory to implement the depth image segmentation method as claimed in any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010655269.9A CN111986203B (en) | 2020-07-09 | 2020-07-09 | Depth image segmentation method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010655269.9A CN111986203B (en) | 2020-07-09 | 2020-07-09 | Depth image segmentation method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111986203A CN111986203A (en) | 2020-11-24 |
CN111986203B true CN111986203B (en) | 2022-10-11 |
Family
ID=73438661
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010655269.9A Active CN111986203B (en) | 2020-07-09 | 2020-07-09 | Depth image segmentation method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111986203B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113284137B (en) * | 2021-06-24 | 2023-07-18 | 中国平安人寿保险股份有限公司 | Paper fold detection method, device, equipment and storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105069808A (en) * | 2015-08-31 | 2015-11-18 | 四川虹微技术有限公司 | Video image depth estimation method based on image segmentation |
WO2017071160A1 (en) * | 2015-10-28 | 2017-05-04 | 深圳大学 | Sea-land segmentation method and system for large-size remote-sensing image |
CN110414411A (en) * | 2019-07-24 | 2019-11-05 | 中国人民解放军战略支援部队航天工程大学 | The sea ship candidate region detection method of view-based access control model conspicuousness |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10229492B2 (en) * | 2015-06-17 | 2019-03-12 | Stoecker & Associates, LLC | Detection of borders of benign and malignant lesions including melanoma and basal cell carcinoma using a geodesic active contour (GAC) technique |
- 2020-07-09: CN application CN202010655269.9A granted as patent CN111986203B (Active)
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105069808A (en) * | 2015-08-31 | 2015-11-18 | 四川虹微技术有限公司 | Video image depth estimation method based on image segmentation |
WO2017071160A1 (en) * | 2015-10-28 | 2017-05-04 | 深圳大学 | Sea-land segmentation method and system for large-size remote-sensing image |
CN110414411A (en) * | 2019-07-24 | 2019-11-05 | 中国人民解放军战略支援部队航天工程大学 | The sea ship candidate region detection method of view-based access control model conspicuousness |
Non-Patent Citations (2)
Title |
---|
"POSEIDON: An Analytical End-to-End Performance Prediction Model for Submerged Object Detection and Recognition by Lidar Fluorosensors in the Marine Environment"; Stefania Matteoli et al.; IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing; 2017-11-30; Vol. 10, No. 11; pp. 5110-5133 *
"A hybrid circular object detection algorithm based on Hough transform and connected component analysis"; Yan Shiju et al.; Acta Automatica Sinica; 2008-04-30; Vol. 34, No. 4; pp. 408-413 *
Also Published As
Publication number | Publication date |
---|---|
CN111986203A (en) | 2020-11-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110120042B (en) | Crop image pest and disease damage area extraction method based on SLIC super-pixel and automatic threshold segmentation | |
CN111598780B (en) | Terrain adaptive interpolation filtering method suitable for airborne LiDAR point cloud | |
CN110544300B (en) | Method for automatically generating three-dimensional model based on two-dimensional hand-drawn image characteristics | |
CN104036483A (en) | Image processing system and image processing method | |
CN111986203B (en) | Depth image segmentation method and device | |
CN104182976B (en) | Field moving object fining extraction method | |
CN108280833B (en) | Skeleton extraction method for plant root system bifurcation characteristics | |
CN116523898A (en) | Tobacco phenotype character extraction method based on three-dimensional point cloud | |
CN116433672A (en) | Silicon wafer surface quality detection method based on image processing | |
CN113129323A (en) | Remote sensing ridge boundary detection method and system based on artificial intelligence, computer equipment and storage medium | |
CN106846324B (en) | Irregular object height measuring method based on Kinect | |
CN116704333A (en) | Single tree detection method based on laser point cloud data | |
CN113963314A (en) | Rainfall monitoring method and device, computer equipment and storage medium | |
CN116740337A (en) | Safflower picking point identification positioning method and safflower picking system | |
CN114359314B (en) | Real-time visual key detection and positioning method for humanoid piano playing robot | |
CN107437254B (en) | Orchard adjacent overlapping shape fruit distinguishing method | |
CN113361532B (en) | Image recognition method, system, storage medium, device, terminal and application | |
CN113686600B (en) | Performance identification device for rotary cultivator and ditcher | |
CN112581472B (en) | Target surface defect detection method facing human-computer interaction | |
CN114359403A (en) | Three-dimensional space vision positioning method, system and device based on non-integrity mushroom image | |
CN113506301A (en) | Tooth image segmentation method and device | |
CN113744184A (en) | Snakehead ovum counting method based on image processing | |
CN111489434A (en) | Medical image three-dimensional reconstruction method based on three-dimensional graph cut | |
CN114187254B (en) | Shatian pomelo image identification method for picking robot | |
CN112991303A (en) | Automatic extraction method of electric tower insulator string based on three-dimensional point cloud |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |