CN115115652A - Pavement tree target point cloud online segmentation method - Google Patents
Pavement tree target point cloud online segmentation method
- Publication number
- CN115115652A (application number CN202210499652.9A)
- Authority
- CN
- China
- Prior art keywords
- tree
- street
- point cloud
- segmentation
- candidate
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T7/11 — Image analysis; Segmentation; Edge detection; Region-based segmentation
- G06T1/60 — General purpose image data processing; Memory management
- G06T7/136 — Segmentation; Edge detection involving thresholding
- G06V10/764 — Image or video recognition using pattern recognition or machine learning; classification, e.g. of video objects
- G06T2207/10028 — Image acquisition modality; Range image; Depth image; 3D point clouds
- G06T2207/30252 — Subject of image; Vehicle exterior; Vicinity of vehicle
Abstract
The invention provides a street tree target point cloud online segmentation method comprising the following steps: buffer establishment, point cloud-image mapping, street tree image instance segmentation, street tree image instance fusion, street tree instance completeness detection, and image-point cloud mapping, yielding street tree point cloud instances and completing the segmentation. By building an online street point cloud processing framework and by fusing and optimizing the real-time segmentation results of street tree image instances, the method achieves accurate online segmentation of the street tree target point cloud.
Description
Technical Field
The invention relates to online segmentation of street tree targets, in particular to online segmentation of street tree targets based on two-dimensional LiDAR point cloud data, and more specifically to online segmentation of street tree target point clouds based on image instance segmentation.
Background
Spraying pesticide on the foliage is an important means of controlling street tree diseases and insect pests. At present, online target-oriented spraying technology has been successfully applied to pest control in orchards: a sensor mounted on the spraying vehicle detects tree crowns in real time, and pesticide is applied in a targeted manner according to the crown positions. However, unlike the homogeneous orchard environment, urban streets contain many kinds of ground-object targets, which greatly increases the computational and time complexity of online crown target segmentation and limits the extension of online targeted spraying to street trees.
Among the many target detection sensors, laser radar (LiDAR) can quickly and accurately measure object surface distances and, as the spraying vehicle moves, acquires high-resolution, high-precision three-dimensional point cloud data of the object surfaces on both sides of the road. Existing street tree point cloud segmentation methods fall mainly into two categories:
1. and (5) dividing and identifying the edges. And (4) gradually filtering the non-street tree point cloud by adopting an image/point cloud segmentation method according to the appearance characteristics of the street trees and other ground object targets. The method has weak street tree identification capability, and needs to extract global features of the street tree and other ground objects from all point cloud data of a street scene, so that the requirement of on-line segmentation of a target of the street tree cannot be met.
2. Identify first, then segment. A street tree detector obtained by supervised learning first recognizes street tree points one by one in the street point cloud data, and clustering or similar algorithms then split the street tree point cloud into individual trees. This approach has higher street tree recognition precision, but the point-by-point detection is computationally expensive, so it also cannot meet the requirement of online street tree target segmentation.
Disclosure of Invention
The invention aims to provide a street tree target point cloud online segmentation method addressing the problem of online street tree target point cloud segmentation from two-dimensional LiDAR point cloud data.
The technical scheme of the invention is as follows:
the invention provides a street tree target point cloud online segmentation method, which comprises the following steps:
s1, establishing a buffer area:
constructing a FIFO buffer storing N frames of street point cloud data, covering street length L, collected by a two-dimensional LiDAR; after the FIFO buffer is first filled, initializing the processing time t = 1 and executing S2;
updating the buffer: in each subsequent processing round, writing ΔN frames of street point cloud update data, covering street update length ΔL, into the FIFO buffer, incrementing t by 1, and executing S2;
assigning a label T_P to the street point cloud data first written into and updating the buffer, with T_P = 0 denoting a non-street-tree point;
s2, point cloud-image mapping:
converting the street point cloud data in the FIFO buffer at the current time into a three-channel street image C containing the depth coordinates, echo intensities, and echo counts of all points in the N frames of street point cloud data;
s3, street tree image instance segmentation:
segmenting a mask map M from the street image C with a real-time street tree image instance segmentation algorithm, extracting street tree candidate instances from the mask map M, and recording each candidate instance's column range c_L and c_R, frame range f_L and f_R, and segmentation time t';
s4, street tree image instance fusion:
in the initial state, i.e. when t = 1, taking each street tree candidate instance as a street tree instance and assigning it in sequence a count label T and a mapping flag m, with m = 0 denoting that the instance has not been mapped back to the point cloud;
in subsequent segmentation rounds, i.e. when t > 1, comparing and fusing each street tree candidate instance at the current time with the existing street tree instances to obtain the updated information of all street tree instances; the information of a street tree instance comprises its count label T, mask map M, column range c_L and c_R, frame range f_L and f_R, segmentation time t', and mapping flag m; for street tree instances newly added at the current time, setting m = 0;
s5, street tree instance completeness detection:
calculating the frame number, within all point clouds, of the 1st pixel column of the street image C at the current time t:
f_1 = ΔN(t − 1) + 1
for every street tree instance with m = 0, i.e. not yet mapped back to the point cloud: if f_L ≤ f_1 ≤ f_R, the street tree instance is considered completely segmented and S6 is executed; otherwise, the buffer update of S1 is executed;
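As an illustrative sketch (not part of the patent text), the completeness test of S5 reduces to a few lines:

```python
def is_complete(dN, t, fL, fR):
    """S5 completeness test: an instance is considered fully observed once
    the frame number of the first pixel column still in the buffer,
    f1 = dN*(t-1) + 1, falls inside the instance's frame range [fL, fR]."""
    f1 = dN * (t - 1) + 1
    return fL <= f1 <= fR
```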
s6, image-point cloud mapping:
mapping the segmentation result of the street tree instance back to the point cloud according to its count label T and mask map M, with the pixels of the mask map M corresponding one-to-one to points, to obtain the street tree point cloud instance and complete the segmentation.
Further, in S1 the N frames of street point cloud data and the ΔN frames of street point cloud update data are obtained as:
N = L / (vΔt),  ΔN = ΔL / (vΔt)
wherein: L is the street length covered by each online segmentation; v is the moving speed of the two-dimensional LiDAR; Δt is the time the two-dimensional LiDAR needs to acquire 1 frame of point cloud data; ΔL is the street update length.
In S1, L = 10 m and ΔL = 0.5 m.
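As an illustrative sketch (not part of the patent text), the buffer sizes N and ΔN follow directly from the formulas above; the rounding to whole frames is an assumption of the sketch:

```python
def buffer_sizes(L, dL, v, dt):
    """Frames covering street length L and update length dL.

    N = L / (v * dt) and dN = dL / (v * dt); rounding to whole
    frames is an assumption made for this sketch.
    """
    N = round(L / (v * dt))
    dN = round(dL / (v * dt))
    return N, dN

# With the embodiment's values L = 10 m, dL = 0.5 m, v = 0.4 m/s,
# dt = 0.025 s this gives N = 1000 frames and dN = 50 frames.
N, dN = buffer_sizes(10.0, 0.5, 0.4, 0.025)
```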
Further, S2 specifically comprises: converting the depth coordinate y, echo intensity I, and echo count n of the point cloud data in the FIFO buffer to the range 0-255, obtaining the depth coordinate y', echo intensity I', and echo count n' of the corresponding pixel:
a' = 255(a − a_min) / (a_max − a_min),  a ∈ {y, I, n}
wherein: one frame of point cloud data corresponds to one column of pixels and one point to one pixel; a_min and a_max denote the minimum and maximum of the depth coordinate / echo intensity / echo count, where the depth bounds are the LiDAR's distance measurement range, the intensity bounds its intensity measurement range, and the echo count bounds its echo count range; all the minima are 0.
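A minimal sketch of this per-channel rescaling (the clipping to the sensor range and the example depth range of [0, 30] m are assumptions of the sketch, not stated in the text):

```python
import numpy as np

def to_channel(a, a_min, a_max):
    """Rescale one point attribute (depth y, echo intensity I, or echo
    count n) to 0-255 for one channel of the street image C, using
    a' = 255 * (a - a_min) / (a_max - a_min)."""
    a = np.clip(np.asarray(a, dtype=np.float64), a_min, a_max)
    return np.round(255.0 * (a - a_min) / (a_max - a_min)).astype(np.uint8)

# e.g. depths in metres with an assumed LiDAR distance range of [0, 30] m
pix = to_channel([0.0, 15.0, 30.0], 0.0, 30.0)  # -> [0, 128, 255]
```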
Further, S3 specifically comprises:
S3.1, processing the street image C with a real-time street tree image instance segmentation algorithm to generate a mask map M of the same size as C;
S3.2, the mask map M records the pixel positions of the street tree candidate instances, with pixel value 0 denoting background and pixel value 255 denoting street tree; from the street tree pixels, recording each candidate instance's column range c_L and c_R, frame range f_L and f_R, and segmentation time t' = t;
wherein: c_L and c_R are the start and end column numbers of the street tree candidate instance in the mask map M, and f_L and f_R its start and end frame numbers within all point clouds: f_L = c_L + ΔN(t − 1), f_R = c_R + ΔN(t − 1).
Further, the comparison process in S4 specifically comprises:
S4.1, coarse duplicate-segmentation detection: comparing the frame range f_L to f_R of each street tree candidate instance at the current time against all street tree instances for frame overlap; if an overlap exists, executing S4.2; otherwise, taking the candidate instance as a street tree instance and assigning it a count label equal to the number of all previous street tree instances plus 1;
S4.2, fine duplicate-segmentation detection: for an overlapping street tree candidate instance, calculating its frame intersection-over-union with the overlapping street tree instance T:
f_IoU = (min(f_R, f_R^T) − max(f_L, f_L^T) + 1) / (max(f_R, f_R^T) − min(f_L, f_L^T) + 1)
wherein: T denotes the count label of the street tree instance, and f_L^T, f_R^T its frame range;
if f_IoU > 0.5, the candidate instance is judged to be a duplicate segmentation of street tree instance T and S4.3 is executed; otherwise, it is judged to be a new street tree instance and assigned a count label;
S4.3, duplicate-segmentation fusion: from the mask maps of the street tree candidate instance and of street tree instance T, calculating their pixel areas A and A_T; if A > A_T, the candidate instance is the more complete segmentation, so the mask map, column range, frame range, and segmentation time of instance T are replaced by those of the candidate instance, and the candidate instance is deleted; otherwise, instance T is the more complete segmentation and the candidate instance is deleted.
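A sketch of the one-dimensional frame intersection-over-union test used in S4.2; treating the frame ranges as inclusive (hence the +1 terms) is an assumption of the sketch:

```python
def frame_iou(fL, fR, fLT, fRT):
    """Intersection-over-union of two inclusive 1-D frame ranges
    [fL, fR] (candidate) and [fLT, fRT] (existing instance T), as in
    the f_IoU test of S4.2."""
    inter = min(fR, fRT) - max(fL, fLT) + 1
    union = max(fR, fRT) - min(fL, fLT) + 1
    return max(inter, 0) / union

# candidate 10..30 vs instance 20..40 overlap on 20..30 (11 frames of 31)
assert frame_iou(10, 30, 20, 40) == 11 / 31
```

A candidate with `frame_iou(...) > 0.5` would then be treated as a duplicate segmentation of instance T.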
Further, S6 specifically comprises: for each street tree instance that passes the completeness detection of S5, finding, for every pixel covered by the instance, the intra-frame index s and the frame number f of the corresponding point:
s = r
f = c + ΔN(t' − 1)
wherein: r denotes the row number of the pixel in the mask map M, c its column number, and t' the segmentation time of the street tree instance; when the pixel value is 255, setting the point's label T_P equal to the instance's count label T; otherwise, doing nothing.
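As a sketch (not part of the patent text), the inverse pixel-to-point mapping of S6 is a direct transcription of the two formulas above:

```python
def pixel_to_point(r, c, dN, t_prime):
    """S6 inverse mapping: mask pixel (row r, column c) -> LiDAR point
    (intra-frame index s, frame number f), with s = r and
    f = c + dN*(t'-1)."""
    return r, c + dN * (t_prime - 1)

# pixel in row 7, column 3 of an instance segmented at t' = 2, dN = 50
assert pixel_to_point(7, 3, 50, 2) == (7, 53)
```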
Further, the segmentation method further comprises a street tree point cloud instance optimization step S7: optimizing the street tree point cloud instance segmentation result using the point cloud's driving-direction, depth, and height coordinates (x, y, z), removing two kinds of misclassification: non-street-tree points measured by the two-dimensional LiDAR through gaps in the branches and leaves that were wrongly segmented as street tree points, and treetop points wrongly segmented as non-street-tree points.
Further, the street tree point cloud instance optimization step specifically comprises:
S7.1, region-of-interest filtering: filtering out points of the street tree point cloud whose depth coordinate y falls outside the set interval [y'_min, y'_max], setting their labels T_P = 0, wherein: y'_min = 0, y'_max = 7 m;
S7.2, limited-radius nearest-neighbour classification:
according to the frame range of the current street tree point cloud instance and the frame range of the previous street tree point cloud instance, reassigning the label T_P with a limited-radius nearest-neighbour classifier for all points satisfying the following three conditions:
(1) the frame number f lies between the frame ranges of the previous and the current street tree point cloud instances;
(2) the height coordinate z ≥ 2 m;
(3) label T_P = 0.
The classification rule is: count the labels of all points inside the sphere of radius r centred on the point to be classified; the label of the point to be classified equals the label with the largest point count, with r = 0.5 m.
The invention has the following beneficial effects:
By building an online street point cloud processing framework and by fusing and optimizing the real-time segmentation results of street tree image instances, the street tree target point cloud online segmentation method achieves accurate online segmentation of the street tree target point cloud.
Additional features and advantages of the invention will be set forth in the detailed description which follows.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent by describing in more detail exemplary embodiments thereof with reference to the attached drawings, in which like reference numerals generally represent like parts throughout.
FIG. 1 shows the processing flow diagram of the present invention.
FIG. 2 shows a schematic diagram of all street point cloud data collected by the two-dimensional LiDAR in the embodiment.
FIG. 3 shows a schematic diagram of the street tree target online segmentation result in the embodiment.
FIG. 4 shows a schematic diagram of street tree image instance segmentation, street tree image instance fusion, and street tree instance completeness detection in the embodiment.
FIG. 5 shows a schematic diagram of the street tree point cloud segmentation result obtained through image-point cloud mapping in the embodiment.
FIG. 6 shows a schematic diagram of the street tree point cloud instance segmentation optimization result after region-of-interest filtering and limited-radius nearest-neighbour classification in the embodiment.
Detailed Description
Preferred embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While the preferred embodiments of the present invention are shown in the drawings, it should be understood that the present invention may be embodied in various forms and should not be limited to the embodiments set forth herein.
In a specific implementation:
In the experiment, a two-dimensional LiDAR of model UTM-30LX-EW was used to collect street point cloud data; each frame of point cloud data contains 1081 measurement points, the time Δt to acquire 1 frame of point cloud data is 25 ms, and the moving speed v of the two-dimensional LiDAR is 0.4 m/s.
The computer configuration is shown in Table 1. Fig. 2 shows all street point cloud data collected by the two-dimensional LiDAR, and Fig. 3 shows the street tree target online segmentation result: all street trees are segmented accurately. Table 2 gives the per-step frame processing time, 4.373 ms/frame in total, which is less than Δt, realizing real-time segmentation of the street tree target point cloud.
TABLE 1 Computer configuration
Item | Configuration
Operating system | Windows 11
CPU | Intel i5-11400F
GPU | GeForce RTX 3060
GPU acceleration | CUDA 11.5 + cuDNN 8.3.1
IDE | Spyder
Deep learning framework | PyTorch 1.7.0
TABLE 2 frame processing time
Fig. 4 shows the street tree target point cloud online segmentation process. When t = 1, one street tree candidate instance is segmented from the three-channel image obtained by point cloud-image mapping; it is taken as street tree instance 1, and its count label is set to T = 1. When t = 2, two street tree candidate instances are segmented from the three-channel image obtained by point cloud-image mapping. The 1st candidate instance is a repeated segmentation of street tree instance 1 with a larger area, so the information of street tree instance 1 is updated; the 2nd candidate instance is a new street tree instance with count label T = 2. When t = 3, two street tree candidate instances are segmented from the three-channel image. The 1st candidate instance is a repeated segmentation of street tree instance 1 but with a smaller area, so the information of street tree instance 1 is not updated; the 2nd candidate instance is a repeated segmentation of street tree instance 2 with a larger area, so the information of street tree instance 2 is updated. When t = 10, two street tree candidate instances are again segmented and handled in the same way as at t = 3. At this time, the frame number of the 1st column of the image falls within the frame range of street tree instance 1, meaning that only an incomplete street tree instance 1 can be observed in the current image; street tree instance 1 is therefore considered completely segmented, and its segmentation result is mapped back to the point cloud.

As shown in Fig. 5, two types of errors can be seen: (1) non-street-tree point clouds measured by the two-dimensional laser radar through gaps between branches and leaves are wrongly segmented as street tree point clouds; (2) part of the treetop point cloud is wrongly segmented as non-street-tree point cloud.
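The per-time-step merge logic walked through above can be sketched in Python. `TreeInstance`, `update_instances`, and the overlap test are illustrative names, not from the patent, and mask details are omitted:

```python
# Minimal sketch of the online merge logic illustrated in Fig. 4.
from dataclasses import dataclass

@dataclass
class TreeInstance:
    label: int    # count label T
    f_left: int   # start frame f_L
    f_right: int  # end frame f_R
    area: int     # mask pixel area

def update_instances(instances, candidates, next_label):
    """Merge the candidate instances of step t into the running instance list."""
    for cand in candidates:
        # find an existing instance whose frame range overlaps the candidate
        match = next((inst for inst in instances
                      if cand.f_left <= inst.f_right and inst.f_left <= cand.f_right),
                     None)
        if match is None:
            cand.label = next_label          # new street tree instance
            next_label += 1
            instances.append(cand)
        elif cand.area > match.area:         # repeated, more complete segmentation
            match.f_left, match.f_right = cand.f_left, cand.f_right
            match.area = cand.area           # update instance information
        # else: smaller repeated segmentation, keep the existing instance
    return instances, next_label
```

Candidates that overlap nothing become new instances; overlapping candidates only replace an instance when their pixel area is larger, matching the t = 2 and t = 3 cases above.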
Fig. 6 shows the street tree point cloud instance segmentation result after optimization by region-of-interest filtering and radius-limited nearest neighbor classification: the non-street-tree points measured through gaps between branches and leaves are filtered out, and the treetop point cloud is accurately detected.
Having described embodiments of the present invention, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments.
Claims (9)
1. A street tree target point cloud online segmentation method is characterized by comprising the following steps:
s1, establishing a buffer area:
constructing a FIFO buffer area for storing N frames of street point cloud data, covering a street length L, collected by a two-dimensional laser radar; after the FIFO buffer area is fully written for the first time, initializing the processing time t, setting t = 1, and executing S2;
updating the buffer area: in subsequent processing, writing ΔN frames of street point cloud update data, covering a street update length ΔL, into the FIFO buffer area, adding 1 to the processing time t, and executing S2;
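The FIFO buffer area of S1 maps naturally onto a bounded deque; a minimal sketch, with N and ΔN chosen arbitrarily for illustration:

```python
# Sketch of the FIFO buffer area in S1; each frame is any per-frame payload.
from collections import deque

N = 20        # frames covering street length L (illustrative value)
delta_N = 1   # frames covering update length ΔL (illustrative value)

def write_frames(buffer, frames):
    """Write ΔN new frames; the deque discards the oldest frames when full (FIFO)."""
    for frame in frames:
        buffer.append(frame)
    return buffer
```

A `deque(maxlen=N)` gives the first-in-first-out discard behavior for free, so each update shifts the N-frame window forward by ΔN frames.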
assigning a label T_P to the street point cloud data first written into and subsequently updating the buffer area, and letting T_P = 0, representing a non-street-tree point;
s2, point cloud-image mapping:
converting the street point cloud data in the FIFO buffer area at the current time into a three-channel street image C comprising the depth coordinate, echo intensity, and echo count of each point in the N frames of street point cloud data;
s3, dividing the image example of the street tree:
segmenting a mask map M from the street image C by a real-time street tree image instance segmentation algorithm, extracting street tree candidate instances according to the mask map M, and recording the column range c_L and c_R, frame range f_L and f_R, and segmentation time t' of each street tree candidate instance;
s4, merging the image examples of the street trees:
in the initial state, namely when t = 1, taking the street tree candidate instance as a street tree instance, sequentially assigning it a count label T and a mapping indicator m, and letting m = 0, representing that the street tree instance has not been mapped back to the point cloud;
in subsequent segmentation processing, namely when t > 1, comparing and fusing each street tree candidate instance at the current time with the existing street tree instances to obtain updated information for all street tree instances; the information of a street tree instance comprises its count label T, mask map M, column range c_L and c_R, frame range f_L and f_R, segmentation time t', and mapping indicator m; for street tree instances newly added at the current time, letting m = 0;
s5, integrity detection of the street tree instance:
calculating the frame number of the 1st pixel column of the street image C at the current time t over all point clouds:

f_1 = ΔN(t − 1) + 1

for all street tree instances with m = 0 that have not yet been mapped back to the point cloud: if f_L ≤ f_1 ≤ f_R, the street tree instance is considered completely segmented and S6 is executed; otherwise, the buffer area update of S1 is executed;
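The integrity test in S5 reduces to two small functions; a sketch assuming the formula f_1 = ΔN(t − 1) + 1 given in the claim:

```python
# Sketch of the integrity detection in S5: an instance is complete once the
# first column of the current image falls inside its frame range.
def first_frame_number(t, delta_N):
    """Frame number of column 1 of image C at processing time t: f1 = ΔN(t-1)+1."""
    return delta_N * (t - 1) + 1

def is_complete(f_L, f_R, t, delta_N):
    """True when the instance's frame range [f_L, f_R] contains f1."""
    f1 = first_frame_number(t, delta_N)
    return f_L <= f1 <= f_R
```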
s6, image-point cloud mapping:
mapping the segmentation result of the street tree instance back to the point cloud according to its count label T and mask map M, with the pixels of the mask map M corresponding one-to-one to points, to obtain the street tree point cloud instance and complete the segmentation.
2. The street tree target point cloud online segmentation method according to claim 1, wherein in S1, the N frames of street point cloud data and the ΔN frames of street point cloud update data are obtained by the following formulas:

N = L / (v·Δt), ΔN = ΔL / (v·Δt)

wherein: L is the street length of each online segmentation; v is the moving speed of the two-dimensional laser radar; Δt is the time for the two-dimensional laser radar to acquire 1 frame of point cloud data; ΔL is the street update length.
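A small sketch of the frame counts, assuming N = L/(v·Δt) and ΔN = ΔL/(v·Δt) as the claim's variables suggest; the function name is illustrative:

```python
# Sketch of the frame-count relations assumed for claim 2: at speed v, one frame
# covers v·Δt of street, so L and ΔL correspond to L/(vΔt) and ΔL/(vΔt) frames.
def frame_counts(L, delta_L, v, delta_t):
    """Return (N, ΔN) rounded to whole frames."""
    N = round(L / (v * delta_t))
    delta_N = round(delta_L / (v * delta_t))
    return N, delta_N
```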
3. The street tree target point cloud online segmentation method according to claim 2, wherein in S1, L = 10 m and ΔL = 0.5 m.
4. The street tree target point cloud online segmentation method according to claim 1, wherein S2 specifically comprises: converting the ranges of the depth coordinate y, echo intensity I, and echo count n of the point cloud data in the FIFO buffer area to 0-255, obtaining the depth coordinate y', echo intensity I', and echo count n' of each corresponding point:

a' = 255(a − a_min) / (a_max − a_min), a ∈ {y, I, n}

wherein: one frame of point cloud data corresponds to one column of pixels and one point corresponds to one pixel; a_min and a_max respectively represent the minimum and maximum values of the depth coordinate / echo intensity / echo count; the minimum and maximum depth values are given by the range measurement span of the LiDAR, the minimum and maximum echo intensity values by the intensity measurement span of the LiDAR, and the minimum and maximum echo count values by the echo count span of the LiDAR, with all minimum values being 0.
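The channel normalization of claim 4 can be sketched with NumPy; `normalize_channel` is an illustrative name:

```python
# Sketch of the point cloud -> three-channel image normalization in claim 4:
# raw values a in [a_min, a_max] are scaled to 0..255 pixel values.
import numpy as np

def normalize_channel(a, a_min, a_max):
    """Scale raw channel values to uint8 pixel values in 0..255."""
    a = np.clip(np.asarray(a, dtype=float), a_min, a_max)
    return np.round(255.0 * (a - a_min) / (a_max - a_min)).astype(np.uint8)
```

Applied to y, I, and n in turn, this yields the three channels of the street image C.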
5. The method for on-line segmentation of a street tree target point cloud according to claim 1, wherein S3 specifically comprises:
s3.1, processing the street image C with a real-time street tree image instance segmentation algorithm to generate a mask map M of the same size as C;
s3.2, the mask map M marks the pixel positions of the street tree candidate instances, pixel value 0 representing background and pixel value 255 representing street tree; according to the street tree pixel values, recording the column range c_L and c_R, frame range f_L and f_R, and segmentation time t' = t of each street tree candidate instance;
wherein: c_L and c_R are respectively the starting and ending column numbers of the street tree candidate instance in the mask map M, and f_L and f_R are respectively its starting and ending frame numbers over all point clouds: f_L = c_L + ΔN(t − 1), f_R = c_R + ΔN(t − 1).
6. The street tree target point cloud online segmentation method according to claim 1, wherein the comparison processing in S4 specifically comprises:
s4.1, repeated segmentation coarse detection: comparing the frame range f_L and f_R of each street tree candidate instance at the current time with the frame ranges of all street tree instances for overlap; if frame overlap exists, executing S4.2; otherwise, taking the street tree candidate instance as a street tree instance and assigning it a count label equal to the total number of all previous street tree instances plus 1;
s4.2, repeated segmentation fine detection: for an overlapping street tree candidate instance, calculating the frame intersection-over-union ratio between the candidate instance and the street tree instance T overlapping it:

f_IoU = |[f_L, f_R] ∩ [f_L^T, f_R^T]| / |[f_L, f_R] ∪ [f_L^T, f_R^T]|

wherein: T represents the count label of the street tree instance, and f_L^T and f_R^T denote its frame range;

if f_IoU > 0.5, the street tree candidate instance is judged to be a repeated segmentation of street tree instance T and S4.3 is executed; otherwise, it is judged to be a new street tree instance and assigned a count label;
s4.3, repeated segmentation fusion: calculating the pixel area A of the street tree candidate instance and the pixel area A_T of street tree instance T from their mask maps; if A > A_T, the candidate instance is segmented more completely, so the mask map, column range, frame range, and segmentation time of street tree instance T are replaced by those of the candidate instance, and the candidate instance is deleted; otherwise, street tree instance T is segmented more completely, and the candidate instance is deleted.
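Steps S4.1-S4.3 can be sketched as follows; the inclusive 1-D frame IoU and the dict-based instances are assumptions for illustration, not the patent's exact definitions:

```python
# Sketch of repeated-segmentation detection (S4.2) and fusion (S4.3).
def frame_iou(fl1, fr1, fl2, fr2):
    """Intersection-over-union of two inclusive frame ranges [fl, fr]."""
    inter = min(fr1, fr2) - max(fl1, fl2) + 1
    union = max(fr1, fr2) - min(fl1, fl2) + 1
    return max(inter, 0) / union

def fuse(candidate, instance):
    """Keep whichever segmentation covers the larger pixel area (S4.3)."""
    return candidate if candidate["area"] > instance["area"] else instance
```

With the 0.5 threshold of S4.2, two detections of the same tree at successive time steps overlap heavily and fuse, while adjacent trees with small frame overlap stay separate instances.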
7. The method for on-line segmentation of a street tree target point cloud according to claim 1, wherein step S6 specifically comprises:
for a street tree instance that has passed the integrity detection of step S5, finding, for every pixel covered by the instance, the intra-frame index s and frame number f of the corresponding point:

s = r
f = c + ΔN(t′ − 1)

wherein: r is the row number of the pixel in the mask map M, c is the column number of the pixel in the mask map M, and t' is the segmentation time of the street tree instance; when the pixel value is 255, the label T_P of the point is set equal to the count label T of the street tree instance; otherwise, no processing is done.
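The inverse mapping of S6 (s = r, f = c + ΔN(t′ − 1)) can be sketched with NumPy; `map_mask_to_points` is an illustrative name:

```python
# Sketch of the image -> point cloud mapping in S6: each street-tree pixel
# (row r, column c) of the mask addresses the point with intra-frame index
# s = r in global frame f = c + ΔN(t' - 1), which receives count label T.
import numpy as np

def map_mask_to_points(mask, t_prime, delta_N, T):
    """Return (s, f, T) triples for all street-tree pixels (value 255)."""
    rows, cols = np.nonzero(mask == 255)
    s = rows                              # intra-frame point index
    f = cols + delta_N * (t_prime - 1)    # global frame number
    return [(int(si), int(fi), T) for si, fi in zip(s, f)]
```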
8. The street tree target point cloud online segmentation method according to claim 1, further comprising S7, street tree point cloud instance optimization: optimizing the street tree point cloud instance segmentation result using the vehicle driving direction, depth, and height coordinates (x, y, z) of the point cloud, and removing two types of misjudgment: one where non-street-tree point clouds measured by the two-dimensional laser radar through gaps between branches and leaves are wrongly segmented as street tree point clouds, and the other where street treetop point clouds are wrongly segmented as non-street-tree point clouds.
9. The street tree target point cloud online segmentation method according to claim 8, wherein the street tree point cloud instance optimization specifically comprises:
s7.1, region-of-interest filtering: filtering out points in the street tree point cloud whose depth coordinate y falls outside the set region [y'_min, y'_max], setting their labels T_P = 0, wherein: y'_min = 0, y'_max = 7 m;
s7.2, radius-limited nearest neighbor classification:
according to the frame range of the current street tree point cloud instance and the frame range of the previous street tree point cloud instance, re-assigning the label T_P with a radius-limited nearest neighbor classifier for point clouds meeting the following three conditions:
(2) the height coordinate z is more than or equal to 2 m;
(3) the label T_P = 0.
The classification rule is as follows: counting the labels of all points within a sphere of radius r centered on the point to be classified, the label of the point to be classified being set to the label with the largest point count, where r = 0.5 m.
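Steps S7.1 and S7.2 can be sketched in Python; a brute-force neighbor search stands in for a spatial index, and all function names are illustrative:

```python
# Sketch of S7.1 (region-of-interest filtering) and the S7.2 voting rule
# (radius-limited majority vote with r = 0.5 m).
import numpy as np

def roi_filter(labels, y, y_min=0.0, y_max=7.0):
    """Set label 0 (non-street-tree) for points with depth outside [y_min, y_max]."""
    labels = labels.copy()
    labels[(y < y_min) | (y > y_max)] = 0
    return labels

def radius_vote(points, labels, query_idx, r=0.5):
    """Re-label each queried point with the majority label within radius r."""
    labels = labels.copy()
    for i in query_idx:
        d = np.linalg.norm(points - points[i], axis=1)   # brute-force distances
        neighbors = np.nonzero(d <= r)[0]
        vals, counts = np.unique(labels[neighbors], return_counts=True)
        labels[i] = vals[np.argmax(counts)]              # majority label
    return labels
```

For large clouds a KD-tree would replace the brute-force distance scan, but the voting rule is the same.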
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210499652.9A CN115115652B (en) | 2022-05-09 | 2022-05-09 | On-line dividing method for street tree target point cloud |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115115652A true CN115115652A (en) | 2022-09-27 |
CN115115652B CN115115652B (en) | 2024-03-19 |
Family
ID=83325696
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210499652.9A Active CN115115652B (en) | 2022-05-09 | 2022-05-09 | On-line dividing method for street tree target point cloud |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115115652B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB201813197D0 (en) * | 2018-08-13 | 2018-09-26 | Imperial Innovations Ltd | Mapping object instances using video data |
US10929694B1 (en) * | 2020-01-22 | 2021-02-23 | Tsinghua University | Lane detection method and system based on vision and lidar multi-level fusion |
CN113947605A (en) * | 2021-11-02 | 2022-01-18 | 南京林业大学 | Road tree target point cloud segmentation method based on YOLACT |
Non-Patent Citations (1)
Title |
---|
LI QIUJIE; ZHENG JIAQIANG; ZHOU HONGPING; TAO RAN; SHU YIPING: "Street tree target point cloud recognition based on variable-scale grid index and machine learning", Transactions of the Chinese Society for Agricultural Machinery, no. 06, 30 March 2018 (2018-03-30) *
Also Published As
Publication number | Publication date |
---|---|
CN115115652B (en) | 2024-03-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Wang et al. | A comparative study of state-of-the-art deep learning algorithms for vehicle detection | |
AU2023248173B2 (en) | Method for Property Feature Segmentation | |
Che et al. | Object recognition, segmentation, and classification of mobile laser scanning point clouds: A state of the art review | |
Cheng et al. | Extraction and classification of road markings using mobile laser scanning point clouds | |
CN110244322B (en) | Multi-source sensor-based environmental perception system and method for pavement construction robot | |
Zai et al. | 3-D road boundary extraction from mobile laser scanning data via supervoxels and graph cuts | |
Yang et al. | Computing multiple aggregation levels and contextual features for road facilities recognition using mobile laser scanning data | |
CN110379168B (en) | Traffic vehicle information acquisition method based on Mask R-CNN | |
CN108830246B (en) | Multi-dimensional motion feature visual extraction method for pedestrians in traffic environment | |
CN108280450A (en) | A kind of express highway pavement detection method based on lane line | |
CN115372958A (en) | Target detection and tracking method based on millimeter wave radar and monocular vision fusion | |
Guo et al. | Lane detection and tracking in challenging environments based on a weighted graph and integrated cues | |
CN108171695A (en) | A kind of express highway pavement detection method based on image procossing | |
CN109377511B (en) | Moving target tracking method based on sample combination and depth detection network | |
JP6826023B2 (en) | Target identification device, program and method for identifying a target from a point cloud | |
Li et al. | 3D lidar point-cloud projection operator and transfer machine learning for effective road surface features detection and segmentation | |
CN107808524A (en) | A kind of intersection vehicle checking method based on unmanned plane | |
CN111666860A (en) | Vehicle track tracking method integrating license plate information and vehicle characteristics | |
CN113281782A (en) | Laser radar snow point filtering method based on unmanned vehicle | |
Qing et al. | A novel particle filter implementation for a multiple-vehicle detection and tracking system using tail light segmentation | |
CN112597926A (en) | Method, device and storage medium for identifying airplane target based on FOD image | |
Zheng et al. | Dim target detection method based on deep learning in complex traffic environment | |
CN113505638A (en) | Traffic flow monitoring method, traffic flow monitoring device and computer-readable storage medium | |
Maurya et al. | A modified U-net-based architecture for segmentation of satellite images on a novel dataset | |
CN115115652B (en) | On-line dividing method for street tree target point cloud |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||