CN110910407A - Street tree trunk extraction method based on mobile laser scanning point cloud data - Google Patents

Street tree trunk extraction method based on mobile laser scanning point cloud data

Info

Publication number
CN110910407A
CN110910407A
Authority
CN
China
Prior art keywords
trunk
point cloud
frame
point
delta
Legal status
Granted
Application number
CN201911166759.6A
Other languages
Chinese (zh)
Other versions
CN110910407B (en)
Inventor
Li Qiujie
Yuan Pengcheng
Liu Xu
Zhou Hongping
Current Assignee
Nanjing Forestry University
Original Assignee
Nanjing Forestry University
Application filed by Nanjing Forestry University filed Critical Nanjing Forestry University
Priority to CN201911166759.6A
Publication of CN110910407A
Application granted
Publication of CN110910407B

Classifications

    • G06T 7/11 Region-based segmentation (Image analysis; Segmentation; Edge detection)
    • G06F 18/285 Selection of pattern recognition techniques, e.g. of classifiers in a multi-classifier system
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06T 2207/10028 Range image; Depth image; 3D point clouds


Abstract

A street tree trunk extraction method based on mobile laser scanning point cloud data comprises the following steps: constructing an urban street point cloud annotation data set and extracting 14 local point cloud features covering depth, elevation, dimensionality, density, and intensity; automatically learning the difference between trunk and non-trunk point clouds with a boosting supervised learning algorithm, obtaining a high-precision trunk point cloud detector through feature fusion; and, based on the trunk point cloud recognition result, segmenting and identifying each individual trunk. The method accurately extracts street tree trunks and can be applied to street tree resource surveys.

Description

Street tree trunk extraction method based on mobile laser scanning point cloud data
Technical Field
The invention relates to a trunk extraction method for street trees, and in particular to a street tree trunk extraction method based on mobile laser scanning point cloud data.
Background
Street tree resource survey is an important part of urban forest resource survey and a precondition for urban forest ecological research. At present it relies mainly on manual field measurement and sampling, which is labor-intensive and inefficient and cannot reflect the dynamic changes of street trees in a timely and accurate way. Mobile Laser Scanning (MLS) can quickly acquire high-resolution, high-precision data on the vertical structure of trees, improving on the difficult, long-cycle traditional survey mode and enabling large-range, comprehensive, and rapid acquisition of street tree parameters at the single-tree scale.
Street tree point cloud segmentation is the first step in extracting street tree parameters from MLS data. At present, tree positions are mainly obtained through trunk detection, and a complete tree is then obtained by region growing with the trunk as seed points. The typical process is: (1) voxelization: divide space into cubic voxels and assign each point to its voxel according to its spatial coordinates, using voxels as the basic processing unit; (2) ground filtering: filter out the ground points with plane fitting or similar methods and retain the above-ground object points; (3) pole detection: project the points within a low height range onto the ground and identify pole-like objects from the projection area, shape, density distribution, and similar characteristics; (4) pole extraction: segment the point cloud with the points at the bottom of each pole as seed points and extract the complete pole; (5) street tree identification: filter out pole-like targets such as street lamps and telegraph poles using point cloud distribution, size, and similar characteristics, and identify the street trees.
These existing methods are knowledge-driven: detection rules are designed manually, which makes it difficult to bridge the semantic gap between low-level point cloud data and high-level object targets. Trunks cannot be distinguished from other pole-like targets at an early stage, so additional identification rules must be designed during or after pole extraction to filter out the non-tree poles.
Disclosure of Invention
The invention aims to provide a street tree trunk extraction method based on mobile laser scanning point cloud data that addresses the problem of street tree trunk identification. The method divides trunk extraction into two steps, trunk point cloud identification and trunk point cloud segmentation: first, a supervised learning algorithm automatically learns the difference between trunk and non-trunk point clouds from an urban street point cloud annotation data set, yielding a high-precision trunk point cloud detector; then each trunk is segmented and identified based on the trunk point cloud identification result.
The technical scheme of the invention is as follows:
the invention provides a pavement tree trunk extraction method based on mobile laser scanning point cloud data, which comprises the following steps:
Step 1: acquiring point cloud data of urban street training samples with an MLS (Mobile Laser Scanning) measurement system;
Step 2: performing point cloud neighborhood extraction for each measuring point P in the point cloud data of each training sample, obtaining the spherical neighborhood U(P, δ) with center P and radius δ, and recording the attribute values of every measuring point in the neighborhood;
Step 3: calculating point cloud feature parameters from the attributes of all measuring points in each training sample's spherical neighborhood;
Step 4: training a trunk point cloud detector on the point cloud feature parameters of the training samples to obtain a trained strong classifier F(x);
Step 5: performing spherical neighborhood extraction and point cloud feature parameter calculation on the mobile laser scanning point cloud data of the urban street to be identified according to steps 2-3, identifying the trunk point cloud with the strong classifier F(x) to obtain the trunk point cloud identification result, and extracting the point cloud data of the street tree trunks.
Further, step 1 specifically comprises:
Step 1-1: mounting a two-dimensional laser radar on a vehicle and, with this vehicle-mounted MLS measurement system moving along an urban street, acquiring the measurement distance r(i, j) and the laser reflection intensity I(i, j) of each angular measuring point in the sector plane perpendicular to the vehicle's moving direction, where i is the frame number and j the intra-frame number of the measuring point;
Step 1-2: establishing the MLS coordinate system and obtaining the three-dimensional point cloud coordinates (x(i, j), y(i, j), z(i, j)) with the following formulas:

x(i, j) = i·v·Δt
y(i, j) = r(i, j)·cos θ(j)
z(i, j) = r(i, j)·sin θ(j)

The X axis is the vehicle's moving direction, the Y axis the depth direction, and the Z axis perpendicular to the ground, pointing upward; v is the vehicle speed, Δt the two-dimensional laser radar scanning period, r(i, j) the distance of the j-th measuring point of the i-th frame, θ(j) the scanning angle of the j-th measuring point of each frame, and x(i, j), y(i, j), z(i, j) the coordinates of the j-th measuring point of the i-th frame. Measurement data in the MLS coordinate system are indexed by the frame number i and the intra-frame number j.
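As an illustration, the coordinate conversion above can be sketched in code (a minimal sketch; the array layout, function name, and use of NumPy are assumptions for illustration, not part of the invention):

    import numpy as np

    def mls_coordinates(r, theta, v, dt):
        # r: (num_frames, points_per_frame) measured distances r(i, j)
        # theta: (points_per_frame,) scanning angles theta(j) in radians
        # v: vehicle speed; dt: two-dimensional laser radar scanning period
        i = np.arange(r.shape[0])[:, None]        # frame numbers
        x = np.broadcast_to(i * v * dt, r.shape)  # X: along the driving direction
        y = r * np.cos(theta)[None, :]            # Y: depth direction
        z = r * np.sin(theta)[None, :]            # Z: vertical, pointing upward
        return x, y, z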
Further, step 2 specifically comprises:
Step 2-1: calculating, from the vehicle speed v and the two-dimensional laser radar scanning period Δt, the inter-frame resolution Δi, i.e. the minimum distance between measuring points of adjacent frames along the driving direction:
Δi=vΔt
Step 2-2: calculating the intra-frame resolution Δj, i.e. the minimum distance between adjacent measuring points along the scanning direction of the two-dimensional laser radar, with the following formula:
Δj=r(i,j)Δα
where r(i, j) is the measurement distance of the j-th measuring point of the i-th frame, and Δα is the angular resolution of the two-dimensional laser radar in radians;
Step 2-3: calculating, from the neighborhood radius δ, the inter-frame resolution Δi, and the intra-frame resolution Δj, the frame offset δi and the intra-frame offset δj of the measuring point's spherical neighborhood:

δi = ⌈δ/Δi⌉, δj = ⌈δ/Δj⌉

where δ is the radius of the spherical neighborhood and ⌈·⌉ denotes rounding up;
Step 2-4: obtaining the frame number range [i-δi, i+δi] and the intra-frame number range [j-δj, j+δj] covered by the spherical neighborhood with center P and radius δ, where i is the frame number and j the intra-frame number of the measuring point;
Step 2-5: obtaining the spherical neighborhood with center P and radius δ, recorded as

U(P, δ) = {Pn | n = 1, 2, …, N}

where n is the neighborhood point index and N the number of neighborhood points.
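To make steps 2-1 through 2-5 concrete, a minimal sketch of the spherical neighborhood extraction follows (NumPy and all names are illustrative assumptions; the ceiling-based offsets match the reconstruction above):

    import numpy as np

    def spherical_neighborhood(points, i, j, delta, v, dt, dalpha):
        # points: (num_frames, points_per_frame, 3) array of MLS coordinates
        # delta: neighborhood radius; dalpha: angular resolution in radians
        r_ij = np.linalg.norm(points[i, j, 1:])   # recover r(i, j) from y and z
        di = v * dt                               # inter-frame resolution
        dj = r_ij * dalpha                        # intra-frame resolution
        off_i = int(np.ceil(delta / di))          # frame offset
        off_j = int(np.ceil(delta / dj))          # intra-frame offset
        nf, npf, _ = points.shape
        block = points[max(i - off_i, 0):min(i + off_i, nf - 1) + 1,
                       max(j - off_j, 0):min(j + off_j, npf - 1) + 1].reshape(-1, 3)
        dist = np.linalg.norm(block - points[i, j], axis=1)
        return block[dist <= delta]               # the N neighborhood points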
Further, step 3 is specifically: extracting the point cloud feature parameters from the attribute values of all measuring points in the spherical neighborhood U(P, δ); the attribute values of a measuring point Pn comprise its x coordinate xn, y coordinate yn, z coordinate zn, measurement distance rn, and laser reflection intensity In, recorded as

an = (xn, yn, zn, rn, In), n = 1, 2, …, N.
Further, the features include depth features, elevation features, dimensionality features, density features, and intensity features, where:
the depth features include: a depth mean parameter, a depth variance parameter, and a depth range parameter;
the elevation features include: an elevation mean parameter, an elevation variance parameter, and an elevation range parameter;
the dimensionality features include: a total variance parameter, a linearity parameter, a planarity parameter, and a sphericity parameter;
the density features include: a normalized density parameter;
the intensity features include: an intensity mean parameter, an intensity variance parameter, and an intensity range parameter;
The point cloud feature parameters comprise one or more of the feature parameters above: any subset of the feature groups may be selected, and from each selected group any subset of parameters may be combined into the point cloud feature vector.
Further, in step 4 the trunk point cloud detector is trained with the Discrete AdaBoost machine learning method:
Step 4-1: taking the point cloud feature parameters obtained in step 3 as the training sample set

S = {(xm, cm) | m = 1, 2, …, M}

where xm is the multidimensional feature vector of a measuring point, whose dimension equals the number of point cloud feature parameters obtained in step 3, cm ∈ {1, -1} is the measuring point class (1 for trunk, -1 for non-trunk), m is the training sample index, and M the total number of training samples;
Step 4-2: setting a weak learning algorithm L and learning the strong classifier F(x) from the training set S.
Further, the training of step 4-2 is specifically:
Step 4-2-1: initializing the weight distribution D1 over the training samples:

D1(m) = 1/M, m = 1, 2, …, M
Step 4-2-2: learning a weak classifier ft = L(S, Dt), where ft is the classifier output by the t-th round of learning, t = 1, 2, …, T is the round index, and T the total number of rounds;
Step 4-2-3: calculating the weighted classification error rate et of the weak classifier:

et = Σm Dt(m)·1[ft(xm) ≠ cm]

where 1[·] equals 1 when its argument holds and 0 otherwise;
Step 4-2-4: calculating the weak classifier weight αt:

αt = (1/2)·ln((1 - et)/et)
Step 4-2-5: updating the weight distribution Dt+1 of the training samples for the next round with the following formula:

Dt+1(m) = Dt(m)·exp(-αt·cm·ft(xm))
Step 4-2-6: normalizing the updated weight distribution Dt+1:

Dt+1(m) = Dt+1(m) / Σm' Dt+1(m')
Step 4-2-7: repeating steps 4-2-2 through 4-2-6 for T rounds of learning on the training samples and outputting the trained strong classifier

F(x) = sign(Σt αt·ft(x))

where sign is the sign function.
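Steps 4-2-1 through 4-2-7 are the standard Discrete AdaBoost procedure; a minimal sketch follows, with a depth-limited classification decision tree assumed as the weak learning algorithm L (scikit-learn is an assumed dependency chosen for illustration):

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def train_discrete_adaboost(X, c, T):
        # X: (M, d) feature vectors x_m; c: (M,) labels c_m in {+1, -1}
        M = len(c)
        D = np.full(M, 1.0 / M)                # step 4-2-1: D1(m) = 1/M
        weak, alphas = [], []
        for _ in range(T):
            f = DecisionTreeClassifier(max_depth=2)
            f.fit(X, c, sample_weight=D)       # step 4-2-2: f_t = L(S, D_t)
            pred = f.predict(X)
            e = float(np.sum(D * (pred != c))) # step 4-2-3: weighted error e_t
            if e <= 0 or e >= 0.5:             # degenerate weak classifier, stop
                break
            alpha = 0.5 * np.log((1 - e) / e)  # step 4-2-4: alpha_t
            D = D * np.exp(-alpha * c * pred)  # step 4-2-5: reweight samples
            D = D / D.sum()                    # step 4-2-6: normalize D_{t+1}
            weak.append(f)
            alphas.append(alpha)
        def F(x):                              # step 4-2-7: strong classifier
            return np.sign(sum(a * f.predict(x) for a, f in zip(alphas, weak)))
        return F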
Further, in step 5, after the trunk point cloud identification result is obtained, the street tree trunk information is extracted as follows:
Step 5-1: for trunk segmentation, counting the number of trunk points in each frame of the point cloud identification result;
Step 5-2: if the number of trunk points in a frame exceeds a preset trunk identification height threshold, marking the frame as a trunk frame, otherwise treating it as a non-trunk frame;
Step 5-3: if the distance between a trunk frame and the current trunk start frame exceeds a preset trunk identification diameter threshold, marking the trunk frame as a new trunk start frame, otherwise assigning it to the current trunk start frame; traversing all trunk frames, clustering every trunk point to its nearest trunk start frame, and segmenting out the point cloud data of each street tree trunk. The frames containing a trunk show a distinct peak in the per-frame trunk point count, so detecting the trunk start frames and assigning trunk points to the nearest start frame segments each trunk's point cloud and effectively localizes the trunk, as sketched below.
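A minimal sketch of steps 5-1 through 5-3 (array names and thresholds are illustrative assumptions):

    import numpy as np

    def segment_trunks(frame_ids, count_thresh, gap_thresh):
        # frame_ids: (N,) frame number of every point classified as trunk by F(x)
        frames, counts = np.unique(frame_ids, return_counts=True)  # step 5-1
        trunk_frames = frames[counts > count_thresh]                # step 5-2
        starts = []                                                 # step 5-3
        for f in trunk_frames:
            if not starts or f - starts[-1] > gap_thresh:
                starts.append(f)               # new trunk start frame
        starts = np.asarray(starts)
        # assign every trunk point to its nearest trunk start frame
        labels = np.argmin(np.abs(frame_ids[:, None] - starts[None, :]), axis=1)
        return starts, labels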
The invention has the beneficial effects that:
The invention learns the difference between trunk and non-trunk point clouds from an urban street point cloud annotation data set with a supervised learning algorithm, obtaining a high-precision trunk point cloud detector that can effectively filter out pole-like objects similar in shape to trunks, such as iron posts and street lamps; each trunk is then segmented and identified based on the trunk point cloud identification result.
The supervised-learning-based trunk point cloud detector of the invention has good classification performance and provides accurate data for trunk segmentation and extraction.
Additional features and advantages of the invention will be set forth in the detailed description which follows.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent by describing in more detail exemplary embodiments thereof with reference to the attached drawings, in which like reference numerals generally represent like parts throughout.
Fig. 1 shows a schematic flow diagram of the present invention.
Fig. 2 illustrates a trunk point cloud identification result according to one embodiment of the invention.
Fig. 3 shows a plot of the number of trunk points for each frame, according to one embodiment of the invention.
Fig. 4 illustrates a trunk point cloud segmentation result according to one embodiment of the invention.
Detailed Description
Preferred embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While the preferred embodiments of the present invention are shown in the drawings, it should be understood that the present invention may be embodied in various forms and should not be limited to the embodiments set forth herein.
A street tree trunk extraction method based on mobile laser scanning point cloud data comprises the following steps:
Step 1: acquiring point cloud data of urban street training samples with an MLS (Mobile Laser Scanning) measurement system, specifically:
Step 1-1: mounting a two-dimensional laser radar on a vehicle and, with this vehicle-mounted MLS measurement system moving along an urban street, acquiring the measurement distance r(i, j) and the laser reflection intensity I(i, j) of each angular measuring point in the sector plane perpendicular to the vehicle's moving direction, where i is the frame number and j the intra-frame number of the measuring point;
Step 1-2: establishing the MLS coordinate system and obtaining the three-dimensional point cloud coordinates (x(i, j), y(i, j), z(i, j)) with the following formulas:

x(i, j) = i·v·Δt
y(i, j) = r(i, j)·cos θ(j)
z(i, j) = r(i, j)·sin θ(j)

The X axis is the vehicle's moving direction, the Y axis the depth direction, and the Z axis perpendicular to the ground, pointing upward; v is the vehicle speed, Δt the two-dimensional laser radar scanning period, r(i, j) the distance of the j-th measuring point of the i-th frame, θ(j) the scanning angle of the j-th measuring point of each frame, and x(i, j), y(i, j), z(i, j) the coordinates of the j-th measuring point of the i-th frame. Measurement data in the MLS coordinate system are indexed by the frame number i and the intra-frame number j.
Step 2: performing point cloud neighborhood extraction on each measuring point P in the point cloud data of each training sample, acquiring a spherical domain U (P, delta) taking P as a spherical center and delta as a radius, and recording the attribute value of each measuring point in the spherical domain, wherein the method specifically comprises the following steps of;
Step 2-1: calculating, from the vehicle speed v and the two-dimensional laser radar scanning period Δt, the inter-frame resolution Δi, i.e. the minimum distance between measuring points of adjacent frames along the driving direction:
Δi=vΔt
Step 2-2: calculating the intra-frame resolution Δj, i.e. the minimum distance between adjacent measuring points along the scanning direction of the two-dimensional laser radar, with the following formula:
Δj=r(i,j)Δα
where r(i, j) is the measurement distance of the j-th measuring point of the i-th frame, and Δα is the angular resolution of the two-dimensional laser radar in radians;
Step 2-3: calculating, from the neighborhood radius δ, the inter-frame resolution Δi, and the intra-frame resolution Δj, the frame offset δi and the intra-frame offset δj of the measuring point's spherical neighborhood:

δi = ⌈δ/Δi⌉, δj = ⌈δ/Δj⌉

where δ is the radius of the spherical neighborhood and ⌈·⌉ denotes rounding up;
Step 2-4: obtaining the frame number range [i-δi, i+δi] and the intra-frame number range [j-δj, j+δj] covered by the spherical neighborhood with center P and radius δ, where i is the frame number and j the intra-frame number of the measuring point;
Step 2-5: obtaining the spherical neighborhood with center P and radius δ, recorded as

U(P, δ) = {Pn | n = 1, 2, …, N}

where n is the neighborhood point index and N the number of neighborhood points.
Step 3: calculating the point cloud feature parameters from the attributes of all measuring points in each training sample's spherical neighborhood, specifically: extracting the point cloud feature parameters from the attribute values of all measuring points in the spherical neighborhood U(P, δ); the attribute values of a measuring point Pn comprise its x coordinate xn, y coordinate yn, z coordinate zn, measurement distance rn, and laser reflection intensity In, recorded as

an = (xn, yn, zn, rn, In), n = 1, 2, …, N.
The features include depth features, elevation features, dimensionality features, density features, and intensity features, where:
the depth features include: a depth mean parameter, a depth variance parameter, and a depth range parameter;
the elevation features include: an elevation mean parameter, an elevation variance parameter, and an elevation range parameter;
the dimensionality features include: a total variance parameter, a linearity parameter, a planarity parameter, and a sphericity parameter; to compute them, first obtain the covariance matrix of the three-dimensional coordinates over the spherical neighborhood, then compute the parameters from its eigenvalues λ1 ≥ λ2 ≥ λ3 > 0.
The density characteristics include: normalizing the density parameter;
the intensity features include: an intensity mean parameter, an intensity variance parameter, and an intensity range parameter;
The point cloud feature parameters comprise one or more of the feature parameters above; any subset of the feature groups and of the parameters within each group may be combined into the point cloud feature vector. For example, 14 features covering depth, elevation, dimensionality, density, and intensity are extracted, as shown in Table 1.
TABLE 1 Point cloud local features

Feature group | Feature parameters | Count
Depth | depth mean, depth variance, depth range | 3
Elevation | elevation mean, elevation variance, elevation range | 3
Dimensionality | total variance, linearity, planarity, sphericity | 4
Density | normalized density | 1
Intensity | intensity mean, intensity variance, intensity range | 3
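A sketch of how the 14 features in Table 1 can be computed for one neighborhood follows (a minimal sketch: the standard eigenvalue-based forms of linearity, planarity, and sphericity and the sphere-volume normalization of the density are assumptions where the text leaves the exact formulas unspecified):

    import numpy as np

    def local_features(xyz, depth, intensity, delta):
        # xyz: (N, 3) neighborhood coordinates; depth: (N,) distances r_n
        # intensity: (N,) laser reflectances I_n; delta: neighborhood radius
        stats = lambda a: [a.mean(), a.var(), a.max() - a.min()]  # mean/var/range
        feats = stats(depth) + stats(xyz[:, 2])   # depth and elevation features
        lam = np.sort(np.linalg.eigvalsh(np.cov(xyz.T)))[::-1]  # l1 >= l2 >= l3
        l1, l2, l3 = lam
        feats += [l1 + l2 + l3,                   # total variance
                  (l1 - l2) / l1,                 # linearity (assumed form)
                  (l2 - l3) / l1,                 # planarity (assumed form)
                  l3 / l1]                        # sphericity (assumed form)
        feats.append(len(xyz) / (4.0 / 3.0 * np.pi * delta ** 3))  # normalized density
        feats += stats(intensity)                 # intensity features
        return np.asarray(feats)                  # 3 + 3 + 4 + 1 + 3 = 14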
Step 4: training the trunk point cloud detector on the point cloud feature parameters of the training samples to obtain the trained strong classifier F(x), specifically:
Step 4-1: taking the point cloud feature parameters obtained in step 3 as the training sample set

S = {(xm, cm) | m = 1, 2, …, M}

where xm is the multidimensional feature vector of a measuring point, whose dimension equals the number of point cloud feature parameters obtained in step 3, cm ∈ {1, -1} is the measuring point class (1 for trunk, -1 for non-trunk), m is the training sample index, and M the total number of training samples;
Step 4-2: setting a weak learning algorithm L and learning the strong classifier F(x) from the training set S; the training proceeds as follows:
Step 4-2-1: initializing the weight distribution D1 over the training samples:

D1(m) = 1/M, m = 1, 2, …, M
Step 4-2-2: learning a weak classifier ft = L(S, Dt), where ft is the classifier output by the t-th round of learning, t = 1, 2, …, T is the round index, and T the total number of rounds;
Step 4-2-3: calculating the weighted classification error rate et of the weak classifier:

et = Σm Dt(m)·1[ft(xm) ≠ cm]

where 1[·] equals 1 when its argument holds and 0 otherwise;
Step 4-2-4: calculating the weak classifier weight αt:

αt = (1/2)·ln((1 - et)/et)
Step 4-2-5: updating the weight distribution Dt+1 of the training samples for the next round with the following formula:

Dt+1(m) = Dt(m)·exp(-αt·cm·ft(xm))
Step 4-2-6: normalizing the updated weight distribution Dt+1:

Dt+1(m) = Dt+1(m) / Σm' Dt+1(m')
Step 4-2-7: repeating steps 4-2-2 through 4-2-6 for T rounds of learning on the training samples and outputting the trained strong classifier

F(x) = sign(Σt αt·ft(x))

where sign is the sign function.
Step 5: performing spherical neighborhood extraction and point cloud feature parameter calculation on the mobile laser scanning point cloud data of the urban street to be identified according to steps 2-3, identifying the trunk point cloud with the strong classifier F(x) to obtain the trunk point cloud identification result, and then extracting the street tree trunk point cloud data as follows.
Step 5-1, trunk segmentation is carried out, and the number of measuring points of each frame in a point cloud identification result is counted;
Step 5-2: if the number of trunk points in a frame exceeds a preset trunk identification height threshold, marking the frame as a trunk frame, otherwise treating it as a non-trunk frame;
Step 5-3: if the distance between a trunk frame and the current trunk start frame exceeds a preset trunk identification diameter threshold, marking the trunk frame as a new trunk start frame, otherwise assigning it to the current trunk start frame; traversing all trunk frames, clustering every trunk point to its nearest trunk start frame, and segmenting out the point cloud data of each street tree trunk. As shown in Fig. 3, the frames containing a trunk show a distinct peak in the per-frame trunk point count; detecting the trunk start frames and assigning trunk points to the nearest start frame segments each trunk's point cloud and effectively localizes the trunk.
In a specific implementation:
The experiment used a 2D LiDAR, model UTM-30LX, manufactured by Hokuyo of Japan. This LiDAR uses 905 nm infrared light and obtains measurements at different angles through a swinging motor; its measuring distance is 0.1 m to 30 m, measurement accuracy 30 mm, scanning range 270°, angular resolution 0.25°, and scanning period Δt = 25 ms. The UTM-30LX acquires one frame of data per scan, comprising 1081 distances and laser reflection intensities at different angles, represented by 4 bytes and 2 bytes respectively.
The experiment used a remote-controlled MLS measurement system whose mobile platform was a tracked trolley with 2 driving wheels and 18 driven wheels; an STM32F103ZET6 served as the trolley controller, and the trolley speed was obtained from a tachometer encoder.
Data were collected on a 50 m road section of the Nanjing Forestry University campus containing buildings, street trees, pedestrians, lanes, sidewalks, curbs, street lamps, turf, and other ground objects. The street was scanned from one side with the MLS system; the 2D LiDAR scanning angle ranged from 135° to -135°, the vehicle speed was v = 0.2 m/s, and the acquisition time was 250 s, yielding 10,000 frames and 10,810,000 point cloud measurements.
When training the trunk detector, a classification decision tree was used as the weak learning algorithm, with T = 300 iterations. 5% of the point cloud dataset was randomly drawn for training, and the remaining 95% was used for testing.
The trunk point cloud detector performance was evaluated with precision and recall:

precision = TP/(TP + FP), recall = TP/(TP + FN)

where TP, FP, and FN are the numbers of true positive, false positive, and false negative trunk points, respectively.
when the radius of the sphere area is selected to be delta equal to 0.25m, the precision ratio of the test set is 95.11 percent, and the recall ratio is 98.08 percent. The trunk point cloud recognition result is shown in fig. 2, and it can be seen that the trunk detector can effectively filter out the shaft-shaped objects similar to the trunk shape, such as iron piles, street lamps and the like.
For trunk segmentation, the number of trunk points in each frame was counted first; as shown in Fig. 3, the frames where trunks are located show distinct peaks. Trunk start frames were then detected, trunk points were assigned to the nearest start frame, and the point cloud data of each trunk was segmented, as shown in Fig. 4.
Having described embodiments of the present invention, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments.

Claims (8)

1. A street tree trunk extraction method based on mobile laser scanning point cloud data, characterized by comprising the following steps:
step 1: acquiring point cloud data of urban street training samples with an MLS (Mobile Laser Scanning) measurement system;
step 2: performing point cloud neighborhood extraction for each measuring point P in the point cloud data of each training sample, obtaining the spherical neighborhood U(P, δ) with center P and radius δ, and recording the attribute values of every measuring point in the neighborhood;
step 3: calculating point cloud feature parameters from the attributes of all measuring points in each training sample's spherical neighborhood;
step 4: training a trunk point cloud detector on the point cloud feature parameters of the training samples to obtain a trained strong classifier F(x);
step 5: performing spherical neighborhood extraction and point cloud feature parameter calculation on the mobile laser scanning point cloud data of the urban street to be identified according to steps 2-3, identifying the trunk point cloud with the strong classifier F(x) to obtain the trunk point cloud identification result, and extracting the point cloud data of the street tree trunks.
2. The street tree trunk extraction method based on mobile laser scanning point cloud data as claimed in claim 1, wherein step 1 is specifically:
step 1-1: mounting a two-dimensional laser radar on a vehicle and, with this vehicle-mounted MLS measurement system moving along an urban street, acquiring the measurement distance r(i, j) and the laser reflection intensity I(i, j) of each angular measuring point in the sector plane perpendicular to the vehicle's moving direction, where i is the frame number and j the intra-frame number of the measuring point;
step 1-2: establishing the MLS coordinate system and obtaining the three-dimensional point cloud coordinates (x(i, j), y(i, j), z(i, j)) with the following formulas:

x(i, j) = i·v·Δt
y(i, j) = r(i, j)·cos θ(j)
z(i, j) = r(i, j)·sin θ(j)

The X axis is the vehicle's moving direction, the Y axis the depth direction, and the Z axis perpendicular to the ground, pointing upward; v is the vehicle speed, Δt the two-dimensional laser radar scanning period, r(i, j) the distance of the j-th measuring point of the i-th frame, θ(j) the scanning angle of the j-th measuring point of each frame, and x(i, j), y(i, j), z(i, j) the coordinates of the j-th measuring point of the i-th frame. Measurement data in the MLS coordinate system are indexed by the frame number i and the intra-frame number j.
3. The street tree trunk extraction method based on mobile laser scanning point cloud data as claimed in claim 1, wherein step 2 is specifically:
step 2-1: calculating, from the vehicle speed v and the two-dimensional laser radar scanning period Δt, the inter-frame resolution Δi, i.e. the minimum distance between measuring points of adjacent frames along the driving direction:
Δi=vΔt
step 2-2: calculating the intra-frame resolution Δj, i.e. the minimum distance between adjacent measuring points along the scanning direction of the two-dimensional laser radar, with the following formula:
Δj=r(i,j)Δα
where r(i, j) is the measurement distance of the j-th measuring point of the i-th frame, and Δα is the angular resolution of the two-dimensional laser radar in radians;
step 2-3: calculating, from the neighborhood radius δ, the inter-frame resolution Δi, and the intra-frame resolution Δj, the frame offset δi and the intra-frame offset δj of the measuring point's spherical neighborhood:

δi = ⌈δ/Δi⌉, δj = ⌈δ/Δj⌉

where δ is the radius of the spherical neighborhood and ⌈·⌉ denotes rounding up;
step 2-4: obtaining the frame number range [i-δi, i+δi] and the intra-frame number range [j-δj, j+δj] covered by the spherical neighborhood with center P and radius δ, where i is the frame number and j the intra-frame number of the measuring point;
step 2-5: obtaining the spherical neighborhood with center P and radius δ, recorded as

U(P, δ) = {Pn | n = 1, 2, …, N}

where n is the neighborhood point index and N the number of neighborhood points.
4. The street tree trunk extraction method based on mobile laser scanning point cloud data as claimed in claim 1, wherein step 3 is specifically: extracting the point cloud feature parameters from the attribute values of all measuring points in the spherical neighborhood U(P, δ); the attribute values of a measuring point Pn comprise its x coordinate xn, y coordinate yn, z coordinate zn, measurement distance rn, and laser reflection intensity In, recorded as

an = (xn, yn, zn, rn, In), n = 1, 2, …, N.
5. The street tree trunk extraction method based on mobile laser scanning point cloud data as claimed in claim 4, wherein the features include depth features, elevation features, dimensionality features, density features, and intensity features, where:
the depth features include: a depth mean parameter, a depth variance parameter, and a depth range parameter;
the elevation features include: an elevation mean parameter, an elevation variance parameter, and an elevation range parameter;
the dimensionality features include: a total variance parameter, a linearity parameter, a planarity parameter, and a sphericity parameter;
the density features include: a normalized density parameter;
the intensity features include: an intensity mean parameter, an intensity variance parameter, and an intensity range parameter;
the point cloud feature parameters comprise one or more of the feature parameters listed above.
6. The street tree trunk extraction method based on mobile laser scanning point cloud data as claimed in claim 5, wherein in step 4 the trunk point cloud detector is trained with the Discrete AdaBoost machine learning method:
step 4-1: taking the point cloud feature parameters obtained in step 3 as the training sample set

S = {(xm, cm) | m = 1, 2, …, M}

where xm is the multidimensional feature vector of a measuring point, whose dimension equals the number of point cloud feature parameters obtained in step 3, cm ∈ {1, -1} is the measuring point class (1 for trunk, -1 for non-trunk), m is the training sample index, and M the total number of training samples;
step 4-2: setting a weak learning algorithm L and learning the strong classifier F(x) from the training set S.
7. The street tree trunk extraction method based on mobile laser scanning point cloud data as claimed in claim 6, wherein the training of step 4-2 is specifically:
step 4-2-1: initializing the weight distribution D1 over the training samples:

D1(m) = 1/M, m = 1, 2, …, M
step 4-2-2: learning a weak classifier ft = L(S, Dt), where ft is the classifier output by the t-th round of learning, t = 1, 2, …, T is the round index, and T the total number of rounds;
step 4-2-3: calculating the weighted classification error rate et of the weak classifier:

et = Σm Dt(m)·1[ft(xm) ≠ cm]

where 1[·] equals 1 when its argument holds and 0 otherwise;
step 4-2-4: calculating the weak classifier weight αt:

αt = (1/2)·ln((1 - et)/et)
step 4-2-5: updating the weight distribution Dt+1 of the training samples for the next round with the following formula:

Dt+1(m) = Dt(m)·exp(-αt·cm·ft(xm))
step 4-2-6: normalizing the updated weight distribution Dt+1:

Dt+1(m) = Dt+1(m) / Σm' Dt+1(m')
step 4-2-7: repeating steps 4-2-2 through 4-2-6 for T rounds of learning on the training samples and outputting the trained strong classifier

F(x) = sign(Σt αt·ft(x))

where sign is the sign function.
8. The street tree trunk extraction method based on mobile laser scanning point cloud data as claimed in claim 1, wherein after the trunk point cloud identification result is obtained in step 5, the street tree trunk information is extracted as follows:
step 5-1: for trunk segmentation, counting the number of trunk points in each frame of the point cloud identification result;
step 5-2: if the number of trunk points in a frame exceeds a preset trunk identification height threshold, marking the frame as a trunk frame, otherwise treating it as a non-trunk frame;
step 5-3: if the distance between a trunk frame and the current trunk start frame exceeds a preset trunk identification diameter threshold, marking the trunk frame as a new trunk start frame, otherwise assigning it to the current trunk start frame; traversing all trunk frames, clustering every trunk point to its nearest trunk start frame, and segmenting out the point cloud data of each street tree trunk.
CN201911166759.6A 2019-11-25 2019-11-25 Street tree trunk extraction method based on mobile laser scanning point cloud data Active CN110910407B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911166759.6A CN110910407B (en) 2019-11-25 2019-11-25 Street tree trunk extraction method based on mobile laser scanning point cloud data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911166759.6A CN110910407B (en) 2019-11-25 2019-11-25 Street tree trunk extraction method based on mobile laser scanning point cloud data

Publications (2)

Publication Number Publication Date
CN110910407A (en) 2020-03-24
CN110910407B (en) 2023-09-15

Family

ID=69819349

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911166759.6A Active CN110910407B (en) 2019-11-25 2019-11-25 Street tree trunk extraction method based on mobile laser scanning point cloud data

Country Status (1)

Country Link
CN (1) CN110910407B (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106651863A (en) * 2016-11-30 2017-05-10 厦门大学 Point cloud data based automatic tree cutting method
CN107833244A (en) * 2017-11-02 2018-03-23 南京市测绘勘察研究院股份有限公司 A kind of shade tree attribute automatic identifying method based on mobile lidar data
WO2019104780A1 (en) * 2017-11-29 2019-06-06 北京数字绿土科技有限公司 Laser radar point cloud data classification method, apparatus and device, and storage medium
CN108564650A (en) * 2018-01-08 2018-09-21 南京林业大学 Shade tree target recognition methods based on vehicle-mounted 2D LiDAR point clouds data
CN110415259A (en) * 2019-07-30 2019-11-05 南京林业大学 A kind of shade tree point cloud recognition methods based on laser reflection intensity

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Li Qiujie, Zheng Jiaqiang, Zhou Hongping, Tao Ran, Shu Yiping: "Street tree target point cloud recognition based on variable-scale grid indexing and machine learning" *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111983637A (en) * 2020-08-20 2020-11-24 南京林业大学 Orchard inter-row path extraction method based on laser radar
CN112363503A (en) * 2020-11-06 2021-02-12 南京林业大学 Orchard vehicle automatic navigation control system based on laser radar
CN112363503B (en) * 2020-11-06 2022-11-18 南京林业大学 Orchard vehicle automatic navigation control system based on laser radar
CN113313005A (en) * 2021-05-25 2021-08-27 国网山东省电力公司济宁供电公司 Power transmission conductor on-line monitoring method and system based on target identification and reconstruction
CN113313005B (en) * 2021-05-25 2023-03-24 国网山东省电力公司济宁供电公司 Power transmission conductor on-line monitoring method and system based on target identification and reconstruction

Also Published As

Publication number Publication date
CN110910407B (en) 2023-09-15

Similar Documents

Publication Publication Date Title
Lim et al. ERASOR: Egocentric ratio of pseudo occupancy-based dynamic object removal for static 3D point cloud map building
Lehtomäki et al. Object classification and recognition from mobile laser scanning point clouds in a road environment
Yang et al. A shape-based segmentation method for mobile laser scanning point clouds
Zaganidis et al. Integrating deep semantic segmentation into 3-d point cloud registration
Hou et al. Deep learning-based subsurface target detection from GPR scans
CN110910407B (en) Street tree trunk extraction method based on mobile laser scanning point cloud data
CN106204705A (en) A kind of 3D point cloud segmentation method based on multi-line laser radar
CN108564650B (en) Lane tree target identification method based on vehicle-mounted 2D LiDAR point cloud data
CN108171131A (en) Based on the Lidar point cloud data road marking line extracting methods and system for improving MeanShift
Wang et al. 3-D point cloud object detection based on supervoxel neighborhood with Hough forest framework
CN112330661A (en) Multi-period vehicle-mounted laser point cloud road change monitoring method
CN108052886A (en) A kind of puccinia striiformis uredospore programming count method of counting
Börcs et al. Fast 3-D urban object detection on streaming point clouds
Yadav et al. Road surface detection from mobile lidar data
Yang et al. Using mobile laser scanning data for features extraction of high accuracy driving maps
Zhao et al. Ground surface recognition at voxel scale from mobile laser scanning data in urban environment
CN103456029B (en) The Mean Shift tracking of a kind of anti-Similar color and illumination variation interference
Roynard et al. Fast and robust segmentation and classification for change detection in urban point clouds
Gong et al. A two-level framework for place recognition with 3D LiDAR based on spatial relation graph
Xu et al. Instance segmentation of trees in urban areas from MLS point clouds using supervoxel contexts and graph-based optimization
Boerner et al. Voxel based segmentation of large airborne topobathymetric lidar data
Miyazaki et al. Line-based planar structure extraction from a point cloud with an anisotropic distribution
Jeong et al. Classification of LiDAR data for generating a high-precision roadway map
Liu et al. Road classification using 3D LiDAR sensor on vehicle
Sirmacek et al. Road detection from remotely sensed images using color features

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant