CN115690513A - Urban street tree species identification method based on deep learning - Google Patents

Urban street tree species identification method based on deep learning

Info

Publication number
CN115690513A
CN115690513A
Authority
CN
China
Prior art keywords
image
crown
tree
deep learning
street tree
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211420428.2A
Other languages
Chinese (zh)
Inventor
单晓明
严君
魏配配
朱铭凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Jiuzhi Environmental Technology Service Co ltd
Original Assignee
Jiangsu Jiuzhi Environmental Technology Service Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Jiuzhi Environmental Technology Service Co ltd filed Critical Jiangsu Jiuzhi Environmental Technology Service Co ltd
Priority to CN202211420428.2A
Publication of CN115690513A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an urban street tree species identification method based on deep learning. Multispectral images of urban street trees are preprocessed, and image objects that may be misjudged as tree crowns are removed according to the preprocessed reflectance; crown vertices are detected with a digital elevation model, their heights are judged, and vertices that are too high or too low are removed; the tree crowns are then detected and delineated with a fuzzy C-means classifier and an active contour algorithm; and the generated single-tree crown images are input into the constructed deep learning network, which identifies the tree species from the spectral and spatial information of the generated crown images. The fuzzy C-means classifier adopted for crown detection effectively reduces intra-class pixel noise and variance and performs classification under the assumption that a single pixel can belong to different classes, a clear improvement over hard classifiers.

Description

Urban street tree species identification method based on deep learning
Technical Field
The invention belongs to the technical field of target identification and computer vision, and relates to an intelligent identification method for urban street tree types based on multispectral images of unmanned aerial vehicles.
Background Art
As one of the important constituent elements of urban green space landscape, street trees play an important role in sustainable urban development, air quality improvement and beautification of the urban environment, and are closely related to the social and economic benefits of cities. The diversity, structure and spatial distribution of tree species are crucial for street trees to play this role. Traditional street tree species surveys still rely heavily on manual work, which is inefficient and costly, and the accuracy and completeness of the resulting statistics cannot be guaranteed.
In recent years, multispectral sensors have been integrated into unmanned aerial vehicles, which offer great advantages in operating cost and flexibility, and urban street tree images taken by unmanned aerial vehicles provide a rich data source for tree species identification.
The detection and delineation of individual tree crowns is a prerequisite for tree species identification and cannot be accomplished without image segmentation. The marker-controlled watershed algorithm currently in use often makes errors when delineating crowns where crowns overlap, resulting in incomplete crown delineation.
Deep learning is one of the important models in the field of machine learning and has achieved breakthroughs in image classification, image recognition and related fields; in particular, the emergence of the convolutional neural network (CNN) addresses the uncertainty of traditional machine learning in image classification.
Disclosure of Invention
To address the shortcomings of the prior art, the invention aims to provide an urban street tree species identification method based on deep learning that reasonably segments the multispectral images provided by an unmanned aerial vehicle, realizes the detection and delineation of individual tree crowns, and builds a spectral-spatial parallel convolutional neural network model to accurately identify street tree species.
To solve this technical problem, the technical scheme adopted by the invention comprises the following steps:
Step 1: acquiring multiple true-color multispectral images with an unmanned aerial vehicle under different scenes;
Step 2: preprocessing the multispectral images acquired in step 1, including generation of a 3D point cloud P and conversion from digital number images to radiance images and then to reflectance images;
Step 3: judging the image objects according to the reflectance presented by the reflectance images from step 2, and removing objects that may be misjudged as tree crowns;
Step 4: detecting crown vertices: generating a normalized digital surface model (nDSM) from the 3D point cloud P generated in step 2, and obtaining the crown vertex positions and heights with a local maximum detection algorithm;
Step 5: judging the crown vertices obtained in step 4 by height, and removing vertices that are too high or too low;
Step 6: estimating the span of each crown with a fuzzy C-means (FCM) classifier containing local background information, obtaining a fractional image u_i of the crown by performing a Markov random field on the fuzzy classification framework, determining the crown boundary from the fractional image, and delineating the crown boundary of each individual tree with an active contour algorithm;
Step 7: acquiring street tree images, labeling them through manual field investigation, and constructing a street tree image training set and validation set;
Step 8: building a spectral-spatial parallel convolutional neural network (SSPCNN) and training it;
Step 9: inputting the segmented images obtained in step 6 into the spectral-spatial parallel convolutional neural network to identify the tree species.
Further, in step 2, during image preprocessing, scale-invariant feature transform (SIFT) is used to perform automatic key point generation and tie-point matching on the multispectral images acquired in step 1 in order to estimate the interior and exterior camera orientation parameters; the estimation result is used to generate a 3D point cloud P describing the horizontal structure of the tree crowns and the surface height variation; a radiometric calibration model of the multispectral camera (such as a RedEdge camera) is used to convert the digital number images into radiance images; and a calibrated reflectance panel, combined with the single-band panel reflectance, is used to convert the radiance images into reflectance images.
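As a rough illustration of the two radiometric conversions just described, the sketch below applies an assumed linear gain/offset sensor model and then scales radiance by the calibrated panel's known single-band reflectance; the gain and offset values, the panel window and the synthetic band are placeholders for illustration, not the camera vendor's actual calibration procedure.

```python
import numpy as np

def dn_to_radiance(dn, gain, offset):
    """Convert a digital-number (DN) band to radiance with an assumed
    linear sensor model: L = gain * DN + offset."""
    return gain * dn.astype(np.float64) + offset

def radiance_to_reflectance(radiance, panel_radiance, panel_reflectance):
    """Scale radiance to reflectance with the calibrated reflectance panel:
    the panel's known single-band reflectance divided by its measured
    radiance gives the conversion factor for the whole band."""
    return radiance * (panel_reflectance / panel_radiance)

# Synthetic example; real gain/offset values come from the camera metadata.
dn_band = np.random.randint(0, 4096, size=(100, 100))        # 12-bit DN image
radiance = dn_to_radiance(dn_band, gain=0.01, offset=0.0)
panel_rad = radiance[10:20, 10:20].mean()                     # pixels over the panel
reflectance = radiance_to_reflectance(radiance, panel_rad, panel_reflectance=0.49)
```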
In step 3, the judgment of crown objects is affected by the shadows cast by urban street trees and by bright spots on building roofs, so reflectance thresholds R_H and R_L are introduced and the judgment is made as follows: if the reflectance R of an image object satisfies R ≤ R_L, the image object is considered a street tree shadow; if R_L ≤ R ≤ R_H, the image object is considered a street tree; if R ≥ R_H, the image object is considered a building roof bright spot. The street tree shadows and building roof bright spots are masked accordingly so that they do not affect the crown objects.
In step 4, the 3D point cloud P generated in step 2 is filtered, a ground point is selected every 5 m² and interpolated to generate a digital elevation model (DEM), the original point cloud data are resampled to obtain a digital surface model (DSM), the DEM is subtracted from the DSM to obtain the normalized digital surface model, and a local maximum detection algorithm (LMDA) is applied to the normalized digital surface model to obtain the crown vertices T_c.
In step 5, because some buildings in the city have crown-like shapes and, owing to urban greening design, low shrubs are often present around street trees, height thresholds T_H and T_L are introduced to avoid affecting the judgment of crown objects. The judgment is made as follows: if a vertex satisfies T_c ≤ T_L, the vertex is judged to be the top of a low shrub; if T_L ≤ T_c ≤ T_H, the vertex is considered a street tree crown vertex; if T_c ≥ T_H, the vertex is considered the top of a tree-like building. Low shrubs and tree-like buildings are masked accordingly so that they do not affect the judgment of crown objects.
In step 6, the span of each crown is estimated with a fuzzy C-means classifier containing local background information, a fractional image u_i of the crown is obtained by performing a Markov random field on the fuzzy classification framework, the crown boundary is determined from the fractional image, and the crown boundary of each individual tree is delineated with an active contour algorithm, specifically as follows:
step a: unlike hard classifiers, which assign each pixel entirely to one class, the fuzzy C-means classifier performs classification under the assumption that a single pixel can belong to different classes. The fractional images u ∈ {u_1, u_2, …, u_C}, each representing the spatial likelihood of one class, are obtained by minimizing Equation 1 below; the minimization of Equation 1 is achieved by iterating the membership degrees and cluster centers of Equations 2 and 3,
J = \sum_{i=1}^{N} \sum_{j=1}^{C} u_{ij}^{m} D_{ij}^{2}    (Equation 1)
u_{ij} = \left[ \sum_{k=1}^{C} \left( D_{ij} / D_{ik} \right)^{2/(m-1)} \right]^{-1}    (Equation 2)
c_j = \frac{\sum_{i=1}^{N} u_{ij}^{m} x_i}{\sum_{i=1}^{N} u_{ij}^{m}}    (Equation 3)
where Equation 1 is the objective function of the fuzzy C-means classifier, N is the number of pixels, C is the number of classes, m is the fuzzification index, Equation 2 is the membership matrix, in which D_{ij} is the Euclidean distance between data point x_i and cluster center c_j, and Equation 3 gives the cluster centers; all parameter iterations must satisfy:
u_{ij} \in [0, 1], \quad \sum_{j=1}^{C} u_{ij} = 1 \ \forall i, \quad 0 < \sum_{i=1}^{N} u_{ij} < N \ \forall j
step b: the posterior probability, prior probability and conditional probability of pixel y and classification label w are defined as p(w|y), p(w) and p(y|w). The prior probability is estimated with a smoothness-prior Markov random field model under the assumption that the physical boundaries of the system vary smoothly, and the conditional probability is derived from Equation 1 of step a. The maximum of the posterior probability corresponds to the minimum of the posterior energy U, so the global posterior probability of the i-th pixel and the j-th class is obtained by minimizing Equation 4 below with a simulated annealing algorithm,
[Equation 4: posterior energy U of the i-th pixel and the j-th class, combining a spectral (fuzzy membership) term weighted by λ and a spatial smoothness term weighted by β]
where λ is a control variable that governs the influence of the local spectral and spatial components in determining class membership, β controls the degree of smoothing at class boundaries, and N_j is the neighborhood, whose prior energy can be defined as v_1(w_r) + v_2(w_r, w_r') + v_3(w_r, w_r', w_r''), in which v_1(w_r), v_2(w_r, w_r') and v_3(w_r, w_r', w_r'') are clique potential functions corresponding to single-site, pair-site and triple-site cliques;
step c: the boundary of each individual crown is delineated with an active contour algorithm that considers both the curve shape parameters and the fractional images of the crown classification to determine the trend of the curve around the crown vertices.
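As a rough sketch of step c, the snippet below evolves a circular snake around a detected crown vertex on a crown fractional image using scikit-image's generic active_contour; the initial radius, smoothing and contour weights are illustrative assumptions, and the exact contour model of the invention is not reproduced here.

```python
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

def delineate_crown(fraction_img, vertex_rc, init_radius=15, n_points=200):
    """Evolve a circular snake initialised around a crown vertex on the
    smoothed fractional image and return the contour coordinates
    ((row, col) ordering in recent scikit-image versions)."""
    r0, c0 = vertex_rc
    t = np.linspace(0, 2 * np.pi, n_points)
    init = np.column_stack([r0 + init_radius * np.sin(t),
                            c0 + init_radius * np.cos(t)])
    smoothed = gaussian(fraction_img, sigma=2, preserve_range=True)
    snake = active_contour(smoothed, init, alpha=0.015, beta=10.0, gamma=0.001)
    return snake  # (n_points, 2) array of boundary coordinates

# fraction_img = the fractional image u_i; vertex_rc = a detected crown top (row, col)
```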
In the urban street tree species identification method based on deep learning, the spectral-spatial parallel convolutional neural network model built in step 8 integrates a 1-D CNN and a 2-D CNN and can process the spectral and spatial information of the image simultaneously, specifically as follows:
step a: the input image is represented as I ∈ R^(h×w×d), where h, w and d denote the height, width and number of spectral channels, respectively; the spectral information at pixel (m, n) is the d-dimensional vector of its channel values, and the spatial information of each pixel is the image patch A_mn centered at the pixel position p_mn with e channels, where e is the number of channels after dimensionality reduction and the dimensionality reduction is performed by principal component analysis;
step b: the input spectral information is passed through two pairs of one-dimensional convolution and pooling layers and then flattened to obtain the spectral feature f_1;
step c: the input spatial information is passed through two pairs of two-dimensional convolution and pooling layers and then flattened to obtain the spatial feature f_2;
step d: f_1 and f_2 are processed through three fully connected layers and finally classified with a softmax classifier.
In the urban street tree species identification method based on deep learning, in step 9 the spectral information and spatial information of the images segmented in step 6 are extracted and input into the spectral-spatial parallel convolutional neural network to identify the street tree species; the identification result is compared with the validation set to determine the identification accuracy.
Compared with the prior art, the invention has the following beneficial technical effects:
(1) The fuzzy C-means classifier adopted for crown detection effectively reduces intra-class pixel noise and variance and can perform classification under the assumption that a single pixel can belong to different classes, a clear improvement over hard classifiers;
(2) The active contour method adopted for crown delineation can effectively segment overlapping crown regions, which the existing marker-controlled watershed algorithm cannot do;
(3) The constructed spectral-spatial parallel convolutional neural network model can process the spectral and spatial information of the image simultaneously.
Drawings
FIG. 1 is a flowchart of the deep learning-based urban street tree species identification method according to an embodiment of the present invention.
Detailed Description
The present invention will now be described in more detail with reference to the accompanying drawings, but it should be understood that the embodiments are illustrative only and not limiting.
In the urban street tree species identification method based on deep learning, multispectral images of urban street trees are acquired by an unmanned aerial vehicle, individual tree crown detection and crown delineation are carried out after image preprocessing, the resulting single-tree crown images are input into the built deep learning network, and the tree species are identified from the spectral and spatial information of the crown images. In the embodiment of the invention, the FCM classifier adopted for crown detection effectively reduces intra-class pixel noise and can classify under the assumption that a single pixel can belong to several classes, and the active contour algorithm adopted for crown delineation reduces errors caused by crown shadows and can effectively segment overlapping crown regions.
As shown in FIG. 1, which is a flowchart of the urban street tree species identification method based on deep learning according to an embodiment of the present invention, the method is described systematically and comprises the following steps:
Step 1: carrying out aerial photography with a multispectral camera mounted on an unmanned aerial vehicle to obtain multispectral images of various street trees in different urban scenes, including but not limited to open suburban areas, urban road green belts, and green spaces of universities and residential districts;
Step 2: preprocessing the multispectral images acquired in step 1 to generate a 3D point cloud P representing the street tree canopy, converting the acquired images from digital number images to radiance images, and converting the radiance images to reflectance images;
Step 3: analyzing the reflectance images from step 2, judging the image objects by reflectance, and removing other image objects that may interfere with the acquisition of crown image objects;
Step 4: acquiring crown vertices: generating a normalized digital surface model on the basis of the 3D point cloud P generated in step 2, and calculating the three-dimensional coordinates of the canopy vertices with a local maximum detection algorithm;
Step 5: judging the heights of the crown vertices according to the three-dimensional coordinates obtained in step 4, and removing vertices that are too high or too low;
Step 6: classifying the crown image objects with a fuzzy C-means classifier containing local background information to estimate the span of each crown, determining the boundary of each crown from the fractional image u_i of the crown, which is obtained by performing a Markov random field on the fuzzy classification framework, and delineating the crown boundary of each individual tree with an active contour algorithm;
Step 7: manually investigating various urban street trees in the field, photographing them to determine the tree species, and labeling the images to form a training set and a validation set of urban street tree images;
Step 8: completing the construction of the spectral-spatial parallel convolutional neural network and training it with the training set formed in step 7;
Step 9: inputting the images obtained in step 6 after individual crown segmentation into the spectral-spatial parallel convolutional neural network, and completing the identification of the street tree species with the deep learning algorithm.
In the above embodiment, as shown in FIG. 1, in step 2, the true-color multispectral images of the unmanned aerial vehicle obtained in step 1 are preprocessed with the scale-invariant feature transform method, which performs automatic key point generation and tie-point matching on the multispectral images; the generated key points and matched tie points are used to estimate the interior and exterior orientation parameters of the camera carried by the unmanned aerial vehicle, and the estimation result is used to generate a 3D point cloud P that better describes the horizontal structure and surface height variation of the crowns; a radiometric calibration model of the multispectral camera is used to convert the digital number images into radiance images, and a calibrated reflectance panel is used to convert the radiance images into reflectance images on the basis of the single-band panel reflectance.
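A minimal sketch of the key point generation and tie-point matching is given below using OpenCV's SIFT detector with a ratio test between two overlapping frames (OpenCV ≥ 4.4, where SIFT is in the main package); the subsequent bundle adjustment and dense point-cloud generation are normally performed by photogrammetry software and are not shown, and the frame loading indicated in the comment is only illustrative.

```python
import cv2

def match_keypoints(img_a, img_b, ratio=0.75):
    """Detect SIFT key points in two overlapping greyscale frames and keep
    matches passing Lowe's ratio test; such tie points feed the orientation
    estimation that leads to the 3D point cloud P."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher()
    raw = matcher.knnMatch(des_a, des_b, k=2)
    good = [pair[0] for pair in raw
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance]
    return kp_a, kp_b, good

# img_a, img_b: uint8 greyscale frames, e.g. cv2.imread(path, cv2.IMREAD_GRAYSCALE)
```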
As shown in FIG. 1, in step 3, reflectance thresholds R_H and R_L are set and the reflectance R of each image object is compared with the thresholds as follows: if R ≤ R_L, the image object is judged to be a street tree shadow; if R_L ≤ R ≤ R_H, the image object is judged to be a street tree; if R ≥ R_H, the image object is judged to be a building roof bright spot. Street tree shadows and building roof bright spots are masked accordingly to avoid affecting the acquisition of crown objects.
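A minimal sketch of this reflectance screening follows, applied per pixel for brevity even though the invention judges per image object; the threshold values R_L and R_H used here are placeholders chosen only for illustration.

```python
import numpy as np

def mask_non_crown(reflectance, r_low, r_high):
    """Screen by reflectance: values below r_low are treated as shadow,
    values above r_high as roof bright spots, and the rest are kept as
    crown candidates. Returns a boolean keep-mask."""
    shadow = reflectance <= r_low
    bright = reflectance >= r_high
    return ~(shadow | bright)

# Illustrative thresholds; R_L and R_H are tuned per scene in practice.
refl = np.random.rand(200, 200)
crown_candidates = mask_non_crown(refl, r_low=0.05, r_high=0.6)
```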
As shown in FIG. 1, in step 4, the 3D point cloud P generated in step 2 to describe the horizontal structure of the canopy is filtered, a ground point is selected every 5 m² and interpolated to obtain a digital elevation model, the original point cloud data are resampled to obtain a digital surface model, the digital elevation model is subtracted from the digital surface model to obtain the normalized digital surface model, and a local maximum detection algorithm is applied to the normalized digital surface model to obtain the crown vertices T_c.
As shown in FIG. 1, in step 5, height thresholds T_H and T_L are set and the height of each crown vertex is compared with the thresholds as follows: if T_c ≤ T_L, the vertex is judged to be the top of a low shrub; if T_L ≤ T_c ≤ T_H, the vertex is considered a street tree crown vertex; if T_c ≥ T_H, the vertex is considered the top of a tree-like building. Low shrubs and tree-like buildings are masked accordingly to avoid affecting the acquisition of crown objects.
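The sketch below combines steps 4 and 5: it forms the normalized digital surface model, detects local height maxima as candidate crown vertices with scikit-image, and keeps only the vertices between the shrub and building height thresholds; the synthetic rasters, minimum peak spacing and threshold values are illustrative assumptions.

```python
import numpy as np
from skimage.feature import peak_local_max

def crown_vertices(dsm, dem, t_low, t_high, min_distance=5):
    """Build the normalized DSM, detect local height maxima as candidate
    crown vertices, and keep only vertices whose height lies between the
    shrub threshold t_low and the building threshold t_high."""
    ndsm = dsm - dem                                            # height above ground
    peaks = peak_local_max(ndsm, min_distance=min_distance)     # (row, col) maxima
    heights = ndsm[peaks[:, 0], peaks[:, 1]]
    keep = (heights >= t_low) & (heights <= t_high)
    return peaks[keep], heights[keep]

# Illustrative call with synthetic rasters; thresholds are placeholders.
dsm = np.random.rand(300, 300) * 20
dem = np.zeros_like(dsm)
vertices, heights = crown_vertices(dsm, dem, t_low=2.0, t_high=15.0)
```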
As shown in FIG. 1, in step 6, a fuzzy C-means classifier containing local background information is used to estimate the span of each individual crown, a Markov random field is then performed on the fuzzy classification framework to obtain the fractional image u_i of the crown, the boundary of the crown is determined from the fractional image, and the boundary of each individual crown is delineated with an active contour algorithm, completing the detection and segmentation of individual crowns, specifically as follows:
step a: compared with a hard classifier, which can only assign a pixel to one class, the greatest advantage of the fuzzy C-means classifier is that classification can be performed under the premise that a single pixel can belong to different classes. The minimization of Equation 1 below is achieved by repeatedly iterating the membership degrees and cluster centers of Equations 2 and 3, and once Equation 1 is minimized the fractional images u ∈ {u_1, u_2, …, u_C}, representing the spatial likelihood of each class, are obtained (a minimal numerical sketch of these updates is given after step c below).
J = \sum_{i=1}^{N} \sum_{j=1}^{C} u_{ij}^{m} D_{ij}^{2}    (Equation 1)
u_{ij} = \left[ \sum_{k=1}^{C} \left( D_{ij} / D_{ik} \right)^{2/(m-1)} \right]^{-1}    (Equation 2)
c_j = \frac{\sum_{i=1}^{N} u_{ij}^{m} x_i}{\sum_{i=1}^{N} u_{ij}^{m}}    (Equation 3)
where Equation 1 is the objective function of the fuzzy C-means classifier, N represents the number of pixels, C represents the number of image classes, m is the fuzzification index, Equation 2 is the membership matrix, in which D_{ij} is the Euclidean distance between data point x_i and cluster center c_j, and the cluster centers c_j are given by Equation 3; the iterations of the above parameters must satisfy the constraint:
u_{ij} \in [0, 1], \quad \sum_{j=1}^{C} u_{ij} = 1 \ \forall i, \quad 0 < \sum_{i=1}^{N} u_{ij} < N \ \forall j
step b: the posterior probability, prior probability and conditional probability of pixel y and classification label w are defined as p(w|y), p(w) and p(y|w), respectively. The prior probability p(w) can be estimated with a smoothness-prior Markov random field model under the assumption that the physical boundaries of the system vary smoothly, and the conditional probability p(y|w) can be derived from Equation 1 of step a. The maximum of the posterior probability p(w|y) corresponds to the minimum of the posterior energy U, so Equation 4 below can be minimized with a simulated annealing algorithm to obtain the global posterior probability of the i-th pixel and the j-th class,
[Equation 4: posterior energy U of the i-th pixel and the j-th class, combining a spectral (fuzzy membership) term weighted by λ and a spatial smoothness term weighted by β]
where λ represents a control variable that, in determining class membership, governs the influence of the local spectral and spatial components on the classification, β is used to control the degree of smoothing at the classification boundaries, and N_j is the classification neighborhood, whose prior energy can be defined as v_1(w_r) + v_2(w_r, w_r') + v_3(w_r, w_r', w_r''), where v_1(w_r), v_2(w_r, w_r') and v_3(w_r, w_r', w_r'') are the clique potential functions of single-site, pair-site and triple-site cliques, respectively;
step c: to better determine the trend of the curve around each crown vertex, the boundary of each individual crown is delineated with an active contour algorithm that considers the curve shape parameters and the crown classification fractional image simultaneously, and the image is segmented accordingly.
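As referenced in step a, the sketch below iterates the membership matrix (Equation 2) and the cluster centers (Equation 3) of a plain fuzzy C-means classifier on feature vectors; it omits the local background information and the Markov random field refinement of step b, and the cluster count, fuzzifier and synthetic data are illustrative assumptions.

```python
import numpy as np

def fuzzy_c_means(x, n_clusters, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Plain fuzzy C-means on feature vectors x of shape (N, F): alternately
    update the membership matrix u (Equation 2) and the cluster centers c
    (Equation 3) until the memberships stop changing."""
    rng = np.random.default_rng(seed)
    n = x.shape[0]
    u = rng.random((n, n_clusters))
    u /= u.sum(axis=1, keepdims=True)          # memberships of each pixel sum to 1
    for _ in range(n_iter):
        um = u ** m
        c = (um.T @ x) / um.sum(axis=0)[:, None]                       # Equation 3
        d = np.linalg.norm(x[:, None, :] - c[None, :, :], axis=2) + 1e-12
        u_new = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1)), axis=2)
        if np.abs(u_new - u).max() < tol:
            u = u_new
            break
        u = u_new
    return u, c   # reshaping u[:, j] to the image grid gives the fractional image u_j

# Example: pixels described by their reflectance bands, clustered into 3 classes.
pixels = np.random.rand(1000, 5)
memberships, centers = fuzzy_c_means(pixels, n_clusters=3)
```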
As shown in FIG. 1, in step 8, a 1-D CNN and a 2-D CNN are integrated to complete the construction of the spectral-spatial parallel convolutional neural network model, which can process the spectral and spatial information of the image simultaneously, specifically as follows (a compact code sketch of this parallel structure is given after step d below):
step a: the image input to the spectral-spatial parallel convolutional neural network is represented as I ∈ R^(h×w×d), where h, w and d denote the height, width and number of spectral channels of the image, respectively; the spectral information of pixel (m, n) is the d-dimensional vector of its channel values, and the spatial information of each pixel is represented by the image patch A_mn centered at the pixel position p_mn, whose channels are reduced by principal component analysis, e being the number of channels after dimensionality reduction.
step b: the spectral feature f_1 is obtained by passing the input spectral information through two pairs of one-dimensional convolution and pooling layers and then flattening it.
step c: the spatial feature f_2 is obtained by passing the input spatial information through two pairs of two-dimensional convolution and pooling layers and then flattening it.
step d: f_1 and f_2 are processed through three fully connected layers and then classified with a softmax classifier.
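As referenced above, a compact PyTorch sketch of the parallel spectral/spatial structure of steps a to d is given below; the layer widths, kernel sizes, patch size, band count and class count are illustrative assumptions and do not reproduce the patent's exact configuration.

```python
import torch
import torch.nn as nn

class SSPCNN(nn.Module):
    """Parallel 1-D (spectral) and 2-D (spatial) branches, each built from two
    convolution+pooling pairs, flattened and joined by three fully connected layers."""
    def __init__(self, n_bands=5, patch=9, reduced_channels=3, n_classes=8):
        super().__init__()
        self.spectral = nn.Sequential(                 # input: (B, 1, n_bands)
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool1d(2),
            nn.Flatten())
        self.spatial = nn.Sequential(                  # input: (B, e, patch, patch)
            nn.Conv2d(reduced_channels, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten())
        f1 = 32 * (n_bands // 4)
        f2 = 32 * (patch // 4) ** 2
        self.head = nn.Sequential(
            nn.Linear(f1 + f2, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, n_classes))                  # softmax applied at inference

    def forward(self, spectral, spatial):
        f1 = self.spectral(spectral)
        f2 = self.spatial(spatial)
        return self.head(torch.cat([f1, f2], dim=1))

# Shapes assumed here: a 5-band spectrum per pixel and a 9x9 patch of 3 PCA channels.
model = SSPCNN()
logits = model(torch.randn(4, 1, 5), torch.randn(4, 3, 9, 9))
probs = torch.softmax(logits, dim=1)
```

The two branches stay strictly parallel and only meet at the concatenation before the fully connected head, which mirrors the joint spectral/spatial processing described in step d.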
As shown in FIG. 1, in step 9, the spectral information and spatial information of the images for which individual crown detection and delineation were completed in step 6 are extracted and input into the spectral-spatial parallel convolutional neural network, the street tree species are identified with the deep learning algorithm, and the identification result is compared with the validation set to determine the identification accuracy.
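A minimal sketch of training on the labelled training set and measuring accuracy on the validation set follows, assuming PyTorch DataLoaders that yield (spectral, spatial, label) batches; the optimizer, learning rate and epoch count are placeholders.

```python
import torch
import torch.nn as nn

def train_sspcnn(model, train_loader, epochs=30, lr=1e-3, device="cpu"):
    """Cross-entropy training loop for the spectral-spatial network; the loader
    is assumed to yield (spectral, spatial, label) batches built from the
    manually labelled street tree training set."""
    model.to(device).train()
    optimiser = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    for epoch in range(epochs):
        running = 0.0
        for spectral, spatial, labels in train_loader:
            optimiser.zero_grad()
            logits = model(spectral.to(device), spatial.to(device))
            loss = criterion(logits, labels.to(device))
            loss.backward()
            optimiser.step()
            running += loss.item()
        print(f"epoch {epoch + 1}: mean loss {running / max(len(train_loader), 1):.4f}")

@torch.no_grad()
def validation_accuracy(model, val_loader, device="cpu"):
    """Compare network predictions with the validation labels and return
    the overall identification accuracy."""
    model.to(device).eval()
    correct, total = 0, 0
    for spectral, spatial, labels in val_loader:
        pred = model(spectral.to(device), spatial.to(device)).argmax(dim=1).cpu()
        correct += (pred == labels).sum().item()
        total += labels.numel()
    return correct / total if total else 0.0
```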
The invention and its embodiments have been described above schematically, and the description is not limiting; the drawing shows only one embodiment of the invention, and the actual structure is not limited thereto. Therefore, similar structures and embodiments devised by a person skilled in the art in light of this teaching, without inventive effort and without departing from the spirit of the invention, shall all fall within the protection scope of the invention.

Claims (8)

1. An urban street tree species identification method based on deep learning, characterized by comprising the following steps:
Step 1: acquiring multiple true-color multispectral images with an unmanned aerial vehicle under different scenes;
Step 2: preprocessing the multispectral images acquired in step 1, including generation of a 3D point cloud P and conversion from digital number images to radiance images and then to reflectance images;
Step 3: judging the image objects according to the reflectance presented by the reflectance images from step 2, and removing objects that may be misjudged as tree crowns;
Step 4: detecting crown vertices: generating a normalized digital surface model from the 3D point cloud P generated in step 2, and obtaining the crown vertex positions and heights with a local maximum detection algorithm;
Step 5: judging the crown vertices obtained in step 4 by height, and removing vertices that are too high or too low;
Step 6: estimating the span of each crown with a fuzzy C-means classifier containing local background information, obtaining a fractional image u_i of the crown by performing a Markov random field on the fuzzy classification framework, determining the crown boundary from the fractional image, and delineating the crown boundary of each individual tree with an active contour algorithm;
Step 7: acquiring street tree images, labeling them through manual field investigation, and constructing a street tree image training set and validation set;
Step 8: building a spectral-spatial parallel convolutional neural network and training it;
Step 9: inputting the segmented images obtained in step 6 into the spectral-spatial parallel convolutional neural network to identify the tree species.
2. The urban street tree species identification method based on deep learning according to claim 1, wherein in step 2, during image preprocessing, a scale-invariant feature transform method is used to perform automatic key point generation and tie-point matching on the multispectral images acquired in step 1 in order to estimate the interior and exterior camera orientation parameters; the estimation result is used to generate a 3D point cloud P describing the horizontal structure of the tree crowns and the surface height variation; a radiometric calibration model of the multispectral camera is used to convert the digital number images into radiance images; and a calibrated reflectance panel, combined with the single-band panel reflectance, is used to convert the radiance images into reflectance images.
3. The urban street tree species identification method based on deep learning according to claim 1, wherein reflectance thresholds R_H and R_L are introduced in step 3 and the judgment is made as follows: if the reflectance R of an image object satisfies R ≤ R_L, the image object is considered a street tree shadow; if R_L ≤ R ≤ R_H, the image object is considered a street tree; if R ≥ R_H, the image object is considered a building roof bright spot; street tree shadows and building roof bright spots are masked accordingly to avoid affecting the crown objects.
4. The urban street tree species identification method based on deep learning according to claim 1, wherein in step 4, the 3D point cloud P generated in step 2 is filtered, a ground point is selected every 5 m² and interpolated to generate a digital elevation model, the original point cloud data are resampled to obtain a digital surface model, the digital elevation model is subtracted from the digital surface model to obtain the normalized digital surface model, and a local maximum detection algorithm is applied to the normalized digital surface model to obtain the crown vertices T_c.
5. The urban street tree species identification method based on deep learning according to claim 1, wherein height thresholds T_H and T_L are introduced in step 5 and the judgment is made as follows: if a vertex satisfies T_c ≤ T_L, the vertex is judged to be the top of a low shrub; if T_L ≤ T_c ≤ T_H, the vertex is considered a street tree crown vertex; if T_c ≥ T_H, the vertex is considered the top of a tree-like building; low shrubs and tree-like buildings are masked accordingly to avoid affecting the judgment of crown objects.
6. The urban street tree species identification method based on deep learning according to claim 1, wherein step 6 comprises estimating the span of each crown with a fuzzy C-means classifier containing local background information, obtaining a fractional image u_i of the crown by performing a Markov random field on the fuzzy classification framework, determining the crown boundary from the fractional image, and delineating the crown boundary of each individual tree with an active contour algorithm, specifically as follows:
step a: unlike hard classifiers, which assign each pixel entirely to one class, the fuzzy C-means classifier performs classification under the assumption that a single pixel can belong to different classes. The fractional images u ∈ {u_1, u_2, …, u_C}, each representing the spatial likelihood of one class, are obtained by minimizing Equation 1 below; the minimization of Equation 1 is achieved by iterating the membership degrees and cluster centers of Equations 2 and 3,
J = \sum_{i=1}^{N} \sum_{j=1}^{C} u_{ij}^{m} D_{ij}^{2}    (Equation 1)
u_{ij} = \left[ \sum_{k=1}^{C} \left( D_{ij} / D_{ik} \right)^{2/(m-1)} \right]^{-1}    (Equation 2)
c_j = \frac{\sum_{i=1}^{N} u_{ij}^{m} x_i}{\sum_{i=1}^{N} u_{ij}^{m}}    (Equation 3)
where Equation 1 is the objective function of the fuzzy C-means classifier, N is the number of pixels, C is the number of classes, m is the fuzzification index, Equation 2 is the membership matrix, in which D_{ij} is the Euclidean distance between data point x_i and cluster center c_j, and Equation 3 gives the cluster centers; all parameter iterations must satisfy:
u_{ij} \in [0, 1], \quad \sum_{j=1}^{C} u_{ij} = 1 \ \forall i, \quad 0 < \sum_{i=1}^{N} u_{ij} < N \ \forall j
step b: the posterior probability, prior probability and conditional probability of pixel y and classification label w are defined as p(w|y), p(w) and p(y|w). The prior probability is estimated with a smoothness-prior Markov random field model under the assumption that the physical boundaries of the system vary smoothly, and the conditional probability is derived from Equation 1 of step a. The maximum of the posterior probability corresponds to the minimum of the posterior energy U, so the global posterior probability of the i-th pixel and the j-th class is obtained by minimizing Equation 4 below with a simulated annealing algorithm,
[Equation 4: posterior energy U of the i-th pixel and the j-th class, combining a spectral (fuzzy membership) term weighted by λ and a spatial smoothness term weighted by β]
where λ is a control variable that governs the influence of the local spectral and spatial components in determining class membership, β controls the degree of smoothing at class boundaries, and N_j is the neighborhood, whose prior energy can be defined as v_1(w_r) + v_2(w_r, w_r') + v_3(w_r, w_r', w_r''), in which v_1(w_r), v_2(w_r, w_r') and v_3(w_r, w_r', w_r'') are clique potential functions corresponding to single-site, pair-site and triple-site cliques;
step c: the boundary of each individual crown is delineated with an active contour algorithm that considers both the curve shape parameters and the fractional images of the crown classification to determine the trend of the curve around the crown vertices.
7. The urban street tree species identification method based on deep learning according to claim 1, wherein the spectral-spatial parallel convolutional neural network model built in step 8 integrates a 1-D CNN and a 2-D CNN and can process the spectral and spatial information of the image simultaneously, specifically as follows:
step a: the input image is represented as I ∈ R^(h×w×d), where h, w and d denote the height, width and number of spectral channels, respectively; the spectral information at pixel (m, n) is the d-dimensional vector of its channel values, and the spatial information of each pixel is the image patch A_mn centered at the pixel position p_mn with e channels, where e is the number of channels after dimensionality reduction and the dimensionality reduction is performed by principal component analysis;
step b: the input spectral information is passed through two pairs of one-dimensional convolution and pooling layers and then flattened to obtain the spectral feature f_1;
step c: the input spatial information is passed through two pairs of two-dimensional convolution and pooling layers and then flattened to obtain the spatial feature f_2;
step d: f_1 and f_2 are processed through three fully connected layers and finally classified with a softmax classifier.
8. The urban street tree species identification method based on deep learning according to claim 1, wherein in step 9 the spectral information and spatial information of the images segmented in step 6 are extracted and input into the spectral-spatial parallel convolutional neural network to identify the street tree species, and the identification result is compared with the validation set to determine the identification accuracy.
CN202211420428.2A 2022-11-15 2022-11-15 Urban street tree species identification method based on deep learning Pending CN115690513A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211420428.2A CN115690513A (en) 2022-11-15 2022-11-15 Urban street tree species identification method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211420428.2A CN115690513A (en) 2022-11-15 2022-11-15 Urban street tree species identification method based on deep learning

Publications (1)

Publication Number Publication Date
CN115690513A true CN115690513A (en) 2023-02-03

Family

ID=85052042

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211420428.2A Pending CN115690513A (en) 2022-11-15 2022-11-15 Urban street tree species identification method based on deep learning

Country Status (1)

Country Link
CN (1) CN115690513A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117576559A (en) * 2023-11-16 2024-02-20 星景科技有限公司 Urban greening tree species identification method and system based on orthographic image of unmanned aerial vehicle
CN118072029A (en) * 2024-04-24 2024-05-24 山东科技大学 Vehicle-mounted point cloud single wood segmentation method and system for improving Thiessen polygon constraint

Similar Documents

Publication Publication Date Title
CN108573276B (en) Change detection method based on high-resolution remote sensing image
CN104915636B (en) Remote sensing image road recognition methods based on multistage frame significant characteristics
CN107292339B (en) Unmanned aerial vehicle low-altitude remote sensing image high-resolution landform classification method based on feature fusion
CN102799901B (en) Method for multi-angle face detection
CN105335966B (en) Multiscale morphology image division method based on local homogeney index
CN105718945B (en) Apple picking robot night image recognition method based on watershed and neural network
CN106909902B (en) Remote sensing target detection method based on improved hierarchical significant model
CN109325484B (en) Flower image classification method based on background prior significance
CN111815776A (en) Three-dimensional building fine geometric reconstruction method integrating airborne and vehicle-mounted three-dimensional laser point clouds and streetscape images
Barnea et al. Segmentation of terrestrial laser scanning data using geometry and image information
CN115690513A (en) Urban street tree species identification method based on deep learning
CN103035013B (en) A kind of precise motion shadow detection method based on multi-feature fusion
CN107273905B (en) Target active contour tracking method combined with motion information
CN111899172A (en) Vehicle target detection method oriented to remote sensing application scene
CN106651795A (en) Method of using illumination estimation to correct image color
CN110287798B (en) Vector network pedestrian detection method based on feature modularization and context fusion
CN110738676A (en) GrabCT automatic segmentation algorithm combined with RGBD data
CN109543632A (en) A kind of deep layer network pedestrian detection method based on the guidance of shallow-layer Fusion Features
CN109034233A (en) A kind of high-resolution remote sensing image multi classifier combination classification method of combination OpenStreetMap
CN109784216B (en) Vehicle-mounted thermal imaging pedestrian detection Rois extraction method based on probability map
CN113128507A (en) License plate recognition method and device, electronic equipment and storage medium
Femiani et al. Shadow-based rooftop segmentation in visible band images
CN108022245B (en) Facial line primitive association model-based photovoltaic panel template automatic generation method
CN114842262A (en) Laser point cloud ground object automatic identification method fusing line channel orthographic images
CN112396655A (en) Point cloud data-based ship target 6D pose estimation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination