CN115841492B - Cloud-edge-collaboration-based pine wood nematode lesion color standing tree remote sensing intelligent identification method - Google Patents

Info

Publication number
CN115841492B
Authority
CN
China
Prior art keywords
enhancement, pixel point, component, pixel, scale
Legal status
Active
Application number
CN202310161954.XA
Other languages
Chinese (zh)
Other versions
CN115841492A
Inventor
王永 (Wang Yong)
李晓娟 (Li Xiaojuan)
郭婉琳 (Guo Wanlin)
尹华阳 (Yin Huayang)
李琳琳 (Li Linlin)
Current Assignee
ANHUI ACADEMY OF FORESTRY
Hefei Hengbao Tianxuan Intelligent Technology Co ltd
Original Assignee
ANHUI ACADEMY OF FORESTRY
Hefei Hengbao Tianxuan Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by ANHUI ACADEMY OF FORESTRY, Hefei Hengbao Tianxuan Intelligent Technology Co ltd filed Critical ANHUI ACADEMY OF FORESTRY
Priority to CN202310161954.XA
Publication of CN115841492A
Application granted
Publication of CN115841492B

Abstract

The invention relates to the field of image processing and provides a cloud-edge-collaboration-based remote sensing intelligent identification method for pine wood nematode lesion discolored standing trees, comprising the following steps: acquiring a pine tree image; acquiring component histograms at different scales and computing, from the component values of each pixel, its neighborhood enhancement range at each scale; obtaining the metric enhancement coefficient, component enhancement coefficient and enhanced component values of each pixel from the differences in component values and in metric distances between the pixel and the other pixels in the same super-pixel block; obtaining a new component value for each pixel from its enhanced component values across the scales; forming the enhancement map of the high-quality remote sensing pine tree image from the new component values of the pixels; and obtaining the nematode lesion areas in the pine tree image from the identification result of the pine tree image in the identification network. The invention aims to solve the problem of low recognition accuracy caused by information redundancy in remote sensing images when identifying the nematode lesion areas in collected pine tree remote sensing images.

Description

Cloud-edge-collaboration-based pine wood nematode lesion color standing tree remote sensing intelligent identification method
Technical Field
The invention relates to the field of image processing, in particular to a pine wood nematode lesion color standing tree remote sensing intelligent identification method based on cloud edge cooperation.
Background
Pine wilt disease, caused by the pine wood nematode, is a highly destructive forest disease. In recent years it has spread across many provinces and cities in China, killing large numbers of pine trees and seriously damaging the ecological balance.
Once a pine tree is parasitized by pine wood nematodes, its needles lose water and gradually change from green to yellow; when all the needles have turned yellow-brown, the tree is dead. The condition of a standing tree's needles can therefore be used to judge whether pine wilt disease has occurred. Pines mostly grow in dense, contiguous stands, with whorled branches and heights of roughly 20 to 50 meters, so manual photography is impractical, and pine trees are usually identified from pine images in unmanned aerial vehicle remote sensing data. The coverage of each remote sensing scene and its image resolution constrain each other: if a scene covers a large pine area, resolution inevitably drops, while acquiring high-resolution satellite images of only a few pines is too costly, and a small amount of data can hardly yield high-confidence recognition results. How to effectively extract pine-related features from remote sensing images therefore determines the final recognition accuracy. A pine wilt disease identification scheme is thus needed to reduce economic losses and maintain the ecological balance.
Disclosure of Invention
The invention provides a cloud-edge-collaboration-based remote sensing intelligent identification method for pine wood nematode lesion discolored standing trees, which aims to solve the low identification accuracy caused by remote sensing image information redundancy when traditional methods identify the nematode lesion areas in collected pine tree remote sensing images. The specific technical scheme is as follows:
the embodiment of the invention provides a pine wood nematode lesion color standing tree remote sensing intelligent identification method based on cloud edge cooperation, which comprises the following steps:
obtaining pine tree remote sensing images, and obtaining segmentation results of high-quality pine tree remote sensing images under a plurality of segmentation scales by utilizing a super-pixel segmentation technology;
acquiring the information enhancement intensity of each pixel on each component at each scale according to the position, in the component histogram, of the pixel's component value on each component of the Lab color space; acquiring the enhancement radius of the pixel at each scale according to its information enhancement intensities on the three components; and acquiring the neighborhood enhancement range of the pixel at each scale according to the pixel's position information and its enhancement radius at that scale;
obtaining the maximum value of a neighborhood enhancement range in a super-pixel block where the pixel point is located under each scale, and obtaining the measurement enhancement coefficient of the pixel point according to the measurement distance of the pixel point and the measurement distance of the pixel point corresponding to the maximum value of the neighborhood enhancement range; acquiring component enhancement coefficients of the pixel points according to the information enhancement degree of the pixel points on the three components and the information enhancement degree of the pixel points on the three components corresponding to the maximum value of the neighborhood enhancement range, and acquiring enhancement component values of the pixel points on the three components according to the measurement enhancement coefficients and the component enhancement coefficients of the pixel points and the component values of the three components of the pixel points;
acquiring the average of the enhancement component values of each component over all scales according to the enhancement component values of the three color space components of each pixel at each scale, taking this average over all scales as the pixel's new component value on each component, and forming an enhanced image from the new component values of all pixels;
and taking the enhanced image and the high-quality pine image as input to a recognition network, acquiring the recognition result for the nematode lesion regions in the high-quality pine image with the recognition network, and acquiring the position information of each recognized nematode lesion region with its minimum circumscribed rectangle.
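As a sketch of the final localization step, the snippet below derives an axis-aligned minimum circumscribed rectangle for each connected region in a binary lesion mask. The mask format and the helper name `lesion_bounding_boxes` are assumptions; the patent does not specify the recognition network's output format:

```python
import numpy as np
from scipy import ndimage

def lesion_bounding_boxes(mask):
    """Axis-aligned bounding rectangle (row, col, height, width) for every
    connected lesion region in a binary mask."""
    labels, _ = ndimage.label(mask)           # connected-component labeling
    boxes = []
    for sl in ndimage.find_objects(labels):   # one (row, col) slice pair per region
        r, c = sl
        boxes.append((r.start, c.start, r.stop - r.start, c.stop - c.start))
    return boxes

mask = np.zeros((8, 8), dtype=bool)
mask[1:3, 1:4] = True   # lesion 1: 2 rows x 3 cols
mask[5:7, 5:7] = True   # lesion 2: 2 rows x 2 cols
print(lesion_bounding_boxes(mask))  # [(1, 1, 2, 3), (5, 5, 2, 2)]
```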
Optionally, obtaining the segmentation results of the high-quality pine tree remote sensing image at multiple scales with the super-pixel segmentation technique comprises the following specific steps:
acquiring a remote sensing image of pines in a pine forest with an unmanned aerial vehicle, processing the pine remote sensing image with bilateral filtering to obtain a high-quality pine remote sensing image F, manually setting the initial sizes of several different super-pixel segmentations, and obtaining the segmentation results of the high-quality pine remote sensing image at multiple scales with the super-pixel segmentation technique.
Optionally, obtaining the information enhancement intensity of each pixel on each component at each scale, according to the position in the component histogram of the pixel's component value on each component of the Lab color space, comprises the following specific steps:
[The formulas for the information enhancement intensities are rendered as images in the source and are not reproduced here.]
wherein l_i, a_i, b_i are the component values of pixel i on the L, a, b components of Lab color space, and n(l_i), n(a_i), n(b_i) are the numbers of pixels whose component values equal those of pixel i on the L, a, b components; l_min, a_min, b_min are the minimum component values on the L, a, b components, and n(l_min), n(a_min), n(b_min) are the numbers of pixels taking those minimum values; l_max, a_max, b_max are the maximum component values on the L, a, b components, and n(l_max), n(a_max), n(b_max) are the numbers of pixels taking those maximum values; Q_i^L, Q_i^a, Q_i^b are the information enhancement intensities of pixel i on the L, a, b components.
Optionally, obtaining the enhancement radius of each pixel at each scale according to its information enhancement intensities on the three components comprises the following specific steps:
obtain the pixel's information enhancement intensities on the three components at each scale, take their mean as the pixel's average information enhancement intensity at that scale, obtain the length and width of the high-quality pine remote sensing image F, and take the product of the average information enhancement intensity at each scale and the maximum of the length and width as the pixel's enhancement radius at that scale.
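A minimal sketch of this radius computation, assuming the three intensities are already available as scalars or arrays in [0, 1]:

```python
import numpy as np

def enhancement_radius(q_l, q_a, q_b, height, width):
    """Enhancement radius per pixel: mean of the three per-component
    information enhancement intensities times max(M, N)."""
    mean_q = (np.asarray(q_l) + np.asarray(q_a) + np.asarray(q_b)) / 3.0
    return mean_q * max(height, width)

print(enhancement_radius(0.3, 0.6, 0.9, 100, 200))  # ~120.0
```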
Optionally, obtaining the neighborhood enhancement range of each pixel at each scale according to the pixel's position information and its enhancement radius comprises the following specific steps:
acquire the enhancement radius of the pixel at each scale, and, according to the pixel's position information, take the circular region centered at the pixel with the enhancement radius as radius as the pixel's neighborhood enhancement range at that scale.
Optionally, obtaining the maximum neighborhood enhancement range within the super-pixel block containing the pixel at each scale, and obtaining the pixel's metric enhancement coefficient from its metric distance and the metric distance of the pixel attaining that maximum, comprises the following specific steps:
sort the neighborhood enhancement ranges of the pixels within the super-pixel block at each scale and obtain the pixel attaining the maximum neighborhood enhancement range; take as numerator the difference between the maximum metric distance within the super-pixel block and the metric distance of the pixel, take as denominator the difference between the maximum metric distance within the super-pixel block and the metric distance of the pixel attaining the maximum neighborhood enhancement range, and take the ratio as the pixel's metric enhancement coefficient.
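A minimal sketch of this ratio, with an added `eps` guard (an assumption, to avoid division by zero when the reference pixel attains the maximum metric distance):

```python
def metric_enhancement_coefficient(d_x, d_max, d_p, eps=1e-12):
    """Ratio of (max metric distance - pixel x's distance) to
    (max metric distance - distance of pixel p, the pixel with the
    largest neighborhood enhancement range in the super-pixel block)."""
    return (d_max - d_x) / (d_max - d_p + eps)

print(metric_enhancement_coefficient(2.0, 10.0, 6.0))  # ~2.0
```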
Optionally, obtaining the component enhancement coefficient of each pixel according to its information enhancement intensities on the three components and those of the pixel attaining the maximum neighborhood enhancement range comprises the following specific steps:
at each scale, compute the difference between the pixel's information enhancement intensity and that of the pixel attaining the maximum neighborhood enhancement range on the L component, likewise on the a component, and likewise on the b component, and take the sum of the three differences as the pixel's component enhancement coefficient at that scale.
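A minimal sketch; taking absolute differences is an assumption, since the patent only speaks of "difference values":

```python
def component_enhancement_coefficient(q_x, q_p):
    """Sum over the L, a, b components of the differences between pixel x's
    and pixel p's information enhancement intensities (absolute differences
    are an assumption here)."""
    return sum(abs(qx - qp) for qx, qp in zip(q_x, q_p))

print(component_enhancement_coefficient((0.2, 0.5, 0.9), (0.1, 0.7, 0.4)))  # ~0.8
```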
Optionally, the method for obtaining the enhancement component values of the pixel point on the three components according to the metric enhancement coefficient and the component enhancement coefficient of the pixel point and the component values of the three components of the pixel point includes the following specific steps:
[The formula for the enhancement component value is rendered as an image in the source and is not reproduced here.]
wherein α_x^c is the metric enhancement coefficient of pixel x in the segmentation result at scale c, β_x^c is the component enhancement coefficient of pixel x in the segmentation result at scale c, and Q_x^c is the enhancement component value of pixel x in the segmentation result at scale c; F_x is the component value of pixel x in the collected image, F_x ∈ {L_x, a_x, b_x}, where L_x is the L component value of pixel x, a_x is the a component value of pixel x, and b_x is the b component value of pixel x.
Optionally, obtaining the average of the enhancement component values of each component over all scales, according to the enhancement component values of the three color space components of each pixel at each scale, and taking that average as the pixel's new component value on each component, comprises the following specific method:
L'_x = (1/K) Σ_{c=1}^{K} L_x^c ,  a'_x = (1/K) Σ_{c=1}^{K} a_x^c ,  b'_x = (1/K) Σ_{c=1}^{K} b_x^c
wherein L'_x is the new component value of the L component of pixel x and L_x^c is the enhancement component value of the L component of pixel x at scale c; a'_x is the new component value of the a component of pixel x and a_x^c is the enhancement component value of the a component of pixel x at scale c; b'_x is the new component value of the b component of pixel x and b_x^c is the enhancement component value of the b component of pixel x at scale c; K is the number of segmentation scales.
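The per-pixel averaging over the K scales can be sketched as:

```python
import numpy as np

def fuse_scales(enhanced):
    """enhanced: array of shape (K, H, W, 3) holding the enhanced L, a, b
    component values at each of the K segmentation scales; the new component
    value is the per-pixel, per-component mean over the K scales."""
    return np.asarray(enhanced, dtype=float).mean(axis=0)

stack = np.stack([np.full((2, 2, 3), 1.0), np.full((2, 2, 3), 3.0)])
print(fuse_scales(stack)[0, 0])  # [2. 2. 2.]
```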
Optionally, the recognition network adopts a semantic segmentation model based on deep learning.
The beneficial effects of the invention are as follows: super-pixel segmentation results of the high-quality pine remote sensing image are obtained at several scales by setting several super-pixel block sizes. The information enhancement intensity of each pixel is constructed from the proportions of its component values in the color space component histograms at each scale; analyzing the component values of each pixel avoids the drawback that different pixels differ little when the proportion is computed from pixel values alone. The enhancement proportion of pixel image information at different scales is obtained through the neighborhood enhancement range and the enhancement component values; the neighborhood enhancement range reflects how the saliency of pixels with identical component values is distributed over the whole image, which prevents pixels in small discolored standing tree areas from being ignored when the needles are too dense. New component values of the pixels are calculated from multiple neighborhood enhancement ranges, and the enhancement map of the collected pine tree image is then obtained. New component values computed over multiple neighborhood enhancement ranges weaken the mutual constraint between coverage area and image resolution, avoiding the recognition problems caused by too high or too low resolution; the enhancement map amplifies the saliency of pixel image information in the target area, so that the extracted image features correspond more accurately to the discolored standing tree areas in the collected image, completing more reliable nematode lesion area recognition in pine remote sensing images.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions of the prior art, the drawings used in describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the invention, and other drawings can be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a schematic flow chart of a remote sensing intelligent identification method for pine wood nematode lesion color standing tree based on cloud edge cooperation according to an embodiment of the invention;
fig. 2 is a schematic diagram of component histograms in a color space in a remote sensing intelligent identification method of pine wood nematode lesion color standing tree based on cloud edge cooperation according to an embodiment of the invention.
Detailed Description
The following clearly and completely describes the technical solutions in the embodiments of the present invention with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art based on these embodiments without inventive effort shall fall within the scope of the invention.
Referring to fig. 1, a cloud-edge-collaboration-based intelligent identification method for pine wood nematode lesion color standing tree provided by an embodiment of the invention is shown, and comprises the following steps:
and S001, performing segmentation processing on the acquired pine tree remote sensing image by using a multi-scale super-pixel segmentation algorithm.
The aim of this embodiment is to first identify the nematode lesion areas in the collected pine remote sensing images with the recognition network, then compute, on the unmanned aerial vehicle's computing system, the ratio of each nematode lesion area to the area of its minimum circumscribed rectangle, and upload the positions and area ratios of the nematode lesion areas of all pine forests to the cloud for storage and distribution by region to forest rangers, who carry out insecticidal maintenance of the lesion areas according to their condition. The rangers report the on-site situation to the cloud, which stores it and periodically sends reminders to re-inspect the affected regions so as to prevent pine wood nematode lesions from recurring.
After the pine forest to be identified is selected, the flight route of the unmanned aerial vehicle is set according to the terrain of the forest and the distribution area and height of the pines, so that the pine images in the forest can be collected in the shortest possible time; during collection, the flight altitude is adjusted manually according to the pine heights in different areas. Since the image quality collected in flight is disturbed by ambient noise, the collected images need denoising preprocessing.
Further, the high-quality pine tree image F is converted into the CIELab color space, denoted F_Lab. Using the multi-scale super-pixel segmentation algorithm SLIC, F_Lab is segmented at K different scales, where the different scales are different initial sizes set in the algorithm. In the invention K takes the empirical value 5, with initial sizes of 200, 300, 500, 700 and 1000, and the segmentation results are denoted F_1^s to F_5^s. The aim of the multi-scale segmentation is to evaluate the consistency between each pixel's image information and that of its surrounding pixels within super-pixel blocks of different sizes. Because the severity of nematode discoloration differs and pines grow to different apparent sizes, segmented super-pixel blocks at several scales are obtained; computing the relevant feature values of the pixels then amplifies their image information to a greater extent, making the contrast between pixels belonging to nematode lesion standing trees and normal pine pixels more obvious.
So far, the multi-scale segmentation results of the high-quality pine tree image have been obtained through bilateral filtering and multi-scale super-pixel segmentation.
Step S002, obtain the component histograms of the super-pixel blocks in the segmentation result at each scale, calculate the information enhancement intensity and enhancement radius of each pixel at each scale from the histograms, and calculate each pixel's neighborhood enhancement range from its information enhancement intensity and enhancement radius.
In a pine remote sensing image, if a discolored standing tree area exists, feature enhancement can be applied to the pixels of suspicious areas to acquire clear boundaries and texture features; the purpose of feature enhancement is to enrich the image information of those pixels. If an area is confirmed to be ground-object interference or normally growing pine, its information is redundant for identifying discolored standing tree areas, and only a small number of features should be extracted from it.
Extracting features of the analysis object at several scales yields both gray-level and texture features, and analyzing images at multiple scales yields more image information; however, as the image scale grows, interference information gradually increases and affects the recognition accuracy of discolored standing tree areas. The invention therefore enhances the features of each pixel per scale. Taking scale F_1^s as an example, feature enhancement is applied to the pixels in F_1^s, and the feature enhancement range is acquired as follows:
Record the size of image F_Lab as M×N, and count the values of the three components L, a and b of every pixel in F_Lab to obtain three component histograms, denoted H_L, H_a and H_b. A discolored standing tree in the remote sensing image differs obviously in color from the normally growing pines around it, and a pixel belonging to a discolored standing tree area necessarily has a certain number of similar pixels around it. Considering that the density of pine needles varies, a neighborhood enhancement range R is constructed to represent how large a range a pixel's image information influences around it; the neighborhood enhancement range R_i of pixel i is calculated as follows:
[The formulas for the information enhancement intensities Q_i^L, Q_i^a, Q_i^b and the enhancement radius r_i are rendered as images in the source; per the steps above, r_i is the product of the mean of the three intensities and max(M, N), and R_i is the circular region of radius r_i centered at pixel i.]
wherein l_i is the component value of pixel i on the L component of the color space and n(l_i) is the number of pixels with that value; l_max and l_min are the maximum and minimum L component values of the pixels in F_Lab, and n(l_max) and n(l_min) are their pixel counts; the L component histogram H_L is shown in Fig. 2. Q_i^L is the information enhancement intensity of pixel i, meaning the degree of saliency, in the L component histogram of the color space, of the pixels whose L component value equals that of pixel i.
Likewise, a_i is the component value of pixel i on the a component of Lab color space, n(a_i) is the number of pixels whose a component value equals that of pixel i, a_min is the minimum a component value and n(a_min) its pixel count, a_max is the maximum a component value and n(a_max) its pixel count, and Q_i^a is the information enhancement intensity of pixel i on the a component at the corresponding scale. Similarly, b_i, n(b_i), b_min, n(b_min), b_max, n(b_max) and Q_i^b are defined on the b component.
M and N are the length and width of image F_Lab. r_i is the enhancement radius of pixel i, and the circle function means that the circular region of radius r_i centered at pixel i is taken as the neighborhood enhancement range of pixel i.
The neighborhood enhancement range reflects the influence area of each pixel's image information in image F_Lab; it is the feature range considered for pixel i when the feature pyramid fuses features, and pixel i is considered to influence only the image features within its neighborhood enhancement range R_i. The larger the information proportion of pixel i in the three component histograms of the color space, the larger its enhancement radius r_i, and the larger the influence area of its image information, i.e. the larger the neighborhood enhancement range R_i.
So far, the neighborhood enhancement range R_i^s of each pixel at each scale can be obtained by the above method.
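The histogram quantities feeding the information enhancement intensity (the count of a pixel's own component value and the counts of the minimum and maximum values) can be gathered as follows; the exact combining formula is shown only as an image in the source, so this sketch stops at the statistics:

```python
import numpy as np

def component_histogram_stats(channel):
    """For an integer-quantized component channel, return for every pixel the
    count of its own value n(v), plus (v_min, n(v_min)) and (v_max, n(v_max)),
    i.e. the histogram quantities entering the information enhancement
    intensity."""
    values, counts = np.unique(channel, return_counts=True)
    lookup = dict(zip(values.tolist(), counts.tolist()))
    n_v = np.vectorize(lookup.get)(channel)        # per-pixel bin count
    return (n_v,
            (int(values[0]), lookup[int(values[0])]),
            (int(values[-1]), lookup[int(values[-1])]))

chan = np.array([[0, 0, 2], [2, 2, 5]])
n_v, lo, hi = component_histogram_stats(chan)
print(n_v.tolist(), lo, hi)  # [[2, 2, 3], [3, 3, 1]] (0, 2) (5, 1)
```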
Step S003, acquire the metric enhancement coefficient of each pixel using the maximum neighborhood enhancement range within its super-pixel block, calculate the pixel's component enhancement coefficients on the different components from its component values, and acquire the enhancement component value of each component of each pixel from the metric enhancement coefficient and the component enhancement coefficient.
For each segmentation scale (the segmentation scale is the initial size in the super-pixel segmentation algorithm), the segmentation result of the image F at that scale is acquired. Further, for each segmentation scale, the histograms of the L component, a component and b component in the Lab color space are obtained for the super-pixel block where the pixel point is located, and the neighborhood enhancement range R corresponding to the pixel point at that scale is obtained from the statistical results of the three component histograms and the component values of the pixel point. Taking the first segmentation scale c1 as an example, the super-pixel block where pixel point i is located is denoted S1; the corresponding component histograms are acquired, and the component values of pixel point i are substituted into the calculation formulas of the information enhancement degree E and the enhancement radius r, yielding the neighborhood enhancement range of pixel point i at scale c1, denoted R1. In the same way, the neighborhood enhancement ranges of pixel point i at scales c1 to c5 are obtained and denoted R1 to R5.

The purpose of the multiple neighborhood enhancement ranges is to determine how large a region of image information can be influenced by the image information of a pixel point. By obtaining the neighborhood enhancement ranges of a pixel point at several segmentation scales and then computing the relevant feature values within those ranges, the image information of the pixel point is amplified to a greater extent, so that the contrast between pixel points belonging to nematode-diseased standing trees and pixel points of normal pine trees becomes more obvious.
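As a concrete illustration, the enhancement radius and the circular neighborhood enhancement range described above can be sketched in Python. The helper names and the boolean-mask representation are illustrative choices; the radius formula (mean information enhancement over the three components times the larger image dimension) follows the description in step S002.

```python
import numpy as np

def enhancement_radius(e_l, e_a, e_b, height, width):
    """Enhancement radius of a pixel: mean information enhancement on the
    L, a, b components times the larger image dimension (per step S002)."""
    mean_e = (e_l + e_a + e_b) / 3.0
    return mean_e * max(height, width)

def neighborhood_mask(height, width, row, col, radius):
    """Boolean mask of the circular neighborhood enhancement range centred
    on pixel (row, col) with the given enhancement radius."""
    rr, cc = np.mgrid[0:height, 0:width]
    return (rr - row) ** 2 + (cc - col) ** 2 <= radius ** 2
```

For instance, with information enhancements (0.1, 0.2, 0.3) on a 100x200 image the radius is 0.2 * 200 = 40; the mask can then be used to select the pixels whose feature values are aggregated.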
In the super-pixel block S1 where pixel point i is located, the pixel point corresponding to the maximum value of the neighborhood enhancement range is obtained and denoted pixel point p. That the neighborhood enhancement range of pixel point p is the largest indicates that the image information of pixel point p has the greatest influence on the image information of the super-pixel block. The purpose of enhancing the features of the pixel points is to make the features of the input image easier to extract with the feature pyramid when the recognition model identifies the nematode lesion areas in the pine tree image.
For any pixel point in a super-pixel block at the first segmentation scale c1, there is a measurement distance to the seed point, which characterizes how similar the image information of the pixel point is to that of the seed point. For pixel point p, if another pixel point lies within its neighborhood enhancement range and its measurement distance to the seed point is close to that of pixel point p, the reliability that it belongs to the same class as pixel point p is higher, and the image features of pixel point p should be amplified. An enhancement component value Q is therefore constructed to represent the amplified component values of a pixel point, calculated through the enhancement coefficients. For pixel point x, the measurement enhancement coefficient at scale c1 is

α_x = (D_max − D_x) / (D_max − D_p)

and the component enhancement coefficient at scale c1 is

β_x = (E_L(x) − E_L(p)) + (E_a(x) − E_a(p)) + (E_b(x) − E_b(p))

where D_max is the maximum value of the measurement distance between the pixel points in the super-pixel block S1 where pixel point p is located and the seed point; D_x is the measurement distance between pixel point x and the seed point of the super-pixel block; D_p is the measurement distance between pixel point p and the seed point of the super-pixel block; E_L(x), E_a(x) and E_b(x) are the information enhancement degrees of the L component, a component and b component of pixel point x in the color space; and E_L(p), E_a(p) and E_b(p) are the information enhancement degrees of the L component, a component and b component of pixel point p in the color space. The enhancement component value Q_x of pixel point x in the segmentation result at scale c1 is then obtained from the measurement enhancement coefficient α_x, the component enhancement coefficient β_x and the component values I_x of pixel point x in the acquired image, where I_x = (L_x, a_x, b_x), with L_x, a_x and b_x the L, a and b component values corresponding to pixel point x.
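The two coefficients can be sketched as follows; the function names are illustrative, and the formulas follow the definitions of α and β given above (a ratio of metric-distance differences, and an accumulated difference of information enhancement degrees over the three components).

```python
import numpy as np

def metric_enhancement_coefficient(d_x, d_p, d_max):
    """alpha_x = (D_max - D_x) / (D_max - D_p): pixels whose metric distance
    to the seed point is close to that of pixel p get a coefficient near 1."""
    return (d_max - d_x) / (d_max - d_p)

def component_enhancement_coefficient(e_x, e_p):
    """beta_x: accumulated difference of the information enhancement degrees
    of pixel x and pixel p over the L, a, b components."""
    e_x = np.asarray(e_x, dtype=float)
    e_p = np.asarray(e_p, dtype=float)
    return float(np.sum(e_x - e_p))
```

With D_max = 10, D_p = 2 and D_x = 6, for example, α_x = (10 − 6) / (10 − 2) = 0.5.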
Traversing the whole image, the corresponding measurement enhancement coefficient α and component enhancement coefficient β are calculated for each pixel point, and the enhancement component value Q corresponding to each pixel point is obtained, i.e., the enhancement component value of each component value in the color space corresponding to each pixel point. For example, the color space component values corresponding to pixel point x are (L_x, a_x, b_x); based on the enhancement coefficients, the enhancement component value corresponding to L_x is denoted QL_x, the enhancement component value corresponding to a_x is denoted Qa_x, and the enhancement component value corresponding to b_x is denoted Qb_x. So far, the image information of the pixel points has been enhanced, and the enhancement component value corresponding to each pixel point has been obtained.
Step S004: obtain the new component value of each pixel point on each component from the enhancement component values of each component of each pixel point at the multiple scales, and obtain the enhancement map corresponding to the high-quality pine tree image from the new component values of all pixel points.
Further, for each pixel point in the segmentation result at scale c1, the enhancement component values of the pixel point on the three components are acquired, and the enhancement pixel value of the pixel point is obtained from them. Taking pixel point x as an example, after QL_x, Qa_x and Qb_x are obtained, they are taken as the new component values of pixel point x in the Lab color space; traversing all pixel points gives the new component value of every pixel point at scale c1. In the same way, the new component values of the pixel points at scales c1 to c5 are acquired, and the final new component values of pixel point x on the L, a and b components are calculated from the enhancement component values at the 5 different scales:

L'_x = (1/K) Σ_{c=1}^{K} QL_x^c,  a'_x = (1/K) Σ_{c=1}^{K} Qa_x^c,  b'_x = (1/K) Σ_{c=1}^{K} Qb_x^c

where L'_x is the new component value of the L component corresponding to pixel point x, and QL_x^c is the enhancement component value of the L component corresponding to pixel point x at scale c; a'_x is the new component value of the a component corresponding to pixel point x, and Qa_x^c is the enhancement component value of the a component corresponding to pixel point x at scale c; b'_x is the new component value of the b component corresponding to pixel point x, and Qb_x^c is the enhancement component value of the b component corresponding to pixel point x at scale c; K is the number of segmentation scales, and K takes the value 5 in the invention.
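The multi-scale averaging that produces the new component values can be sketched as follows, assuming (as an illustrative layout) that the per-scale enhancement component values are stored as (H, W, 3) arrays holding the enhanced L, a, b components:

```python
import numpy as np

def new_component_values(q_per_scale):
    """Average the enhancement component values over the K segmentation
    scales (K = 5 in the embodiment); q_per_scale is a list of K arrays of
    shape (H, W, 3) holding the enhanced L, a, b components per scale."""
    stack = np.stack([np.asarray(q, dtype=float) for q in q_per_scale])
    return stack.mean(axis=0)
```

The result has the same (H, W, 3) shape and serves directly as the new Lab component values of the image.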
The new component value of each pixel point is acquired and converted from the Lab color space to the RGB color space to obtain the feature enhancement map of the high-quality pine tree image F, denoted the enhancement map FQ.
Up to this point, the enhancement map FQ has been obtained through multi-scale segmentation and the processing of the enhancement component values.
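A minimal sketch of the Lab-to-RGB conversion is given below. In practice a library routine (e.g. OpenCV's cvtColor or scikit-image's lab2rgb) would typically be used; the D65 white point and sRGB matrix here are standard colorimetric values, not taken from the patent.

```python
import numpy as np

# D65 reference white and the standard XYZ -> linear-sRGB matrix.
_WHITE = np.array([0.95047, 1.0, 1.08883])
_XYZ2RGB = np.array([[ 3.2406, -1.5372, -0.4986],
                     [-0.9689,  1.8758,  0.0415],
                     [ 0.0557, -0.2040,  1.0570]])

def lab_to_rgb(lab):
    """Convert an (..., 3) array of CIELAB values to 8-bit sRGB (D65)."""
    lab = np.asarray(lab, dtype=float)
    L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
    fy = (L + 16.0) / 116.0
    f = np.stack([fy + a / 500.0, fy, fy - b / 200.0], axis=-1)
    delta = 6.0 / 29.0
    xyz = np.where(f > delta, f ** 3, 3 * delta ** 2 * (f - 4.0 / 29.0))
    xyz = xyz * _WHITE
    rgb = np.clip(xyz @ _XYZ2RGB.T, 0.0, 1.0)   # linear sRGB in [0, 1]
    rgb = np.where(rgb <= 0.0031308,            # sRGB gamma encoding
                   12.92 * rgb,
                   1.055 * rgb ** (1.0 / 2.4) - 0.055)
    return np.round(rgb * 255.0).astype(np.uint8)
```

Applied to the (H, W, 3) array of new Lab component values, this yields the enhancement map FQ as an ordinary RGB image.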
Step S005: acquire the recognition result of the nematode lesion areas in the high-quality pine tree image using a recognition network, determine the position information of the nematode lesion areas using the minimum circumscribed rectangle, store the transmitted information in the cloud, and complete the cooperative protection of the pine forest.
Further, the recognition network is used to obtain the recognition result of the nematode lesion areas in the high-quality pine tree image. In this embodiment, the semantic segmentation model U-net based on deep learning is used as the recognition network, and the data set after data enhancement is used as the training set. The remote sensing images in the training set are manually annotated: the label of the color-changed standing tree areas is set to 1, the label of the normal pine tree areas is set to 2, and the labels of the other areas are set to 3. The remote sensing images and labels in the training set are encoded, the encoded remote sensing images are used as the input of the neural network, and the recognition model is trained; the training of neural networks is a known technique, so the specific process is not described in detail. After the recognition model is trained, the enhancement map FQ and the acquired image are used as the input of the recognition model U-net, and the output of the recognition model is the nematode lesion areas in the pine tree image.
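The label-encoding step before training can be sketched as follows; the one-hot layout is one common choice for segmentation targets and is an assumption here, since the patent does not fix the encoding.

```python
import numpy as np

# Class labels as set in the embodiment: 1 = colour-changed standing tree,
# 2 = normal pine, 3 = other areas.
LABELS = (1, 2, 3)

def one_hot_encode(label_map):
    """Encode an (H, W) integer label map into an (H, W, 3) one-hot tensor,
    a common target encoding when training a U-net style segmentation model."""
    label_map = np.asarray(label_map)
    return np.stack([(label_map == v).astype(np.float32) for v in LABELS],
                    axis=-1)
```

Each pixel then carries exactly one active channel, matching a three-class softmax output of the segmentation network.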
The minimum circumscribed rectangle of each nematode lesion area is obtained. To determine the position information and the lesion degree of each lesion area, the computing system of the unmanned aerial vehicle calculates the ratio of the area of each lesion area to the area of its minimum circumscribed rectangle, and the position information and area ratio of each lesion area are uploaded to the cloud. The cloud sends the position information to the nearest pine forest caretaker, who goes to the site and effectively clears the dead color-changed standing trees and the surrounding pine trees. The caretaker then reports the site conditions to the cloud; the cloud stores the specific information of the pine trees where nematode lesions occurred together with the site conditions, and periodically sends reminders to the forest guard to re-inspect the diseased pine trees, preventing pine wood nematode lesions from occurring again.
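The area-ratio computation can be sketched as follows; note that an axis-aligned bounding box is used here as a simplified stand-in for the minimum circumscribed rectangle (for rotated regions this would in practice be computed with, e.g., OpenCV's minAreaRect).

```python
import numpy as np

def lesion_area_ratio(mask):
    """Ratio of a lesion region's pixel area to the area of its bounding
    rectangle; an axis-aligned box stands in for the minimum circumscribed
    rectangle here. A ratio near 1 means the region fills its rectangle."""
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return 0.0
    h = rows.max() - rows.min() + 1
    w = cols.max() - cols.min() + 1
    return rows.size / float(h * w)
```

A solid rectangular lesion gives a ratio of 1.0, while a sparse or irregular lesion gives a smaller ratio, which is the quantity uploaded to the cloud alongside the position.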
So far, the recognition result of the nematode lesion areas in the pine tree remote sensing image has been acquired, and the collaborative management of the recognition result has been completed.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, alternatives, and improvements that fall within the spirit and scope of the invention.

Claims (6)

1. The intelligent remote sensing identification method for the pine wood nematode lesion color standing tree based on cloud edge cooperation is characterized by comprising the following steps of:
obtaining pine tree remote sensing images, and obtaining segmentation results of high-quality pine tree remote sensing images under a plurality of segmentation scales by utilizing a super-pixel segmentation technology;
acquiring the information enhancement intensity of each pixel point under each scale on each component according to the position of the component value of each pixel point under each scale in each component in the Lab color space in the component histogram, acquiring the enhancement radius corresponding to the pixel point under each scale according to the information enhancement degree of the pixel point under each scale on three components, and acquiring the neighborhood enhancement range of the pixel point under each scale according to the position information of the pixel point and the enhancement radius under each scale;
obtaining the maximum value of a neighborhood enhancement range in a super-pixel block where the pixel point is located under each scale, and obtaining the measurement enhancement coefficient of the pixel point according to the measurement distance of the pixel point and the measurement distance of the pixel point corresponding to the maximum value of the neighborhood enhancement range; acquiring component enhancement coefficients of the pixel points according to the information enhancement degree of the pixel points on the three components and the information enhancement degree of the pixel points on the three components corresponding to the maximum value of the neighborhood enhancement range, and acquiring enhancement component values of the pixel points on the three components according to the measurement enhancement coefficients and the component enhancement coefficients of the pixel points and the component values of the three components of the pixel points;
obtaining the average value of the enhancement component values of each component of each pixel under all scales according to the enhancement component values of three components in the color space corresponding to each pixel under each scale, taking the average value of the enhancement component values of each component under all scales as the new component value of each pixel on each component, and forming an enhancement image by the new component values of all pixels;
taking the enhanced image and the high-quality pine tree image as the input of a recognition network, acquiring the recognition result of the nematode lesion regions in the high-quality pine tree image by using the recognition network, acquiring the position information of the recognition result of the nematode lesion regions in the high-quality pine tree image by using the minimum circumscribed rectangle, uploading the position information to the cloud for storage, and transmitting the information to relevant personnel to complete the cooperative protection of the pine forest;
the method for obtaining the information enhancement of the pixel point on each component under each scale according to the position of the component value of each component of each pixel point in the Lab color space in the component histogram comprises the following specific steps:
the information enhancement degrees E_L(i), E_a(i) and E_b(i) of pixel point i on the L, a and b components are computed from the component histograms, wherein L_i, a_i and b_i are the component values of pixel point i on the L, a and b components of the Lab color space; n_L(i), n_a(i) and n_b(i) are the numbers of pixel points whose component values are equal to the component values of pixel point i on the L, a and b components; L_min, a_min and b_min are the minimum component values on the L, a and b components, and n_L(min), n_a(min) and n_b(min) are the numbers of pixel points having the minimum component value on the L, a and b components; L_max, a_max and b_max are the maximum component values on the L, a and b components, and n_L(max), n_a(max) and n_b(max) are the numbers of pixel points having the maximum component value on the L, a and b components;
the method for acquiring the enhancement radius corresponding to the pixel point under each scale according to the information enhancement degree of the pixel point under each scale on three components comprises the following steps:
respectively obtaining the information enhancement of the pixel point on three components under each scale, taking the average value of the information enhancement of the three components under each scale as the average information enhancement of the pixel point under each scale, obtaining the length-width size of the high-quality pine remote sensing image F, and taking the product of the average information enhancement of the pixel point under each scale and the maximum value of the length-width size as the enhancement radius of the pixel point under each scale;
the method for obtaining the neighborhood enhancement range maximum value in the super-pixel block where the pixel point is located under each scale, and obtaining the measurement enhancement coefficient of the pixel point according to the measurement distance of the pixel point and the measurement distance of the pixel point corresponding to the neighborhood enhancement range maximum value comprises the following specific steps:
sorting the neighborhood enhancement ranges corresponding to the pixel points in the super-pixel block where the pixel point is located under each scale, and obtaining the pixel point corresponding to the maximum value of the neighborhood enhancement range; taking the difference between the maximum value of the measurement distance in the super-pixel block and the measurement distance of the pixel point in the super-pixel block as the numerator, taking the difference between the maximum value of the measurement distance in the super-pixel block and the measurement distance of the pixel point corresponding to the maximum value of the neighborhood enhancement range as the denominator, and taking the ratio as the measurement enhancement coefficient of the pixel point;
specifically, the measurement enhancement coefficient is:

α_x = (D_max − D_x) / (D_max − D_p)

wherein D_max is the maximum value of the measurement distance between the pixel points in the super-pixel block S1 where pixel point p is located and the seed point, D_x is the measurement distance between pixel point x and the seed point of the super-pixel block, D_p is the measurement distance between pixel point p and the seed point of the super-pixel block, and α_x is the measurement enhancement coefficient of pixel point x in the segmentation result at the scale;
the method for obtaining the component enhancement coefficient of the pixel point according to the information enhancement degree of the pixel point on the three components and the information enhancement degree of the pixel point corresponding to the maximum value of the neighborhood enhancement range comprises the following specific steps:
respectively calculating, under each scale, the difference between the information enhancement degree of the pixel point and that of the pixel point corresponding to the maximum value of the neighborhood enhancement range on the L component, on the a component, and on the b component, and taking the accumulated sum of the three differences as the component enhancement coefficient of the pixel point under each scale.
2. The intelligent recognition method for the pine wood nematode lesion color standing tree based on cloud edge coordination according to claim 1, wherein the method is characterized in that a super-pixel segmentation technology is utilized to obtain segmentation results of high-quality pine tree remote sensing images under a plurality of scales; the method comprises the following steps:
the method comprises the steps of obtaining a remote sensing image of a pine in a pine forest by using an unmanned aerial vehicle, processing the remote sensing image of the pine by using a bilateral filtering technology to obtain a high-quality pine remote sensing image F, artificially setting initial sizes in a plurality of different super-pixel segmentation algorithms, and obtaining segmentation results of the high-quality pine remote sensing image under a plurality of scales by using the super-pixel segmentation technology.
3. The cloud-edge-collaboration-based pine wood nematode lesion color standing tree remote sensing intelligent identification method according to claim 1, wherein the method for acquiring the neighborhood enhancement range of the pixel point under each scale according to the position information of the pixel point and the enhancement radius under each scale comprises the following specific steps:
and acquiring the enhancement radius corresponding to the pixel point under each scale, and taking the pixel point as a round point and a round area with the enhancement radius as the radius as a neighborhood enhancement range of the pixel point under each scale according to the position information of the pixel point.
4. The intelligent recognition method of pine wood nematode lesion color standing tree based on cloud edge cooperation according to claim 1, wherein the method for obtaining the enhancement component values of the pixel point on three components according to the measurement enhancement coefficient and the component enhancement coefficient of the pixel point and the component values of the three components of the pixel point comprises the following specific steps:
the enhancement component value Q_x of pixel point x in the segmentation result at a scale is obtained from the measurement enhancement coefficient α_x of pixel point x in the segmentation result at that scale, the component enhancement coefficient β_x of pixel point x in the segmentation result at that scale, and the component values I_x of pixel point x in the acquired image, wherein I_x = (L_x, a_x, b_x), L_x is the L component value corresponding to pixel point x, a_x is the a component value corresponding to pixel point x, and b_x is the b component value corresponding to pixel point x.
5. The cloud-edge-collaboration-based pine wood nematode lesion color standing tree remote sensing intelligent identification method according to claim 1, wherein the specific method of obtaining the average value of the enhancement component values of each component of each pixel point under all scales according to the enhancement component values of the three components in the color space corresponding to each pixel point under each scale, and taking the average value as the new component value of the pixel point on each component, is as follows:

L'_x = (1/K) Σ_{c=1}^{K} QL_x^c,  a'_x = (1/K) Σ_{c=1}^{K} Qa_x^c,  b'_x = (1/K) Σ_{c=1}^{K} Qb_x^c

wherein L'_x is the new component value of the L component corresponding to pixel point x, and QL_x^c is the enhancement component value of the L component corresponding to pixel point x under scale c; a'_x is the new component value of the a component corresponding to pixel point x, and Qa_x^c is the enhancement component value of the a component corresponding to pixel point x under scale c; b'_x is the new component value of the b component corresponding to pixel point x, and Qb_x^c is the enhancement component value of the b component corresponding to pixel point x under scale c; and K is the number of segmentation scales.
6. The intelligent recognition method for the pine wood nematode lesion color standing tree based on cloud edge coordination according to claim 1, wherein a semantic segmentation model based on deep learning is adopted by the recognition network.
CN202310161954.XA 2023-02-24 2023-02-24 Cloud-edge-collaboration-based pine wood nematode lesion color standing tree remote sensing intelligent identification method Active CN115841492B (en)

Publications (2)

Publication Number Publication Date
CN115841492A CN115841492A (en) 2023-03-24
CN115841492B true CN115841492B (en) 2023-05-12
