CN115841492A - Pine wood nematode disease color-changing standing tree remote sensing intelligent identification method based on cloud edge synergy
- Publication number
- CN115841492A (application number CN202310161954.XA)
- Authority
- CN
- China
- Prior art keywords
- enhancement
- pixel point
- component
- scale
- pixel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
The invention relates to the field of image processing and provides a pine wood nematode disease discolored standing tree remote sensing intelligent identification method based on cloud-edge synergy, comprising the following steps: acquiring a pine image; acquiring component histograms at several scales and computing, from the pixels' component values, the neighborhood enhancement range of each pixel at each scale; acquiring the metric enhancement coefficient, component enhancement coefficient, and enhanced component values of each pixel from the differences in component value and in metric distance between the pixel and the other pixels of its superpixel block; acquiring each pixel's new component values from its multi-scale enhanced component values; forming the enhancement image of the high-quality remote sensing pine image from the new component values of all pixels; and obtaining the nematode lesion areas in the pine image from the result of a recognition network. The invention aims to solve the low identification accuracy caused by information redundancy in remote sensing images when identifying nematode lesion areas in collected pine remote sensing imagery.
Description
Technical Field
The invention relates to the field of image processing, in particular to a pine wood nematode discoloration standing tree remote sensing intelligent identification method based on cloud edge synergy.
Background
Pine wood nematode disease is caused by the pine wood nematode and is a highly destructive forest disease. In recent years it has taken hold and spread in many provinces of China, killing large numbers of pines and seriously damaging the ecological balance.
Once a pine is parasitized by pine wood nematodes, its needles lose water, the surface turns dark green, and damage traces appear on the needles; when the needles have turned completely yellow-brown, the tree is in a withered state. Whether pine wood nematode disease has occurred can therefore be judged by identifying the state of the standing trees' needles. Pines mostly grow concentrated in stands, branch in whorls, and reach heights of about 20-50 meters, so manual photography and collection are difficult; pine wood nematode disease is therefore generally identified from pine images in unmanned aerial vehicle remote sensing data. The coverage of each scene of a remote sensing image and its resolution constrain each other: if the image covers a large area of pines, its resolution inevitably drops, while acquiring high-resolution satellite images of only a few pines is too costly, and a small amount of data can hardly yield high-confidence identification results. How effectively the relevant pine features are extracted from the remote sensing image therefore determines the final identification accuracy. A scheme for identifying pine wood nematode disease is thus needed to reduce economic losses and maintain the ecological balance.
Disclosure of Invention
The invention provides a pine wilt disease color-changing standing tree remote sensing intelligent identification method based on cloud-edge synergy, which aims to solve the low identification accuracy caused by remote sensing image information redundancy when traditional methods identify nematode lesion areas in collected pine remote sensing images, and adopts the following technical scheme:
one embodiment of the invention provides a remote sensing intelligent identification method for pine wilt disease color-changing standing trees based on cloud edge synergy, which comprises the following steps:
acquiring pine tree remote sensing images, and acquiring segmentation results of high-quality pine tree remote sensing images under a plurality of segmentation scales by utilizing a superpixel segmentation technology;
acquiring information enhancement degree of each pixel point on each component under each scale according to the position of the component value of each component of each pixel point in the Lab color space in the component histogram under each scale, acquiring enhancement radius corresponding to each pixel point under each scale according to the information enhancement degree of each pixel point on three components under each scale, and acquiring neighborhood enhancement range of each pixel point under each scale according to the position information of each pixel point and the enhancement radius under each scale;
acquiring the maximum value of a neighborhood enhancement range in a superpixel block where a pixel point is positioned under each scale, and acquiring a measurement enhancement coefficient of the pixel point according to the measurement distance of the pixel point and the measurement distance of the pixel point corresponding to the maximum value of the neighborhood enhancement range; acquiring component enhancement coefficients of the pixel points according to the information enhancement degrees of the pixel points on the three components and the information enhancement degrees of the pixel points on the three components corresponding to the maximum value of the neighborhood enhancement range, and acquiring enhancement component values of the pixel points on the three components according to the measurement enhancement coefficients and the component enhancement coefficients of the pixel points and the component values of the three components of the pixel points;
acquiring, from the enhanced values of the three color-space components of each pixel at each scale, the average of the enhanced component values of each pixel over all scales on each component, taking that average as the pixel's new component value on that component, and forming the enhanced image from the new component values of all pixels;
and taking the enhanced image and the high-quality pine image as input of an identification network, acquiring an identification result of the nematode lesion area in the high-quality pine image by using the identification network, and acquiring position information of the identification result of the nematode lesion area in the high-quality pine image through a minimum circumscribed rectangle.
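As a minimal sketch of the localization step above, the axis-aligned box below stands in for the minimum circumscribed rectangle (a rotated minimum-area rectangle would need e.g. OpenCV's `minAreaRect`); the mask and function names are illustrative, not from the patent:

```python
import numpy as np

def bounding_rect(mask: np.ndarray):
    """Axis-aligned bounding rectangle (row0, col0, row1, col1) of a
    binary lesion mask; returns None for an empty mask."""
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return None
    return int(rows.min()), int(cols.min()), int(rows.max()), int(cols.max())

# toy lesion mask: a 2x3 blob inside a 10x10 image
mask = np.zeros((10, 10), dtype=bool)
mask[4:6, 3:6] = True
print(bounding_rect(mask))  # (4, 3, 5, 5)
```

The box gives the position information of the identified lesion area within the image grid.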
Optionally, the obtaining of the segmentation result of the high-quality pine remote sensing images under multiple scales by using the super-pixel segmentation technology includes the following specific steps:
the method comprises the steps of obtaining remote sensing images of pine trees in a pine forest by an unmanned aerial vehicle, processing the remote sensing images of the pine trees by a bilateral filtering technology to obtain high-quality pine remote sensing images F, manually setting initial sizes in a plurality of different superpixel segmentation algorithms, and obtaining segmentation results of the high-quality pine remote sensing images in a plurality of scales by the superpixel segmentation technology.
Optionally, the information enhancement degree of each pixel on each component of the Lab color space is obtained, at each scale, from the position of the pixel's component value in that component's histogram, specifically:

$$E_i^L = \frac{n(L_i)}{n(L_{\min}) + n(L_{\max})},\qquad E_i^a = \frac{n(a_i)}{n(a_{\min}) + n(a_{\max})},\qquad E_i^b = \frac{n(b_i)}{n(b_{\min}) + n(b_{\max})}$$

where $L_i$, $a_i$, $b_i$ are the component values of pixel $i$ on the L, a, and b components of the Lab color space, and $n(L_i)$, $n(a_i)$, $n(b_i)$ are the numbers of pixels whose component values equal those of pixel $i$ on the L, a, and b components; $L_{\min}$, $a_{\min}$, $b_{\min}$ are the minimum component values on the L, a, and b components, and $n(L_{\min})$, $n(a_{\min})$, $n(b_{\min})$ the numbers of pixels taking those minimum values; $L_{\max}$, $a_{\max}$, $b_{\max}$ are the maximum component values on the L, a, and b components, and $n(L_{\max})$, $n(a_{\max})$, $n(b_{\max})$ the numbers of pixels taking those maximum values; $E_i^L$, $E_i^a$, $E_i^b$ are the information enhancement degrees of pixel $i$ on the L, a, and b components.
Optionally, the obtaining of the enhancement radius corresponding to the pixel point under each scale according to the information enhancement degree of the pixel point under each scale on the three components includes:
the information enhancement degrees of the pixel on the three components are obtained at each scale, and their average is taken as the average information enhancement degree of the pixel at that scale; the length and width of the high-quality pine remote sensing image F are obtained, and the product of the pixel's average information enhancement degree at each scale and the larger of the length and width is taken as the enhancement radius of the pixel at that scale.
Optionally, the obtaining of the neighborhood enhancement range of the pixel point under each scale according to the position information of the pixel point and the enhancement radius under each scale includes:
the enhancement radius of the pixel at each scale is obtained, and, from the position information of the pixel, the circular area centered at the pixel with the enhancement radius as its radius is taken as the neighborhood enhancement range of the pixel at that scale.
Optionally, the obtaining of the maximum value of the neighborhood enhancement range in the super-pixel block where the pixel point is located under each scale, and the obtaining of the measurement enhancement coefficient of the pixel point according to the measurement distance of the pixel point and the measurement distance of the pixel point corresponding to the maximum value of the neighborhood enhancement range include the specific method:
the method comprises the steps of sequencing the neighborhood enhancement ranges corresponding to pixel points in a super-pixel block where the pixel points are located under each scale, obtaining the pixel points corresponding to the maximum value of the neighborhood enhancement ranges, calculating the difference value between the maximum value of the medium quantity distance and the pixel point measurement distance in the super-pixel block, calculating the difference value between the maximum value of the medium quantity distance in the super-pixel block and the pixel point measurement distance corresponding to the maximum value of the neighborhood enhancement ranges, taking the difference value between the maximum value of the medium quantity distance in the super-pixel block and the pixel point measurement distance as a numerator, taking the difference value between the maximum value of the medium quantity distance in the super-pixel block and the pixel point measurement distance corresponding to the maximum value of the neighborhood enhancement ranges as a denominator, and taking a ratio result as a measurement enhancement coefficient of the pixel points.
Optionally, the obtaining of the component enhancement coefficients of the pixel points according to the information enhancement degrees of the pixel points on the three components and the information enhancement degrees of the pixel points on the three components corresponding to the maximum value of the neighborhood enhancement range includes the specific steps of:
the differences between the information enhancement degrees of the pixel and of the pixel corresponding to the maximum neighborhood enhancement range are computed at each scale on the L component, the a component, and the b component respectively, and the sum of the three differences is taken as the component enhancement coefficient of the pixel at that scale.
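A sketch of this sum of per-component differences; `E_x` and `E_p` hold the three information enhancement degrees of pixel x and of pixel p, and all names are illustrative:

```python
def component_enhancement_coefficient(E_x, E_p):
    """Sum over the (L, a, b) components of the difference between the
    information enhancement degrees of pixel x and of pixel p."""
    return sum(ex - ep for ex, ep in zip(E_x, E_p))

beta = component_enhancement_coefficient((0.5, 0.4, 0.3), (0.2, 0.1, 0.1))
print(beta)
```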
Optionally, the enhanced component values of the pixel on the three components are obtained from the metric enhancement coefficient and component enhancement coefficient of the pixel and its three component values as follows:

$$Q_x^{s_k} = \left(1 + \alpha_x^{s_k}\,\beta_x^{s_k}\right) \cdot I_x$$

where $\alpha_x^{s_k}$ is the metric enhancement coefficient of pixel $x$ in the segmentation result at scale $s_k$, $\beta_x^{s_k}$ is the component enhancement coefficient of pixel $x$ at scale $s_k$, $Q_x^{s_k}$ is the enhanced component value of pixel $x$ at scale $s_k$, and $I_x = (L_x, a_x, b_x)$ collects the component values of pixel $x$ in the image, $L_x$, $a_x$, $b_x$ being the L, a, and b component values of pixel $x$.
Optionally, the average of the enhanced component values of each pixel over all scales is obtained from the enhanced values of the three color-space components at each scale:

$$L'_x = \frac{1}{K}\sum_{c=1}^{K} Q_{x,c}^{L},\qquad a'_x = \frac{1}{K}\sum_{c=1}^{K} Q_{x,c}^{a},\qquad b'_x = \frac{1}{K}\sum_{c=1}^{K} Q_{x,c}^{b}$$

where $L'_x$, $a'_x$, $b'_x$ are the new L, a, and b component values of pixel $x$; $Q_{x,c}^{L}$, $Q_{x,c}^{a}$, $Q_{x,c}^{b}$ are the enhanced L, a, and b component values of pixel $x$ at scale $c$; and $K$ is the number of segmentation scales.
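The multi-scale averaging can be sketched in numpy, assuming the per-scale enhanced (L, a, b) values are stacked along a leading scale axis; the shapes and the random data are illustrative:

```python
import numpy as np

K, H, W = 5, 4, 4                     # K scales, toy H x W image
rng = np.random.default_rng(0)
Q = rng.random((K, H, W, 3))          # enhanced (L, a, b) per pixel per scale
new_components = Q.mean(axis=0)       # per-pixel mean over the K scales
print(new_components.shape)           # (4, 4, 3)
```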
Optionally, the recognition network is a deep-learning-based semantic segmentation model.
The invention has the following beneficial effects. Superpixel segmentation results of the high-quality pine remote sensing image are obtained at several scales by setting several superpixel block sizes. The information enhancement degree of each pixel is constructed from the proportions of its component values in the color-space histograms at each scale; analyzing component values overcomes the drawback that proportions computed from gray values alone differ too little between pixels. The enhancement proportion of each pixel's image information at the different scales is then obtained through the neighborhood enhancement range and the enhanced component values: the neighborhood enhancement range reflects how pixels with consistent components are distributed over the whole image, which prevents pixels in small discolored standing-tree areas from being overlooked because the needles are too dense. Finally, new component values are computed from the multiple neighborhood enhancement ranges and the enhancement image of the collected pine image is obtained. Computing the new component values from several neighborhood enhancement ranges weakens the mutual constraint between coverage and image resolution and avoids the identification problems caused by a resolution that is too high or too low; the enhancement image enlarges the significance of the image information in the target area, so the extracted image features correspond more accurately to the discolored standing-tree areas in the collected image, completing a more reliable identification of nematode lesion areas in pine remote sensing images.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic flow chart of a remote sensing intelligent identification method for pine wilt disease and standing tree discoloration based on cloud edge synergy according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a component histogram in a color space in the pine wilt disease color-changing standing tree remote sensing intelligent identification method based on cloud edge synergy according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, a method for remotely sensing and intelligently identifying pine wood nematode discoloration standing trees based on cloud-edge synergy according to an embodiment of the present invention is shown, and the method includes the following steps:
and S001, carrying out segmentation processing on the collected pine tree remote sensing image by using a multi-scale superpixel segmentation algorithm.
The purpose of this embodiment is, first, to identify the nematode lesion areas in the collected pine remote sensing image with the recognition network; second, to use the unmanned aerial vehicle's computing system to calculate, for each lesion area, the ratio of its area to the area of its minimum circumscribed rectangle, and to upload the positions and area ratios of all lesion areas to the cloud for storage. The cloud distributes the stored regions to forest rangers, who carry out insecticidal treatment of each lesion area according to its condition. The rangers send the on-site situation back to the cloud, which stores it and periodically sends reminders so that diseased areas are inspected regularly and recurrence of pine wood nematode disease is prevented.
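The edge-side computation of each lesion's position and area ratio might look like the following sketch; the payload layout and all names are assumptions, not the patent's format:

```python
import numpy as np

def lesion_report(mask: np.ndarray) -> dict:
    """Summary for one lesion region: bounding-box position and the ratio
    of lesion area to (axis-aligned) bounding-rectangle area."""
    rows, cols = np.nonzero(mask)
    r0, c0, r1, c1 = rows.min(), cols.min(), rows.max(), cols.max()
    rect_area = (r1 - r0 + 1) * (c1 - c0 + 1)
    return {"box": (int(r0), int(c0), int(r1), int(c1)),
            "area_ratio": float(mask.sum()) / float(rect_area)}

mask = np.zeros((8, 8), dtype=bool)
mask[2:5, 2:4] = True           # 3x2 lesion fills its box exactly
print(lesion_report(mask))      # area_ratio comes out as 1.0 here
```

A dictionary like this per lesion would be what the edge device uploads to the cloud for storage and ranger dispatch.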
After the pine forest to be identified is selected, the flight route of the unmanned aerial vehicle is set according to the terrain, the distribution of the pines, and their heights, so that the pine images in the forest can be collected in the shortest possible time; during collection the flight height is adjusted manually according to the heights of the pines in different areas. To retain more edge information while denoising, the pine image collected by the unmanned aerial vehicle is denoised with a bilateral filtering technique to obtain the high-quality pine image F; bilateral filtering is a known technique, and the specific denoising process is not described in detail.
Further, the high-quality pine image F is converted to the CIELab color space to obtain the image $F_{Lab}$. $F_{Lab}$ is segmented at K different scales with a multi-scale superpixel segmentation algorithm (multi-scale SLIC), where the different scales correspond to different initial sizes set in the algorithm; in this invention K takes the empirical value 5, with initial sizes 200, 300, 500, 700, and 1000, denoted $s_1$ to $s_5$ respectively. The purpose of multi-scale segmentation is to evaluate, within superpixel blocks of different sizes, the consistency of each pixel's image information with that of the surrounding pixels: the severity of nematode discoloration differs and the apparent sizes of the pine crowns differ, so obtaining segmented superpixel blocks at several scales lets the subsequently computed per-pixel feature values amplify the pixels' image information to a greater extent, making the contrast between pixels belonging to diseased standing trees and pixels of normal pines more obvious.
And obtaining a multi-scale segmentation result of the high-quality pine image through bilateral filtering processing and multi-scale superpixel segmentation.
S002, obtaining a component histogram corresponding to the superpixel block in the segmentation result under each scale, calculating the information enhancement degree and the enhancement radius of each pixel point under each scale based on the component histogram, and calculating the neighborhood enhancement range of each pixel point based on the information enhancement degree and the enhancement radius.
In the pine remote sensing image, if a discolored standing-tree area exists, the pixels of suspicious areas can be feature-enhanced so as to obtain clear boundary and texture characteristics; the purpose of the enhancement is to make the image information of those pixels richer. Conversely, if a region is confirmed to be an interfering ground feature or a normally growing pine, its information is redundant for identifying discolored standing trees, and only a small number of features should be extracted from it.
Several scales are set for feature extraction from the analysis object, the features including gray-scale and texture features. Although analyzing images at several scales yields more image information, the interference information also accumulates as the scale grows, which degrades the identification accuracy of the discolored standing-tree area. The invention therefore feature-enhances each pixel scale by scale. Taking scale $s_1$ as an example, the acquisition process of the feature enhancement range is as follows:
recording imagesIn a size of M x N, a statistic is obtained>Obtaining the values of each pixel point in the three components of L, a and b to obtain three corresponding component histograms which are respectively recorded as the histogram->Histogram->Histogram->. For the color-changing standing tree in the remote sensing image, obvious color difference is generated between the color-changing standing tree in the remote sensing image and pine trees growing normally around the color-changing standing tree, for pixel points belonging to the color-changing standing tree region, a certain number of similar pixel points must exist around the color-changing standing tree, and in consideration of different densities of pine needles, a neighborhood enhancement range R is constructed here for representing the influence of image information of the pixel points on the surrounding range, and the neighborhood enhancement range (or) of the pixel point i is calculated>:/>,,/>,,/>In, is greater than or equal to>Is the component value of the pixel point i on the L component in the color space, and is based on the component value of the pixel point i>Is->The number of the corresponding pixel points is greater or less>,/>Are respectively an image->The maximum value and the minimum value of the L component of the middle pixel point->Is->The number of the corresponding pixel points is greater or less>Is->The corresponding number of pixels, the histogram of the L component->As shown in fig. 2 below. />Is the information enhancement degree of the pixel point i, is greater than or equal to>The meaning of (1) is the degree of protrusion of the pixel point with the same L component value as the pixel point i in the L component histogram in the color space.
Is the component value of the pixel point i on the component a of the Lab color space->Is the number of the pixel points which are equal to the component value of the pixel point i on the component a, is greater than or equal to the component value of the pixel point i on the component a>Is the minimum component value on the a-component, is greater than>Is the number of pixels of the minimum component value on the a component, is->Is the maximum component value on the a-component>The number of pixels which are the maximum component value on the a component, is->The information enhancement degree of the pixel point i on the component a under the corresponding scale is shown.
Is the component value of the pixel point i on the component b of the Lab color space, is based on the comparison result of the comparison result and the comparison result>Is the number of the pixel points which are equal to the component value of the pixel point i on the component b, and is greater than or equal to the component value of the pixel point i on the component b>Is the minimum component value on the b component, is greater than>Is the number of pixels of the minimum component value on the b component, is->Is the largest component value on the b component, is greater than>The number of pixels which have the largest component value on the b component, in conjunction with the corresponding pixel value>The information enhancement degree of the pixel point i on the b component under the corresponding scale is shown.
$M$ and $N$ are the length and width of the image $F_{Lab}$. The enhancement radius of pixel $i$ is

$$r_i = \frac{E_i^L + E_i^a + E_i^b}{3} \cdot \max(M, N),$$

and the neighborhood enhancement range $R_i$ of pixel $i$ is the circular area centered at pixel $i$ with radius $r_i$.

The neighborhood enhancement range reflects the influence area of the pixel's image information in the image $F_{Lab}$: it is the feature range that should be taken into account for pixel $i$ when the feature pyramid fuses features, pixel $i$ being considered to affect the image features within $R_i$. The larger the information proportions of pixel $i$ in the three component histograms of the color space, the larger the corresponding enhancement radius $r_i$, and hence the larger the influence area $R_i$ of the image information of pixel $i$.
So far, the neighborhood enhancement range $R_i$ of each pixel at each scale has been obtained by the above method.
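The per-channel information enhancement degree and the enhancement radius of step S002 can be sketched as follows; the toy channel and function names are illustrative, and the formula is the normalized-count reading given above:

```python
import numpy as np

def info_enhancement(channel: np.ndarray) -> np.ndarray:
    """Per-pixel E for one Lab channel: the count of pixels sharing this
    pixel's value, divided by the summed counts at the channel's minimum
    and maximum values."""
    values, counts = np.unique(channel, return_counts=True)
    count_of = dict(zip(values.tolist(), counts.tolist()))
    denom = count_of[values[0].item()] + count_of[values[-1].item()]
    flat = [count_of[v] / denom for v in channel.ravel().tolist()]
    return np.array(flat).reshape(channel.shape)

def enhancement_radius(E_L, E_a, E_b, shape):
    """r_i = mean of the three information enhancement degrees * max(M, N)."""
    return (E_L + E_a + E_b) / 3.0 * max(shape)

ch = np.array([[0, 0, 1], [1, 1, 2]])   # toy single-channel "image"
E = info_enhancement(ch)
print(E[0, 2])   # value 1 occurs 3x; n(min) + n(max) = 2 + 1 = 3 -> 1.0
```

`enhancement_radius` would be fed the three per-channel degrees of one pixel plus the image shape (M, N) to get that pixel's circular neighborhood radius.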
And S003, acquiring the measurement enhancement coefficient of the pixel point by utilizing the maximum value of the neighborhood enhancement range in the superpixel block where the pixel point is positioned, calculating the component enhancement coefficients of the pixel point on different components by utilizing the component values of the pixel point, and acquiring the enhancement component value of each component of each pixel point according to the measurement enhancement coefficient and the component enhancement coefficient of the pixel point.
Each segmentation scale is an initial size in the superpixel segmentation algorithm, and the segmentation result of $F_{Lab}$ is obtained at each scale. Further, for each scale's segmentation result, the histograms of the L component, the a component, and the b component within the superpixel block containing each pixel are computed, and the neighborhood enhancement range R of the pixel at that scale is obtained from its values in the three component histograms. Taking scale $s_1$ as an example: denote the superpixel block containing pixel $i$ by $S_i^{s_1}$, obtain its component histograms, substitute the component values of pixel $i$ into the formulas for the information enhancement degree E and the enhancement radius r, and record the resulting neighborhood enhancement range of pixel $i$ at scale $s_1$ as $R_i^{s_1}$. Following the same procedure, the neighborhood enhancement ranges of pixel $i$ at scales $s_2$ to $s_5$ are obtained and recorded as $R_i^{s_2}$ to $R_i^{s_5}$. The neighborhood enhancement ranges determine the influence of a pixel's image information over a larger area; obtaining them at several segmentation scales lets the subsequently computed feature values amplify the pixels' image information to a greater extent, making the contrast between pixels belonging to diseased standing trees and pixels of normal pines more obvious.
Within the superpixel block $S_i^{s_1}$ containing pixel $i$, record the pixel corresponding to the maximum neighborhood enhancement range as pixel p. Its neighborhood enhancement range being the largest means that the image information of pixel p has the highest degree of influence on the image information of the superpixel block. Since the purpose of the feature enhancement is to identify nematode lesion areas in the pine image with the recognition model, the features of the input image then need to be extracted with the feature pyramid.
For any pixel in a superpixel block at scale s1, the measured distance to the seed point is used to represent how similar the pixel's image information is to that of the seed point. For pixel p, if a pixel lies within its neighborhood enhancement range and its measured distance to the seed point is close to that of pixel p, the credibility that the pixel is similar to p is high, and its image features are amplified. An enhancement component value Q is therefore constructed to represent the proportion by which a pixel's component value is enhanced. The enhancement coefficients and the enhancement component value of pixel x are calculated as:

$\alpha_x = \dfrac{d_{\max} - d_x}{d_{\max} - d_p}$, $\beta_x = (G_L^x - G_L^p) + (G_a^x - G_a^p) + (G_b^x - G_b^p)$, $Q_x = \alpha_x \cdot \beta_x \cdot I_x$

where $d_{\max}$ is the maximum measured distance from a pixel in the superpixel block SP_p to the seed point, $d_x$ is the measured distance between pixel x and the superpixel block seed point, $d_p$ is the measured distance between pixel p and the superpixel block seed point, and $\alpha_x$ is the metric enhancement coefficient of pixel x in the segmentation result at scale s1. $G_L^x$, $G_a^x$ and $G_b^x$ are the information enhancement degrees of pixel x on the L, a and b components of the color space, $G_L^p$, $G_a^p$ and $G_b^p$ are those of pixel p, and $\beta_x$ is the component enhancement coefficient of pixel x in the segmentation result at scale s1. $Q_x$ is the enhancement component value of pixel x in the segmentation result at scale s1, and $I_x$ is the component value of pixel x in the acquired image, $I_x \in \{L_x, a_x, b_x\}$, where $L_x$, $a_x$ and $b_x$ are the L, a and b component values corresponding to pixel x respectively.
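The two enhancement coefficients above follow directly from their verbal definitions in the claims; a sketch with illustrative names:

```python
import numpy as np

def metric_enhancement_coeff(d_x, d_p, d_max):
    # alpha: how close pixel x's seed-point distance is to the block's
    # maximum measured distance, relative to that of pixel p (the pixel
    # with the largest neighborhood enhancement range).
    return (d_max - d_x) / (d_max - d_p)

def component_enhancement_coeff(g_x, g_p):
    # beta: summed L/a/b differences between the information-enhancement
    # degrees of pixel x and those of pixel p.
    gx = np.asarray(g_x, dtype=float)
    gp = np.asarray(g_p, dtype=float)
    return float(np.sum(gx - gp))
```

Both coefficients are computed once per pixel per scale; alpha approaches 1 as pixel x's distance approaches pixel p's.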
Traversing the whole image, the metric enhancement coefficient $\alpha$ and the component enhancement coefficient $\beta$ are calculated for each pixel, and the enhancement component value $Q$ of each of its component values in the color space is acquired. For example, the color space component values of pixel x are $L_x$, $a_x$ and $b_x$; based on the enhancement coefficients, the enhancement component value of $L_x$ is denoted $Q_L^x$, that of $a_x$ is denoted $Q_a^x$, and that of $b_x$ is denoted $Q_b^x$.
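A per-pixel sketch of the enhancement component values follows. Note the multiplicative combination of the two coefficients with each component value is an assumed reading: the patent's formula survives only as a lost image, and the text names only the three inputs.

```python
import numpy as np

def enhancement_component_values(alpha, beta, lab):
    # Q_L, Q_a, Q_b for one pixel. alpha * beta * component is an
    # ASSUMED combination of the named inputs, not the patent's
    # verbatim formula (which did not survive extraction).
    return alpha * beta * np.asarray(lab, dtype=float)
```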
At this point, the image information of the pixel points is enhanced to obtain an enhanced component value corresponding to each pixel point.
Step S004, acquiring a new component value of each pixel point in each component according to the enhanced component value of each component of each pixel point under multiple scales, and acquiring an enhanced graph corresponding to the high-quality pine image according to the new component values of all the pixel points.
Further, for scale s1, the enhancement component values of each pixel on the three components are acquired, and the enhanced pixel value is obtained from them. Taking pixel x as an example, $Q_L^x$, $Q_a^x$ and $Q_b^x$ are obtained and taken as the new component values of pixel x in the Lab color space; all pixels are traversed to obtain the new component values of every pixel at scale s1. Likewise, the corresponding values at scales s1 to s5 are acquired, and the new component value of a pixel is obtained from its enhancement component values at the 5 different scales. The new component values of pixel x on the L, a and b components are calculated as:

$L'_x = \dfrac{1}{K}\sum_{c=1}^{K} Q_L^{x,c}$, $a'_x = \dfrac{1}{K}\sum_{c=1}^{K} Q_a^{x,c}$, $b'_x = \dfrac{1}{K}\sum_{c=1}^{K} Q_b^{x,c}$

where $L'_x$ is the new L component value of pixel x and $Q_L^{x,c}$ is the enhancement component value of the L component of pixel x at scale c; $a'_x$ is the new a component value and $Q_a^{x,c}$ the enhancement component value of the a component at scale c; $b'_x$ is the new b component value and $Q_b^{x,c}$ the enhancement component value of the b component at scale c; K is the number of segmentation scales, with K = 5 in the invention.
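The multi-scale averaging step for one pixel can be sketched as:

```python
import numpy as np

def new_component_values(q_per_scale):
    # New (L', a', b') for one pixel: the mean of its enhancement
    # component values over the K segmentation scales (K = 5 here).
    q = np.asarray(q_per_scale, dtype=float)  # shape (K, 3)
    return q.mean(axis=0)
```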
After the new component value of each pixel is acquired, the Lab color space is converted back to the RGB color space, and the feature enhancement map of the high-quality pine image F is obtained and recorded as the enhancement map FQ.
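The conversion back from Lab to RGB is the standard CIELAB-to-sRGB transform (D65 white point, which the patent does not state explicitly); a scalar sketch for one pixel:

```python
import numpy as np

XN, YN, ZN = 0.95047, 1.0, 1.08883  # D65 reference white

def lab_to_rgb(L, a, b):
    # CIELAB -> XYZ (inverse f), then XYZ -> linear sRGB -> gamma-encoded sRGB.
    fy = (L + 16.0) / 116.0
    fx = fy + a / 500.0
    fz = fy - b / 200.0
    def finv(t):
        return t ** 3 if t > 6.0 / 29.0 else 3.0 * (6.0 / 29.0) ** 2 * (t - 4.0 / 29.0)
    X, Y, Z = XN * finv(fx), YN * finv(fy), ZN * finv(fz)
    r = 3.2406 * X - 1.5372 * Y - 0.4986 * Z
    g = -0.9689 * X + 1.8758 * Y + 0.0415 * Z
    bl = 0.0557 * X - 0.2040 * Y + 1.0570 * Z
    def gamma(u):
        u = min(max(u, 0.0), 1.0)  # clip to the sRGB gamut
        return 12.92 * u if u <= 0.0031308 else 1.055 * u ** (1 / 2.4) - 0.055
    return gamma(r), gamma(g), gamma(bl)
```

In practice a library call (e.g. skimage's `color.lab2rgb`) would be used over the whole image rather than this per-pixel form.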
Up to this point, the enhancement map FQ is obtained through multi-scale segmentation and processing of the enhancement component values.
Step S005, the recognition result of the nematode lesion area in the high-quality pine image is acquired with a recognition network, the position information of the nematode lesion area is determined with the minimum circumscribed rectangle, the transmitted information is stored in the cloud, and the cooperative protection of the pine forest is completed.
Further, the recognition result of the nematode lesion area in the high-quality pine image is acquired with a recognition network. In this embodiment, the deep-learning semantic segmentation model U-net is used as the recognition network, and the data set after data enhancement is used as the training set. The remote sensing images in the training set are labeled manually: the label of the discolored standing timber area is set to 1, the label of the normal pine area is set to 2, and the labels of other areas are set to 3. The remote sensing images and labels in the training set are encoded, the encoded remote sensing images are used as the input of the neural network, and the recognition model is trained; the training of a neural network is a known technique and is not described in detail here. After training of the recognition model is completed, the enhancement map FQ and the acquired image are used as the input of the recognition model U-net, and the output of the recognition model is the nematode lesion area in the pine image.
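The encoding of the labels is not specified beyond the three class ids; a plausible one-hot encoding sketch (function name and class ordering are assumptions):

```python
import numpy as np

def encode_labels(mask, num_classes=3):
    # One-hot encode the annotation mask described above (label 1 =
    # discolored standing timber, 2 = normal pine, 3 = other areas);
    # class ids are shifted to 0..2 before indexing the identity matrix.
    m = np.asarray(mask, dtype=int) - 1
    return np.eye(num_classes, dtype=np.float32)[m]
```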
The minimum circumscribed rectangle of each pine wood nematode lesion area is obtained; its purpose is to determine the position information and the lesion degree of the area. Using the computing system of the unmanned aerial vehicle, the ratio of the area of each nematode lesion region to the area of its minimum circumscribed rectangle is calculated; the larger the ratio, the more severe the nematode lesion. The position information and area ratio of each lesion region are uploaded to the cloud, and the position information is sent to the forest workers closest to it, who then effectively clear the withered, discolored standing trees and the surrounding pines on site. The workers send the site conditions back to the cloud, which stores the specific pine information and site conditions of the nematode lesions and periodically sends reminders to the forest rangers to inspect the affected pines, so as to prevent pine wood nematode disease from recurring.
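The severity ratio can be sketched as below; an axis-aligned bounding box stands in for the minimum circumscribed (rotated) rectangle, so the ratio is an approximation — with OpenCV available, `cv2.minAreaRect` would give the rotated rectangle directly.

```python
import numpy as np

def lesion_severity(region_mask):
    # Ratio of lesion area to its bounding rectangle's area; the
    # larger the ratio, the denser (more severe) the lesion.
    # Axis-aligned box used as a simplification of the patent's
    # minimum circumscribed rectangle.
    ys, xs = np.nonzero(region_mask)
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    return float(np.count_nonzero(region_mask)) / float(h * w)
```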
Thus the recognition result of the nematode lesion area in the acquired pine remote sensing image is obtained, and the cooperative management based on the recognition result is completed.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.
Claims (10)
1. The pine wood nematode discoloration standing wood remote sensing intelligent identification method based on cloud edge synergy is characterized by comprising the following steps:
acquiring pine tree remote sensing images, and acquiring segmentation results of high-quality pine tree remote sensing images under a plurality of segmentation scales by utilizing a superpixel segmentation technology;
acquiring information enhancement degree of each pixel point on each component under each scale according to the position of the component value of each component of each pixel point in the Lab color space in the component histogram under each scale, acquiring enhancement radius corresponding to each pixel point under each scale according to the information enhancement degree of each pixel point on three components under each scale, and acquiring neighborhood enhancement range of each pixel point under each scale according to the position information of each pixel point and the enhancement radius under each scale;
acquiring the maximum value of a neighborhood enhancement range in a superpixel block where a pixel point is positioned under each scale, and acquiring a measurement enhancement coefficient of the pixel point according to the measurement distance of the pixel point and the measurement distance of the pixel point corresponding to the maximum value of the neighborhood enhancement range; acquiring component enhancement coefficients of the pixel points according to the information enhancement degrees of the pixel points on the three components and the information enhancement degrees of the pixel points on the three components corresponding to the maximum value of the neighborhood enhancement range, and acquiring enhancement component values of the pixel points on the three components according to the measurement enhancement coefficients and the component enhancement coefficients of the pixel points and the component values of the three components of the pixel points;
acquiring the average value of the enhancement component values of each pixel point over all scales for each component according to the enhancement component values of the three components in the color space corresponding to each pixel point at each scale, taking the average value of the enhancement component values of each component over all scales as the new component value of the pixel point on each component, and forming an enhanced image from the new component values of all the pixel points;
and taking the enhanced image and the high-quality pine image as the input of a recognition network, acquiring the recognition result of the nematode lesion area in the high-quality pine image by using the recognition network, acquiring the position information of the recognition result of the nematode lesion area in the high-quality pine image through the minimum circumscribed rectangle, uploading the position information to a cloud for storage, and transmitting the information to related personnel to finish the cooperative protection of pine forests.
2. The remote sensing intelligent identification method for pine wilt disease color-changing standing trees based on cloud edge coordination according to claim 1, wherein the segmentation results of the high-quality pine remote sensing images at multiple scales are obtained by utilizing a superpixel segmentation technology, comprising the following specific steps:
the method comprises the steps of obtaining remote sensing images of pine trees in a pine forest by an unmanned aerial vehicle, processing the remote sensing images of the pine trees by a bilateral filtering technology to obtain high-quality pine remote sensing images F, manually setting initial sizes in a plurality of different superpixel segmentation algorithms, and obtaining segmentation results of the high-quality pine remote sensing images in a plurality of scales by the superpixel segmentation technology.
3. The remote sensing intelligent identification method for pine wilt disease color-changing standing timber based on cloud edge coordination according to claim 1, wherein the information enhancement degree of each component of each pixel point at each scale is obtained according to the position of the component value of each component of the pixel point in the Lab color space in the component histogram at that scale, wherein $L_i$, $a_i$ and $b_i$ are the component values of pixel i on the L, a and b components of the Lab color space, and $n_L^i$, $n_a^i$ and $n_b^i$ are the numbers of pixel points having those component values; $L_{\min}$, $a_{\min}$ and $b_{\min}$ are the minimum component values on the L, a and b components, and $n_L^{\min}$, $n_a^{\min}$ and $n_b^{\min}$ are the numbers of pixel points holding the minimum component values; $L_{\max}$, $a_{\max}$ and $b_{\max}$ are the maximum component values on the L, a and b components, and $n_L^{\max}$, $n_a^{\max}$ and $n_b^{\max}$ are the numbers of pixel points holding the maximum component values; $G_L^i$, $G_a^i$ and $G_b^i$ are the information enhancement degrees of pixel i on the L, a and b components, computed from these quantities.
4. The remote sensing intelligent identification method for pine wilt disease color-changing standing trees based on cloud edge synergy according to claim 1, wherein the enhancement radius corresponding to the pixel point under each scale is obtained according to the information enhancement degree of the pixel point under each scale on three components, and the method comprises the following specific steps:
respectively obtaining the information enhancement degrees of the pixel points on the three components under each scale, taking the average value of the information enhancement degrees on the three components under each scale as the average information enhancement degree of the pixel points under each scale, obtaining the length and width sizes of the high-quality pine remote sensing image F, and taking the product of the average information enhancement degree of the pixel points under each scale and the maximum value in the length and width sizes as the enhancement radius of the pixel points under each scale.
5. The remote sensing intelligent identification method for the pine wilt disease color-changing standing trees based on cloud edge coordination according to claim 1, wherein the method for obtaining the neighborhood enhancement range of the pixel point under each scale according to the position information of the pixel point and the enhancement radius under each scale comprises the following specific steps:
and acquiring the corresponding enhancement radius of the pixel point under each scale, and taking the pixel point as a dot and a circular area with the enhancement radius as the radius as a neighborhood enhancement range of the pixel point under each scale according to the position information of the pixel point.
6. The remote sensing intelligent identification method for the pine wilt disease and standing timber with color change based on cloud edge coordination according to claim 1, wherein the method comprises the following specific steps of obtaining the maximum value of the neighborhood enhancement range in the superpixel block where the pixel point is located under each scale, and obtaining the measurement enhancement coefficient of the pixel point according to the measurement distance of the pixel point and the measurement distance of the pixel point corresponding to the maximum value of the neighborhood enhancement range:
the method comprises the steps of sequencing the neighborhood enhancement ranges corresponding to pixel points in a super-pixel block where the pixel points are located under each scale, obtaining the pixel points corresponding to the maximum value of the neighborhood enhancement ranges, calculating the difference value between the maximum value of the medium distance and the pixel point measurement distance in the super-pixel block, calculating the difference value between the maximum value of the medium distance in the super-pixel block and the pixel point measurement distance corresponding to the maximum value of the neighborhood enhancement ranges, taking the difference value between the maximum value of the medium distance in the super-pixel block and the pixel point measurement distance as a numerator, taking the difference value between the maximum value of the medium distance in the super-pixel block and the pixel point measurement distance corresponding to the maximum value of the neighborhood enhancement ranges as a denominator, and taking a ratio result as a measurement enhancement coefficient of the pixel points.
7. The remote sensing intelligent identification method for the pine wilt disease color-changing standing trees based on cloud edge coordination according to claim 1, wherein the component enhancement coefficients of the pixel points are obtained according to the information enhancement degrees of the pixel points on the three components and the information enhancement degrees of the pixel points corresponding to the maximum value of the neighborhood enhancement range on the three components, and the method comprises the following specific steps:
and respectively calculating the difference value of the pixel point information enhancement degree corresponding to the pixel point under the L component under each scale and the maximum value of the neighborhood enhancement range, respectively calculating the difference value of the pixel point information enhancement degree corresponding to the pixel point under the a component under each scale and the maximum value of the neighborhood enhancement range, respectively calculating the difference value of the pixel point information enhancement degree corresponding to the pixel point under the b component under each scale and the maximum value of the neighborhood enhancement range, and taking the accumulated sum of the three difference values as the component enhancement coefficient of the pixel point under each scale.
8. The remote sensing intelligent identification method for pine wilt disease color-changing standing timber based on cloud edge coordination according to claim 1, wherein the enhancement component values of the pixel point on the three components are obtained according to the metric enhancement coefficient and the component enhancement coefficient of the pixel point and the component values of the three components of the pixel point, comprising the following specific steps: $Q_x = \alpha_x \cdot \beta_x \cdot I_x$, where $\alpha_x$ is the metric enhancement coefficient of pixel x in the segmentation result at scale s, $\beta_x$ is the component enhancement coefficient of pixel x in the segmentation result at scale s, $Q_x$ is the enhancement component value of pixel x in the segmentation result at scale s, and $I_x$ is the component value of pixel x in the acquired image, $I_x \in \{L_x, a_x, b_x\}$, with $L_x$, $a_x$ and $b_x$ the L, a and b component values corresponding to pixel x.
9. The remote sensing intelligent identification method for pine wilt disease color-changing standing trees based on cloud edge coordination according to claim 1, wherein the average of the enhancement component values of each pixel point over all scales is obtained for each component according to the enhancement component values of the three components in the color space at each scale, and the average over all scales is taken as the new component value of the pixel point on each component, comprising the following specific steps: $L'_x = \dfrac{1}{K}\sum_{c=1}^{K} Q_L^{x,c}$, $a'_x = \dfrac{1}{K}\sum_{c=1}^{K} Q_a^{x,c}$, $b'_x = \dfrac{1}{K}\sum_{c=1}^{K} Q_b^{x,c}$, where $L'_x$ is the new L component value of pixel x and $Q_L^{x,c}$ is the enhancement component value of the L component of pixel x at scale c; $a'_x$ is the new a component value and $Q_a^{x,c}$ the enhancement component value of the a component at scale c; $b'_x$ is the new b component value and $Q_b^{x,c}$ the enhancement component value of the b component at scale c; K is the number of segmentation scales.
10. The remote sensing intelligent identification method for pine wilt disease color-changing standing timber based on cloud edge coordination according to claim 1, wherein the recognition network adopts a semantic segmentation model based on deep learning.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310161954.XA CN115841492B (en) | 2023-02-24 | 2023-02-24 | Cloud-edge-collaboration-based pine wood nematode lesion color standing tree remote sensing intelligent identification method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310161954.XA CN115841492B (en) | 2023-02-24 | 2023-02-24 | Cloud-edge-collaboration-based pine wood nematode lesion color standing tree remote sensing intelligent identification method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115841492A true CN115841492A (en) | 2023-03-24 |
CN115841492B CN115841492B (en) | 2023-05-12 |
Family
ID=85580167
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310161954.XA Active CN115841492B (en) | 2023-02-24 | 2023-02-24 | Cloud-edge-collaboration-based pine wood nematode lesion color standing tree remote sensing intelligent identification method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115841492B (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040081369A1 (en) * | 2002-10-25 | 2004-04-29 | Eastman Kodak Company | Enhancing the tonal, spatial, and color characteristics of digital images using expansive and compressive tone scale functions |
CN103136733A (en) * | 2013-02-25 | 2013-06-05 | 中国人民解放军总参谋部第六十一研究所 | Remote sensing image color enhancing method based on multi-scale image segmentation and color transferring |
CN103530635A (en) * | 2013-09-23 | 2014-01-22 | 上海海洋大学 | Coastline extracting method based on satellite microwave remote sensing image |
CN105403989A (en) * | 2015-10-28 | 2016-03-16 | 清华大学 | Nematode recognition system and nematode recognition method |
CN105608458A (en) * | 2015-10-20 | 2016-05-25 | 武汉大学 | High-resolution remote sensing image building extraction method |
CN106295604A (en) * | 2016-08-19 | 2017-01-04 | 厦门大学 | Remote sensing image road network extractive technique based on Federated filter |
CN112633202A (en) * | 2020-12-29 | 2021-04-09 | 河南大学 | Hyperspectral image classification algorithm based on dual denoising combined multi-scale superpixel dimension reduction |
CN113240689A (en) * | 2021-06-01 | 2021-08-10 | 安徽建筑大学 | Method for rapidly extracting flood disaster area |
US20210374478A1 (en) * | 2018-05-15 | 2021-12-02 | Shenzhen University | Methods for Image Segmentation, Computer Devices, and Storage Mediums |
CN114595975A (en) * | 2022-03-11 | 2022-06-07 | 安徽大学 | Unmanned aerial vehicle remote sensing pine wood nematode disease monitoring method based on deep learning model |
-
2023
- 2023-02-24 CN CN202310161954.XA patent/CN115841492B/en active Active
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040081369A1 (en) * | 2002-10-25 | 2004-04-29 | Eastman Kodak Company | Enhancing the tonal, spatial, and color characteristics of digital images using expansive and compressive tone scale functions |
CN103136733A (en) * | 2013-02-25 | 2013-06-05 | 中国人民解放军总参谋部第六十一研究所 | Remote sensing image color enhancing method based on multi-scale image segmentation and color transferring |
CN103530635A (en) * | 2013-09-23 | 2014-01-22 | 上海海洋大学 | Coastline extracting method based on satellite microwave remote sensing image |
CN105608458A (en) * | 2015-10-20 | 2016-05-25 | 武汉大学 | High-resolution remote sensing image building extraction method |
CN105403989A (en) * | 2015-10-28 | 2016-03-16 | 清华大学 | Nematode recognition system and nematode recognition method |
CN106295604A (en) * | 2016-08-19 | 2017-01-04 | 厦门大学 | Remote sensing image road network extractive technique based on Federated filter |
US20210374478A1 (en) * | 2018-05-15 | 2021-12-02 | Shenzhen University | Methods for Image Segmentation, Computer Devices, and Storage Mediums |
CN112633202A (en) * | 2020-12-29 | 2021-04-09 | 河南大学 | Hyperspectral image classification algorithm based on dual denoising combined multi-scale superpixel dimension reduction |
CN113240689A (en) * | 2021-06-01 | 2021-08-10 | 安徽建筑大学 | Method for rapidly extracting flood disaster area |
CN114595975A (en) * | 2022-03-11 | 2022-06-07 | 安徽大学 | Unmanned aerial vehicle remote sensing pine wood nematode disease monitoring method based on deep learning model |
Non-Patent Citations (5)
Title |
---|
IXIA HUANG et al.: "Chromosome Image Enhancement Using Multiscale Differential Operators" *
侯晓然 et al.: "Research on Enhancement of License Plate Images in Special Environments with an Improved Retinex Algorithm" *
刘小丹; 于宁; 邱红圆: "Hierarchical Multi-scale Vegetation Segmentation of Remote Sensing Images Based on Spectral Histograms", Remote Sensing for Land and Resources *
耿鑫 et al.: "Multi-scale Color Image Enhancement Algorithm Based on Fuzzy Same-Group Partitioning" *
Also Published As
Publication number | Publication date |
---|---|
CN115841492B (en) | 2023-05-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Liu et al. | A computer vision system for early stage grape yield estimation based on shoot detection | |
Chen et al. | Isolating individual trees in a savanna woodland using small footprint lidar data | |
CN107609526A (en) | Rule-based fine dimension city impervious surface rapid extracting method | |
CN113040034B (en) | Water-saving irrigation control system and control method | |
CN103065149A (en) | Netted melon fruit phenotype extraction and quantization method | |
CN112418188A (en) | Crop growth whole-course digital assessment method based on unmanned aerial vehicle vision | |
CN111340826A (en) | Single tree crown segmentation algorithm for aerial image based on superpixels and topological features | |
CN109522899B (en) | Detection method and device for ripe coffee fruits and electronic equipment | |
CN102855485B (en) | The automatic testing method of one grow wheat heading | |
CN102663397B (en) | Automatic detection method of wheat seedling emergence | |
CN109033937B (en) | Method and system for counting plant number through unmanned aerial vehicle image | |
CN111860150B (en) | Lodging rice identification method and device based on remote sensing image | |
CN109886146B (en) | Flood information remote sensing intelligent acquisition method and device based on machine vision detection | |
CN111161362A (en) | Tea tree growth state spectral image identification method | |
CN106845366B (en) | Sugarcane coverage automatic detection method based on image | |
CN114140692A (en) | Fresh corn maturity prediction method based on unmanned aerial vehicle remote sensing and deep learning | |
CN113392846A (en) | Water gauge water level monitoring method and system based on deep learning | |
CN111060455B (en) | Northeast cold-cool area oriented remote sensing image crop marking method and device | |
CN110455201A (en) | Stalk plant height measurement method based on machine vision | |
CN112597855B (en) | Crop lodging degree identification method and device | |
CN115841492A (en) | Pine wood nematode disease color-changing standing tree remote sensing intelligent identification method based on cloud edge synergy | |
Mohammadi et al. | Estimation of leaf area in bell pepper plant using image processing techniques and artificial neural networks | |
CN108985307B (en) | Water body extraction method and system based on remote sensing image | |
CN114266975B (en) | Litchi fruit detection and counting method for unmanned aerial vehicle remote sensing image | |
CN115526927A (en) | Rice planting method integrating phenological data and remote sensing big data and area estimation method thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |