CN113077486B - Method and system for monitoring vegetation coverage rate in mountainous area - Google Patents

Method and system for monitoring vegetation coverage rate in mountainous area

Info

Publication number
CN113077486B
CN113077486B (application CN202110484665.4A)
Authority
CN
China
Prior art keywords
image
vegetation
pixel points
pix
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN202110484665.4A
Other languages
Chinese (zh)
Other versions
CN113077486A (en)
Inventor
李可
谢尚宏
万莉萍
杨建�
Current Assignee
Shenzhen Shiyuan Engineering Technology Co ltd
Original Assignee
Shenzhen Shiyuan Engineering Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Shiyuan Engineering Technology Co ltd filed Critical Shenzhen Shiyuan Engineering Technology Co ltd
Priority to CN202110484665.4A
Publication of CN113077486A
Application granted
Publication of CN113077486B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/136 Segmentation; Edge detection involving thresholding
    • G06T 7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/08 Learning methods
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30181 Earth observation
    • G06T 2207/30188 Vegetation; Agriculture

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a method for monitoring the vegetation coverage rate in mountainous areas. The method first obtains two sets of vegetation pixel points, U1 and U2, from a color image in two different ways, then obtains their intersection U3 and the complementary set U4 of U3 with respect to U2, and then judges whether the pixel points in U4 belong to vegetation pixel points, thereby obtaining a final set Ufinal of vegetation pixel points, on the basis of which the vegetation coverage rate in the mountainous area is monitored. The invention also provides a system for monitoring the vegetation coverage rate in mountainous areas that implements the method. Compared with the prior art, in which only a single threshold segmentation algorithm is used to obtain the vegetation pixel points of the color image directly, the vegetation coverage rate calculated by the method is more accurate.

Description

Method and system for monitoring vegetation coverage rate in mountainous area
Technical Field
The invention relates to the field of monitoring, in particular to a method and a system for monitoring vegetation coverage in mountainous areas.
Background
The vegetation coverage rate in mountainous areas generally refers to the ratio, within a given region, of the projected ground area of plants such as trees, shrubs and grasses to the total area of the region. It is an important index reflecting forest resources and the level of water and soil conservation in mountainous areas, so it needs to be measured regularly and its changes tracked in time so that corresponding decisions can be made. However, because mountainous areas are hard to reach and large in extent, manual measurement is clearly unsuitable, and in the prior art the vegetation coverage rate in mountainous areas is therefore generally calculated from remote sensing images. The prior art, however, generally adopts only a single image segmentation method, for example only histogram segmentation, to obtain the vegetation pixel points in the remote sensing image, and the result obtained in this way is not accurate enough.
Disclosure of Invention
In view of the above problems, the present invention provides a method and a system for monitoring the coverage of vegetation in mountainous areas.
The invention provides a method for monitoring vegetation coverage in mountainous areas, which comprises the following steps:
S1, acquiring a color image of the monitoring area;
S2, obtaining a first set U1 of vegetation pixel points in the color image by adopting a threshold segmentation algorithm;
S3, inputting the color image into a pre-trained neural network model for image segmentation processing, and acquiring a second set U2 of vegetation pixel points in the color image;
S4, obtaining the intersection U3 of U1 and U2;
S5, obtaining the complementary set U4 of U3 with respect to U2, and forming an image to be segmented from the elements in U4;
S6, carrying out image segmentation processing on the image to be segmented by adopting a threshold segmentation algorithm to obtain a set U5 of foreground pixel points and a set U6 of background pixel points;
S7, respectively calculating the pixel value means of U3, U5 and U6, denoted in turn as clustc3, clustc5 and clustc6;
S8, calculating the absolute value dist3,5 of the difference between clustc3 and clustc5, and the absolute value dist3,6 of the difference between clustc3 and clustc6;
S9, if dist3,5 is less than dist3,6, merging U5 and U3 to obtain the final set Ufinal of vegetation pixel points; if dist3,5 is greater than or equal to dist3,6, merging U6 and U3 to obtain the final set Ufinal of vegetation pixel points;
S10, based on Ufinal, calculating the current vegetation coverage rate vegecoidx of the monitoring area, and comparing vegecoidx with the historical vegetation coverage rate to obtain the change in vegetation coverage rate.
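The set logic of steps S4 to S9 can be sketched as follows. This is an illustrative sketch, not the patented implementation: it assumes the two vegetation sets are given as boolean masks, and `segment` is a hypothetical stand-in for the threshold-segmentation routine of step S6.

```python
import numpy as np

def merge_vegetation_sets(u1, u2, img, segment):
    """Steps S4-S9: u1, u2 are boolean vegetation masks obtained by two
    different methods; img holds per-pixel intensities; segment(img, mask)
    is any threshold-segmentation routine returning a foreground mask."""
    u3 = u1 & u2                       # S4: intersection U3
    u4 = u2 & ~u3                      # S5: complement of U3 within U2
    fg = segment(img, u4)              # S6: split U4 into foreground/background
    u5, u6 = u4 & fg, u4 & ~fg
    c3 = img[u3].mean()                # S7: pixel-value means clustc3/5/6
    c5 = img[u5].mean() if u5.any() else np.inf
    c6 = img[u6].mean() if u6.any() else np.inf
    # S8-S9: merge into U3 whichever subset lies closer in mean pixel value
    return u3 | (u5 if abs(c3 - c5) < abs(c3 - c6) else u6)
```

For a toy image in which vegetation pixels are dark, the subset of U4 whose mean is nearer clustc3 is absorbed into Ufinal, as in step S9.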
Preferably, the obtaining of the first set U1 of vegetation pixel points in the color image by adopting a threshold segmentation algorithm comprises the following steps:
converting the color image into an RGB color space, and respectively obtaining a red component image R, a green component image G and a blue component image B of the color image;
respectively calculating the segmentation parameters of each pixel point in the color image to obtain an image to be processed;
performing threshold segmentation processing on the image to be processed by using the otsu algorithm, acquiring the vegetation pixel points contained in the image to be processed, and storing all the vegetation pixel points contained in the image to be processed into the first set U1.
Preferably, the calculating the segmentation parameters of each pixel point in the color image respectively to obtain the image to be processed includes:
establishing a blank image with the same resolution as the color image, wherein pixel points in the blank image correspond to pixel points in the color image one by one;
recording the segmentation parameter of pixel point pix in the color image as cutedx_pix, and recording the pixel point corresponding to pix in the blank image as pix';
calculating cutedx_pix using the following formula:
[equation image in the original, not reproduced: cutedx_pix is a weighted fusion of R_pix, G_pix and B_pix, controlled by α, combined with L_pix]
wherein α represents a preset proportion parameter, α ∈ (0,1); R_pix, G_pix and B_pix respectively represent the pixel values of the pixel points corresponding to pix in the red component image R, the green component image G and the blue component image B; L_pix represents the pixel value of the pixel point corresponding to pix in the L component image, where the L component image is the image corresponding to the lightness component of the color image in the Lab color space;
taking the value of cutedx_pix as the pixel value of the pixel point pix' in the blank image;
and calculating the pixel value of each pixel point in the blank image by adopting the mode so as to obtain the image to be processed.
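A minimal sketch of this U1 branch follows. The patent's fusion formula is given only as an equation image, so the combination below (an excess-green term 2G − R − B weighted by α, blended with a mean-of-channels lightness standing in for the Lab L component) is an assumption made purely for illustration; the Otsu threshold is implemented directly.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    # plain Otsu: pick the histogram split maximizing between-class variance
    hist, edges = np.histogram(values, bins=bins)
    hist = hist.astype(float)
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)
    w1 = w0[-1] - w0
    s0 = np.cumsum(hist * centers)
    m0 = s0 / np.maximum(w0, 1e-12)
    m1 = (s0[-1] - s0) / np.maximum(w1, 1e-12)
    return centers[np.argmax(w0 * w1 * (m0 - m1) ** 2)]

def first_vegetation_set(rgb, alpha=0.5):
    """Sketch of the U1 branch. The fusion below is an ASSUMED form;
    the patent's actual cutedx formula is an equation image and is not
    reproduced here."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    lightness = (r + g + b) / 3.0             # crude stand-in for Lab L
    cutedx = alpha * (2 * g - r - b) + (1 - alpha) * lightness
    t = otsu_threshold(cutedx.ravel())
    return cutedx > t                          # vegetation on the greener side
```

With this assumed fusion, green pixels score high on the excess-green term and land above the Otsu threshold.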
Preferably, the inputting of the color image into a pre-trained neural network model for image segmentation processing to obtain the second set U2 of vegetation pixel points in the color image comprises the following steps:
carrying out graying processing on the color image to obtain a grayscale image;
carrying out noise reduction processing on the gray level image to obtain a noise reduction image;
enhancing the noise-reduced image to obtain an enhanced image;
inputting the enhanced image into a pre-trained neural network model for image segmentation processing, obtaining the second set of vegetation pixel points in the enhanced image, and recording it as U2.
Preferably, the calculating, based on Ufinal, of the current vegetation coverage rate vegecoidx of the monitoring area includes:
vegecoidx is calculated using the following formula:
vegecoidx = numUfinal / numColor
where numUfinal represents the total number of pixel points contained in the set Ufinal, and numColor represents the total number of pixel points contained in the color image.
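With Ufinal represented as a boolean mask over the color image, the coverage formula reduces to a one-liner; this sketch assumes that mask convention, so that the mask's size equals numColor.

```python
import numpy as np

def vegetation_coverage(u_final):
    """S10: vegecoidx = numUfinal / numColor, with Ufinal as a boolean mask
    whose size equals the total pixel count of the color image."""
    return float(u_final.sum()) / u_final.size
```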
On the other hand, the invention also provides a system for monitoring the vegetation coverage rate in mountainous areas, which comprises an image acquisition module, a first set acquisition module, a second set acquisition module, an intersection acquisition module, a to-be-segmented image acquisition module, an image segmentation module, a mean value calculation module, a difference value calculation module, a merging module and a coverage rate processing module;
the image acquisition module is used for acquiring a color image of the monitoring area;
the first set acquisition module is used for acquiring the first set U1 of vegetation pixel points in the color image by adopting a threshold segmentation algorithm;
the second set acquisition module is used for inputting the color image into a pre-trained neural network model for image segmentation processing to acquire the second set U2 of vegetation pixel points in the color image;
the intersection acquisition module is used for acquiring the intersection U3 of U1 and U2;
the to-be-segmented image acquisition module is used for acquiring the complementary set U4 of U3 with respect to U2, and for forming an image to be segmented from the elements in U4;
the image segmentation module is used for carrying out image segmentation processing on the image to be segmented by adopting a threshold segmentation algorithm to obtain a set U5 of foreground pixel points and a set U6 of background pixel points;
the mean value calculation module is used for respectively calculating the pixel value means of U3, U5 and U6, denoted in turn as clustc3, clustc5 and clustc6;
the difference value calculation module is used for calculating the absolute value dist3,5 of the difference between clustc3 and clustc5, and the absolute value dist3,6 of the difference between clustc3 and clustc6;
the merging module is used for merging U5 and U3 to obtain the final set Ufinal of vegetation pixel points when dist3,5 is less than dist3,6, and for merging U6 and U3 to obtain the final set Ufinal of vegetation pixel points when dist3,5 is greater than or equal to dist3,6;
the coverage rate processing module is used for calculating, based on Ufinal, the current vegetation coverage rate vegecoidx of the monitoring area, and for comparing vegecoidx with the historical vegetation coverage rate to obtain the change in vegetation coverage rate.
Compared with the prior art, the invention has the advantages that:
in the prior art, only a single threshold segmentation algorithm is adopted to directly obtain vegetation pixel points from a color image, and because the difference between the bare soil pixel points and the vegetation pixel points is small, the single threshold segmentation is difficult to distinguish the bare soil pixel points and the vegetation pixel points, the obtained result is not accurate enough.
The present application, by contrast, obtains U1 through threshold segmentation and U2 through a neural network model, and takes their intersection to obtain a set U3 of reliably identified vegetation pixel points. Because the difference between bare-soil pixel points and vegetation pixel points in the color image is small, most of the pixel points in the complementary set U4 belong to either bare soil or vegetation; the image to be segmented formed from U4 is therefore segmented into foreground and background pixel points, and the differences between pixel value means are used to judge whether it is the foreground pixel points or the background pixel points that belong to vegetation. In this way the complete set of vegetation pixel points is obtained, the influence of bare-soil pixel points on the statistical result is effectively avoided, and the accuracy of the method is effectively improved. Obtaining vegetation pixel points in different ways and taking the intersection of the results as the accurately identified vegetation pixel points helps improve the accuracy of the identification result; moreover, the subsequent processing operates only on the pixel points in U4, and since U4 is a complementary set containing few pixel points, the number of pixel points participating in the calculation is small and the computational efficiency of the method is effectively improved.
Drawings
The invention is further illustrated by means of the attached drawings, but the embodiments in the drawings do not constitute any limitation to the invention, and for a person skilled in the art, other drawings can be obtained on the basis of the following drawings without inventive effort.
Fig. 1 is a diagram of an exemplary embodiment of a method for monitoring vegetation coverage in a mountain area according to the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
As shown in fig. 1, the present invention provides a method for monitoring coverage rate of vegetation in mountainous areas, which comprises:
S1, acquiring a color image of the monitoring area;
S2, obtaining a first set U1 of vegetation pixel points in the color image by adopting a threshold segmentation algorithm;
S3, inputting the color image into a pre-trained neural network model for image segmentation processing, and acquiring a second set U2 of vegetation pixel points in the color image;
S4, obtaining the intersection U3 of U1 and U2;
S5, obtaining the complementary set U4 of U3 with respect to U2, and forming an image to be segmented from the elements in U4;
S6, carrying out image segmentation processing on the image to be segmented by adopting a threshold segmentation algorithm to obtain a set U5 of foreground pixel points and a set U6 of background pixel points;
S7, respectively calculating the pixel value means of U3, U5 and U6, denoted in turn as clustc3, clustc5 and clustc6;
S8, calculating the absolute value dist3,5 of the difference between clustc3 and clustc5, and the absolute value dist3,6 of the difference between clustc3 and clustc6;
S9, if dist3,5 is less than dist3,6, merging U5 and U3 to obtain the final set Ufinal of vegetation pixel points; if dist3,5 is greater than or equal to dist3,6, merging U6 and U3 to obtain the final set Ufinal of vegetation pixel points;
S10, based on Ufinal, calculating the current vegetation coverage rate vegecoidx of the monitoring area, and comparing vegecoidx with the historical vegetation coverage rate to obtain the change in vegetation coverage rate.
Preferably, the color image of the monitoring area in the present application is acquired by an unmanned aerial vehicle. Acquisition by drone effectively avoids the influence of the atmosphere on imaging, so the color image obtained contains richer detail information; on the other hand, obtaining the color image by drone is cheaper than satellite photography.
Preferably, the obtaining of the first set U1 of vegetation pixel points in the color image by adopting a threshold segmentation algorithm comprises the following steps:
converting the color image into an RGB color space, and respectively obtaining a red component image R, a green component image G and a blue component image B of the color image;
respectively calculating the segmentation parameters of each pixel point in the color image to obtain an image to be processed;
performing threshold segmentation processing on the image to be processed by using the otsu algorithm, acquiring the vegetation pixel points contained in the image to be processed, and storing all the vegetation pixel points contained in the image to be processed into the first set U1.
Preferably, the calculating the segmentation parameters of each pixel point in the color image respectively to obtain the image to be processed includes:
establishing a blank image with the same resolution as the color image, wherein pixel points in the blank image correspond to pixel points in the color image one by one;
recording the segmentation parameter of pixel point pix in the color image as cutedx_pix, and recording the pixel point corresponding to pix in the blank image as pix';
calculating cutedx_pix using the following formula:
[equation image in the original, not reproduced: cutedx_pix is a weighted fusion of R_pix, G_pix and B_pix, controlled by α, combined with L_pix]
wherein α represents a preset proportion parameter, α ∈ (0,1); R_pix, G_pix and B_pix respectively represent the pixel values of the pixel points corresponding to pix in the red component image R, the green component image G and the blue component image B; L_pix represents the pixel value of the pixel point corresponding to pix in the L component image, where the L component image is the image corresponding to the lightness component of the color image in the Lab color space;
taking the value of cutedx_pix as the pixel value of the pixel point pix' in the blank image;
and calculating the pixel value of each pixel point in the blank image by adopting the mode so as to obtain the image to be processed.
In the prior art, threshold segmentation is generally performed directly on the G component image of the color image, but the result obtained in this way is not accurate, because a single threshold segmentation can hardly separate bare-soil pixel points from vegetation pixel points in the G component image alone. Therefore, the present application performs a weighted fusion of the red, green and blue component images and combines the result with the L component image, obtaining an image to be processed in which bare-soil pixel points and vegetation pixel points are easy to separate; the pixel value of each pixel point in the image to be processed is the value of its segmentation parameter. The L component image is included to avoid the influence of uneven illumination on the segmentation result, further improving its accuracy. The embodiment of the invention is therefore beneficial to improving the accuracy of threshold segmentation.
The one-to-one correspondence between pixel points in the blank image and pixel points in the color image means that, when the blank image and the color image are placed in the same coordinate system, the pixel point at position (1,1) in the blank image corresponds to the pixel point at position (1,1) in the color image; the two pixel points occupy the same relative position in their respective images.
preferably, the color image is input into a pre-trained neural network model for image segmentation processing, and a second set U of vegetation pixel points in the color image is obtained2The method comprises the following steps:
carrying out graying processing on the color image to obtain a grayscale image;
carrying out noise reduction processing on the gray level image to obtain a noise reduction image;
enhancing the noise-reduced image to obtain an enhanced image;
inputting the enhanced image into a pre-trained neural network model for image segmentation processing, obtaining the second set of vegetation pixel points in the enhanced image, and recording it as U2.
The color image is subjected to noise reduction and enhancement so that noise points are removed while more edge detail information is retained in the enhanced image, which improves the accuracy with which the neural network model identifies the enhanced image and obtains the vegetation pixel points it contains.
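The graying, noise-reduction and enhancement chain can be sketched with simple stand-ins; the luminance graying is standard, while the 3×3 median filter and the min-max contrast stretch are substitutes chosen for illustration (the patent's own adaptive denoising is described below, and its enhancement step is not specified).

```python
import numpy as np

def to_gray(rgb):
    # standard Rec. 601 luminance graying
    return rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114

def median_denoise(gray):
    # stand-in denoiser: 3x3 median filter with edge padding
    h, w = gray.shape
    padded = np.pad(gray, 1, mode='edge')
    windows = np.stack([padded[i:i + h, j:j + w]
                        for i in range(3) for j in range(3)])
    return np.median(windows, axis=0)

def contrast_stretch(gray):
    # stand-in enhancement: linear min-max stretch to [0, 255]
    lo, hi = float(gray.min()), float(gray.max())
    return (gray - lo) / max(hi - lo, 1e-12) * 255.0
```

The chain is applied in order: `contrast_stretch(median_denoise(to_gray(rgb)))`.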
Preferably, the performing noise reduction processing on the grayscale image to obtain a noise-reduced image includes:
segmenting the gray level image into numQ sub-images by adopting a preset segmentation algorithm;
for the j-th sub-image, j ∈ [1, numQ], noise reduction processing is performed as follows:
calculating the variance voga_j of the gradient amplitudes of the pixel points in the j-th sub-image; if voga_j is smaller than a preset variance threshold, performing noise reduction on the j-th sub-image in the following way to obtain the noise-reduced sub-image:
for each pixel point pixel in the j-th sub-image, the pixel points in its bl × bl neighborhood are stored into the set neU_pixel;
the pixel value of pixel after noise reduction is calculated using the following formula:
a_pixel = ( Σ_{k ∈ aneU_pixel} f_k ) / numofaneU
where a_pixel represents the pixel value of pixel after noise reduction; aneU_pixel represents the set of pixel points remaining after the pixel point with the largest pixel value and the pixel point with the smallest pixel value are deleted from neU_pixel; f_k represents the pixel value of pixel point k in aneU_pixel; and numofaneU represents the total number of pixel points contained in aneU_pixel;
each pixel point in the j-th sub-image is processed with this formula to obtain the noise-reduced sub-image;
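The trimmed-mean formula for low-variance sub-images can be implemented directly; this sketch assumes a square bl × bl neighborhood with edge padding at the image borders (the patent does not specify border handling).

```python
import numpy as np

def trimmed_mean_denoise(sub, bl=3):
    """Improved mean filtering for a low-variance sub-image: average each
    bl x bl neighbourhood neU after deleting its single largest and single
    smallest pixel value, so noise spikes never enter the mean."""
    pad = bl // 2
    padded = np.pad(sub.astype(float), pad, mode='edge')
    out = np.empty(sub.shape)
    for y in range(sub.shape[0]):
        for x in range(sub.shape[1]):
            neu = np.sort(padded[y:y + bl, x:x + bl].ravel())
            out[y, x] = neu[1:-1].mean()   # drop min and max, then average
    return out
```

A single bright noise spike is the neighborhood maximum everywhere it appears, so it is always one of the two deleted values and leaves the mean untouched.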
if voga_j is greater than or equal to the preset variance threshold, noise reduction processing is performed on the j-th sub-image in the following way to obtain the noise-reduced sub-image:
performing wavelet decomposition on the jth sub-image to obtain a wavelet decomposition high-frequency image and a wavelet decomposition low-frequency image;
for pixel points in the wavelet decomposition high-frequency image, the following improved threshold processing mode is adopted for processing:
[equation image in the original, not reproduced: the improved threshold function maps xhp(u, v) to axhp(u, v) using the processing thresholds ya1 and ya2 and the selection function xs]
where xhp(u, v) represents the pixel value of the pixel point at (u, v) in the wavelet-decomposed high-frequency image before thresholding, axhp(u, v) represents the pixel value of that pixel point after thresholding, ya1 and ya2 represent preset processing thresholds, and xs represents a selection function:
[equation image in the original, not reproduced: xs is defined in terms of the selection threshold th, the control parameter cm, and the standard deviations z1 and z2]
where th is a preset selection threshold, cm represents a control parameter with cm ∈ (0.1, 0.9), and z1 and z2 respectively represent the standard deviations of the pixel values of the pixel points in the 4-neighborhood and the 8-neighborhood of the pixel point at (u, v) in the wavelet-decomposed high-frequency image;
performing threshold processing on each pixel point in the wavelet decomposition high-frequency image by adopting the processing formula to obtain a processed wavelet decomposition high-frequency image;
for pixel points in the wavelet decomposition low-frequency image, the following method is adopted for processing:
[equation image in the original, not reproduced: bxlp(a, b) is a weighted combination over the neighborhood neU_{a,b}, involving the distances dst, their variance fcdst, and the gradient amplitudes ylp]
where bxlp(a, b) represents the pixel value, after the above processing, of the pixel point at position (a, b) in the wavelet-decomposed low-frequency image; neU_{a,b} represents the set of coordinates of the pixel points in the tl × tl neighborhood of the pixel point at (a, b) in the wavelet-decomposed low-frequency image; (a1, b1) represents an element of neU_{a,b}; dst[(a1, b1), (a, b)] represents the distance between the pixel point at position (a, b) and the pixel point at position (a1, b1) in the wavelet-decomposed low-frequency image; fcdst represents the variance of the distances between the pixel point at position (a, b) and the pixel points corresponding to all the elements of neU_{a,b}; and ylp(a, b) and ylp(a1, b1) respectively represent the gradient amplitudes of the pixel points at positions (a, b) and (a1, b1) in the wavelet-decomposed low-frequency image;
[equation image in the original, not reproduced]
numneU_{a,b} represents the total number of elements contained in neU_{a,b};
processing each pixel point in the wavelet decomposition low-frequency image by adopting the processing formula to obtain a processed wavelet decomposition low-frequency image;
performing wavelet reconstruction on the processed wavelet decomposition low-frequency image and the processed wavelet decomposition high-frequency image to obtain a sub-image subjected to noise reduction;
and forming a noise reduction image by all the sub-images subjected to noise reduction processing.
Existing noise reduction approaches generally reduce noise globally: the same noise reduction function is applied uniformly to all pixel points. This easily causes excessive Gaussian blur in the noise-reduced image, with serious loss of detail information. Therefore, the present application divides the grayscale image into a number of sub-images with a preset segmentation algorithm and then performs noise reduction on each sub-image separately; this approach is more targeted and strikes a good balance between noise reduction time and noise reduction effect.
Specifically, when a sub-image is denoised, its particular characteristics are taken into account and a suitable noise reduction method is selected accordingly. When the variance of the gradient amplitudes of the pixel points in the sub-image is smaller than the variance threshold, that is, when the pixel points in the sub-image differ little from one another, the sub-image is denoised with an improved mean filter: the pixel point with the largest pixel value and the pixel point with the smallest pixel value are excluded from the neighborhood set of the pixel point currently being processed, and the mean of the pixel values of the remaining pixel points is taken as its denoised pixel value. This largely avoids the influence of noise points on the result, because noise points are precisely the points with the maximum or minimum pixel values.
In addition, when the variance of the gradient amplitudes of the pixel points in the sub-image is greater than or equal to the variance threshold, that is, when the pixel points in the sub-image differ greatly, Gaussian filtering would cause serious loss of detail information, so an improved wavelet denoising method is used to denoise the sub-image instead. Specifically, when the wavelet-decomposed high-frequency image is processed, the two processing thresholds select different processing functions for pixel points under different conditions, which strengthens the pertinence of the processing and improves the accuracy of the denoising result. In setting the processing function, not only do the processing thresholds participate in the operation, but a selection function and a control parameter are also introduced, so that the processing function adapts to different sub-images, further enhancing its pertinence and adaptivity.
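The patent's improved threshold function and low-frequency weighting are given only as equation images, so the sketch below substitutes a standard single-level Haar decomposition with soft thresholding on the high-frequency bands; it illustrates only the decompose-threshold-reconstruct structure of the high-variance branch, not the patented functions.

```python
import numpy as np

def haar_dwt2(img):
    # single-level 2-D Haar decomposition (image dimensions must be even)
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    ll = (a + b + c + d) / 4          # low-frequency approximation
    lh = (a - b + c - d) / 4          # horizontal detail
    hl = (a + b - c - d) / 4          # vertical detail
    hh = (a - b - c + d) / 4          # diagonal detail
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    # exact inverse of haar_dwt2 (wavelet reconstruction)
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll - lh + hl - hh
    out[1::2, 0::2] = ll + lh - hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

def soft_threshold(band, t):
    # standard soft threshold, standing in for the patent's improved function
    return np.sign(band) * np.maximum(np.abs(band) - t, 0.0)

def wavelet_denoise(sub, t=5.0):
    ll, lh, hl, hh = haar_dwt2(sub)
    return haar_idwt2(ll, soft_threshold(lh, t),
                      soft_threshold(hl, t), soft_threshold(hh, t))
```

With the threshold set to zero the decomposition reconstructs the sub-image exactly; thresholding shrinks only the detail bands, which is where high-frequency noise lives.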
Preferably, the segmenting the grayscale image into numQ sub-images by using a preset segmentation algorithm includes:
segmenting the gray level image in an iterative mode:
storing the sub-images obtained by the (n-1)-th division into the set imsegU_n; for each sub-image imseg_n contained in imsegU_n, judging whether it needs to be further segmented as follows:
calculating the division index of imseg_n:
imsegidx_n = w1 × qval[sum(imseg_n)] + w2 × qval[gvva(imseg_n) × (ma(imseg_n) − mi(imseg_n))]
where imsegidx_n represents the division index of imseg_n; w1 and w2 represent weight parameters; qval represents a value function, indicating that the numerical value in the brackets is taken to participate in the operation; sum(imseg_n) represents the total number of pixel points contained in imseg_n; gvva(imseg_n) represents the variance of the pixel values of all the pixel points in imseg_n; and ma(imseg_n) and mi(imseg_n) respectively represent the maximum and minimum pixel values in imseg_n;
if the division index is greater than a preset division index threshold, imseg_n needs to be further segmented;
all sub-images that need further segmentation are stored in the set simtrfU_n; each sub-image simtrf_n in simtrfU_n is divided into Q sub-images of equal area, and the resulting sub-images are stored in the set imsegU_{n+1}, which represents the set of sub-images obtained by the n-th division.
In the above embodiment of the invention, when the grayscale image is divided into sub-images, the image is not directly cut into a number of equal-area sub-images as in the conventional method; instead it is divided iteratively. For each sub-image obtained in the previous iteration, the division index is calculated and compared with the division-index threshold to decide whether the sub-image needs further division. A larger division index indicates that the current sub-image has a larger area and larger differences between its pixel points, so further division is appropriate; when the division index is small, the division stops. This prevents the resulting sub-images from becoming too small, while keeping the differences between the pixel points within each final sub-image as small as possible. Consequently, in the subsequent denoising step, wavelet denoising does not have to be invoked too often, and because the pixel points within a sub-image differ little, the result obtained by a single denoising method is more accurate, effectively improving both the efficiency and the accuracy of the subsequent denoising processing.
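The iterative division procedure above can be sketched as follows. Taking the value function qval as the identity, computing the gradient-amplitude variance with np.gradient, using weights w1 = w2 = 0.5, and splitting into a Q×Q grid are all assumptions made for illustration; the patent leaves these open.

```python
import numpy as np

def division_index(sub, w1=0.5, w2=0.5):
    """Division index: weighted sum of pixel count and
    (gradient-amplitude variance) x (pixel-value range)."""
    gy, gx = np.gradient(sub.astype(np.float64))
    grad_amp = np.hypot(gx, gy)
    return w1 * sub.size + w2 * grad_amp.var() * (sub.max() - sub.min())

def iterative_split(img, threshold, Q=2):
    """Split img into sub-images until every sub-image's division
    index is at or below threshold (Q x Q grid split assumed)."""
    pending, done = [img], []
    while pending:
        sub = pending.pop()
        h, w = sub.shape
        if division_index(sub) > threshold and h >= 2 * Q and w >= 2 * Q:
            hs, ws = h // Q, w // Q
            for i in range(Q):
                for j in range(Q):
                    pending.append(sub[i * hs:(i + 1) * hs if i < Q - 1 else h,
                                       j * ws:(j + 1) * ws if j < Q - 1 else w])
        else:
            done.append(sub)
    return done
```

A flat sub-image has zero gradient variance and zero value range, so only its area drives further division, which is exactly the stopping behavior the text describes.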
Preferably, the enhancing the noise-reduced image to obtain an enhanced image includes:
and adopting a Gamma correction algorithm to perform enhancement processing on the noise-reduced image to obtain an enhanced image.
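A minimal sketch of Gamma correction on an 8-bit grayscale image follows; the exponent value used here is an assumption, as the patent does not specify it.

```python
import numpy as np

def gamma_correct(img, gamma=0.8):
    """Gamma correction of an 8-bit grayscale image: normalize to [0, 1],
    raise to the power gamma, rescale to [0, 255]. gamma < 1 brightens
    dark regions; the value 0.8 is illustrative only."""
    norm = img.astype(np.float64) / 255.0
    return np.clip(np.rint(255.0 * norm ** gamma), 0, 255).astype(np.uint8)
```

With gamma = 1 the mapping is the identity, and gamma < 1 lifts mid-tones, which is the usual enhancement goal for under-exposed mountain imagery.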
Preferably, the calculating, based on Ufinal, of the current vegetation coverage rate vegecoidx of the monitoring area comprises:
calculating vegecoidx using the following formula:
vegecoidx = numUfinal / numColor
in the formula, numUfinal represents the number of elements in the set Ufinal, and numColor represents the total number of pixels contained in the color image.
Preferably, the comparing the vegecoidx with the historical vegetation coverage to obtain the variation of the vegetation coverage includes:
recording vegecoidx as the i-th vegetation coverage rate vegecoidx_i, its rate of change var relative to the (i-1)-th vegetation coverage rate vegecoidx_(i-1) is calculated by the following formula:
var = (vegecoidx_i − vegecoidx_(i-1)) / vegecoidx_(i-1)
In addition to calculating the rate of change, the variation of the vegetation coverage rate can also be obtained by, for example, plotting a vegetation-coverage change curve against the historical vegetation coverage values.
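The coverage and change-rate computations can be sketched directly from the definitions above. The change-rate formula is rendered as an image in the original publication, so the standard relative rate of change is assumed here.

```python
def vegetation_coverage(num_u_final, num_color):
    """vegecoidx = |Ufinal| / (total number of pixels in the color image)."""
    return num_u_final / num_color

def coverage_change_rate(cov_i, cov_prev):
    """Relative change of the i-th coverage against the (i-1)-th coverage.
    Assumed form: (cov_i - cov_prev) / cov_prev."""
    return (cov_i - cov_prev) / cov_prev
```

For instance, if 250 of 1000 pixels are vegetation now against a historical coverage of 0.2, the coverage is 0.25 and the change rate is +25 percent.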
Preferably, the segmenting of the image to be segmented by a threshold segmentation algorithm to obtain the set U5 of foreground pixel points and the set U6 of background pixel points comprises:
performing image segmentation on the image to be segmented with the two-dimensional Otsu algorithm to obtain the set U5 of foreground pixel points and the set U6 of background pixel points.
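For reference, the threshold-selection principle can be illustrated with the classic one-dimensional Otsu method, which chooses the gray level maximizing the between-class variance. The patent applies the two-dimensional variant, which additionally uses a neighborhood-mean axis, so this is a simplified sketch rather than the claimed algorithm.

```python
import numpy as np

def otsu_threshold(img):
    """Classic 1-D Otsu: pick the gray level that maximizes the
    between-class variance of the two resulting classes."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    total = hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = hist[:t].sum(), hist[t:].sum()
        if w0 == 0 or w1 == 0:
            continue  # one class empty: split is degenerate
        mu0 = (np.arange(t) * hist[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * hist[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2 / total ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t  # pixels >= best_t form one class, pixels < best_t the other
```

On a cleanly bimodal histogram the chosen threshold falls between the two modes, separating foreground from background exactly.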
On the other hand, the invention also provides a system for monitoring the vegetation coverage rate of a mountainous area, comprising an image acquisition module, a first set acquisition module, a second set acquisition module, an intersection acquisition module, a to-be-segmented image acquisition module, an image segmentation module, a mean calculation module, a difference calculation module, a merging module and a coverage processing module;
the image acquisition module is used for acquiring a color image of the monitoring area;
the first set acquisition module is used for acquiring a first set U1 of vegetation pixel points in the color image by a threshold segmentation algorithm;
the second set acquisition module is used for inputting the color image into a pre-trained neural network model for image segmentation, acquiring a second set U2 of vegetation pixel points in the color image;
the intersection acquisition module is used for acquiring the intersection U3 of U1 and U2;
the to-be-segmented image acquisition module is used for acquiring the complement U4 of U3 in U2, and forming the image to be segmented from the elements of U4;
the image segmentation module is used for performing image segmentation on the image to be segmented by a threshold segmentation algorithm to obtain a set U5 of foreground pixel points and a set U6 of background pixel points;
the mean calculation module is used for calculating the pixel-value means of U3, U5 and U6 respectively, denoted in turn clustc3, clustc5, clustc6;
the difference calculation module is used for calculating the absolute value dist3,5 of the difference between clustc3 and clustc5, and the absolute value dist3,6 of the difference between clustc3 and clustc6;
the merging module is used for merging U5 and U3 into the final set Ufinal of vegetation pixel points when dist3,5 is less than dist3,6, and for merging U6 and U3 into the final set Ufinal of vegetation pixel points when dist3,5 is greater than or equal to dist3,6;
the coverage processing module is used for calculating, based on Ufinal, the current vegetation coverage rate vegecoidx of the monitoring area, and comparing vegecoidx with the historical vegetation coverage rate to obtain the variation of the vegetation coverage rate.
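The decision logic of the mean, difference, and merging modules (steps S7-S9) can be sketched as follows. Representing the pixel sets as sets of (row, col) coordinates over a 2-D image is an illustrative choice, not the patent's data structure.

```python
import numpy as np

def merge_vegetation(img, u3, u5, u6):
    """img: 2-D gray array; u3/u5/u6: sets of (row, col) coordinates.
    Returns Ufinal: whichever of U5 / U6 has a pixel-value mean closer
    to U3's mean is treated as vegetation and merged with U3."""
    mean_of = lambda s: float(np.mean([img[p] for p in s]))
    c3, c5, c6 = mean_of(u3), mean_of(u5), mean_of(u6)
    return u3 | (u5 if abs(c3 - c5) < abs(c3 - c6) else u6)
```

The intuition is that U3 contains confidently identified vegetation, so the ambiguous sub-cluster whose mean brightness is closer to U3 is more likely to also be vegetation.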
It should be noted that, the system is used for implementing the functions of the method, and each module in the apparatus corresponds to the steps of the method, and can implement different embodiments of the method.
While embodiments of the invention have been shown and described, it will be understood by those skilled in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.

Claims (4)

1. A method for monitoring vegetation coverage in mountainous areas is characterized by comprising the following steps:
s1, acquiring a color image of the monitoring area;
S2, acquiring a first set U1 of vegetation pixel points in the color image by a threshold segmentation algorithm;
S3, inputting the color image into a pre-trained neural network model for image segmentation, acquiring a second set U2 of vegetation pixel points in the color image;
S4, acquiring the intersection U3 of U1 and U2;
S5, acquiring the complement U4 of U3 in U2, and forming the image to be segmented from the elements of U4;
S6, performing image segmentation on the image to be segmented by a threshold segmentation algorithm to obtain a set U5 of foreground pixel points and a set U6 of background pixel points;
S7, calculating the pixel-value means of U3, U5 and U6 respectively, denoted in turn clustc3, clustc5, clustc6;
S8, calculating the absolute value dist3,5 of the difference between clustc3 and clustc5, and the absolute value dist3,6 of the difference between clustc3 and clustc6;
S9, if dist3,5 is less than dist3,6, merging U5 and U3 into the final set Ufinal of vegetation pixel points; if dist3,5 is greater than or equal to dist3,6, merging U6 and U3 into the final set Ufinal of vegetation pixel points;
S10, calculating, based on Ufinal, the current vegetation coverage rate vegecoidx of the monitoring area, and comparing vegecoidx with the historical vegetation coverage rate to obtain the variation of the vegetation coverage rate;
wherein the acquiring of the first set U1 of vegetation pixel points in the color image by a threshold segmentation algorithm comprises:
converting the color image into the RGB color space, and respectively obtaining the red component image R, the green component image G and the blue component image B of the color image;
respectively calculating the segmentation parameter of each pixel point in the color image to obtain the image to be processed;
performing threshold segmentation on the image to be processed with the Otsu algorithm, acquiring the vegetation pixel points contained in the image to be processed, and storing all of them into the first set U1;
wherein the calculating of the segmentation parameter of each pixel point in the color image to obtain the image to be processed comprises:
establishing a blank image with the same resolution as the color image, the pixel points of the blank image corresponding one-to-one to the pixel points of the color image;
recording the segmentation parameter of pixel point pix in the color image as cutedx_pix, and recording the pixel point corresponding to pix in the blank image as pix′;
calculating cutedx_pix with the following formula:
[formula rendered as image FDA0003248602230000021 in the original publication]
where α represents a preset proportion parameter, α ∈ (0,1); R_pix, G_pix and B_pix respectively represent the pixel values of the pixel points corresponding to pix in the red component image R, the green component image G and the blue component image B; L_pix represents the pixel value of the pixel point corresponding to pix in the L component image, the L component image being the image corresponding to the luminance component of the color image in the Lab color space;
taking the value of cutedx_pix as the pixel value of the pixel point pix′ in the blank image;
calculating the pixel value of every pixel point of the blank image in this way, thereby obtaining the image to be processed.
2. The method for monitoring the vegetation coverage rate of a mountainous area according to claim 1, wherein the inputting of the color image into a pre-trained neural network model for image segmentation to acquire the second set U2 of vegetation pixel points in the color image comprises:
performing graying processing on the color image to obtain a grayscale image;
performing noise reduction on the grayscale image to obtain a noise-reduced image;
enhancing the noise-reduced image to obtain an enhanced image;
inputting the enhanced image into the pre-trained neural network model for image segmentation, and obtaining the second set of vegetation pixel points in the enhanced image, denoted U2.
3. The method for monitoring the vegetation coverage rate of a mountainous area according to claim 1, wherein the calculating, based on Ufinal, of the current vegetation coverage rate vegecoidx of the monitoring area comprises:
calculating vegecoidx using the following formula:
vegecoidx = numUfinal / numColor
in the formula, numUfinal represents the number of elements in the set Ufinal, and numColor represents the total number of pixels contained in the color image.
4. A monitoring system for vegetation coverage in mountainous areas is characterized by comprising an image acquisition module, a first set acquisition module, a second set acquisition module, an intersection acquisition module, an image to be segmented acquisition module, an image segmentation module, a mean value calculation module, a difference value calculation module, a merging module and a coverage processing module;
the image acquisition module is used for acquiring a color image of a monitoring area;
the first set acquisition module is used for acquiring a first set U1 of vegetation pixel points in the color image by a threshold segmentation algorithm;
the second set acquisition module is used for inputting the color image into a pre-trained neural network model for image segmentation, acquiring a second set U2 of vegetation pixel points in the color image;
the intersection acquisition module is used for acquiring the intersection U3 of U1 and U2;
the to-be-segmented image acquisition module is used for acquiring the complement U4 of U3 in U2, and forming the image to be segmented from the elements of U4;
the image segmentation module is used for performing image segmentation on the image to be segmented by a threshold segmentation algorithm to obtain a set U5 of foreground pixel points and a set U6 of background pixel points;
the mean calculation module is used for calculating the pixel-value means of U3, U5 and U6 respectively, denoted in turn clustc3, clustc5, clustc6;
the difference calculation module is used for calculating the absolute value dist3,5 of the difference between clustc3 and clustc5, and the absolute value dist3,6 of the difference between clustc3 and clustc6;
the merging module is used for merging U5 and U3 into the final set Ufinal of vegetation pixel points when dist3,5 is less than dist3,6, and for merging U6 and U3 into the final set Ufinal of vegetation pixel points when dist3,5 is greater than or equal to dist3,6;
the coverage processing module is used for calculating, based on Ufinal, the current vegetation coverage rate vegecoidx of the monitoring area, and comparing vegecoidx with the historical vegetation coverage rate to obtain the variation of the vegetation coverage rate;
wherein the acquiring of the first set U1 of vegetation pixel points in the color image by a threshold segmentation algorithm comprises:
converting the color image into the RGB color space, and respectively obtaining the red component image R, the green component image G and the blue component image B of the color image;
respectively calculating the segmentation parameter of each pixel point in the color image to obtain the image to be processed;
performing threshold segmentation on the image to be processed with the Otsu algorithm, acquiring the vegetation pixel points contained in the image to be processed, and storing all of them into the first set U1;
wherein the calculating of the segmentation parameter of each pixel point in the color image to obtain the image to be processed comprises:
establishing a blank image with the same resolution as the color image, the pixel points of the blank image corresponding one-to-one to the pixel points of the color image;
recording the segmentation parameter of pixel point pix in the color image as cutedx_pix, and recording the pixel point corresponding to pix in the blank image as pix′;
calculating cutedx_pix with the following formula:
[formula rendered as image FDA0003248602230000041 in the original publication]
where α represents a preset proportion parameter, α ∈ (0,1); R_pix, G_pix and B_pix respectively represent the pixel values of the pixel points corresponding to pix in the red component image R, the green component image G and the blue component image B; L_pix represents the pixel value of the pixel point corresponding to pix in the L component image, the L component image being the image corresponding to the luminance component of the color image in the Lab color space;
taking the value of cutedx_pix as the pixel value of the pixel point pix′ in the blank image;
calculating the pixel value of every pixel point of the blank image in this way, thereby obtaining the image to be processed.
CN202110484665.4A 2021-04-30 2021-04-30 Method and system for monitoring vegetation coverage rate in mountainous area Expired - Fee Related CN113077486B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110484665.4A CN113077486B (en) 2021-04-30 2021-04-30 Method and system for monitoring vegetation coverage rate in mountainous area

Publications (2)

Publication Number Publication Date
CN113077486A CN113077486A (en) 2021-07-06
CN113077486B true CN113077486B (en) 2021-10-08

Family

ID=76616678


Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115272174B (en) * 2022-06-15 2023-05-19 武汉市市政路桥有限公司 Municipal road detection method and system
CN115830459B (en) * 2023-02-14 2023-05-12 山东省国土空间生态修复中心(山东省地质灾害防治技术指导中心、山东省土地储备中心) Mountain forest grass life community damage degree detection method based on neural network
CN116453003B (en) * 2023-06-14 2023-09-01 之江实验室 Method and system for intelligently identifying rice growth vigor based on unmanned aerial vehicle monitoring
CN116740580A (en) * 2023-08-16 2023-09-12 山东绿博园市政工程有限公司 Garden engineering data processing method based on remote sensing technology

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109583378A (en) * 2018-11-30 2019-04-05 东北大学 A kind of vegetation coverage extracting method and system
CN110334583A (en) * 2019-05-09 2019-10-15 王志杰 A kind of zonule soil vegetative cover coverage measure method, apparatus and electronic equipment
CN110853022A (en) * 2019-11-14 2020-02-28 腾讯科技(深圳)有限公司 Pathological section image processing method, device and system and storage medium
CN111079637A (en) * 2019-12-12 2020-04-28 武汉轻工大学 Method, device and equipment for segmenting rape flowers in field image and storage medium
CN111340826A (en) * 2020-03-25 2020-06-26 南京林业大学 Single tree crown segmentation algorithm for aerial image based on superpixels and topological features
CN111832386A (en) * 2020-05-22 2020-10-27 大连锐动科技有限公司 Method and device for estimating human body posture and computer readable medium
CN112488938A (en) * 2020-11-28 2021-03-12 井若凡 Remote sensing image processing system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210073692A1 (en) * 2016-06-12 2021-03-11 Green Grid Inc. Method and system for utility infrastructure condition monitoring, detection and response
CN106846322B (en) * 2016-12-30 2019-06-21 西安电子科技大学 The SAR image segmentation method learnt based on curve wave filter and convolutional coding structure
US10970832B2 (en) * 2017-07-31 2021-04-06 Rachio, Inc. Image data for improving and diagnosing sprinkler controller performance
CN111598028A (en) * 2020-05-21 2020-08-28 佛山市高明曦逻科技有限公司 Method for identifying earth surface vegetation distribution based on remote sensing imaging principle
CN112418188A (en) * 2020-12-17 2021-02-26 成都亚讯星科科技股份有限公司 Crop growth whole-course digital assessment method based on unmanned aerial vehicle vision

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Xiaodong Bai et al.; "Vegetation segmentation robust to illumination variations based on clustering and morphology modelling"; Biosystems Engineering; 2014-09-30; vol. 125; pp. 80-97 *
Zhi Keguang et al.; "Measuring leaf area index and vegetation coverage using image analysis" [in Chinese]; Meteorological, Hydrological and Marine Instruments; 2007-08-13; no. 1; pp. 5-8 *
Li Wenhui et al.; "Remote sensing information extraction of urban vegetation coverage: a case study of Lianyungang City" [in Chinese]; Information Technology; 2015-10-09; no. 21; pp. 183-186 *
Yuan Xu; "Research on superpixel-based image segmentation methods" [in Chinese]; China Masters' Theses Full-text Database (Information Science and Technology); 2020-03-15; no. 3; I138-1073 *
Hu Hong et al.; "Monitoring and analysis of county-level vegetation coverage change in Changting, Fujian Province based on remote sensing images" [in Chinese]; Journal of Nanjing Forestry University (Natural Science Edition); 2019-06-30; vol. 43, no. 3; pp. 92-98 *

Also Published As

Publication number Publication date
CN113077486A (en) 2021-07-06

Similar Documents

Publication Publication Date Title
CN113077486B (en) Method and system for monitoring vegetation coverage rate in mountainous area
CN108596849B (en) Single image defogging method based on sky region segmentation
CN109272489B (en) Infrared weak and small target detection method based on background suppression and multi-scale local entropy
CN109272455B (en) Image defogging method based on weak supervision generation countermeasure network
CN107862667B (en) Urban shadow detection and removal method based on high-resolution remote sensing image
CN110232389B (en) Stereoscopic vision navigation method based on invariance of green crop feature extraction
CN109583378A (en) A kind of vegetation coverage extracting method and system
CN107784669A (en) A kind of method that hot spot extraction and its barycenter determine
CN104966285B (en) A kind of detection method of salient region
CN116110053B (en) Container surface information detection method based on image recognition
WO2021189782A1 (en) Image processing method, system, automatic locomotion device, and readable storage medium
CN111080696A (en) Underwater sea cucumber identification and positioning method based on computer vision
CN113160053B (en) Pose information-based underwater video image restoration and splicing method
CN115908186A (en) Remote sensing mapping image enhancement method
CN113888397A (en) Tobacco pond cleaning and plant counting method based on unmanned aerial vehicle remote sensing and image processing technology
JP4747122B2 (en) Specific area automatic extraction system, specific area automatic extraction method, and program
CN109191482B (en) Image merging and segmenting method based on regional adaptive spectral angle threshold
CN111008563A (en) Seed germination detection method and device in dark scene and readable storage medium
CN106204596B (en) Panchromatic waveband remote sensing image cloud detection method based on Gaussian fitting function and fuzzy mixed estimation
CN107239761B (en) Fruit tree branch pulling effect evaluation method based on skeleton angular point detection
CN111476739B (en) Underwater image enhancement method, system and storage medium
CN109993104B (en) Method for detecting change of object level of remote sensing image
CN109886991B (en) Infrared imaging river channel detection method based on neighborhood intensity texture coding
CN113610940B (en) Ocean vector file and image channel threshold based coastal area color homogenizing method
CN113221788B (en) Method and device for extracting ridge culture characteristics of field blocks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee (granted publication date: 20211008)