CN116229263A - Vegetation growth disaster damage verification method based on ground-based visible light image - Google Patents

Vegetation growth disaster damage verification method based on ground-based visible light image

Info

Publication number
CN116229263A
Authority
CN
China
Prior art keywords
vegetation
image
visible light
green
green vegetation
Prior art date
Legal status
Pending
Application number
CN202310161318.7A
Other languages
Chinese (zh)
Inventor
陈燕丽
莫伟华
莫建飞
陈诚
谢映
Current Assignee
Guangxi Zhuang Autonomous Region Institute Of Meteorological Sciences
Original Assignee
Guangxi Zhuang Autonomous Region Institute Of Meteorological Sciences
Priority date
Filing date
Publication date
Application filed by Guangxi Zhuang Autonomous Region Institute Of Meteorological Sciences
Priority to CN202310161318.7A
Publication of CN116229263A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/188 Vegetation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/30 Noise filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G06V10/763 Non-hierarchical techniques, e.g. based on statistics of modelling distributions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G06V10/765 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects using rules for classification or partitioning the feature space

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a vegetation growth disaster damage verification method based on ground-based visible light images, comprising the following steps: acquiring visible light image data of a vegetation canopy, selecting an image sequence from the visible light image data, and performing image segmentation on the image sequence to obtain green vegetation images; calculating green vegetation visible light indices and simulating the green vegetation change trend from the green vegetation images; and monitoring vegetation growth damage conditions based on the green vegetation visible light indices and the change trend. The invention uses a machine learning algorithm to segment RGB images of the near-ground canopy of karst landforms, where bare rock and vegetation are mixed, improving segmentation accuracy and providing technical support for vegetation monitoring based on ground-based canopy visible light images.

Description

Vegetation growth disaster damage verification method based on ground-based visible light image
Technical Field
The invention belongs to the field of image processing, and particularly relates to a vegetation growth disaster damage verification method based on ground-based visible light images.
Background
Karst is a typical fragile ecosystem of southwest China, and stony desertification severely restricts socioeconomic development and the improvement of living conditions in the region. Karst land is barren with shallow soil layers; its vegetation consists mainly of shrubs, shrub-grass mixtures, and grassland, responds sensitively to climate change and meteorological disasters, and has weak disaster-bearing capacity, so developing vegetation monitoring for the complex environmental conditions of stony desertification regions is of great significance. In recent years, the wide establishment of national ecological meteorological observation stations has made near-ground canopy RGB images a routinely available data source; by 2021, Guangxi alone had established 46 ecological meteorological observation stations, including 4 stony desertification ecological meteorological observation test stations. As an important component of the integrated space-air-ground monitoring network, the canopy RGB images acquired at ecological stations are representative of near-surface remote sensing: they effectively supplement the vegetation information inverted from satellite and unmanned aerial vehicle remote sensing, overcome the influence of complex and changeable weather on image quality, and enable high-throughput time-series monitoring of vegetation growth conditions.
Image segmentation aims to divide an image containing complex ground-object spatial distribution information into different regions with specific semantic labels, and is the basis for vegetation monitoring with near-ground canopy RGB images. Many segmentation methods have been developed for color images, such as region-based methods, histogram thresholding, feature-space clustering, edge detection, and fuzzy techniques; in recent years, artificial neural networks and deep learning have also begun to be applied to color image segmentation. Based on segmented images, various visible light vegetation indices such as NDYI, GLA, VARI, and NGRDI have been developed from different RGB color channels. Researchers have used near-ground canopy RGB images to study plant growth stages, coverage, growth vigor, nitrogen status, and more, verifying the effectiveness of such images for vegetation monitoring. However, most existing studies concentrate on crops and pasture; research reports on karst regions, where bare rock, vegetation, and soil are mixed on the underlying surface, are rare.
Disclosure of Invention
The invention aims to provide a vegetation growth disaster damage verification method based on ground-based visible light images, providing technical support for vegetation monitoring based on ground-based canopy visible light images.
To achieve the above purpose, the invention provides a vegetation growth disaster damage verification method based on ground-based visible light images, comprising the following steps:
acquiring visible light image data of a vegetation canopy, selecting an image sequence based on the visible light image data, and performing image segmentation on the image sequence to acquire a green vegetation image;
calculating a green vegetation visible light index and simulating a green vegetation change trend based on the green vegetation image;
and monitoring vegetation growth vigor damage conditions based on the green vegetation visible light index and the change trend.
Optionally, selecting the image sequence based on the visible light image data specifically comprises:
acquiring the RGB, HSV, and Lab color spaces of the visible light image data, and obtaining, for each channel of the color spaces, the coefficient of variation at different times;
and selecting the image sequence by comparing the coefficients of variation of the channel data at the different times.
Optionally, a machine learning segmentation algorithm is used when image segmentation is performed on the image sequence.
Optionally, the method for performing image segmentation on the image sequence by adopting the machine learning segmentation algorithm specifically includes:
generating training samples by using a K-means unsupervised clustering algorithm and screening the samples;
and classifying the screened samples into vegetation and background with a support vector machine to obtain the green vegetation image.
Optionally, after the green vegetation image is acquired, the method further comprises: removing mis-segmented edge pixels from the green vegetation image with non-local mean filtering.
Optionally, the process of calculating the green vegetation visible light index includes:
extracting the R, G, and B color channel values of each pixel of the green vegetation region in the green vegetation image, averaging over all pixels in the region, and calculating the green vegetation visible light index from the averaged values.
Optionally, a composite sine function is adopted in the process of simulating the green vegetation change trend.
Optionally, a composite sine function is adopted in the process of simulating the green vegetation change trend, and the calculation formula is as follows:
VI = a + b sin[2π(t_day − c)/365],
where a, b, and c are empirical coefficients and t_day is the day-of-year sequence.
The technical effects of the invention are as follows: (1) In the training sample generation stage, K-means unsupervised clustering is used to generate training samples, and the screened cluster samples serve as training samples for the support vector machine vegetation-background classifier; non-local mean filtering is then applied to the background-plant segmented image to remove mis-segmented edge pixels. The degree of automation is high, the machine learning method is applied to image background segmentation, and the non-local mean filtering removes noise from the image while preserving the edge regions of the foreground leaf contours, ultimately improving accuracy.
(2) The machine learning algorithm segments RGB images of the near-ground canopy of karst landforms with accuracy above 80%, the composite sine function simulates the day-to-day dynamic variation of GLA, NDYI, NGRDI, and VARI well, and technical support is provided for vegetation monitoring based on ground-based canopy visible light images.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application, illustrate and explain the application and are not to be construed as limiting the application. In the drawings:
FIG. 1 is a schematic flow chart of a vegetation growth disaster damage verification method based on ground-based visible light images according to an embodiment of the invention;
FIG. 2 shows the cloudy-day segmentation effect of canopy RGB images according to an embodiment of the present invention, wherein (a) is the A channel before filtering, (b) the A channel after filtering, (c) the S channel before filtering, (d) the S channel after filtering, (e) ExG before filtering, (f) ExG after filtering, (g) machine learning before filtering, and (h) machine learning after filtering;
FIG. 3 shows the sunny-day segmentation effect of canopy RGB images according to an embodiment of the present invention, wherein (a) is the A channel before filtering, (b) the A channel after filtering, (c) the S channel before filtering, (d) the S channel after filtering, (e) ExG before filtering, (f) ExG after filtering, (g) machine learning before filtering, and (h) machine learning after filtering;
FIG. 4 shows the variation and simulation results of the canopy visible light indices according to an embodiment of the present invention, wherein (a) is the GLA visible light index, (b) the NDYI visible light index, (c) the NGRDI visible light index, and (d) the VARI visible light index.
Detailed Description
It should be noted that, in the case of no conflict, the embodiments and features in the embodiments may be combined with each other. The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer executable instructions, and that although a logical order is illustrated in the flowcharts, in some cases the steps illustrated or described may be performed in an order other than that illustrated herein.
As shown in fig. 1, this embodiment provides a vegetation growth disaster damage verification method based on ground-based visible light images. The selected karst stony desertification ecological meteorological observation test station is located in a town of Mashan County, Guangxi, in the humid south subtropical climate zone, with an annual mean temperature of 21.5 °C and annual mean precipitation of 1457.2 mm. The station is the first stony desertification ecological meteorological observation test station in the country and carries out full-scene, multi-gradient stereoscopic observation of the ecological meteorology of the stony desertification area in a one-main-station, two-substation configuration. The main station of the observation system is located at the mountain top, and the two substations are located at a 500 m-altitude hilltop and a 300 m-altitude slope, respectively.
The main station of the Mashan station has a far-field view: the shooting range is large, but vegetation detail is hard to capture, and changes in the surrounding hardware facilities alter the vegetation within the view, so data continuity is poor. Therefore, the continuous long-time-series image data within the fixed field of view of the 500 m substation canopy digital camera were selected. The camera is mounted 6 m above the ground; its model is ZQZ-TIM, with a 1/1.8-inch CMOS sensor and a total of about 6.44 megapixels. It automatically captures one canopy visible light image at a fixed angle at 8:00, 10:00, 12:00, 14:00, and 16:00 each day, and 3,650 images were acquired from 1 January 2018 to 31 December 2019.
For image-sequence selection from the vegetation canopy visible light image data, the coefficients of variation of the nine channels of the RGB, HSV, and Lab color spaces were compared across the five daily acquisition times. Eight channels (R, G, H, S, V, L, a, b) have their minimum coefficient of variation at 8:00, while the B channel has its minimum at 12:00. After comprehensive comparison, the vegetation canopy images acquired at 8:00 in the morning were selected, and vegetation segmentation and growth change analysis were carried out with the machine learning method. Table 1 lists the coefficients of variation of the canopy RGB images at the different times.
TABLE 1
(The table is reproduced only as an image in the original: coefficients of variation of the canopy RGB image channels at the different acquisition times.)
As shown in figs. 2-3, the vegetation segmentation effect of three segmentation methods (color space, nonlinear combination of color channels, and machine learning algorithm) and the optimization effect of NLM filtering on the different segmentation results were analyzed.
Taking the segmentation results of the vigorous vegetation growth period as an example, visual interpretation shows obvious differences in the sensitivity of the different segmentation algorithms to vegetation and background information. Bright green vegetation is distinguished well by the color space, the nonlinear color-channel combination, and the machine learning algorithm, with little difference among their segmentation results, but the methods differ greatly in distinguishing dark green vegetation from bare rock. For bare rock the order of discrimination is machine learning > A channel > S channel > ExG; for dark green vegetation it is machine learning > ExG > S channel > A channel. The A channel, S channel, and ExG discriminate bare rock noticeably better on cloudy days than on sunny days, while dark green vegetation and the soil background are more easily confused on sunny days; overall, the machine learning segmentation performs best.
Taking the manual field labeling results as the reference for quantitative comparison, the segmentation effects of the three algorithms differ between sunny strong-light and cloudy weak-light conditions. Under sunny strong light, machine learning segmentation is best, with Q_seg up to 80.31% and S_r up to 91.35%; the A channel is second, and ExG is worst. Under cloudy weak light, machine learning segmentation is again best, with Q_seg up to 80.80% and S_r up to 81.57%; ExG is second, and the S channel is worst. Overall, the machine learning algorithm segments green vegetation best, with accuracy above 80%, and is particularly suited to sunny strong-light conditions. The threshold segmentation methods based on color channels (the S and A channels) work well under cloudy weak light but degrade markedly under sunny strong light; the S channel segments worst, yet its accuracy still exceeds 70%. The accuracy comparison of the three image segmentation algorithms is shown in table 2.
The machine learning segmentation algorithm generates training samples by K-means unsupervised clustering, screens the samples, and then classifies vegetation and background information with a support vector machine (SVM).
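A minimal Python sketch of this K-means-plus-SVM pipeline, assuming scikit-learn; the cluster count, the greenness-based cluster-screening rule, the subsample size, and the SVM parameters below are illustrative stand-ins, since the patent does not fix them:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def segment_vegetation(rgb, n_clusters=6):
    """Return a 0/1 vegetation mask for an RGB image (a sketch, not production code)."""
    h, w, _ = rgb.shape
    pixels = rgb.reshape(-1, 3).astype(np.float32)

    # 1) unsupervised clustering generates candidate training samples
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pixels)

    # 2) screen the clusters: here, naively, a cluster whose center is
    #    green-dominant is labeled vegetation (a stand-in for the patent's
    #    unspecified screening step)
    veg = {k for k, (r, g, b) in enumerate(km.cluster_centers_) if g > r and g > b}
    labels = np.isin(km.labels_, list(veg)).astype(int)

    # 3) train the vegetation-background SVM classifier on a random subsample
    idx = np.random.default_rng(0).choice(len(pixels), size=5000, replace=False)
    svm = SVC(kernel="rbf", C=1.0, gamma="scale").fit(pixels[idx], labels[idx])

    # 4) classify every pixel and restore the image shape
    return svm.predict(pixels).reshape(h, w)
```

In the disclosed method the screened cluster samples, however obtained, supply the SVM training labels; the greenness rule in step 2 merely stands in for that screening.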
Influenced by factors such as differences in illumination conditions and imaging quality, the probability of mis-segmentation is high at vegetation edges. To reduce leaf-edge mis-segmentation, non-local mean (NLM) filtering is introduced to optimize the vegetation segmentation results. The NLM operator protects long, narrow structures such as vegetation leaves: non-adjacent pixels lying on the same structure within a certain neighborhood reinforce each other, eliminating the segmentation ambiguity of pixels at leaf edges. The principle of the NLM algorithm is as follows:
assuming that the noisy image is v= { v (a) |a∈a }, the image after denoising is NL [ v ], and the weight value of each pixel a can be found by the following formula:
Figure SMS_2
wherein w (a, b) represents a similarity (Gaussian weighted Euclidean distance) weight between pixel a and pixel b, satisfies 0.ltoreq.w (a, b). Ltoreq.1 and Σ b w (a, b) =1, and the expression thereof is as follows:
Figure SMS_3
Figure SMS_4
in the method, in the process of the invention,
Figure SMS_5
the square of the Gaussian weighted distance of the pixel in the sub-block taking the pixel a as the center and the pixel in the sub-block taking the pixel b as the center is represented, the similarity between two pixel points is measured, a is the standard deviation, v (a)) represents the local sub-block pixel set around a, and h is the filtering parameter.
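In practice the filter need not be hand-written: OpenCV ships a fast non-local means implementation. A sketch of applying it to a canopy image, with the filtering strength h and the window sizes as illustrative values rather than the patent's:

```python
import cv2

img = cv2.imread("canopy_0800.jpg")          # hypothetical file name
denoised = cv2.fastNlMeansDenoisingColored(
    img, None,
    h=10, hColor=10,                         # filtering parameter h of the formula above
    templateWindowSize=7,                    # sub-block (patch) size
    searchWindowSize=21)                     # neighborhood searched for similar patches
```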
Taking the manual field labeling results as the reference, accuracy assessment was performed on the classified images obtained before and after filtering for the three segmentation algorithms. The accuracy indices Q_seg and S_r are calculated as:

Q_seg = Σ_{i=1..m} Σ_{j=1..n} (A_ij ∧ B_ij) / Σ_{i=1..m} Σ_{j=1..n} (A_ij ∨ B_ij)

S_r = Σ_{i=1..m} Σ_{j=1..n} (A_ij ∧ B_ij) / Σ_{i=1..m} Σ_{j=1..n} B_ij

where A is the foreground (green vegetation) pixel set (p = 255) or background (everything other than green vegetation) pixel set (p = 0) of the segmented image, B is the corresponding foreground (p = 255) or background (p = 0) pixel set obtained by manual field annotation, m and n are the numbers of rows and columns of the image, and i and j are the pixel coordinates. Larger Q_seg and S_r values indicate higher segmentation accuracy; Q_seg measures the overall consistency of both the background and the foreground of the segmentation result, while S_r measures only the consistency of the foreground segmentation.
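A direct NumPy transcription of the two indices as reconstructed above, with boolean masks standing in for the p = 255 / p = 0 pixel sets:

```python
import numpy as np

def q_seg(a, b):
    """Overall consistency: intersection over union of segmented mask a and reference mask b."""
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

def s_r(a, b):
    """Foreground consistency: share of the manually labeled foreground b recovered by a."""
    return np.logical_and(a, b).sum() / b.sum()
```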
NLM filtering effectively optimizes all the segmentation results: after filtering, erroneous segmentation between plant leaf edges and the background is obviously reduced. The optimization is most pronounced for the ExG segmentation result under cloudy weak light, with Q_seg increasing by 1.86% and S_r by 2.12%, and for the S-channel segmentation result under sunny strong light, with Q_seg increasing by 1.32% and S_r by 0.40%. By comparison, NLM filtering optimizes the segmentation results more obviously under cloudy weak-light conditions, as shown in table 2.
TABLE 2
(The table is reproduced only as an image in the original: segmentation accuracy of the three algorithms before and after NLM filtering under sunny and cloudy conditions.)
As can be seen from table 2, the machine learning algorithm is suitable for segmenting RGB images of the near-ground canopy of karst landforms where bare rock and vegetation are mixed. The color space, the nonlinear color-channel combination, and the machine learning algorithm all distinguish light green vegetation in the karst region, but the S channel and ExG are poorly sensitive to bare rock, and the S and A channels distinguish dark green vegetation poorly. The vegetation segmentation effects of the three methods differ obviously between sunny strong-light and cloudy weak-light conditions; the machine learning algorithm performs best, with accuracy above 80% under cloudy weak light and above 90% under sunny strong light.
Calculating the green vegetation index: after the green vegetation part of the visible light image has been segmented, the R, G, and B color channel values of each pixel in the green vegetation region are extracted, all pixels in the region are averaged, and the green vegetation visible light indices are then calculated according to the formulas in table 3.
TABLE 3
Index                                          Formula
GLA (green leaf algorithm)                     GLA = (2G − R − B)/(2G + R + B)
NDYI (normalized difference yellowness index)  NDYI = (G − B)/(G + B)
NGRDI (normalized green-red difference index)  NGRDI = (G − R)/(G + R)
VARI (visible atmospherically resistant index) VARI = (G − R)/(G + R − B)
(The original table is reproduced only as an image; the formulas above are the standard published definitions of the four indices, with R, G, and B the mean channel values of the green vegetation region.)
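Assuming the standard definitions in table 3, the index computation reduces to a few lines of NumPy; here veg_mask is the boolean segmentation result and rgb the original image:

```python
import numpy as np

def visible_light_indices(rgb, veg_mask):
    # mean R, G, B over the green vegetation region only
    r, g, b = (rgb[veg_mask][:, k].mean() for k in range(3))
    return {
        "GLA":   (2 * g - r - b) / (2 * g + r + b),
        "NDYI":  (g - b) / (g + b),
        "NGRDI": (g - r) / (g + r),
        "VARI":  (g - r) / (g + r - b),
    }
```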
Simulating the vegetation change trend: the composite sine function was first derived in 1983 to simulate the annual variation of the minimum and maximum air temperature. Since vegetation change is strongly synchronized with air temperature change, the annual variation trend of the vegetation index is simulated here with the composite sine function, calculated as:
VI = a + b sin[2π(t_day − c)/365]    (6)
where a, b, and c are empirical coefficients and t_day is the day-of-year sequence.
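A sketch of fitting equation (6) to a daily index series with SciPy and scoring the fit with R²; the data below are synthetic placeholders, not the station's observations:

```python
import numpy as np
from scipy.optimize import curve_fit

def composite_sine(t_day, a, b, c):
    return a + b * np.sin(2 * np.pi * (t_day - c) / 365.0)

t_day = np.arange(1, 366)                            # day-of-year sequence (placeholder)
vi = composite_sine(t_day, 0.2, 0.1, 80) + np.random.normal(0, 0.02, t_day.size)

(a, b, c), _ = curve_fit(composite_sine, t_day, vi, p0=[vi.mean(), 0.1, 80.0])
fit = composite_sine(t_day, a, b, c)
r2 = 1 - np.sum((vi - fit) ** 2) / np.sum((vi - vi.mean()) ** 2)
```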
As shown in fig. 4, the composite sine function simulates the day-to-day dynamic variation of GLA, NDYI, NGRDI, and VARI well. Over the full 2018-2019 sequence, NGRDI is simulated best (R² = 0.830), followed by VARI (R² = 0.824), with GLA (R² = 0.785) and NDYI (R² = 0.714) slightly worse. Within 2018, NGRDI and VARI are simulated best (R² = 0.824), followed by GLA (R² = 0.772), with NDYI (R² = 0.708) slightly worse. For the January-July 2019 data, NGRDI is simulated best (R² = 0.783), followed by VARI (R² = 0.770), with GLA (R² = 0.747) and NDYI (R² = 0.617) slightly worse. The canopy visible light index statistics are shown in table 4.
TABLE 4
(The table is reproduced only as an image in the original: canopy visible light index simulation statistics.)
The simulation effect of each visible light index differs markedly between time periods. Comparing the mean absolute deviations over the six periods of 2018-2019 shows that NGRDI deviates least (3.7%), followed by VARI (4.0%), while GLA (6.6%) and NDYI (9.5%) deviate more. In four of the six periods (2018, 2019, January-March, and April-June) the deviation is below 5%, indicating a good simulation. Taken together, NDYI has the largest amplitude when vegetation growth changes, and the composite sine function simulates the NGRDI change trend with the highest accuracy.
Thresholding of the various color channels (ExG, Lab, HSV) achieves good results when the illumination is predominantly diffuse. However, outdoor illumination changes constantly within a day; when direct light is strong, abundant highlights and shadows in the images pose great challenges to vegetation image segmentation, especially in a karst landform environment where large amounts of rock, soil, man-made objects, and litter are mixed with the vegetation, further increasing the difficulty of automatic segmentation. A machine-learning-based background segmentation algorithm can exploit the rich information contained in the image by training a classifier on multi-dimensional samples, mapping a problem that is inseparable in a low-dimensional space into a high-dimensional space. Because such a classifier must be trained on labeled samples in advance, and sample labeling is usually done by manual interaction, the degree of automation is low, which has limited the application of machine learning to image background segmentation. In the training sample generation stage of the present method, K-means unsupervised clustering generates the training samples, the screened cluster samples serve as training samples for the support vector machine vegetation-background classifier, and finally non-local mean filtering is applied to the background-plant segmented image to remove mis-segmented edge pixels. The accuracy comparison of the different segmentation algorithms shows that the machine-learning-based method performs best, with accuracy above 80%. The accuracy of all segmentation methods improves after filtering: the foreground contour obtained by automatic segmentation is smaller than that obtained manually, and filtering removes noise from the image while preserving the edge region of the foreground leaf contours, which ultimately improves accuracy.
The foregoing is merely a preferred embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions easily contemplated by those skilled in the art within the technical scope of the present application should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (8)

1. A vegetation growth disaster damage verification method based on ground-based visible light images, characterized by comprising the following steps:
acquiring visible light image data of a vegetation canopy, selecting an image sequence based on the visible light image data, and performing image segmentation on the image sequence to acquire a green vegetation image;
calculating a green vegetation visible light index and simulating a green vegetation change trend based on the green vegetation image;
and monitoring vegetation growth vigor damage conditions based on the green vegetation visible light index and the change trend.
2. The vegetation growth disaster damage verification method based on ground-based visible light images according to claim 1, wherein selecting the image sequence based on the visible light image data specifically comprises:
acquiring the RGB, HSV, and Lab color spaces of the visible light image data, and obtaining, for each channel of the color spaces, the coefficient of variation at different times;
and selecting the image sequence by comparing the coefficients of variation of the channel data at the different times.
3. The vegetation growth disaster damage verification method based on ground-based visible light images according to claim 1, wherein a machine learning segmentation algorithm is used when performing image segmentation on the image sequence.
4. The vegetation growth disaster damage verification method based on ground-based visible light images according to claim 3, wherein performing image segmentation on the image sequence with the machine learning segmentation algorithm specifically comprises:
generating training samples by using a K-means unsupervised clustering algorithm and screening the samples;
and classifying the screened samples into vegetation and background with a support vector machine to obtain the green vegetation image.
5. The vegetation growth disaster damage verification method based on ground-based visible light images according to claim 4, further comprising, after obtaining the green vegetation image: removing mis-segmented edge pixels from the green vegetation image with non-local mean filtering.
6. The vegetation growth disaster damage verification method based on ground-based visible light images according to claim 1, wherein the process of calculating the green vegetation visible light index comprises:
extracting the R, G, and B color channel values of each pixel of the green vegetation region in the green vegetation image, averaging over all pixels in the region, and calculating the green vegetation visible light index from the averaged values.
7. The vegetation growth disaster damage verification method based on ground-based visible light images according to claim 1, wherein a composite sine function is used in simulating the green vegetation change trend.
8. The vegetation growth disaster damage verification method based on ground-based visible light images according to claim 7, wherein the composite sine function used in simulating the green vegetation change trend is calculated as:
VI = a + b sin[2π(t_day − c)/365],
where a, b, and c are empirical coefficients and t_day is the day-of-year sequence.
CN202310161318.7A 2023-02-24 2023-02-24 Vegetation growth disaster damage verification method based on ground-based visible light image Pending CN116229263A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310161318.7A CN116229263A (en) Vegetation growth disaster damage verification method based on ground-based visible light image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310161318.7A CN116229263A (en) Vegetation growth disaster damage verification method based on ground-based visible light image

Publications (1)

Publication Number Publication Date
CN116229263A 2023-06-06

Family

ID=86588754

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310161318.7A Pending CN116229263A (en) Vegetation growth disaster damage verification method based on ground-based visible light image

Country Status (1)

Country Link
CN (1) CN116229263A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230017425A1 (en) * 2019-12-03 2023-01-19 Basf Se System and method for determining damage on crops
CN111274871A (en) * 2020-01-07 2020-06-12 西南林业大学 Forest fire damage degree extraction method based on light and small unmanned aerial vehicle
CN111738165A (en) * 2020-06-24 2020-10-02 中国农业科学院农业信息研究所 Method for extracting individual plant canopy from high-resolution unmanned aerial vehicle visible light remote sensing image
CN115063437A (en) * 2022-06-16 2022-09-16 广西壮族自治区气象科学研究所 Mangrove canopy visible light image index characteristic analysis method and system

Similar Documents

Publication Publication Date Title
Chen et al. Spatially and temporally weighted regression: A novel method to produce continuous cloud-free Landsat imagery
Maldonado Jr et al. Automatic green fruit counting in orange trees using digital images
CN106780091B (en) Agricultural disaster information remote sensing extraction method based on vegetation index time-space statistical characteristics
CN102855494B (en) A kind of Clean water withdraw method of satellite remote-sensing image and device
CN108830844B (en) Facility vegetable extraction method based on multi-temporal high-resolution remote sensing image
CN109063754A (en) A kind of remote sensing image multiple features combining classification method based on OpenStreetMap
CN113033670A (en) Method for extracting rice planting area based on Sentinel-2A/B data
Uddin et al. Forest condition monitoring using very-high-resolution satellite imagery in a remote mountain watershed in Nepal
CN115063437B (en) Mangrove canopy visible light image index feature analysis method and system
Kamal et al. Comparison of Google Earth Engine (GEE)-based machine learning classifiers for mangrove mapping
Loresco et al. Segmentation of lettuce plants using super pixels and thresholding methods in smart farm hydroponics setup
CN117763186A (en) Remote sensing image retrieval method, remote sensing image retrieval system, computer equipment and storage medium
Makhamreh Derivation of vegetation density and land-use type pattern in mountain regions of Jordan using multi-seasonal SPOT images
Ahlswede et al. Hedgerow object detection in very high-resolution satellite images using convolutional neural networks
Kamal et al. A preliminary study on machine learning and google earth engine for mangrove mapping
CN102231190B (en) Automatic extraction method for alluvial-proluvial fan information
CN113033386B (en) High-resolution remote sensing image-based transmission line channel hidden danger identification method and system
Li et al. Hybrid cloud detection algorithm based on intelligent scene recognition
Zhao et al. Image dehazing based on haze degree classification
CN114299379A (en) Shadow area vegetation coverage extraction method based on high dynamic image
He et al. A calculation method of phenotypic traits of soybean pods based on image processing technology
Qin et al. Inundation impact on croplands of 2020 flood event in three Provinces of China
CN116229263A (en) Vegetation growth disaster damage verification method based on ground-based visible light image
Chen et al. Mangrove Growth Monitoring Based on Camera Visible Images—A Case Study on Typical Mangroves in Guangxi
CN111768101B (en) Remote sensing cultivated land change detection method and system taking account of physical characteristics

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination