CN108830844B - Facility vegetable extraction method based on multi-temporal high-resolution remote sensing image - Google Patents

Facility vegetable extraction method based on multi-temporal high-resolution remote sensing image

Info

Publication number
CN108830844B
CN108830844B (application number CN201810592833.XA)
Authority
CN
China
Prior art keywords
image
remote sensing
facility
resolution remote
sensing image
Prior art date
Legal status
Active
Application number
CN201810592833.XA
Other languages
Chinese (zh)
Other versions
CN108830844A (en)
Inventor
杨秀峰
赵建鹏
李国洪
金永涛
李旭青
赵起超
刘世盟
Current Assignee
North China Institute of Aerospace Engineering
Original Assignee
North China Institute of Aerospace Engineering
Priority date
Filing date
Publication date
Application filed by North China Institute of Aerospace Engineering filed Critical North China Institute of Aerospace Engineering
Priority to CN201810592833.XA priority Critical patent/CN108830844B/en
Publication of CN108830844A publication Critical patent/CN108830844A/en
Application granted granted Critical
Publication of CN108830844B publication Critical patent/CN108830844B/en

Classifications

    • G06T 7/0002 - Image analysis; inspection of images, e.g. flaw detection
    • G06T 7/13 - Segmentation; edge detection
    • G06T 7/136 - Segmentation; edge detection involving thresholding
    • G06V 20/188 - Terrestrial scenes; vegetation
    • G06T 2207/10032 - Image acquisition modality: satellite or aerial image; remote sensing
    • G06T 2207/20036 - Morphological image processing
    • G06T 2207/30108 - Industrial image inspection
    • G06T 2207/30128 - Food products

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a facility vegetable extraction method based on multi-temporal high-resolution remote sensing images, which comprises the following steps: preprocessing the high-resolution remote sensing image to extract a single-band image; performing image enhancement processing on the single-band image so as to improve the ability to distinguish facility vegetables from other ground-object categories; selecting a preset observation window and performing texture analysis processing on the enhanced image so as to calculate the characteristic value texture images; creating, based on the characteristic value texture images, a mask image of the buildings and roads that are easily confused with facility vegetables; masking the original high-resolution remote sensing image with the mask image so as to extract the edge detection lines of the ground-object pattern spots; performing mathematical morphology operations and segmentation processing on the edge detection lines so as to generate an extraction result; and performing binarization processing on the grayscale image of the facility vegetables, extracting the vector pattern spots of the facility vegetables, and performing vector-to-grid processing, thereby realizing information extraction of the facility vegetables.

Description

Facility vegetable extraction method based on multi-temporal high-resolution remote sensing image
Technical Field
The invention belongs to the technical field of intelligent image processing, relates to an information extraction technology of a high-resolution remote sensing image, and particularly relates to a facility vegetable extraction method based on a multi-temporal high-resolution remote sensing image, which is used for extracting facility vegetable information in the image based on the high-resolution remote sensing image.
Background
With the rapid advance of urbanization in China, the demand for vegetables is increasing day by day, and intensive vegetable production, represented by facility vegetables, has become the main development direction of vegetable production. Facility vegetables can prolong the vegetable supply period, improve vegetable yield and relieve supply shortages. The types of facility agriculture commonly used in China at present mainly comprise plastic greenhouses (including medium and small arched sheds), sunlight greenhouses and multi-span greenhouses. The plastic greenhouse is a single-arch shed that uses plastic film as the light-transmitting covering material. In southern China it is used for heat preservation in winter and for shading and rain protection in summer; in northern areas it is mainly used to advance production into early spring or extend it into late autumn, generally about one month earlier or later than open-field production. Because of its poor heat-preservation performance, it is generally not used for overwintering production in northern areas.
The sunlight greenhouse is a greenhouse form with Chinese characteristics, developed by Chinese scientific and technical workers on the basis of the slope greenhouse. It is a single-layer plastic-film greenhouse that uses solar energy as its main energy source, with a movable heat-insulating quilt on the front roof for heat preservation at night, and with east, west and north walls and a rear roof made of highly heat-insulating building materials. In northern China, an indoor-outdoor temperature difference of more than 20-30 °C can normally be maintained without artificial heating; the sunlight greenhouse is widely used between northern latitudes 30° and 45° and is the main greenhouse form for overwintering production of horticultural products in northern areas. The multi-span greenhouse refers to a large-area production greenhouse formed by connecting several single-span greenhouses through gutters, and represents the development direction of modern facility agriculture in the world and in China. Multi-span greenhouses are divided into multi-span glass greenhouses, multi-span plastic greenhouses and polycarbonate-panel greenhouses (PC greenhouses). The three greenhouse types can be simply grouped into "cold sheds" (plastic greenhouses) and "warm sheds" (sunlight greenhouses and multi-span greenhouses).
Facility cultivation of vegetables has become one of the important components of modern agriculture and an important means of transforming traditional agriculture into modern agriculture; in many areas it has become a local pillar industry and has greatly increased farmers' income. In addition, facility vegetables raise land-utilization efficiency and, to a certain extent, reduce dependence on the natural environment; with their characteristics of high investment, high technical content, high quality, high yield and high benefit, they are among the most vital new agricultural industries. The construction area of facility vegetables reflects the level of local agricultural modernization and local vegetable supply capacity, plays a vital role in the supply and demand balance of the vegetable market, and is of great significance for scientific management and vegetable-planting policy making by local agricultural departments. Planting area is also an important item in improved-variety subsidies, and its accuracy directly concerns farmers' personal interests.
In the prior art, information such as the area and yield of facility vegetables is usually obtained by conventional ground surveys or from multi-year statistical data. Such approaches lack scientific rigor, waste a large amount of time and manpower, are strongly affected by subjective human factors, and can hardly provide a reliable basis for government decision-making and management, so they cannot meet government requirements. Remote sensing, which detects ground objects at a distance by receiving their electromagnetic-wave information, offers objectivity, richness, timeliness, macroscopic coverage and dynamic monitoring that conventional techniques do not possess. Using remote sensing to rapidly and accurately acquire information such as facility vegetable area and land-use distribution lays a good foundation for rationally arranging vegetable distribution, realizing intensive vegetable production, stabilizing and raising the level of agricultural development, and achieving efficient and sustainable use of agricultural resources.
With the rapid development of remote sensing technology, high-resolution remote sensing images have become a main means of acquiring the planting area of facility vegetables. How to efficiently and accurately extract information such as the area and type of facility vegetables from massive high-resolution remote sensing image data is one of the key problems that urgently needs to be solved in the intelligent interpretation of high-resolution remote sensing images. At the same time, facility vegetables are a typical, very common and very important ground-feature element in remote sensing images, and effectively acquiring their information is of great significance in fields such as geographic data updating, rural development planning and scientific agricultural management.
With the diversification of remote sensing platforms and the improvement of spatial and spectral resolution, the application of remote sensing technology in agriculture has become increasingly extensive. Classification of ground-object categories in remote sensing images mainly consists in determining the discrimination boundaries and criteria between different categories. Most existing facility vegetable extraction methods take supervised or unsupervised classification as the core algorithm and rely on visual interpretation for repeated correction. Visual interpretation can comprehensively use image features such as tone or color, shape, size, shadow, texture, pattern, position and layout of ground objects, but it requires the interpreter to be familiar with the study area, and the classification is time-consuming and labor-intensive. Supervised and unsupervised classification methods depend excessively on spectral information, cannot fully exploit spatial characteristics and other auxiliary information, can hardly overcome the phenomena of "same object, different spectra" and "same spectrum, different objects" in images, are limited in classification accuracy, and require a large amount of later correction to reach high accuracy. Other methods include object-oriented classification, artificial neural networks, support vector machines and extraction methods based on spatial structure. At present, buildings and roads still appear in facility vegetable extraction results, which to a large extent causes poor extraction accuracy and a wide range of confusion.
In addition, in terms of data sources, most existing methods use lower-resolution remote sensing images such as Landsat TM, SPOT5 and RapidEye, which cannot meet the requirement of fine facility vegetable information extraction; the methods that have been proposed for extracting facility vegetables from high-resolution remote sensing images are often strongly affected by image quality and scene complexity and require a large amount of manual intervention, which reduces their universality and degree of automation. Therefore, to realize accurate extraction of facility vegetables, high-resolution remote sensing images are required.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a facility vegetable information extraction scheme based on a high-resolution remote sensing image, which can meet the requirement of extracting facility vegetable information in the image, improve the robustness and universality of facility vegetable extraction of the high-resolution remote sensing image and realize the extraction of the planting area of the facility vegetable.
The invention provides a facility vegetable extraction method based on multi-temporal high-resolution remote sensing images, which is used for extracting facility vegetable information from a high-resolution remote sensing image and comprises the following steps: step one, preprocessing the high-resolution remote sensing image to extract a single-band image; step two, performing image enhancement processing on the single-band image so as to improve the ability to distinguish facility vegetables from other ground-object categories; step three, selecting a preset observation window and performing texture analysis processing on the enhanced image so as to calculate the characteristic value texture images; step four, creating, based on the characteristic value texture images, a mask image of the buildings and roads that are easily confused with facility vegetables; step five, masking the original high-resolution remote sensing image with the mask image so as to extract the edge detection lines of the ground-object pattern spots; step six, performing mathematical morphology operations and segmentation processing on the edge detection lines so as to generate an extraction result in the form of a grayscale image of the facility vegetables; and step seven, performing binarization processing on the grayscale image of the facility vegetables, extracting the vector pattern spots of the facility vegetables, and performing vector-to-grid processing, thereby realizing the extraction of facility vegetable information.
Preferably, in the present invention, the preprocessing at least comprises: radiometric calibration, geometric correction using a reference image, image fusion, image mosaicking and rule-based region clipping. The texture analysis processing is a texture statistical method based on the gray level co-occurrence matrix, and the characteristic values of the characteristic value texture images at least comprise: entropy, contrast, autocorrelation, energy and homogeneity.
Specifically, in step one the following operations are performed: carrying out orthorectification of the remote sensing image using the rpb file supplied with the high-resolution remote sensing image; performing radiometric calibration of the orthorectified image using the calibration parameters and the spectral response function so as to obtain a surface reflectance image; carrying out geometric correction of the surface reflectance image using a reference image; fusing the high-resolution panchromatic and multispectral images using the GS (Gram-Schmidt) fusion method; and clipping the image along the administrative boundary, thereby completing the preprocessing of the image.
Additionally, step two may further include: and extracting a blue band in the high-resolution remote sensing image, and performing histogram equalization image enhancement processing.
Accordingly, in step two the following operations are performed: converting the data of the high-resolution remote sensing image to double type; converting the data into a gray image in the [0, 1] interval; expanding the data to the [0, 255] interval; extracting the blue band of the high-resolution remote sensing image for analysis; and performing histogram equalization image enhancement processing on the blue band, thereby further improving the ability to distinguish facility vegetables from other ground-object categories.
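For illustration only, a minimal MATLAB sketch of this step is given below (MATLAB is the environment mentioned later in the description); the file name and the assumption that the blue band is band 3 are hypothetical and depend on the actual sensor and band order.

% Minimal sketch of step two, assuming the Image Processing Toolbox is available
% and that the blue band is band 3 of the preprocessed image (an assumption).
img    = im2double(imread('preprocessed_scene.tif'));  % convert to double type
blue   = mat2gray(img(:, :, 3));                       % blue band scaled to [0, 1]
blue8  = uint8(blue * 255);                            % expand to [0, 255]
blueEq = histeq(blue8);                                % histogram equalization enhancement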
In step three, the following operations are performed: gray-scale quantization; determining the observation window; setting the step distance and scanning directions; calculating the gray level co-occurrence matrix of the texture; calculating the characteristic values of the characteristic value texture image; and generating the characteristic value texture image.
Specifically, the fourth step includes: creating, from the characteristic value texture images and by a global threshold segmentation method, a mask image of the buildings and roads that are easily confused with facility vegetables, wherein the mask image is used for removing the buildings and roads from the high-resolution remote sensing image.
In step four, the following operations are performed: selecting the texture image on which buildings and roads are best distinguished from other ground features; segmenting out the buildings and roads by global threshold segmentation to create the mask image; and removing the interfering buildings and roads with the mask image.
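A minimal sketch of this masking step is given below; the assumptions are that the homogeneity texture image (here called homTex) is the one on which buildings and roads separate best, that it is normalized to [0, 1], and that blueEq is the enhanced band from the earlier sketch.

% Sketch of step four: global-threshold mask for buildings and roads.
% homTex is an assumed texture image (e.g. homogeneity), normalized to [0, 1]
% and of the same size as the enhanced band blueEq.
level   = graythresh(homTex);          % Otsu global threshold
builtUp = imbinarize(homTex, level);   % 1 where buildings / roads dominate (assumption)
masked  = double(blueEq);
masked(builtUp) = 0;                   % buildings and roads removed from the image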
The fifth step comprises: masking the original high-resolution remote sensing image with the mask image; removing background elements by an iterative threshold segmentation method; and extracting the edges of the facility vegetables by an edge detection method so as to obtain the edge detection lines, wherein the background elements at least comprise bare soil in the image, and the edge detection method is the canny edge detection method.
Specifically, in step five, the following steps are performed: removing background elements by adopting an iterative threshold segmentation method; adopting a Gaussian filter to smooth the remote sensing image; calculating the amplitude and direction of the gradient; carrying out non-maximum suppression on the amplitude image; the image edges are detected and connected using a dual threshold algorithm.
The sixth step comprises: performing mathematical morphology operations on the edge detection lines to connect image pixels; segmenting the image obtained from the global threshold segmentation with the edge detection lines so as to obtain the pattern spots of the facility vegetables; removing the "connected slice" phenomenon in the pattern spots with the opening operation of mathematical morphology so as to obtain a removed image; and finely segmenting the removed image by characteristic indexes in a connected-domain display mode, generating the extraction result as a grayscale image of the facility vegetables, wherein the mathematical morphology operation is the dilation operation of mathematical morphology, and the characteristic indexes comprise area, perimeter, circularity, aspect ratio and rectangularity.
In the sixth step, a "disk" structural element of size 2 is selected and used to perform the mathematical morphology dilation operation on the edge detection lines; the image obtained from the global threshold segmentation is then segmented with the dilated edge detection lines to obtain the pattern spots of the facility vegetables; the "connected slice" phenomenon in the pattern spots is removed with the opening operation of mathematical morphology to obtain a removed image; the result is displayed by connected domains, the relevant indexes of the connected domains are calculated, and the removed image is finely segmented using the five characteristic indexes, generating the extraction result as a grayscale image of the facility vegetables.
In step seven, performing: carrying out binarization processing on the gray level image of the facility vegetable; extracting vector pattern spots of the facility vegetables from ENVI software and carrying out vector-to-grid processing; in ArcGIS software, the geographical coordinate system of the extracted facility vegetables is added, so that the final result image is obtained.
Therefore, compared with the prior art, the invention can realize the following beneficial effects:
1) building and road information is extracted by adopting global threshold segmentation, background information such as bare soil and the like is removed by adopting iterative threshold segmentation, and the precision of extraction of the facility vegetables can be ensured by adopting an edge detection segmentation algorithm;
2) compared with the traditional supervised classification and unsupervised classification, the method adopts the class extraction of a single image, focuses more on the deep analysis and mining of spectrum and texture information, and achieves the purposes of stronger pertinence and obviously improved classification precision;
3) the extraction precision of each image can be ensured by adopting mask processing, layered extraction and the assistance of local data and expert knowledge;
4) the combination of these advantages ensures both effective extraction of the facility vegetables and accurate classification;
5) compared with the existing facility vegetable information extraction method, the method improves the robustness and universality of the high-resolution remote sensing image facility vegetable extraction, and realizes the extraction of the facility vegetable planting area.
Drawings
Fig. 1 is a flowchart of a facility vegetable extraction method based on multi-temporal high-resolution remote sensing images according to an embodiment of the present invention;
fig. 2 is a flow chart of a facility vegetable extraction specific operation based on a multi-temporal high-resolution remote sensing image according to an embodiment of the present invention;
FIG. 3 shows a high resolution remote sensing image preprocessed for a region;
fig. 4(a), fig. 4(b), fig. 4(c), fig. 4(d), and fig. 4(e) respectively show five texture feature value images obtained by using the gray level co-occurrence matrix in step three of fig. 1;
FIG. 5 shows an image after processing using the mask of step four of FIG. 1;
FIG. 6 shows the result image obtained after the threshold segmentation and canny edge detection algorithm of step five of FIG. 1 is adopted;
FIG. 7 is a diagram illustrating a display of a blob statistic using connected components, as referred to in step six of FIG. 1;
FIG. 8 is a graph showing the results of a blob process using shape feature indices, as referred to in step six of FIG. 1;
fig. 9 is a diagram showing an extraction result obtained by using the facility vegetable information method based on the high-resolution remote sensing image according to the present invention.
Detailed Description
It is understood that in agriculture, intensive vegetable production represented by facility vegetables is increasing in proportion, and therefore scientific and normative management of facility vegetable information is required. In order to meet the requirement, the present invention provides a method for extracting facility vegetable information based on high-resolution remote sensing images, and the present invention will be described in detail with reference to fig. 1 to 9 and the detailed description thereof.
As shown in fig. 1, the facility vegetable information extraction method based on the high-resolution remote sensing image includes the following steps:
firstly, preprocessing a high-resolution remote sensing image;
extracting a single wave band in the high-resolution remote sensing image, and performing image enhancement processing to improve the distinguishing capability of the facility vegetables and other ground object types;
selecting a proper observation window for texture analysis of the enhanced image, and calculating a characteristic value texture image;
fourthly, creating mask images of the buildings and roads that are easily confused with facility vegetables according to the characteristic value texture images;
and fifthly, masking the original image by using the mask image produced in the previous step, and removing the background by adopting a threshold segmentation method again. Extracting the edges of the facility vegetables by adopting edge detection to obtain an edge detection line;
performing mathematical morphology operation on the edge detection line, removing the 'connected slice' phenomenon as much as possible, then displaying by adopting a connected domain, finely dividing the image by using a shape characteristic index, and generating a facility vegetable gray level image extraction result;
performing binarization processing on the facility vegetable gray level image, extracting the facility vegetable vector pattern spots, and performing vector-to-grid processing to extract the facility vegetable information;
specifically, the specific technical flow of the steps one to seven is shown in fig. 2.
In step one, after the remote sensing image is read, radiometric calibration, geometric correction with a reference image, image fusion, image mosaicking and rule-based clipping of the study area are carried out, completing the preprocessing of the image.
And step two, extracting a blue wave band in the high-resolution remote sensing image, and performing histogram equalization image enhancement to improve the distinguishing capability of the facility vegetables and other ground object types.
In step three, a suitable observation window is selected for the enhanced image, and the texture characteristic value images are calculated by the texture statistical method of the gray level co-occurrence matrix; the characteristic values include entropy (ENT), contrast (CON), autocorrelation (COR), energy (ENE) and homogeneity (HOM).
And fourthly, creating mask images of buildings and roads which are easy to be confused by facility vegetables according to the characteristic value texture images by a global threshold segmentation method, and removing the buildings and the roads from the high-resolution remote sensing images through the mask images.
And fifthly, removing background elements such as soil in the masked high-resolution remote sensing image by adopting an iterative threshold segmentation method, and then carrying out canny edge detection to obtain an edge detection line of the ground object pattern spot.
And step six, performing mathematical morphology expansion operation on the edge detection line to connect the pixel points, and then segmenting the image segmented by the threshold value in the step four by using the edge line to obtain the pattern spots of the facility vegetables. The "connected slice" phenomenon is removed as much as possible by continuing the opening operation in morphology, and then the image is finely divided by five feature indexes of Ar (area), Perimeter, Metric (circularity), Pwl (aspect ratio), and Pr (rectangular ratio) by using connected domain display, thereby generating the facility vegetable grayscale image extraction result.
And seventhly, performing binarization processing on the facility vegetable gray level image, then extracting the facility vegetable vector pattern spots in ENVI software, performing vector-to-grid conversion, and adding the geographic coordinate system to the extracted facility vegetables in ArcGIS software to obtain the final result image.
Further, the third step specifically includes the following steps:
gray scale quantization
Texture analysis is performed on the single-band gray level image obtained in step two, using the gray level co-occurrence matrix for texture statistics. Because the image has 256 gray levels, direct computation is expensive and time-consuming. Therefore, histogram equalization is applied to the image before the gray level co-occurrence matrix is calculated, and the gray levels of the original image are then compressed, generally to 8 or 16 levels, without destroying the image texture features, so as to reduce the size of the co-occurrence matrix and hence the amount and time of computation.
Determining observation windows
The setting of the window is very important and is the key to extracting accurate texture information, but its choice involves a trade-off. Since texture is a regional concept, the principle of spatial consistency should be embodied: the larger the observation window, the stronger its ability to detect homogeneity within a region but the weaker its localization, which leads to a higher misclassification rate near the boundaries between classes, and a larger window also means a larger amount of computation. Conversely, to make the boundaries between different textures correspond to genuine transitions of region texture and to locate them accurately, a smaller observation window is desirable; the difficulty is that if the window is too small, segmentation errors appear within a single texture. Generally, once the image size is determined, the calculation window is determined accordingly.
Setting step distance and scanning direction
The gray level co-occurrence matrix varies rapidly with distance for fine textures and slowly for coarse textures; in general, a larger step distance gives better results for smooth textures and a smaller step distance for coarse textures. For the high-resolution remote sensing image, the step distance is set to d = 1 and the scanning directions to θ = 0°, 45°, 90° and 135°.
Computing gray level co-occurrence matrices for textures
The gray level co-occurrence matrix is defined as the joint probability distribution of pixel pairs. It is a symmetric matrix that reflects not only the comprehensive information of image gray levels over adjacent directions, intervals and ranges of variation, but also the positional distribution among pixels of the same gray level, and it is the basis for calculating texture features. Take any point (x, y) in the image and a point (x + a, y + b) offset from it (where a and b are integers defined by the user) to form a point pair. Let the gray values of this pair be (f1, f2); moving the point (x, y) over the entire image yields different (f1, f2) values. If the maximum gray level of the image is L, there are L × L combinations of f1 and f2. For the whole image, the number of occurrences of each (f1, f2) value is counted, the counts are arranged into a square matrix, and the matrix is normalized by the total number of occurrences into the probability P(f1, f2); the resulting matrix is the gray level co-occurrence matrix. For step distance d and the four scanning directions, the co-occurrence matrix is calculated as follows:
P(i, j, d, 0°) = #{ [(k, l), (m, n)] ∈ D : k - m = 0, |l - n| = d, f(k, l) = i, f(m, n) = j }
P(i, j, d, 45°) = #{ [(k, l), (m, n)] ∈ D : (k - m = d, l - n = -d) or (k - m = -d, l - n = d), f(k, l) = i, f(m, n) = j }
P(i, j, d, 90°) = #{ [(k, l), (m, n)] ∈ D : |k - m| = d, l - n = 0, f(k, l) = i, f(m, n) = j }
P(i, j, d, 135°) = #{ [(k, l), (m, n)] ∈ D : (k - m = d, l - n = d) or (k - m = -d, l - n = -d), f(k, l) = i, f(m, n) = j }
where d denotes the pixel spacing, (k, l) and (m, n) are the coordinates of the two pixels of a pair separated by d in the given direction, f(·) is the gray value at a pixel, #{·} denotes the number of elements in a set, and D is the image domain.
Calculating the texture characteristic values: entropy (ENT), contrast (CON), autocorrelation (COR), energy (ENE) and homogeneity (HOM).
The following 5 characteristic values can be calculated from the gray level co-occurrence matrix; the formulas are as follows:
entropy (Entropy, ENT):
ENT = - Σ_i Σ_j P(i, j) · log P(i, j)
contrast (Contrast, CON):
CON = Σ_n n² · Σ_{|i-j|=n} P(i, j)
where |i - j| = n.
Autocorrelation (Correlation, COR):
COR = [ Σ_i Σ_j (i · j) · P(i, j) - μ_x · μ_y ] / (δ_x · δ_y)
where μ_x, μ_y and δ_x, δ_y are respectively the means and standard deviations of m_x and m_y; m_x is the sum of the elements of each row of the matrix P, and m_y is the sum of the elements of each column of the matrix P.
Energy (Energy, ENE):
ENE = Σ_i Σ_j P(i, j)²
homogeneity (Homogeneity, HOM):
HOM = Σ_i Σ_j P(i, j) / (1 + |i - j|)
in the above 5 formulae, P (i, j) represents an element value in GLCM.
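As an illustration of these five formulas, a minimal MATLAB sketch that evaluates them for a normalized gray level co-occurrence matrix P is given below; the use of 16 gray levels and of blueEq from the earlier sketch are assumptions, and graycoprops is deliberately not used here because it does not return entropy.

% Sketch: the five texture features from a normalized GLCM P (definitions as above).
glcm = graycomatrix(blueEq, 'NumLevels', 16, 'Symmetric', true);
P    = glcm / sum(glcm(:));                     % normalize counts to probabilities
[I, J] = ndgrid(1:size(P, 1), 1:size(P, 2));    % gray-level index grids
ENT = -sum(P(P > 0) .* log(P(P > 0)));          % entropy
CON = sum(sum(((I - J).^2) .* P));              % contrast
mu_x = sum(sum(I .* P));   mu_y = sum(sum(J .* P));
d_x  = sqrt(sum(sum(((I - mu_x).^2) .* P)));
d_y  = sqrt(sum(sum(((J - mu_y).^2) .* P)));
COR = (sum(sum(I .* J .* P)) - mu_x * mu_y) / (d_x * d_y);   % autocorrelation
ENE = sum(sum(P.^2));                           % energy
HOM = sum(sum(P ./ (1 + abs(I - J))));          % homogeneity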
Generating texture feature images
The main idea of texture feature image generation is as follows: for the sub-image formed by each observation window, the gray level co-occurrence matrix and the texture feature value are calculated by a texture feature calculation program, and the texture feature value representing that window is assigned to the centre point of the window, completing the calculation for one observation window. The window is then moved by one pixel to form a new observation window image, and a new co-occurrence matrix and texture feature value are computed. Proceeding in this way over the whole image yields a matrix of texture feature values, which is then converted into a texture feature image and displayed in MATLAB.
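The following MATLAB sketch illustrates this sliding-window idea for one feature (entropy); the 7 × 7 window, the use of 16 gray levels and the symmetric padding are assumptions, and the offsets correspond to the four scanning directions at d = 1 described above.

% Sketch of texture-feature-image generation by a sliding window (illustrative only).
win  = 7;  half = floor(win / 2);                    % assumed 7 x 7 observation window
offs = [0 1; -1 1; -1 0; -1 -1];                     % 0, 45, 90, 135 degrees at d = 1
pad  = padarray(blueEq, [half half], 'symmetric');
entImg = zeros(size(blueEq));
for r = 1:size(blueEq, 1)
    for c = 1:size(blueEq, 2)
        sub  = pad(r:r+win-1, c:c+win-1);            % sub-image of the current window
        g    = graycomatrix(sub, 'Offset', offs, 'NumLevels', 16, 'Symmetric', true);
        Pw   = sum(g, 3);  Pw = Pw / sum(Pw(:));     % average the four directions, normalize
        entImg(r, c) = -sum(Pw(Pw > 0) .* log(Pw(Pw > 0)));  % entropy at the window centre
    end
end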
Further, the fifth step specifically includes the following steps:
removing background elements such as bare soil in the image by adopting an iterative threshold segmentation method
The iterative method adopts the idea based on approximation and comprises the following steps:
calculating the maximum gray value and the minimum gray value of the image, respectively recording as ZMAX and ZMIN, and making the initial threshold value T equal to (ZMAX + ZMIN)/2;
dividing the image into a foreground and a background according to the threshold T, and respectively solving average gray values ZO and ZB of the foreground and the background;
computing a new threshold T = (ZO + ZB)/2;
if the two average gray values ZO and ZB (or T) no longer change, then T is the final threshold; otherwise, the iterative computation returns to step 2). After the iteration converges, background elements such as bare soil are removed using this threshold.
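A minimal MATLAB sketch of this iterative threshold is given below, under the assumption that masked is the masked image from the earlier sketch and that pixels already set to zero by the mask are excluded.

% Sketch of the iterative (approximation-based) threshold of step five.
g = double(masked(masked > 0));          % masked, non-zero pixels only (assumption)
T = (max(g) + min(g)) / 2;               % initial threshold
delta = Inf;
while delta > 0.5                        % stop when T essentially no longer changes
    ZO = mean(g(g >  T));                % foreground mean
    ZB = mean(g(g <= T));                % background mean
    Tnew  = (ZO + ZB) / 2;
    delta = abs(Tnew - T);
    T = Tnew;
end
fg = masked > T;                         % background elements such as bare soil removed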
Smoothing high-resolution remote sensing image by adopting Gaussian filter
Let I[i, j] denote the pixel value at row i and column j of the original image. The convolution of the image with a Gaussian smoothing filter is computed by a separable filtering method, and the result is the smoothed data array given by:
S[i,j]=G[σ]*I[i,j]
where S[i, j] is the smoothed pixel value, G[σ] is the Gaussian kernel, * denotes convolution, and σ is the standard deviation (scale parameter) of the Gaussian, which controls the degree of smoothing.
Calculating the magnitude and direction of the gradient
The gradient of the smoothed array S[i, j] is computed with 2 × 2 first-order finite-difference approximations, giving the two arrays P[i, j] and Q[i, j] of partial derivatives in x and y:
P[i,j]≈(S[i,j+1]-S[i,j]+S[i+1,j+1]-S[i+1,j])/2
Q[i,j]≈(S[i,j]-S[i+1,j]+S[i,j+1]-S[i+1,j+1])/2
finite differences are averaged within this 2 x 2 square to calculate the partial derivative gradients of x and y at the same point in the image. The amplitude M [ i, j ] and azimuth θ [ i, j ] may be calculated using a Cartesian to polar coordinate transformation equation as follows:
M[i, j] = sqrt( P[i, j]² + Q[i, j]² )
θ[i,j]=arctan(Q[i,j]/P[i,j])
The two-argument arctangent is used here, so that the resulting angle is defined over the full circle.
Non-maximum suppression of amplitude images
The purpose of non-maximum suppression is to eliminate most of the non-edge points in the image computed in the previous step. The principle is to decide, from the eight-neighbourhood of a pixel, whether it is kept as an edge point or set to the background.
Detecting and connecting image edges using a dual threshold algorithm
For the image generated by non-maximum suppression, in order to reduce the number of false edges, a dual threshold algorithm is used to detect the edges, and a high threshold TH and a low threshold TL are generally set, and the ratio of the high threshold to the low threshold is generally between 2: 1 and 3: 1. If the gradient magnitude of a certain pixel position exceeds a high threshold, the pixel is reserved as an edge pixel; if the gradient magnitude of a certain pixel position is less than a low threshold, the pixel is excluded; if the magnitude of a pixel location is between two thresholds, the pixel is only retained when connected to a pixel above the high threshold. The dual-threshold algorithm can splice the candidate pixels into a contour, and a hysteresis threshold algorithm is applied to the pixels when the contour is formed.
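For illustration, this Canny stage can be sketched in MATLAB with the toolbox edge function, which internally performs the Gaussian smoothing, gradient computation, non-maximum suppression and double-threshold linking described above; the values of sigma and of the high threshold are assumptions, with the high:low ratio set to 2:1 as suggested in the text.

% Sketch of step five after background removal (fg and masked from the earlier sketches).
bgFree = mat2gray(double(masked));                 % rescale to [0, 1]
bgFree(~fg) = 0;                                   % bare-soil background suppressed
sigma  = 1;                                        % assumed Gaussian scale
TH = 0.2;  TL = TH / 2;                            % high : low threshold ratio of 2 : 1
edgeLines = edge(bgFree, 'canny', [TL TH], sigma); % edge detection lines of the pattern spots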
Further, the sixth step specifically includes the following steps:
selecting a size 4 "disk" structuring element
A structural element can be regarded as a small image and is generally used in the morphological operations of an image such as dilation, erosion, opening and closing. When filtering an image with mathematical morphology, the most important choice is that of the structural element; generally, both its type (shape) and its size must be determined. Typical shapes include 'arbitrary', 'pair', 'line', 'square', 'rectangle', 'diamond' and 'disk'. Regarding this choice, the land-cover detail in the high-resolution image is rich, and the facility vegetable greenhouses are regular in shape without obvious anisotropic characteristics. The disk (circular) element is isotropic, and a structural element can selectively remove noise and irrelevant image objects of comparable scale while retaining other useful information, so the circular structural element has a great advantage in processing the high-resolution remote sensing image.
Performing morphological dilation operations using structural elements to obtain edge lines
After edge detection with the canny algorithm, some edges are not connected together, so the morphological dilation operation is used to connect them. The dilation operation is defined as:
A ⊕ B = { x | (B̂)_x ∩ A ≠ ∅ }
wherein: in the formula, A is an input image, B is a structural element, and x is the moving distance of the operation window. The dilation operation may incorporate all background points in contact with the object into the object, enlarging the object, and filling in holes in the object. The specific operation is as follows: each pixel in the image is scanned by a structural element, and each pixel in the structural element and the pixel covered by the structural element are subjected to AND operation, if the pixel is 0, and otherwise the pixel is 1.
Segmenting the image threshold-segmented in step five with the edge lines to obtain the facility vegetable pattern spots
An AND operation is performed between the threshold-segmented image of step five and the masked image, deleting confusable objects connected to the facility vegetables and separating some connected facility vegetables, so as to obtain the facility vegetable pattern spots as completely as possible.
Adopting the morphological opening operation to remove the "connected slice" phenomenon as far as possible
The 'connected slice' phenomenon of the facility vegetables can be reduced by adopting the opening operation in morphology. Open (imopen) operation definition:
A ∘ B = (A ⊖ B) ⊕ B
the opening operation is to sequentially carry out corrosion and expansion operation to smooth bright detail characteristics in the image, filter burrs smaller than defined structural elements, smooth the boundary of a larger object, cut off slender lap joints to play a role in separation, and simultaneously not obviously change the area of the object.
Using connected domain display
An image that after the above processing still contains both facility vegetables and "noise" can be distinguished by labelling. A simple and effective way to label the regions of a segmented image is to check the connectivity of each pixel with its neighbours. After the preceding steps, the background pixels of the image have value 0 and the target pixels value 1. When the algorithm scans the image from left to right and from top to bottom, labelling a spot requires recording the connectivity of the currently scanned pixel with the neighbouring pixels scanned before it. Pixel scanning is performed with a 4-neighbourhood algorithm.
If the current pixel value is 0, the scanning position is moved to the next scanning position. If the current pixel value is 1, the two adjacent pixels to the left and top are examined (since they would be scanned before the current pixel). The combination of these two pixel values and the flag will give rise to four cases to be considered.
1) If their pixel values are both 0, then a new label (indicating the start of a new connected component) is given to the pixel.
2) If only one pixel value between them is 1, the current pixel is marked as the mark of the pixel value of 1.
3) If their pixel values are all 1 and the labels are the same, then the label of the current pixel is equal to the label.
4) If both pixel values are 1 but their labels differ, the smaller label is assigned to the current pixel, and the other label is traced back to the starting pixel of its region and relabelled, each step of the trace applying the above four rules.
In this way all connected domains are guaranteed to be labelled; the labelling is then completed by giving different labels different colors or drawing frames around them.
Calculating the relevant indexes of the connected domains, namely Ar (area), Perimeter, Metric (circularity), Pwl (aspect ratio) and Pr (rectangularity), to finely segment the image and generate the facility vegetable grayscale image result
And after all connected domains are marked, related indexes can be calculated, and the images are subdivided and segmented by using the related characteristic indexes to generate a grayscale image extraction result of the facility vegetables.
After the morphological operations, the extraction accuracy of the facility vegetable greenhouses after image segmentation is improved, but mis-segmentation in which roads and buildings are mixed with the greenhouses still occurs. Analysis shows that buildings generally appear as squares with a smaller aspect ratio than the vegetable greenhouses, so they can be separated using the rectangularity Pr, the aspect ratio Pwl and the circularity Metric; small irregular pattern spots can be removed using the two parameters area Ar and Perimeter.
1. Circularity Metric:
Metric = 4πS / P²
where S is the area of the pattern spot region and P is its perimeter. 0 < Metric ≤ 1, and the larger the value, the closer the region is to a circle.
2. Aspect ratio Pwl:
Pwl = a / b
wherein, a is the length of the long side of the pattern spot area, and b is the length of the short side of the pattern spot area.
3. Rectangularity Pr:
Pr = S / S_R
where S is the area of the pattern spot region and S_R is the area of its minimum circumscribed rectangle.
Pr is thus the ratio of the area of the spot region to the area of its minimum circumscribed rectangle; it measures how close the region is to a rectangle and reflects how fully the pattern spot occupies its bounding rectangle. Its value range is (0, 1], and a larger rectangularity indicates that the region is closer to a rectangle.
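The connected-domain labelling and shape-index screening can be sketched in MATLAB as below; the use of the axis-aligned bounding box as an approximation of the minimum circumscribed rectangle and all numeric thresholds are assumptions chosen only for illustration.

% Sketch of the connected-domain labelling and shape-index screening of step six.
cc    = bwconncomp(patches, 4);                  % 4-neighbourhood connected domains
stats = regionprops(cc, 'Area', 'Perimeter', 'BoundingBox');
keep  = false(cc.NumObjects, 1);
for k = 1:cc.NumObjects
    Ar  = stats(k).Area;
    Per = stats(k).Perimeter;
    bb  = stats(k).BoundingBox;                  % [x y width height], axis aligned
    a = max(bb(3), bb(4));  b = min(bb(3), bb(4));
    Metric = 4 * pi * Ar / max(Per^2, eps);      % circularity
    Pwl    = a / b;                              % aspect ratio
    Pr     = Ar / (bb(3) * bb(4));               % rectangularity (bounding-box approximation)
    keep(k) = Ar > 200 && Per > 60 && ...        % placeholder thresholds
              Metric < 0.9 && Pwl > 1.5 && Pr > 0.5;
end
grayResult = ismember(labelmatrix(cc), find(keep));  % facility vegetable grayscale result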
Note that the invention was programmed and simulated in MATLAB R2016a on a system with a Core(TM) i5-4590 3.30 GHz CPU, 4 GB of memory and Windows 8 Professional. In each embodiment of the present invention, a preprocessed high-resolution remote sensing image of a certain region containing 4 ground-feature classes is selected, as shown in fig. 3. After the histogram equalization image enhancement of step two, the five characteristic value texture images obtained from the statistics of the gray level co-occurrence matrix in the texture-image extraction of step three are generated, as shown in fig. 4. Step four is then carried out: the characteristic texture image is processed to create the mask image of the corresponding buildings and roads, as shown in fig. 5. In step five, the result image obtained after threshold segmentation and the canny edge detection algorithm are applied to the masked image is shown in fig. 6. In step six, the result image of connected-domain pattern spot statistics performed in the 4-neighbourhood manner is shown in fig. 7, and the display result after the "noise" pattern spots are removed by the shape characteristic indexes is shown in fig. 8. Finally, the extraction result of the facility vegetable information based on the high-resolution remote sensing image is shown in fig. 9. Compared with manual visual interpretation, the extraction result reaches an accuracy of more than 90%, which demonstrates the excellent extraction effect of the designed method.
In conclusion, the invention adopts global threshold segmentation to extract building and road information, adopts iterative threshold segmentation to remove background information such as bare soil and the like, and adopts an edge detection segmentation algorithm to ensure the precision of extraction of the facility vegetables.
In addition, the invention adopts the class extraction of a single image, and compared with the traditional supervised classification and unsupervised classification, the invention focuses more on the deep analysis and mining of the spectrum and texture information, thereby achieving the purposes of stronger pertinence and obviously improving the classification precision. In addition, the mask processing, the layered extraction and the assistance of local data and expert knowledge can ensure the extraction precision of each image. The combined advantages ensure the extraction advantages of the facility vegetables and the accuracy of the accurate classification. Compared with the existing facility vegetable information extraction method, the method has better robustness and universality.
The foregoing is only a basic embodiment of the invention, and it will be apparent to those skilled in the art that various modifications and enhancements can be made without departing from the principles of the invention, which modifications and enhancements are also considered to be within the scope of the invention. The parts not described in the present invention belong to the known art in the field.

Claims (14)

1. A facility vegetable extraction method based on a multi-temporal high-resolution remote sensing image is used for extracting facility vegetable information in the image based on the high-resolution remote sensing image, and is characterized by comprising the following steps:
preprocessing the high-resolution remote sensing image to extract a single-band image;
performing image enhancement processing on the single-band image so as to improve the ability to distinguish the facility vegetables from other ground-object categories;
selecting a preset observation window, and performing texture analysis processing on the enhanced image so as to calculate and obtain a characteristic value texture image;
fourthly, based on the characteristic value texture image, creating a mask image of buildings and roads which are easy to be confused with the facility vegetables;
step five, masking the original high-resolution remote sensing image by using the mask image so as to extract an edge detection line of a surface feature pattern spot;
step six, performing mathematical morphology operation and segmentation processing on the edge detection line so as to generate an extraction result of a gray level image of the facility vegetable;
and seventhly, performing binarization processing on the grayscale image of the facility vegetable, extracting the vector pattern spots of the facility vegetable, and performing vector-to-grid processing, thereby realizing the extraction of the facility vegetable information.
2. The facility vegetable extraction method based on the multi-temporal high-resolution remote sensing image according to claim 1, wherein the preprocessing at least comprises: radiometric calibration, geometric correction by using a reference image, image fusion, image mosaicking and rule-based region clipping.
3. The facility vegetable extraction method based on the multi-temporal high-resolution remote sensing image according to claim 2, wherein the first step is executed as follows:
carrying out orthorectification operation on the remote sensing image by adopting the rpb file carried by the high-resolution remote sensing image;
performing radiometric calibration on the orthorectified image by adopting calibration parameters and a spectral response function so as to obtain an earth surface reflectance image;
adopting a reference image to carry out geometric correction on the earth surface reflectivity image;
performing fusion processing on the high-resolution panchromatic and multispectral images by adopting a GS fusion method;
and (4) clipping the image by adopting an administrative boundary, thereby finishing the preprocessing of the image.
4. The facility vegetable extraction method based on the multi-temporal high-resolution remote sensing image according to claim 1, wherein the second step further comprises:
and extracting a blue wave band in the high-resolution remote sensing image, and performing histogram equalization image enhancement processing.
5. The facility vegetable extraction method based on the multi-temporal high-resolution remote sensing image according to claim 4, wherein in the second step:
converting the data of the high-resolution remote sensing image into a double type;
converting the data of the high-resolution remote sensing image into a gray image in a [0, 1] interval;
expanding the data of the high-resolution remote sensing image to a [0, 255] interval;
extracting a blue band in the high-resolution remote sensing image for analysis;
and performing histogram equalization image enhancement processing on the blue wave band, thereby further improving the distinguishing capability of the facility vegetables and other ground feature categories.
6. The method for extracting greenhouse vegetables based on multi-temporal high-resolution remote sensing images as claimed in claim 1, wherein the texture analysis process is a texture statistical method of gray level co-occurrence matrix, and the eigenvalues of the eigenvalue texture images at least include: entropy, contrast, autocorrelation, energy, homogeneity.
7. The facility vegetable extraction method based on the multi-temporal high-resolution remote sensing image according to claim 6, wherein the following steps are performed in the third step:
gray-scale quantization; determining the observation window; setting a step distance and a scanning direction; calculating a gray level co-occurrence matrix of the texture; calculating the characteristic value of the characteristic value texture image; and generating the characteristic value texture image.
8. The facility vegetable extraction method based on the multi-temporal high-resolution remote sensing image according to claim 1, wherein the fourth step comprises:
creating mask images of buildings and roads which are easy to be confused with the facility vegetables by a global threshold segmentation method according to the characteristic value texture images,
and the mask image is used for removing buildings and roads from the high-resolution remote sensing image.
9. The facility vegetable extraction method based on the multi-temporal high-resolution remote sensing image according to claim 8, wherein the fourth step is performed by:
selecting texture images with high distinguishing degree between buildings, roads and other ground features;
replacing the buildings and roads by adopting a global threshold segmentation method to create the mask image;
and removing the interfering buildings and roads by using the mask image.
10. The facility vegetable extraction method based on the multi-temporal high-resolution remote sensing image according to claim 1, wherein the fifth step comprises:
masking the original high-resolution remote sensing image by using the mask image;
removing background elements by adopting an iterative threshold segmentation method;
extracting the edges of the facility vegetables by adopting an edge detection method so as to obtain the edge detection line,
wherein, the background element at least comprises image soil, and the edge detection method is canny edge detection method.
11. The facility vegetable extraction method based on the multi-temporal high-resolution remote sensing image according to claim 10, wherein the step five is executed as follows:
removing background elements by adopting an iterative threshold segmentation method;
smoothing the remote sensing image by adopting a Gaussian filter;
calculating the amplitude and direction of the gradient;
carrying out non-maximum suppression on the amplitude image;
the image edges are detected and connected using a dual threshold algorithm.
12. The method for extracting greenhouse vegetables based on multi-temporal high-resolution remote sensing images according to claim 1, wherein the sixth step comprises:
performing mathematical morphology operation on the edge detection line to connect image pixel points;
dividing the image subjected to global threshold segmentation by using the edge detection line so as to obtain the pattern spots of the facility vegetables;
removing the connected slice phenomenon in the pattern spots by using the opening operation in the mathematical morphology so as to obtain a removed image;
performing fine segmentation on the removed image through a characteristic index by adopting a connected domain display mode to generate an extraction result of a gray level image of the facility vegetable,
wherein the mathematical morphology operation is a dilation operation of mathematical morphology, and the characteristic index includes an area, a perimeter, a circularity, a length ratio, and a rectangular ratio.
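A minimal sketch of the morphology workflow in claim 12: dilate the edge detection lines to connect broken pixels, use them to cut the thresholded image into pattern spots, and open the result to remove adhesion. The size-2 disk for dilation follows claim 13; the opening element is an assumption:

```python
from skimage.morphology import binary_dilation, binary_opening, disk

def segment_pattern_spots(edges, thresholded):
    # Dilation with a "disk" structuring element of size 2 (claim 13)
    # connects broken pixels along the edge detection line.
    connected_edges = binary_dilation(edges, disk(2))
    # Cutting the thresholded image along the edges yields the candidate
    # pattern spots of the facility vegetables (both inputs are boolean).
    spots = thresholded & ~connected_edges
    # Opening removes adhesion between neighbouring spots; the element size
    # here is illustrative, the patent does not fix it.
    return binary_opening(spots, disk(2))
```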
13. The facility vegetable extraction method based on the multi-temporal high-resolution remote sensing image according to claim 12, wherein in the sixth step:
selecting a "disk" structuring element of size 2;
performing the mathematical morphology dilation operation with the structuring element to obtain the connected edge detection line;
segmenting the image obtained by the global threshold segmentation with the edge detection line to obtain the pattern spots of the facility vegetables;
removing adhesion between the pattern spots by the opening operation of mathematical morphology so as to obtain the cleaned image;
displaying the connected domains;
and calculating the relevant indices of each connected domain, and finely segmenting the cleaned image by the five characteristic indices to generate the extraction result of the grayscale image of the facility vegetables.
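A minimal sketch of the connected-domain fine segmentation in claim 13, computing the five characteristic indices per region (area, perimeter, circularity, aspect ratio and rectangularity) with scikit-image; every threshold below is an illustrative assumption, not a value from the patent:

```python
import numpy as np
from skimage.measure import label, regionprops

def filter_spots(binary_spots, min_area=200, min_circularity=0.15,
                 max_aspect=8.0, min_rectangularity=0.4):
    labels = label(binary_spots)                      # connected-domain display
    keep = np.zeros_like(binary_spots, dtype=bool)
    for region in regionprops(labels):
        area, perim = region.area, region.perimeter
        circularity = 4 * np.pi * area / (perim ** 2 + 1e-9)
        aspect = region.major_axis_length / (region.minor_axis_length + 1e-9)
        minr, minc, maxr, maxc = region.bbox
        rectangularity = area / ((maxr - minr) * (maxc - minc))
        # Keep only spots whose indices fall inside plausible greenhouse ranges.
        if (area >= min_area and circularity >= min_circularity
                and aspect <= max_aspect and rectangularity >= min_rectangularity):
            keep[labels == region.label] = True
    return keep
```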
14. The facility vegetable extraction method based on the multi-temporal high-resolution remote sensing image according to claim 1, wherein in the seventh step:
binarizing the grayscale image of the facility vegetables;
extracting the vector pattern spots of the facility vegetables in ENVI software and performing vector-raster conversion;
and adding a geographic coordinate system to the extracted facility vegetable data in ArcGIS software so as to obtain the final result image.
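Claim 14 performs the final vectorization in ENVI and ArcGIS. Purely as an open-source stand-in for those steps, the sketch below uses rasterio to binarize the grayscale result, extract vector pattern spots and carry the image's geographic coordinate system along; it is not the patent's ENVI/ArcGIS workflow:

```python
import numpy as np
import rasterio
from rasterio.features import shapes

def vectorize_result(gray_result_path, threshold=0):
    with rasterio.open(gray_result_path) as src:
        gray = src.read(1)
        binary = (gray > threshold).astype(np.uint8)      # binarization
        # Polygonize the facility-vegetable pixels into vector pattern spots,
        # georeferenced via the raster's affine transform.
        polygons = [geom for geom, val in
                    shapes(binary, mask=binary.astype(bool), transform=src.transform)
                    if val == 1]
        return polygons, src.crs                           # geometries + coordinate system
```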
CN201810592833.XA 2018-06-11 2018-06-11 Facility vegetable extraction method based on multi-temporal high-resolution remote sensing image Active CN108830844B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810592833.XA CN108830844B (en) 2018-06-11 2018-06-11 Facility vegetable extraction method based on multi-temporal high-resolution remote sensing image

Publications (2)

Publication Number Publication Date
CN108830844A (en) 2018-11-16
CN108830844B (en) 2021-09-10

Family

ID=64144933

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810592833.XA Active CN108830844B (en) 2018-06-11 2018-06-11 Facility vegetable extraction method based on multi-temporal high-resolution remote sensing image

Country Status (1)

Country Link
CN (1) CN108830844B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109685081B (en) * 2018-12-27 2020-07-24 中国土地勘测规划院 Combined change detection method for remote sensing extraction of abandoned land
CN109934122A (en) * 2019-02-21 2019-06-25 北京以萨技术股份有限公司 A kind of remote sensing image ship detecting method based on deep learning
CN110276797B (en) * 2019-07-01 2022-02-11 河海大学 Lake area extraction method
CN111259797A (en) * 2020-01-16 2020-06-09 南开大学 Iterative remote sensing image road extraction method based on points
CN113610013A (en) * 2021-08-10 2021-11-05 四川易方智慧科技有限公司 Method for extracting building outline based on RGB (Red Green blue) wave bands of high-definition remote sensing image
CN115346162B (en) * 2022-10-19 2022-12-13 南京优佳建筑设计有限公司 Indoor monitoring-based real-time monitoring method for water seepage of underground building wall
CN116385472B (en) * 2023-06-07 2023-08-08 深圳市锦红兴科技有限公司 Hardware stamping part deburring effect evaluation method
CN117237384B (en) * 2023-11-16 2024-02-02 潍坊科技学院 Visual detection method and system for intelligent agricultural planted crops
CN118230192B (en) * 2024-05-27 2024-07-23 环天智慧科技股份有限公司 Processing method for intertillage map spot collineation of cultivated land image based on example segmentation model

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8369622B1 (en) * 2009-10-29 2013-02-05 Hsu Shin-Yi Multi-figure system for object feature extraction tracking and recognition
CN105184251A (en) * 2015-08-31 2015-12-23 中国科学院遥感与数字地球研究所 Water bloom area identification method and device based on high-resolution-I satellite image
CN106650812A (en) * 2016-12-27 2017-05-10 辽宁工程技术大学 City water body extraction method for satellite remote sensing image
CN107154044A (en) * 2017-03-27 2017-09-12 广东顺德中山大学卡内基梅隆大学国际联合研究院 A kind of dividing method of Chinese meal food image
CN107862667A (en) * 2017-11-23 2018-03-30 武汉大学 A kind of city shadow Detection and minimizing technology based on high-resolution remote sensing image

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Semi Automatic Road Extraction by Fusion of High Resolution Optical and Radar Images; Elahe Khesali et al.; J Indian Soc Remote Sens; 2015-10-21; 1-9 *
Extraction and Analysis of Agricultural Greenhouse Area Based on High-Resolution Remote Sensing Data: A Case Study of Daxing District, Beijing; Li Qianxiang; Beijing Water (北京水务); 2016-12-31 (No. 6); 14-17 *

Also Published As

Publication number Publication date
CN108830844A (en) 2018-11-16

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant