CN110310246B - Sugarcane planting area remote sensing information extraction method based on three-linear array image - Google Patents

Sugarcane planting area remote sensing information extraction method based on three-linear array image

Info

Publication number
CN110310246B
CN110310246B (application CN201910603652.7A)
Authority
CN
China
Prior art keywords
image
view
sugarcane
data
multispectral
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910603652.7A
Other languages
Chinese (zh)
Other versions
CN110310246A (en)
Inventor
罗恒
丘小春
刘波
韦金丽
凌子燕
邵光州
黄妤
钟喆
廖珊珊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangxi Zhuang Autonomous Region Natural Resources Information Center
Original Assignee
Guangxi Zhuang Autonomous Region Basic Geographic Information Center
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangxi Zhuang Autonomous Region Basic Geographic Information Center filed Critical Guangxi Zhuang Autonomous Region Basic Geographic Information Center
Priority to CN201910603652.7A
Publication of CN110310246A
Application granted
Publication of CN110310246B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T5/80 Geometric correction
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10032 Satellite or aerial image; Remote sensing
    • G06T2207/10036 Multispectral image; Hyperspectral image
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30181 Earth observation
    • G06T2207/30188 Vegetation; Agriculture

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method for extracting remote sensing information on sugarcane planting areas from three-linear-array images. Conventional sugarcane remote sensing identification relies on single-view images taken by an ordinary optical satellite camera, so the available data are limited and only spectral, textural, shape and similar features of the satellite image can be computed, which restricts the information available for classification. The proposed method overcomes these deficiencies: it obtains elevation and multi-angle observation information for the sugarcane planting area, combines this with spectral, textural and other features, and identifies sugarcane planting areas by remote sensing with an object-oriented approach, thereby improving the identification accuracy of sugarcane and other land-cover classes.

Description

Sugarcane planting area remote sensing information extraction method based on three-linear-array image
Technical Field
The invention belongs to the technical field of agricultural information mapping, photogrammetry and remote sensing, and particularly relates to a sugarcane planting area remote sensing information extraction method based on a three-linear array image.
Background
With the development of satellite remote sensing technology, more and more remote sensing images can be used for identifying crop planting areas. For example, several studies have extracted sugarcane crop information and estimated yield from medium- or high-resolution multispectral optical satellite imagery (Patel, 1985; Rudorff, 1990; Rao, 2002; Galvao, 2005; Baghdadi, 2009; El Hajj, 2009; Rudorff). However, monitoring sugarcane in Guangxi with satellite remote sensing imagery still faces problems: the cloudy and rainy climate limits the availability and usability of images, and most existing work is based on medium- to low-resolution imagery. More high-resolution satellite images and additional technical means are therefore needed to improve identification accuracy and meet the requirements of remote sensing identification of sugarcane crops.
Patent CN201610670931.1 discloses a method and a device for dynamically updating sugarcane planting information, which specifically comprise: acquiring high-resolution remote sensing images of sugarcane plots in the same region at different time phases, and identifying unchanged plots and changed plots; migrating the historical land-category attributes of the unchanged plots along the time dimension; and matching features between the changed plots and the unchanged plots to distinguish sugarcane plots from non-sugarcane plots among the changed plots. That invention uses high-resolution data as the basis for plot identification and classification, avoids the 'salt-and-pepper' phenomenon of pixel-level classification, and improves the accuracy of classification and area measurement as well as the verifiability of the results; accurate plot boundaries replace the traditional object-oriented irregular patches that carry no natural or social attribute meaning. The method increases the speed and frequency with which sugarcane planting information is updated and reduces the dependence on data volume during updating.
Patent CN201610664410.5 discloses a method and a device for obtaining the sugarcane planting area on undulating terrain from remote sensing imagery, the method comprising: acquiring the patch vector map of sugarcane plots and DEM grid data; computing the mean slope of each patch from the DEM grid data; dividing the patch vector map of the sugarcane plots into flat plots and sloping plots according to the mean slope and a slope threshold; obtaining the area of the flat plots directly; decomposing each sloping plot into a number of adjacent small planar facets, computing the area of each facet and summing them to obtain the area of the sloping plot; and adding the areas of the two kinds of plots to obtain the total sugarcane plot area. The method judges the terrain of sugarcane plots by combining the planimetric vectors obtained from remote sensing with a high-precision digital elevation model, realises a fast fitted calculation of the true sugarcane planting area by extracting key terrain and boundary characteristics, and provides accurate data support for policy making and production management decisions.
A three-linear-array stereo mapping satellite carries a three-line-array panchromatic camera and a multispectral camera, providing more usable bands and wider coverage for land-use/land-cover mapping, sugarcane planting monitoring and similar applications. With the three-linear-array camera images, the sugarcane planting area can be observed in high-resolution images (2 m or even 1 m) taken from different angles; at the same time the multispectral camera provides four bands (blue, green, red and near-infrared) at a spatial resolution of 5.8 m, and fusing the multispectral image with the panchromatic image yields an image with both higher resolution and spectral characteristics. Sugarcane planting areas are easily confused with other crops such as maize and rice, and plant height is an important feature distinguishing sugarcane from these crops. A digital surface model (DSM) generated from the three-linear-array images reflects the height of the ground surface and can therefore capture the height difference between sugarcane and other crops. When sugarcane is mature its stalks are 3 to 4 metres tall, and the images acquired at different angles show different spectral and textural characteristics.
To date, no published study has reported extracting crop information from satellite three-linear-array images. This invention therefore attempts to extract sugarcane information from resource-satellite (ZY-3) three-linear-array imagery, providing additional means and data sources for sugarcane remote sensing monitoring and further extending the application range of three-linear-array high-resolution satellites.
Disclosure of Invention
Aiming at the deficiencies of existing sugarcane planting information extraction methods, namely low accuracy and incomplete use of remote sensing information, the invention provides a method for extracting remote sensing information on sugarcane planting areas from three-linear-array images.
The invention is realized by the following technical scheme:
a sugarcane planting area remote sensing information extraction method based on a three-linear array image comprises the following steps:
A. Three-linear-array image preprocessing: the original forward-view image, original rear-view image, original nadir-view image and original multispectral image acquired by the three-linear-array stereo mapping satellite are processed into an image set with consistent resolution, yielding a forward-view corrected image, a rear-view corrected image, a nadir-view corrected image and a multispectral corrected image;
the method comprises the following specific steps:
(1) Forward-view image orthorectification based on a rational polynomial model:
open the forward-view image data and the digital elevation model file, read the rational polynomial coefficient (RPC) file of the forward-view image, and orthorectify the forward-view image with the rational polynomial function model to obtain the forward-view corrected image;
(2) Rear-view image orthorectification based on a rational polynomial model:
open the rear-view image data and the digital elevation model file, read the RPC file of the rear-view image, and orthorectify the rear-view image with the rational polynomial function model to obtain the rear-view corrected image;
(3) Nadir-view image orthorectification based on a rational polynomial model:
open the nadir-view image data and the digital elevation model file, read the RPC file of the nadir-view image, and orthorectify the nadir-view image with the rational polynomial function model to obtain the nadir-view corrected image;
(4) Multispectral image orthorectification based on a rational polynomial model:
open the multispectral image data and the digital elevation model file, read the RPC file of the multispectral image, and orthorectify the multispectral image with the rational polynomial function model to obtain the multispectral corrected image (an illustrative orthorectification sketch follows);
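A minimal sketch of step A, showing one way to perform RPC-based orthorectification with GDAL. The file names, the target projection (EPSG:32648) and the bilinear resampling are illustrative assumptions, not requirements of the method.

```python
# Hedged sketch of step A: RPC-based orthorectification of each view with GDAL.
from osgeo import gdal

gdal.UseExceptions()

def rpc_orthorectify(src_path, dem_path, dst_path, dst_srs="EPSG:32648"):
    """Orthorectify one view against a DEM using its rational polynomial
    coefficients (read automatically from the image or its RPC side file)."""
    gdal.Warp(
        dst_path,
        src_path,
        dstSRS=dst_srs,                               # target map projection (assumption)
        rpc=True,                                     # use the rational polynomial model
        transformerOptions=[f"RPC_DEM={dem_path}"],   # terrain heights from the DEM
        resampleAlg="bilinear",
    )

# One call per view: forward, nadir and rear panchromatic plus the multispectral image.
for view in ("forward", "nadir", "rear", "multispectral"):
    rpc_orthorectify(f"{view}_raw.tif", "dem.tif", f"{view}_ortho.tif")
```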
B. generating a digital surface model, namely DSM, by using the three-linear array image;
the method comprises the following specific steps:
(1) Generating a left epipolar-line image from the original forward-view image and the forward-view corrected image;
(2) Generating a right epipolar-line image from the original rear-view image and the rear-view corrected image;
(3) Combining (densely matching) the left and right epipolar-line images to generate the DSM; an illustrative matching sketch follows this step;
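A hedged sketch of the dense-matching part of step B(3): semi-global matching with OpenCV turns the left/right epipolar images into a disparity map, from which a relative DSM can be derived. The file names and the disparity-to-height scale are placeholders; in practice the scale follows from the base-to-height ratio and ground sample distance of the forward/rear views, and rigorous photogrammetric software would be used for an absolute DSM.

```python
# Hedged sketch: semi-global matching of the epipolar pair into a relative DSM.
import cv2
import numpy as np

left = cv2.imread("epipolar_left.tif", cv2.IMREAD_GRAYSCALE)    # assumed file
right = cv2.imread("epipolar_right.tif", cv2.IMREAD_GRAYSCALE)  # assumed file

matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,          # must be a multiple of 16
    blockSize=5,
    P1=8 * 5 * 5,                # smoothness penalty for small disparity jumps
    P2=32 * 5 * 5,               # smoothness penalty for large disparity jumps
    uniquenessRatio=10,
    speckleWindowSize=100,
    speckleRange=2,
)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # pixels

HEIGHT_PER_DISPARITY = 1.0       # placeholder scale (depends on B/H ratio and GSD)
relative_dsm = disparity * HEIGHT_PER_DISPARITY
np.save("relative_dsm.npy", relative_dsm)
```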
C. Forming a resampled image data set from the forward-view corrected image, the rear-view corrected image, the nadir-view corrected image, the multispectral corrected image and the DSM;
the method comprises the following specific steps:
(1) Fusing and enhancing the low-resolution multispectral corrected image with the high-resolution nadir-view corrected image to generate an image with both higher spatial resolution and multispectral characteristics, i.e. the multispectral fusion image;
(2) Resampling the forward-view corrected image, the rear-view corrected image and the nadir-view corrected image, with the resolution value set equal to that of the nadir-view image, to obtain a resampled forward-view corrected image, a resampled rear-view corrected image and a resampled nadir-view corrected image;
(3) Combining the resampled forward-view corrected image, the resampled rear-view corrected image and the multispectral fusion image as separate bands into a multi-band image, and then combining this multi-band image with the DSM into one data set, i.e. the resampled image data set (an illustrative sketch follows);
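A minimal sketch of step C, assuming a simple Brovey-style pan-sharpening as the fusion method (the patent does not prescribe a particular fusion algorithm) and an in-memory GDAL workflow; file names and band handling are placeholders.

```python
# Hedged sketch: Brovey-style fusion of the multispectral ortho with the nadir
# panchromatic ortho, then stacking with the resampled views and the DSM.
import numpy as np
from osgeo import gdal

gdal.UseExceptions()

pan_ds = gdal.Open("nadir_ortho.tif")
xres = abs(pan_ds.GetGeoTransform()[1])     # nadir pixel size = target resolution

def to_nadir_grid(path):
    """Resample any input to the nadir pixel size, in memory."""
    ds = gdal.Warp("", path, format="MEM", xRes=xres, yRes=xres,
                   resampleAlg="bilinear")
    return ds.ReadAsArray().astype(np.float32)

ms = to_nadir_grid("multispectral_ortho.tif")       # (4, rows, cols)
pan = pan_ds.ReadAsArray().astype(np.float32)       # (rows, cols)
forward = to_nadir_grid("forward_ortho.tif")
rear = to_nadir_grid("rear_ortho.tif")
dsm = to_nadir_grid("dsm.tif")

# Crop everything to a common extent (grids may differ by a pixel or two).
rows = min(pan.shape[0], ms.shape[1], forward.shape[0], rear.shape[0], dsm.shape[0])
cols = min(pan.shape[1], ms.shape[2], forward.shape[1], rear.shape[1], dsm.shape[1])
ms, pan = ms[:, :rows, :cols], pan[:rows, :cols]

# Brovey-style pan-sharpening: scale every band by pan / mean of the bands.
fused = ms * (pan / (ms.mean(axis=0) + 1e-6))

stack = np.concatenate([fused,
                        forward[None, :rows, :cols],
                        rear[None, :rows, :cols],
                        dsm[None, :rows, :cols]], axis=0)
np.save("resampled_stack.npy", stack)
```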
D. Segmenting the resampled image data set to obtain a set of image segmentation objects;
the segmentation uses a watershed image segmentation algorithm, and the segmentation scale parameter is chosen according to the image resolution: for imagery with a resolution finer than 1 m the scale parameter can be set above 90, and for imagery with a resolution of about 2 m it can be set above 85 (a hedged segmentation sketch follows);
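A hedged sketch of step D using marker-controlled watershed segmentation from scikit-image. The band choice, gradient threshold and marker strategy are assumptions, and the mapping between these parameters and the scale values 85/90 quoted above is segmentation-software-specific.

```python
# Hedged sketch: marker-controlled watershed segmentation of the stacked image.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel
from skimage.segmentation import watershed

stack = np.load("resampled_stack.npy")          # from the fusion/stacking sketch
band = stack[0]                                 # e.g. fused red band (assumption)

gradient = sobel(band / (band.max() + 1e-6))    # edge strength
markers, _ = ndi.label(gradient < 0.05)         # seeds in homogeneous areas
segments = watershed(gradient, markers)         # one integer label per object
np.save("segments.npy", segments)
```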
E. Selecting samples: within the image segmentation object set, sugarcane planting area samples are selected according to field survey or in-office interpretation results, with 20-30 samples selected per scene;
F. Information classification: the imagery is classified by supervised classification with a support vector machine, a neural network or another machine learning algorithm, and the sugarcane classes obtained from the classification result are extracted as the final extraction result;
the method comprises the following specific steps:
(1) Using the samples as training samples, image analysis is carried out on the resampled image data set to obtain the raster data of the image classification result (an illustrative SVM sketch follows);
(2) The immature sugarcane, mature sugarcane and harvested sugarcane classes are extracted from the raster data of the image classification result as the final extraction result, yielding the remote sensing information of the sugarcane planting area.
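A minimal object-based SVM sketch for step F, assuming per-object band means as a stand-in for the full spectral/texture/shape feature set; the training labels, class identifiers and file names are placeholders (real samples come from the field survey or interpretation step).

```python
# Hedged sketch of step F: object-based supervised classification with an SVM.
import numpy as np
from scipy import ndimage as ndi
from sklearn.svm import SVC

stack = np.load("resampled_stack.npy")      # (bands, rows, cols), assumed file
segments = np.load("segments.npy")          # object label image, assumed file
labels = np.unique(segments)

# Feature matrix: mean of every band inside every object -> (n_objects, n_bands).
features = np.stack(
    [ndi.mean(stack[b], labels=segments, index=labels)
     for b in range(stack.shape[0])],
    axis=1)

# Placeholder training set: the first few objects with assumed class ids
# (1 immature sugarcane, 2 mature sugarcane, 3 harvested sugarcane, 4+ other).
train_ids = labels[:6]
y_train = np.array([1, 1, 2, 3, 4, 5])
X_train = features[np.searchsorted(labels, train_ids)]

clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_train, y_train)
pred = clf.predict(features)                # one class per object

# Map object predictions back to a raster for step F(2) and the clean-up step.
class_raster = pred[np.searchsorted(labels, segments)]
np.save("class_raster.npy", class_raster)
```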
The working principle of the invention is as follows:
the method comprises the steps of using a satellite three-linear array image, forming a digital surface model, a multispectral fusion image, a front-view image and a back-view image through preprocessing, dividing the image into an object set by adopting an object-oriented method, selecting a certain number of sugarcane planting area sample objects and non-sugarcane planting area sample objects in the image, and identifying the land parcel objects belonging to the sugarcane planting area category in the image by using a feature extraction method based on supervision and classification.
The classification features comprise attribute features such as spectrum, texture, shape and the like of the multispectral fusion image, the front-view image, the rear-view image and the digital surface model.
As a further improvement of the invention, the epipolar-line images in step B are generated as follows:
(1) The original forward-view image or the original rear-view image is projected onto a plane parallel to the imaging baseline; the geometric relationship is analogous to that between the image-space and object-space coordinate systems. Through the corresponding rotation matrix, the coordinate relationship between points on the original forward-view or rear-view image and points on the forward-view or rear-view corrected image is obtained, forming horizontal epipolar lines;
(2) Points are sampled at intervals along a horizontal epipolar line, with no fewer than 20 points selected per image and more points used as the terrain complexity increases; using the coordinate relationship on the plane parallel to the imaging baseline, these points are back-projected onto the original image, and the grey values of the epipolar image are obtained by resampling, forming the left or right epipolar-line image (a hedged rectification sketch follows).
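The rigorous construction above relies on the sensor's rotation matrices. As a rough, hedged stand-in for experimentation, the sketch below rectifies a forward/rear image pair from feature matches using OpenCV's uncalibrated stereo rectification; its output feeds the dense-matching sketch under step B. File names and parameters are assumptions and this is not the patent's own epipolar model.

```python
# Hedged stand-in: uncalibrated epipolar rectification from ORB feature matches.
import cv2
import numpy as np

left = cv2.imread("forward_raw_subset.tif", cv2.IMREAD_GRAYSCALE)   # assumed file
right = cv2.imread("rear_raw_subset.tif", cv2.IMREAD_GRAYSCALE)     # assumed file

orb = cv2.ORB_create(5000)
k1, d1 = orb.detectAndCompute(left, None)
k2, d2 = orb.detectAndCompute(right, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
matches = sorted(matches, key=lambda m: m.distance)[:500]

p1 = np.float32([k1[m.queryIdx].pt for m in matches])
p2 = np.float32([k2[m.trainIdx].pt for m in matches])
F, inliers = cv2.findFundamentalMat(p1, p2, cv2.FM_RANSAC, 1.0, 0.99)

h, w = left.shape
ok, H1, H2 = cv2.stereoRectifyUncalibrated(
    p1[inliers.ravel() == 1], p2[inliers.ravel() == 1], F, (w, h))
cv2.imwrite("epipolar_left.tif", cv2.warpPerspective(left, H1, (w, h)))
cv2.imwrite("epipolar_right.tif", cv2.warpPerspective(right, H2, (w, h)))
```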
As a further improvement of the invention, the resampled image data set in step C comprises the red, green and blue bands of the multispectral fusion image, the panchromatic band of the resampled forward-view corrected image and the panchromatic band of the resampled rear-view corrected image.
As a further improvement of the invention, the samples in the step E comprise immature sugarcane, mature sugarcane, harvested sugarcane, woodland, grassland, water body, built-up area, road, bare soil, other dry land and paddy field.
As a further improvement of the invention, the image analysis in step F is computed from the spectral, texture and shape attribute features of the multispectral fusion image, the forward-view corrected image, the rear-view corrected image and the digital surface model; the spectral features comprise the mean, maximum, minimum and variance; the texture features comprise the mean, variance, entropy, angular second moment and homogeneity; and the shape features comprise the area, aspect ratio and rectangularity (a hedged feature-computation sketch follows).
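A hedged sketch of how the per-object spectral, texture and shape features listed above could be computed with scikit-image (version 0.19+ assumed for graycomatrix/graycoprops). Which band feeds the grey-level co-occurrence matrix and the quantisation level are assumptions.

```python
# Hedged sketch: per-object spectral statistics, GLCM texture and shape metrics.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from skimage.measure import regionprops

def object_features(band, segments, object_id, levels=32):
    mask = segments == object_id
    values = band[mask]
    spectral = {"mean": values.mean(), "var": values.var(),
                "min": values.min(), "max": values.max()}

    # Quantise the object's bounding box for the grey-level co-occurrence matrix.
    rows, cols = np.where(mask)
    patch = band[rows.min():rows.max() + 1, cols.min():cols.max() + 1]
    q = np.digitize(patch, np.linspace(patch.min(), patch.max() + 1e-6, levels))
    glcm = graycomatrix(q.astype(np.uint8), distances=[1], angles=[0],
                        levels=levels + 1, symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]
    texture = {"homogeneity": graycoprops(glcm, "homogeneity")[0, 0],
               "asm": graycoprops(glcm, "ASM")[0, 0],           # angular second moment
               "entropy": -np.sum(p[p > 0] * np.log2(p[p > 0]))}

    shape = regionprops(mask.astype(int))[0]
    geometry = {"area": shape.area,
                "aspect_ratio": shape.major_axis_length /
                                max(shape.minor_axis_length, 1e-6)}
    return {**spectral, **texture, **geometry}
```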
As a further improvement of the invention, if the raster data of the image classification result in step F contain fragments, obvious misclassifications or similar defects, category merging and removal of erroneous data are applied to the raster data before the sugarcane data are extracted (a clean-up sketch follows).
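A minimal post-classification clean-up sketch corresponding to the category merging and fragment removal described above; the class identifiers, minimum object size and file names are assumptions.

```python
# Hedged clean-up sketch: merge the sugarcane classes and drop small fragments.
import numpy as np
from skimage.morphology import remove_small_objects

class_raster = np.load("class_raster.npy")        # from the classification sketch
SUGARCANE_CLASSES = (1, 2, 3)                     # immature / mature / harvested (assumed ids)

sugarcane = np.isin(class_raster, SUGARCANE_CLASSES)       # merge categories
sugarcane = remove_small_objects(sugarcane, min_size=50)   # remove fragments
np.save("sugarcane_planting_area.npy", sugarcane)
```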
The invention has the beneficial effects that:
the method utilizes the satellite three-linear-array image to obtain the elevation and multi-angle observation information of the sugarcane planting area, combines the characteristics of spectrum, texture and the like, and performs the remote sensing identification of the sugarcane planting area based on the object-oriented method, thereby improving the identification precision, solving the problems that the remote sensing identification of the sugarcane planting area only uses a single-view image shot by a conventional satellite optical camera, available data is limited, and simultaneously, only the characteristics of spectrum, texture, shape and the like of the satellite image are calculated, the available information is limited, and the identification precision is difficult to further improve.
Drawings
Fig. 1 is a schematic block diagram of a method for extracting remote sensing information of a sugarcane planting area according to embodiment 1 of the invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
Example 1
A sugarcane planting area remote sensing information extraction method based on a three-linear array image comprises the following steps:
1. Forward-view image orthorectification based on a rational polynomial model:
open the forward-view image data and the digital elevation model file, read the rational polynomial coefficient (RPC) file of the forward-view image, and orthorectify the forward-view image with the rational polynomial function model to obtain the forward-view corrected image;
2. Rear-view image orthorectification based on a rational polynomial model:
open the rear-view image data and the digital elevation model file, read the RPC file of the rear-view image, and orthorectify the rear-view image with the rational polynomial function model to obtain the rear-view corrected image;
3. Nadir-view image orthorectification based on a rational polynomial model:
open the nadir-view image data and the digital elevation model file, read the RPC file of the nadir-view image, and orthorectify the nadir-view image with the rational polynomial function model to obtain the nadir-view corrected image;
4. Multispectral image orthorectification based on a rational polynomial model:
open the multispectral image data and the digital elevation model file, read the RPC file of the multispectral image, and orthorectify the multispectral image with the rational polynomial function model to obtain the multispectral corrected image;
5. The original forward-view image or original rear-view image is projected onto a plane parallel to the imaging baseline; the geometric relationship is analogous to that between the image-space and object-space coordinate systems. Through the corresponding rotation matrix, the coordinate relationship between points on the original forward-view or rear-view image and points on the forward-view or rear-view corrected image is obtained, forming horizontal epipolar lines;
6. Points are sampled at intervals along a horizontal epipolar line, with no fewer than 20 points selected per image and more points used as the terrain complexity increases; using the coordinate relationship on the plane parallel to the imaging baseline, these points are back-projected onto the original image, and the grey values of the epipolar image are obtained by resampling, forming the left or right epipolar-line image.
7. The left and right epipolar-line images are combined to generate a digital surface model (DSM);
8. The low-resolution multispectral corrected image is fused and enhanced with the high-resolution nadir-view corrected image to generate an image with both higher spatial resolution and multispectral characteristics, i.e. the multispectral fusion image;
9. The forward-view corrected image, the rear-view corrected image and the nadir-view corrected image are resampled, with the resolution value set equal to that of the nadir-view image, yielding a resampled forward-view corrected image, a resampled rear-view corrected image and a resampled nadir-view corrected image;
10. The resampled forward-view corrected image, the resampled rear-view corrected image and the multispectral fusion image are combined as separate bands into a multi-band image, which is then combined with the DSM into one data set, giving the resampled image data set;
the resampled image data set comprises the red, green and blue bands of the multispectral fusion image, the panchromatic band of the resampled forward-view corrected image and the panchromatic band of the resampled rear-view corrected image;
11. The resampled image data set is segmented to obtain the image segmentation object set; the segmentation uses a watershed image segmentation algorithm, and the segmentation scale parameter is chosen according to the image resolution: for imagery with a resolution finer than 1 m the scale parameter can be set above 90, and for imagery with a resolution of about 2 m it can be set above 85;
12. Samples are selected: within the image segmentation object set, sugarcane planting area samples are selected according to field survey or in-office interpretation results, with 20-30 samples per scene; the samples include immature sugarcane, mature sugarcane, harvested sugarcane, woodland, grassland, water body, built-up area, road, bare soil, other dry land and paddy field;
13. Using the samples as training samples, image analysis is performed on the resampled image data set to obtain the raster data of the image classification result; the image analysis is computed from the spectral, texture and shape attribute features of the multispectral fusion image, the forward-view corrected image, the rear-view corrected image and the digital surface model;
the spectral features comprise the mean, maximum, minimum and variance; the texture features comprise the mean, variance, entropy, angular second moment and homogeneity; the shape features comprise the area, aspect ratio and rectangularity.
14. The immature sugarcane, mature sugarcane and harvested sugarcane classes are extracted from the raster data of the image classification result as the final extraction result, yielding the remote sensing information of the sugarcane planting area. If the raster data of the classification result contain fragments, obvious misclassifications or similar defects, category merging and removal of erroneous data are applied before the sugarcane data are extracted.
The above embodiments are only exemplary embodiments of the present invention, and are not intended to limit the present invention, and the scope of the present invention is defined by the claims. Various modifications and equivalents may be made thereto by those skilled in the art within the spirit and scope of the present invention, and such modifications and equivalents should be considered as falling within the scope of the present invention.

Claims (5)

1. A sugarcane planting area remote sensing information extraction method using a three-linear array image is characterized by comprising the following steps:
A. Three-linear-array image preprocessing: the original forward-view image, original rear-view image, original nadir-view image and original multispectral image acquired by the three-linear-array stereo mapping satellite are processed into an image set with consistent resolution, yielding a forward-view corrected image, a rear-view corrected image, a nadir-view corrected image and a multispectral corrected image;
the method comprises the following specific steps:
(1) Forward-view image orthorectification based on a rational polynomial model:
opening the forward-view image data and the digital elevation model file, reading the rational polynomial file of the forward-view image, and orthorectifying the forward-view image with the rational polynomial function model to obtain the forward-view corrected image;
(2) Rear-view image orthorectification based on a rational polynomial model:
opening the rear-view image data and the digital elevation model file, reading the rational polynomial file of the rear-view image, and orthorectifying the rear-view image with the rational polynomial function model to obtain the rear-view corrected image;
(3) Nadir-view image orthorectification based on a rational polynomial model:
opening the nadir-view image data and the digital elevation model file, reading the rational polynomial file of the nadir-view image, and orthorectifying the nadir-view image with the rational polynomial function model to obtain the nadir-view corrected image;
(4) Multispectral image orthorectification based on a rational polynomial model:
opening the multispectral image data and the digital elevation model file, reading the rational polynomial file of the multispectral image, and orthorectifying the multispectral image with the rational polynomial function model to obtain the multispectral corrected image;
B. generating a digital surface model, namely DSM, by using the three-linear array image;
the method comprises the following specific steps:
(1) Generating a left epipolar-line image from the original forward-view image and the forward-view corrected image;
(2) Generating a right epipolar-line image from the original rear-view image and the rear-view corrected image;
(3) Combining the left epipolar-line image and the right epipolar-line image to generate the DSM;
C. Forming a resampled image data set from the forward-view corrected image, the rear-view corrected image, the nadir-view corrected image, the multispectral corrected image and the DSM;
the method comprises the following specific steps:
(1) Fusing and enhancing the low-resolution multispectral corrected image with the high-resolution nadir-view corrected image to generate an image with both higher spatial resolution and multispectral characteristics, i.e. the multispectral fusion image;
(2) Resampling the forward-view corrected image, the rear-view corrected image and the nadir-view corrected image, with the resolution value set equal to that of the nadir-view image, to obtain a resampled forward-view corrected image, a resampled rear-view corrected image and a resampled nadir-view corrected image;
(3) Combining the resampled forward-view corrected image, the resampled rear-view corrected image and the multispectral fusion image as separate bands into a multi-band image, and then combining this multi-band image with the DSM into one data set to obtain the resampled image data set;
D. segmenting the resampled image data set to obtain an image segmentation object set;
the segmentation uses a watershed image segmentation algorithm, and the segmentation scale parameter is chosen according to the image resolution: for imagery with a resolution finer than 1 m the scale parameter can be set above 90, and for imagery with a resolution of about 2 m it can be set above 85;
E. Selecting samples: within the image segmentation object set, sugarcane planting area samples are selected according to field survey or in-office interpretation results, with 20-30 samples selected per scene;
F. Information classification: the imagery is classified by supervised classification with a support vector machine, a neural network or another machine learning algorithm, and the sugarcane classes obtained from the classification result are then extracted as the final extraction result;
the method comprises the following specific steps:
(1) Using the samples as training samples, image analysis is carried out on the resampled image data set to obtain the raster data of the image classification result;
the image analysis is computed from the spectral, texture and shape attribute features of the multispectral fusion image, the forward-view corrected image, the rear-view corrected image and the digital surface model;
the spectral features comprise the mean, maximum, minimum and variance; the texture features comprise the mean, variance, entropy, angular second moment and homogeneity; the shape features comprise the area, aspect ratio and rectangularity;
(2) The immature sugarcane, mature sugarcane and harvested sugarcane classes are extracted from the raster data of the image classification result as the final extraction result, yielding the remote sensing information of the sugarcane planting area.
2. The method for extracting remote sensing information of a sugarcane planting area using a three-linear-array image according to claim 1, wherein the epipolar-line images in step B are generated as follows:
(1) The original forward-view image or the original rear-view image is projected onto a plane parallel to the imaging baseline; the geometric relationship is analogous to that between the image-space and object-space coordinate systems; through the corresponding rotation matrix, the coordinate relationship between points on the original forward-view or rear-view image and points on the forward-view or rear-view corrected image is obtained, forming horizontal epipolar lines;
(2) Points are sampled at intervals along a horizontal epipolar line, with no fewer than 20 points selected per image and more points used as the terrain complexity increases; using the coordinate relationship on the plane parallel to the imaging baseline, these points are back-projected onto the original image, and the grey values of the epipolar image are obtained by resampling, forming the left or right epipolar-line image.
3. The method for extracting remote sensing information of a sugarcane planting area using a three-linear-array image according to claim 1, wherein the resampled image data set in step C comprises the red, green and blue bands of the multispectral fusion image, the panchromatic band of the resampled forward-view corrected image and the panchromatic band of the resampled rear-view corrected image.
4. The method for extracting remote sensing information of a sugarcane planting area using a three-linear-array image according to claim 1, wherein the samples in step E comprise immature sugarcane, mature sugarcane, harvested sugarcane, woodland, grassland, water body, built-up area, road, bare soil, other dry land and paddy field.
5. The method for extracting remote sensing information of a sugarcane planting area using a three-linear-array image according to claim 1, wherein, if the raster data of the image classification result in step F contain fragments, obvious misclassifications or similar defects, category merging and removal of erroneous data are applied to the raster data before the sugarcane data are extracted.
CN201910603652.7A 2019-07-05 2019-07-05 Sugarcane planting area remote sensing information extraction method based on three-linear array image Active CN110310246B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910603652.7A CN110310246B (en) 2019-07-05 2019-07-05 Sugarcane planting area remote sensing information extraction method based on three-linear array image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910603652.7A CN110310246B (en) 2019-07-05 2019-07-05 Sugarcane planting area remote sensing information extraction method based on three-linear array image

Publications (2)

Publication Number Publication Date
CN110310246A CN110310246A (en) 2019-10-08
CN110310246B true CN110310246B (en) 2023-04-11

Family

ID=68079126

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910603652.7A Active CN110310246B (en) 2019-07-05 2019-07-05 Sugarcane planting area remote sensing information extraction method based on three-linear array image

Country Status (1)

Country Link
CN (1) CN110310246B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111047566B (en) * 2019-12-04 2023-07-14 昆明市滇池高原湖泊研究院 Method for carrying out aquatic vegetation annual change statistics by unmanned aerial vehicle and multispectral satellite image
CN111209871B (en) * 2020-01-09 2021-06-25 河南大学 Rape planting land remote sensing automatic identification method based on optical satellite image
CN112330700A (en) * 2020-11-16 2021-02-05 四川航天神坤科技有限公司 Cultivated land plot extraction method of satellite image
CN113358091B (en) * 2021-06-02 2021-11-16 自然资源部国土卫星遥感应用中心 Method for producing digital elevation model DEM (digital elevation model) by using three-linear array three-dimensional satellite image
CN113421273B (en) * 2021-06-30 2022-02-25 中国气象科学研究院 Remote sensing extraction method and device for forest and grass collocation information

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101604018A (en) * 2009-07-24 2009-12-16 中国测绘科学研究院 High-definition remote sensing image data disposal route and system thereof
CN103383775A (en) * 2013-07-02 2013-11-06 中国科学院东北地理与农业生态研究所 Method for evaluating remote-sensing image fusion effect
CN103914808A (en) * 2014-03-14 2014-07-09 国家测绘地理信息局卫星测绘应用中心 Method for splicing ZY3 satellite three-line-scanner image and multispectral image
CN104006802A (en) * 2014-05-06 2014-08-27 国家基础地理信息中心 Information fusion-based earth's surface three-dimensional change detection method and system
CN104239890A (en) * 2014-08-12 2014-12-24 浙江工商大学 Method for automatically extracting coastal land and earth cover information through GF-1 satellite
CN104732577A (en) * 2015-03-10 2015-06-24 山东科技大学 Building texture extraction method based on UAV low-altitude aerial survey system
CN106384332A (en) * 2016-09-09 2017-02-08 中山大学 Method for fusing unmanned aerial vehicle image and multispectral image based on Gram-Schmidt
CN106530326A (en) * 2016-11-04 2017-03-22 中科宇图科技股份有限公司 Change detection method based on image texture features and DSM
CN106895851A (en) * 2016-12-21 2017-06-27 中国资源卫星应用中心 A kind of sensor calibration method that many CCD polyphasers of Optical remote satellite are uniformly processed
CN108871286A (en) * 2018-04-25 2018-11-23 中国科学院遥感与数字地球研究所 The completed region of the city density of population evaluation method and system of space big data collaboration
CN109684929A (en) * 2018-11-23 2019-04-26 中国电建集团成都勘测设计研究院有限公司 Terrestrial plant ECOLOGICAL ENVIRONMENTAL MONITORING method based on multi-sources RS data fusion

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016025848A1 (en) * 2014-08-15 2016-02-18 Monsanto Technology Llc Apparatus and methods for in-field data collection and sampling
CN106295576B (en) * 2016-08-12 2017-12-12 中国水利水电科学研究院 A kind of water source type analytic method based on nature geography characteristic
US10325349B2 (en) * 2017-08-11 2019-06-18 Intermap Technologies Inc. Method and apparatus for enhancing 3D model resolution
CN108256419B (en) * 2017-12-05 2018-11-23 交通运输部规划研究院 A method of port and pier image is extracted using multispectral interpretation

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101604018A (en) * 2009-07-24 2009-12-16 中国测绘科学研究院 High-definition remote sensing image data disposal route and system thereof
CN103383775A (en) * 2013-07-02 2013-11-06 中国科学院东北地理与农业生态研究所 Method for evaluating remote-sensing image fusion effect
CN103914808A (en) * 2014-03-14 2014-07-09 国家测绘地理信息局卫星测绘应用中心 Method for splicing ZY3 satellite three-line-scanner image and multispectral image
CN104006802A (en) * 2014-05-06 2014-08-27 国家基础地理信息中心 Information fusion-based earth's surface three-dimensional change detection method and system
CN104239890A (en) * 2014-08-12 2014-12-24 浙江工商大学 Method for automatically extracting coastal land and earth cover information through GF-1 satellite
CN104732577A (en) * 2015-03-10 2015-06-24 山东科技大学 Building texture extraction method based on UAV low-altitude aerial survey system
CN106384332A (en) * 2016-09-09 2017-02-08 中山大学 Method for fusing unmanned aerial vehicle image and multispectral image based on Gram-Schmidt
CN106530326A (en) * 2016-11-04 2017-03-22 中科宇图科技股份有限公司 Change detection method based on image texture features and DSM
CN106895851A (en) * 2016-12-21 2017-06-27 中国资源卫星应用中心 A kind of sensor calibration method that many CCD polyphasers of Optical remote satellite are uniformly processed
CN108871286A (en) * 2018-04-25 2018-11-23 中国科学院遥感与数字地球研究所 The completed region of the city density of population evaluation method and system of space big data collaboration
CN109684929A (en) * 2018-11-23 2019-04-26 中国电建集团成都勘测设计研究院有限公司 Terrestrial plant ECOLOGICAL ENVIRONMENTAL MONITORING method based on multi-sources RS data fusion

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Multi-dimensional multi-targets detection and interference suppression with DSLA (Double Spiral Line Array); Shanyi We; 2016 IEEE/OES China Ocean Acoustics (COA); 20161231; full text *
Application research on sugarcane disaster monitoring in Guangxi based on UAV low-altitude remote sensing; Sun Ming; Journal of Meteorological Research and Application; 20190331; full text *
Feature analysis and quality evaluation of ZY-3 satellite multispectral imagery; Li Lin; Remote Sensing for Land and Resources; 20141231; full text *
Quality improvement and typical feature extraction for high-resolution stereo mapping satellite imagery; Wang Wei; China Doctoral Dissertations Full-text Database, Basic Sciences; 20190115; full text *

Also Published As

Publication number Publication date
CN110310246A (en) 2019-10-08

Similar Documents

Publication Publication Date Title
CN110310246B (en) Sugarcane planting area remote sensing information extraction method based on three-linear array image
Prošek et al. UAV for mapping shrubland vegetation: Does fusion of spectral and vertical information derived from a single sensor increase the classification accuracy?
Alganci et al. Parcel-level identification of crop types using different classification algorithms and multi-resolution imagery in Southeastern Turkey
Du et al. Mapping wetland plant communities using unmanned aerial vehicle hyperspectral imagery by comparing object/pixel-based classifications combining multiple machine-learning algorithms
CN112285710B (en) Multi-source remote sensing reservoir water storage capacity estimation method and device
CN114926748A (en) Soybean remote sensing identification method combining Sentinel-1/2 microwave and optical multispectral images
Cao et al. Use of unmanned aerial vehicle imagery and a hybrid algorithm combining a watershed algorithm and adaptive threshold segmentation to extract wheat lodging
Wang et al. Fusion of HJ1B and ALOS PALSAR data for land cover classification using machine learning methods
Solanky et al. Pixel-level image fusion techniques in remote sensing: a review
Demir Using UAVs for detection of trees from digital surface models
Lei et al. A novel algorithm of individual tree crowns segmentation considering three-dimensional canopy attributes using UAV oblique photos
Orlíková et al. Land cover classification using sentinel-1 SAR data
Yin et al. Individual tree parameters estimation for chinese fir (cunninghamia lanceolate (lamb.) hook) plantations of south china using UAV Oblique Photography: Possibilities and Challenges
Jaber et al. Object-based approaches for land use-land cover classification using high resolution quick bird satellite imagery (a case study: Kerbela, Iraq)
CN110909821B (en) Method for carrying out high-space-time resolution vegetation index data fusion based on crop reference curve
CN112561981A (en) Photogrammetry point cloud filtering method fusing image information
CN116994029A (en) Fusion classification method and system for multi-source data
CN113902759B (en) Space-spectrum information combined satellite-borne hyperspectral image segmentation and clustering method
CN115841615A (en) Tobacco yield prediction method and device based on multispectral data of unmanned aerial vehicle
CN115294183A (en) Disc-shaped sub-lake water body time sequence extraction method based on multi-source remote sensing data
Liu et al. Semi-automatic extraction and mapping of farmlands based on high-resolution remote sensing images
Liew et al. Integration of tree database derived from satellite imagery and lidar point cloud data
Kete et al. Land use classification based on object and pixel using Landsat 8 OLI in Kendari City, Southeast Sulawesi Province, Indonesia
Ozdarici-Ok et al. Object-based classification of multi-temporal images for agricultural crop mapping in Karacabey Plain, Turkey
Tawade et al. Remote sensing image fusion using machine learning and deep learning: a systematic review

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230828

Address after: 530000 No. 2, Zhongxin Road, Qingxiu District, Nanning City, Guangxi Zhuang Autonomous Region

Patentee after: Guangxi Zhuang Autonomous Region natural resources information center

Address before: Building No. 5, Nanning Road, Guangxi Zhuang Autonomous Region, 530023

Patentee before: Guangxi Zhuang Autonomous Region Basic Geographic Information Center
