CN112990368A - Polygonal structure guided hyperspectral image single sample identification method and system - Google Patents

Polygonal structure guided hyperspectral image single sample identification method and system

Info

Publication number
CN112990368A
CN112990368A (application CN202110450691.5A)
Authority
CN
China
Prior art keywords
image
sample
polygon
scale
hyperspectral image
Prior art date
Legal status
Granted
Application number
CN202110450691.5A
Other languages
Chinese (zh)
Other versions
CN112990368B (en)
Inventor
李树涛 (Li Shutao)
章硕 (Zhang Shuo)
康旭东 (Kang Xudong)
Current Assignee
Hunan University
Original Assignee
Hunan University
Priority date
Filing date
Publication date
Application filed by Hunan University filed Critical Hunan University
Priority to CN202110450691.5A
Publication of CN112990368A
Application granted
Publication of CN112990368B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a polygonal structure guided hyperspectral image single sample identification method and system. The method comprises: reducing the spectral dimensionality of the hyperspectral image; performing multi-scale segmentation on the dimension-reduced image to obtain multi-scale segmentation maps; using the known single sample, together with the finest-scale partition from the multi-scale segmentation and the spectral angular distance, to expand the samples inside the polygon containing the single sample, then searching for similar polygons and performing a second sample expansion inside them, and training a pixel-level classifier recognition model with the expanded samples; recognizing the dimension-reduced image pixel by pixel with the classifier recognition model to obtain an initial recognition result; and optimizing the initial recognition result with the multi-scale segmentation maps and fusing the optimized results to obtain the final recognition result. The method realizes polygonal-structure-guided single-sample recognition of hyperspectral images of farmland or urban scenes, alleviates the problem of insufficient sample numbers, improves recognition accuracy, and improves the visual quality of the recognition result.

Description

Polygonal structure guided hyperspectral image single sample identification method and system
Technical Field
The invention relates to a hyperspectral image identification method, in particular to a polygonal structure guided hyperspectral image single sample identification method and a system.
Background
The hyperspectral image (HSI) contains hundreds of continuous wave bands, has rich spectrum information, can provide spectrum fingerprints of different ground objects, and brings unprecedented opportunities for accurate classification and identification of the ground objects. Therefore, the hyperspectral image identification is always a hot research direction of hyperspectral image processing, and the technology is widely applied to land coverage investigation, environment monitoring, mineral mapping and the like.
Hyperspectral image recognition aims to assign a category label to each pixel of an image by means of pattern recognition, so as to facilitate subsequent image analysis. However, the performance of many ground-object recognition methods is closely tied to the number of training samples, and in practical applications obtaining labeled samples is time-consuming, labor-intensive and difficult. To address the shortage of labeled samples, semi-supervised techniques for ground-object recognition have been developed: semi-supervised learning improves a supervised learning task with readily available unlabeled samples and can further improve recognition accuracy under small-sample conditions.
Beyond the sample-number problem, although state-of-the-art algorithms can reach very high objective evaluation scores, the visual results are often unsatisfactory. For example, methods that do not fully exploit the spatial information of the image may produce noise-like misclassified pixels, and even when the interior of each class in the recognition result is homogeneous, the boundaries between classes still deviate from the true ground-object boundaries. These phenomena are particularly evident in man-made scenes such as farmland and buildings.
Disclosure of Invention
The technical problems to be solved by the invention are as follows: aiming at the problems in the prior art, the invention provides a polygonal-structure-guided hyperspectral image single sample identification method and a polygonal-structure-guided hyperspectral image single sample identification system.
In order to solve the technical problems, the invention adopts the technical scheme that:
a polygonal structure guided hyperspectral image single sample identification method comprises the following steps:
1) performing spectral dimension reduction on the input hyperspectral image to obtain a dimension reduction image;
2) carrying out multi-scale segmentation on the dimension-reduced image to obtain a multi-scale segmentation image;
3) using the known single sample, together with the finest-scale partition obtained by multi-scale segmentation and the spectral angular distance, expanding the samples inside the polygon containing the single sample; then searching for similar polygons and performing a second sample expansion inside them; and training a pixel-level classifier recognition model with the expanded samples;
4) identifying the dimensionality reduction image by using the trained classifier identification model to obtain an initial pixel-by-pixel identification result;
5) optimizing an initial recognition result by combining a multi-scale segmentation graph to obtain a multi-scale recognition result;
6) fusing the multi-scale recognition results to obtain the final recognition result.
Optionally, performing spectral dimensionality reduction on the input hyperspectral image in step 1) refers to applying principal component analysis to the input hyperspectral image H ∈ R^(M×N×B) to obtain the dimension-reduced image S ∈ R^(M×N×L), where M×N is the spatial size of the image, B is the number of image bands, and L is the number of bands after dimensionality reduction.
Optionally, before the multi-scale segmentation of the dimension-reduced image in step 2), the method further comprises selecting the first p bands of the dimension-reduced image to form the image to be segmented.
Optionally, the method further includes, before performing multi-scale segmentation on the dimension-reduced image in step 2), performing a preprocessing operation on the image to be segmented, where the preprocessing operation includes first performing a histogram equalization operation to enhance contrast, and then smoothing texture information in the image to highlight a parcel boundary.
Optionally, when performing multi-scale segmentation in step 2), the segmentation step for each scale includes:
2.1) detecting line segments in the dimension-reduced image by region growing on the image gradient, where the number of detected line segments is the preset number of line segments corresponding to the current scale;
2.2) reorienting the detected line segment by a minimum energy function based on the following formula;
U(x)=(1-λ)D(x)+λV(x)
in the above formula, U(x) is the energy function to be minimized, D(x) is the data term, V(x) is the line-segment pairing term, and λ ∈ [0,1] is a weight; the data term D(x) penalizes large angular deviations of line segments from their original orientation, while the pairing term V(x) drives pairs of spatially close line segments that are nearly parallel or nearly orthogonal to become exactly parallel or orthogonal; x = (x_1, …, x_n), with x_i ∈ [-θ_max, θ_max], is the perturbation applied to the n line segments, and θ_max is the maximum angular offset of a line segment; the data term D(x) and the pairing term V(x) are given respectively by:
(The expressions for the data term D(x) and the pairing term V(x) are given as formula images in the original publication and are not reproduced here.)
in the above formula, n is the number of line segments, x_i and x_j are the i-th and j-th perturbation parameters, and θ_max is the maximum offset of a line segment; μ_ij encodes the relation between segments i and j: μ_ij = 1 when segments i and j are very close in spatial position or |θ_ij| < 2θ_max, and μ_ij = 0 otherwise; θ_ij is the amount by which the relative angle between segments i and j deviates from a straight line or a right angle;
2.3) extending the line segments gradually within the framework of a dynamic planar graph G_t = (V_t, E_t) that partitions the image domain, where V_t and E_t are the vertex set and edge set respectively; whenever line segments intersect, new vertices and edges are inserted into the graph, and when all line segments stop extending, the final polygon segmentation result is obtained; the graph remains planar throughout this process.
Optionally, step 3) comprises: first expanding samples inside the polygon: finding the polygon containing the single sample, computing the spectral angular distance SAM between every pixel in that polygon and the single sample, and taking the pixels whose distances fall within the smallest first preset proportion as new expanded sample points; then computing the average spectrum of every other polygon and its spectral distance to the single sample, and taking the polygons whose distances fall within the smallest second preset proportion as similar polygons; then searching for samples inside the similar polygons, which consists of computing the spectral angular distance SAM between the single sample and every pixel in the similar polygons and, whenever the SAM value is below the threshold θ_1, assigning the two pixels s_j and s_i used in that SAM computation the same class label and adding them to the training sample set; and training a pixel-level classifier recognition model with the resulting training sample set as input.
Optionally, step 5) comprises: mapping the initial recognition result onto each multi-scale segmentation map, and counting, at each scale, the number and proportion of each class label inside every polygon; when the proportion of the most frequent class reaches the set threshold θ_2, all pixels in that polygon are relabeled to this class, yielding a multi-scale optimization result.
Optionally, step 6) comprises: for the multi-scale optimization results, counting for each pixel P_ij (the i-th pixel in the optimization result at scale j) the class it is assigned across scales, taking the ground-object class label that occurs most often as the final label of that pixel, and taking the result of applying this operation to the whole image as the final recognition result R*.
In addition, the invention also provides a polygonal structure guided hyperspectral image single sample identification system which comprises a processor and a memory which are connected with each other, wherein the processor is programmed or configured to execute the steps of the polygonal structure guided hyperspectral image single sample identification method.
Furthermore, the present invention also provides a computer readable storage medium having stored therein a computer program programmed or configured to execute the aforementioned polygon structure guided hyperspectral image single sample identification method.
Compared with the prior art, the invention mainly has the following technical effects:
1. the hyperspectral image ground object identification method can be suitable for identification tasks with few labels, and can remarkably improve identification accuracy by means of sample expansion by means of spectral information of limited labels and spatial information of images; the experimental result shows that the method can obtain better ground object identification effect;
2. the multi-scale optimization method provided by the invention can eliminate noise-like misclassified samples in the recognition result and alleviate boundary blurring; experimental results show that the optimization improves both the ground-object recognition accuracy and the visual effect.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the application and, together with the description, serve to explain the application and are not intended to limit the application.
FIG. 1 is a schematic diagram of a basic flow of a method according to an embodiment of the present invention.
Fig. 2 is a diagram of a multi-scale segmentation result in an embodiment of the present invention, where (a) is an obtained image to be segmented, and (b) is a segmentation result at different scales.
Fig. 3 shows the result of hyperspectral image recognition by the method of the embodiment of the invention, in which (a) is the gray scale display of a color image corresponding to a visible light band of a hyperspectral image, (b) is a ground real label, (c) to (g) are the results obtained by other existing hyperspectral recognition methods, and (h) is the result obtained by the method of the embodiment of the invention.
FIG. 4 is a table comparing the identification results of the method of the present invention with those of other prior art methods.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be described and explained in detail below with reference to flowcharts and embodiments, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, the method for identifying a single sample of a hyperspectral image guided by a polygonal structure in this embodiment includes:
1) performing spectral dimension reduction on the input hyperspectral image to obtain a dimension reduction image;
2) carrying out multi-scale segmentation on the dimension-reduced image to obtain a multi-scale segmentation image;
3) using the known single sample, together with the finest-scale partition obtained by multi-scale segmentation and the spectral angular distance, expanding the samples inside the polygon containing the single sample; then searching for similar polygons and performing a second sample expansion inside them; and training a pixel-level classifier recognition model with the expanded samples;
4) identifying the dimensionality reduction image by using the trained classifier identification model to obtain an initial pixel-by-pixel identification result;
5) optimizing an initial recognition result by combining a multi-scale segmentation graph to obtain a multi-scale recognition result;
6) fusing the multi-scale recognition results to obtain the final recognition result.
In this embodiment, performing spectral dimensionality reduction on the input hyperspectral image in step 1) means applying principal component analysis (PCA) to the input hyperspectral image H ∈ R^(M×N×B) to obtain the dimension-reduced image S ∈ R^(M×N×L), where M×N is the spatial size of the image, B is the number of image bands, and L is the number of bands after dimensionality reduction. In this embodiment, the number of bands after dimensionality reduction L is set to 20. PCA is a classical feature extraction and data representation technique: high-dimensional vectors are mapped into a low-dimensional space by linear projection, and the new features are linear combinations of the original features, so PCA retains most of the information in the data while reducing its dimensionality.
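The following is a minimal Python sketch of this dimensionality-reduction step, assuming NumPy and scikit-learn are available; the function name and array layout are illustrative, with L = 20 taken from this embodiment.

```python
# Minimal sketch of the PCA-based spectral dimensionality reduction described
# above. The function name and array layout are illustrative; L = 20 follows
# this embodiment.
import numpy as np
from sklearn.decomposition import PCA

def reduce_spectral_dim(hsi: np.ndarray, n_components: int = 20) -> np.ndarray:
    """Reduce an M x N x B hyperspectral cube to an M x N x L cube with PCA."""
    m, n, b = hsi.shape
    flat = hsi.reshape(-1, b)                     # one row per pixel spectrum
    reduced = PCA(n_components=n_components).fit_transform(flat)   # (M*N, L)
    return reduced.reshape(m, n, n_components)    # back to image layout

# Example usage (H is the M x N x B hyperspectral cube):
# S = reduce_spectral_dim(H, n_components=20)
```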
In this embodiment, before the multi-scale segmentation of the dimension-reduced image in step 2), the method further comprises selecting the first p bands of the dimension-reduced image to form the image to be segmented. In this embodiment, p is set to 3. The spectral dimensionality reduction of step 1) aims to reduce redundant information and computation cost, and the first p bands are selected to form the image to be segmented because these p bands effectively contain the spatial information of the hyperspectral image. At a fixed scale, only one segmentation map needs to be obtained from a hyperspectral image; there is no need to process every band.
In this embodiment, before performing multi-scale segmentation on the dimension-reduced image in step 2), a step of performing a preprocessing operation on the image to be segmented is further included, and the preprocessing operation includes first performing a histogram equalization operation to enhance contrast and then smoothing texture information in the image to highlight a parcel boundary.
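A possible Python sketch of this preprocessing step is given below. The patent does not name a specific smoothing filter, so an OpenCV bilateral filter and the per-band 8-bit normalization are assumptions made purely for illustration.

```python
# Sketch of the preprocessing described above: histogram equalization for
# contrast enhancement, then texture smoothing to highlight parcel boundaries.
# The bilateral filter and the per-band normalization are assumptions.
import cv2
import numpy as np

def preprocess_for_segmentation(bands: np.ndarray) -> np.ndarray:
    """bands: M x N x p image formed from the first p reduced bands."""
    out = np.empty(bands.shape, dtype=np.uint8)
    for k in range(bands.shape[2]):
        chan = cv2.normalize(bands[:, :, k], None, 0, 255, cv2.NORM_MINMAX)
        chan = cv2.equalizeHist(chan.astype(np.uint8))       # enhance contrast
        out[:, :, k] = cv2.bilateralFilter(chan, 9, 75, 75)  # smooth texture, keep edges
    return out
```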
In this embodiment, the principle of performing multi-scale segmentation in step 2) is to segment the image by using a Kinetic-based method, extend the line segments detected in the image until they intersect, and merge the intersection points to obtain a segmentation result. When multi-scale segmentation is carried out in the step 2), the segmentation step aiming at each scale comprises the following steps:
2.1) detecting line segments in the dimension-reduced image by region growing on the image gradient, where the number of detected line segments is the preset number of line segments corresponding to the current scale;
2.2) reorienting the detected line segment by a minimum energy function based on the following formula (for optimizing the detected line segment to more accurately conform to the ground object boundary);
U(x)=(1-λ)D(x)+λV(x)
in the above formula, U(x) is the energy function to be minimized, D(x) is the data term, V(x) is the line-segment pairing term (pairing probability), and λ ∈ [0,1] is a weight (default value 0.8 in this embodiment); the data term D(x) penalizes large angular deviations of line segments from their original orientation, while the pairing term V(x) drives pairs of spatially close line segments that are nearly parallel or nearly orthogonal to become exactly parallel or orthogonal; x = (x_1, …, x_n), with x_i ∈ [-θ_max, θ_max], is the perturbation applied to the n line segments, and θ_max is the maximum angular offset of a line segment; the data term D(x) and the pairing term V(x) are given respectively by:
(The expressions for the data term D(x) and the pairing term V(x) are given as formula images in the original publication and are not reproduced here.)
in the above formula, n is the number of line segments, x_i and x_j are the i-th and j-th perturbation parameters, and θ_max is the maximum offset of a line segment; μ_ij encodes the relation between segments i and j: μ_ij = 1 when segments i and j are very close in spatial position or |θ_ij| < 2θ_max, and μ_ij = 0 otherwise; θ_ij is the amount by which the relative angle between segments i and j deviates from a straight line or a right angle;
2.3) extending the line segments gradually within the framework of a dynamic planar graph G_t = (V_t, E_t) that partitions the image domain, where V_t and E_t are the vertex set and edge set respectively; whenever line segments intersect, new vertices and edges are inserted into the graph, and when all line segments stop extending, the final polygon segmentation result is obtained; the graph remains planar throughout this process.
In this embodiment, when performing the multi-scale segmentation in step 2), multi-scale means that different numbers of line segments to be detected are preset, so that the segmented images contain different numbers of polygons. In this embodiment, the number of scales M is set to 8 by default.
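The sketch below illustrates only the multi-scale idea of keeping a different preset number of line segments per scale. The detector (OpenCV's contrib FastLineDetector) and the rule of keeping the longest segments are assumptions for illustration; the gradient-based region growing and the kinetic extension into a planar polygon partition are not reproduced here.

```python
# Illustrative sketch: each scale keeps a different preset number of detected
# line segments. Requires opencv-contrib-python; the detector choice and the
# "keep the longest segments" rule are assumptions for illustration only.
import cv2
import numpy as np

def segments_per_scale(gray_u8: np.ndarray, counts=(50, 100, 200, 400)):
    """gray_u8: 8-bit grayscale view of the image to be segmented."""
    fld = cv2.ximgproc.createFastLineDetector()
    lines = fld.detect(gray_u8)                   # (N, 1, 4): x1, y1, x2, y2
    lines = lines.reshape(-1, 4) if lines is not None else np.empty((0, 4))
    lengths = np.hypot(lines[:, 2] - lines[:, 0], lines[:, 3] - lines[:, 1])
    order = np.argsort(-lengths)                  # longest segments first
    return {k: lines[order[:k]] for k in counts}  # one segment set per scale
```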
In this embodiment, step 3) comprises: first expanding samples inside the polygon: finding the polygon containing the single sample, computing the spectral angular distance SAM between every pixel in that polygon and the single sample, and taking the pixels whose distances fall within the smallest first preset proportion as new expanded sample points; then computing the average spectrum of every other polygon and its spectral distance to the single sample, taking the polygons whose distances fall within the smallest second preset proportion as similar polygons, and searching for samples inside the similar polygons, which consists of computing the spectral angular distance SAM between the single sample and every pixel in the similar polygons and, whenever the SAM value is below the threshold θ_1, assigning the two pixels s_j and s_i used in that SAM computation the same class label and adding them to the training sample set; and training a pixel-level classifier recognition model with the resulting training sample set as input. When expanding the training samples, step 3) searches for similar samples at the global scale using the spectral information of the known single sample and the spatial information of the parcels themselves. Specifically, an initial single sample is randomly selected according to the ground-truth labels obtained by field survey, and its class label is recorded. Sample expansion is then carried out, inside polygons and between polygons of the finest-scale segmentation result, using the spectral information of the known labeled sample. The expansion is based on the spectral angular distance SAM between the labeled sample and the unlabeled pixels of the original hyperspectral image H; SAM is a commonly used spectral similarity measure for hyperspectral images, computed between two pixels s_j and s_i to determine their similarity.
The SAM calculation between two pixels is as follows:
θ(s_i, s_j) = arccos[ ( Σ_{b=1}^{B} s_{i,b} · s_{j,b} ) / ( (Σ_{b=1}^{B} s_{i,b}^2)^{1/2} · (Σ_{b=1}^{B} s_{j,b}^2)^{1/2} ) ]
In the above formula, θ(s_i, s_j) is the spectral angular distance between pixels s_i and s_j, and B is the number of image bands. The spectral angular distance SAM ranges from 0 to 1.57. As a specific implementation, in this embodiment samples are first expanded inside the polygon: the polygon containing the single sample is found, the spectral angular distance SAM between every pixel in that polygon and the single sample is computed, and the pixels whose distances fall within the smallest 10% are taken as new expanded sample points. Then the average spectrum of every other polygon and its spectral distance to the single sample are computed, and the polygons whose distances fall within the smallest 30% are taken as similar polygons. Samples are then searched inside the similar polygons: the spectral distance between the single sample and every pixel in the similar polygons is computed, and when the SAM value is below the threshold θ_1, s_j and s_i are given the same class label and added to the training sample set. These steps enrich the training samples and alleviate the shortage of labeled samples. In this embodiment, the spectral threshold θ_1 is set to 0.01.
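A Python sketch of this two-stage expansion is given below, assuming the finest-scale partition is available as an integer polygon-id map; the 10%, 30% and θ_1 = 0.01 values follow this embodiment, while all function and variable names are illustrative.

```python
# Sketch of the SAM-based two-stage sample expansion described above, assuming
# the finest-scale partition is given as an integer polygon-id map "polys".
# The 10%, 30% and theta_1 = 0.01 values follow this embodiment; names are
# illustrative.
import numpy as np

def sam(a: np.ndarray, b: np.ndarray) -> float:
    """Spectral angular distance (radians, 0..1.57) between two spectra."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def expand_single_sample(H, polys, seed_rc, theta1=0.01):
    r0, c0 = seed_rc
    seed = H[r0, c0]
    # 1) expansion inside the polygon containing the single sample
    rows, cols = np.where(polys == polys[r0, c0])
    d_in = np.array([sam(seed, H[r, c]) for r, c in zip(rows, cols)])
    keep = d_in <= np.quantile(d_in, 0.10)            # closest 10% of pixels
    samples = list(zip(rows[keep], cols[keep]))       # all share the seed label
    # 2) second expansion inside similar polygons
    ids = [p for p in np.unique(polys) if p != polys[r0, c0]]
    d_poly = {p: sam(seed, H[polys == p].mean(axis=0)) for p in ids}
    cut = np.quantile(list(d_poly.values()), 0.30)    # closest 30% of polygons
    for p in (q for q in ids if d_poly[q] <= cut):
        rr, cc = np.where(polys == p)
        for r, c in zip(rr, cc):
            if sam(seed, H[r, c]) < theta1:           # threshold theta_1
                samples.append((r, c))
    return samples                                    # coordinates of expanded set
```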
In this embodiment, the classifier recognition model is a support vector machine (SVM) classifier; other types of machine learning classifiers may also be used as needed. When the pixel-level classifier is trained with the expanded samples, the expanded sample set is used as input to train the pixel-level recognition model, and a support vector machine (SVM) is selected as the classifier. The SVM is a widely used machine learning model for hyperspectral ground-object recognition; its basic idea is to map the input features into a high-dimensional feature space through a chosen kernel function and to solve for the separating hyperplane that correctly divides the training data with the largest geometric margin. The SVM parameters are obtained by multi-fold cross validation with a Gaussian kernel, the Gaussian kernel width ranging over 2^-5 to 2^5 and the penalty factor ranging over 10^-2 to 10^4.
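A minimal scikit-learn sketch of this training step follows. The grid mirrors the stated ranges (kernel width 2^-5 to 2^5, penalty factor 10^-2 to 10^4); mapping the Gaussian kernel width onto scikit-learn's gamma parameter is an assumption made for illustration.

```python
# Minimal sketch of the cross-validated pixel-level SVM training described
# above. Mapping the Gaussian kernel width onto sklearn's gamma is assumed.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def train_pixel_svm(X: np.ndarray, y: np.ndarray) -> GridSearchCV:
    """X: (num_samples, L) spectra of the expanded samples, y: class labels."""
    grid = {
        "gamma": 2.0 ** np.arange(-5, 6),   # Gaussian kernel width candidates
        "C": 10.0 ** np.arange(-2, 5),      # penalty factor candidates
    }
    search = GridSearchCV(SVC(kernel="rbf"), grid, cv=5)
    search.fit(X, y)
    return search   # search.predict(S.reshape(-1, L)) gives the step-4 result
```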
In this embodiment, step 4) identifies the dimension-reduced image with the trained classifier recognition model to obtain an initial pixel-by-pixel recognition result. Specifically, the dimension-reduced image S ∈ R^(M×N×L) is used as input, and the trained SVM classifier recognizes S: during recognition the SVM maps the spectral feature of each pixel into the previously learned high-dimensional feature space and decides the ground-object class it represents according to the distribution of the feature in that space.
In this embodiment, step 5) comprises: mapping the initial recognition result onto each multi-scale segmentation map, and counting, at each scale, the number and proportion of each class label inside every polygon; when the proportion of the most frequent class reaches the set threshold θ_2, all pixels in that polygon are relabeled to this class, yielding a multi-scale optimization result. Because the recognition step uses only spectral information as its feature, the initial recognition result contains noise-like misclassified samples and therefore needs further optimization. In this embodiment, the threshold θ_2 is set to 0.8.
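The polygon-wise relabeling at a single scale can be sketched as follows, assuming pred is the initial pixel-by-pixel result and seg is that scale's polygon-id map; the helper and its names are illustrative, with θ_2 = 0.8 as above.

```python
# Sketch of the polygon-wise relabeling: within every polygon, if the most
# frequent class covers at least theta_2 of the pixels, the whole polygon is
# relabeled to that class. Names are illustrative.
import numpy as np

def refine_with_polygons(pred: np.ndarray, seg: np.ndarray, theta2: float = 0.8):
    refined = pred.copy()
    for p in np.unique(seg):
        mask = seg == p
        labels, counts = np.unique(pred[mask], return_counts=True)
        top = counts.argmax()
        if counts[top] / counts.sum() >= theta2:   # dominant class is confident
            refined[mask] = labels[top]            # relabel the whole polygon
    return refined
```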
In this embodiment, step 6) comprises: for the multi-scale optimization results, counting for each pixel P_ij (the i-th pixel in the optimization result at scale j) the class it is assigned across scales, taking the ground-object class label that occurs most often as the final label of that pixel, and taking the result of applying this operation to the whole image as the final recognition result R*, which can be expressed as:
R*_i = argmax_{α ∈ {1, …, c}} Σ_{j=1}^{M} δ(P_{ij} = α), where δ(·) equals 1 when its argument holds and 0 otherwise
in the above formula, R = (R_1, R_2, …, R_M) denotes the multi-scale optimization results, α denotes a ground-object class with α ∈ {1, 2, …, c}, and c is the total number of ground-object classes.
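A straightforward Python sketch of this per-pixel majority vote across the M refined results is given below; the stacking layout and names are illustrative.

```python
# Sketch of the multi-scale fusion step: for each pixel, the class occurring
# most often across the M refined results becomes the final label.
import numpy as np

def fuse_scales(results) -> np.ndarray:
    """results: list of M refined label maps R_1..R_M of equal shape."""
    stack = np.stack(results, axis=0)              # (M, height, width)
    _, h, w = stack.shape
    fused = np.empty((h, w), dtype=stack.dtype)
    for i in range(h):
        for j in range(w):
            vals, counts = np.unique(stack[:, i, j], return_counts=True)
            fused[i, j] = vals[counts.argmax()]    # most frequent class wins
    return fused
```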
To verify the method of this embodiment, it is tested on the public Salinas data set. The Salinas data set was acquired by the AVIRIS hyperspectral sensor over the Salinas Valley, California, USA; its spatial size is 512 × 217 pixels with a spatial resolution of 3.7 m/pixel, and after removing the noise-affected bands and the water-absorption bands, the remaining 204 bands are used for the subsequent recognition task. The data contain 16 ground-object classes. Three common objective evaluation indexes, OA, AA and the Kappa coefficient, are adopted to evaluate the performance of different algorithms; the compared algorithms include PCA-EPF, MSTV, GTR, RPNET and PKCRCAWG. The number of initial training samples for each ground-object class is 1, and the resulting recognition table is shown in FIG. 4. As can be seen from FIG. 4, the method of this embodiment achieves the highest values for all three indexes (OA, AA and Kappa), i.e., it obtains the best recognition accuracy, an improvement of 2 to 10 percentage points over the other methods. In addition, FIG. 3 shows the recognition result of the method of this embodiment, from which it can be seen that the samples are effectively expanded, the misclassification of samples is eliminated to a certain extent by the multi-scale optimization, the farmland boundaries are refined, and the recognition accuracy and visual effect are further improved.
In addition, the present embodiment also provides a polygonal structure guided hyperspectral image single sample identification system, which includes a processor and a memory connected to each other, where the processor is programmed or configured to execute the steps of the polygonal structure guided hyperspectral image single sample identification method.
Furthermore, the present embodiment also provides a computer-readable storage medium, in which a computer program programmed or configured to execute the aforementioned polygon structure guided hyperspectral image single sample identification method is stored.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-readable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein. The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks. These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above embodiments, and all technical solutions belonging to the idea of the present invention belong to the protection scope of the present invention. It should be noted that modifications and embellishments within the scope of the invention may occur to those skilled in the art without departing from the principle of the invention, and are considered to be within the scope of the invention.

Claims (10)

1. A hyperspectral image single sample identification method guided by a polygonal structure is characterized by comprising the following steps:
1) performing spectral dimension reduction on the input hyperspectral image to obtain a dimension reduction image;
2) carrying out multi-scale segmentation on the dimension-reduced image to obtain a multi-scale segmentation image;
3) using the known single sample, together with the finest-scale partition obtained by multi-scale segmentation and the spectral angular distance, expanding the samples inside the polygon containing the single sample; then searching for similar polygons and performing a second sample expansion inside them; and training a pixel-level classifier recognition model with the expanded samples;
4) identifying the dimensionality reduction image by using the trained classifier identification model to obtain an initial pixel-by-pixel identification result;
5) optimizing an initial recognition result by combining a multi-scale segmentation graph to obtain a multi-scale recognition result;
6) fusing the multi-scale recognition results to obtain the final recognition result.
2. The polygon-structure-guided hyperspectral image single-sample identification method according to claim 1, wherein performing spectral dimensionality reduction on the input hyperspectral image in step 1) refers to applying principal component analysis to the input hyperspectral image H ∈ R^(M×N×B) to obtain the dimension-reduced image S ∈ R^(M×N×L), where M×N is the spatial size of the image, B is the number of image bands, and L is the number of bands after dimensionality reduction.
3. The polygonal structure guided hyperspectral image single-sample identification method according to claim 1, wherein before the multi-scale segmentation of the dimension-reduced image in step 2), the method further comprises selecting the first p bands of the dimension-reduced image to form the image to be segmented.
4. The polygon-structure-guided hyperspectral image single-sample identification method according to claim 3, wherein the method further comprises a step of performing a preprocessing operation on the image to be segmented before performing the multi-scale segmentation on the dimension-reduced image in the step 2), and the preprocessing operation comprises firstly performing a histogram equalization operation to enhance contrast and then smoothing texture information in the image to highlight the boundary of the land parcel.
5. The method for identifying the single sample of the hyperspectral image guided by the polygonal structure according to claim 1, wherein when performing the multi-scale segmentation in step 2), the segmentation step for each scale comprises:
2.1) detecting line segments in the dimension-reduced image by region growing on the image gradient, where the number of detected line segments is the preset number of line segments corresponding to the current scale;
2.2) reorienting the detected line segment by a minimum energy function based on the following formula;
U(x)=(1-λ)D(x)+λV(x)
in the above formula, U(x) is the energy function to be minimized, D(x) is the data term, V(x) is the line-segment pairing term, and λ ∈ [0,1] is a weight; the data term D(x) penalizes large angular deviations of line segments from their original orientation, while the pairing term V(x) drives pairs of spatially close line segments that are nearly parallel or nearly orthogonal to become exactly parallel or orthogonal; x = (x_1, …, x_n), with x_i ∈ [-θ_max, θ_max], is the perturbation applied to the n line segments, and θ_max is the maximum angular offset of a line segment; the data term D(x) and the pairing term V(x) are given respectively by:
(The expressions for the data term D(x) and the pairing term V(x) are given as formula images in the original publication and are not reproduced here.)
in the above formula, n is the number of line segments, x_i and x_j are the i-th and j-th perturbation parameters, and θ_max is the maximum offset of a line segment; μ_ij encodes the relation between segments i and j: μ_ij = 1 when segments i and j are very close in spatial position or |θ_ij| < 2θ_max, and μ_ij = 0 otherwise; θ_ij is the amount by which the relative angle between segments i and j deviates from a straight line or a right angle;
2.3) extending the line segments gradually within the framework of a dynamic planar graph G_t = (V_t, E_t) that partitions the image domain, where V_t and E_t are the vertex set and edge set respectively; whenever line segments intersect, new vertices and edges are inserted into the graph, and when all line segments stop extending, the final polygon segmentation result is obtained; the graph remains planar throughout this process.
6. The polygonal structure guided hyperspectral image single-sample identification method according to claim 1, wherein step 3) comprises: first expanding samples inside the polygon: finding the polygon containing the single sample, computing the spectral angular distance SAM between every pixel in that polygon and the single sample, and taking the pixels whose distances fall within the smallest first preset proportion as new expanded sample points; then computing the average spectrum of every other polygon and its spectral distance to the single sample, and taking the polygons whose distances fall within the smallest second preset proportion as similar polygons; then searching for samples inside the similar polygons, which consists of computing the spectral angular distance SAM between the single sample and every pixel in the similar polygons and, whenever the SAM value is below the threshold θ_1, assigning the two pixels s_j and s_i used in that SAM computation the same class label and adding them to the training sample set; and training a pixel-level classifier recognition model with the resulting training sample set as input.
7. The polygonal structure guided hyperspectral image single-sample identification method according to claim 1, wherein step 5) comprises: mapping the initial recognition result onto each multi-scale segmentation map, and counting, at each scale, the number and proportion of each class label inside every polygon; when the proportion of the most frequent class reaches the set threshold θ_2, all pixels in that polygon are relabeled to this class, yielding a multi-scale optimization result.
8. The polygonal structure guided hyperspectral image single-sample identification method according to claim 1, wherein step 6) comprises: for the multi-scale optimization results, counting for each pixel P_ij (the i-th pixel in the optimization result at scale j) the class it is assigned across scales, taking the ground-object class label that occurs most often as the final label of that pixel, and taking the result of applying this operation to the whole image as the final recognition result R*.
9. A polygon structure guided hyperspectral image single sample identification system comprising a processor and a memory connected to each other, characterized in that the processor is programmed or configured to perform the steps of the polygon structure guided hyperspectral image single sample identification method according to any of claims 1 to 8.
10. A computer-readable storage medium, wherein a computer program is stored in the computer-readable storage medium, the computer program being programmed or configured to perform a polygon structure guided hyperspectral image single sample identification method according to any of claims 1 to 8.
CN202110450691.5A 2021-04-26 2021-04-26 Polygonal structure guided hyperspectral image single sample identification method and system Active CN112990368B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110450691.5A CN112990368B (en) 2021-04-26 2021-04-26 Polygonal structure guided hyperspectral image single sample identification method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110450691.5A CN112990368B (en) 2021-04-26 2021-04-26 Polygonal structure guided hyperspectral image single sample identification method and system

Publications (2)

Publication Number Publication Date
CN112990368A true CN112990368A (en) 2021-06-18
CN112990368B CN112990368B (en) 2021-07-30

Family

ID=76340179

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110450691.5A Active CN112990368B (en) 2021-04-26 2021-04-26 Polygonal structure guided hyperspectral image single sample identification method and system

Country Status (1)

Country Link
CN (1) CN112990368B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113902717A (en) * 2021-10-13 2022-01-07 自然资源部国土卫星遥感应用中心 Satellite-borne hyperspectral farmland bare soil target identification method based on spectrum library

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9466122B1 (en) * 2014-08-25 2016-10-11 Raytheon Company Independent covariance estimation and decontamination
CN109740631A (en) * 2018-12-07 2019-05-10 中国科学院东北地理与农业生态研究所 Object-based OBIA-SVM-CNN Remote Image Classification
EP3614308A1 (en) * 2018-08-24 2020-02-26 Ordnance Survey Limited Joint deep learning for land cover and land use classification
CN111222539A (en) * 2019-11-22 2020-06-02 国际竹藤中心 Method for optimizing and expanding supervision classification samples based on multi-source multi-temporal remote sensing image
CN111310666A (en) * 2020-02-18 2020-06-19 浙江工业大学 High-resolution image ground feature identification and segmentation method based on texture features
CN111563544A (en) * 2020-04-27 2020-08-21 中国科学院国家空间科学中心 Multi-scale super-pixel segmentation maximum signal-to-noise ratio hyperspectral data dimension reduction method
CN111951284A (en) * 2020-08-12 2020-11-17 湖南神帆科技有限公司 Optical remote sensing satellite image refined cloud detection method based on deep learning

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9466122B1 (en) * 2014-08-25 2016-10-11 Raytheon Company Independent covariance estimation and decontamination
EP3614308A1 (en) * 2018-08-24 2020-02-26 Ordnance Survey Limited Joint deep learning for land cover and land use classification
CN109740631A (en) * 2018-12-07 2019-05-10 中国科学院东北地理与农业生态研究所 Object-based OBIA-SVM-CNN Remote Image Classification
CN111222539A (en) * 2019-11-22 2020-06-02 国际竹藤中心 Method for optimizing and expanding supervision classification samples based on multi-source multi-temporal remote sensing image
CN111310666A (en) * 2020-02-18 2020-06-19 浙江工业大学 High-resolution image ground feature identification and segmentation method based on texture features
CN111563544A (en) * 2020-04-27 2020-08-21 中国科学院国家空间科学中心 Multi-scale super-pixel segmentation maximum signal-to-noise ratio hyperspectral data dimension reduction method
CN111951284A (en) * 2020-08-12 2020-11-17 湖南神帆科技有限公司 Optical remote sensing satellite image refined cloud detection method based on deep learning

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
PHUONG D. DAO等: "Improving hyperspectral image segmentation by applying inverse noise weighting and outlier removal for optimal scale selection", 《ISPRS JOURNAL OF PHOTOGRAMMETRY AND REMOTE SENSING》 *
QIAOBO HAO等: "Multilabel Sample Augmentation-Based Hyperspectral Image Classification", 《IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING》 *
付巍 (Fu Wei): "Research on Spatial-Spectral Compression and Classification of Hyperspectral Images Based on Sparse Representation", 《中国博士学位论文全文数据库 工程科技Ⅱ辑》 (China Doctoral Dissertations Full-text Database, Engineering Science & Technology II) *
李树涛 (Li Shutao) et al.: "Development Status and Future Prospects of Multi-source Remote Sensing Image Fusion", 《遥感学报》 (Journal of Remote Sensing) *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113902717A (en) * 2021-10-13 2022-01-07 自然资源部国土卫星遥感应用中心 Satellite-borne hyperspectral farmland bare soil target identification method based on spectrum library

Also Published As

Publication number Publication date
CN112990368B (en) 2021-07-30

Similar Documents

Publication Publication Date Title
CN110543837B (en) Visible light airport airplane detection method based on potential target point
Zhu et al. A text detection system for natural scenes with convolutional feature learning and cascaded classification
Jia et al. Visual tracking via adaptive structural local sparse appearance model
CN105225226B (en) A kind of cascade deformable part model object detection method based on image segmentation
US9607228B2 (en) Parts based object tracking method and apparatus
CN106446150B (en) A kind of method and device of vehicle precise search
CN107085708B (en) High-resolution remote sensing image change detection method based on multi-scale segmentation and fusion
Han et al. A novel computer vision-based approach to automatic detection and severity assessment of crop diseases
CN109785366B (en) Related filtering target tracking method for shielding
Holzer et al. Learning to efficiently detect repeatable interest points in depth data
CN109035300B (en) Target tracking method based on depth feature and average peak correlation energy
CN110288612B (en) Nameplate positioning and correcting method and device
Wu et al. Strong shadow removal via patch-based shadow edge detection
JP2009163682A (en) Image discrimination device and program
CN113837037A (en) Plant species identification method and system, electronic equipment and storage medium
CN112990368B (en) Polygonal structure guided hyperspectral image single sample identification method and system
CN110827327A (en) Long-term target tracking method based on fusion
Rotem et al. Combining region and edge cues for image segmentation in a probabilistic gaussian mixture framework
Bonde et al. Multi scale shape index for 3d object recognition
Perrotton et al. Automatic object detection on aerial images using local descriptors and image synthesis
Khan et al. Segmentation of single and overlapping leaves by extracting appropriate contours
Vohra et al. Spatial shape feature descriptors in classification of engineered objects using high spatial resolution remote sensing data
CN109409375B (en) SAR image semantic segmentation method based on contour structure learning model
Othmani et al. Region-based segmentation on depth images from a 3D reference surface for tree species recognition
Huang et al. Non-rigid visual object tracking using user-defined marker and Gaussian kernel

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant