CN112132229A - Hyperspectral imaging classification method adopting coding intelligent learning framework - Google Patents

Hyperspectral imaging classification method adopting coding intelligent learning framework

Info

Publication number
CN112132229A
CN112132229A
Authority
CN
China
Prior art keywords
hyperspectral
model
classification
chsi
coding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011066495.XA
Other languages
Chinese (zh)
Other versions
CN112132229B (en)
Inventor
马旭
Current Assignee
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN202011066495.XA priority Critical patent/CN112132229B/en
Publication of CN112132229A publication Critical patent/CN112132229A/en
Application granted granted Critical
Publication of CN112132229B publication Critical patent/CN112132229B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — Physics
    • G06 — Computing; calculating or counting
    • G06F — Electric digital data processing
    • G06F18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2148 — Generating training patterns; bootstrap methods, e.g. bagging or boosting, characterised by the process organisation or structure, e.g. boosting cascade
    • G06N — Computing arrangements based on specific computational models
    • G06N3/045 — Neural networks; combinations of networks
    • G06N3/08 — Neural networks; learning methods


Abstract

The invention provides a hyperspectral imaging classification method adopting a coding intelligent learning framework. The hardware parameters of a compressed hyperspectral imaging (CHSI) system and the software parameters of a machine learning model and a filter model are treated as the adjustable parameter set of the coding intelligent learning framework. Using existing hyperspectral images and their corresponding classification labels as sample data, all or part of the parameters in the adjustable parameter set are trained and optimized cooperatively, so that the parameters of the hardware system and the software models are jointly trained and optimized. This effectively increases the optimization degrees of freedom and the prediction performance of the coding intelligent learning framework, and improves the accuracy and stability of compressed spectral imaging classification. Moreover, the invention computes the spectral imaging classification result directly from the compressed measurements of the CHSI system, without reconstructing the complete three-dimensional spectral data cube of the target scene, which effectively improves computational efficiency while avoiding the influence of spectral-data reconstruction errors on the classification result.

Description

Hyperspectral imaging classification method adopting coding intelligent learning framework
Technical Field
The invention belongs to the field of image classification, target recognition, machine learning, deep learning and computational imaging, and particularly relates to a hyperspectral imaging classification method adopting a coding intelligent learning framework.
Background
Hyperspectral imaging technology can employ hundreds or even thousands of observation wavelength channels to image a target scene or object over a wide band from the ultraviolet to the infrared, simultaneously obtaining the spatial light-intensity distribution and the spectral information of the object, and can provide high-resolution spectral fingerprint information while maintaining spatial resolution. Hyperspectral imaging has been widely applied in many fields, such as space remote sensing, precision agriculture, geoscience, hydrology, and biomedicine. Hyperspectral imaging classification (HIC) plays a crucial role in these applications.
Traditional HIC methods acquire the three-dimensional spectral data cube of a scene by scanning through the spectral and spatial domains, which requires long acquisition times, while the subsequent storage, transmission, and processing of the data consume considerable computational cost and resources. To address these problems, researchers have introduced coded apertures and dispersive or beam-splitting elements into hyperspectral imaging systems and proposed compressed hyperspectral imaging (CHSI) techniques based on compressed sensing (CS) theory. These techniques modulate the intensity, phase, and other properties of the light field through a coded aperture and a dispersive or beam-splitting element, and then project the modulated three-dimensional data cube onto a two-dimensional detector to form compressed measurements, merging image acquisition and compression into a single step. Because the compressed measurements contain the original spectral image information of the scene, they can be used to reconstruct the three-dimensional data cube, or to perform pixel-level or non-pixel-level classification of the hyperspectral image directly.
Recently, researchers have proposed a supervised compressed HIC technique that employs a coded aperture snapshot spectral imaging (CASSI) system to obtain two-dimensional projection data, constructs and trains an over-complete sparse dictionary, and approximately characterizes the spectral curve of each spatial pixel as a linear combination of several sparse dictionary elements. For any spatial pixel, the sparse coefficients of the corresponding spectral curve can be reconstructed algorithmically, and the pixel is classified according to these coefficients. However, reconstructing the sparse coefficients introduces high computational complexity and limits computation speed. To improve compressed-HIC performance, researchers have proposed coded-aperture optimization methods based on CS theory, sparse-dictionary optimization methods, and various sparse classifiers. Although coded-aperture optimization based on the restricted isometry property can improve the reconstruction of spectral data, this is not equivalent to improving classification performance. Moreover, prior methods omit the collaborative optimization and design across the two stages of image-data acquisition and data classification, which limits further improvement of classification accuracy.
In summary, existing compressed hyperspectral imaging classification techniques leave room for improvement in computation speed, computational accuracy, and the collaborative design of system software and hardware.
Disclosure of Invention
In order to solve the problems, the invention provides a hyperspectral imaging classification method adopting an intelligent coding learning framework, which can effectively improve the prediction performance of the intelligent coding learning framework and improve the calculation efficiency, accuracy and stability of compressed spectrum imaging classification.
A hyperspectral imaging classification method adopting a coding intelligent learning framework comprises the following steps:
S1: Acquire one or more hyperspectral image data cubes of known scenes to form a discrete hyperspectral three-dimensional data set F = {F_1, ..., F_W}, where F_i denotes the discrete hyperspectral three-dimensional data corresponding to the i-th hyperspectral image data cube and W is the number of hyperspectral image data cubes. At the same time, acquire for each F_i the corresponding classification-label truth vector S_i, i = 1, ..., W, where each element of S_i represents the true class of the corresponding part of F_i; the vectors S_1 ~ S_W form the classification-label truth-vector set S = {S_1, ..., S_W} corresponding to the data set F.
Set up a CHSI system model, an artificial intelligence learning model and a filter model that have not yet been optimized, and take them respectively as the current CHSI system model, artificial intelligence learning model and filter model;
S2: Use the current CHSI system model to acquire the compressed measurement data Y_1 ~ Y_W of all discrete hyperspectral three-dimensional data in F, obtaining the compressed measurement data set Y = {Y_1, ..., Y_W};
S3: Input Y into the cascade model formed by the current artificial intelligence learning model and filter model to obtain the classification-label prediction-vector set Ŝ = {Ŝ_1, ..., Ŝ_W} corresponding to F, where each element of each prediction vector Ŝ_i in the set Ŝ represents the predicted class of the corresponding part of the discrete hyperspectral three-dimensional data F_i;
S4: Construct a loss function L(S, Ŝ) from the sets S and Ŝ. If the loss-function value between the sets S and Ŝ meets the set requirement, the current CHSI system model, artificial intelligence learning model and filter model together form the final hyperspectral imaging classification model; if not, go to step S5;
S5: Update the hardware parameters of the CHSI system model and the software parameters of the artificial intelligence learning model and the filter model according to the set rules, take the updated models respectively as the current CHSI system model, artificial intelligence learning model and filter model, re-execute steps S2 to S4 and update the set Ŝ, until the loss-function value between the sets S and Ŝ meets the set requirement;
S6: Classify the discrete hyperspectral three-dimensional data corresponding to the target scene with the final hyperspectral imaging classification model.
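The joint optimization of steps S2–S5 can be illustrated with a toy numerical sketch. The code below is not the patent's actual system: the coded aperture is reduced to a small real-valued measurement matrix `C`, the learning model to a linear map `Wm`, and the filter to a scalar gain `q`, applied to synthetic two-class spectra; all sizes, names, and data are hypothetical. It only demonstrates that gradient descent can update hardware and software parameters together against a single loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (hypothetical sizes): n samples of 16-band "spectra", labels +/-1.
n, bands, meas = 200, 16, 4
labels = rng.choice([-1.0, 1.0], size=n)
signature = np.linspace(-1.0, 1.0, bands)            # class-dependent spectral shape
X = labels[:, None] * signature + rng.normal(0.0, 0.1, (n, bands))

# Adjustable parameter set: hardware (aperture C) plus software (Wm, q).
C = rng.uniform(0.0, 1.0, (meas, bands))   # stand-in for the coded-aperture pattern
Wm = rng.normal(0.0, 0.1, meas)            # stand-in for the learning model
q = 1.0                                    # stand-in for the filter scaling

lr, losses = 0.05, []
for step in range(300):                        # the S2-S5 loop
    Y = X @ C.T                                # S2: compressed measurements
    pred = q * (Y @ Wm)                        # S3: learning-model + filter cascade
    err = pred - labels
    losses.append(float(np.mean(err ** 2)))    # S4: least-squares loss
    g = 2.0 * err / n                          # gradient of the loss w.r.t. pred
    gW = q * (g @ Y)                           # S5: joint gradient updates ...
    gq = float(g @ (Y @ Wm))                   # ... over the software parameters ...
    gC = q * np.outer(Wm, g @ X)               # ... and the hardware (aperture)
    Wm -= lr * gW
    q -= lr * gq
    C -= lr * gC

assert losses[-1] < 0.5 * losses[0]            # joint training reduces the loss
```

The point of the sketch is that the measurement operator `C` receives gradient updates through the same loss as the classifier, which is the source of the extra optimization freedom the patent claims.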
Further, if the classification of the discrete hyperspectral three-dimensional data is image-level classification, the classification-label truth vector S_i and the classification-label prediction vector Ŝ_i each contain only one element; the element of S_i represents the true class of the hyperspectral image corresponding to F_i, and the element of Ŝ_i represents the predicted class of the hyperspectral image corresponding to F_i.
If the classification of the discrete hyperspectral three-dimensional data is sub-image-level classification, S_i and Ŝ_i each contain multiple elements, each corresponding to the hyperspectral three-dimensional data in one of the sub-regions into which F_i is divided along the spatial dimensions; each element of S_i represents the true class of the hyperspectral three-dimensional data in its corresponding sub-region, and each element of Ŝ_i represents the predicted class of the hyperspectral three-dimensional data in its corresponding sub-region.
If the classification of the discrete hyperspectral three-dimensional data is pixel-level classification, the number of elements in S_i and Ŝ_i equals the number of spatial pixels in F_i, with each element corresponding to one spatial pixel; each element of S_i represents the true class of its corresponding spatial pixel, and each element of Ŝ_i represents the predicted class of its corresponding spatial pixel.
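The three labeling granularities differ only in the length of the label vector attached to each data cube. A minimal sketch (all sizes hypothetical):

```python
import numpy as np

H, W_sp, L = 4, 4, 8             # spatial height/width and number of bands (hypothetical)
F_i = np.zeros((H, W_sp, L))     # one discrete hyperspectral data cube

sub = 2                                      # split each spatial axis in two -> 4 sub-regions
S_image = np.array([3])                      # image-level: one class per cube
S_sub = np.zeros((sub * sub,), dtype=int)    # sub-image-level: one class per sub-region
S_pixel = np.zeros((H * W_sp,), dtype=int)   # pixel-level: one class per spatial pixel

assert S_image.size == 1
assert S_sub.size == 4
assert S_pixel.size == F_i.shape[0] * F_i.shape[1]
```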
Further, the hardware parameters of the CHSI system model include the coded aperture, the dispersion ratio of the dispersive element, and the center wavelength of the dispersive element.
Further, if the pattern of the coded aperture is periodic, the update rule of the coded aperture is:
Select the pattern of any one period of the coded aperture as the template pattern;
Taking each pixel of the template pattern as an optimization variable, compute the gradient values of the loss function L with respect to the optimization variables to obtain the gradient-value matrix corresponding to the template pattern;
Subtract from each current pixel value of the template pattern, element by element, the product of the gradient value at the corresponding position in the gradient-value matrix and the set step size, completing the update of the template pattern;
Replace the pixel values of the patterns of the other periods with the pixel values at the corresponding positions of the updated template pattern, completing the update of the coded aperture.
Further, if the pattern of the coded aperture is periodic, the update rule of the coded aperture may alternatively be:
Taking each pixel of the coded aperture as an optimization variable, compute the gradient values of the loss function L with respect to the optimization variables to obtain the gradient-value matrix corresponding to the coded aperture;
Partition the gradient-value matrix according to the periodicity of the pattern to obtain the gradient-value sub-matrix corresponding to the pattern of each period;
Place all the gradient-value sub-matrices in element-wise correspondence and, at each element position, sum the values of all sub-matrices at that position to obtain the update-value matrix;
Select the pattern of any one period as the template pattern and subtract from each of its current pixel values, element by element, the product of the value at the corresponding position in the update-value matrix and the set step size, completing the update of the template pattern;
Replace the pixel values of the patterns of the other periods with the pixel values at the corresponding positions of the updated template pattern, completing the update of the coded aperture.
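The second periodic update rule — fold the full-aperture gradient back onto one period, update the template, then re-tile it — can be sketched as follows. The gradient values here are synthetic stand-ins for what backpropagation through the CHSI forward model would produce, and all sizes are hypothetical:

```python
import numpy as np

P, reps = 4, 3                        # period size and periods per axis (hypothetical)
rng = np.random.default_rng(1)
template = rng.uniform(0, 1, (P, P))  # pattern of one period
aperture = np.tile(template, (reps, reps))

# Gradient of the loss w.r.t. every aperture pixel (synthetic values here;
# in practice obtained by differentiating through the CHSI forward model).
grad = rng.normal(0, 1, aperture.shape)

# Partition the gradient matrix by period and sum the sub-matrices element-wise.
update = grad.reshape(reps, P, reps, P).sum(axis=(0, 2))

step = 0.01
template = template - step * update           # update the template pattern
aperture = np.tile(template, (reps, reps))    # replace every period with it

assert aperture.shape == (P * reps, P * reps)
assert np.allclose(aperture[:P, :P], aperture[P:2 * P, P:2 * P])
```

Summing the per-period sub-matrices is exactly the chain rule for a tiled pattern: every period shares the same template pixels, so their gradients accumulate.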
Further, if the pattern of the coded aperture is aperiodic, the update rule of the coded aperture is:
Taking each pixel of the coded aperture as an optimization variable, compute the gradient values of the loss function L with respect to the optimization variables to obtain the gradient-value matrix corresponding to the coded aperture;
Subtract from each current pixel value of the coded aperture, element by element, the product of the gradient value at the corresponding position in the gradient-value matrix and the set step size, completing the update of the coded aperture.
Further, in step S6, classifying the discrete hyperspectral three-dimensional data corresponding to the target scene with the final hyperspectral imaging classification model specifically comprises:
S61: Fabricate the optimal coded aperture according to the coded aperture contained in the optimal CHSI system model of the final hyperspectral imaging classification model, and build a CHSI system according to the hardware parameters of the optimal CHSI system model;
S62: For the target scene to be classified, acquire spectral data with the built CHSI system to obtain compressed measurement data;
S63: Process the compressed measurement data of step S62 with the optimal artificial intelligence learning model of the final hyperspectral imaging classification model to obtain output data;
S64: Process the output data of step S63 with the optimal filter model of the final hyperspectral imaging classification model to obtain the classification result of the discrete hyperspectral three-dimensional data corresponding to the target scene.
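Steps S61–S64 amount to one pass through the optimized cascade on new data. A schematic sketch with random stand-ins for each optimized component (every matrix, size, and name below is hypothetical, not from the patent):

```python
import numpy as np

rng = np.random.default_rng(2)
bands, meas, classes = 16, 4, 3

# S61: the optimized coded aperture / hardware, modeled as a fixed measurement matrix.
C_opt = rng.uniform(0, 1, (meas, bands))

# S62: compressed measurement of the target scene.
f_scene = rng.normal(0, 1, bands)          # stand-in for the scene's spectral data
y = C_opt @ f_scene

# S63: optimized learning model (stand-in: a linear map to class scores).
W_opt = rng.normal(0, 1, (classes, meas))
scores = W_opt @ y

# S64: filter model with an all-pass kernel, scaling and bias, then class decision.
scale, bias = 1.0, 0.0
label = int(np.argmax(scale * scores + bias))
assert 0 <= label < classes
```

Note that the full data cube `f_scene` is never reconstructed: classification happens entirely in the compressed domain, as the description emphasizes.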
Further, if the artificial intelligence learning model is a perceptron neural network, its software parameters comprise the weighting coefficients, kernel functions and bias coefficients of each layer of the network;
if the artificial intelligence learning model is a support vector machine, its software parameters comprise the kernel function, the support-vector-machine weighting coefficients and the bias coefficients;
if the artificial intelligence learning model is a convolutional neural network, its software parameters comprise the convolution kernel values, bias coefficients and fully-connected-layer coefficients of each layer of the convolutional neural network.
The software parameters of the filter model comprise the values of the filter kernel, the scaling coefficients and the bias coefficients.
Further, the filter kernel function of the filter model is an all-pass function.
Further, the final hyperspectral imaging classification model is:
Ŝ = T_f'{F} = T_f{ T_s{ Y_1, ..., Y_K ; Q_MD } ; Q_f },  with  Y_k = T_h{ F ; C_k, G_k },  k = 1, ..., K
where F denotes the discrete hyperspectral three-dimensional data corresponding to the target scene; K is the number of compressed measurement data obtained from the hyperspectral image data cube corresponding to the target scene; D = T_s{ Y_1, ..., Y_K ; Q_MD } is the output data of the artificial intelligence learning model; T_f{ } denotes the functional relationship satisfied between the prediction Ŝ, the output data D and the filter model; Q_f denotes the software parameters of the filter model; T_s{ } denotes the functional relationship satisfied between D, the software parameters of the artificial intelligence learning model and the compressed measurement data Y_1 ~ Y_K of the target scene; Q_MD denotes the software parameters of the artificial intelligence learning model; T_h{ } denotes the functional relationship satisfied between the compressed measurement data, the hardware parameters of the CHSI system model and the discrete hyperspectral three-dimensional data of the target scene; C_1 ~ C_K are the coded apertures used by the CHSI system model when acquiring the K compressed measurements; G_1 ~ G_K are the hardware parameters of the CHSI system model other than the coded aperture when acquiring the K compressed measurements; and T_f'{ } denotes the functional relationship satisfied between the prediction Ŝ and the hardware parameters of the CHSI system model, the software parameters of the artificial intelligence learning model, the software parameters of the filter model and the discrete hyperspectral three-dimensional data of the target scene.
Beneficial effects:
1. The invention provides a hyperspectral imaging classification method adopting a coding intelligent learning framework. Hardware parameters such as the coded aperture of the CHSI (compressed hyperspectral imaging) system, together with the software parameters of the artificial intelligence learning model and the filter model, are treated as the adjustable parameter set of the framework. Using existing hyperspectral image data and the corresponding classification labels as sample data, all or part of the parameters in the adjustable parameter set are trained and optimized cooperatively, so that the parameters of the hardware system and the software models are jointly trained and optimized. This effectively increases the optimization degrees of freedom, improves the prediction performance of the framework, and improves the accuracy and stability of compressed spectral imaging classification.
2. The invention computes the spectral imaging classification result directly from the compressed measurements of the CHSI system, operating on data in the compressed domain without reconstructing the complete three-dimensional spectral data cube of the target scene, which effectively improves computational efficiency and avoids the influence of spectral-data reconstruction errors on the classification result.
3. The invention integrates the hardware system, including the coded aperture, with the software systems that perform the subsequent data processing, such as the artificial intelligence learning model and the filter, forming a unified coding intelligent learning framework and realizing joint design and optimization of software and hardware.
Drawings
FIG. 1 is a flow chart of a hyperspectral imaging classification method using a coding intelligent learning framework according to the present invention;
FIG. 2 is a schematic diagram of a coding intelligent learning framework according to the present invention;
FIG. 3 is a schematic diagram of the process of modulating spectral data using different coded apertures according to the present invention;
FIG. 4 is a schematic diagram of a spectral imaging classification result based on an intelligent coding learning framework according to the present invention;
FIG. 5 is a schematic diagram of the spectral imaging classification result obtained with the coding intelligent learning framework of the present method when the CHSI hardware system parameters and the coded aperture are not optimized.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
The invention aims to provide a compressed spectral imaging classification method adopting a coding intelligent learning framework that improves spectral imaging classification performance while effectively improving computational efficiency. The method uses a CHSI system to obtain compressed spectral measurement data of the target scene. Then, without reconstructing the spectral image, it classifies the hyperspectral image by applying artificial intelligence learning and filtering techniques directly in the compressed domain, which not only improves computational efficiency but also avoids the influence of spectral-data reconstruction errors on the classification result. The invention establishes a unified coding intelligent learning framework model covering the CHSI hardware system, the artificial intelligence learning system and the filter, and adopts an end-to-end training method from the discrete hyperspectral three-dimensional data input to the predicted-value output. The CHSI hardware system parameters, coding template, artificial intelligence learning model parameters and filter parameters are jointly optimized to obtain the optimal combination of software and hardware parameters: an optimized coded aperture and other system hardware parameters, optimized artificial intelligence learning model parameters, and optimized filter parameters. This realizes joint software-hardware design and performance complementarity, greatly increases the optimization degrees of freedom, and effectively improves the accuracy and stability of compressed spectral imaging classification.
Specifically, as shown in fig. 1, a hyperspectral imaging classification method using a coding intelligent learning framework includes the following steps:
S1: Acquire one or more hyperspectral image data cubes of known scenes to form a discrete hyperspectral three-dimensional data set F = {F_1, ..., F_W}, where F_i denotes the discrete hyperspectral three-dimensional data corresponding to the i-th hyperspectral image data cube and W is the number of hyperspectral image data cubes. At the same time, acquire for each F_i the corresponding classification-label truth vector S_i, i = 1, ..., W, where each element of S_i represents the true class of the corresponding part of F_i; the vectors S_1 ~ S_W form the classification-label truth-vector set S = {S_1, ..., S_W} corresponding to the data set F.
Set up a CHSI system model, an artificial intelligence learning model and a filter model that have not yet been optimized, and take them respectively as the current CHSI system model, artificial intelligence learning model and filter model;
S2: Use the current CHSI system model to acquire the compressed measurement data Y_1 ~ Y_W of all discrete hyperspectral three-dimensional data in F, obtaining the compressed measurement data set Y = {Y_1, ..., Y_W};
S3: Input Y into the cascade model formed by the current artificial intelligence learning model and filter model to obtain the classification-label prediction-vector set Ŝ = {Ŝ_1, ..., Ŝ_W} corresponding to F, where each element of each prediction vector Ŝ_i in the set Ŝ represents the predicted class of the corresponding part of the discrete hyperspectral three-dimensional data F_i;
S4: Construct a loss function L(S, Ŝ) from the sets S and Ŝ. If the loss-function value between the sets S and Ŝ meets the set requirement, the current CHSI system model, artificial intelligence learning model and filter model together form the final hyperspectral imaging classification model; if not, go to step S5;
S5: Update the hardware parameters of the CHSI system model and the software parameters of the artificial intelligence learning model and the filter model according to the set rules, take the updated models respectively as the current CHSI system model, artificial intelligence learning model and filter model, re-execute steps S2 to S4 and update the set Ŝ, until the loss-function value between the sets S and Ŝ meets the set requirement.
Optionally, the hardware parameters of the CHSI system model include the coded aperture, the focal length and numerical aperture of the objective lens, the dispersion ratio of the dispersive element, and the center wavelength of the dispersive element. The software parameters of the filter model include the values of the filter kernel, the scaling coefficients and the bias coefficients. If the artificial intelligence learning model is a perceptron neural network, its software parameters comprise the weighting coefficients, kernel functions and bias coefficients of each layer of the network; if it is a support vector machine, its software parameters comprise the kernel function, the support-vector-machine weighting coefficients and the bias coefficients; and if it is a convolutional neural network, its software parameters comprise the convolution kernel values, bias coefficients and fully-connected-layer coefficients of each layer of the convolutional neural network.
That is, after the hardware parameters of the CHSI system model and the software parameters of the artificial intelligence learning model and the filter model are updated according to the set rules, the initial CHSI system model, artificial intelligence learning model and filter model are all changed. The updated CHSI system model is used to reacquire the compressed measurement data set Y = {Y_1, ..., Y_W}, the newly acquired compressed measurement data are input into the cascade model formed by the updated artificial intelligence learning model and filter model to obtain new predicted values, and the above steps are repeated until the loss function value meets the set requirement.
Note that the loss function value is related to {S_1, ..., S_W} and {Ŝ_1, ..., Ŝ_W} and characterizes the degree of closeness between them: the closer the values of S_i and Ŝ_i, the smaller the loss function value, and vice versa. Thus, the set requirement may be that the loss function value is smaller than a set value, depending on the loss function constructed. Common loss functions include the least-squares deviation Σ_j (S_{i,j} − Ŝ_{i,j})², the least absolute deviation Σ_j |S_{i,j} − Ŝ_{i,j}|, the least logarithmic deviation Σ_j |log S_{i,j} − log Ŝ_{i,j}|, etc., where S_{i,j} and Ŝ_{i,j} are the j-th elements of S_i and Ŝ_i, respectively.
The update rule of the coded aperture is further described below by taking the coded aperture as an example.
First, it is determined whether the coded aperture is a periodic pattern or a non-periodic pattern. If the pattern of the coded aperture is a periodic pattern, the update rule of the coded aperture is:
selecting the pattern in any one period of the coded aperture as a template pattern; taking each pixel on the template pattern as an optimization variable, and computing the gradient value of the loss function with respect to each optimization variable to obtain a gradient value matrix corresponding to the template pattern; subtracting from the current pixel values of the template pattern, in one-to-one correspondence, the product of the gradient value at each corresponding position in the gradient value matrix and a set step length, to complete the update of the template pattern; and replacing the pixel values of the patterns in the other periods with the pixel values at the corresponding positions of the updated template pattern, to complete the update of the coded aperture.
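A minimal sketch of this template-based update, assuming a square template of side `period` tiled over the whole aperture and a precomputed gradient matrix `template_grad` (both names are illustrative):

```python
import numpy as np

def update_periodic_aperture(aperture, template_grad, period, step):
    # Take the pattern of any one period as the template (top-left here).
    template = aperture[:period, :period].copy()
    # Subtract, pixel by pixel, gradient * set step length.
    template -= step * template_grad
    # Replace every other period with the updated template pixel values.
    reps = (aperture.shape[0] // period, aperture.shape[1] // period)
    return np.tile(template, reps)

aperture = np.tile(np.array([[1.0, 0.0], [0.0, 1.0]]), (2, 2))
grad = np.array([[0.5, -0.5], [0.0, 0.25]])
updated = update_periodic_aperture(aperture, grad, period=2, step=0.1)
```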
When the pattern of the coded aperture is a periodic pattern, in addition to the above updating method, the following updating rule may be adopted:
taking each pixel on the coded aperture as an optimization variable, and computing the gradient value of the loss function with respect to each optimization variable to obtain a gradient value matrix corresponding to the coded aperture; dividing the gradient value matrix according to the periodicity of the pattern to obtain the gradient value sub-matrix corresponding to the pattern of each period; placing the elements of all gradient value sub-matrices in one-to-one correspondence, and superposing, at each element position, the element values of all the gradient value sub-matrices at that position to obtain an update value matrix; selecting the pattern in any one period as a template pattern, and subtracting from its current pixel values, in one-to-one correspondence, the product of the value at each corresponding position in the update value matrix and a set step length, to complete the update of the template pattern; and replacing the pixel values of the patterns in the other periods with the pixel values at the corresponding positions of the updated template pattern, to complete the update of the coded aperture.
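The alternative rule can be sketched the same way, here with the per-pixel gradient `full_grad` over the whole aperture split into per-period sub-matrices and summed (names and shapes are illustrative):

```python
import numpy as np

def update_periodic_aperture_summed(aperture, full_grad, period, step):
    H, W = aperture.shape
    # Split the full gradient matrix into period x period sub-matrices
    # and superpose them position by position into one update value matrix.
    update_matrix = np.zeros((period, period))
    for i in range(0, H, period):
        for j in range(0, W, period):
            update_matrix += full_grad[i:i + period, j:j + period]
    # Update one template, then tile it back over every period.
    template = aperture[:period, :period] - step * update_matrix
    return np.tile(template, (H // period, W // period))

aperture = np.tile(np.array([[1.0, 0.0], [0.0, 1.0]]), (2, 2))
full_grad = np.ones((4, 4))
updated = update_periodic_aperture_summed(aperture, full_grad, period=2, step=0.1)
```

Compared with the first rule, the gradient information of all periods contributes to the single template update.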
Finally, if the pattern of the coded aperture is an aperiodic pattern, the update rule of the coded aperture is:
taking each pixel on the coded aperture as an optimization variable, and computing the gradient value of the loss function with respect to each optimization variable to obtain a gradient value matrix corresponding to the coded aperture; and subtracting from the current pixel values of the coded aperture, in one-to-one correspondence, the product of the gradient value at each corresponding position in the gradient value matrix and a set step length, to complete the update of the coded aperture.
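The aperiodic case reduces to plain per-pixel gradient descent; a one-line sketch:

```python
import numpy as np

def update_aperiodic_aperture(aperture, grad, step):
    # Every pixel is its own optimization variable: plain gradient descent.
    return aperture - step * grad

aperture = np.array([[1.0, 0.0], [0.0, 1.0]])
grad = np.array([[1.0, -1.0], [0.0, 2.0]])
updated = update_aperiodic_aperture(aperture, grad, step=0.1)
```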
For other hardware parameters of the CHSI system model, software parameters of the artificial intelligence learning model and the filter model, a common steepest descent method, a conjugate gradient method, a simulated annealing algorithm, a genetic algorithm, and the like may be adopted for updating, which is not described in detail herein.
S6: classifying the discrete hyperspectral three-dimensional data corresponding to the target scene with the final hyperspectral imaging classification model, which specifically comprises the following steps:
S61: preparing the optimal coded aperture according to the coded aperture contained in the optimal CHSI system model in the final hyperspectral imaging classification model, and building a CHSI system according to the hardware parameters of the optimal CHSI system model;
S62: for the target scene to be classified, acquiring spectral data with the built CHSI system to obtain compressed measurement data;
S63: processing the compressed measurement data of step S62 with the optimal artificial intelligence learning model in the final hyperspectral imaging classification model to obtain output data;
S64: processing the output data of step S63 with the optimal filter model in the final hyperspectral imaging classification model to obtain the classification result of the discrete hyperspectral three-dimensional data corresponding to the target scene.
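Steps S61-S64 form a measure-then-classify pipeline. The sketch below uses toy stand-ins for the three optimized stages (`chsi_measure`, `ai_model` and `classification_filter` are all hypothetical placeholders, not the patent's trained models):

```python
import numpy as np

def chsi_measure(scene, coded_aperture):
    # S62 stand-in: coded-aperture modulation, then integration over bands.
    return np.sum(scene * coded_aperture[..., None], axis=2)

def ai_model(measurement):
    # S63 stand-in: a trivial "learning model" that just normalizes.
    return measurement / (measurement.max() + 1e-12)

def classification_filter(output):
    # S64 stand-in: all-pass filtering followed by thresholding to labels.
    return (output > 0.5).astype(int)

rng = np.random.default_rng(0)
scene = rng.random((8, 8, 16))                       # hyperspectral cube F
aperture = (rng.random((8, 8)) > 0.5).astype(float)  # optimal coded aperture
labels = classification_filter(ai_model(chsi_measure(scene, aperture)))
```

Note that the classification labels are computed directly from the compressed measurement, without reconstructing the spectral cube.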
It should be noted that, if the classification of the discrete hyperspectral three-dimensional data is image-level classification, the classification truth vector S_i and the classification prediction vector Ŝ_i each contain only one element; the element in S_i represents the true value of the category to which the hyperspectral image corresponding to the discrete hyperspectral three-dimensional data F_i belongs, and the element in Ŝ_i represents the predicted value of that category;
if the classification of the discrete hyperspectral three-dimensional data is sub-image-level classification, S_i and Ŝ_i each contain a plurality of elements, each element corresponding to the hyperspectral three-dimensional data in one of the sub-regions into which F_i is divided in the spatial dimensions; each element in S_i represents the true value of the category to which the hyperspectral three-dimensional data in the corresponding sub-region belongs, and each element in Ŝ_i represents the predicted value of that category;
if the classification of the discrete hyperspectral three-dimensional data is pixel-level classification, the number of elements contained in S_i and Ŝ_i equals the number of spatial pixels contained in F_i, each element corresponding to one spatial pixel; each element in S_i represents the true value of the category of the corresponding spatial pixel, and each element in Ŝ_i represents the predicted value of that category.
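The three granularities differ only in the length of the label vectors; a small sketch with assumed sizes:

```python
import numpy as np

N, M = 4, 4          # spatial size of one data cube F_i (assumed)
n_subregions = 4     # number of spatial sub-regions (assumed)

s_image = np.array([2])                    # image level: one element per cube
s_sub = np.zeros(n_subregions, dtype=int)  # sub-image level: one per sub-region
s_pixel = np.zeros(N * M, dtype=int)       # pixel level: one per spatial pixel
```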
Further, the final hyperspectral imaging classification model may be expressed as:

Ŝ = T_f{S_mid, Q_f} = T_f{T_s{Y_1, ..., Y_K, Q_MD}, Q_f} = T_f{T_s{T_h{F, C_1, G_1}, ..., T_h{F, C_K, G_K}, Q_MD}, Q_f} = T_f'{F, C_1, ..., C_K, G_1, ..., G_K, Q_MD, Q_f}

wherein F represents the discrete hyperspectral three-dimensional data corresponding to the target scene; K is the number of compressed measurement data obtained from the hyperspectral image data cube corresponding to the target scene; S_mid is the output data of the artificial intelligence learning model; T_f{} denotes the functional relationship satisfied between the output Ŝ of the filter model and S_mid; Q_f is the software parameter of the filter model; T_s{} denotes the functional relationship satisfied among S_mid, the software parameters of the artificial intelligence learning model and the compressed measurement data Y_1~Y_K corresponding to the target scene; Q_MD is the software parameter of the artificial intelligence learning model; T_h{} denotes the functional relationship satisfied among the compressed measurement data, the hardware parameters of the CHSI system model and the discrete hyperspectral three-dimensional data corresponding to the target scene; C_1~C_K are the coded apertures used by the CHSI system model when acquiring the K compressed measurement data; G_1~G_K are the hardware parameters of the CHSI system model other than the coded aperture when acquiring the K compressed measurement data; and T_f'{} denotes the functional relationship satisfied among the predicted value Ŝ, the hardware parameters of the CHSI system model, the software parameters of the artificial intelligence learning model, the software parameters of the filter model and the discrete hyperspectral three-dimensional data corresponding to the target scene.
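The nesting T_f{T_s{T_h{...}}} can be expressed directly as function composition; the toy functions below are placeholders chosen only to make the data flow concrete:

```python
def cascade_predict(F, apertures, hw_params, T_h, T_s, T_f, Q_MD, Q_f):
    """S^ = T_f{ T_s{ T_h{F, C_1, G_1}, ..., T_h{F, C_K, G_K}, Q_MD }, Q_f }."""
    Y = [T_h(F, C_k, G_k) for C_k, G_k in zip(apertures, hw_params)]
    s_mid = T_s(Y, Q_MD)     # output of the artificial intelligence model
    return T_f(s_mid, Q_f)   # output of the filter = class prediction

# Toy placeholder functions, chosen only to trace the composition.
T_h = lambda F, C, G: F * C + G   # "measurement"
T_s = lambda Y, Q: sum(Y) * Q     # "learning model"
T_f = lambda x, Q: x + Q          # "filter"

prediction = cascade_predict(2.0, [1.0, 2.0], [0.0, 1.0], T_h, T_s, T_f, 0.5, 1.0)
```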
It should be noted that the expression of the final hyperspectral imaging classification model is also applicable to the known scene in step S1, that is, the expression of the final hyperspectral imaging classification model may be a general formula.
The data acquisition process of the final hyperspectral imaging classification model is described in detail below.
Step 101: fig. 2 is a schematic diagram of the coding intelligent learning framework. Let f_0(x, y, λ) denote the hyperspectral three-dimensional data of the target scene, where x and y are spatial coordinates and λ is the wavelength coordinate. To create a discrete model, the hyperspectral three-dimensional data is expressed as F_{u,v,w} with size N × M × L, where u, v, w are positive integers, N, M and L are the upper limits of u, v and w, respectively, u and v are spatial coordinates, and w is the coordinate in the spectral dimension (the λ direction).
Step 102: for pixel-level hyperspectral imaging classification, each spatial coordinate (u, v) of the three-dimensional spectral data F of the target scene corresponds to one spatial pixel, each spatial pixel corresponds to a specific classification label, the true value of the classification label is recorded as S_{u,v}, and the estimated value obtained by the method of the invention is recorded as Ŝ_{u,v}. For image-level hyperspectral imaging classification, the whole three-dimensional spectral data F corresponds to one specific classification label, its true value is recorded as S_1, and the estimated value obtained by the method of the invention is recorded as Ŝ_1. For sub-image-level hyperspectral imaging classification, the three-dimensional spectral data F is divided into a plurality of sub-regions in the u and v spatial dimensions, each spatial sub-region corresponds to a specific classification label, the true value corresponding to the t-th sub-region is recorded as S_t, and the estimated value obtained by the method of the invention is recorded as Ŝ_t.
Step 103: the compressed hyperspectral imaging (CHSI) system receives and collects the discretized hyperspectral three-dimensional data of the target scene. The system has a coded aperture, denoted C, that spatially modulates the light; the pattern of the coded aperture may be periodic or aperiodic. Suppose the pattern of the coded aperture C contains P periods, the p-th periodic pattern being denoted C_p (p = 1, ..., P); the patterns of different periods are identical to each other, and the pattern within a single period may itself be periodic or aperiodic. The system may also be provided with a dispersive element, a beam-splitting element, or other elements with spatial light modulation, such as an amplitude-type spatial light modulator, a phase-type spatial light modulator or a polarization light modulator; the set of the other hardware parameters of the system is denoted G.
Step 104: fig. 3 is a schematic diagram of the process of modulating the spectral data with different coded apertures and obtaining a plurality of different sets of compressed measurement data. K sets of compressed measurement data are obtained on the detector of the CHSI system, where K ≥ 1. The k-th set of compressed measurement data is denoted Y_k = T_h{F, C_k, G_k}, where T_h{} is the functional relationship satisfied among the compressed measurement data, the hardware parameters of the CHSI system model and the discrete hyperspectral three-dimensional data corresponding to the target scene, and may include the influence of noise on the compressed measurement data; C_k and G_k denote the coded aperture and the system hardware parameters used in the k-th measurement, e.g. the dispersion ratio of the dispersive element, the center wavelength of the dispersive element, and the focal length and numerical aperture of the objective lens. The (u, v)-th element value of Y_k then follows from the functional relationship T_h{F, C_k, G_k}.
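The element-wise form of T_h is given in the source as an image; as one plausible instance, a single-disperser CASSI-style forward model (coded-aperture modulation, a one-pixel-per-band spectral shear, then integration on the detector) can be sketched as follows. This specific shear-and-sum form is an assumption for illustration, not the patent's exact expression:

```python
import numpy as np

def chsi_forward(F, C):
    """Assumed CASSI-style T_h: band w of the N x M x L cube F is modulated
    by the coded aperture C (N x M), sheared by w detector columns by the
    dispersive element, and all bands sum on the detector, giving an
    N x (M + L - 1) measurement."""
    N, M, L = F.shape
    Y = np.zeros((N, M + L - 1))
    for w in range(L):
        Y[:, w:w + M] += C * F[:, :, w]   # shear band w, accumulate
    return Y

F = np.ones((2, 2, 2))   # tiny cube: N = M = L = 2
C = np.ones((2, 2))      # fully open coded aperture
Y = chsi_forward(F, C)
```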
Step 105: all K sets of compressed measurement data are collected to form the compressed measurement data set Y = {Y_1, ..., Y_K}.
Step 106: the compressed measurement data set Y is processed with an artificial intelligence learning method. Suppose the output of the artificial intelligence learning model, denoted S_mid, is S_mid = T_s{Y, Q_MD} = T'_s{F, C_1, ..., C_K, G_1, ..., G_K, Q_MD}, where Q_MD comprises the parameters of the artificial intelligence learning model, T_s{} is the artificial intelligence learning model function adopted, and T'_s{} is the equivalent model function after the software and hardware systems in the coding intelligent framework are combined. According to this formula, S_mid is a joint function of the compressed measurement data and the parameters of the artificial intelligence learning model, and further a joint function of the target scene three-dimensional spectral data, the coded aperture and other system hardware parameters, and the parameters of the artificial intelligence learning model.
Step 107: a filter is used to process the output S_mid of the artificial intelligence learning model. Suppose the output of the filter, i.e. the prediction Ŝ, can be expressed as Ŝ = T_f{S_mid, Q_f}, where T_f{} is the model function of the filter adopted and Q_f comprises the model parameters of the filter. In the invention T_f{} may be the all-pass function, i.e. the output of the filter equals its input, or other filters may be selected according to actual requirements, including low-pass, high-pass and band-stop filters. From the above formula, the output of the filter is a joint function of the output data of the artificial intelligence learning model and the filter parameters, and further of the target scene three-dimensional spectral data, the coded aperture and other system hardware parameters, the artificial intelligence learning model parameters, and the filter model parameters.
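Two simple instances of T_f — the all-pass case named in the text and an assumed moving-average low-pass filter — can be sketched as:

```python
import numpy as np

def all_pass(x):
    # The all-pass choice named in the text: output equals input.
    return x

def moving_average(x, width):
    # An assumed low-pass choice: moving average of the given window width.
    kernel = np.ones(width) / width
    return np.convolve(x, kernel, mode='same')

x = np.array([0.0, 1.0, 0.0, 1.0, 0.0])
y = moving_average(x, 3)
```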
Then, for pixel-level hyperspectral imaging classification, the estimate Ŝ_{u,v} is taken from the filter output, with u and v traversing all spatial coordinate positions. For image-level hyperspectral imaging classification, the filter output is taken as the estimate Ŝ_1. For sub-image-level hyperspectral imaging classification, the estimate Ŝ_t is taken from the filter output, with t traversing all spatial sub-region sequence numbers. The coded aperture, the other system hardware parameters, the artificial intelligence learning model parameters and the filter model parameters are then jointly trained to obtain the optimal combination of software and hardware parameters, i.e. the optimized coded aperture and other system hardware parameters, the optimized artificial intelligence learning model, and the optimized filter.
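The joint (end-to-end) training of hardware and software parameters can be illustrated on a deliberately tiny problem: one "aperture" scalar c and one model weight q, trained together by gradient descent on a least-squares loss. Everything here is a toy stand-in for the full framework:

```python
# Toy joint optimization: "hardware" parameter c (a one-pixel coded
# aperture) and "software" parameter q (a one-weight model), fitted by
# gradient descent on a least-squares loss. All values are illustrative.
f, s = 2.0, 1.0      # scene value and its classification label
c, q = 0.3, 0.3      # initial aperture transmittance and model weight
step = 0.05
for _ in range(500):
    y = c * f                    # compressed measurement
    s_hat = q * y                # learning model + all-pass filter
    err = s_hat - s
    grad_c = 2.0 * err * q * f   # dLoss/dc
    grad_q = 2.0 * err * c * f   # dLoss/dq
    c -= step * grad_c           # hardware update
    q -= step * grad_q           # software update
loss = (q * c * f - s) ** 2
```

Because both c and q receive gradients of the same loss, the hardware and software parameters are optimized jointly rather than in separate stages, which is the extra degree of freedom the framework claims.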
In summary, in the process of acquiring scene spectral image information, the present invention uses the coded aperture to modulate the original spectral data, and obtains one or more sets of compressed measurement values on the detector. The coded aperture pattern may be periodic or aperiodic. Then, the compressed measurement data is adopted, and the classification of hyperspectral imaging is directly realized from a compressed domain on the premise of not reconstructing the spectral image, so that the calculation complexity is greatly reduced, and the calculation efficiency is improved. The classification may be a pixel level, image level, or sub-image level classification. In order to realize the classification function, the method uses artificial intelligence learning to process the compressed measurement data acquired by the detector of the imaging system, namely, the compressed measurement data acquired by the detector is input into an artificial intelligence learning model as input data, and a filter is further adopted to process the data to obtain the classification mark of hyperspectral imaging.
That is, the invention combines the imaging model of the CHSI hardware system with the artificial intelligence learning model and the filter model, establishes the compressed spectrum imaging classification model using the unified novel coding intelligence learning framework, and the input of the model is the spectrum image (namely, the three-dimensional spectrum data cube) of the target scene, and the output of the filter is used as the classification result of the hyperspectral imaging through the transformation and modulation (including the modulation effect of the coding aperture) of the CHSI hardware system, the artificial intelligence learning model and the processing of the filter.
Therefore, the CHSI hardware system is regarded as a part of the coding intelligent learning framework, the CHSI hardware system parameters and the coding aperture are regarded as adjustable parameters of the coding intelligent learning framework, and the CHSI hardware system parameters, the coding template, the artificial intelligent learning model parameters, the filter parameters and the like are collectively called as an adjustable parameter set. The method uses the existing hyperspectral imaging and the corresponding classification label thereof as sample data, and carries out collaborative training and optimization on all parameters or part of parameters in the adjustable parameter set, thereby effectively improving the prediction performance of the coding intelligent learning framework and improving the accuracy and stability of the compressed spectrum imaging classification.
Referring to fig. 4, the target scene image and its classification label truth values are shown: 401 is a color image of the target scene, and 402 shows the true classification labels of the target scene, different colors representing the classification labels of different regions. Fig. 5 shows the results of classifying the target scene by spectral imaging with different methods: 501 is a schematic diagram of the spectral imaging classification result obtained by jointly training and optimizing the coded aperture and the artificial intelligence learning model on the basis of the coding intelligent learning framework, reaching a pixel-level classification accuracy of 90%-95%; 502 is the result obtained with the coding intelligent learning framework of the method of the invention when only the artificial intelligence learning model is trained and the coded aperture is not optimized, with a pixel-level classification accuracy of 70%-75%.
Therefore, compared with the prior art, the compressed spectral imaging classification method based on the coding intelligent learning framework provided by the invention directly calculates the classification result of spectral imaging according to the compressed measurement value of the CHSI system and the data in the compressed domain, does not need to reconstruct a complete three-dimensional spectral data cube of a target scene, effectively improves the calculation efficiency, and simultaneously avoids the influence of the spectral data reconstruction error on the classification result.
Secondly, the invention integrates a hardware system containing the coding aperture, artificial intelligence learning for realizing subsequent data processing, a filter and other software systems, forms a uniform coding intelligence learning framework and realizes the combined design of software and hardware.
Finally, the invention provides an end-to-end training optimization method of the coding intelligent learning framework, which is used for carrying out combined training and optimization on parameters in a hardware system and a software model, effectively improves the degree of freedom of optimization and can further improve the performance of compressed spectrum imaging classification.
The present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof, and it will be understood by those skilled in the art that various changes and modifications may be made herein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. A hyperspectral imaging classification method adopting an intelligent coding learning framework is characterized by comprising the following steps:
S1: acquiring hyperspectral image data cubes of one or more known scenes to form a discrete hyperspectral three-dimensional data set {F_1, ..., F_W}, wherein F_1~F_W are the discrete hyperspectral three-dimensional data corresponding to the hyperspectral image data cubes, and W is the number of hyperspectral image data cubes; simultaneously acquiring the classification truth vector S_i corresponding to each discrete hyperspectral three-dimensional data F_i, where i = 1, ..., W, each element in S_i represents the true value of a category to which F_i belongs, and S_1~S_W compose the set of classification truth vectors {S_1, ..., S_W} corresponding to the discrete hyperspectral three-dimensional data set {F_1, ..., F_W};
Setting a CHSI system model, an artificial intelligence learning model and a filter model which are not optimally designed, and respectively using the CHSI system model, the artificial intelligence learning model and the filter model which are not optimally designed as a current CHSI system model, an artificial intelligence learning model and a filter model;
S2: acquiring, with the current CHSI system model, the compressed measurement data Y_1~Y_W of all the discrete hyperspectral three-dimensional data in {F_1, ..., F_W} to obtain the compressed measurement data set Y = {Y_1, ..., Y_W};
S3: inputting Y into the cascade model formed by the current artificial intelligence learning model and filter model to obtain the set of classification prediction vectors {Ŝ_1, ..., Ŝ_W} corresponding to {F_1, ..., F_W}, wherein each element of each classification prediction vector Ŝ_i in the set represents a predicted value of the category to which the discrete hyperspectral three-dimensional data F_i belongs;
S4: constructing a loss function according to the set {S_1, ..., S_W} and the set {Ŝ_1, ..., Ŝ_W}, and judging whether the loss function value between the set {S_1, ..., S_W} and the set {Ŝ_1, ..., Ŝ_W} meets the set requirement; if so, the current CHSI system model, artificial intelligence learning model and filter model jointly form the final hyperspectral imaging classification model; if not, proceeding to step S5;
S5: updating the hardware parameters of the CHSI system model and the software parameters of the artificial intelligence learning model and the filter model according to set rules, re-executing the steps S2-S4 with the updated CHSI system model, artificial intelligence learning model and filter model as the current CHSI system model, artificial intelligence learning model and filter model, respectively, and updating the set {Ŝ_1, ..., Ŝ_W}, until the loss function value between the set {Ŝ_1, ..., Ŝ_W} and the set {S_1, ..., S_W} meets the set requirement;
S6: classifying the discrete hyperspectral three-dimensional data corresponding to the target scene with the final hyperspectral imaging classification model.
2. The hyperspectral imaging classification method adopting the coding intelligent learning framework according to claim 1, wherein, if the classification of the discrete hyperspectral three-dimensional data is image-level classification, the classification truth vector S_i and the classification prediction vector Ŝ_i each contain only one element; the element in S_i represents the true value of the category to which the hyperspectral image corresponding to the discrete hyperspectral three-dimensional data F_i belongs, and the element in Ŝ_i represents the predicted value of that category;
if the classification of the discrete hyperspectral three-dimensional data is sub-image-level classification, S_i and Ŝ_i each contain a plurality of elements, each element corresponding to the hyperspectral three-dimensional data in one of the sub-regions into which F_i is divided in the spatial dimensions; each element in S_i represents the true value of the category to which the hyperspectral three-dimensional data in the corresponding sub-region belongs, and each element in Ŝ_i represents the predicted value of that category;
if the classification of the discrete hyperspectral three-dimensional data is pixel-level classification, the number of elements contained in S_i and Ŝ_i equals the number of spatial pixels contained in F_i, each element corresponding to one spatial pixel; each element in S_i represents the true value of the category of the corresponding spatial pixel, and each element in Ŝ_i represents the predicted value of that category.
3. The hyperspectral imaging classification method adopting the coding intelligent learning framework according to claim 1, wherein the hardware parameters of the CHSI system model comprise the coded aperture, the dispersion ratio of the dispersive element and the center wavelength of the dispersive element.
4. The hyperspectral imaging classification method adopting the intelligent coding learning framework as claimed in claim 3, wherein if the pattern of the coded aperture is a periodic pattern, the update rule of the coded aperture is:
selecting a pattern in any period in the coding aperture as a template pattern;
taking each pixel on the template pattern as an optimization variable, and computing the gradient value of the loss function with respect to each optimization variable to obtain a gradient value matrix corresponding to the template pattern;
the current pixel value of the template pattern correspondingly subtracts the product of the gradient value of each corresponding position in the gradient value matrix and the set step length one by one to complete the updating of the template pattern;
and respectively replacing the pixel values of the patterns in other periods with the pixel values of the corresponding positions after the updating of the template patterns to complete the updating of the coding aperture.
5. The hyperspectral imaging classification method adopting the intelligent coding learning framework as claimed in claim 3, wherein if the pattern of the coded aperture is a periodic pattern, the update rule of the coded aperture is:
taking each pixel on the coded aperture as an optimization variable, and computing the gradient value of the loss function with respect to each optimization variable to obtain a gradient value matrix corresponding to the coded aperture;
dividing the gradient value matrix according to the periodicity of the pattern to obtain gradient value sub-matrices corresponding to the pattern of each period;
corresponding all elements of all the gradient value sub-matrixes one by one, and superposing the element value of each gradient value sub-matrix corresponding to the element position at each element position to obtain an updated value matrix;
selecting a pattern in any period as a template pattern, and subtracting the product of the gradient value and the set step length of each corresponding position in the update value matrix from the current pixel value of the template pattern in a one-to-one correspondence manner to complete the update of the template pattern;
and respectively replacing the pixel values of the patterns in other periods with the pixel values of the corresponding positions after the updating of the template patterns to complete the updating of the coding aperture.
6. The hyperspectral imaging classification method adopting the coding intelligent learning framework as claimed in claim 3, wherein, if the pattern of the coded aperture is an aperiodic pattern, the update rule of the coded aperture is:
taking each pixel on the coded aperture as an optimization variable, and obtaining the gradient value of the loss function [formula rendered only as image FDA0002713894510000041 in the original] with respect to each optimization variable, thereby forming a gradient value matrix corresponding to the coded aperture;
and subtracting, in one-to-one correspondence, the product of the gradient value at each corresponding position in the gradient value matrix and the set step length from the current pixel value of the coded aperture, so as to complete the updating of the coded aperture.
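For the aperiodic case there is no template or tiling; every pixel simply takes one pixel-wise gradient-descent step. A short sketch; the clipping to the transmittance range [0, 1] is a common practical addition and an assumption here, not part of the claim:

```python
import numpy as np

def update_aperiodic_aperture(aperture, grad, step=0.01):
    """Every pixel of the coded aperture is its own optimization variable,
    so the whole aperture takes one pixel-wise gradient-descent step."""
    new_ap = aperture - step * grad
    # assumption: clip to the physically realizable transmittance range
    return np.clip(new_ap, 0.0, 1.0)

ap = np.full((3, 3), 0.5)
g = np.ones((3, 3))        # assumed gradient of the loss w.r.t. each pixel
ap = update_aperiodic_aperture(ap, g, step=0.1)
```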
7. The hyperspectral imaging classification method adopting the coding intelligent learning framework according to claim 3, wherein, in step S6, using the final hyperspectral imaging classification model to classify the discrete hyperspectral three-dimensional data corresponding to the target scene specifically comprises the following steps:
s61: preparing an optimal coding aperture according to the coding aperture contained in the optimal CHSI system model in the final hyperspectral imaging classification model, and building a CHSI system according to the hardware parameters of the optimal CHSI system model;
s62: aiming at a target scene to be classified, a built CHSI system is adopted to acquire spectral data to obtain compressed measurement data;
s63: processing the compressed measurement data in the step S62 by adopting an optimal artificial intelligence learning model in the final hyperspectral imaging classification model to obtain output data;
s64: and (4) processing the output data in the step (S63) by adopting an optimal filter model in the final hyperspectral imaging classification model to obtain a classification result of the discrete hyperspectral three-dimensional data corresponding to the target scene.
8. The hyperspectral imaging classification method adopting the coding intelligent learning framework as claimed in claim 1, wherein, if the artificial intelligence learning model is a perceptron neural network, the software parameters of the artificial intelligence learning model comprise the weighting coefficients, kernel functions and bias coefficients of each layer of the perceptron neural network;
if the artificial intelligence learning model is a support vector machine, the software parameters of the artificial intelligence learning model comprise the kernel function, the weighting coefficients and the bias coefficient of the support vector machine;
if the artificial intelligence learning model is a convolutional neural network, the software parameters of the artificial intelligence learning model comprise the convolution kernel values, bias coefficients and fully-connected-layer coefficients of each layer of the convolutional neural network;
and the software parameters of the filter model comprise the values of the filter kernel, the scaling coefficients and the bias coefficients.
9. The hyperspectral imaging classification method adopting the coding intelligent learning framework as claimed in claim 1, wherein the filter kernel function of the filter model is an all-pass function.
10. The hyperspectral imaging classification method adopting the coding intelligent learning framework according to claim 1, wherein the final hyperspectral imaging classification model is:

F̂ = Tf{F̂s, Qf}, F̂s = Ts{Y1, …, YK, QMD}, Yk = Th{F, Ck, Gk} (k = 1, …, K),

i.e. F̂ = T′f{F, C1~CK, G1~GK, QMD, Qf},

wherein F represents the discrete hyperspectral three-dimensional data corresponding to the target scene; K is the number of compressed measurement data obtained from the hyperspectral image data cube corresponding to the target scene; F̂s is the output data of the artificial intelligence learning model; Tf{ } denotes the functional relationship satisfied between the output F̂ of the filter model and F̂s; Qf is the software parameters of the filter model; Ts{ } denotes the functional relationship satisfied between F̂s, the software parameters of the artificial intelligence learning model and the compressed measurement data Y1~YK corresponding to the target scene; QMD is the software parameters of the artificial intelligence learning model; Th{ } represents the functional relationship satisfied between the compressed measurement data, the hardware parameters of the CHSI system model and the discrete hyperspectral three-dimensional data corresponding to the target scene; C1~CK are the coded apertures respectively adopted in the CHSI system model when the K compressed measurement data are acquired; G1~GK are the hardware parameters, other than the coded aperture, of the CHSI system model when the K compressed measurement data are acquired; and T′f{ } represents the functional relationship satisfied between the predicted value F̂ and the hardware parameters of the CHSI system model, the software parameters of the artificial intelligence learning model, the software parameters of the filter model and the discrete hyperspectral three-dimensional data corresponding to the target scene.
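The innermost relationship, one compressed measurement produced from the scene cube and a coded aperture, can be made concrete with a simple coded-aperture forward model. The spectral-shear form below is a common CASSI-style assumption for such systems, not necessarily the exact Th of this patent:

```python
import numpy as np

def forward_model(F, C):
    """One compressed measurement from cube F (H x W x L) and aperture C:
    code each spectral band with C, shear band l by l pixels, and sum all
    bands onto the detector (a common CASSI-style model)."""
    H, W, L = F.shape
    Y = np.zeros((H, W + L - 1))          # sheared detector plane
    for l in range(L):
        Y[:, l:l + W] += C * F[:, :, l]   # code, shift, accumulate
    return Y

rng = np.random.default_rng(1)
F = rng.random((4, 4, 3))                      # toy hyperspectral cube
C = (rng.random((4, 4)) > 0.5).astype(float)   # binary coded aperture
Y = forward_model(F, C)                        # detector of size 4 x 6
```

Acquiring K such measurements with apertures C1~CK yields the inputs Y1~YK that the artificial intelligence learning model consumes.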
CN202011066495.XA 2020-09-30 2020-09-30 Hyperspectral imaging classification method adopting coding intelligent learning framework Active CN112132229B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011066495.XA CN112132229B (en) 2020-09-30 2020-09-30 Hyperspectral imaging classification method adopting coding intelligent learning framework

Publications (2)

Publication Number Publication Date
CN112132229A true CN112132229A (en) 2020-12-25
CN112132229B CN112132229B (en) 2022-11-29

Family

ID=73845024

Country Status (1)

Country Link
CN (1) CN112132229B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112750174A (en) * 2021-01-05 2021-05-04 东北大学 Target intelligent sensing and identifying system and method based on spatial coding

Citations (6)

Publication number Priority date Publication date Assignee Title
US20180307949A1 (en) * 2017-04-20 2018-10-25 The Boeing Company Methods and systems for hyper-spectral systems
CN108985301A (en) * 2018-07-04 2018-12-11 南京师范大学 A kind of hyperspectral image classification method of the sub- dictionary learning of multiple features class
CN109871830A (en) * 2019-03-15 2019-06-11 中国人民解放军国防科技大学 Spatial-spectral fusion hyperspectral image classification method based on three-dimensional depth residual error network
CN109883548A (en) * 2019-03-05 2019-06-14 北京理工大学 The Encoding Optimization of the spectrum imaging system of neural network based on optimization inspiration
CN110348399A (en) * 2019-07-15 2019-10-18 中国人民解放军国防科技大学 EO-1 hyperion intelligent method for classifying based on prototype study mechanism and multidimensional residual error network
CN111368896A (en) * 2020-02-28 2020-07-03 南京信息工程大学 Hyperspectral remote sensing image classification method based on dense residual three-dimensional convolutional neural network


Non-Patent Citations (1)

Title
CHEN Shanxue et al., "Compressed sensing reconstruction of hyperspectral images based on spatial-spectral characteristics", Telecommunication Engineering *



Similar Documents

Publication Publication Date Title
Yuan Learning building extraction in aerial scenes with convolutional networks
Sara et al. Hyperspectral and multispectral image fusion techniques for high resolution applications: A review
CN109883548B (en) Optimization heuristic-based coding optimization method for spectral imaging system of neural network
CN109741407A (en) A kind of high quality reconstructing method of the spectrum imaging system based on convolutional neural networks
KR102132075B1 (en) Hyperspectral Imaging Reconstruction Method Using Artificial Intelligence and Apparatus Therefor
CN109447891A (en) A kind of high quality imaging method of the spectrum imaging system based on convolutional neural networks
CN109712150A (en) Optical microwave image co-registration method for reconstructing and device based on rarefaction representation
CN111340698A (en) Multispectral image spectral resolution enhancement method based on neural network
CN112132229B (en) Hyperspectral imaging classification method adopting coding intelligent learning framework
He et al. Multi-spectral remote sensing land-cover classification based on deep learning methods
Feng et al. Fully convolutional network-based infrared and visible image fusion
CN114898217A (en) Hyperspectral classification method based on neural network architecture search
CN113008370A (en) Three-dimensional self-adaptive compression reconstruction method based on liquid crystal hyperspectral calculation imaging system
Bauer et al. Spatial functa: Scaling functa to imagenet classification and generation
Paoletti et al. AAtt-CNN: Automatical attention-based convolutional neural networks for hyperspectral image classification
WO2006096162A2 (en) Method for content driven image compression
Zhao et al. Hyperspectral unmixing via deep autoencoder networks for a generalized linear-mixture/nonlinear-fluctuation model
Laparra et al. Information theory measures via multidimensional gaussianization
Gao et al. A total variation global optimization framework and its application on infrared and visible image fusion
Manviya et al. Image fusion survey: a comprehensive and detailed analysis of image fusion techniques
Bartler et al. Grad-lam: Visualization of deep neural networks for unsupervised learning
Sigurdsson et al. Blind Nonlinear Hyperspectral Unmixing Using an ℓq Regularizer
Saxena et al. Spotnet-learned iterations for cell detection in image-based immunoassays
Riese Development and Applications of Machine Learning Methods for Hyperspectral Data
Bin et al. Image fusion method based on short support symmetric non-separable wavelet

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant