CN117314757B - Space spectrum frequency multi-domain fused hyperspectral computed imaging method, system and medium - Google Patents

Space spectrum frequency multi-domain fused hyperspectral computed imaging method, system and medium Download PDF

Info

Publication number
CN117314757B
CN117314757B CN202311622750.8A CN202311622750A CN117314757B CN 117314757 B CN117314757 B CN 117314757B CN 202311622750 A CN202311622750 A CN 202311622750A CN 117314757 B CN117314757 B CN 117314757B
Authority
CN
China
Prior art keywords
frequency
domain
hyperspectral
spatial
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311622750.8A
Other languages
Chinese (zh)
Other versions
CN117314757A (en
Inventor
康旭东
段普宏
李树涛
鲁续坤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan University
Original Assignee
Hunan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan University filed Critical Hunan University
Priority to CN202311622750.8A priority Critical patent/CN117314757B/en
Publication of CN117314757A publication Critical patent/CN117314757A/en
Application granted granted Critical
Publication of CN117314757B publication Critical patent/CN117314757B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06T3/4076Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution using the original low-resolution images to iteratively correct the high-resolution images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4046Scaling of whole images or parts thereof, e.g. expanding or contracting using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4084Scaling of whole images or parts thereof, e.g. expanding or contracting in the transform domain, e.g. fast Fourier transform [FFT] domain scaling
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/10Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a hyperspectral computed imaging method, a system and a medium for fusion of space spectrum frequency and multiple domains, wherein the method comprises the steps of utilizing two-dimensional offline discrete cosine transform DCT to convert RGB imagesConversion to frequency domain to obtain frequency domain feature mapThe method comprises the steps of carrying out a first treatment on the surface of the From frequency domain feature mapsFrequency information graph is extracted from the mediumThe method comprises the steps of carrying out a first treatment on the surface of the Mapping frequency information using two-dimensional off-line Inverse Discrete Cosine Transform (IDCT)Transforming to space domain to obtain frequency information diagram of space domainThe method comprises the steps of carrying out a first treatment on the surface of the Mapping frequency information of spatial domainFusion to RGB imageIs used for generating hyperspectral image in space domain features. The invention aims to solve the problems of poor detail information and low reconstruction accuracy of the hyperspectral image in the existing hyperspectral computed imaging, and realize high-fidelity reconstruction of the target spectrum.

Description

Space spectrum frequency multi-domain fused hyperspectral computed imaging method, system and medium
Technical Field
The invention relates to the technical field of hyperspectral imaging, in particular to a hyperspectral computed radiography method, a hyperspectral frequency multi-domain fusion hyperspectral computed radiography system and a hyperspectral computed radiography medium.
Background
Hyperspectral images usually contain tens to hundreds of continuous spectrum bands, the number of the spectrum bands is high, and abundant spectrum information of objects is recorded. According to the fact that different objects have different reflectivity and absorptivity to electromagnetic waves at different wavelengths, accurate identification of the objects can be achieved. At present, a plurality of spectrum imagers exist on the market, but the imaging speed is low, and the real-time requirement cannot be met. In addition, the hyperspectral imaging equipment is large in size, cannot be carried about and is expensive in manufacturing cost, and application and development of hyperspectral images are limited. How to obtain high spatial resolution hyperspectral images at a low cost has become a research hotspot in the current hyperspectral imaging field.
The hyperspectral computational imaging technology aims at designing an algorithm to reconstruct a three-band RGB image into a hyperspectral image with high spatial resolution, and has the advantages of low cost, high imaging speed, high spatial resolution and the like. Compared with a hyperspectral image, the RGB image loses much spectral information, so that the reconstruction of the hyperspectral image from the RGB image by recovering the spectral information is an inverse problem and is seriously ill-posed. Existing research methods can be broadly divided into three categories: based on a hardware system method, based on a priori method and based on a deep learning method. The hardware system-based spectral imaging method mainly designs a specific system to increase the acquisition of more spectral dimension information from an RGB camera, and mainly comprises the steps of modifying the RGB camera system, increasing the number of cameras or controlling the imaging environment. The mapping relation between RGB and hyperspectral images is modeled by means of mathematical relation based on the prior method, and specific prior reconstructed spectrum information is learned based on inherent attribute and statistical information of hyperspectrum. The spectrum imaging precision of the method is greatly affected by manual priori, in recent years, because the convolutional neural network has outstanding performance in the field of computer vision, the spectrum imaging technology based on a deep learning method gradually becomes a research hot spot, and the method generally utilizes a large amount of available RGB and hyperspectral image pairs to characterize the hidden mapping relation of the two, so as to realize the accurate reconstruction of the hyperspectral image with high spatial resolution, and is also called as a data-driven method. In addition, some students research a spectrum superdivision technology combining a physical model and deep learning, the superdivision problem is converted into a target optimization problem, and the problem is solved by applying optimization theory knowledge, so that the model has physical interpretability.
In summary, existing hyperspectral computed imaging methods focus only on information in the spatial or spectral domain, and do not utilize information in the frequency domain. Meanwhile, the hyperspectral image has a complex spatial spectrum structure, and single-dimension information is difficult to accurately represent. Therefore, how to use information in the frequency domain to implement hyperspectral computed radiography has become a key technical problem to be solved urgently.
Disclosure of Invention
The invention aims to solve the technical problems: aiming at the problems in the prior art, the invention provides a hyperspectral computed radiography method, a hyperspectral computed radiography system and a hyperspectral computed radiography medium, which aim to solve the problems of poor detail information and low reconstruction precision of hyperspectral images in the existing hyperspectral computed radiography and realize high-fidelity reconstruction of a target spectrum.
In order to solve the technical problems, the invention adopts the following technical scheme:
a hyperspectral computed imaging method of space-spectrum frequency multi-domain fusion comprises the following steps:
s1, RGB imageConversion to the frequency domain to obtain a frequency domain profile +.>
S2, from the frequency domain feature mapExtracting frequency information map->
S3, mapping the frequency informationConversion to the spatial domain to obtain a frequency information map of the spatial domain>
S4, mapping the frequency information of the spatial domainFusion to RGB image->Generates hyperspectral image in the spatial domain feature of +.>
Alternatively, the RGB image is processed in step S1Conversion to the frequency domain to obtain a frequency domain profile +.>Comprising the following steps:
s1.1, RGB image to be inputConverting to a YCbCr space diagram and dividing into image blocks according to color channels;
s1.2, each image block is projected to a frequency domain by utilizing two-dimensional off-line discrete cosine transform DCT to obtain a frequency coefficient;
s1.3, arranging the frequency coefficients of each image block together and stacking the frequency coefficients together according to the Y, cb and Cr three channels in sequence, and vectorizing the frequency coefficients to obtain a frequency band; and then the frequency bands are recombined according to the Z shape to obtain a frequency domain characteristic diagramAnd the frequency domain feature mapOne for each channel of the network.
Optionally, in step S2, the frequency domain feature map is extractedExtracting frequency information map->Comprising the following steps:
s2.1, frequency domain feature mapAccording to three equal parts of Y, cb and Cr channels in a YCbCr space in the channel dimension, firstly, respectively carrying out space downsampling on each equal part of characteristics and dividing the characteristics into two parts of low-frequency information and high-frequency information, and then respectively carrying out channel recombination on the low-frequency information and the high-frequency information in each equal part to obtain low-frequency characteristics->And high frequency characteristics->And is of a size +.>,/>Wherein->Is a frequency domain feature map->Channel number of->Representing the size of the space downsampled;
s2.2, for low frequency characteristics respectivelyAnd high frequency characteristics->Performing mapping learning by adopting m dense residual blocks RDB, and adopting a GeLu function as an activation function in the dense residual blocks RDB;
s2.3, carrying out mapping learning on the low-frequency features and the high-frequency features subjected to mapping learning by adopting n dense residual blocks RDB after stacking the low-frequency features and the high-frequency features in a channel dimension to obtain a frequency reconfiguration diagram, and then upsampling the frequency reconfiguration diagram to obtain a frequency information diagram
Optionally, the frequency information is mapped in step S3Transforming to space domain to obtain frequency information diagram of space domainComprising the following steps:
s3.1, dividing the frequency information graph into image blocks;
s3.2, each image block is projected to a space domain by utilizing two-dimensional off-line Inverse Discrete Cosine Transform (IDCT);
s3.3, combining the image blocks projected to the space domain to obtain a frequency information graph of the space domain
Alternatively, the RGB image in step S4The spatial domain features of (1) include +.>Extracting spatial shallow features by convolution with convolution kernel size of 1×1>And shallow features of the empty spectrum +.>The convolution with convolution kernel size of 3 x 3 is used to send the convolution up-dimension into the symmetric convolution neural network to further extract RGB image +.>Deep features of the spatial spectrum->The method comprises the steps of carrying out a first treatment on the surface of the And frequency information map of spatial domain +.>Fusion to RGB image->Generates hyperspectral image in the spatial domain feature of +.>The functional expression of (2) is:
in the above-mentioned method, the step of,a convolution operation representing a convolution kernel size of 3 x 3,/v>A convolution operation representing a convolution kernel size of 1 x 1,/->Is a frequency information diagram of a spatial domain.
Optionally, the symmetric convolutional neural network comprises an upper feature extraction branch, a spatial attention module SA, a lower feature extraction branch, a channel stacking module CAT and a convolutional module, wherein the upper feature extraction branch comprises N local feature extraction modules LFEM which are sequentially connected, a feature map input into the symmetric convolutional neural network enters from a first local feature extraction module LFEM of the upper feature extraction branch, the local feature extraction modules LFEM in the upper feature extraction branch all comprise three paths of outputs, the first path of outputs are output to the channel stacking module CAT, the second path of outputs are output to a corresponding spatial attention module SA, a third path of the first N-1 local feature extraction modules LFEM is output to a next local feature extraction module LFEM in the upper feature extraction branch, the feature extraction lower branch comprises N local feature extraction modules LFEM, the output of a first spatial attention module SA and the output of an Nth local feature extraction module LFEM in the feature extraction upper branch are fused and then used as the input of the first local feature extraction module LFEM in the feature extraction lower branch, the output of the rest spatial attention modules SA and the output of the last local feature extraction module LFEM in the feature extraction lower branch are fused and then used as the input of the corresponding local feature extraction module LFEM in the feature extraction lower branch, each local feature extraction module LFEM in the feature extraction lower branch comprises one output connected to a channel stacking module CAT, and the output of the channel stacking module CAT is refined through a convolution module with the convolution kernel size of 1 multiplied by 1 so as to obtain an RGB imageDeep features of the spatial spectrum->
Optionally, the local feature extraction module LFEM includes three multi-feature fusion dual-attention modules MFFDAB, two convolution kernel sizes of 1×1 group convolutions, one channel stacking module, and one convolution kernel size of 1×1 convolutions, where a functional expression of feature extraction of the input feature map by the local feature extraction module LFEM is:
in the above-mentioned method, the step of,~/>the output features of the dual-attention module MFFDAB are fused for three multiple features respectively,~/>respectively representing three multi-feature fusion dual-attention modules MFFDAB, ">And->Respectively representing the input-output characteristics of the local characteristic extraction module LFEM,/->Representing a channel stack module->Convolution representing a convolution kernel of size 1 x 1,/->And->Respectively representing two group convolutions with convolution kernel sizes of 1 x 1; the multi-feature fusion dual-attention module MFFDAB is used for extracting deep features of an image space domain, and the function expression of the deep features of the image space domain is as follows:
in the above-mentioned method, the step of,~/>three sets of intermediate features, respectively>Representing a convolution with a convolution kernel size of 1 x 1,convolution representing a convolution kernel size of 3 x 3, ">Convolution representing a convolution kernel size of 5 x 5, ">Convolution representing a convolution kernel size of 7 x 7, ">Representing channel attention module, < >>Representing a spatial attention module->For the output characteristics of the channel attention module, +.>For the output characteristics of the spatial attention module, +.>And->Respectively representing the input and output characteristics of the j-th multi-characteristic fusion dual-attention module MFFDAB.
Optionally, the steps S1 to S2 are implemented based on a hyperspectral computed imaging model, where the hyperspectral computed imaging model includes:
the frequency domain transformation module is used for executing the step S1;
the frequency domain learning module FDLM is used for executing the step S2;
the frequency domain inverse transformation module is used for executing the step S3;
the feature fusion module FFM is used for executing the step S4;
a spatial domain feature extraction module for extracting RGB imageIs a spatial domain feature of (1);
loss function adopted by hyperspectral calculation imaging model in trainingThe functional expression of (2) is:
in the above-mentioned method, the step of,is a spatial domain loss function, < >>For balance parameter->The space spectrum domain loss function is a space spectrum loss function between hyperspectral images generated by a hyperspectral calculation imaging model and true hyperspectral images serving as labels>A norm, wherein the frequency domain loss function is a frequency information graph of a spatial domain generated by a hyperspectral calculation imaging model and a frequency information graph of the spatial domain in a real hyperspectral image>Norms.
In addition, the invention also provides a hyperspectral computed radiography system with the spatial-spectral-frequency multi-domain fusion, which comprises a microprocessor and a memory which are connected with each other, wherein the microprocessor is programmed or configured to execute the hyperspectral computed radiography method with the spatial-spectral-frequency multi-domain fusion. The present invention also provides a computer readable storage medium having stored therein a computer program for programming or configuring by a microprocessor to perform the method of hyperspectral computed radiography of the spatial-spectral-frequency multi-domain fusion.
Compared with the prior art, the invention has the following advantages: the invention includes the steps of mapping RGB imagesImage forming apparatusConversion to the frequency domain to obtain a frequency domain profile +.>From the frequency domain feature map->Extracting frequency information map->Frequency information map->Conversion to the spatial domain to obtain a frequency information map of the spatial domain>Frequency information map of spatial domain +.>Fusion to RGB imageGenerates hyperspectral image in the spatial domain feature of +.>By introducing frequency information into the spectrum super-resolution, a spectrum super-resolution method of spatial spectrum frequency multi-domain feature fusion is constructed, fine details of a hyperspectral image are effectively reconstructed, and compared with the existing method, spectrum reconstruction accuracy and visual effect can be remarkably improved.
Drawings
FIG. 1 is a schematic diagram of a basic flow of a method according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a network structure according to a method of an embodiment of the present invention.
Fig. 3 is a schematic diagram of step S1 in the embodiment of the invention.
Fig. 4 is a schematic diagram of step S2 in the embodiment of the invention.
Fig. 5 is a schematic diagram of a network structure of a local feature extraction module LFEM according to an embodiment of the present invention.
Fig. 6 is a schematic diagram of a network structure of a multi-feature fusion dual-attention module MFFDAB according to an embodiment of the present invention.
Fig. 7 is a schematic diagram of a network structure of a spatial attention module according to an embodiment of the present invention.
Fig. 8 is a schematic diagram of a network structure of a channel attention module according to an embodiment of the invention.
FIG. 9 is a graph showing the results of a comparative experiment of the calculated imaging results of the method of the embodiment of the present invention and the prior art method.
Detailed Description
As shown in fig. 1 and 2, the hyperspectral computed imaging method for spatial-spectral-frequency multi-domain fusion of the present embodiment includes:
s1, RGB imageConversion to the frequency domain to obtain a frequency domain profile +.>
S2, from the frequency domain feature mapExtracting frequency information map->
S3, mapping the frequency informationConversion to the spatial domain to obtain a frequency information map of the spatial domain>
S4, mapping the frequency information of the spatial domainFusion to RGB image->Generates hyperspectral image in the spatial domain feature of +.>
RGB image is processed in step S1Conversion to the frequency domain to obtain a frequency domain profile +.>Can be expressed as:
in the above-mentioned method, the step of,for the frequency domain information extraction operation, < > is->For the final output characteristics of the branch, +.>RGB image->Is a width and a height of the same.
As shown in fig. 3, in step S1 of the present embodiment, RGB images are displayedConversion to the frequency domain to obtain a frequency domain profile +.>Comprising the following steps:
s1.1, RGB image to be inputConverting to a YCbCr space diagram and dividing into image blocks according to color channels; step S1.1 of dividing the image block into a plurality of rows and columns as required, which are generally the same, for example, as oneAn alternative implementation, in this embodiment, is divided into 4×4=16 image blocks;
s1.2, projecting each image block to a frequency domain by utilizing two-dimensional off-line discrete cosine transform DCT to obtain frequency coefficients, wherein each frequency coefficient corresponds to the intensity of a specific frequency band;
s1.3, arranging the frequency coefficients of each image block together and stacking the frequency coefficients together according to the Y, cb and Cr three channels in sequence, and vectorizing the frequency coefficients to obtain a frequency band; and then the frequency bands are recombined according to the Z shape to obtain a frequency domain characteristic diagramAnd the frequency domain feature mapOne for each channel of the network. The frequency bands are recombined according to the Z shape, namely, the first row is recombined and arranged in sequence from beginning to end, then the frequency bands return to the head of the second row, the second row is recombined and arranged in sequence from beginning to end, and the frequency bands return to the head of the last row finally, and the last row is recombined and arranged in sequence from beginning to end.
Due to the frequency domain information obtained in the last stepAll the features of the image cannot be well represented, and the frequency domain information needs to be further learned by means of the nonlinear characterization capability of deep learning, so that the method can be better used as the supplement of the spatial domain information. As shown in FIG. 4, in step S2 of the present embodiment, there is a frequency domain feature map +.>Extracting frequency information map->Comprising the following steps:
s2.1, frequency domain feature mapAccording to the Y, cb and Cr three channels of YCbCr space, in the channel dimension three equal parts are respectively aligned at firstEach equal part of characteristics is subjected to space downsampling and divided into two parts of low-frequency information and high-frequency information, and then the low-frequency information and the high-frequency information in each equal part are respectively subjected to channel recombination to obtain low-frequency characteristics ∈D->And high frequency characteristics->And is of a size +.>,/>Wherein->Is a frequency domain feature map->Channel number of->Representing the size of the space downsampled; in this embodiment, C=48, k is W/8,W is RGB image +.>Is a width of (c).
S2.2, for low frequency characteristics respectivelyAnd high frequency characteristics->Performing mapping learning by adopting m dense residual blocks RDB, and adopting a GeLu function as an activation function in the dense residual blocks RDB; based on the principle of not increasing the complexity of the model, selecting a plurality of dense residual blocks RDB to respectively perform mapping learning on high-frequency information and low-frequency information, and guaranteeing the richness between captured information under the condition of not increasing too many parameters; because the frequency domain signal contains the possibility of negative numbers, the GeLu function acts as a secretAn activation function in the set residual block;
s2.3, carrying out mapping learning on the low-frequency features and the high-frequency features subjected to mapping learning by adopting n dense residual blocks RDB after stacking the low-frequency features and the high-frequency features in a channel dimension to obtain a frequency reconfiguration diagram, and then upsampling the frequency reconfiguration diagram to obtain a frequency information diagram. Based on the principle of not increasing the complexity of the model, a plurality of dense residual blocks RDB are selected to respectively carry out mapping learning on high-frequency information and low-frequency information, so that the richness between the captured information is ensured under the condition of not increasing too many parameters.
Since the learned frequency domain information is mainly used for feature replenishment, the information needs to be converted into a space domain again, and the information is converted into a space domain by adopting 2D IDCT conversion. The same as the off-line DCT transformation, the feature map is cut and inversely transformed, and finally the final output feature map with the same size as the input RGB is obtained based on shape rearrangement. In this embodiment, the frequency information is mapped in step S3 +.>Conversion to the spatial domain to obtain a frequency information map of the spatial domain>Comprising the following steps:
s3.1, dividing the frequency information graph into image blocks;
s3.2, each image block is projected to a space domain by utilizing two-dimensional off-line Inverse Discrete Cosine Transform (IDCT);
s3.3, combining the image blocks projected to the space domain to obtain a frequency information graph of the space domain
The RGB imageThe spatial domain features are realized by spatial domain information extraction branches, and a known spatial domain feature extraction method can be adopted according to the needs. As an alternative embodiment, as shown in FIG. 2, RGB image ++in step S4 of the present example>The spatial domain features of (1) include +.>Extracting spatial shallow features by convolution with convolution kernel size of 1×1>(this feature is the same as the number of hyperspectral image channels), can be expressed as:
in the above-mentioned method, the step of,a convolution operation with a convolution kernel size of 1 x 1 is represented.
Then, the spatial spectrum shallow layer featuresThe convolution with convolution kernel size of 3 x 3 is used to send the convolution up-dimension into the symmetric convolution neural network to further extract RGB image +.>Deep features of the spatial spectrum->This can be expressed as:
in the above-mentioned method, the step of,for characteristic dimension-lifting operations, +.>Is a symmetric convolutional neural network.
Finally, the frequency information diagram of the space domain is formedFusion to RGB image->Generates hyperspectral image in the spatial domain feature of +.>The functional expression of (2) is:
in the above-mentioned method, the step of,a convolution operation representing a convolution kernel size of 3 x 3,/v>A convolution operation representing a convolution kernel size of 1 x 1,/->Is a frequency information diagram of a spatial domain.
As shown in fig. 2, the symmetric convolutional neural network (symmetric CNN) according to this embodiment includes an upper feature extraction branch, a spatial attention module SA, a lower feature extraction branch, a channel stacking module CAT and a convolutional module, where the upper feature extraction branch includes N local feature extraction modules LFEM connected in sequence, a feature map input to the symmetric convolutional neural network enters from a first local feature extraction module LFEM of the upper feature extraction branch, the local feature extraction modules LFEM in the upper feature extraction branch each include three outputs, the first output is output to the channel stacking module CAT, the second output is output to a corresponding spatial attention module SA, and the third output of the first N-1 local feature extraction modules LFEM is output to the upper feature extraction branchThe next local feature extraction module LFEM, the feature extraction lower branch includes N local feature extraction modules LFEM, the output of the first spatial attention module SA is fused with the output of the nth local feature extraction module LFEM in the feature extraction upper branch to serve as the input of the first local feature extraction module LFEM in the feature extraction lower branch, the output of the other spatial attention modules SA is fused with the output of the last local feature extraction module LFEM in the feature extraction lower branch to serve as the input of the corresponding local feature extraction module LFEM in the feature extraction lower branch, and each local feature extraction module LFEM in the feature extraction lower branch includes an output connected to a channel stacking module CAT, the output of the channel stacking module CAT is refined by a convolution module with a convolution kernel size of 1×1 to obtain an RGB imageDeep features of the spatial spectrum->. The output features of each local feature extraction module LFEM keep the beneficial information in the current stage, so the method in the embodiment splices the output features of all local feature extraction modules LFEM of the upper and lower branches in the channel dimension and uses one ++>Convolution refines these features to produce higher-level feature representations. Finally, by these operations, the final output feature of the spatial domain information extraction branch is obtained +.>. By combining the output characteristics of the two branches, the network is better able to capture texture and structural information of the input image. At the same time use->The convolution refines the characteristics, and the operation is also helpful to reduce the dimension of the characteristics and accelerate the calculation speed, so that the purposes of light weight and high performance of the model are achieved. ObtainingRGB image->Deep features of the spatial spectrum->The process of (2) is expressed as follows:
wherein,representation->Convolution (S)/(S)>Representing a channel stacking module.
In this embodiment, a spatial spectrum feature extraction structure of a symmetrical CNN is designed, where the symmetrical CNN is composed of some paired local feature extraction modules LFEM and a spatial attention module SA. Symmetrical CNN is a neural network with a dual-branch structure, where shallow features are first of all usedAnd sending the sample to an upper branch for feature extraction. In the upper branch, the output of the previous LFEM will be the input of the next LFEM, and this output will also become part of the corresponding LFEM input in the lower branch by the spatial attention module SA. More importantly, for the corresponding LFEM between the two branches, the network also adopts a parameter sharing mechanism, so that the network can learn the image characteristics more efficiently, and the parameter quantity is greatly reduced.
For the branches on feature extraction, the specific flow may be expressed as:
wherein,representing the output characteristics of the ith LFEM in the upper branch, a total of 3 +.>Representing the ith LFEM.
For the feature extraction lower branch, the specific flow may be expressed as:
wherein,representing the output characteristics of the ith LFEM in the lower branch. As can be seen from the above equation, for the first LFEM in the lower branch, the last LFEM output in the upper branch is chosen as the 0 th LFEM output feature of the lower branch, since there is no previous output feature. />Representing the ith LFEM whose parameters remain consistent with the parameters in the upper branch. />Representing the ith spatial attention module.
Based on the excellent performance of the dense residual blocks, the embodiment provides a local feature extraction module LFEM as a core module of the symmetrical CNN in the spatial domain extraction branch. As shown in fig. 5, the local feature extraction module LFEM of the present embodiment includes three multi-feature fusion dual-attention modules MFFDAB, two convolution kernel sizes of 1×1 group convolutions, one channel stacking module and one convolution kernel size of 1×1 convolution, where the multi-feature fusion dual-attention modules MFFDAB replace the "convolution layer-activation function" combination in the conventional dense residual block, and are more used for feature dimension reductionGroup convolution. The local feature extraction module LFEM is specific to inputThe functional expression of feature extraction of the feature map is as follows:
in the above-mentioned method, the step of,~/>the output features of the dual-attention module MFFDAB are fused for three multiple features respectively,~/>respectively representing three multi-feature fusion dual-attention modules MFFDAB, ">And->Respectively representing the input-output characteristics of the local characteristic extraction module LFEM,/->Representing a channel stack module->Convolution representing a convolution kernel of size 1 x 1,/->And->Two group convolutions with convolution kernel sizes of 1 x 1 are shown, respectively.
The multi-feature fusion dual-attention module MFFDAB in the embodiment is used for extracting deep features of an image space domain and improving the expression capacity of a model. As shown in fig. 6, the multi-feature fusion dual-attention module MFFDAB of the present embodiment is used for extracting deep features of an image space domain, where a functional expression for extracting the deep features of the image space domain is as follows:
in the above-mentioned method, the step of,~/>three sets of intermediate features, respectively>Representing a convolution with a convolution kernel size of 1 x 1,convolution representing a convolution kernel size of 3 x 3, ">Convolution representing a convolution kernel size of 5 x 5, ">Convolution representing a convolution kernel size of 7 x 7, ">Representing channel attention module, < >>Representing a spatial attention module->For the output characteristics of the channel attention module, +.>For the output characteristics of the spatial attention module, +.>And->Respectively representing the input and output characteristics of the j-th multi-characteristic fusion dual-attention module MFFDAB. As shown in FIG. 6, the input is first passed through convolution layers of different convolution kernel sizes, with their convolution sizes being +.>、/>、/>The method aims at obtaining the characteristics of different receptive fields, capturing more characteristic information and improving the accuracy and generalization capability of the model. And then will not be twoStacking and fusing the characteristic channels of the same receptive field, and then carrying out overall fusion to obtain the characteristics after multi-characteristic fusion. Then, the weight of each channel is obtained by utilizing a channel attention module CA, and the characteristic channels are weighted; the spatial context is modeled by a spatial attention module SA, re-weighting each pixel. And finally, superposing and fusing the outputs of the two attentions to obtain the final characteristic representation. As shown in fig. 7, the processing of the input features by the spatial attention module in this embodiment includes respectively performing an average pooling process and a maximum pooling process on the input features, stacking the channels of the processing results, activating the convolution operation with a convolution kernel size of 7×7 and a Sigmoid activation function, and then connecting the convolution operation with the input features to obtain the output features. As shown in fig. 8, the processing of the input features by the channel attention module in this embodiment includes performing adaptive pooling, convolution with a convolution kernel size of 1×1, activation of a Relu activation function, convolution with a convolution kernel size of 1×1, activation of a Sigmoid activation function, and then connecting with the input features to obtain output features. In this embodiment, a feature fusion module FFM is designed to fuse the spatial domain features, the frequency domain information and the initial features together, and reconstruct a high-fidelity hyperspectral image. Because the space spectrum domain features and the frequency domain information are inconsistent with the initial feature dimensions, the space spectrum features and the frequency domain information cannot be simply overlapped and fused, a Feature Fusion Module (FFM) needs to be designed to fuse the three features and output a final super-resolution hyperspectral image. Since the spatial domain features and the frequency domain information are already fully extracted.
As shown in fig. 2, steps S1 to S2 in this embodiment are implemented based on a hyperspectral computed tomography model, where the hyperspectral computed tomography model includes:
the frequency domain transformation module is used for executing the step S1;
the frequency domain learning module FDLM is used for executing the step S2;
the frequency domain inverse transformation module is used for executing the step S3;
the feature fusion module FFM is used for executing the step S4;
a spatial domain feature extraction module for extracting RGB imageIs a spatial domain feature of (1);
loss function adopted by hyperspectral calculation imaging model in trainingThe functional expression of (2) is:
in the above-mentioned method, the step of,is a spatial domain loss function, < >>For balance parameter->The space spectrum domain loss function is a space spectrum loss function between hyperspectral images generated by a hyperspectral calculation imaging model and true hyperspectral images serving as labels>A norm, wherein the frequency domain loss function is a frequency information graph of a spatial domain generated by a hyperspectral calculation imaging model and a frequency information graph of the spatial domain in a real hyperspectral image>Norms. In terms of the loss function, the difference between hyperspectral computed imaging and a true hyperspectral image in the frequency domain is also used as one of the constraints of the network, and the spatial domain and frequency domain joint loss function is designed. Definitions->For a true hyperspectral image, +.>Representing a reconstructed hyperspectral image, +.>And->Respectively, real hyperspectral frequency domain information and reconstructed hyperspectral frequency domain information, then +.>And->Can be expressed as:
in the above-mentioned method, the step of,representation->Norms.
In order to verify the hyperspectral computed imaging method of the spatial spectrum frequency multi-domain fusion, in the embodiment, a CAVE data set and Harvard data are utilized to carry out a verification experiment, wherein the CAVE data set contains 32 indoor images in total, the wave band number of each image is 31, the spectral resolution is 400-700nm, and the spatial resolution is 512 x 512; the Harvard dataset contains 50 images of indoor and outdoor scenes taken in natural daylight, with a wavelength range of 420-720nm and a spatial resolution of 1040 x 1392. In the generation of experimental data, according to the mapping relation between RGB and HSI, spectrum downsampling is carried out on a hyperspectral image to obtain a corresponding RGB data set, and the RGB-HSI data set required by the experiment is formed. And selecting a spectral response function of Nikon-D700-cameras as a spectral downsampling matrix in the experiment. In the CAVE dataset, 20 pairs of data (about 60%) were randomly selected as the training set, the remaining 12 pairs of data, on the training data versus test data divisionAs a test set. In the Harvard dataset, 35 pairs of data (70%) were also randomly selected as the training set, the remaining 15 pairs being used for testing. In order to be able to evaluate the performance of the proposed method accurately, the advantages and disadvantages of the different methods are compared more intuitively. Four widely used objective evaluation indexes, namely a spectrum angular distance SAM, a root mean square error RMSE, a global image quality index UIQI and a structural similarity SSIM, are selected. The embodiment is realized by adopting a deep learning framework, initializing network parameters by adopting a kaiming initialization method, and simultaneously optimizing the network parameters based on an Adam optimizer. The optimizer parameters are defined as,/>,/>. The initial learning rate was set to 0.0001 and a cosine annealing strategy was chosen to attenuate the learning rate. The training set image block size is 64 x 64, and to enlarge the sample set, 32 is set to the overlap size at the time of clipping. In the whole network training process, 100 iterations are set, and the batch size is 32. In the experiment, the method is compared with six existing spectrum calculation imaging methods, wherein the six existing spectrum calculation imaging methods comprise:
HSCNN-R(Shi Z, Chen C, Xiong Z, et al. HSCNN+: Advanced CNN-based hyperspectral recovery from RGB images. In: Proc of IEEE Conference on Computer Vision and Pattern Recognition Workshops. 2018, 939–947);
DFMN(Zhang L, Lang Z, Wang P, et al. Pixel-aware deep function-mixture network for spectral superresolution. In: Proc of AAAI Conference on Artificial Intelligence, volume 34. 2020, 12821–12828);
HRNet(Zhao Y, Po L M, Yan Q, et al. Hierarchical regression network for spectral reconstruction from RGB images. In: Proc of IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. 2020, 422–423);
AWAN(Li J, Wu C, Song R, et al. Adaptive weighted attention network with camera spectral sensitivity prior for spectral reconstruction from RGB images. In: Proc of IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. 2020, 462–463);
HSRNet(He J, Li J, Yuan Q, et al. Spectral response function-guided deep optimization-driven network for spectral super-resolution. IEEE Transactions on Neural Networks and Learning Systems, 2021, 33(9):4213–4227);
Prinet+(Hang R, Liu Q, Li Z. Spectral super-resolution network guided by intrinsic properties of hyperspectral imagery. IEEE Transactions on Image Processing, 2021, 30:7256–7265)。
table 1 shows the individual objective evaluation criteria for the different methods in the CAVE dataset and the Harvard dataset.
Table 1: the method of this embodiment and six existing methods provide objective performance metrics on CAVE and Harvard datasets.
/>
It can be observed from table 1 that the method of the present embodiment obtains the optimal evaluation index in both data sets, which indicates that the proposed method can more effectively improve the spatial and spectral quality of the hyperspectral image. In addition to objective evaluation indexes, in order to more intuitively feel the effects of different spectral super-resolution imaging methods, visual evaluation is performed in this embodiment. In visual evaluation, a band image of the hyper-spectral image is displayed for evaluating the spatial quality, and a certain area thereof is enlarged. The spectral quality of the image is evaluated by analyzing a spectral error map, which is obtained by calculating the spectral error between the original hyperspectral image and the reconstructed image, which is related to an evaluation index SAM.
FIG. 9 is a graph of visual evaluation of different methods on a CAVE dataset, where (a) is a reconstructed band graph of the HSCNN-R method, (b) is a reconstructed band graph of the DFMN method, (c) is a reconstructed band graph of the HRNet method, (d) is a reconstructed band graph of the AWAN method, (e) is a spectral error graph of the HSCNN-R method, (f) is a spectral error graph of the DFMN method, (g) is a spectral error graph of the HRNet method, (h) is a spectral error graph of the AWAN method, (i) is a reconstructed band graph of the HSRNet method, (j) is a reconstructed band graph of the Prnet+ method, (k) is a reconstructed band graph of the present embodiment, (l) is a reference reconstructed band graph, (m) is a spectral error graph of the HSRNet method, (n) is a spectral error graph of the Prnet+ method, and (o) is a spectral error graph of the present embodiment. Referring to fig. 9 for comparison, the hyperspectral computed radiography method with spatial frequency multi-domain fusion in this embodiment obtains a minimum spectrum error map, which indicates that the hyperspectral computed radiography method with spatial frequency multi-domain fusion in this embodiment can well reconstruct spatial detail information.
In summary, the hyperspectral computed imaging method for spatial spectrum frequency multi-domain fusion in this embodiment includes a frequency domain information learning module, a spatial spectrum domain information extraction module and a feature fusion module. In the frequency domain information learning module, an offline discrete cosine transform DCT is utilized to convert an RGB image into a frequency domain, a frequency domain feature is extracted through a frequency domain learning module FDLM, and finally, the frequency domain information is transformed into a spatial domain through an inverse discrete cosine transform IDCT. In the spatial domain information extraction branch, a symmetric convolutional neural network CNN structure is formed based on a proposed local feature extraction module LFEM and a spatial attention module to extract spatial domain features of an image. The feature fusion module FFM fuses the features extracted by the two branches with the initial features to generate a high-resolution hyperspectral image, and the problems of poor detail information and low reconstruction accuracy of the hyperspectral image in the existing spectrum imaging technology can be solved. Different from the traditional network model, the hyperspectral computation imaging method of the embodiment of the space-spectrum frequency multi-domain fusion introduces frequency domain information into hyperspectral imaging, constructs a space-spectrum frequency multi-domain feature fusion framework, designs a symmetrical convolution neural network space-spectrum feature extraction structure, and realizes high-fidelity reconstruction of a target spectrum.
In addition, the embodiment also provides a hyperspectral computed radiography system with the spatial-spectral-frequency multi-domain fusion, which comprises a microprocessor and a memory which are connected with each other, wherein the microprocessor is programmed or configured to execute the hyperspectral computed radiography method with the spatial-spectral-frequency multi-domain fusion. The present embodiment also provides a computer readable storage medium having a computer program stored therein for programming or configuring by a microprocessor to perform the method of hyperspectral computed radiography of the spatial-spectral-frequency multi-domain fusion.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-readable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein. The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks. These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above examples, and all technical solutions belonging to the concept of the present invention belong to the protection scope of the present invention. It should be noted that modifications and adaptations to the present invention may occur to one skilled in the art without departing from the principles of the present invention and are intended to be within the scope of the present invention.

Claims (8)

1. The hyperspectral computed imaging method of the space-spectrum frequency multi-domain fusion is characterized by comprising the following steps of:
s1, RGB imageConversion to the frequency domain to obtain a frequency domain profile +.>
S2, from the frequency domain feature mapExtracting frequency information map->Comprising: s2.1, frequency domain feature map +.>According to three equal parts of Y, cb and Cr channels in a YCbCr space in the channel dimension, firstly, respectively performing space downsampling on each equal part of characteristics and dividing the characteristics into two parts of low-frequency information and high-frequency information, and then respectively performing channel recombination on the low-frequency information and the high-frequency information in each equal part to obtain a low-frequency specialSyndrome of->And high frequency characteristics->And is of a size +.>Wherein->Is a frequency domain feature map->Channel number of->Representing the size of the space downsampled; s2.2, low frequency characteristics +.>And high frequency characteristics->Performing mapping learning by adopting m dense residual blocks RDB, and adopting a GeLu function as an activation function in the dense residual blocks RDB; s2.3, carrying out mapping learning on the low-frequency features and the high-frequency features subjected to mapping learning by adopting n dense residual blocks RDB after channel dimension stacking to obtain a frequency reconfiguration diagram, and then upsampling the frequency reconfiguration diagram to obtain a frequency information diagram->
S3, mapping the frequency informationTransforming into spatial domain to obtain spatial domainFrequency information map->
S4, mapping the frequency information of the spatial domainFusion to RGB image->Generates hyperspectral image in the spatial domain feature of +.>Said RGB image->The acquisition of the spatial domain features comprises: for RGB image->Extracting spatial shallow features by convolution with convolution kernel size of 1×1>And shallow features of the empty spectrum +.>The convolution with convolution kernel size of 3 x 3 is used to send the convolution up-dimension into the symmetric convolution neural network to further extract RGB image +.>Deep features of the spatial spectrum->As RGB image->Is a spatial domain feature of (1); the frequency information map of the spatial domain +.>Fusion to RGB image->Generates hyperspectral image in the spatial domain feature of +.>The functional expression of (2) is:
in the above-mentioned method, the step of,a convolution operation representing a convolution kernel size of 3 x 3,/v>A convolution operation representing a convolution kernel size of 1 x 1,/->Is a frequency information diagram of a spatial domain.
2. The method of spatial-spectral-frequency multi-domain fusion hyperspectral computed Radiography (RGB) image processing as set forth in claim 1 wherein the RGB image is processed in step S1Conversion to the frequency domain to obtain a frequency domain profile +.>Comprising the following steps:
s1.1, RGB image to be inputConverting to a YCbCr space diagram and dividing into image blocks according to color channels;
s1.2, each image block is projected to a frequency domain by utilizing two-dimensional off-line discrete cosine transform DCT to obtain a frequency coefficient;
s1.3, arranging the frequency coefficients of each image block together and stacking the frequency coefficients together according to the Y, cb and Cr three channels in sequence, and vectorizing the frequency coefficients to obtain a frequency band; and then the frequency bands are recombined according to the Z shape to obtain a frequency domain characteristic diagramAnd the frequency domain feature map +.>One for each channel of the network.
3. The method for hyperspectral computed radiography (STC) as described in claim 1 wherein step S3 is performed by plotting the frequency information in a graphConversion to the spatial domain to obtain a frequency information map of the spatial domain>Comprising the following steps:
s3.1, dividing the frequency information graph into image blocks;
s3.2, each image block is projected to a space domain by utilizing two-dimensional off-line Inverse Discrete Cosine Transform (IDCT);
s3.3, combining the image blocks projected to the space domain to obtain a frequency information graph of the space domain
4. The hyperspectral computed tomography method of claim 1 wherein the symmetric convolutional neural network comprises an upper feature extraction branch, a spatial attention module SA, a lower feature extraction branch, a channel stacking module CAT and a convolution module, the upper feature extraction branch comprising N local feature extraction modules LFEM connected in sequence, the input symmetric convolution godThe first local feature extraction module LFEM of the upper feature extraction branch enters from the feature graph of the network, the local feature extraction modules LFEM in the upper feature extraction branch all comprise three paths of outputs, the first path of outputs is output to the channel stacking module CAT, the second path of outputs is output to a corresponding spatial attention module SA, the third path of the first N-1 local feature extraction modules LFEM is output to the next local feature extraction module LFEM in the upper feature extraction branch, the lower feature extraction branch comprises N local feature extraction modules LFEM, the output of the first spatial attention module SA is fused with the output of the Nth local feature extraction module LFEM in the feature extraction upper branch to be used as the input of the first local feature extraction module LFEM in the feature extraction lower branch, the output of the other spatial attention modules SA is fused with the output of the last local feature extraction module LFEM in the feature extraction lower branch to be used as the input of the corresponding local feature extraction module LFEM in the feature extraction lower branch, each local feature extraction module LFEM in the feature extraction lower branch comprises one output connected to a channel stacking module CAT, and the output of the channel stacking module CAT is refined by a convolution module with the convolution kernel size of 1 multiplied by 1 to obtain an RGB imageDeep features of the spatial spectrum->
5. The method of spatial-spectral-frequency multi-domain fused hyperspectral computed radiography (LFEM) as claimed in claim 4, wherein the local feature extraction module LFEM includes three multi-feature fused dual-attention modules MFFDAB, two convolution kernel sizes of 1×1 group convolutions, a channel stacking module, and a convolution kernel size of 1×1 convolution, and the function expression of feature extraction of the input feature map by the local feature extraction module LFEM is:
in the above-mentioned method, the step of,~/>output features of the two-attention module MFFDAB are fused for three multiple features, respectively, +.>Respectively representing three multi-feature fusion dual-attention modules MFFDAB, ">And->Respectively representing the input-output characteristics of the local characteristic extraction module LFEM,/->Representing a channel stack module->Representing a convolution with a convolution kernel size of 1 x 1,and->Respectively representing two group convolutions with convolution kernel sizes of 1 x 1; the multi-feature fusion dual-attention module MFFDAB is used for extracting deep features of an image space domain, and the function expression of the deep features of the image space domain is as follows:
in the above-mentioned method, the step of,~/>three sets of intermediate features, respectively>Representing a convolution with a convolution kernel size of 1 x 1,convolution representing a convolution kernel size of 3 x 3, ">Convolution representing a convolution kernel size of 5 x 5, ">Convolution representing a convolution kernel size of 7 x 7, ">Representing channel attention module, < >>Representing a spatial attention module->For the output characteristics of the channel attention module, +.>For the output characteristics of the spatial attention module, +.>And->Respectively representing the input and output characteristics of the j-th multi-characteristic fusion dual-attention module MFFDAB.
6. The method for hyperspectral computed radiography (hyperspectral) imaging with spatial-spectral-frequency multi-domain fusion as claimed in claim 5, wherein steps S1 to S4 are implemented based on a hyperspectral computed radiography (hyperspectral) imaging model, and the hyperspectral computed radiography (hyperspectral) imaging model comprises:
the frequency domain transformation module is used for executing the step S1;
the frequency domain learning module FDLM is used for executing the step S2;
the frequency domain inverse transformation module is used for executing the step S3;
the feature fusion module FFM is used for executing the step S4;
a spatial domain feature extraction module for extracting RGB imageIs a spatial domain feature of (1);
loss function adopted by hyperspectral calculation imaging model in trainingThe functional expression of (2) is:
in the above-mentioned method, the step of,is a spatial domain loss function, < >>For balance parameter->The space spectrum domain loss function is a space spectrum loss function between hyperspectral images generated by a hyperspectral calculation imaging model and true hyperspectral images serving as labels>A norm, wherein the frequency domain loss function is a frequency information graph of a spatial domain generated by a hyperspectral calculation imaging model and a frequency information graph of the spatial domain in a real hyperspectral image>Norms.
7. A spatial-spectral-frequency multi-domain fused hyperspectral computed radiography system comprising a microprocessor and a memory, connected to each other, wherein the microprocessor is programmed or configured to perform the spatial-spectral-frequency multi-domain fused hyperspectral computed radiography method of any one of claims 1 to 6.
8. A computer readable storage medium having a computer program stored therein, wherein the computer program is for programming or configuring by a microprocessor to perform the method of hyperspectral computed radiography (hyperspectral) as claimed in any one of claims 1 to 6.
CN202311622750.8A 2023-11-30 2023-11-30 Space spectrum frequency multi-domain fused hyperspectral computed imaging method, system and medium Active CN117314757B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311622750.8A CN117314757B (en) 2023-11-30 2023-11-30 Space spectrum frequency multi-domain fused hyperspectral computed imaging method, system and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311622750.8A CN117314757B (en) 2023-11-30 2023-11-30 Space spectrum frequency multi-domain fused hyperspectral computed imaging method, system and medium

Publications (2)

Publication Number Publication Date
CN117314757A CN117314757A (en) 2023-12-29
CN117314757B true CN117314757B (en) 2024-02-09

Family

ID=89285272

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311622750.8A Active CN117314757B (en) 2023-11-30 2023-11-30 Space spectrum frequency multi-domain fused hyperspectral computed imaging method, system and medium

Country Status (1)

Country Link
CN (1) CN117314757B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108876754A (en) * 2018-05-31 2018-11-23 深圳市唯特视科技有限公司 A kind of remote sensing images missing data method for reconstructing based on depth convolutional neural networks
CN109636769A (en) * 2018-12-18 2019-04-16 武汉大学 EO-1 hyperion and Multispectral Image Fusion Methods based on the intensive residual error network of two-way
CN114998109A (en) * 2022-08-03 2022-09-02 湖南大学 Hyperspectral imaging method, system and medium based on dual RGB image fusion
CN115861083A (en) * 2023-03-03 2023-03-28 吉林大学 Hyperspectral and multispectral remote sensing fusion method for multi-scale and global features
CN116091916A (en) * 2022-11-22 2023-05-09 南京信息工程大学 Multi-scale hyperspectral image algorithm and system for reconstructing corresponding RGB images
CN116452930A (en) * 2023-03-28 2023-07-18 中国人民解放军军事科学院系统工程研究院 Multispectral image fusion method and multispectral image fusion system based on frequency domain enhancement in degradation environment
CN116468645A (en) * 2023-06-20 2023-07-21 吉林大学 Antagonistic hyperspectral multispectral remote sensing fusion method
CN116612004A (en) * 2023-05-22 2023-08-18 桂林理工大学 Double-path fusion-based hyperspectral image reconstruction method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10891527B2 (en) * 2019-03-19 2021-01-12 Mitsubishi Electric Research Laboratories, Inc. Systems and methods for multi-spectral image fusion using unrolled projected gradient descent and convolutinoal neural network
EP3992848A1 (en) * 2020-10-30 2022-05-04 Tata Consultancy Services Limited Method and system for learning spectral features of hyperspectral data using dcnn

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108876754A (en) * 2018-05-31 2018-11-23 深圳市唯特视科技有限公司 A kind of remote sensing images missing data method for reconstructing based on depth convolutional neural networks
CN109636769A (en) * 2018-12-18 2019-04-16 武汉大学 EO-1 hyperion and Multispectral Image Fusion Methods based on the intensive residual error network of two-way
CN114998109A (en) * 2022-08-03 2022-09-02 湖南大学 Hyperspectral imaging method, system and medium based on dual RGB image fusion
CN116091916A (en) * 2022-11-22 2023-05-09 南京信息工程大学 Multi-scale hyperspectral image algorithm and system for reconstructing corresponding RGB images
CN115861083A (en) * 2023-03-03 2023-03-28 吉林大学 Hyperspectral and multispectral remote sensing fusion method for multi-scale and global features
CN116452930A (en) * 2023-03-28 2023-07-18 中国人民解放军军事科学院系统工程研究院 Multispectral image fusion method and multispectral image fusion system based on frequency domain enhancement in degradation environment
CN116612004A (en) * 2023-05-22 2023-08-18 桂林理工大学 Double-path fusion-based hyperspectral image reconstruction method
CN116468645A (en) * 2023-06-20 2023-07-21 吉林大学 Antagonistic hyperspectral multispectral remote sensing fusion method

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Hyperspectral Image Super-Resolution via Dual-domain Network Based on Hybrid Convolution;Tingting Liu et al;arxiv;1-13 *
Hyperspectral Remote Sensing Images Deep Feature Extraction Based on Mixed Feature and Convolutional Neural Networks;Jing Liu et al;Remote Sensing(第13期);1-17 *
Naive Gabor Networks for Hyperspectral Image Classification;Chenying Liu et al;IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS;第32卷(第1期);376-390 *
基于离散余弦变换的高光谱图像复原方法;周鑫等;激光杂志;第44卷(第2期);118-122 *
对抗网络实现单幅RGB重建高光谱图像;刘鹏飞;赵怀慈;李培玄;;红外与激光工程(第S1期);143-150 *
高光谱遥感图像本征信息分解前沿与挑战;李树涛等;测绘学报;第52卷(第7期);1059-1073 *

Also Published As

Publication number Publication date
CN117314757A (en) 2023-12-29

Similar Documents

Publication Publication Date Title
Nie et al. Deeply learned filter response functions for hyperspectral reconstruction
Yang et al. Canonical correlation analysis networks for two-view image recognition
CN112184554B (en) Remote sensing image fusion method based on residual mixed expansion convolution
Huang et al. Deep hyperspectral image fusion network with iterative spatio-spectral regularization
CN111325165B (en) Urban remote sensing image scene classification method considering spatial relationship information
CN109064396A (en) A kind of single image super resolution ratio reconstruction method based on depth ingredient learning network
CN113673590B (en) Rain removing method, system and medium based on multi-scale hourglass dense connection network
CN111080567A (en) Remote sensing image fusion method and system based on multi-scale dynamic convolution neural network
CN109035267B (en) Image target matting method based on deep learning
Zhang et al. Symmetric all convolutional neural-network-based unsupervised feature extraction for hyperspectral images classification
Ma et al. Deep unfolding network for spatiospectral image super-resolution
CN113870124B (en) Weak supervision-based double-network mutual excitation learning shadow removing method
CN113744136A (en) Image super-resolution reconstruction method and system based on channel constraint multi-feature fusion
CN115660955A (en) Super-resolution reconstruction model, method, equipment and storage medium for efficient multi-attention feature fusion
Cao et al. Hyperspectral imagery classification based on compressed convolutional neural network
Zhang et al. Fsanet: Frequency self-attention for semantic segmentation
Kumar et al. A Robust Approach for Image Super-Resolution using Modified Very Deep Convolution Networks
CN116343052B (en) Attention and multiscale-based dual-temporal remote sensing image change detection network
CN112686830A (en) Super-resolution method of single depth map based on image decomposition
CN117314757B (en) Space spectrum frequency multi-domain fused hyperspectral computed imaging method, system and medium
CN110197226B (en) Unsupervised image translation method and system
Li et al. Multi-view convolutional vision transformer for 3D object recognition
CN115330759B (en) Method and device for calculating distance loss based on Hausdorff distance
CN116563187A (en) Multispectral image fusion based on graph neural network
Wu et al. RepCPSI: Coordinate-Preserving Proximity Spectral Interaction Network With Reparameterization for Lightweight Spectral Super-Resolution

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant