CN111579506A - Multi-camera hyperspectral imaging method, system and medium based on deep learning - Google Patents

Multi-camera hyperspectral imaging method, system and medium based on deep learning

Info

Publication number
CN111579506A
CN111579506A CN202010311781.1A
Authority
CN
China
Prior art keywords
hyperspectral
image
camera
color
projection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010311781.1A
Other languages
Chinese (zh)
Other versions
CN111579506B (en)
Inventor
李树涛
郭安静
孙斌
方乐缘
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan University
Original Assignee
Hunan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan University filed Critical Hunan University
Priority to CN202010311781.1A priority Critical patent/CN111579506B/en
Publication of CN111579506A publication Critical patent/CN111579506A/en
Application granted granted Critical
Publication of CN111579506B publication Critical patent/CN111579506B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01N: INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00: Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/17: Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N 21/25: Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands
    • G01N 21/31: Investigating relative effect of material at wavelengths characteristic of specific elements or molecules, e.g. atomic absorption spectrometry
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00: Machine learning

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Image Analysis (AREA)
  • Spectrometry And Color Measurement (AREA)

Abstract

The invention discloses a multi-camera hyperspectral imaging method, system and medium based on deep learning. The method comprises the following steps: constructing a color back projection network module according to the spectral response characteristics of the color camera; constructing a grayscale back projection network module according to the spectral response characteristics of the grayscale camera; constructing an iterative back-projection hyperspectral reconstruction network from the color and grayscale back projection network modules; respectively acquiring a color image and a grayscale image captured by the color camera and the grayscale camera for the same imaging target; and inputting the color image and the grayscale image into the trained iterative back-projection hyperspectral reconstruction network to obtain the hyperspectral image of the imaging target. The invention can effectively guarantee imaging quality and greatly shorten hyperspectral imaging time, requires no additional auxiliary devices or hardware, greatly improves the convenience of hyperspectral imaging, and has wide application scenarios.

Description

Multi-camera hyperspectral imaging method, system and medium based on deep learning
Technical Field
The invention relates to hyperspectral imaging, and more particularly to a multi-camera hyperspectral imaging method, system and medium based on deep learning.
Background
A hyperspectral image usually contains dozens to hundreds of spectral bands and rich, discriminative spectral information, and is widely applied in fields such as remote sensing land-cover classification, medical auxiliary diagnosis, agricultural pest and disease identification, pesticide residue detection on agricultural products, and commodity authenticity identification. However, due to the limitations of imaging elements and sensor technology, existing hyperspectral imaging equipment often suffers from long acquisition times, large volume, and inconvenient portability. Conventional portable hyperspectral imaging schemes, such as multi-lens multispectral imaging systems and motorized rotating multispectral imaging attachments for mobile phones, usually require expensive imaging devices or auxiliary hardware and can hardly achieve hyperspectral imaging without such hardware, which greatly limits their application range. Therefore, fully exploiting the spectral response characteristics of existing portable imaging equipment (such as smartphones) to realize convenient hyperspectral imaging without auxiliary hardware can greatly promote the practical deployment of related hyperspectral applications and is of very important significance.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: in view of the above problems in the prior art, the invention provides a multi-camera hyperspectral imaging method, system and medium based on deep learning.
In order to solve the technical problems, the invention adopts the technical scheme that:
a multi-camera hyperspectral imaging method based on deep learning comprises the following implementation steps:
1) respectively acquiring a color image and a gray image acquired by a color camera and a gray camera aiming at the same imaging target;
2) inputting the color image and the gray image into a trained iterative back-projection hyperspectral reconstruction network to obtain a hyperspectral image of the imaging target, wherein the iterative back-projection hyperspectral reconstruction network establishes a mapping relation between two input data of the color image and the gray image and one output data of the hyperspectral image through training.
Optionally, step 2) is preceded by a step of constructing an iterative backprojection hyperspectral reconstruction network, and the detailed steps include: s1) constructing a color back projection network module according to the spectral response characteristics of the color camera, wherein the color back projection network module is used for projecting the reconstructed hyperspectral image to an original color space and calculating the error between the reconstructed hyperspectral image and an input color image so as to extract the spectral characteristics of the color image; constructing a gray scale back projection network module according to the spectral response characteristic of the gray scale camera, wherein the gray scale back projection network module is used for projecting the reconstructed hyperspectral image to an original gray scale space and calculating the error between the reconstructed hyperspectral image and an input gray scale image so as to extract the spectral feature of the gray scale image; s2) an iterative back-projection hyperspectral reconstruction network is constructed by utilizing the color back-projection network module and the gray back-projection network module, and the iterative back-projection hyperspectral reconstruction network is used for fusing the characteristics of different layers of the color back-projection network module and the gray back-projection network module to improve the reconstruction effect.
Optionally, the color back projection network module constructed in step S1) includes:
the first feature stacking layer is used for stacking the features of t stages of the input color image;
the first rectification convolution layer is used for compressing the stacked high-dimensional features to obtain rectified features;
the first recovery convolution layer is used for recovering the hyperspectral image from the rectification characteristic;
the RGB spectrum sampling layer is used for sampling the spectrum of the restored hyperspectral image to reconstruct a color image;
the first difference solving module is used for subtracting the input color image from the reconstructed color image to obtain a reconstruction error;
the first up-sampling convolutional layer is used for performing spectral up-sampling on the reconstruction error to obtain an error up-sampling characteristic;
the first fine tuning convolutional layer is used for fine tuning the error up-sampling characteristic;
and the first summing module is used for summing the fine-tuned error up-sampling characteristic and the rectification characteristic to obtain a final output characteristic.
Optionally, the grayscale backprojection network module constructed in step S1) includes:
the second feature stacking layer is used for stacking the features of t stages of the input grayscale image;
the second rectification convolution layer is used for compressing the stacked high-dimensional features to obtain rectified features;
the second recovery convolution layer is used for recovering the hyperspectral image from the rectification characteristic;
the grayscale spectrum sampling layer is used for sampling the spectrum of the restored hyperspectral image to reconstruct a grayscale image;
the second difference solving module is used for subtracting the input gray level image from the reconstructed gray level image to obtain a reconstruction error;
the second up-sampling convolutional layer is used for performing spectral up-sampling on the reconstruction error to obtain error up-sampling characteristics;
the second fine tuning convolution layer is used for fine tuning the error up-sampling characteristic;
and the second summation module is used for summing the fine-tuned error up-sampling characteristic and the rectification characteristic to obtain a final output characteristic.
Optionally, the iterative backprojection hyperspectral reconstruction network constructed by using the color backprojection network module and the grayscale backprojection network module in the step S2) includes:
the color image feature extraction and spectrum up-sampling module is used for performing feature extraction and spectrum up-sampling on the input color image;
the grayscale image feature extraction and spectrum up-sampling module is used for performing feature extraction and spectrum up-sampling on the input grayscale image;
a back projection aggregation unit comprising one or more repeated subunits, each subunit comprising a color back projection network module, a grayscale back projection network module and a feature aggregation module for enhancing and aggregating the color spectral features and grayscale spectral features output by the color and grayscale back projection network modules; the input of the color back projection network module is the output of the color image feature extraction and spectral up-sampling module or the output of the previous subunit; the input of the grayscale back projection network module is the output of the grayscale image feature extraction and spectral up-sampling module or the output of the previous subunit; the outputs of the color and grayscale back projection network modules serve as the inputs of the feature aggregation module of the corresponding subunit, and the output of the feature aggregation module forms the output of the subunit;
and an output convolution layer for generating the final hyperspectral image from the features output by the last subunit.
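The data flow described above (two feature extraction branches, a chain of repeated back projection subunits with feature aggregation, then an output convolution) can be sketched as follows. This is only an illustrative skeleton with placeholder callables, not the patent's actual layer implementation; in particular, feeding the aggregated feature to both branches of the next subunit is an assumption of this sketch.

```python
import numpy as np

def iterative_backprojection_net(rgb_feat, gray_feat, subunits, out_proj):
    """Chain of repeated subunits; each runs both back projection modules
    and aggregates their outputs before the next subunit."""
    h_c, h_g = rgb_feat, gray_feat
    fused = None
    for color_bp, gray_bp, aggregate in subunits:
        fused = aggregate(color_bp(h_c), gray_bp(h_g))
        h_c = h_g = fused  # assumption: both branches consume the fused feature
    return out_proj(fused)  # output convolution -> final hyperspectral image

# Placeholder callables: identity modules and an averaging aggregator.
identity = lambda x: x
average = lambda a, b: (a + b) / 2.0
subunits = [(identity, identity, average)] * 2
out = iterative_backprojection_net(np.ones((64, 4, 4)), np.zeros((64, 4, 4)),
                                   subunits, identity)
print(out.shape)  # (64, 4, 4)
```

In a real implementation each `color_bp` / `gray_bp` would be a back projection module as described above and `out_proj` a learned convolution producing the L-band cube.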
Optionally, step 2) is preceded by a step of training the iterative back-projection hyperspectral reconstruction network, and the detailed steps include: generating training data sets using the spectral response characteristics of the color camera and the grayscale camera respectively, and optimizing the network parameters of the constructed iterative back-projection hyperspectral reconstruction network on these training data sets to obtain the trained iterative back-projection hyperspectral reconstruction network.
Optionally, the training data set generated using the spectral response characteristics of the color camera and the grayscale camera is $\{X_i, Y_i, Z_i\}$, in which $\{Z_i\}$ is a set of existing hyperspectral images; $\{Y_i\}$ is the set of corresponding color images obtained by spectrally down-sampling $\{Z_i\}$ with the spectral response curve of the color camera; and $\{X_i\}$ is the set of grayscale images obtained by spectrally down-sampling $\{Z_i\}$ with the spectral response curve of the grayscale camera. The step of optimizing the network parameters by training the constructed iterative back-projection hyperspectral reconstruction network on the training data set comprises: feeding the training data set $\{X_i, Y_i, Z_i\}$ into the iterative back-projection hyperspectral reconstruction network and optimizing the network parameters by minimizing the absolute value error function shown in the following formula:
$$\hat{\theta} = \arg\min_{\theta} \frac{1}{N} \sum_{i=1}^{N} \left\| \mathrm{Net}(X_i, Y_i; \theta) - Z_i \right\|_1$$
In the above formula, N is the number of samples, Net represents the iterative back-projection hyperspectral reconstruction network, θ denotes the network parameters, $Y_i$ is the color image obtained by spectrally down-sampling the existing hyperspectral image $Z_i$ with the spectral response curve of the color camera, and $X_i$ is the grayscale image obtained by spectrally down-sampling $Z_i$ with the spectral response curve of the grayscale camera.
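The absolute value error function above is a mean L1 loss over the training samples. A minimal numpy sketch of the per-batch loss (the array shapes are illustrative, not from the patent):

```python
import numpy as np

def l1_loss(pred, target):
    """Mean absolute error between reconstructed and reference HSI cubes."""
    return float(np.mean(np.abs(pred - target)))

# Toy batch of 2 cubes with 31 bands; prediction offset by 0.5 everywhere.
z = np.zeros((2, 31, 4, 4))
pred = z + 0.5
print(l1_loss(pred, z))  # 0.5
```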
In addition, the invention also provides a multi-camera hyperspectral imaging system based on deep learning, which comprises a computer device, wherein a microprocessor of the computer device is programmed or configured to execute the steps of the multi-camera hyperspectral imaging method based on deep learning, or a memory of the computer device is stored with a computer program programmed or configured to execute the multi-camera hyperspectral imaging method based on deep learning.
In addition, the invention also provides an intelligent terminal, which at least comprises a color camera, a gray scale camera, a microprocessor and a memory, wherein the microprocessor of the intelligent terminal is programmed or configured to execute the steps of the multi-camera hyperspectral imaging method based on deep learning, or the memory of the intelligent terminal is stored with a computer program programmed or configured to execute the multi-camera hyperspectral imaging method based on deep learning.
Furthermore, the present invention also provides a computer readable storage medium having stored thereon a computer program programmed or configured to execute the aforementioned deep learning-based multi-camera hyper-spectral imaging method.
Compared with the prior art, the invention has the following advantages:
1. The invention inputs the color image and the grayscale image into the trained iterative back-projection hyperspectral reconstruction network to obtain the hyperspectral image of the imaging target. By combining the constructed iterative back-projection reconstruction network with the spectral response characteristics of the cameras, imaging priors are learned from a large amount of hyperspectral image data, which effectively guarantees imaging quality and greatly shortens hyperspectral imaging time.
2. The invention can conveniently use a smartphone equipped with a color camera and a grayscale camera, and needs no additional auxiliary devices or hardware: hyperspectral imaging is realized using only the smartphone and a built-in deep learning reconstruction method, which greatly improves the convenience of hyperspectral imaging. The application range of hyperspectral imaging technology can thus be effectively expanded, facilitating the rapid deployment of top-level applications such as skin disease classification and crop pesticide residue detection based on hyperspectral images. The invention therefore has broad application prospects and great practical value.
Drawings
FIG. 1 is a schematic diagram of a basic flow of a method according to an embodiment of the present invention.
FIG. 2 is a schematic diagram of a method according to an embodiment of the present invention.
Fig. 3 is a schematic structural diagram of a color back projection network module according to an embodiment of the present invention.
Fig. 4 is a schematic structural diagram of a gray-scale back projection network module according to an embodiment of the present invention.
Fig. 5 is a schematic structural diagram of an iterative back-projection hyperspectral reconstruction network in an embodiment of the present invention.
Detailed Description
The multi-camera hyperspectral imaging method, system and medium based on deep learning of the present invention will be described in further detail below, taking a smartphone as a typical device with multiple cameras. The implementation of the method, system and medium is not limited to a specific acquisition and processing device such as a smartphone, and can be carried out on any computing device equipped with a color camera and a grayscale camera. The present invention will be described in further detail with reference to the attached drawings to facilitate understanding and implementation by those of ordinary skill in the art; it is to be understood that the embodiments described herein are merely for purposes of illustration and explanation and are not to be construed as limiting the present invention.
As shown in fig. 1, the implementation steps of the multi-camera hyperspectral imaging method based on deep learning in this embodiment include:
1) respectively acquiring a color image and a gray image acquired by a color camera and a gray camera aiming at the same imaging target;
2) inputting the color image and the gray image into a trained iterative back-projection hyperspectral reconstruction network to obtain a hyperspectral image of the imaging target, wherein the iterative back-projection hyperspectral reconstruction network establishes a mapping relation between two input data of the color image and the gray image and one output data of the hyperspectral image through training.
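Assuming the trained network is exposed as a callable that maps the registered (RGB, grayscale) image pair to an L-band cube, steps 1) and 2) reduce to a single joint inference call. The sketch below uses a stand-in `toy_net` in place of a real trained model; the name, the 31-band count, and the (H, W, C) layout are illustrative assumptions:

```python
import numpy as np

def reconstruct_hsi(net, rgb_image, gray_image):
    """Steps 1)-2): pass the two registered photos of the same target
    through a trained (RGB, gray) -> L-band-cube model."""
    assert rgb_image.shape[:2] == gray_image.shape[:2], "images must be registered"
    return net(rgb_image, gray_image)

# Stand-in for a trained network: broadcast the RGB mean over 31 bands.
toy_net = lambda rgb, g: np.repeat(rgb.mean(axis=2, keepdims=True), 31, axis=2)
hsi = reconstruct_hsi(toy_net, np.zeros((8, 8, 3)), np.zeros((8, 8)))
print(hsi.shape)  # (8, 8, 31)
```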
In this embodiment, step 2) is preceded by a step of constructing an iterative back-projection hyperspectral reconstruction network, and the detailed steps include: s1) constructing a color back projection network module according to the spectral response characteristics of the color camera, wherein the color back projection network module is used for projecting the reconstructed hyperspectral image to an original color space and calculating the error between the reconstructed hyperspectral image and an input color image so as to extract the spectral characteristics of the color image; constructing a gray scale back projection network module according to the spectral response characteristic of the gray scale camera, wherein the gray scale back projection network module is used for projecting the reconstructed hyperspectral image to an original gray scale space and calculating the error between the reconstructed hyperspectral image and an input gray scale image so as to extract the spectral feature of the gray scale image; s2) an iterative back-projection hyperspectral reconstruction network is constructed by utilizing the color back-projection network module and the gray back-projection network module, and the iterative back-projection hyperspectral reconstruction network is used for fusing the characteristics of different layers of the color back-projection network module and the gray back-projection network module to improve the reconstruction effect.
As shown in fig. 2, the basic principle of the multi-camera hyperspectral imaging method based on deep learning of this embodiment is as follows: first, the spectral response characteristics of each camera of a multi-camera smartphone are acquired; then, the color and grayscale iterative back projection network modules used to build the deep network are constructed from these spectral response characteristics, and the iterative back-projection hyperspectral reconstruction network is built from the color and grayscale back projection network modules; next, training data are generated on an existing hyperspectral data set using the spectral response characteristics of the color and grayscale cameras of the smartphone, and the constructed deep hyperspectral image reconstruction network is trained; finally, the trained deep hyperspectral image reconstruction network is used to reconstruct the hyperspectral image at test time.
The color back projection network module and the grayscale back projection network module are similar in structure, and the main process of each module is as follows: first, a color/grayscale image is reconstructed from the input features; then, the reconstructed color/grayscale image is compared with the one captured by the mobile phone to obtain a reconstruction error; finally, the input features are adjusted using the reconstruction error to obtain more effective reconstruction features.
As shown in fig. 3, the color back projection network module constructed in step S1) includes:
The first feature stacking layer, used for stacking the features of the t stages of the input color image; the function of the first feature stacking layer in this embodiment can be expressed as:

$$\bar{H}_{R}^{t} = c(H_0, H_1, \ldots, H_{t-1})$$

In the above formula, $\bar{H}_{R}^{t}$ represents the features resulting from stacking, $c$ represents the feature stacking (concatenation) operation, and $H_0 \sim H_{t-1}$ are the input features of the t stages. In the process of constructing the deep iterative back projection reconstruction network, when t = 1 the stacking operation is not needed.
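The stacking operation c(·) is channel-wise concatenation of the stage features, with the t = 1 case passing the single feature through unchanged. A minimal numpy sketch (the (C, H, W) channel-first layout is an assumption of this sketch):

```python
import numpy as np

def stack_stage_features(features):
    """Concatenate the t stage features H_0..H_{t-1} along the channel
    dimension; with a single stage (t = 1) no stacking is needed."""
    if len(features) == 1:
        return features[0]
    return np.concatenate(features, axis=0)  # axis 0 = channels in (C, H, W)

# Three 64-channel stage features -> one 192-channel stacked feature.
stages = [np.full((64, 8, 8), float(k)) for k in range(3)]
stacked = stack_stage_features(stages)
print(stacked.shape)  # (192, 8, 8)
```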
The first rectification convolution layer, used for compressing the stacked high-dimensional features to obtain rectified features; the first rectification convolution layer in this embodiment performs information rectification on the stacked features to compress the high-dimensional features. The layer contains 64 convolution kernels of size 1 × 1 and is expressed as:

$$\tilde{H}_{R}^{t} = \sigma(W_{R,1}^{t} * \bar{H}_{R}^{t} + b_{R,1}^{t})$$

In the above formula, $\tilde{H}_{R}^{t}$ is the rectified feature (color branch), $W_{R,1}^{t}$ and $b_{R,1}^{t}$ respectively represent the kernel weights and bias of the convolution layer, $\bar{H}_{R}^{t}$ is the stacked feature, and σ represents the activation function; a rectified linear unit (ReLU) is specifically used as the activation function in this embodiment.
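Because the rectification layer uses 1 × 1 kernels, it is equivalent to a per-pixel linear map over channels followed by the ReLU activation. A numpy sketch with toy random weights (all shapes and values are illustrative):

```python
import numpy as np

def conv1x1_relu(x, w, b):
    """1x1 convolution = per-pixel linear map over channels, then ReLU.
    x: (C_in, H, W); w: (C_out, C_in); b: (C_out,)."""
    y = np.tensordot(w, x, axes=([1], [0])) + b[:, None, None]
    return np.maximum(y, 0.0)  # ReLU

rng = np.random.default_rng(1)
x = rng.standard_normal((192, 8, 8))       # stacked features
w = rng.standard_normal((64, 192)) * 0.01  # 64 kernels of size 1x1
b = np.zeros(64)
rect = conv1x1_relu(x, w, b)               # rectified feature
print(rect.shape)  # (64, 8, 8)
```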
The first recovery convolution layer, used for recovering the hyperspectral image from the rectified features; in this embodiment, the first recovery convolution layer recovers the hyperspectral image from the compressed features. The layer contains L (the predefined number of channels of the hyperspectral image to be imaged) convolution kernels of size 9 × 9 and is expressed as:

$$Z_{R}^{t} = \sigma(W_{R,2}^{t} * \tilde{H}_{R}^{t} + b_{R,2}^{t})$$

In the above formula, $Z_{R}^{t}$ is the recovered hyperspectral image, $W_{R,2}^{t}$ and $b_{R,2}^{t}$ respectively represent the kernel weights and bias of the convolution layer, $\tilde{H}_{R}^{t}$ is the compressed (rectified) feature, and σ represents the activation function.
The RGB spectral sampling layer, used for spectrally sampling the recovered hyperspectral image to reconstruct a color image; in this embodiment, the RGB spectral sampling layer reconstructs the color image captured by the smartphone by spectrally sampling the recovered hyperspectral image. The layer uses the spectral response characteristic of the smartphone color camera to construct the sampling kernel, which can be expressed as:

$$\hat{I}_{R}^{t} = \mathrm{Dot}(Z_{R}^{t}, \mathrm{Spek}_R)$$

In the above formula, $\hat{I}_{R}^{t}$ represents the reconstructed color image, Dot represents matrix multiplication, $Z_{R}^{t}$ represents the recovered hyperspectral image, and $\mathrm{Spek}_R$ is the measured spectral response curve of the color camera, which can be represented as a matrix of size L × 3.
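The spectral sampling step is a per-pixel matrix product between the L-band cube and the measured L × 3 response matrix. A numpy sketch with synthetic data (the 31-band count and the random stand-in for the measured Spek_R matrix are placeholders):

```python
import numpy as np

def spectral_downsample(hsi, response):
    """Project an (H, W, L) hyperspectral cube through an (L, C) spectral
    response matrix, e.g. C = 3 for the color camera, C = 1 for grayscale."""
    return np.tensordot(hsi, response, axes=([2], [0]))  # -> (H, W, C)

# Toy 31-band cube and a random stand-in for the measured response curves.
rng = np.random.default_rng(0)
cube = rng.random((4, 5, 31))
spek_r = rng.random((31, 3))
rgb = spectral_downsample(cube, spek_r)
print(rgb.shape)  # (4, 5, 3)
```

The same routine generates the training pairs described earlier: down-sampling an existing hyperspectral image with Spek_R yields its color image, and with the L × 1 grayscale response its grayscale image.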
The first difference module, used for subtracting the input color image from the reconstructed color image to obtain the reconstruction error, which can be expressed as:

$$e_{R}^{t} = \hat{I}_{R}^{t} - I_R$$

In the above formula, $e_{R}^{t}$ denotes the reconstruction error, $\hat{I}_{R}^{t}$ is the reconstructed color image, and $I_R$ is the color image input to the network.
The first up-sampling convolution layer, used for spectrally up-sampling the reconstruction error to obtain the error up-sampling feature; the first up-sampling convolution layer spectrally up-samples the reconstruction error $e_{R}^{t}$ to obtain the feature $E_{R}^{t}$. The layer contains 64 convolution kernels of size 9 × 9 and is expressed as:

$$E_{R}^{t} = \sigma(W_{R,3}^{t} * e_{R}^{t} + b_{R,3}^{t})$$

In the above formula, $E_{R}^{t}$ is the error up-sampling feature, $W_{R,3}^{t}$ and $b_{R,3}^{t}$ respectively represent the kernel weights and bias of the convolution layer, $e_{R}^{t}$ is the reconstruction error, and σ represents the activation function.
The first fine-tuning convolution layer, used for fine-tuning the error up-sampling feature; in this embodiment the first fine-tuning convolution layer contains 64 convolution kernels of size 3 × 3 and is expressed as:

$$\tilde{E}_{R}^{t} = \sigma(W_{R,4}^{t} * E_{R}^{t} + b_{R,4}^{t})$$

In the above formula, $\tilde{E}_{R}^{t}$ is the fine-tuned error up-sampling feature, $W_{R,4}^{t}$ and $b_{R,4}^{t}$ respectively represent the kernel weights and bias of the convolution layer, and σ represents the activation function.
The first summing module, used for summing the fine-tuned error up-sampling feature and the rectified feature to obtain the final output feature, expressed as:

$$H_{R}^{t} = \tilde{E}_{R}^{t} + \tilde{H}_{R}^{t}$$

In the above formula, $H_{R}^{t}$ is the final output feature, $\tilde{E}_{R}^{t}$ is the fine-tuned error up-sampling feature, and $\tilde{H}_{R}^{t}$ is the rectified feature.
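Putting the eight components together, one pass of the color back projection module can be sketched as below. All convolutions are collapsed to 1 × 1 per-pixel channel maps for brevity (the module's actual 9 × 9 and 3 × 3 spatial kernels are omitted), and all weights are random placeholders, so this illustrates only the data flow, not the trained network:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def conv1x1(x, w, b):
    # Per-pixel channel map standing in for every conv layer here.
    return np.tensordot(w, x, axes=([1], [0])) + b[:, None, None]

def color_backprojection(feats, i_rgb, spek_r, params):
    """One pass of the color back projection module (sketch).
    feats: stage features, each (C, H, W); i_rgb: (3, H, W) input photo;
    spek_r: (L, 3) measured response matrix; params: toy (w, b) pairs."""
    h = np.concatenate(feats, axis=0) if len(feats) > 1 else feats[0]
    h_rect = relu(conv1x1(h, *params["rect"]))          # rectification
    z = relu(conv1x1(h_rect, *params["rec"]))           # recover L-band HSI
    i_hat = np.tensordot(spek_r.T, z, axes=([1], [0]))  # RGB spectral sampling
    err = i_hat - i_rgb                                 # reconstruction error
    e_up = relu(conv1x1(err, *params["up"]))            # spectral up-sampling
    e_ft = relu(conv1x1(e_up, *params["ft"]))           # fine tuning
    return e_ft + h_rect                                # corrected output feature

rng = np.random.default_rng(0)
L_bands, H, W = 31, 6, 6
params = {
    "rect": (rng.standard_normal((64, 128)) * 0.01, np.zeros(64)),
    "rec":  (rng.standard_normal((L_bands, 64)) * 0.01, np.zeros(L_bands)),
    "up":   (rng.standard_normal((64, 3)) * 0.01, np.zeros(64)),
    "ft":   (rng.standard_normal((64, 64)) * 0.01, np.zeros(64)),
}
feats = [rng.standard_normal((64, H, W)) for _ in range(2)]
out = color_backprojection(feats, rng.standard_normal((3, H, W)),
                           rng.standard_normal((L_bands, 3)), params)
print(out.shape)  # (64, 6, 6)
```

The grayscale module below follows the same flow with the L × 1 response matrix Spek_G in place of Spek_R.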
As shown in fig. 4, the grayscale backprojection network module constructed in step S1) includes:
The second feature stacking layer, used for stacking the features of the t stages of the input grayscale image; the second feature stacking layer stacks features of different stages together along the channel dimension, which can be expressed as:

$$\bar{H}_{G}^{t} = c(H_0, H_1, \ldots, H_{t-1})$$

In the above formula, $\bar{H}_{G}^{t}$ represents the features resulting from stacking, $c$ represents the feature stacking (concatenation) operation, and $H_0 \sim H_{t-1}$ are the input features of the t stages. In the process of constructing the deep iterative back projection reconstruction network, when t = 1 the stacking operation is not needed.
The second rectification convolution layer, used for compressing the stacked high-dimensional features to obtain rectified features; the second rectification convolution layer contains 64 convolution kernels of size 1 × 1 and is expressed as:

$$\tilde{H}_{G}^{t} = \sigma(W_{G,1}^{t} * \bar{H}_{G}^{t} + b_{G,1}^{t})$$

In the above formula, $\tilde{H}_{G}^{t}$ is the rectified feature (grayscale branch), $W_{G,1}^{t}$ and $b_{G,1}^{t}$ respectively represent the kernel weights and bias of the convolution layer, $\bar{H}_{G}^{t}$ is the stacked feature, and σ represents the activation function; a rectified linear unit (ReLU) is specifically used as the activation function in this embodiment.
The second recovery convolution layer, used for recovering the hyperspectral image from the rectified features; the second recovery convolution layer contains L (the predefined number of channels of the hyperspectral image to be imaged) convolution kernels of size 9 × 9 and is expressed as:

$$Z_{G}^{t} = \sigma(W_{G,2}^{t} * \tilde{H}_{G}^{t} + b_{G,2}^{t})$$

In the above formula, $Z_{G}^{t}$ is the recovered hyperspectral image, $W_{G,2}^{t}$ and $b_{G,2}^{t}$ respectively represent the kernel weights and bias of the convolution layer, $\tilde{H}_{G}^{t}$ is the compressed (rectified) feature, and σ represents the activation function.
The grayscale spectrum sampling layer is used for spectrally sampling the recovered hyperspectral image to reconstruct a grayscale image; the grayscale spectrum sampling layer constructs its sampling kernel from the spectral response characteristic of the black-and-white camera of the smartphone, which can be expressed as:

I'G = Dot(ZGt, SpeG)

in the above formula, I'G is the reconstructed grayscale image, Dot represents matrix multiplication, ZGt represents the recovered hyperspectral image, and SpeG is the measured spectral response curve of the grayscale camera, which can be expressed as an L × 1 matrix.
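The spectral sampling above collapses the L bands of the hyperspectral estimate into one grayscale channel by weighting each band with the camera's response curve. A minimal NumPy sketch (band count, image size, and the random response curve are illustrative assumptions):

```python
import numpy as np

# Hypothetical L = 31 spectral bands, matching the CAVE data used later.
L, H, W = 31, 4, 4
rng = np.random.default_rng(1)
hsi = rng.random((H, W, L))        # recovered hyperspectral image
spe_g = rng.random((L, 1))
spe_g /= spe_g.sum()               # normalized L x 1 response curve

# Dot(Z, SpeG): weight every spectral band by the camera response and
# sum, collapsing the L channels into a single grayscale channel.
gray = (hsi.reshape(-1, L) @ spe_g).reshape(H, W)
print(gray.shape)  # (4, 4)
```

With the response curve normalized to sum to one, each grayscale pixel is a convex combination of that pixel's band values, so it stays inside the range of the input bands.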
A second difference module, configured to subtract the input grayscale image from the reconstructed grayscale image to obtain the reconstruction error, which can be expressed as:

EGt = I'G − IG

in the above formula, EGt is the reconstruction error, I'G is the reconstructed grayscale image, and IG is the input grayscale image.
The second up-sampling convolution layer is used for spectrally up-sampling the reconstruction error EGt to obtain the error up-sampling feature UGt; the convolutional layer contains 64 convolution kernels of size 9 × 9, which is expressed as:

UGt = σ(w3 * EGt + b3)

in the above formula, UGt is the error up-sampling feature, w3 and b3 respectively represent the kernel weights and the bias of the convolutional layer, EGt is the reconstruction error, and σ represents the activation function.
A second fine-tuning convolution layer, used for fine-tuning the error up-sampling feature, which can be expressed as:

FGt = σ(w4 * UGt + b4)

in the above formula, FGt is the fine-tuned error up-sampling feature, w4 and b4 respectively represent the kernel weights and the bias of the convolutional layer, and σ represents the activation function.
A second summing module, configured to sum the fine-tuned error up-sampling feature and the rectification feature to obtain the final output feature, which can be expressed as:

HGt = RGt + FGt

in the above formula, HGt represents the final output feature, RGt is the rectification feature, and FGt is the fine-tuned error up-sampling feature.
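The whole grayscale back-projection stage described above (rectify, recover, sample, subtract, up-sample, fine-tune, sum) can be sketched end to end. In the NumPy sketch below, 1 × 1 channel-mixing matrices stand in for the patent's 9 × 9 spatial kernels to keep the example short; all shapes and random weights are illustrative assumptions, not the embodiment's implementation:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def gray_backprojection_stage(stacked, i_g, spe_g, params):
    """One grayscale back-projection stage; 1x1 channel mixes stand in
    for the 9x9 convolutions of the patent text."""
    w1, b1, w2, b2, w3, b3, w4, b4 = params
    rect = relu(stacked @ w1 + b1)    # rectification: compress stacked features
    hsi = relu(rect @ w2 + b2)        # recover an L-band hyperspectral estimate
    g_rec = hsi @ spe_g               # spectral sampling back to grayscale
    err = g_rec - i_g[..., None]      # reconstruction error in gray space
    up = relu(err @ w3 + b3)          # spectral up-sampling of the error
    tuned = relu(up @ w4 + b4)        # fine-tuning convolution
    return rect + tuned               # output feature of this stage

rng = np.random.default_rng(2)
H, W, C, L = 8, 8, 64, 31
params = (rng.standard_normal((C, C)) * 0.01, np.zeros(C),
          rng.standard_normal((C, L)) * 0.01, np.zeros(L),
          rng.standard_normal((1, C)) * 0.01, np.zeros(C),
          rng.standard_normal((C, C)) * 0.01, np.zeros(C))
out = gray_backprojection_stage(rng.standard_normal((H, W, C)),
                                rng.standard_normal((H, W)),
                                rng.random((L, 1)), params)
print(out.shape)  # (8, 8, 64)
```

The color back-projection module has the same structure, with a 3-channel color image and an L × 3 response matrix replacing the grayscale image and the L × 1 response curve.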
The iterative back-projection hyperspectral reconstruction network first constructs a two-channel iterative back-projection hyperspectral image reconstruction network by stacking color back-projection network modules and grayscale back-projection network modules on two separate channels; then, feature convergence layers are introduced at different stages of the network to enhance the feature-fusion effect between the two sub-channel networks; finally, a clear hyperspectral image is reconstructed by stacking the features from different stages of the network. As shown in fig. 5, the iterative back-projection hyperspectral reconstruction network constructed by using the color back-projection network module and the grayscale back-projection network module in step S2) includes:
a color image feature extraction and spectral up-sampling module (as represented by b1 in fig. 5) for performing feature extraction and spectral up-sampling on the input color image;
a grayscale image feature extraction and spectral up-sampling module (as represented by a1 in fig. 5) for performing feature extraction and spectral up-sampling on the input grayscale image;
a back projection convergence unit comprising one or more repeated subunits, wherein each subunit comprises a color back projection network module, a grayscale back projection network module and a feature convergence module for enhancing and converging the color spectral features and grayscale spectral features output by the color back projection network module and the grayscale back projection network module; the input of the color back projection network module is the output of the color image feature extraction and spectral up-sampling module or the output of the previous subunit, the input of the grayscale back projection network module is the output of the grayscale image feature extraction and spectral up-sampling module or the output of the previous subunit, the outputs of the color back projection network module and the grayscale back projection network module serve as the inputs of the feature convergence module of the corresponding subunit, and the output of the feature convergence module forms the output of the subunit; fig. 5 shows n repeated subunits: the 1st subunit comprises a color back projection network module b2, a grayscale back projection network module a2 and a feature convergence module c2; the 2nd subunit comprises a color back projection network module b3, a grayscale back projection network module a3 and a feature convergence module c3; and so on, the ith subunit comprises a color back projection network module bi, a grayscale back projection network module ai and a feature convergence module ci, and the nth subunit comprises a color back projection network module bn, a grayscale back projection network module an and a feature convergence module cn;
and an output convolution layer (indicated as d in fig. 5) for generating the final hyperspectral image from the features output by the endmost subunit.
As shown in fig. 5, in this embodiment the color image feature extraction and spectral up-sampling module and the grayscale image feature extraction and spectral up-sampling module have the same structure: each places four convolutional layers at the color or grayscale image input end to perform initial feature extraction and spectral up-sampling on the original input image. The numbers of convolution kernels of the four convolutional layers are 128, 64, 128 and 64, and the kernel sizes are 3 × 3, 1 × 1, 3 × 3 and 1 × 1, respectively. The process can be expressed as:
HR0 = σ(wR * IRGB + bR)

HG0 = σ(wG * IG + bG)

in the above formulas, HR0 is the output of the color image feature extraction and spectral up-sampling module, HG0 is the output of the grayscale image feature extraction and spectral up-sampling module, IRGB and IG are the input color and grayscale images, wR and wG are the kernel weights, bR and bG are the biases of the corresponding convolutional layers, and σ is the activation function.
The back projection convergence unit stacks the color back projection network modules and the grayscale back projection network modules on two network channels, respectively, forming a two-channel iterative back-projection reconstruction network. The back projection convergence unit can comprise one or more subunits, each comprising a color back projection network module, a grayscale back projection network module and a feature convergence module that enhances and converges the color spectral features and grayscale spectral features output by the two modules; the feature convergence module can be expressed as:
Ht=(HRt+HGt)/2
in the above formula, Ht is the enhanced and converged feature at stage t, and HRt and HGt are the adjusted features obtained by the color back projection network module and the grayscale back projection network module at stage t, respectively.
In this embodiment, the output convolution layer stacks the fused features from the different stages, and a single convolutional layer is set to finally generate the hyperspectral image; the number of convolution kernels of this convolutional layer is L and the kernel size is 3 × 3. This process can be expressed as:
HSI=σ(wtc([H1,H2,…,Ht])+bt)
in the above formula, HSI represents the finally obtained hyperspectral image, c is the feature stacking operation, wt is the convolution kernel weight, bt is the bias of the convolutional layer, and H1 ~ Ht denote the features of stages 1 to t.
In this embodiment, step 2) is preceded by a step of training the iterative back-projection hyperspectral reconstruction network. The detailed steps are: generate a training data set using the spectral response characteristics of the color camera and the grayscale camera, and optimize the network parameters of the constructed iterative back-projection hyperspectral reconstruction network on this training data set to obtain the trained network. In this embodiment, the training data set generated from the spectral response characteristics of the color camera and the grayscale camera is {Xi, Yi, Zi}, where {Zi} is the set of existing hyperspectral images; {Yi} is the set of corresponding color images obtained by spectrally down-sampling {Zi} with the spectral response curve of the color camera; and {Xi} is the set of grayscale images obtained by spectrally down-sampling {Zi} with the spectral response curve of the grayscale camera. The step of optimizing the network parameters by training the constructed iterative back-projection hyperspectral reconstruction network on the training data set comprises: feeding the training data set {Xi, Yi, Zi} into the iterative back-projection hyperspectral reconstruction network, and optimizing the network parameters by minimizing the absolute-value error function shown below:

θ* = argmin_θ (1/N) Σ_{i=1}^{N} |Net(Xi, Yi; θ) − Zi|

in the above formula, N is the number of samples, Net represents the iterative back-projection hyperspectral reconstruction network, θ is the set of network parameters, Yi is the corresponding color image obtained by spectrally down-sampling the existing hyperspectral image Zi with the spectral response curve of the color camera, and Xi is the grayscale image obtained by spectrally down-sampling Zi with the spectral response curve of the grayscale camera.
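The absolute-value (mean L1) error minimized above can be sketched as follows; the random stand-in arrays are illustrative assumptions, not the embodiment's training data:

```python
import numpy as np

def l1_loss(pred, target):
    """Mean absolute error: the objective minimized over the network
    parameters during training."""
    return np.mean(np.abs(pred - target))

# Toy batch: N hypothetical reconstructed vs. reference hyperspectral
# patches of shape 8 x 8 x 31.
rng = np.random.default_rng(3)
pred = rng.random((4, 8, 8, 31))
ref = rng.random((4, 8, 8, 31))
print(l1_loss(pred, ref))          # some positive value
print(l1_loss(ref, ref))           # 0.0 for a perfect reconstruction
```

In practice this scalar would be back-propagated through the network by an optimizer such as ADAM, as described in the experiment below.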
To further verify the deep-learning-based multi-camera hyperspectral imaging method of this embodiment, a simulation experiment and result analysis were carried out on a real hyperspectral data set. The CAVE hyperspectral image data set was adopted for the simulation experiment; the CAVE data set comprises 32 hyperspectral images, each of size 512 × 512 × 31, covering the band range 400-700 nm. The original hyperspectral images of the data set were used as reference images, and the spectral response curves of a given color camera and black-and-white camera were used to spectrally down-sample the original hyperspectral images to obtain simulated color and grayscale images, from which the training data pairs were constructed. The first 20 images of the data set were used for training and the remaining 12 for testing. The iterative back-projection hyperspectral reconstruction network of this embodiment was formed by stacking 4 color and 4 grayscale iterative back-projection modules. During training, the generated simulated color images, grayscale images and reference images were cut into image blocks of 64 × 64 × 3, 64 × 64 × 1 and 64 × 64 × 31, respectively, and fed into the iterative back-projection hyperspectral reconstruction network; an ADAM optimizer was used for training, with the learning rate set to 5 × 10^-4. The fidelity of the reconstruction result is measured by peak signal-to-noise ratio (PSNR) and structural similarity (SSIM); the larger the PSNR and SSIM values, the higher the reconstruction accuracy. The final results are shown in table 1:
table 1: the method of the embodiment has the reconstruction effect on the test image.
As can be seen from table 1, the deep-learning-based multi-camera hyperspectral imaging method of this embodiment achieves high peak signal-to-noise ratio and structural similarity, which shows that it can effectively exploit the spectral response characteristics of the color and black-and-white lenses of a smartphone to reconstruct hyperspectral images.
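The PSNR fidelity measure used in the evaluation can be computed as follows; the peak value of 1.0 and the toy test images are illustrative assumptions:

```python
import numpy as np

def psnr(ref, rec, peak=1.0):
    """Peak signal-to-noise ratio in dB; larger means higher fidelity."""
    mse = np.mean((ref - rec) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((8, 8))
rec = ref + 0.01                   # uniform reconstruction error of 0.01
print(round(psnr(ref, rec), 1))    # 40.0
```

SSIM, the second metric, additionally compares local luminance, contrast, and structure statistics between the reference and reconstructed images rather than only the pixel-wise error.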
In summary, to solve the above problem of convenient hyperspectral imaging, this embodiment provides a deep-learning-based hyperspectral imaging scheme for multi-camera smartphones: an iterative back-projection hyperspectral image reconstruction network is constructed by fully exploiting the spectral response characteristics of the smartphone's color and black-and-white lenses, and is used to reconstruct the hyperspectral image. The scheme realizes hyperspectral imaging on a multi-camera smartphone without requiring any auxiliary device or component. The method first constructs grayscale and color iterative back-projection network modules from the spectral response characteristics of the smartphone's black-and-white and color cameras, respectively; it then composes the iterative back-projection hyperspectral reconstruction network from these two modules; finally, it constructs training data using the spectral response characteristics of the color and black-and-white lenses, trains the hyperspectral image reconstruction network, and reconstructs the hyperspectral image. Using only a multi-camera smartphone and the deep learning imaging method, hyperspectral images can be reconstructed efficiently while imaging quality is guaranteed. The embodiment can be applied in practical fields such as computer-aided diagnosis of skin diseases, identification of agricultural pests and diseases, detection of pesticide residues on agricultural products, and authenticity verification of daily commodities.
Take hyperspectral detection of pesticide residues on crops as an example. The spectral characteristics of a crop are determined by its absorption, transmission and reflectance of light, and these differ across wavelength bands, so the properties of the crop show up directly in the spectral curves of its hyperspectral image. Pesticide residues and rot on the crop surface strongly affect these spectral curves; the hyperspectral images obtained by the deep-learning-based multi-camera hyperspectral imaging method can capture such changes, and by using them as features for a machine learning classification model, qualitative and even quantitative detection of pesticide residues on crops can be realized.
In addition, the embodiment also provides a multi-camera hyperspectral imaging system based on deep learning, which comprises a computer device, wherein a microprocessor of the computer device is programmed or configured to execute the steps of the multi-camera hyperspectral imaging method based on deep learning, or a memory of the computer device is stored with a computer program programmed or configured to execute the multi-camera hyperspectral imaging method based on deep learning.
In addition, the embodiment also provides an intelligent terminal, which at least comprises a color camera and a grayscale camera, a microprocessor and a memory, wherein the microprocessor of the intelligent terminal is programmed or configured to execute the steps of the multi-camera hyperspectral imaging method based on deep learning, or the memory of the intelligent terminal stores a computer program programmed or configured to execute the multi-camera hyperspectral imaging method based on deep learning.
Furthermore, the present embodiment also provides a computer-readable storage medium having stored thereon a computer program programmed or configured to execute the aforementioned deep learning-based multi-camera hyper-spectral imaging method.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) having computer-usable program code embodied therein. The present application is described with reference to flowcharts and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It should be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations thereof, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions executed by the processor create means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams. These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process, such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
The above description is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above embodiments, and all technical solutions belonging to the idea of the present invention belong to the protection scope of the present invention. It should be noted that modifications and embellishments within the scope of the invention may occur to those skilled in the art without departing from the principle of the invention, and are considered to be within the scope of the invention.

Claims (10)

1. A multi-camera hyperspectral imaging method based on deep learning is characterized by comprising the following implementation steps:
1) respectively acquiring a color image and a gray image acquired by a color camera and a gray camera aiming at the same imaging target;
2) inputting the color image and the gray image into a trained iterative back-projection hyperspectral reconstruction network to obtain a hyperspectral image of the imaging target, wherein the iterative back-projection hyperspectral reconstruction network establishes a mapping relation between two input data of the color image and the gray image and one output data of the hyperspectral image through training.
2. The method for multi-camera hyperspectral imaging based on deep learning according to claim 1, characterized in that step 2) is preceded by a step of constructing an iterative back-projection hyperspectral reconstruction network, and the detailed steps comprise: s1) constructing a color back projection network module according to the spectral response characteristics of the color camera, wherein the color back projection network module is used for projecting the reconstructed hyperspectral image to an original color space and calculating the error between the reconstructed hyperspectral image and an input color image so as to extract the spectral characteristics of the color image; constructing a gray scale back projection network module according to the spectral response characteristic of the gray scale camera, wherein the gray scale back projection network module is used for projecting the reconstructed hyperspectral image to an original gray scale space and calculating the error between the reconstructed hyperspectral image and an input gray scale image so as to extract the spectral feature of the gray scale image; s2) an iterative back-projection hyperspectral reconstruction network is constructed by utilizing the color back-projection network module and the gray back-projection network module, and the iterative back-projection hyperspectral reconstruction network is used for fusing the characteristics of different layers of the color back-projection network module and the gray back-projection network module to improve the reconstruction effect.
3. The method for multi-camera hyperspectral imaging based on deep learning of claim 2, wherein the color back projection network module constructed in step S1) comprises:
the first characteristic stacking layer is used for stacking the characteristics of t stages of the input color image;
the first rectification convolution layer is used for compressing the high-dimensional stacked features to obtain a rectification feature;
the first recovery convolution layer is used for recovering the hyperspectral image from the rectification characteristic;
the RGB spectrum sampling layer is used for sampling the spectrum of the restored hyperspectral image to reconstruct a color image;
the first difference solving module is used for subtracting the input color image from the reconstructed color image to obtain a reconstruction error;
the first up-sampling convolutional layer is used for performing spectral up-sampling on the reconstruction error to obtain an error up-sampling characteristic;
the first fine tuning convolutional layer is used for fine tuning the error up-sampling characteristic;
and the first summing module is used for summing the fine-tuned error up-sampling characteristic and the rectification characteristic to obtain a final output characteristic.
4. The method for multi-camera hyperspectral imaging based on deep learning of claim 2, wherein the gray-scale back projection network module constructed in step S1) comprises:
the second characteristic stacking layer is used for stacking the characteristics of t stages of the input gray level image;
the second rectification convolution layer is used for compressing the high-dimensional stacked features to obtain a rectification feature;
the second recovery convolution layer is used for recovering the hyperspectral image from the rectification characteristic;
the grayscale spectrum sampling layer is used for sampling the spectrum of the restored hyperspectral image to reconstruct a grayscale image;
the second difference solving module is used for subtracting the input gray level image from the reconstructed gray level image to obtain a reconstruction error;
the second up-sampling convolutional layer is used for performing spectral up-sampling on the reconstruction error to obtain error up-sampling characteristics;
the second fine tuning convolution layer is used for fine tuning the error up-sampling characteristic;
and the second summation module is used for summing the fine-tuned error up-sampling characteristic and the rectification characteristic to obtain a final output characteristic.
5. The method for multi-camera hyperspectral imaging based on deep learning of claim 2, wherein the iterative back-projection hyperspectral reconstruction network constructed by using the color back-projection network module and the gray back-projection network module in the step S2) comprises:
the color image feature extraction and spectrum up-sampling module is used for performing feature extraction and spectrum up-sampling on the input color image;
the grayscale image feature extraction and spectrum up-sampling module is used for performing feature extraction and spectrum up-sampling on the input grayscale image;
a back projection convergence unit comprising one or more repeating sub-units, the sub-unit comprises a color back projection network module, a gray scale back projection network module and a characteristic convergence module for enhancing and converging color spectral characteristics and gray scale spectral characteristics output by the color back projection network module and the gray scale back projection network module, the input of the color back projection network module is the output of the color image feature extraction and spectrum up-sampling module or the output of the last subunit, the input of the gray scale back projection network module is the output of the gray scale image feature extraction and spectrum up-sampling module or the output of the previous subunit, the outputs of the color back projection network module and the gray scale back projection network module are used as the input of the feature convergence module of the corresponding subunit, and the output of the feature convergence module forms the output of the subunit;
and an output convolution layer for generating the final hyperspectral image from the features output by the endmost subunit.
6. The method for multi-camera hyperspectral imaging based on deep learning according to claim 2, characterized in that step 2) is preceded by a step of training the iterative back-projection hyperspectral reconstruction network, and the detailed steps comprise: generating a training data set using the spectral response characteristics of the color camera and the grayscale camera, and optimizing the network parameters of the constructed iterative back-projection hyperspectral reconstruction network through the training data set to obtain the trained iterative back-projection hyperspectral reconstruction network.
7. The method for multi-camera hyperspectral imaging based on deep learning of claim 6, wherein the training data set generated by using the spectral response characteristics of the color camera and the grayscale camera is {Xi, Yi, Zi}, wherein {Zi} is the set of existing hyperspectral images; {Yi} is the set of corresponding color images obtained by spectrally down-sampling {Zi} with the spectral response curve of the color camera; and {Xi} is the set of grayscale images obtained by spectrally down-sampling {Zi} with the spectral response curve of the grayscale camera; the step of optimizing the network parameters by training the constructed iterative back-projection hyperspectral reconstruction network on the training data set comprises: feeding the training data set {Xi, Yi, Zi} into the iterative back-projection hyperspectral reconstruction network, and optimizing the network parameters by minimizing the absolute-value error function shown in the following formula:

θ* = argmin_θ (1/N) Σ_{i=1}^{N} |Net(Xi, Yi; θ) − Zi|

in the above formula, N is the number of samples, Net represents the iterative back-projection hyperspectral reconstruction network, θ is the set of network parameters, Yi is the corresponding color image obtained by spectrally down-sampling the existing hyperspectral image Zi with the spectral response curve of the color camera, and Xi is the grayscale image obtained by spectrally down-sampling Zi with the spectral response curve of the grayscale camera.
8. A multi-camera hyperspectral imaging system based on deep learning, comprising computer equipment, wherein a microprocessor of the computer equipment is programmed or configured to execute the steps of the multi-camera hyperspectral imaging method based on deep learning according to any one of claims 1 to 7, or a memory of the computer equipment stores a computer program programmed or configured to execute the multi-camera hyperspectral imaging method based on deep learning according to any one of claims 1 to 7.
9. An intelligent terminal, which at least comprises a color camera, a gray scale camera, a microprocessor and a memory, and is characterized in that the microprocessor of the intelligent terminal is programmed or configured to execute the steps of the multi-camera hyper-spectral imaging method based on deep learning according to any one of claims 1 to 7, or the memory of the intelligent terminal stores a computer program programmed or configured to execute the multi-camera hyper-spectral imaging method based on deep learning according to any one of claims 1 to 7.
10. A computer readable storage medium having stored thereon a computer program programmed or configured to perform the deep learning based multi-camera hyperspectral imaging method according to any of claims 1 to 7.
CN202010311781.1A 2020-04-20 2020-04-20 Multi-camera hyperspectral imaging method, system and medium based on deep learning Active CN111579506B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010311781.1A CN111579506B (en) 2020-04-20 2020-04-20 Multi-camera hyperspectral imaging method, system and medium based on deep learning


Publications (2)

Publication Number Publication Date
CN111579506A true CN111579506A (en) 2020-08-25
CN111579506B CN111579506B (en) 2021-04-09

Family

ID=72112538

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010311781.1A Active CN111579506B (en) 2020-04-20 2020-04-20 Multi-camera hyperspectral imaging method, system and medium based on deep learning

Country Status (1)

Country Link
CN (1) CN111579506B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112036397A (en) * 2020-09-29 2020-12-04 上海海事大学 Embedded cucumber leaf image recognition device based on deep learning
CN112801881A (en) * 2021-04-13 2021-05-14 湖南大学 High-resolution hyperspectral calculation imaging method, system and medium
CN113240653A (en) * 2021-05-19 2021-08-10 中国联合网络通信集团有限公司 Rice quality detection method, device, server and system
CN113255581A (en) * 2021-06-21 2021-08-13 湖南大学 Weak supervision deep learning water body extraction method and device, computer equipment and medium
CN113743001A (en) * 2021-08-13 2021-12-03 湖南大学 Optical filter design method, optical filter and system for spectral super-resolution reconstruction
CN114972625A (en) * 2022-03-22 2022-08-30 广东工业大学 Hyperspectral point cloud generation method based on RGB spectrum super-resolution technology
CN114998109A (en) * 2022-08-03 2022-09-02 湖南大学 Hyperspectral imaging method, system and medium based on dual RGB image fusion
CN116559119A (en) * 2023-05-11 2023-08-08 东北林业大学 Deep learning-based wood dyeing color difference detection method, system and medium
CN112036397B (en) * 2020-09-29 2024-05-31 上海海事大学 Embedded cucumber leaf image recognition device based on deep learning

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109544451A (en) * 2018-11-14 2019-03-29 武汉大学 A kind of image super-resolution rebuilding method and system based on gradual iterative backprojection

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109544451A (en) * 2018-11-14 2019-03-29 Wuhan University An image super-resolution reconstruction method and system based on progressive iterative back-projection

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Muhammad Haris et al.: "Deep Back-Projection Networks for Super-Resolution", arXiv *
Li Min et al.: "Improved Super-Resolution Reconstruction Algorithm for Multispectral Remote Sensing Images", Computer Engineering *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112036397B (en) * 2020-09-29 2024-05-31 上海海事大学 Embedded cucumber leaf image recognition device based on deep learning
CN112036397A (en) * 2020-09-29 2020-12-04 上海海事大学 Embedded cucumber leaf image recognition device based on deep learning
CN112801881A (en) * 2021-04-13 2021-05-14 湖南大学 High-resolution hyperspectral calculation imaging method, system and medium
CN112801881B (en) * 2021-04-13 2021-06-22 湖南大学 High-resolution hyperspectral calculation imaging method, system and medium
CN113240653A (en) * 2021-05-19 2021-08-10 中国联合网络通信集团有限公司 Rice quality detection method, device, server and system
CN113255581A (en) * 2021-06-21 2021-08-13 湖南大学 Weak supervision deep learning water body extraction method and device, computer equipment and medium
CN113255581B (en) * 2021-06-21 2021-09-28 湖南大学 Weak supervision deep learning water body extraction method and device, computer equipment and medium
CN113743001A (en) * 2021-08-13 2021-12-03 湖南大学 Optical filter design method, optical filter and system for spectral super-resolution reconstruction
CN113743001B (en) * 2021-08-13 2023-12-12 湖南大学 Spectral super-resolution reconstruction-oriented optical filter design method, optical filter and system
CN114972625A (en) * 2022-03-22 2022-08-30 广东工业大学 Hyperspectral point cloud generation method based on RGB spectrum super-resolution technology
CN114998109A (en) * 2022-08-03 2022-09-02 湖南大学 Hyperspectral imaging method, system and medium based on dual RGB image fusion
CN116559119A (en) * 2023-05-11 2023-08-08 东北林业大学 Deep learning-based wood dyeing color difference detection method, system and medium
CN116559119B (en) * 2023-05-11 2024-01-26 东北林业大学 Deep learning-based wood dyeing color difference detection method, system and medium

Also Published As

Publication number Publication date
CN111579506B (en) 2021-04-09

Similar Documents

Publication Publication Date Title
CN111579506B (en) Multi-camera hyperspectral imaging method, system and medium based on deep learning
Arad et al. NTIRE 2022 spectral recovery challenge and data set
Fu et al. Exploiting spectral-spatial correlation for coded hyperspectral image restoration
US9405960B2 (en) Face hallucination using convolutional neural networks
CN111127374B (en) Pan-sharing method based on multi-scale dense network
US8948540B2 (en) Optimized orthonormal system and method for reducing dimensionality of hyperspectral images
CN111080567A (en) Remote sensing image fusion method and system based on multi-scale dynamic convolution neural network
CN109872305B (en) No-reference stereo image quality evaluation method based on quality map generation network
US20130129256A1 (en) Spectral image dimensionality reduction system and method
CN110059728B (en) RGB-D image visual saliency detection method based on attention model
US11216913B2 (en) Convolutional neural network processor, image processing method and electronic device
CN110225350B (en) Natural image compression method based on generation type countermeasure network
Zhang et al. LR-Net: Low-rank spatial-spectral network for hyperspectral image denoising
CN113139902A (en) Hyperspectral image super-resolution reconstruction method and device and electronic equipment
CN107274441A (en) The wave band calibration method and system of a kind of high spectrum image
CN115700727A (en) Spectral super-resolution reconstruction method and system based on self-attention mechanism
CN114266957A (en) Hyperspectral image super-resolution restoration method based on multi-degradation mode data augmentation
CN112163998A (en) Single-image super-resolution analysis method matched with natural degradation conditions
Wu et al. Hprn: Holistic prior-embedded relation network for spectral super-resolution
CN113743001B (en) Spectral super-resolution reconstruction-oriented optical filter design method, optical filter and system
CN114511470B (en) Attention mechanism-based double-branch panchromatic sharpening method
CN114638761A (en) Hyperspectral image panchromatic sharpening method, device and medium
CN113066030B (en) Multispectral image panchromatic sharpening method and system based on space-spectrum fusion network
CN107895063A (en) One kind compression EO-1 hyperion mask optimization method
CN110807746B (en) Hyperspectral image sharpening method based on detail embedded injection convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant