CN110033032B - Tissue slice classification method based on microscopic hyperspectral imaging technology - Google Patents


Publication number
CN110033032B
CN110033032B · Application CN201910250023.0A
Authority
CN
China
Prior art keywords
classification
training
cnn model
dimensional
spectral
Prior art date
Legal status (an assumption, not a legal conclusion)
Active
Application number
CN201910250023.0A
Other languages
Chinese (zh)
Other versions
CN110033032A (en)
Inventor
胡炳樑 (Hu Bingliang)
杜剑 (Du Jian)
张周锋 (Zhang Zhoufeng)
于涛 (Yu Tao)
Current Assignee (the listed assignees may be inaccurate)
Xi'an kanghuixin Optical Inspection Technology Co.,Ltd.
Original Assignee
XiAn Institute of Optics and Precision Mechanics of CAS
Priority date (an assumption, not a legal conclusion)
Filing date
Publication date
Application filed by XiAn Institute of Optics and Precision Mechanics of CAS filed Critical XiAn Institute of Optics and Precision Mechanics of CAS
Priority to CN201910250023.0A
Publication of CN110033032A
Application granted
Publication of CN110033032B

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/17Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N21/25Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/254Fusion techniques of classification results, e.g. of results related to same input data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods


Abstract

The invention provides a tissue slice classification method based on microscopic hyperspectral imaging technology. First, the microscopic hyperspectral data are preprocessed to suppress noise and remove data redundancy. Three classes of Convolutional Neural Network (CNN) models are then established and trained: a one-dimensional CNN model realizes spectral-dimension feature extraction and classification, while the two-dimensional and three-dimensional CNN models each realize joint spatial-spectral feature extraction and classification. For the actual microscopic hyperspectral image to be examined, the final classification result is obtained by quantitative and qualitative analysis and by voting on the model outputs. The invention uses deep-learning convolutional neural network models to extract and classify deep features of both tissue structure and spectral information, improving the overall classification accuracy and speed while completing an automated data acquisition and classification process for pathological sections.

Description

Tissue slice classification method based on microscopic hyperspectral imaging technology
Technical Field
The invention belongs to the technical field of medical image signal processing, and particularly relates to a tissue slice classification method based on a microscopic hyperspectral imaging technology.
Background
Medical hyperspectral imaging is a comprehensive cross-disciplinary technology built on clinical medicine, imaging, medical sensing, pathological tissue analysis, and other fields, and is a new application area of hyperspectral technology that has emerged in recent years.
The microscopic hyperspectral technique integrates a microscopic imaging system to obtain microscopic hyperspectral images with higher spatial resolution, making the study of microscopic-scale objects (such as tissue slices, cells, and microorganisms) possible. Microscopic hyperspectral imaging combines two traditional optical diagnosis techniques, spectral analysis and optical imaging, and can simultaneously provide both spectral and image information about a biological tissue sample. Spectral analysis obtains the complete spectrum of a point on the slice sample within a wavelength range of interest and allows the biochemical composition of the tissue cells to be analyzed; optical imaging records gray-scale or color images of the sample, allowing morphological analysis of the slice tissue.
After a microscopic hyperspectral image of a tissue section is obtained, data preprocessing and feature extraction are generally required; the key is to analyze the morphological and spectral differences between cancerous and normal tissue. At present, the methods adopted in related research at home and abroad mostly concentrate on shallow learning algorithms, commonly Principal Component Analysis (PCA) and Support Vector Machines (SVM), or spectrum-based methods such as Spectral Angle Mapping (SAM) and spectral curve matching. For example, Akbari et al. acquired microscopic hyperspectral images of lung metastatic tumors at 450-950 nm and classified them with an SVM, detecting metastatic lung cancer tissue with a sensitivity of 92.6%; Li Qingli et al. at East China Normal University collected motor and sensory nerve cells with a micro-spectral imaging system and used an improved SAM algorithm for nerve-cell classification. Although these methods have achieved some results, further improvement is needed in classification accuracy, computational efficiency, adaptivity, and portability. Their results cannot meet the demand of precision medicine for accurate disease localization; in particular, when facing more complex disease subtypes and branches, the image features are not distinctive enough and the spectral feature differences are small, making it difficult for traditional algorithms and simple discrimination models to extract deep features for effective discrimination and classification.
Disclosure of Invention
In order to overcome the shortcomings of existing methods, the invention aims to provide a tissue section classification method based on microscopic hyperspectral imaging technology, improving the overall classification accuracy and speed and completing an automated data acquisition and classification process for pathological sections.
The solution of the invention is as follows:
the tissue slice classification method based on the microscopic hyperspectral imaging technology comprises the following steps:
1) system modeling
1.1) preprocessing the microscopic hyperspectral data of a training set to suppress noise and remove data redundancy;
1.2) training of three classes of Convolutional Neural Network (CNN) models
1.2a) one-dimensional CNN model
Establishing a CNN model, inputting the preprocessed data into it, and training the CNN model on the one-dimensional spectral curves (one-dimensional spectral data) in the preprocessed data to realize spectral-dimension feature extraction and classification; determining an approximate number of network layers from the number of samples and the spectral dimension, then adjusting the network structure according to the training results, and tuning the parameters of each layer to optimize the network model;
1.2b) two-dimensional CNN model
Establishing a two-dimensional CNN model by drawing on the model parameters and structure determined in step 1.2a);
performing Principal Component Analysis (PCA) on the preprocessed data, selecting the first m principal components as an approximate representation of the original image, and taking the K × K neighborhood of the current pixel (i.e., K × K × m) as the input of the two-dimensional CNN model; after training, the preprocessed data are converted into a series of feature vectors; in addition, spectral-line features (one-dimensional spectral data) are extracted from the preprocessed data; both kinds of features are input into an LR layer for classification, thereby realizing joint spatial-spectral feature extraction and classification;
1.2c) three-dimensional CNN model
Establishing a three-dimensional CNN model by drawing on the model parameters and structures determined in steps 1.2a) and 1.2b) (i.e., on both the one-dimensional and two-dimensional CNN models);
taking the K × K × b neighborhood of the current pixel as the input of the three-dimensional CNN model, where b is the number of spectral bands; after training, the resulting series of feature vectors is input into an LR layer for classification, further realizing joint spatial-spectral feature extraction and classification;
2) tissue section classification aiming at actual microscopic hyperspectral image to be detected
Referring to the step 1.1), preprocessing actual microscopic hyperspectral data to be detected;
carrying out quantitative and qualitative analysis on the preprocessed data, and evaluating from the number of samples (order of magnitude) and the spectral features (number of spectral bands and spectral resolution) whether spectral-dimension feature extraction and classification alone can meet the requirements;
if so, inputting the preprocessed data into the one-dimensional CNN model trained in the step 1.2a) to realize the feature extraction and classification of the spectral dimension, and taking the classification result output by the model as a final classification result;
if not, with reference to steps 1.2b) and 1.2c) respectively, joint spatial-spectral feature extraction and classification are realized with the trained two-dimensional and three-dimensional CNN models (for the two-dimensional CNN model, the preprocessed data are first subjected to PCA, K × K × m is used as the model input, and the one-dimensional spectral data are combined at the LR layer to obtain a classification result; for the three-dimensional CNN model, the K × K × b neighborhood is used directly as the model input to obtain a classification result); finally, the classification results output by the two models are fused by voting (decision fusion using a voting method and a linear opinion pool) to obtain the final classification result.
Wherein step 1.1) may specifically be: first, low-pass filtering is adopted to eliminate the influence of random noise; each spectral-band image is then destriped, and an S-G (Savitzky-Golay) first derivative is used to eliminate the influence of high-frequency noise; and whitening is employed to reduce the correlation in the data.
Step 1.2a) may specifically comprise the steps of:
(1) initialization:
randomly initialize the network parameters $\theta$; set iter = 0, err = 0, $n_b$ = 0; determine each layer type and activation-function type; determine the model input $n_1$, the output $n_p$, the number of iterations $I_{max}$, and the learning rate $\alpha$;
(2) iterative training:
Firstly, input the one-dimensional spectral data; let $x_i$ be the input of the $i$-th layer, and compute the output of each network layer:

$$x_{i+1} = s\left(W_i x_i + b_i\right)$$

where $W_i$ and $b_i$ are respectively the weight matrix and bias matrix of the $i$-th layer, $s$ is the activation function, and the output layer gives $P(y = l)$, the predicted probability of belonging to the $l$-th class in the current iteration.

Then, the cost function $J(\theta)$ and its partial derivatives $\partial J(\theta)/\partial \theta_i$ are calculated:

$$J(\theta) = -\frac{1}{m}\sum_{j=1}^{m}\left[\,Y_j \cdot \log y_j + (1 - Y_j) \cdot \log(1 - y_j)\,\right]$$

$$\frac{\partial J(\theta)}{\partial \theta_i} = \frac{1}{m}\sum_{j=1}^{m} x_j\left(y_j - Y_j\right)$$

where $m$ is the number of training samples, $Y$ is the target output, $y$ is the predicted output, and $\cdot$ denotes the dot product.

The parameter $\theta$ is continuously updated by gradient descent during training:

$$\theta := \theta - \alpha\,\frac{\partial J(\theta)}{\partial \theta}$$

Finally, as the cost function decreases, the network is gradually trained toward the optimum; training of the CNN model ends once the cost falls below the set threshold.
The invention has the following technical effects:
the invention researches the tissue structure form and the spectrum information in the pathological section, and utilizes the deep learning convolutional neural network model to extract and classify the deep features of the tissue structure form and the spectrum information, thereby improving the overall classification precision and speed and perfecting the automatic data acquisition and classification process of the pathological section.
Drawings
FIG. 1 is a schematic diagram of the construction of the experimental apparatus of the present invention.
Fig. 2 is a CNN modeling flowchart of the present invention.
Fig. 3 is a diagram of a two-dimensional CNN model-based spectrum-spectrum dimension combined feature extraction and classification flow chart.
FIG. 4 is a schematic diagram of the tissue section classification method of the present invention.
FIG. 5 is a comparison of the classification results of the CNN classification model of the present invention and other methods.
Detailed Description
In order to make the technical solution and advantages of the present invention more clear, the present invention will be described in more detail with reference to the accompanying drawings and specific embodiments.
As shown in figure 1, the microscopic hyperspectral imager consists of a hyperspectral imaging system, a biological microscope system, and a control computer. The system covers 256 spectral bands over the range 400 nm to 1000 nm, with an average spectral resolution of 3 nm, a spatial resolution of up to 0.5 μm, and an image size of 753 × 696 pixels.
During experiments, microscope objectives with different magnifications (e.g., 4×, 10×, 20×, 40×, and 100×) can be selected according to the target. The light-source intensity is adjusted, the focusing mechanism is tuned so that the image does not saturate, a target area is selected, and a microscopic hyperspectral image of the pathological section in that area is acquired. Keeping the light-source brightness and magnification unchanged, the section is moved to select other areas (or replaced with other pathological sections), and multiple microscopic hyperspectral images are collected in the same way and stored by class. Taking gastric-cancer pathological sections as an example, the cancerous and normal tissue sections come from samples of multiple gastric-cancer patients and normal subjects, respectively; the excised pathological tissue is embedded, sectioned, dewaxed, and then H-E stained. After staining, physicians mark the sections in detail to delineate cancerous and normal regions, on which data acquisition and classification training are based during the experiments.
Step one, preprocessing the microscopic hyperspectral data: considering the influence of the light source and the sensor on imaging quality, low-pass filtering is first applied to remove random noise; each spectral-band image is then destriped, and an S-G (Savitzky-Golay) first derivative is used to suppress high-frequency noise; finally, whitening is applied, mainly to reduce the correlation among features and the data complexity, which benefits the stability and efficiency of the later models.
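The smoothing and derivative steps above can be sketched with SciPy's Savitzky-Golay filter; the cube layout (height, width, bands), window length, and polynomial order below are illustrative assumptions, not values given in the patent.

```python
import numpy as np
from scipy.signal import savgol_filter

def preprocess_cube(cube, window=7, polyorder=2):
    """Smooth and differentiate a hyperspectral cube of shape (H, W, bands).

    A Savitzky-Golay fit along the band axis acts as the low-pass
    smoothing step, and its first derivative (deriv=1) suppresses
    high-frequency noise, as described in step one.
    """
    smoothed = savgol_filter(cube, window, polyorder, deriv=0, axis=-1)
    deriv1 = savgol_filter(cube, window, polyorder, deriv=1, axis=-1)
    return smoothed, deriv1

cube = np.random.rand(8, 8, 32)        # toy 8x8 image with 32 bands
smoothed, d1 = preprocess_cube(cube)
print(smoothed.shape, d1.shape)        # (8, 8, 32) (8, 8, 32)
```

Both outputs keep the cube's shape, so later stages (whitening, CNN input construction) can treat them interchangeably with the raw data.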
The main process of the whitening treatment is as follows:
(1) First, let the original hyperspectral data be $x$ and construct the autocorrelation matrix of the original data, $R_x = E(xx^T) \neq I$.
(2) Then find a matrix $B$ that transforms $x$ by $y = Bx$ such that the autocorrelation matrix $R_y = B\,E(xx^T)\,B^T = I$.
(3) With the eigendecomposition $R_x = \Phi\Lambda\Phi^T$, the transform $B = \Lambda^{-1/2}\Phi^T$ gives $R_y = (\Lambda^{-1/2}\Phi^T)\,\Phi\Lambda\Phi^T\,(\Lambda^{-1/2}\Phi^T)^T = I$.
Finally, after the transformation by $B$, the components of $y$ are uncorrelated, which achieves the goal of eliminating data redundancy.
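The three whitening steps map directly onto a few lines of NumPy; treating rows of X as pixel spectra is an assumption made here for illustration.

```python
import numpy as np

def whiten(X):
    """PCA-whiten the rows of X (n_samples x n_features) so that the
    autocorrelation matrix of the result is the identity.

    Implements B = Lambda^{-1/2} Phi^T from the eigendecomposition
    R_x = Phi Lambda Phi^T of the autocorrelation matrix.
    """
    R = X.T @ X / X.shape[0]            # autocorrelation matrix R_x
    eigval, Phi = np.linalg.eigh(R)     # R_x = Phi Lambda Phi^T
    B = np.diag(1.0 / np.sqrt(eigval)) @ Phi.T
    return X @ B.T                      # y = B x for every sample

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5)) @ rng.normal(size=(5, 5))  # correlated data
Y = whiten(X)
Ry = Y.T @ Y / Y.shape[0]
print(np.allclose(Ry, np.eye(5), atol=1e-8))  # True
```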
Step two, extracting and classifying spectral dimensional features:
inputting the preprocessed data into a CNN model for training, firstly establishing a 1-D CNN model, determining the number of approximate network layers according to the number of samples and spectral dimensions, then adjusting the network structure according to the training result, and adjusting parameters of each layer to optimize the network model. The final CNN model contains seven layers, including input layer I1, two convolutional layers C2 and C4, two pooling layers P3 and P5, full-link layer F6, and output layer O7. The first convolutional layer contains 8 convolutional kernels, and the size of the convolutional kernel is 5; the second convolution kernel contains 16 convolution kernels, with a convolution kernel size of 5. Thereafter, a maxporoling pooling layer was ligated, followed by a fully-ligated layer containing 100 neurons. Meanwhile, overfitting can be effectively prevented by applying the nonlinear functions ReLU and dropout method, and when the dropout parameter is set to be 0.25, the training convergence speed is fastest.
The main process of training the CNN model for one-dimensional spectral curves is shown in fig. 2:
(1) Initialization: randomly initialize the network parameters $\theta$; set iter = 0, err = 0, $n_b$ = 0; determine each layer type and activation-function type; determine the model input $n_1$, the output $n_p$, the number of iterations $I_{max}$, and the learning rate $\alpha$.
(2) Iterative training:
Firstly, input the one-dimensional spectral data and compute the output of each network layer:

$$x_{i+1} = s\left(W_i x_i + b_i\right)$$

where $W_i$ and $b_i$ are respectively the weight matrix and bias matrix of the $i$-th layer, and $y$ is the currently predicted probability of each class.

The cost function $J(\theta)$ and its partial derivatives are then calculated:

$$J(\theta) = -\frac{1}{m}\sum_{j=1}^{m}\left[\,Y_j \cdot \log y_j + (1 - Y_j) \cdot \log(1 - y_j)\,\right]$$

$$\frac{\partial J(\theta)}{\partial \theta_i} = \frac{1}{m}\sum_{j=1}^{m} x_j\left(y_j - Y_j\right)$$

where $m$ is the number of training samples, $Y$ is the target output, $y$ is the predicted output, and $\cdot$ denotes the dot product.

The parameter $\theta$ is continuously updated by gradient descent during training:

$$\theta := \theta - \alpha\,\frac{\partial J(\theta)}{\partial \theta}$$

Finally, as the cost function decreases, the network is gradually trained toward the optimum; training of the CNN model ends once the cost falls below a set threshold.
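The iteration above can be exercised on a toy problem; a single logistic output unit stands in for the full CNN so the cost $J(\theta)$ and the update $\theta := \theta - \alpha\,\partial J/\partial\theta$ are visible. The data, learning rate, and iteration count are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic(X, Y, alpha=0.5, i_max=2000):
    """Gradient-descent loop for theta := theta - alpha * dJ/dtheta."""
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(i_max):                 # up to I_max iterations
        y = sigmoid(X @ theta)             # prediction output y
        grad = X.T @ (y - Y) / m           # dJ/dtheta for cross-entropy
        theta -= alpha * grad              # parameter update
    # cross-entropy cost J(theta) after training (clipped for stability)
    y = np.clip(sigmoid(X @ theta), 1e-12, 1 - 1e-12)
    J = -np.mean(Y * np.log(y) + (1 - Y) * np.log(1 - y))
    return theta, J

rng = np.random.default_rng(1)
X = np.hstack([np.ones((200, 1)), rng.normal(size=(200, 2))])
labels = (X[:, 1] + X[:, 2] + rng.normal(scale=0.3, size=200) > 0).astype(float)
theta, J = train_logistic(X, labels)
acc = np.mean((sigmoid(X @ theta) > 0.5) == labels)
print(round(float(acc), 2), np.isfinite(J))
```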
Step three, spectrum-spectrum dimension combined feature extraction and classification:
To exploit spatial information of pathological sections more effectively, such as tissue texture and cell structure, more detailed image-dimension features need to be learned on top of the spectral dimension, which can effectively improve model performance and tumor-tissue classification efficiency. This is realized with a 2-D CNN model and a 3-D CNN model, respectively:
A 2-D CNN model is established, comprising 3 convolutional layers, ReLU activation functions, and max-pooling layers. For better generalization, the number of model parameters (the number of layers and the scale of each layer) must be controlled, so a relatively small convolutional network is used, with a modest number of filters per layer. Before training, Principal Component Analysis (PCA) is first performed on the raw data, the first m principal components are selected as an approximate representation of the original image, and the K × K neighborhood of the current pixel (i.e., K × K × m) is used as the model input. Considering the image size, cell structure, and slice-tissue characteristics, the neighborhood window K is set to 45, the convolution kernel size is 5, and the three convolutional layers have 32, 64, and 128 kernels, respectively. The amount of retained spectral information can be adjusted by changing the number of selected principal components; experiments show that retaining 3 to 5 principal components yields good classification accuracy. For example, the first 3 principal components after PCA may be selected as an approximate representation of the raw data, with a 45 × 45 neighborhood of the current pixel (i.e., 45 × 45 × 3) as the model input. ReLU activation and dropout are used during training to effectively suppress overfitting. After CNN training, the raw data are converted into a series of feature vectors; these features are input into an LR layer for classification, with the one-dimensional spectral data also fed in as features, realizing joint spatial-spectral feature extraction and classification, as shown in FIG. 3.
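The PCA-plus-neighborhood input construction for the 2-D CNN can be sketched in NumPy; reflect-padding at the image border is an assumption the patent does not specify.

```python
import numpy as np

def pca_patches(cube, m=3, K=45):
    """Reduce a (H, W, B) cube to its first m principal components and
    cut a K x K x m neighborhood around any pixel, as the 2-D CNN input
    described above (K = 45, m = 3 follow the text).
    """
    H, W, B = cube.shape
    flat = cube.reshape(-1, B)
    flat = flat - flat.mean(axis=0)          # center before PCA
    _, _, Vt = np.linalg.svd(flat, full_matrices=False)
    pcs = (flat @ Vt[:m].T).reshape(H, W, m)  # first m principal components
    r = K // 2
    padded = np.pad(pcs, ((r, r), (r, r), (0, 0)), mode="reflect")

    def patch(row, col):                      # K x K x m input for one pixel
        return padded[row:row + K, col:col + K, :]

    return pcs, patch

cube = np.random.rand(60, 60, 32)             # toy cube, 32 bands
pcs, patch = pca_patches(cube)
print(pcs.shape, patch(0, 0).shape)           # (60, 60, 3) (45, 45, 3)
```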
A 3-D CNN model is established, taking the K × K × b neighborhood of the current pixel as input, where b is the number of spectral bands; each layer uses convolution kernels of size 5 × 5 × 32, downsampling is performed by 2 × 2 pooling kernels, and the result is finally input into an LR layer for classification, realizing joint spatial-spectral feature extraction and classification. ReLU activation and dropout are used during training to effectively suppress overfitting. Under a suitable 3-D CNN framework, neighborhood pixels across all spectral bands can be used to fully learn the spectral and spatial characteristics of tumor tissue.
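A toy 3-D CNN in the same spirit (3-D convolutions over a K × K × b cube, pooled and fed to a small output layer) might look as follows, again assuming PyTorch; the channel counts and the small K and b here are assumptions chosen only to keep the example light.

```python
import torch
import torch.nn as nn

class SpectralSpatialCNN3D(nn.Module):
    """Sketch of a 3-D CNN over a K x K x b neighborhood cube."""

    def __init__(self, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(5, 5, 5), padding=2),  # 3-D kernels
            nn.ReLU(),
            nn.MaxPool3d((2, 2, 2)),                            # 2x2(x2) pooling
            nn.Conv3d(8, 16, kernel_size=(3, 3, 3), padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
            nn.Linear(16, n_classes),      # LR-style output layer
        )

    def forward(self, x):                  # x: (batch, 1, K, K, b)
        return self.net(x)

model = SpectralSpatialCNN3D()
logits = model(torch.randn(2, 1, 16, 16, 32))   # K = 16, b = 32 toy cube
print(tuple(logits.shape))                      # (2, 2)
```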
Step four, classifier integration and result visualization:
and selecting one of the classification results of the CNN models as a final classification result of the tumor tissue and the normal tissue.
For hyperspectral images of tissue sections, it helps to better understand the relation between medical pathology and the trained features, and the model can be interpreted in terms of the learned features of different clinical manifestations of cancer. Therefore, a deconvolution network is attached to the 2-D CNN model; it has no learning capability and is only used to probe the trained CNN. The feature maps produced by each layer are taken as input and, after activation, the corresponding input stimuli are reconstructed back to the original input layer by operations such as unpooling, inverse activation, and deconvolution. The reconstructed stimuli reveal the information the features respond to; analyzing this information enables model tuning and the study of the clinical interpretation linking the learned features to cancer classification.
As shown in fig. 4, in the tissue-section classification method of the present invention, the actual microscopic hyperspectral data to be measured are first preprocessed. Quantitative and qualitative analysis is then performed on the preprocessed data, evaluating from the number of samples (order of magnitude) and the spectral features whether spectral-dimension feature extraction and classification alone can meet the requirements. If so, the preprocessed data are input into the trained one-dimensional CNN model to realize spectral-dimension feature extraction and classification, and the classification result output by the model is taken as the final result. If not, joint spatial-spectral feature extraction and classification are realized with the trained two-dimensional and three-dimensional CNN models, and the classification results output by the two models are fused by voting (decision fusion using a voting method and a linear opinion pool) to obtain the final classification result.
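The decision-fusion step, a linear opinion pool over the 2-D and 3-D CNN posteriors, can be sketched as below; the equal weights are an assumption, since the patent does not give them.

```python
import numpy as np

def linear_opinion_pool(prob_2d, prob_3d, w=(0.5, 0.5)):
    """Fuse per-pixel class probabilities from the 2-D and 3-D CNNs.

    A linear opinion pool is a weighted average of the two posteriors;
    the fused class label is the argmax of the pooled probability.
    """
    pooled = w[0] * prob_2d + w[1] * prob_3d
    return pooled.argmax(axis=-1)

# toy posteriors for 4 pixels, 2 classes (cancerous vs normal)
p2d = np.array([[0.9, 0.1], [0.4, 0.6], [0.55, 0.45], [0.2, 0.8]])
p3d = np.array([[0.8, 0.2], [0.3, 0.7], [0.35, 0.65], [0.1, 0.9]])
labels = linear_opinion_pool(p2d, p3d)
print(labels.tolist())   # [0, 1, 1, 1]
```

Note how the third pixel, which the 2-D model alone would call class 0 (0.55 vs 0.45), is flipped to class 1 once the more confident 3-D posterior is pooled in.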
Comparing the method of the present invention with other methods, the accuracy, sensitivity, and specificity results of the different methods are shown in FIG. 5. The experimental results show that the tissue-section classification method based on microscopic hyperspectral imaging can efficiently extract the differences in structural and spectral features between pathological and normal tissue, and that the CNN models established on this basis can accurately identify pathological versus normal tissue, thereby realizing an automated data acquisition and classification process for pathological sections.

Claims (3)

1. A tissue slice classification method based on a microscopic hyperspectral imaging technology is characterized by comprising the following steps:
1) system modeling
1.1) preprocessing the microscopic hyperspectral data of a training set to suppress noise and remove data redundancy;
1.2) training of three classes of Convolutional Neural Network (CNN) models
1.2a) one-dimensional CNN model
Establishing a CNN model, inputting the preprocessed data into it, and training the CNN model on the one-dimensional spectral curves in the preprocessed data to realize spectral-dimension feature extraction and classification; determining an approximate number of network layers from the number of samples and the spectral dimension, then adjusting the network structure according to the training results, and tuning the parameters of each layer to optimize the network model;
1.2b) two-dimensional CNN model
Establishing a two-dimensional CNN model by using the model parameters and the structure determined in the step 1.2 a);
performing Principal Component Analysis (PCA) on the preprocessed data, selecting the first m principal components as an approximate representation of the original image, and taking the K × K neighborhood of the current pixel as the input of the two-dimensional CNN model; after training, the preprocessed data are converted into a series of feature vectors; in addition, extracting spectral-line features from the preprocessed data; and inputting both kinds of features into an LR layer for classification, thereby realizing joint spatial-spectral feature extraction and classification;
1.2c) three-dimensional CNN model
Establishing a three-dimensional CNN model by using the model parameters and structures determined in the step 1.2a) and the step 1.2 b);
taking the K × K × b neighborhood of the current pixel as the input of the three-dimensional CNN model, where b is the number of spectral bands; after training, inputting the resulting series of feature vectors into an LR layer for classification, further realizing joint spatial-spectral feature extraction and classification;
2) tissue section classification aiming at actual microscopic hyperspectral image to be detected
Referring to the step 1.1), preprocessing actual microscopic hyperspectral data to be detected;
carrying out quantitative and qualitative analysis on the preprocessed data, and evaluating from the number of samples and the spectral features whether spectral-dimension feature extraction and classification alone can meet the requirements;
if so, inputting the preprocessed data into the one-dimensional CNN model trained in the step 1.2a) to realize the feature extraction and classification of the spectral dimension, and taking the classification result output by the model as a final classification result;
if not, with reference to steps 1.2b) and 1.2c) respectively, realizing joint spatial-spectral feature extraction and classification through the trained two-dimensional and three-dimensional CNN models; and finally voting on the classification results output by the two models to obtain the final classification result.
2. The tissue slice classification method based on the microscopic hyperspectral imaging technology according to claim 1, characterized in that step 1.1) is specifically: first, low-pass filtering is adopted to eliminate the influence of random noise; each spectral-band image is then destriped, and an S-G first derivative is used to eliminate the influence of high-frequency noise; and whitening is employed to reduce the correlation in the data.
3. The tissue slice classification method based on the microscopic hyperspectral imaging technology according to claim 1, characterized in that step 1.2a) specifically comprises the following steps:
(1) Initialization:
randomly initializing the network parameters θ and setting iter = 0, err = 0, n_b = 0; determining each layer type and the activation function type; determining the model input n_1, the output n_p, the number of iterations I_max, and the learning rate α;
(2) Iterative training:
firstly, inputting the one-dimensional spectral data; letting x_i be the input of the i-th layer, calculating the output of each network layer:

x_{i+1} = s(W_i · x_i + b_i)

wherein W_i and b_i are respectively the weight matrix and the bias matrix of the i-th layer, s is the excitation function, and P(y = l) is the predicted probability of belonging to the l-th class in the current iteration;
then, calculating the cost function J(θ) and its partial derivatives ∂J(θ)/∂θ_i:

J(θ) = -(1/m) Σ_{i=1..m} [ Y_i ⊙ log(y_i) + (1 - Y_i) ⊙ log(1 - y_i) ]

∂J(θ)/∂θ_j = (1/m) Σ_{i=1..m} x_i (y_i - Y_i)

where m is the number of training samples, Y is the target output, y is the prediction output, and ⊙ represents the dot product function;
continuously updating the parameter θ by the gradient descent method during training:

θ ← θ - α · ∂J(θ)/∂θ
and finally, as the value of the cost function becomes smaller, gradually training the network toward the optimum; training of the CNN model is completed once the set threshold is reached.
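The iterative training of claim 3 can be sketched for the final LR (softmax) layer alone, trained by gradient descent on the cross-entropy cost. This is a minimal numpy illustration, not the patented model: `train_softmax_lr`, the learning rate, and the iteration count are assumptions, and the convolutional layers that would produce the feature vectors are omitted.

```python
import numpy as np

def train_softmax_lr(X, Y, alpha=0.5, i_max=500):
    """Gradient-descent training of an LR (softmax) output layer.

    X: (m, n) feature matrix (stand-in for CNN-extracted features).
    Y: (m, k) one-hot target outputs.
    Returns the learned parameters theta and the final cost J.
    """
    m, n = X.shape
    k = Y.shape[1]
    theta = np.zeros((n, k))
    J = None
    for _ in range(i_max):
        # Forward pass: P(y = l) via a numerically stable softmax.
        scores = X @ theta
        scores -= scores.max(axis=1, keepdims=True)
        p = np.exp(scores)
        p /= p.sum(axis=1, keepdims=True)
        # Cross-entropy cost J(theta) over the m training samples.
        J = -np.sum(Y * np.log(p + 1e-12)) / m
        # Gradient dJ/dtheta, then the update theta <- theta - alpha * grad.
        grad = X.T @ (p - Y) / m
        theta -= alpha * grad
    return theta, J
```

In the claim, training stops when the cost falls below a set threshold; the fixed iteration count here plays the same role for the sketch.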
CN201910250023.0A 2019-03-29 2019-03-29 Tissue slice classification method based on microscopic hyperspectral imaging technology Active CN110033032B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910250023.0A CN110033032B (en) 2019-03-29 2019-03-29 Tissue slice classification method based on microscopic hyperspectral imaging technology


Publications (2)

Publication Number Publication Date
CN110033032A CN110033032A (en) 2019-07-19
CN110033032B true CN110033032B (en) 2020-12-25

Family

ID=67236939

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910250023.0A Active CN110033032B (en) 2019-03-29 2019-03-29 Tissue slice classification method based on microscopic hyperspectral imaging technology

Country Status (1)

Country Link
CN (1) CN110033032B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110648763A (en) * 2019-09-29 2020-01-03 江苏拉曼医疗设备有限公司 Method and apparatus for tumor assessment using artificial intelligence for spectral analysis
CN110991339B (en) * 2019-12-02 2023-04-28 太原科技大学 Three-dimensional palate wrinkle identification method adopting cyclic frequency spectrum
CN111007021A (en) * 2019-12-31 2020-04-14 北京理工大学重庆创新中心 Hyperspectral water quality parameter inversion system and method based on one-dimensional convolution neural network
CN113450305B (en) * 2020-03-26 2023-01-24 太原理工大学 Medical image processing method, system, equipment and readable storage medium
CN112861627A (en) * 2021-01-07 2021-05-28 中国科学院西安光学精密机械研究所 Pathogenic bacteria species identification method and system based on microscopic hyperspectral technology
CN112819032B (en) * 2021-01-11 2023-10-27 平安科技(深圳)有限公司 Multi-model-based slice feature classification method, device, equipment and medium
CN113239755B (en) * 2021-04-28 2022-06-21 湖南大学 Medical hyperspectral image classification method based on space-spectrum fusion deep learning
CN115859029B (en) * 2022-11-29 2023-09-15 长沙理工大学 Spectrum quantitative analysis method based on two-dimensional reconstruction

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106845381A (en) * 2017-01-16 2017-06-13 西北工业大学 Sky based on binary channels convolutional neural networks composes united hyperspectral image classification method
EP3270095A1 (en) * 2016-07-13 2018-01-17 Sightline Innovation Inc. System and method for surface inspection
CN107798348A (en) * 2017-10-27 2018-03-13 广东省智能制造研究所 Hyperspectral image classification method based on neighborhood information deep learning

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8577820B2 (en) * 2011-03-04 2013-11-05 Tokyo Electron Limited Accurate and fast neural network training for library-based critical dimension (CD) metrology
CN105095964B (en) * 2015-08-17 2017-10-20 杭州朗和科技有限公司 A kind of data processing method and device
CN109471074B (en) * 2018-11-09 2023-04-21 西安电子科技大学 Radar radiation source identification method based on singular value decomposition and one-dimensional CNN network


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"Learning Contextual Dependence With Convolutional Hierarchical Recurrent Neural Networks"; Zhen Zuo et al.; IEEE Transactions on Image Processing; July 2016; Vol. 25, No. 7, pp. 2983-2996 *
"Quality Identification of Macadamia Nuts Based on Convolutional Neural Network and Spectral Features"; Du Jian et al.; Spectroscopy and Spectral Analysis; May 2018; Vol. 38, No. 5, pp. 1514-1519 *
"Classification of Gastric Cancer Tissue Based on Convolutional Neural Network and Microscopic Hyperspectral Imaging"; Du Jian et al.; Acta Optica Sinica; June 2018; Vol. 38, No. 6, pp. 1-7 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220509

Address after: 710100 room 312, third floor, building 2, South No. 17, information Avenue, new industrial park, high tech Zone, Xi'an, Shaanxi Province

Patentee after: XI'AN ZHONGKE INTEL SPECTRUM TECHNOLOGY CO.,LTD.

Address before: 710119, No. 17, information Avenue, new industrial park, hi tech Zone, Shaanxi, Xi'an

Patentee before: XI'AN INSTITUTE OF OPTICS AND PRECISION MECHANICS, CHINESE ACADEMY OF SCIENCES

TR01 Transfer of patent right

Effective date of registration: 20220621

Address after: Room 1020, tower C, Chaoyang International, No. 166, Changle West Road, Xincheng District, Xi'an, Shaanxi 710032

Patentee after: Xi'an kanghuixin Optical Inspection Technology Co.,Ltd.

Address before: 710100 room 312, third floor, building 2, South No. 17, information Avenue, new industrial park, high tech Zone, Xi'an, Shaanxi Province

Patentee before: XI'AN ZHONGKE INTEL SPECTRUM TECHNOLOGY CO.,LTD.