CN114139588A - Depth feature fusion-based pathological image classification device and method and use method of device - Google Patents

Depth feature fusion-based pathological image classification device and method and use method of device

Info

Publication number
CN114139588A
CN114139588A
Authority
CN
China
Prior art keywords
image
convolution
feature
classification
deep
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010818297.8A
Other languages
Chinese (zh)
Inventor
Wei Xiangguo
Gao Xiang
Zhong Fei
Jiang Wei
Li Mingrui
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Boco Inter Telecom Technology Co ltd
Original Assignee
Beijing Boco Inter Telecom Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Boco Inter Telecom Technology Co ltd filed Critical Beijing Boco Inter Telecom Technology Co ltd
Priority to CN202010818297.8A priority Critical patent/CN114139588A/en
Publication of CN114139588A publication Critical patent/CN114139588A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/20 ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Epidemiology (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a pathological image classification device based on depth feature fusion, which comprises: an image dataset acquisition unit that acquires a patch image dataset; a specific number of integration units that receive the patch images in the patch image dataset and, according to set network training parameters, repeatedly derive further weighted and deepened image convolution features through convolution-kernel and weighting-channel operations and dimension-deepening concatenation, extracting deep image convolution features and shallow image convolution features; a feature fusion unit that operates on the obtained deep and shallow image convolution features and concatenates the result with the vectorized deep image convolution features to obtain image depth fusion features; and a classifier that derives a classification label for the image from the image depth fusion features obtained by the feature fusion unit, in combination with a preset classification standard. The invention also discloses a pathological image classification method based on depth feature fusion, and a use method of the pathological image classification device based on depth feature fusion. The invention enables more accurate pathological image classification.

Description

Depth feature fusion-based pathological image classification device and method and use method of device
Technical Field
The invention relates to the field of pathological image classification, in particular to a pathological image classification technology based on depth feature fusion.
Background
In recent years, with the development of deep learning, pathological image classification methods based on convolutional neural networks have made certain progress and have been applied in some computer-aided diagnosis systems. However, owing to the limitations of collecting and labeling pathological image data, problems such as small data volume and noisy data seriously affect the accuracy and reliability of the models.
Experts in the field have therefore carried out a series of studies on pathological image classification methods based on convolutional neural networks. In 2016, H. Kallen trained random forest and SVM classifiers on convolution features extracted by a convolutional neural network to classify prostate cancer pathology images and reduce physicians' workload. In 2018, K. Nagpal used an Inception network to classify prostate cancer pathological images, further improving model precision. Also in 2018, E. Arvaniti trained the lightweight network MobileNet on patch images and obtained higher classification accuracy. In 2019, J. Wang proposed a weakly supervised method for tissue microarray classification using a graph convolutional network, which modeled the spatial organization of cells as a graph to better capture the proliferation and colony structure of tumor cells. The convolutional neural networks adopted by these methods only map the pathological images layer by layer; although this improves classification accuracy, the convolution features are not fused, which limits the classification performance and the classification accuracy of the models.
Therefore, a new technology that fuses the depth features of pathological images to obtain a more discriminative feature representation, and thereby improves the classification accuracy of the model, is urgently needed.
Disclosure of Invention
The invention aims to provide a pathological image classification technology based on depth feature fusion, which achieves a model with high classification precision by the technical means of constructing a deep convolutional neural network and extracting convolution features under different fields of view.
In order to achieve the above object, the present invention discloses a pathological image classification device based on depth feature fusion, the device comprising:
the image data set acquisition unit is used for sampling a characteristic region in the morphological digital section of the prostate pathological image to acquire a small image data set;
a specific number of integration units, used for receiving the patch images in the patch image dataset acquired by the image dataset acquisition unit, enabling the patch images, according to set network training parameters, to repeatedly acquire further weighted and deepened image convolution features through convolution-kernel and weighting-channel operations and dimension-deepening concatenation, and extracting deep image convolution features and shallow image convolution features;
the feature fusion unit is used for calculating the deep image convolution feature and the shallow image convolution feature acquired by the integration unit and then cascading the deep image convolution feature with the vectorized deep image convolution feature to acquire an image depth fusion feature;
and the classifier, used for obtaining a classification label of the image according to the image depth fusion features obtained by the feature fusion unit, in combination with a preset classification standard.
Wherein the integrated unit further comprises:
the attention mechanism modules in specific number are used for enabling the small images to obtain the image convolution characteristics after repeated weighting through the operation of a convolution kernel and a weighting channel;
the down-sampling module is used for extracting the features of different fields of view of the image convolution features repeatedly weighted by the attention mechanism modules in a specific number, and fusing the features in a cascading mode to obtain the deepened image convolution features;
the specific number of integration units are stacked in sequence, each re-extracting the convolution features output by the preceding integration unit; the convolution features extracted by the last integration unit are the deep image convolution features, and the convolution features extracted by the second-to-last integration unit are the shallow image convolution features.
Preferably:
the feature fusion module converts the deep image convolution feature and the shallow image convolution feature into feature matrixes with the same dimensionality, then performs outer product operation, converts the deep image convolution feature into a full convolution vector through convolution operation, and cascades the full convolution vector and the result of the outer product operation to obtain the image depth fusion feature.
To increase the accuracy of the device, it is preferable that:
the device also comprises an error analysis unit and a parameter setting unit;
the error analysis unit is used for calculating a classification error according to the classification result of the classifier;
the parameter setting unit is used for updating and setting the network training parameters according to the classification errors calculated by the error analysis unit;
the small image data set acquired by the image data set acquisition unit is a training set and a test set.
Specifically, the method comprises the following steps:
and presetting different classification standards for different classifiers, and classifying the image depth fusion features by using the corresponding classifiers according to the judgment requirement.
The invention also discloses a pathological image classification method based on depth feature fusion, which comprises the following steps:
sampling a characteristic region in a morphological digital section of the prostate pathological image to obtain a small image data set;
repeatedly acquiring further weighted and deepened image convolution characteristics of the small images in the small image data set through the operation of convolution kernels and weighting channels and the cascade of dimension deepening according to set network training parameters, and extracting deep image convolution characteristics and shallow convolution characteristics;
operating on the deep image convolution features and the shallow image convolution features, then concatenating the result with the vectorized deep image convolution features to obtain image depth fusion features;
and obtaining a classification label of the image by combining a preset classification standard according to the obtained image depth fusion characteristics.
Further, the method for repeatedly obtaining further weighted and deepened image convolution characteristics through the cascade of operation of convolution kernels and weighting channels and dimension deepening on the small images in the small image data set according to the set network training parameters and extracting deep image convolution characteristics and shallow convolution characteristics specifically comprises the following steps:
calculating the small images through a convolution kernel and a weighting channel to obtain the image convolution characteristics after repeated weighting;
extracting features of different fields of view for the repeatedly weighted image convolution features, and fusing the features in a cascading mode to obtain deepened image convolution features;
and extracting the convolution features of the image again, wherein the convolution features extracted for the last time are deep image convolution features, and the convolution features extracted for the second last time are shallow image convolution features.
Preferably:
and converting the deep image convolution feature and the shallow image convolution feature into feature matrixes with the same dimensionality, then performing outer product operation, converting the deep image convolution feature into a full convolution vector through convolution operation, and cascading the full convolution vector and the outer product operation result to obtain the image depth fusion feature.
In order to improve the accuracy of classification, preferably, the method further comprises:
calculating a classification error according to the obtained classification label of the image;
updating and setting the network training parameters according to the errors;
the patch image data sets are training sets and test sets.
Specifically, the method comprises the following steps:
different classification standards are preset, and the image depth fusion features are classified by using the corresponding classification standards according to the judgment requirements.
The invention also discloses a use method of the pathological image classification device based on depth feature fusion, according to the method, the device can be trained into a high-precision classification device, and the method comprises the following steps:
the image data set acquisition unit samples characteristic areas in the morphological digital section of the prostate pathological image to acquire a small image data set, and divides the small image data set into a training set and a test set according to a specific proportion;
a specific number of integration units receive the small images in the training set in the small image dataset acquired by the image dataset acquisition unit, and repeatedly acquire further weighted and deepened image convolution characteristics through the cascade of operation of convolution kernels and weighting channels and dimension deepening according to set initialized network training parameters, and extract deep image convolution characteristics and shallow image convolution characteristics;
the feature fusion unit is used for calculating the deep image convolution feature and the shallow image convolution feature acquired by the integration unit and then cascading the deep image convolution feature with the vectorized deep image convolution feature to acquire an image depth fusion feature;
the classifier obtains a classification label of the image according to the image depth fusion features obtained by the feature fusion unit, in combination with a preset classification standard;
the error analysis unit calculates a classification error according to the image classification label result of the classifier;
the parameter setting unit updates and sets the network training parameters according to the classification errors calculated by the error analysis unit;
and repeatedly executing the above steps for the number of iterations corresponding to the number of training epochs set in the network training parameters.
The invention also discloses a use method of the pathological image classification device based on depth feature fusion, and the method can be used for accurately classifying pathological images by the device, and comprises the following steps:
the image data set acquisition unit samples characteristic areas in the morphological digital section of the prostate pathological image to acquire a small image data set, and divides the small image data set into a training set and a test set according to a specific proportion;
the specific number of integration units receive the patch images of the test set in the patch image dataset acquired by the image dataset acquisition unit, subject the patch images to convolution-kernel and weighting-channel operations and dimension-deepening concatenation according to the network training parameters as finally updated by the parameter setting unit, repeatedly acquire further weighted and deepened image convolution features, and extract deep image convolution features and shallow image convolution features;
the feature fusion unit is used for calculating the deep image convolution feature and the shallow image convolution feature acquired by the integration unit and then cascading the deep image convolution feature with the vectorized deep image convolution feature to acquire an image depth fusion feature;
and the classifier obtains a classification label of the image according to the image depth fusion features obtained by the feature fusion unit, in combination with a preset classification standard.
The invention discloses a pathological image classification technology based on depth feature fusion, which better combines the convolution models and image classification of the prior art and effectively improves the classification precision of prostate pathological images. The method constructs a dataset by dividing the morphological digital slices in the data into a training set and a test set; cuts patch images from the labeled regions of the morphological digital slices according to the labeling information, for training and testing a deep convolutional neural network; constructs an attention mechanism module, a down-sampling module, and a feature fusion module, stacking the modules to build the deep convolutional neural network; trains the deep convolutional neural network by stochastic gradient descent, optimizing its parameters; performs an outer product operation on the convolution features extracted by the deep convolutional neural network to compute bilinear features; obtains a convolution feature vector by a convolution operation on the convolution features; and concatenates the convolution feature vector with the bilinear features to obtain a concatenated vector, which is input into the classifier to obtain the predicted label. The network model constructed by the invention concatenates the convolution feature vector with the bilinear features, ensuring that the concatenated vector carries richer semantic information, improving the image classification performance of the model, and raising the classification precision of prostate pathological images to a new level.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic structural diagram of a pathological image classification device based on depth feature fusion according to an embodiment of the present application;
fig. 2 is a schematic structural diagram provided in the second embodiment of the present application;
fig. 3-1 is a schematic structural diagram of an attention module provided in a third embodiment of the present application;
fig. 3-2 is a schematic structural diagram of a down-sampling module according to a third embodiment of the present application;
fig. 3-3 is a schematic structural diagram of a feature fusion unit provided in the third embodiment of the present application;
fig. 4 is a schematic flowchart of a pathological image classification method based on depth feature fusion according to a fourth embodiment of the present application;
fig. 5 is a schematic flow chart of a method provided in the fifth embodiment of the present application;
fig. 6 is a schematic flowchart of a method for using a depth feature fusion-based pathological image classification device according to a sixth embodiment of the present application;
fig. 7 is a flowchart illustrating a method for using a depth feature fusion-based pathological image classification device according to a seventh embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive effort based on the embodiments of the present invention, are within the scope of the present invention.
The following detailed description of embodiments of the present invention is provided in connection with the accompanying drawings and examples. The following examples are intended to illustrate the invention but are not intended to limit the scope of the invention.
As shown in fig. 1, a depth feature fusion-based pathological image classification device includes:
the image data set acquisition unit 1 is used for sampling a characteristic region in a morphological digital section of a prostate pathological image to acquire a small image data set.
A morphological digital slice, known as a whole slide image (WSI) for short, is a full-field digital slide generated by scanning traditional glass slides with a fully automatic microscope scanning system and seamlessly stitching them with a virtual slide software system. The feature regions in the morphological digital slices, which are the regions of interest to the user, are labeled as Gleason classification type 0, type 3, type 4, type 5, etc. Of course, other labeling methods may also be used to determine the feature region and sample it. The present invention does not limit the manner in which the feature regions are determined and sampled.
Generally, the morphological digital slice is sampled according to the feature region to construct the image dataset required by the neural network model: the slice is tile-sampled by a sliding window of size l with step length s. When the proportion of a sampled picture's central region (of the side length given by the formula image, not reproduced here) that belongs to the labeled region reaches the proportionality coefficient r, the sampled picture is placed into the dataset X and its label acquired, where X = [x_1, x_2, ..., x_N] denotes the sample set of all pictures, each image is denoted x_i, i = 1, 2, ..., N, and N is the number of image samples; Y = [y_1, y_2, ..., y_M] denotes the labels corresponding to the image dataset X. The labels record the annotations of the feature regions.
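The tiled-sampling rule above can be sketched in code. This is an illustrative NumPy sketch under assumed inputs (a binary label mask standing in for the annotated slice; `sample_patches` and the toy mask are hypothetical names, not the patent's implementation):

```python
import numpy as np

def sample_patches(mask, l, s, r):
    """Slide an l-by-l window with stride s over a binary label mask;
    keep a patch position when the labeled fraction inside the
    window reaches the coverage coefficient r."""
    positions = []
    H, W = mask.shape
    for top in range(0, H - l + 1, s):
        for left in range(0, W - l + 1, s):
            window = mask[top:top + l, left:left + l]
            if window.mean() >= r:  # fraction of labeled pixels
                positions.append((top, left))
    return positions

# Toy mask: a 4x4 labeled square inside an 8x8 slide.
mask = np.zeros((8, 8))
mask[2:6, 2:6] = 1
print(sample_patches(mask, l=4, s=2, r=0.9))  # → [(2, 2)]
```

With a stricter coefficient r only the window fully inside the labeled region survives; relaxing r admits partially covered windows as well.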
And the integration units 2 with specific quantity are used for receiving the small images in the small image data set acquired by the image data set acquisition unit, enabling the small images to repeatedly acquire further weighted and deepened image convolution characteristics through the cascade of operation of a convolution kernel and a weighting channel and dimension deepening according to set network training parameters, and extracting deep image convolution characteristics and shallow image convolution characteristics.
For the sake of clear description of the relationship of the integrated units, a plurality of integrated units are shown in the figure, the last two integrated units being the (N-1) th integrated unit and the Nth integrated unit (N is a positive integer).
In order to obtain multi-level rich image features with a certain depth, a plurality of integrated units are adopted to further extract convolution features. The integrated units are stacked in sequence, the convolution characteristics extracted by the previous integrated unit are extracted again, the characteristics extracted by the integrated units for multiple times are more abstract and typical, and a good data base is laid for subsequent characteristic fusion.
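The stacking described above can be sketched as follows; the toy doubling "units" are stand-ins for real integration units, introduced only to show how the last two outputs become the shallow and deep features:

```python
def run_stacked_units(x, units):
    """Pass features through sequentially stacked integration units and
    return the shallow (second-to-last) and deep (last) outputs."""
    outputs = []
    for unit in units:
        x = unit(x)        # each unit re-extracts the previous features
        outputs.append(x)
    return outputs[-2], outputs[-1]

# Toy stand-in units: each "unit" just doubles the value.
units = [lambda v: v * 2 for _ in range(4)]
shallow, deep = run_stacked_units(1, units)
print(shallow, deep)  # 8 16
```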
The integration unit weights the image features through the operation of a convolution kernel and a weighting channel on the small images, and then increases the dimensionality of the images through the cascading of the dimensionality deepening, so that the feature expression is richer. Concatenation, as used herein, is the concatenation of vectors or matrices in one dimension.
The network training parameters can be initialized according to actual experience, and can also be updated according to classification errors, so that the accuracy of the whole neural network is improved.
The network training parameters mainly comprise: a learning rate, which may be set to 0.0001; a momentum, which may be set to 0.9; and a number of training epochs, which may be set to 50. It should be noted that the values of the network training parameters may be modified according to practical situations and are not limited to the parameter values listed here.
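As a sketch of how these parameters drive training, the stochastic gradient descent with momentum that the invention relies on can be written out directly. This is an illustrative example on a toy quadratic loss, not the patent's training code; only the parameter values mirror those suggested above:

```python
# Network training parameters suggested in the text.
LEARNING_RATE = 0.0001
MOMENTUM = 0.9
EPOCHS = 50

def sgd_momentum_step(w, grad, velocity, lr=LEARNING_RATE, momentum=MOMENTUM):
    """One SGD-with-momentum update: v <- m*v - lr*grad, w <- w + v."""
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

# Toy example: minimize f(w) = w^2 (gradient 2w) starting from w = 1.0.
w, v = 1.0, 0.0
for _ in range(EPOCHS):
    w, v = sgd_momentum_step(w, grad=2 * w, velocity=v)
print(w)  # slightly below 1.0 after 50 small momentum steps
```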
And the feature fusion unit 3 is used for performing operation on the deep image convolution feature and the shallow image convolution feature acquired by the integration unit and then cascading the deep image convolution feature with the vectorized deep image convolution feature to acquire an image depth fusion feature.
The shallow image convolution feature has weak expression capability on pathological images, so the shallow image convolution feature and the deep image convolution feature are jointly added into operation to obtain the image depth fusion feature.
And the classifier 4 is used for obtaining a classification label of the image according to the image depth fusion features obtained by the feature fusion unit, in combination with a preset classification standard.
The classifier can preset classification standards, and if the classification standards are met, corresponding classification labels are output.
Therefore, the patch images undergo multiple rounds of multi-dimensional feature extraction, feature fusion is realized by operating on the shallow and deep convolution features, and the classifier outputs the corresponding classification label; combining depth-feature-fused pathological images with the classification method achieves an accurate classification effect.
In order to better explain the invention, a second embodiment is given to explain the working principle of each unit and module in detail, as shown in fig. 2.
The image data set acquisition unit 1 is used for sampling a characteristic region in a morphological digital section of a prostate pathological image to acquire a small image data set.
The patch image dataset acquired by the image dataset acquisition unit comprises a training set and a test set; the training set is denoted X_train and the test set X_test.
The training set and the test set are randomly divided in a certain proportion, taking the WSI as the unit. For example, 35 morphological digital slices may be used to construct the training set and the remaining 9 morphological digital slices used to construct the test set. Of course, other numbers of digital slices may be used to construct the training set and the test set; there are many options according to actual needs.
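Splitting at the level of whole slides, so that patches from one slide never appear in both sets, can be sketched as follows (a hypothetical `split_wsis` helper; the 35/9 counts follow the example above):

```python
import random

def split_wsis(wsi_ids, n_train, seed=0):
    """Randomly split whole-slide images (WSIs) so that all patches
    cut from one slide end up in the same set."""
    rng = random.Random(seed)
    shuffled = list(wsi_ids)
    rng.shuffle(shuffled)
    return shuffled[:n_train], shuffled[n_train:]

train, test = split_wsis(range(44), n_train=35)
print(len(train), len(test))  # 35 9
```

Splitting per slide rather than per patch avoids near-duplicate patches from the same slide leaking between the training and test sets.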
According to the labeling information, sampling is carried out on the morphological digital slices to form an image data set, the image data set can be divided into a training set and a testing set according to a specific proportion, when the training set is used as target data of the device, training of the neural network is completed according to set network training parameters, and after a certain level of precision is achieved, the network training parameters are determined to form a fixed neural network model. And when the test set is taken as the target data of the device, classifying the small data in the test set according to the fixed neural network model to obtain an accurate classification result.
A specific number of integrated units 2:
the number of the integrated units is flexibly set according to needs, and the integrated units need to be matched with information contained in the image so as to clearly express image characteristics as a set standard.
The structure of only one integrated unit is shown in fig. 2 for simplicity, all of which have the same structure.
Wherein the integrated unit 2 further comprises:
and a specific number of attention mechanism modules 21, configured to enable the small block image to obtain a convolution feature of the repeatedly weighted image through an operation between a convolution kernel and a weighting channel.
The attention mechanism module extracts the salient features of the image and ignores the non-salient portions; passing through a number of attention mechanism modules weights the salient features of the image so that they become more salient.
The number of the attention mechanism modules can be flexibly set according to actual conditions.
After the image undergoes convolution and max pooling, the attention mechanism module applies an additional linear mapping to the image features to obtain weight-enhanced image features, multiplies these with the convolution features, and finally adds the result to the convolution features to obtain the weighted image convolution features.
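The exact layer arrangement of the attention mechanism module is not fully specified here; the following NumPy sketch shows one plausible reading, a squeeze-and-excitation-style channel gate with a residual add (all weights, shapes, and names are assumed for illustration only):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_block(feat, w1, w2):
    """Channel attention over a (C, H, W) feature map: global max
    pooling, two linear mappings with a sigmoid gate, then a channel-wise
    multiply followed by a residual add, as the text describes."""
    C = feat.shape[0]
    pooled = feat.reshape(C, -1).max(axis=1)          # global max pool -> (C,)
    gate = sigmoid(w2 @ np.maximum(w1 @ pooled, 0.0))  # linear maps + ReLU
    weighted = feat * gate[:, None, None]              # reweight channels
    return feat + weighted                             # residual add

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))
w1 = rng.standard_normal((4, 8))  # hypothetical reduction weights
w2 = rng.standard_normal((8, 4))  # hypothetical expansion weights
out = attention_block(feat, w1, w2)
print(out.shape)  # (8, 4, 4)
```

Because the gate lies in (0, 1), the residual output scales each channel by a factor between 1 and 2, so salient channels are amplified without discarding the original features.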
And the down-sampling module 22 is configured to perform feature extraction on different views of the image convolution features repeatedly weighted by the attention mechanism modules of the specific number, and fuse the features in a cascading manner to obtain a deepened image convolution feature.
The convolution layer and the maximum pooling layer of the down-sampling module perform feature extraction of different views on the image convolution features extracted by the plurality of attention mechanism modules, and then the extracted results are fused in a cascading mode to obtain the deepened image convolution features.
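The cascade fusion of the two fields of view can be sketched as follows. It is a toy version in which 2 x 2 average pooling stands in for the learned 3 x 3 convolution branch (an assumption made to keep the sketch weight-free):

```python
import numpy as np

def downsample_and_fuse(feat):
    """Sketch of the down-sampling module: two branches view the same (C, H, W)
    feature map at different scales, and their outputs are fused by channel-wise
    concatenation. In the patent the second branch is a strided convolution;
    here an unweighted average stands in for it."""
    C, H, W = feat.shape
    # Branch 1: 2x2 max pooling with stride 2.
    pooled = feat.reshape(C, H // 2, 2, W // 2, 2).max(axis=(2, 4))
    # Branch 2: 2x2 average over the same windows (stand-in for the learned branch).
    averaged = feat.reshape(C, H // 2, 2, W // 2, 2).mean(axis=(2, 4))
    # Cascade (concatenate) along the channel axis: depth doubles, size halves.
    return np.concatenate([pooled, averaged], axis=0)

feat = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)
fused = downsample_and_fuse(feat)
assert fused.shape == (4, 2, 2)
```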
The specific number of integrated units are stacked in sequence, each extracting features again from the convolution features of the preceding unit; the convolution features extracted by the last integrated unit are the deep image convolution features, and those extracted by the penultimate integrated unit are the shallow image convolution features.
And the feature fusion module converts the deep image convolution feature and the shallow image convolution feature into a feature matrix with the same dimensionality, performs outer product operation, converts the deep image convolution feature into a full convolution vector through convolution operation, and cascades the full convolution vector and the result of the outer product operation to obtain the image depth fusion feature.
Deep convolution feature h_d and shallow convolution feature h_s enter the feature fusion unit and are converted, by a convolution layer and a pooling layer, into feature matrices M_d and M_s of the same dimensions. A bilinear pooling layer then computes the bilinear feature M_b, calculated as:

M_b = M_d^T · M_s

The bilinear feature M_b is rearranged into the bilinear vector V_b, and the deep convolution feature h_d is converted into the full convolution vector V_fc by a convolution layer. The bilinear vector V_b and the full convolution vector V_fc are cascaded to obtain the vector V, calculated as:

V = [V_fc, V_b]
this completes the feature fusion.
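The whole fusion step can be sketched numerically; the dimensions and variable names below are illustrative (49 spatial positions and 32 channels are assumptions, not the patent's values):

```python
import numpy as np

def fuse_features(M_d, M_s, V_fc):
    """Sketch of the fusion step: M_d and M_s are the deep and shallow feature
    matrices after being brought to the same dimensions, and V_fc is the full
    convolution vector derived from the deep features (all illustrative)."""
    # Bilinear pooling: outer-product-style interaction of the two matrices.
    M_b = M_d.T @ M_s                 # shape (channels, channels)
    # Rearrange the bilinear matrix into the bilinear vector V_b.
    V_b = M_b.reshape(-1)
    # Cascade the full convolution vector with the bilinear vector: V = [V_fc, V_b].
    return np.concatenate([V_fc, V_b])

rng = np.random.default_rng(1)
M_d = rng.standard_normal((49, 32))   # 49 spatial positions, 32 channels
M_s = rng.standard_normal((49, 32))
V_fc = rng.standard_normal(128)
V = fuse_features(M_d, M_s, V_fc)
assert V.shape == (128 + 32 * 32,)
```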
And the classifier 4 is used for obtaining a classification label of the image by combining a preset classification standard according to the image depth fusion characteristics obtained by the characteristic fusion module.
And the error analysis unit 5 is used for calculating a classification error according to the classification result of the classifier.
The softmax cross-entropy of the classifier is generally used as the loss function, and the deep convolutional neural network is trained with stochastic gradient descent to obtain the optimal parameters.
For example: the cascade vector output by the feature fusion unit is mapped to a 2 x 1 vector, i.e. the values corresponding to 2 category labels, which softmax processes into a 2 x 1 prediction probability vector, and the softmax loss is calculated. Alternatively, the cascade vector V is mapped to a 3 x 1 vector, i.e. the values corresponding to 3 category labels, processed by softmax into a 3 x 1 prediction probability vector, and the softmax loss is calculated.
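The softmax loss computation can be sketched for one sample as follows (the function name and the logit values are illustrative):

```python
import numpy as np

def softmax_loss(logits, label):
    """Softmax cross-entropy for a single sample: logits is the k x 1 output of
    the fully-connected layer (k = 2 for cancer/no-cancer, k = 3 for cancer
    type), and label is the index of the true class."""
    shifted = logits - logits.max()                     # numerical stability
    probs = np.exp(shifted) / np.exp(shifted).sum()     # prediction probability vector
    return probs, -np.log(probs[label])                 # softmax loss

probs, loss = softmax_loss(np.array([2.0, 0.5]), label=0)
assert abs(probs.sum() - 1.0) < 1e-9
assert loss > 0
```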
And the parameter setting unit 6 is used for updating and setting the network training parameters according to the classification errors calculated by the error analysis unit.
Back propagation is performed according to the initial error, and the network training parameters are updated. During iteration, the error rate gradually decreases as the number of training generations increases. The network training parameters are fine-tuned during training until a convergence state is reached and the optimal deep convolutional neural network is determined.
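The parameter update itself reduces to plain stochastic gradient descent; a minimal sketch (the learning rate of 10^-4 matches the embodiment below, everything else is illustrative):

```python
def sgd_step(params, grads, lr=1e-4):
    """One stochastic-gradient-descent update: each network training parameter
    moves against its gradient, scaled by the learning rate."""
    return [p - lr * g for p, g in zip(params, grads)]

params = [0.5, -0.2]
grads = [1.0, -2.0]
updated = sgd_step(params, grads)
# A positive gradient decreases the parameter; a negative one increases it.
assert updated[0] < params[0] and updated[1] > params[1]
```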
To explain the working principle and process of the device in more detail, the third embodiment of the invention is given below and explained with examples:
the present embodiment uses the TCGA image dataset as an example dataset and selects morphological digital slices of 44 patients from that dataset; the regions of interest of these images are labeled into 4 categories: G0, G3, G4 and G5. In this embodiment, a deep convolutional neural network constructed with the depth feature fusion-based pathological image classification device first judges whether cancer is present, and a second such network, constructed with the same device, then judges the cancer type.
The image data acquisition unit acquires an image and divides the data set.
The image data acquisition unit selects 44 high-quality morphological digital slices in the prostate cancer public data set TCGA, wherein 35 morphological digital slices are used for constructing a training set, and the remaining 9 morphological digital slices are used for constructing a test set.
The small block images are sampled using a sliding window of size 1200 and step size 300. When more than 90% of the central 600 x 600 region of a small block image lies within a labeled area, the image is put into the training set or test set. The training set is used for model training and the test set for model testing. The doctor labels regions on the pathological section and assigns category labels; each collected small block image carries the label of its corresponding labeled region. The labels follow the Gleason grading: collecting under the four labels grade 0, grade 3, grade 4 and grade 5 yields 61854 images. To suit the convolutional neural network, each image is scaled to 224 x 224. This forms an image dataset X = [x_1, x_2, ..., x_61854]; the 4 class labels of dataset X are denoted Y = [y_1, y_2, y_3, y_4], and each sample x_i (i = 1, 2, ..., 61854) in X comprises image features and a label.
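The sampling rule above can be sketched against a binary annotation mask (the mask, the function name and the return format are illustrative assumptions; reading actual slide files is omitted):

```python
import numpy as np

def sample_patches(mask, win=1200, stride=300, center=600, thresh=0.9):
    """Sketch of the sliding-window sampler: mask is a binary array marking the
    physician-annotated region of one digital slide. A window is kept when the
    annotated fraction of its central `center` x `center` area exceeds `thresh`.
    Returns the (row, col) top-left corners of the accepted windows."""
    H, W = mask.shape
    margin = (win - center) // 2
    kept = []
    for r in range(0, H - win + 1, stride):
        for c in range(0, W - win + 1, stride):
            centre = mask[r + margin:r + margin + center,
                          c + margin:c + margin + center]
            if centre.mean() > thresh:
                kept.append((r, c))
    return kept

# Toy slide: the top-left 1800 x 1800 corner is fully annotated.
mask = np.zeros((2400, 2400), dtype=float)
mask[:1800, :1800] = 1.0
corners = sample_patches(mask)
assert (0, 0) in corners
```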
A certain number of integration units, a feature fusion unit and a classifier constitute a deep convolutional neural network model. In this embodiment, 5 integration units, 1 feature fusion unit and 1 classifier form 1 deep convolutional neural network model, where each integration unit comprises 2 attention mechanism modules and 1 down-sampling module.
Two depth feature fusion-based pathological image classification devices are constructed, i.e. two deep convolutional neural networks: the first deep convolutional neural network judges whether cancer is present, and the second judges the cancer type. It should be noted that the two devices have the same structure; they differ in the set network training parameters and in the classification standards preset for their classifiers: the first deep convolutional neural network uses a cancer-presence classifier, and the second uses a cancer-type classifier.
The integrated unit of the deep convolutional neural network is formed by stacking two attention mechanism modules and a down-sampling module.
The number of 3 x 3 convolution kernels of the first attention mechanism module is set to 64. The convolution layer and the maximum pooling layer of the down-sampling module extract features with 3 x 3 and 2 x 2 fields of view respectively, and the features are fused in a cascading manner. The attention mechanism module is shown in fig. 3-1, and the down-sampling module in fig. 3-2.
The five integrated units and the feature fusion unit are stacked in sequence. The feature fusion unit is shown in fig. 3-3. The five integrated units are named the first to fifth integrated units. They are stacked in order, each operating again on the output of the preceding one; the output of the fourth integrated unit is the shallow image convolution feature, and the output of the fifth is the deep image convolution feature. The feature fusion unit performs the outer product operation on the convolution features extracted by the fourth and fifth integrated units and then cascades the result with the full convolution vector to fuse the convolution features. The later an integrated unit sits in the stack, the more abstract the features it extracts and the stronger their representation capability. Experiments taking the outer product over features from different integrated units show that the model classifies best when the features of the fourth (penultimate) and fifth (last) integrated units are used.
Training the deep convolutional neural network through a training set, setting initialization parameters, and adjusting network training parameters through a parameter setting unit according to training errors to enable the model to reach a convergence state.
(1) Initialize the network training parameters: the learning rate is set to 10^-4, the weight decay rate to 0.9, the training batch size to 100 and the number of training generations to 50; the image features are then propagated forward.
The small block images in the training set X_train are input into the deep convolutional neural network; the convolution features of integration unit 4 and integration unit 5 are extracted and mapped into a 1 x 1024-dimensional bilinear vector and a 1 x 2408-dimensional full convolution vector, which are cascaded to obtain a 1 x 3425-dimensional cascade vector V. The cascade vector V is input into the classifier to obtain the prediction label.
(2) And (3) error back propagation:
in the deep convolutional neural network for judging the presence of cancer, the fully-connected layer maps the cascade vector V to a 2 x 1 vector, i.e. the values corresponding to 2 class labels, which softmax processes into a 2 x 1 prediction probability vector in order to calculate the softmax loss.
In the deep convolutional neural network for judging the cancer type, the fully-connected layer maps the cascade vector V to a 3 x 1 vector, i.e. the values corresponding to 3 class labels, which softmax processes into a 3 x 1 prediction probability vector, and the softmax loss is calculated. This example sets the number of training generations to 50.
At this time, the network training parameters are updated by back propagation of the initial error. During iteration, the error rate gradually decreases as the number of training generations increases. The network training parameters are fine-tuned during training until a convergence state is reached and the optimal deep convolutional neural network is determined.
The deep convolutional neural network is then tested on the test set. The network training parameters used are those determined after the training-set error was adjusted by the parameter setting unit, so they can be used directly.
The small block images of the test set are input into the deep convolutional neural network to obtain their prediction labels, and the classification accuracy of the device is calculated. The classification accuracy is the number of test-set images correctly classified by the device divided by the total number of images in the test set.
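The accuracy measure is a single ratio, sketched here with illustrative labels:

```python
def classification_accuracy(predicted, actual):
    """Number of correctly classified test images divided by the total number
    of images in the test set."""
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

# Three of the four toy predictions match the true labels.
acc = classification_accuracy([0, 1, 1, 0], [0, 1, 0, 0])
assert acc == 0.75
```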
The image-level classification accuracy is calculated from the predicted labels and the true labels of the test-set samples. Table 1 below gives the classification accuracy of the model; compared with a ResNet-50 based method, its results on the TCGA dataset are superior.
TABLE 1 Classification accuracy on TCGA datasets
(Table 1 is presented as an image in the original publication.)
The fourth embodiment of the invention discloses a pathological image classification method based on depth feature fusion, which is shown in fig. 4.
Step S41: and sampling the characteristic region in the morphological digital section of the prostate pathological image to obtain a small image data set.
Step S42: and repeatedly acquiring further weighted and deepened image convolution characteristics of the small images in the small image data set through the operation of a convolution kernel and a weighting channel and the cascade of dimension deepening according to the set network training parameters, and extracting deep image convolution characteristics and shallow convolution characteristics.
Step S43: and after the deep image convolution features and the shallow image convolution features are operated on, the result is cascaded with the vectorized deep image convolution features to obtain the image depth fusion features.
Step S44: and obtaining a classification label of the image by combining a preset classification standard according to the obtained image depth fusion characteristics.
In order to better explain the working principle of each step, a fifth embodiment of the invention is given, as shown in fig. 5.
Step S51: and sampling the characteristic region in the morphological digital section of the prostate pathological image to obtain a small image data set.
The patch image data set comprises a training set and a test set.
Step S521: and carrying out operation on the small block image through a convolution kernel and a weighting channel to obtain the image convolution characteristics after repeated weighting.
Step S522: and extracting the features of different visual fields for the repeatedly weighted image convolution features, and fusing the features in a cascading mode to obtain the deepened image convolution features.
Step S523: and extracting the convolution features of the image again, wherein the convolution features extracted for the last time are deep image convolution features, and the convolution features extracted for the second last time are shallow image convolution features.
Step S53: and converting the deep image convolution characteristics and the shallow image convolution characteristics into characteristic matrixes with the same dimensionality, then performing outer product operation, converting the deep image convolution characteristics into full convolution vectors through convolution operation, and cascading the full convolution vectors and the outer product operation results to obtain image depth fusion characteristics.
Step S54: different classification standards are preset, and the image depth fusion features are classified by using the corresponding classification standards according to the judgment requirements.
Step S55: and calculating a classification error according to the obtained classification label of the image.
Step S56: and updating and setting the network training parameters according to the errors.
The invention content of the method part is similar to that of the device part, and the detailed description can refer to the device part and is not repeated herein.
In order to describe the working principle of a depth feature fusion-based pathological image classification device in a training set in detail, a sixth embodiment of the present invention is specifically provided, and includes the following steps:
step 61: the image data set acquisition unit samples characteristic regions in the morphological digital section of the prostate pathological image to acquire a small image data set, which is divided into a training set and a test set in a specific proportion.
Step 62: a specific number of integration units receive the small block images of the training set in the small image data set acquired by the image data set acquisition unit; according to the set initialized network training parameters, the small block images repeatedly obtain further weighted and deepened image convolution features through the operation of a convolution kernel and a weighting channel and the cascade of dimension deepening, and the deep image convolution features and shallow image convolution features are extracted.
And step 63: and the feature fusion unit is used for operating the deep image convolution feature and the shallow image convolution feature acquired by the integration unit and then cascading the deep image convolution feature with the vectorized deep image convolution feature to acquire an image depth fusion feature.
Step 64: and the classifier combines a preset classification standard to obtain a classification label of the image according to the image depth fusion characteristics obtained by the characteristic fusion module.
Step 65: an error analysis unit calculates a classification error according to the image classification label result of the classifier.
And step 66: and the parameter setting unit updates and sets the network training parameters according to the classification errors calculated by the error analysis unit.
Step 67: and judging whether the number of iterations corresponding to the training generations set in the network training parameters has been completed; if not, return to step 62; if completed, end.
In this embodiment, the depth feature fusion-based pathological image classification device completes the training process: using the training-set data, the images are classified according to the initialized network training parameters, and the network training parameters are adjusted according to the classification errors, so that the parameters of the device finally reach the optimal state, where the classification error is smallest and the accuracy highest.
In order to describe the working principle of the pathological image classification device based on depth feature fusion in the test set in detail, a seventh embodiment of the present invention is specifically provided, which includes the following steps:
step S71: the image data set acquisition unit samples characteristic regions in the morphological digital section of the prostate pathological image to acquire a small image data set, which is divided into a training set and a test set in a specific proportion.
Step S72: and a specific number of integration units receive the small images in the test set in the small image data set acquired by the image data set acquisition unit, and the small images are subjected to operation of a convolution kernel and a weighting channel and cascade of dimension deepening according to the finally updated set network training parameters of the parameter setting unit to repeatedly acquire further weighted and deepened image convolution characteristics and extract deep image convolution characteristics and shallow image convolution characteristics.
Step S73: and the feature fusion unit is used for operating the deep image convolution feature and the shallow image convolution feature acquired by the integration unit and then cascading the deep image convolution feature with the vectorized deep image convolution feature to acquire an image depth fusion feature.
Step S74: and the classifier combines a preset classification standard to obtain a classification label of the image according to the image depth fusion characteristics obtained by the characteristic fusion module.
This embodiment mainly covers the process of testing a depth feature fusion-based pathological image classification device with the test set. After the training steps of the sixth embodiment, the device has reached a relatively accurate state, and its accuracy is measured with the test-set images. Examples of such testing have been given in the above embodiments and are not repeated here.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances such that the embodiments of the application described herein may be practiced in sequences other than those illustrated.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (12)

1. A depth feature fusion-based pathological image classification device, comprising:
the image data set acquisition unit is used for sampling a characteristic region in the morphological digital section of the prostate pathological image to acquire a small image data set;
the integration units with specific quantity are used for receiving the small images in the small image data set acquired by the image data set acquisition unit, enabling the small images to repeatedly acquire further weighted and deepened image convolution characteristics through the cascade of operation and dimension deepening of a convolution kernel and a weighting channel according to set network training parameters, and extracting deep image convolution characteristics and shallow image convolution characteristics;
the feature fusion unit is used for calculating the deep image convolution feature and the shallow image convolution feature acquired by the integration unit and then cascading the deep image convolution feature with the vectorized deep image convolution feature to acquire an image depth fusion feature;
and the classifier is used for obtaining a classification label of the image by combining a preset classification standard according to the image depth fusion characteristics obtained by the characteristic fusion module.
2. The apparatus of claim 1, wherein the integrated unit further comprises:
the attention mechanism modules in specific number are used for enabling the small images to obtain the image convolution characteristics after repeated weighting through the operation of a convolution kernel and a weighting channel;
the down-sampling module is used for extracting the features of different fields of view of the image convolution features repeatedly weighted by the attention mechanism modules in a specific number, and fusing the features in a cascading mode to obtain the deepened image convolution features;
the specific number of the integrated units are stacked in sequence, the convolution feature extracted by the last integrated unit is extracted again, the convolution feature extracted by the last integrated unit is a deep image convolution feature, and the convolution feature extracted by the last but one integrated unit is a shallow image convolution feature.
3. The apparatus of claim 2, wherein:
the feature fusion module converts the deep image convolution feature and the shallow image convolution feature into feature matrixes with the same dimensionality, then performs outer product operation, converts the deep image convolution feature into a full convolution vector through convolution operation, and cascades the full convolution vector and the result of the outer product operation to obtain the image depth fusion feature.
4. The apparatus of any of claims 1-3, wherein:
the device also comprises an error analysis unit and a parameter setting unit;
the error analysis unit is used for calculating a classification error according to the classification result of the classifier;
the parameter setting unit is used for updating and setting the network training parameters according to the classification errors calculated by the error analysis unit;
the small image data set acquired by the image data set acquisition unit is a training set and a test set.
5. The apparatus of claim 4, wherein:
and presetting different classification standards for different classifiers, and classifying the image depth fusion features by using the corresponding classifiers according to the judgment requirement.
6. A depth feature fusion-based pathological image classification method is characterized by comprising the following steps:
sampling a characteristic region in a morphological digital section of the prostate pathological image to obtain a small image data set;
repeatedly acquiring further weighted and deepened image convolution characteristics of the small images in the small image data set through the operation of convolution kernels and weighting channels and the cascade of dimension deepening according to set network training parameters, and extracting deep image convolution characteristics and shallow convolution characteristics;
after the deep image convolution characteristic and the shallow layer convolution image characteristic are operated, the deep layer image convolution characteristic and the vectorized deep layer image convolution characteristic are cascaded to obtain an image depth fusion characteristic;
and obtaining a classification label of the image by combining a preset classification standard according to the obtained image depth fusion characteristics.
7. The method according to claim 6, wherein the repeatedly obtaining further weighted and deepened image convolution features for the small images in the small image data set through the cascade of operation of convolution kernels and weighting channels and dimension deepening according to the set network training parameters, and the method for extracting deep image convolution features and shallow convolution features specifically comprises:
calculating the small images through a convolution kernel and a weighting channel to obtain the image convolution characteristics after repeated weighting;
extracting features of different fields of view for the repeatedly weighted image convolution features, and fusing the features in a cascading mode to obtain deepened image convolution features;
and extracting the convolution features of the image again, wherein the convolution features extracted for the last time are deep image convolution features, and the convolution features extracted for the second last time are shallow image convolution features.
8. The method of claim 7, wherein:
and converting the deep image convolution feature and the shallow image convolution feature into feature matrixes with the same dimensionality, then performing outer product operation, converting the deep image convolution feature into a full convolution vector through convolution operation, and cascading the full convolution vector and the outer product operation result to obtain the image depth fusion feature.
9. The method according to any one of claims 5-8, further comprising:
calculating a classification error according to the obtained classification label of the image;
updating and setting the network training parameters according to the errors;
the patch image data sets are training sets and test sets.
10. The method of claim 9, wherein:
different classification standards are preset, and the image depth fusion features are classified by using the corresponding classification standards according to the judgment requirements.
11. A method for using a depth feature fusion-based pathological image classification device, the method comprising:
the image data set acquisition unit samples characteristic areas in the morphological digital section of the prostate pathological image to acquire a small image data set, and divides the small image data set into a training set and a test set according to a specific proportion;
a specific number of integration units receive the small images in the training set in the small image dataset acquired by the image dataset acquisition unit, and repeatedly acquire further weighted and deepened image convolution characteristics through the cascade of operation of convolution kernels and weighting channels and dimension deepening according to set initialized network training parameters, and extract deep image convolution characteristics and shallow image convolution characteristics;
the feature fusion unit is used for calculating the deep image convolution feature and the shallow image convolution feature acquired by the integration unit and then cascading the deep image convolution feature with the vectorized deep image convolution feature to acquire an image depth fusion feature;
the classifier combines a preset classification standard to obtain a classification label of the image according to the image depth fusion characteristics obtained by the characteristic fusion module;
the error analysis unit calculates a classification error according to the image classification label result of the classifier;
the parameter setting unit updates and sets the network training parameters according to the classification errors calculated by the error analysis unit;
and repeatedly executing the steps until the corresponding times are executed according to the training algebra set in the network training parameters.
12. A method for using a depth feature fusion-based pathological image classification device, the method comprising:
the image data set acquisition unit samples characteristic regions in morphological digital slides of prostate pathological images to acquire a small-image data set, and divides the small-image data set into a training set and a test set in a specific proportion;
a specific number of integration units receive the small images of the test set in the small-image data set acquired by the image data set acquisition unit; according to the network training parameters finally updated by the parameter setting unit, the small images repeatedly undergo convolution-kernel and channel-weighting operations and dimension-deepening concatenation, yielding progressively weighted and deepened image convolution features from which deep and shallow image convolution features are extracted;
the feature fusion unit operates on the deep image convolution features and the shallow image convolution features acquired by the integration units, and concatenates the result with the vectorized deep image convolution feature to obtain an image depth-fusion feature;
and the classifier obtains a classification label for the image from the image depth-fusion feature produced by the feature fusion unit, in combination with a preset classification standard.
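The first step of the use method, dividing the sampled small-image (patch) data set into training and test sets in a specific proportion, can be sketched as follows. The 80/20 ratio and the shuffling seed are illustrative choices, since the claim leaves the proportion unspecified.

```python
import numpy as np

def split_dataset(patches, labels, train_ratio=0.8, seed=0):
    """Shuffle the small-image data set and divide it into a training set
    and a test set according to the given proportion."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(patches))
    cut = int(len(patches) * train_ratio)
    train_idx, test_idx = idx[:cut], idx[cut:]
    return ((patches[train_idx], labels[train_idx]),
            (patches[test_idx], labels[test_idx]))
```

Every sampled patch lands in exactly one of the two sets, so the test set remains unseen during training.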
CN202010818297.8A 2020-08-14 2020-08-14 Depth feature fusion-based pathological image classification device and method and use method of device Pending CN114139588A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010818297.8A CN114139588A (en) 2020-08-14 2020-08-14 Depth feature fusion-based pathological image classification device and method and use method of device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010818297.8A CN114139588A (en) 2020-08-14 2020-08-14 Depth feature fusion-based pathological image classification device and method and use method of device

Publications (1)

Publication Number Publication Date
CN114139588A true CN114139588A (en) 2022-03-04

Family

ID=80438212

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010818297.8A Pending CN114139588A (en) 2020-08-14 2020-08-14 Depth feature fusion-based pathological image classification device and method and use method of device

Country Status (1)

Country Link
CN (1) CN114139588A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117876797A (en) * 2024-03-11 2024-04-12 中国地质大学(武汉) Image multi-label classification method, device and storage medium
CN117876797B (en) * 2024-03-11 2024-06-04 中国地质大学(武汉) Image multi-label classification method, device and storage medium

Similar Documents

Publication Publication Date Title
CN110443143B (en) Multi-branch convolutional neural network fused remote sensing image scene classification method
CN108830209B (en) Remote sensing image road extraction method based on generation countermeasure network
CN110533683B (en) Image omics analysis method fusing traditional features and depth features
CN103914705B (en) Hyperspectral image classification and wave band selection method based on multi-target immune cloning
CN117253122B (en) Corn seed approximate variety screening method, device, equipment and storage medium
CN110543916B (en) Method and system for classifying missing multi-view data
CN113743353B (en) Cervical cell classification method for space, channel and scale attention fusion learning
CN115601751B (en) Fundus image semantic segmentation method based on domain generalization
CN113240683A (en) Attention mechanism-based lightweight semantic segmentation model construction method
CN110766084A (en) Small sample SAR target identification method based on CAE and HL-CNN
CN116524253A (en) Thyroid cancer pathological image classification method based on lightweight transducer
CN112434172A (en) Pathological image prognosis feature weight calculation method and system
CN111984817A (en) Fine-grained image retrieval method based on self-attention mechanism weighting
CN115100467A (en) Pathological full-slice image classification method based on nuclear attention network
CN114067126A (en) Infrared image target detection method
CN112529908A (en) Digital pathological image segmentation method based on cascade convolution network and model thereof
CN118430790A (en) Mammary tumor BI-RADS grading method based on multi-modal-diagram neural network
CN114139588A (en) Depth feature fusion-based pathological image classification device and method and use method of device
CN114067313A (en) Crop leaf disease identification method of bilinear residual error network model
CN107644230B (en) Spatial relationship modeling method for remote sensing image object
CN116486183B (en) SAR image building area classification method based on multiple attention weight fusion characteristics
CN111242839A (en) Image scaling and cutting method based on scale grade
CN112818982B (en) Agricultural pest image detection method based on depth feature autocorrelation activation
CN114170634A (en) Gesture image feature extraction method based on DenseNet network improvement
CN114913164A (en) Two-stage weak supervision new crown lesion segmentation method based on super pixels

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination