CN116843956A - Cervical pathology image abnormal cell identification method, system and storage medium - Google Patents

Cervical pathology image abnormal cell identification method, system and storage medium

Info

Publication number
CN116843956A
Authority
CN
China
Prior art keywords
image
module
cervical
model
feature map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310700927.5A
Other languages
Chinese (zh)
Inventor
魏登峰
杜文飞
周晓龙
李鹏
李萌
周丽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jingzhou Central Hospital Jingzhou Hospital Affiliated To Yangtze University
Yangtze University
Original Assignee
Jingzhou Central Hospital Jingzhou Hospital Affiliated To Yangtze University
Yangtze University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jingzhou Central Hospital (Jingzhou Hospital Affiliated to Yangtze University) and Yangtze University
Priority to CN202310700927.5A
Publication of CN116843956A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03 Recognition of patterns in medical or anatomical images

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the application provides a method, a system and a storage medium for identifying abnormal cells in cervical pathology images. The method comprises: determining multiple types of historical cervical cell images; constructing a model training set based on the multiple types of historical cervical cell images; selecting a ResNet network as the original reference network model and adding a CA module after a basic residual block of the ResNet network to obtain an initial abnormal cell identification model, wherein the CA module extracts attention-weight features from the output of the basic residual block to obtain a target feature map carrying attention weights in the width and height directions; inputting the model training set into the initial abnormal cell identification model for model training, and obtaining a target abnormal cell identification model when training finishes; and identifying abnormal cells in cervical pathology images based on the target abnormal cell identification model. Implementing this method can automate cervical cancer diagnosis.

Description

Cervical pathology image abnormal cell identification method, system and storage medium
Technical Field
The application relates to the technical field of medical image processing, and in particular to a method, a system and a storage medium for identifying abnormal cells in cervical pathology images.
Background
Cervical cancer is a common malignant tumor that severely threatens women's health, but its cause is well understood, making it a cancer that can be detected and prevented early. In traditional cervical cancer screening, exfoliated cervical cells are prepared into cervical pathology images, and a pathologist examines the cell morphology in those images for abnormal cells. Because the number of cervical pathology images is large, the pathologist's workload is heavy; moreover, traditional screening depends heavily on the pathologist's professional knowledge and diagnostic experience, so insufficient experience easily leads to misdiagnosis. Therefore, adopting a rapid, accurate and efficient method for identifying abnormal cervical cells, so as to alleviate the shortage of pathologists, the difficulty patients face in obtaining a diagnosis, and the slow turnaround of results, and ultimately to automate cervical cancer diagnosis, is the problem this application seeks to solve.
Disclosure of Invention
The embodiment of the application aims to provide a cervical pathology image abnormal cell identification method, system and storage medium, so that cervical cancer diagnosis can be automated.
In a first aspect, the embodiment of the application provides a cervical pathology image abnormal cell identification method, which comprises the following steps:
S1, determining a plurality of types of historical cervical cell images;
S2, constructing a model training set based on the multiple types of historical cervical cell images;
S3, selecting a ResNet network as the original reference network model, and adding a CA module after a basic residual block of the ResNet network to obtain an initial abnormal cell identification model, wherein the CA module is used for extracting attention-weight features from the output of the basic residual block to obtain a target feature map with attention weights in the width and height directions;
S4, inputting the model training set into the initial abnormal cell identification model for model training, and obtaining a target abnormal cell identification model after training finishes;
S5, identifying abnormal cells in cervical pathology images based on the target abnormal cell identification model, and outputting the corresponding diagnosis result.
In a second aspect, the embodiment of the application also provides a cervical pathology image abnormal cell identification system, which comprises a data acquisition module, a data processing module, a model construction module, a model training module and a diagnosis module, wherein:
the data acquisition module is used for determining a plurality of types of historical cervical cell images;
the data processing module is used for constructing a model training set based on the multi-type historical cervical cell images;
the model construction module is used for selecting a ResNet network as an original reference network model, and adding a CA module after a basic residual block of the ResNet network to obtain an initial abnormal cell identification model, wherein the CA module is used for extracting attention weight characteristics of the output of the basic residual block to obtain a target characteristic diagram with attention weight in the width and height directions;
the model training module is used for inputting the model training set into an initial abnormal cell identification model to perform model training, and obtaining a target abnormal cell identification model after the training is finished;
the diagnosis module is used for identifying abnormal cells of the cervical pathology image based on the target abnormal cell identification model and outputting corresponding diagnosis results.
In a third aspect, an embodiment of the present application further provides a readable storage medium, where the readable storage medium includes a cervical pathology image abnormal cell identification method program, where the cervical pathology image abnormal cell identification method program, when executed by a processor, implements the steps of a cervical pathology image abnormal cell identification method according to any one of the above embodiments.
It can be seen from the above that, in the cervical pathology image abnormal cell identification method, system and storage medium provided by the embodiment of the application, the ResNet50 network is selected for image feature learning; its depth enables the constructed abnormal cell identification model to learn more complex features, improving identification accuracy. A CA mechanism is also introduced into the ResNet50 network, so that the improved model captures the relationships between different features more accurately, further improving image classification performance and realizing automation of cervical cancer diagnosis.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be apparent from the description, or may be learned by practice of the embodiments of the application. The objectives and other advantages of the application will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and should not be considered limiting of its scope; a person skilled in the art can obtain other related drawings from these drawings without inventive effort.
Fig. 1 is a flowchart of a method for identifying abnormal cells in cervical pathology image according to an embodiment of the present application;
FIG. 2 is a schematic diagram of the basic residual block of the ResNet network before modification;
FIG. 3 is a schematic diagram of the CA module following a basic residual block;
fig. 4 is a schematic structural diagram of a basic block dsc_ca obtained by modifying a basic residual block;
FIG. 5 is a schematic diagram of the overall structure of an improved DSC_CA_ResNet50 network;
fig. 6 is a schematic structural diagram of a cervical pathology image abnormal cell recognition system according to an embodiment of the present application.
Detailed Description
The embodiments of the present application are described below clearly and completely with reference to the accompanying drawings; evidently, the embodiments described are only some, not all, of the embodiments of the present application. The components of the embodiments of the present application, as generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the application, as presented in the figures, is not intended to limit the scope of the application as claimed, but merely represents selected embodiments of the application. All other embodiments obtained by a person skilled in the art without inventive effort are intended to fall within the scope of the present application.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only to distinguish the description, and are not to be construed as indicating or implying relative importance.
Referring to fig. 1, fig. 1 is a flowchart of a method for identifying abnormal cells in cervical pathology image according to some embodiments of the present application. The method comprises the following steps:
step S1, determining a plurality of types of historical cervical cell images.
Specifically, cervical cell images can be obtained by scanning, with an image scanner, cervical cell slides prepared by liquid-based thin-layer cytology (ThinPrep Cytology Test, TCT).
And S2, constructing a model training set based on the multi-type historical cervical cell images.
Specifically, in the current embodiment, the image set formed by the multiple types of historical cervical cell images is divided in a 4:1:1 ratio to obtain a model training set, a model validation set and a model test set.
Wherein:
Role of the model training set: to fit the model, i.e. to train the classification model by setting the classifier's parameters.
Role of the model validation set: after several models have been trained on the model training set, each model is used to predict the validation-set data and its accuracy is recorded, in order to find the model with the best effect. The parameters corresponding to the best-performing model are then selected; that is, the model's parameters are tuned.
Role of the model test set: after the optimal model has been obtained through the model training set and the model validation set, model prediction is carried out with the model test set; in particular, it measures the performance and classification capability of the optimal model.
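The 4:1:1 division described above can be sketched with plain Python. This is an illustrative implementation, not taken from the patent; the random seed and function name are assumptions.

```python
import random

def split_dataset(samples, ratios=(4, 1, 1), seed=42):
    """Split a list of samples into train/validation/test sets by the given
    ratios. A 4:1:1 ratio means 4/6 training, 1/6 validation, 1/6 test."""
    rng = random.Random(seed)
    samples = list(samples)
    rng.shuffle(samples)  # randomize before splitting to avoid ordering bias
    total = sum(ratios)
    n = len(samples)
    n_train = n * ratios[0] // total
    n_val = n * ratios[1] // total
    train = samples[:n_train]
    val = samples[n_train:n_train + n_val]
    test = samples[n_train + n_val:]
    return train, val, test
```

For example, 600 labeled images would be divided into 400 training, 100 validation and 100 test images.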
And S3, selecting a ResNet network as an original reference network model, and adding a CA module after a basic residual block of the ResNet network to obtain an initial abnormal cell identification model, wherein the CA module is used for extracting attention weight characteristics of the output of the basic residual block to obtain a target characteristic diagram with attention weight in the width and height directions.
Specifically, fig. 2 shows a schematic structural diagram of the reference network model. Before modification, the basic residual block of the ResNet network consists of 2 conventional convolution layers connected in sequence. The role of a conventional convolution layer is to weight the whole original input image region by region to obtain feature maps, where each element of each feature map is the result of the convolution at the corresponding position.
In the current embodiment, a CA (Coordinate Attention) mechanism is introduced into the main structure of the ResNet50 network (that is, a CA module is added after each basic residual block of the ResNet50 network) to construct the CA_ResNet50 network structure, so that the improved model can capture the relationships between different features more accurately and further improve image classification performance.
And S4, inputting the model training set into an initial abnormal cell identification model for model training, and obtaining a target abnormal cell identification model after training is finished.
Specifically, in the current embodiment, the Adam optimization function is used for model training, with the training batch size set to 4, the number of iterations set to 50, and the initial learning rate set to 0.0001. In addition, model training in this embodiment uses version 2.7 of a deep-learning open-source framework that supports GPU acceleration and dynamic neural networks and can work with CUDA (Compute Unified Device Architecture, a parallel computing platform and programming model from the graphics-card manufacturer NVIDIA).
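A minimal training loop matching the stated hyper-parameters (Adam, batch size 4, 50 iterations, initial learning rate 0.0001) could be sketched as follows. PyTorch and the cross-entropy loss are assumptions; the patent names neither the framework nor the loss function.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

def train_model(model, train_set, epochs=50, batch_size=4, lr=1e-4, device="cpu"):
    """Train `model` on `train_set` with the hyper-parameters from the text:
    Adam optimizer, batch size 4, 50 iterations, initial learning rate 1e-4."""
    model = model.to(device)
    loader = DataLoader(train_set, batch_size=batch_size, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()  # assumed loss for 5-class classification
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model
```

Passing `device="cuda"` would enable the GPU/CUDA acceleration mentioned above.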
And S5, identifying abnormal cells of the cervical pathology image based on the target abnormal cell identification model, and outputting a corresponding diagnosis result.
According to the cervical pathology image abnormal cell identification method disclosed in the application, the ResNet50 network is selected for image feature learning, and its depth enables the constructed abnormal cell identification model to learn more complex features, thereby improving identification accuracy; a CA mechanism is also introduced into the ResNet50 network, so that the improved model captures the relationships between different features accurately, further improving image classification performance and realizing automation of cervical cancer diagnosis.
In one embodiment, in step S1, the determining the historical cervical cell image includes:
step S11, acquiring an initial historical cervical cell image.
Specifically, the manner of acquiring cervical cell images has been described above and is not repeated here.
Step S12, adopting a preset size adjustment mode to adjust the initial historical cervical cell image to a preset size, and obtaining a standard size image.
Specifically, the image input size of the ResNet network is 224×224, and in most cases the resolution of the acquired images is greater than this value. In the present embodiment, the acquired initial historical cervical cell images are therefore scaled to 224×224 by an image scaling operation to complete construction of the standard-size images.
Of course, to ensure that a scaled image meets the picture input criteria of the ResNet network, its size may also be checked; when the size of a scaled image is found not to meet the ResNet network's input-size criterion, cropping or edge zero-padding is performed so that the input image is uniformly adjusted to 224×224.
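The scale-then-pad path can be sketched as below. This is one possible reading, using PyTorch tensor ops (an assumed framework): scaling the longer side to 224 means only zero-padding is ever needed afterwards, which also preserves the whole image body; the text's cropping branch would apply if one scaled by the shorter side instead.

```python
import torch
import torch.nn.functional as F

def to_standard_size(image, size=224):
    """Scale an image tensor of shape (C, H, W) so its longer side equals
    `size`, then zero-pad the borders to exactly size x size."""
    c, h, w = image.shape
    scale = size / max(h, w)
    new_h, new_w = max(1, round(h * scale)), max(1, round(w * scale))
    image = F.interpolate(image.unsqueeze(0), size=(new_h, new_w),
                          mode="bilinear", align_corners=False).squeeze(0)
    pad_h, pad_w = size - new_h, size - new_w
    # F.pad order for the last two dims: (left, right, top, bottom)
    return F.pad(image, (pad_w // 2, pad_w - pad_w // 2,
                         pad_h // 2, pad_h - pad_h // 2))
```

Any input resolution, larger or smaller than 224×224, comes out as a 224×224 tensor ready for the network.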
And S13, carrying out enhancement processing on the standard-size image according to a preset image enhancement mode to obtain an enhanced image.
Specifically, after the standard-size image is obtained, data enhancement processing is performed on the standard-size image to expand the data set size.
In one embodiment, the data enhancement methods adopted by the application include random rotation, random brightness transformation, random mirroring and the like; since these techniques belong to the prior art, the specific enhancement method is not limited here.
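A hedged sketch of such augmentations using plain PyTorch tensor ops (an assumed framework; the multiple-of-90-degree rotation is a simplification of arbitrary random rotation, and the brightness range [0.8, 1.2] is an illustrative choice):

```python
import torch

def random_augment(image, generator=None):
    """Randomly mirror, rotate (by a multiple of 90 degrees) and brightness-
    scale a float image tensor of shape (C, H, W) with values in [0, 1]."""
    if torch.rand(1, generator=generator).item() < 0.5:
        image = torch.flip(image, dims=[2])      # horizontal mirror
    if torch.rand(1, generator=generator).item() < 0.5:
        image = torch.flip(image, dims=[1])      # vertical mirror
    k = int(torch.randint(0, 4, (1,), generator=generator))
    image = torch.rot90(image, k, dims=[1, 2])   # rotate by k * 90 degrees
    scale = 0.8 + 0.4 * torch.rand(1, generator=generator).item()
    return (image * scale).clamp(0.0, 1.0)       # random brightness, re-clipped
```

Applying this on the fly during training effectively enlarges the data set without storing extra images.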
And step S14, classifying based on the enhanced images to obtain multiple types of historical cervical cell images.
Specifically, in the present embodiment, the cell types contained in all the images are classified into 5 types: superficial middle layer squamous cells, columnar epithelial cells, mildly abnormal cells, moderately abnormal cells, and carcinoid cells.
Among them, mildly abnormal cells and moderately abnormal cells can be distinguished by the degree of nuclear morphological abnormality, the depth of nuclear staining, and the contrast between nucleus and cytoplasm.
In a specific embodiment, mildly and moderately abnormal cells may be further distinguished based on the characteristics that, compared with mildly abnormal cells, moderately abnormal cells have more irregular nuclear morphology, deeper nuclear staining, and higher nucleus-to-cytoplasm contrast.
In the above embodiment, the images are preprocessed through scaling and enhancement operations, which provides a good data basis for subsequent model training and for identifying abnormal cells in cervical pathology images, improving the degree of automation of cervical cancer diagnosis.
In one embodiment, in step S12, the adjusting the initial historical cervical cell image to a preset size by using a preset size adjustment method to obtain a standard size image includes:
step S121, performing scaling processing on the initial historical cervical cell image by adopting a preset size adjustment mode, so as to obtain a scaled image.
Specifically, the scaling operation adopted in the current embodiment belongs to the prior art, and the specific implementation manner thereof is not limited at present.
Step S122, obtaining the image size of the scaled image, and when determining that the image size is greater than the preset size, clipping the scaled image to obtain a standard size image.
Specifically, the clipping processing operation adopted in the current embodiment belongs to the prior art, and the specific implementation manner thereof is not limited at present.
Step S123, when it is determined that the image size is smaller than the preset size, performing edge zero padding processing on the scaled image to obtain a standard size image.
Specifically, the edge zero-filling processing operation adopted in the present embodiment belongs to the prior art, and the specific implementation manner thereof is not limited at present.
And step S124, when the image size is determined to be equal to the preset size, taking the scaled image as a standard size image.
In the above embodiment, the size of the scaled image is adjusted through cropping and edge zero-padding to form the standard-size image; this approach preserves the main body of the image as much as possible, effectively improving the model's matching performance.
In one embodiment, in step S14, the multiple types of historical cervical cell images include a first type reflecting the characteristics of superficial middle layer squamous cells, a second type reflecting the characteristics of columnar epithelial cells, a third type reflecting the characteristics of mildly abnormal cells, a fourth type reflecting the characteristics of moderately abnormal cells, and a fifth type reflecting the characteristics of carcinoid cells.
In one embodiment, in step S3, the CA module includes: two parallel average pooling modules; a channel dimension reduction module connected to the two parallel average pooling modules; an attention feature extraction module connected to the channel dimension reduction module; two parallel channel dimension-increasing modules connected to the attention feature extraction module; an attention weight calculation module connected to the two parallel channel dimension-increasing modules; a multiplicative weighting calculation module connected to the attention weighting calculation module.
Specifically, the structure of the CA module is understood with reference to fig. 3. The function of each sub-module in the CA module will be further described later, and will not be described in detail.
In one embodiment, in step S3, the target feature map is obtained by processing the following steps:
and S31, carrying out feature aggregation on the original feature map output through the basic residual block along the width and height space directions respectively based on the two parallel average pooling modules to obtain a corresponding first width feature map and a first height feature map.
Specifically, referring to fig. 3, the two parallel average pooling modules are the "X Avg Pool" module and the "Y Avg Pool" module illustrated in fig. 3. The original feature map output by the basic residual block is input into the "X Avg Pool" module and the "Y Avg Pool" module respectively, and feature aggregation is performed along the two spatial directions of width and height to obtain the corresponding first width feature map and first height feature map.
And step S32, based on the channel dimension reduction module, sequentially performing splicing processing on the first width feature image and the first height feature image, and performing channel dimension reduction processing on the obtained spliced image according to a preset dimension reduction ratio to obtain a dimension reduction image.
Specifically, referring to fig. 3, the channel dimension-reduction module is the "Concat + Conv2d" module illustrated in fig. 3. The first width feature map and first height feature map obtained in step S31 are input to the "Concat" module for stitching, yielding the corresponding stitched image. The stitched image is then input to the "Conv2d" module for channel dimension reduction, i.e. the number of channels of the stitched image is reduced from C to C/r, where C is the number of channels and r is the dimension-reduction ratio.
In one embodiment, the "Concat" module selects a Concat function to perform image stitching processing, and the "Conv2d" module selects a convolution module with a convolution kernel of 1×1.
And step S33, based on the attention feature extraction module, sequentially carrying out normalization processing on the dimension-reduced image and carrying out attention feature learning on the obtained normalized image through a Sigmoid activation function to obtain an attention feature map.
Specifically, referring to fig. 3, the attention feature extraction module is the "BatchNorm + Sigmoid" module illustrated in fig. 3. The dimension-reduced image obtained in step S32 is input to the "BatchNorm" module for normalization to obtain a normalized image, which is then input to the "Sigmoid" module for attention feature learning to obtain the attention feature map.
And step S34, based on two parallel channel dimension increasing modules, carrying out channel dimension increasing processing with a convolution kernel of 1 multiplied by 1 on the attention feature map according to the initial image width and the initial image height before the dimension reduction by splicing, so as to obtain a corresponding second width feature map and a corresponding second height feature map.
Specifically, the two parallel channel dimension-increasing modules are the two parallel "Conv2d" modules illustrated in fig. 3 that follow "BatchNorm + Sigmoid". These two "Conv2d" modules perform 1×1 convolutions according to the image width and image height before the stitching and dimension reduction, yielding a second width feature map and a second height feature map with the original number of channels.
And step S35, based on the attention weight calculation module, performing attention weight calculation on the second width feature map and the second height feature map through a Sigmoid activation function respectively to obtain a first attention weight corresponding to the width space direction and a second attention weight corresponding to the height space direction.
Specifically, the attention weight calculation module is two "Sigmoid" modules, which are illustrated in fig. 3 and are connected after two parallel "Conv2d" modules, and is configured to perform attention weight calculation on the second width feature map and the second height feature map based on a Sigmoid activation function, so as to obtain the attention weights of the input feature map in the width direction and the height direction.
Step S36, acquiring an original feature map based on the multiplication weight calculation module, and carrying out multiplication weight calculation based on the first attention weight and the second attention weight on the basis of the original feature map to obtain a target feature map with attention weights in the width and height directions.
Specifically, the multiplication weighted calculation module is a "Re-weight" module illustrated in FIG. 3, and the module is used for obtaining a target feature map with attention weights in width and height directions finally through multiplication weighted calculation (Re-weight) based on the original feature map.
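Steps S31 to S36 describe a Coordinate Attention block. A minimal sketch of those steps is given below, assuming PyTorch (the text does not name the framework) and a placeholder reduction ratio r = 16; the `mid` channel floor of 8 is also an assumption.

```python
import torch
from torch import nn

class CoordinateAttention(nn.Module):
    """CA module per steps S31-S36: pool along H and W, concatenate, reduce
    channels C -> C/r with a 1x1 conv, BatchNorm + Sigmoid, split back,
    restore channels with two 1x1 convs, then re-weight the input."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        mid = max(8, channels // reduction)
        self.conv_reduce = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.Sigmoid()  # the text specifies Sigmoid at this stage
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        n, c, h, w = x.shape
        # S31: average-pool along width (X Avg Pool) and height (Y Avg Pool)
        x_h = x.mean(dim=3, keepdim=True)                      # (n, c, h, 1)
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)  # (n, c, w, 1)
        # S32: concatenate and reduce channels from C to C/r
        y = torch.cat([x_h, x_w], dim=2)                       # (n, c, h+w, 1)
        y = self.act(self.bn(self.conv_reduce(y)))             # S33
        # S34: split and restore the original channel count with 1x1 convs
        y_h, y_w = torch.split(y, [h, w], dim=2)
        y_w = y_w.permute(0, 1, 3, 2)
        # S35: attention weights for the height and width directions
        a_h = torch.sigmoid(self.conv_h(y_h))                  # (n, c, h, 1)
        a_w = torch.sigmoid(self.conv_w(y_w))                  # (n, c, 1, w)
        # S36: multiplicative re-weighting of the original feature map
        return x * a_h * a_w
```

The output has the same shape as the input feature map, so the block can be dropped in after any residual block unchanged.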
In the above embodiment, the model structure is improved on the basis of the reference network model: the CA mechanism is introduced into the main structure of the ResNet50 network, and the CA_ResNet50 network structure is constructed to increase the attention paid to the target area and thereby improve the model's classification performance.
In one embodiment, the basic residual block is composed of a conventional convolutional layer and a depth separable convolutional layer connected in sequence, wherein: the depth separable convolution layer is used for carrying out convolution operation on each input channel of the input feature map by respectively applying a depth convolution kernel; the depth separable convolution layer is further used for performing convolution operation on each depth convolved output channel by using a 1 multiplied by 1 convolution check; the depth separable convolution layer is further used for linearly combining channel characteristic diagrams of different output channels to generate a final output characteristic diagram.
Specifically, as shown in fig. 4 (compare fig. 2 with fig. 4), the present embodiment replaces the second conventional convolution layer in the residual block with a depth separable convolution layer. This deepens the network (the shortcut connections used in this application avoid vanishing gradients, so deeper layers can extract stronger features and improve accuracy) without increasing the number of parameters, reduces computational complexity, and makes later deployment of the model on mobile devices feasible.
It should be noted that the depthwise separable convolutional layer disclosed in the embodiment of the present application decomposes the convolution operation into two steps: first, a depthwise convolution (Depthwise Convolution) applies a separate depthwise convolution kernel to each input channel of the input feature map; then, a pointwise convolution (Pointwise Convolution) applies a 1×1 convolution kernel to each output channel of the depthwise convolution and, as in a conventional convolution operation, linearly combines the feature maps of the different channels to generate the final output feature map.
In the above embodiment, replacing the second conventional convolutional layer in the residual block with a depthwise separable convolutional layer greatly reduces the number of model parameters, further improves inference speed, and makes subsequent deployment of the model on mobile devices feasible.
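The parameter saving can be seen from the standard formulas for the two kinds of layer. The sketch below uses the general parameter-count formulas (bias terms omitted); the channel counts are illustrative and are not figures from this application.

```python
# Illustrative parameter-count comparison: a depthwise separable convolution
# replaces one k x k standard convolution with a k x k depthwise step plus a
# 1 x 1 pointwise step, which is where the parameter saving comes from.
def standard_conv_params(c_in, c_out, k):
    # One k x k kernel per (input channel, output channel) pair.
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    depthwise = c_in * k * k        # one k x k kernel per input channel
    pointwise = c_in * c_out        # 1 x 1 kernels recombine the channels
    return depthwise + pointwise

c_in, c_out, k = 256, 256, 3        # hypothetical residual-block dimensions
std = standard_conv_params(c_in, c_out, k)        # 589824
dsc = depthwise_separable_params(c_in, c_out, k)  # 67840
print(f"standard: {std}, separable: {dsc}, ratio: {std / dsc:.1f}x")
```

For a 3×3 convolution with 256 input and output channels, the separable form needs roughly an order of magnitude fewer parameters, consistent with the reduced complexity and mobile-deployment motivation stated above.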
In one embodiment, in step S5, identifying abnormal cells in the cervical pathology image based on the target abnormal cell identification model and outputting the corresponding diagnosis result includes:
step S51, acquiring a real-time cervical cell image, and inputting the real-time cervical cell image into the target abnormal cell identification model for processing to obtain a corresponding identification category and a prediction probability.
Specifically, the overall structure of the improved DSC_CA_ResNet50 network is shown in FIG. 5. In the present embodiment, the model is improved by constructing the DSC_CA_ResNet50 network on the basis of the original ResNet50 network, finally yielding the improved model (the target abnormal cell recognition model).
Step S52, comparing the prediction probability of the identification category with an optimal threshold: when the prediction probability is greater than the optimal threshold, a carcinoid-cell diagnosis result is output, and when the prediction probability is less than or equal to the optimal threshold, a non-carcinoid-cell diagnosis result is output.
Specifically, the optimal threshold is determined by:
determining a plurality of thresholds to be selected, and screening an optimal threshold from the thresholds to be selected by adopting a cross-validation mode and combining model prediction accuracy.
For example, suppose three thresholds, 0.45, 0.5 and 0.55, are selected according to prior experience, and the output probability distribution on the model's validation set is combined with each of the three thresholds to obtain model prediction results. The accuracy of the model on the validation set is then calculated for each threshold, where accuracy is the ratio of the number of correctly predicted results to the total number of validation samples. Finally, the threshold with the highest prediction accuracy among the three is taken as the optimal threshold.
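The screening step in the example above can be sketched as follows. The candidate thresholds match the example, but the toy validation probabilities and labels are hypothetical, used only to make the procedure runnable.

```python
# Sketch of the threshold-screening step: for each candidate threshold,
# binarize the validation-set probabilities, compare against the labels,
# and keep the threshold with the highest accuracy.
def select_optimal_threshold(probs, labels, candidates=(0.45, 0.5, 0.55)):
    best_threshold, best_accuracy = None, -1.0
    for t in candidates:
        predictions = [1 if p > t else 0 for p in probs]
        # Accuracy = correctly predicted results / total validation samples.
        accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
        if accuracy > best_accuracy:
            best_threshold, best_accuracy = t, accuracy
    return best_threshold, best_accuracy

# Toy validation set: positive-class probabilities and true labels.
probs  = [0.30, 0.48, 0.52, 0.60, 0.44, 0.95]
labels = [0,    0,    1,    1,    0,    1]
t, acc = select_optimal_threshold(probs, labels)
print(t, acc)  # 0.5 1.0 on this toy data
```

In a full cross-validation setting, the same loop would be run on each fold and the accuracies averaged per threshold before choosing the best one.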
It should be further noted that in the present embodiment the softmax function transforms the output of the last layer of the model into a probability distribution, i.e., it maps each element of the output vector to the range 0 to 1 and ensures that all elements sum to 1, each element representing the predicted probability of the corresponding class. A binary classification is then performed based on the optimal threshold, with carcinoid cells as the positive class and all other classes as the negative class: the prediction probability is compared with the optimal threshold, and a sample is judged positive if its prediction probability is greater than the optimal threshold and negative if it is less than or equal to the optimal threshold, providing the data basis for the subsequent diagnosis output.
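The output stage described above can be sketched in a few lines. The class ordering (carcinoid cells as the last of the five classes) and the logit values below are hypothetical; only the softmax-then-threshold logic follows the description.

```python
import math

# Softmax maps the final-layer logits to a probability distribution; the
# positive-class (carcinoid cell) probability is then compared against the
# optimal threshold to produce the binary diagnosis.
def softmax(logits):
    m = max(logits)                        # subtract max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def diagnose(logits, positive_index, optimal_threshold):
    probs = softmax(logits)
    if probs[positive_index] > optimal_threshold:
        return "carcinoid cell"
    return "non-carcinoid cell"

logits = [0.2, 0.1, 0.3, 0.5, 2.0]         # 5-class output; class 4 = carcinoid
probs = softmax(logits)
assert abs(sum(probs) - 1.0) < 1e-9        # probabilities sum to 1
result = diagnose(logits, positive_index=4, optimal_threshold=0.5)
print(result)
```

Note that the strict inequality matches the rule in the description: a probability exactly equal to the optimal threshold yields the negative (non-carcinoid) result.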
Referring to FIG. 6, the present application discloses a cervical pathology image abnormal cell identification system 600. The system 600 comprises a data acquisition module 601, a data processing module 602, a model construction module 603, a model training module 604, and a diagnosis module 605, wherein:
the data acquisition module 601 is configured to determine a plurality of types of historical cervical cell images.
The data processing module 602 is configured to construct a model training set based on the multiple types of historical cervical cell images.
The model building module 603 is configured to select a ResNet network as the original reference network model and add a CA module after a basic residual block of the ResNet network to obtain an initial abnormal cell identification model, where the CA module is configured to extract attention weight features from the output of the basic residual block and obtain a target feature map with attention weights in the width and height directions.
The model training module 604 is configured to input the model training set into an initial abnormal cell identification model for model training, and obtain a target abnormal cell identification model after training is completed.
The diagnosis module 605 is configured to perform cervical pathology image abnormal cell recognition based on the target abnormal cell recognition model, and output a corresponding diagnosis result.
In one embodiment, the modules in the system are further configured to perform the method of any of the alternative implementations of the above embodiments.
According to the cervical pathology image abnormal cell recognition system disclosed by the application, the ResNet50 network is selected for image feature learning; its depth enables the constructed abnormal cell recognition model to learn more complex features, thereby improving recognition accuracy. Introducing a CA mechanism into the ResNet50 network allows the improved model to accurately capture the relationships between different features, further improving image classification performance and automating cervical cancer diagnosis.
The present application provides a readable storage medium which, when executed by a processor, performs the method of any of the alternative implementations of the above embodiments. The storage medium may be implemented by any type of volatile or non-volatile memory device or combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
The readable storage medium selects the ResNet50 network for image feature learning; the network's depth enables the constructed abnormal cell identification model to learn more complex features, improving identification accuracy, while the CA mechanism introduced into the ResNet50 network enables the improved model to accurately capture the relationships between different features, further improving image classification performance and automating cervical cancer diagnosis.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The apparatus embodiments described above are merely illustrative; for example, the division into units is merely a logical functional division, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Further, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some communication interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
Further, the units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Furthermore, functional modules in various embodiments of the present application may be integrated together to form a single portion, or each module may exist alone, or two or more modules may be integrated to form a single portion.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and variations will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (10)

1. A method for identifying abnormal cells in cervical pathology images, which is characterized by comprising the following steps:
s1, determining a plurality of types of historical cervical cell images;
s2, constructing a model training set based on the multi-type historical cervical cell images;
s3, selecting a ResNet network as an original reference network model, and adding a CA module after a basic residual block of the ResNet network to obtain an initial abnormal cell identification model, wherein the CA module is used for extracting attention weight characteristics of the output of the basic residual block to obtain a target characteristic diagram with attention weight in the width and height directions;
s4, inputting the model training set into an initial abnormal cell identification model for model training, and obtaining a target abnormal cell identification model after training is finished;
s5, identifying abnormal cells of the cervical pathology image based on the target abnormal cell identification model, and outputting a corresponding diagnosis result.
2. The method according to claim 1, wherein in step S1, the determining the historical cervical cell image comprises:
s11, acquiring an initial historical cervical cell image;
s12, adjusting the initial historical cervical cell image to a preset size by adopting a preset size adjustment mode to obtain a standard size image;
s13, carrying out enhancement processing on the standard-size image according to a preset image enhancement mode to obtain an enhanced image;
s14, classifying based on the enhanced images to obtain multiple types of historical cervical cell images.
3. The method according to claim 2, wherein in step S12, the step of adjusting the initial historical cervical cell image to a preset size by using a preset size adjustment method, includes:
s121, scaling the initial historical cervical cell image by adopting a preset size adjustment mode to obtain a scaled image;
s122, acquiring the image size of the scaled image, and when the image size is determined to be larger than a preset size, cutting the scaled image to obtain a standard size image;
s123, when the image size is smaller than a preset size, performing edge zero filling processing on the scaled image to obtain a standard size image;
s124, when the image size is determined to be equal to a preset size, the scaled image is used as a standard size image.
4. The method of claim 1, wherein in step S14, the plurality of types of historical cervical cell images includes a first type of historical cervical cell image reflecting characteristics of superficial-middle-layer squamous cells, a second type of historical cervical cell image reflecting characteristics of columnar epithelial cells, a third type of historical cervical cell image reflecting characteristics of mildly abnormal cells, a fourth type of historical cervical cell image reflecting characteristics of moderately abnormal cells, and a fifth type of historical cervical cell image reflecting characteristics of carcinoid cells.
5. The method according to claim 1, wherein in step S3, the CA module comprises:
two parallel average pooling modules;
a channel dimension reduction module connected to the two parallel average pooling modules;
an attention feature extraction module connected to the channel dimension reduction module;
two parallel channel dimension-increasing modules connected to the attention feature extraction module;
an attention weight calculation module connected to the two parallel channel dimension-increasing modules;
a multiplicative weighting calculation module connected to the attention weighting calculation module.
6. The method according to claim 5, wherein in step S3, the target feature map is obtained by processing:
s31, based on the two parallel average pooling modules, performing feature aggregation on the original feature map output through the basic residual block along the width and height space directions respectively to obtain a corresponding first width feature map and a first height feature map;
s32, based on the channel dimension reduction module, sequentially performing splicing processing on the first width feature image and the first height feature image, and performing channel dimension reduction processing on the obtained spliced image according to a preset dimension reduction ratio to obtain a dimension reduction image;
s33, based on the attention feature extraction module, sequentially carrying out normalization processing on the dimension-reduced image and carrying out attention feature learning on the obtained normalized image through a Sigmoid activation function to obtain an attention feature map;
s34, based on two parallel channel dimension-increasing modules, carrying out channel dimension-increasing processing with a convolution kernel of 1 multiplied by 1 on the attention feature map according to the initial image width and the initial image height before the dimension reduction by splicing to obtain a corresponding second width feature map and a corresponding second height feature map;
s35, based on the attention weight calculation module, performing attention weight calculation on the second width feature map and the second height feature map through a Sigmoid activation function respectively to obtain a first attention weight corresponding to the width space direction and a second attention weight corresponding to the height space direction;
s36, acquiring an original feature map based on the multiplication weighted calculation module, and carrying out multiplication weighted calculation based on the first attention weight and the second attention weight on the basis of the original feature map to obtain a target feature map with attention weights in the width and height directions.
7. The method of claim 1, wherein the basic residual block is comprised of a legacy convolutional layer and a depth separable convolutional layer connected in sequence, wherein:
the depth separable convolution layer is used for carrying out convolution operation on each input channel of the input feature map by respectively applying a depth convolution kernel;
the depth separable convolution layer is further used for performing a convolution operation on each depthwise-convolved output channel with a 1×1 convolution kernel;
the depth separable convolution layer is further used for linearly combining channel characteristic diagrams of different output channels to generate a final output characteristic diagram.
8. The method according to claim 1, wherein in step S5, the identifying abnormal cells in the cervical pathology image based on the target abnormal cell identification model, and outputting the corresponding diagnosis result, includes:
s51, acquiring a real-time cervical cell image, and inputting the real-time cervical cell image into the target abnormal cell identification model for processing to obtain a corresponding identification category and a prediction probability;
s52, comparing the prediction probability corresponding to the identification category with an optimal threshold, wherein when the prediction probability is determined to be greater than the optimal threshold, a carcinoid cell diagnosis result is output, and when the prediction probability is determined to be less than or equal to the optimal threshold, a non-carcinoid cell diagnosis result is output.
9. The system for identifying abnormal cells in cervical pathology image is characterized by comprising a data acquisition module, a data processing module, a model construction module, a model training module and a diagnosis module, wherein:
the data acquisition module is used for determining a plurality of types of historical cervical cell images;
the data processing module is used for constructing a model training set based on the multi-type historical cervical cell images;
the model construction module is used for selecting a ResNet network as an original reference network model, and adding a CA module after a basic residual block of the ResNet network to obtain an initial abnormal cell identification model, wherein the CA module is used for extracting attention weight characteristics of the output of the basic residual block to obtain a target characteristic diagram with attention weight in the width and height directions;
the model training module is used for inputting the model training set into an initial abnormal cell identification model to perform model training, and obtaining a target abnormal cell identification model after the training is finished;
the diagnosis module is used for identifying abnormal cells of the cervical pathology image based on the target abnormal cell identification model and outputting corresponding diagnosis results.
10. A storage medium, characterized in that it comprises a cervical pathology image abnormal cell identification method program, which, when executed by a processor, implements the steps of the method according to any one of claims 1 to 8.
CN202310700927.5A 2023-06-14 2023-06-14 Cervical pathology image abnormal cell identification method, system and storage medium Pending CN116843956A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310700927.5A CN116843956A (en) 2023-06-14 2023-06-14 Cervical pathology image abnormal cell identification method, system and storage medium


Publications (1)

Publication Number Publication Date
CN116843956A true CN116843956A (en) 2023-10-03

Family

ID=88171637

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310700927.5A Pending CN116843956A (en) 2023-06-14 2023-06-14 Cervical pathology image abnormal cell identification method, system and storage medium

Country Status (1)

Country Link
CN (1) CN116843956A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113570573A (en) * 2021-07-28 2021-10-29 中山仰视科技有限公司 Pulmonary nodule false positive eliminating method, system and equipment based on mixed attention mechanism
WO2022145999A1 (en) * 2020-12-30 2022-07-07 ㈜엔티엘헬스케어 Artificial-intelligence-based cervical cancer screening service system
CN114897779A (en) * 2022-04-12 2022-08-12 华南理工大学 Cervical cytology image abnormal area positioning method and device based on fusion attention



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination