CN116977805A - Cervical fluid-based cell detection method based on feature fusion and storage medium - Google Patents


Info

Publication number
CN116977805A
Authority
CN
China
Prior art keywords
image
cell
feature
features
vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310738261.2A
Other languages
Chinese (zh)
Inventor
王晓梅 (Wang Xiaomei)
胡荷萍 (Hu Heping)
张仕侨 (Zhang Shiqiao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Yice Technology Co., Ltd.
Original Assignee
Hangzhou Yice Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Yice Technology Co., Ltd.
Priority to CN202310738261.2A
Publication of CN116977805A
Legal status: Pending (current)

Classifications

    • G06V 10/806: Fusion, i.e. combining data from various sources at the feature extraction level (fusion of extracted features)
    • G06V 10/22: Image preprocessing by selection of a specific region containing or referencing a pattern; locating or processing of specific regions to guide the detection or recognition
    • G06V 10/26: Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06V 10/40: Extraction of image or video features
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V 10/764: Image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
    • G06V 10/82: Image or video recognition or understanding using pattern recognition or machine learning, using neural networks
    • G16H 30/20: ICT specially adapted for the handling or processing of medical images, e.g. DICOM, HL7 or PACS
    • G06V 2201/03: Recognition of patterns in medical or anatomical images


Abstract

The invention relates to the technical field of image detection, and in particular to a cervical fluid-based cell detection method based on feature fusion, and a storage medium. The method comprises: step S1: extracting rough contour features from an input image and segmenting them to obtain rough contour images; step S2: performing morphological feature extraction on the rough contour images to construct a description vector, and further processing the rough contour features to obtain image extraction features; step S3: fusing the image extraction features and the description vector to obtain a fusion vector; step S4: predicting, based on the fusion vector, the cell position and cell label of each cell to be detected. The beneficial effects are that morphological features describing cell morphology are added to the detection process, fused with conventional image features into a fusion vector, and detection is then performed on that fusion vector, so that the artificial intelligence model can be trained and can detect directly on the basis of cell-related morphological features, improving detection efficiency.

Description

Cervical fluid-based cell detection method based on feature fusion and storage medium
Technical Field
The invention relates to the technical field of image detection, and in particular to a cervical fluid-based cell detection method based on feature fusion and to a storage medium.
Background
Cervical fluid-based cytological examination (ThinPrep cytologic test, TCT) is a cervical cancer screening technique in which a liquid-based thin-layer cell preparation system is used to collect cervical cells for cytological differential diagnosis. Compared with the conventional cervical-scraping Pap smear, it markedly improves specimen adequacy and the detection rate of abnormal cervical cells. It can also reveal some precancerous lesions, as well as microbial infections such as fungi, trichomonas, viruses, and chlamydia.
In the prior art, to improve the efficiency with which physicians examine cells, technical schemes already exist that introduce artificial intelligence models to process, detect, and classify the images and then add labels that assist the physician's review. For example, Chinese patent CN202010553018.X discloses a method for establishing an automatic cervical cell classification model and for automatic cervical cell classification, in which, after the image is processed, cells in the image are detected and classified based on a pre-trained VGG16 model.
However, in actual implementation the inventors found that such schemes extract image features, train the model, and perform detection with a conventional artificial intelligence model: they attend only to the patterns each convolution layer happens to extract, offer relatively weak interpretability throughout training and detection, and require repeated training and validation to confirm that the model extracts the correct image features and detects correctly.
Disclosure of Invention
To address the above problems in the prior art, a cervical fluid-based cell detection method based on feature fusion is provided; in another aspect, a storage medium storing computer instructions corresponding to the cervical fluid-based cell detection method is also provided.
The specific technical scheme is as follows:
a cervical fluid-based cell detection method based on feature fusion, comprising:
step S1: acquiring an input image, extracting rough contour features from the input image, and dividing the rough contour features to obtain a rough contour image;
step S2: carrying out morphological feature extraction on the rough contour image, constructing a description vector, and further extracting the rough contour feature to obtain an image extraction feature;
step S3: fusing the image extraction features and the description vector to obtain a fusion vector;
step S4: and predicting based on the fusion vector to obtain the cell position and cell label of each cell to be detected.
In another aspect, the step S1 includes:
step S11: acquiring the input image, and carrying out feature extraction on the input image for a plurality of times to obtain feature map information;
step S12: constructing a plurality of candidate frames in the input image according to the feature map information;
step S13: classifying the candidate frames to obtain the rough contour features, and dividing the input image according to the candidate frames corresponding to the rough contour features to obtain the rough contour image.
In another aspect, in the step S2, the method for constructing the description vector includes:
step A21: performing cell contour segmentation and cell nucleus contour segmentation on the rough contour image to obtain a cell contour image and a cell nucleus image;
step A22: performing morphological feature extraction on the cell contour image and the cell nucleus image to obtain a plurality of morphological features, and constructing the description vector based on these morphological features;
the morphological features include: cytoplasmic color, nuclear membrane regularity, nuclear-to-cytoplasmic ratio, chromatin coarseness within the nucleus, and inter-nuclear distance;
the nuclear membrane regularity comprises the mean of the distance from the nucleus center to the nuclear membrane edge, the standard deviation of that distance, and the maximum curvature of the nuclear membrane.
In another aspect, in the step S2, the method for constructing the image extraction features includes:
step B21: dividing the rough contour features to obtain a plurality of candidate units;
step B22: performing bilinear interpolation on each candidate unit, and calculating to obtain a central pixel value corresponding to the candidate unit;
step B23: the image extraction features are constructed in accordance with the center pixel values.
In another aspect, the step S3 includes:
step S31: splicing the image extraction features and the description vectors to obtain a combined vector;
step S32: predicting according to the combined vector to obtain dimension weight corresponding to each characteristic dimension in the combined vector;
step S33: performing a weighted calculation according to the dimension weights, the feature dimensions, and the combined vector to obtain the fusion vector.
In another aspect, in the step S31, the combined vector is formed by sequentially combining one set of the image extraction features and two sets of the description vectors.
In another aspect, in the step S32, the dimension weight is calculated using a weight calculation model, where the weight calculation model includes:
a first full-connection layer, which receives the combined vector and predicts a first weight feature;
a noise linear rectification module, connected with the first full-connection layer, which processes the first weight feature to obtain a second weight feature;
a second full-connection layer, connected with the noise linear rectification module, which processes the second weight feature to obtain a third weight feature; and
an activation function, connected with the second full-connection layer, which remaps the third weight feature to obtain the dimension weight;
the numbers of first weight features, second weight features, third weight features, and dimension weights each correspond to the number of feature dimensions.
In another aspect, the step S33 includes:
step S331: calculating according to the dimension weight and each feature dimension to obtain a plurality of weighted features;
step S332: splicing the weighted features and the combined vector to obtain spliced features;
step S333: and inputting the splicing characteristics into a full-connection layer to obtain the fusion vector.
In another aspect, the step S4 includes:
step S41: predicting a plurality of cell areas corresponding to the cells to be detected based on the fusion vector;
step S42: classifying the cell areas to obtain the cell labels, and then outputting the cells to be detected.
A storage medium having stored therein computer instructions adapted to be executed on a computer device, wherein the computer device, when executing the computer instructions, performs the cervical liquid-based cell detection method described above.
The technical scheme has the following advantages or beneficial effects:
aiming at the problem that in the prior art, the image feature-dependent detection process can be realized only by repeated verification of an artificial intelligent model training and detecting method which only depends on the image feature, in the scheme, the morphological feature related to the morphological feature of the cell is added in the detection process, the morphological feature is fused with the traditional image feature to form a fusion vector, and then the fusion vector is used for detection, so that the artificial intelligent model can be trained and detected directly according to the morphological feature related to the cell in the detection process, and the detection efficiency is improved.
Drawings
Embodiments of the present invention will now be described more fully with reference to the accompanying drawings. The drawings, however, are for illustration and description only and are not intended as a definition of the limits of the invention.
FIG. 1 is an overall schematic diagram of an embodiment of the present invention;
FIG. 2 is a schematic diagram of the sub-steps of step S1 in an embodiment of the present invention;
FIG. 3 is a schematic diagram of the description vector construction process in an embodiment of the present invention;
FIG. 4 is a schematic diagram of contour extraction in an embodiment of the present invention;
FIG. 5 is a schematic diagram of image feature extraction in an embodiment of the present invention;
FIG. 6 is a schematic diagram of the sub-steps of step S3 in an embodiment of the present invention;
FIG. 7 is a schematic diagram of the weight calculation model in an embodiment of the present invention;
FIG. 8 is a schematic diagram of the sub-steps of step S33 in an embodiment of the present invention;
FIG. 9 is a schematic diagram of the sub-steps of step S4 in an embodiment of the present invention.
Detailed Description
The embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are evidently only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort fall within the scope of the present invention.
It should be noted that, without conflict, the embodiments of the present invention and features of the embodiments may be combined with each other.
The invention is further described below with reference to the drawings and specific examples, which are not intended to be limiting.
The invention comprises the following steps:
a cervical fluid-based cell detection method based on feature fusion, as shown in fig. 1, comprising:
step S1: acquiring an input image, extracting rough contour features from the input image, and dividing the rough contour features to obtain a rough contour image;
step S2: carrying out morphological feature extraction on the rough contour image, constructing a description vector, and further extracting the rough contour features to obtain image extraction features;
step S3: fusing the image extraction features and the description vectors to obtain fusion vectors;
step S4: and predicting based on the fusion vector to obtain the cell position and cell label of each cell to be detected.
Specifically, prior-art training and detection methods for artificial intelligence models rely only on image features and need repeated verification to realize the image-feature-dependent detection process. In this embodiment, after rough contour feature extraction and image segmentation are performed on the input image, morphological features associated with cell morphology are extracted to obtain a plurality of feature dimensions describing the cell information, and these are assembled into a description vector; the image extraction features are obtained with a conventional neural-network extraction method. Once the description vector and the image extraction features are obtained, they are fused into a fusion vector, and prediction is performed on that fusion vector, achieving a better detection effect.
In implementation, the cervical fluid-based cell detection method is deployed on computer equipment as a software embodiment for processing the input image, detecting the cell position of each cell to be detected in the input image, and classifying its cell label. The input image is obtained by scanning a cervical fluid-based slide with a digital pathology image scanning system, the slide having been prepared by sampling, sectioning, staining, and the like according to existing slide preparation processes. The input image contains a plurality of cells to be detected, for which different diagnostic methods define corresponding categories characterizing different disease processes. The rough contour features are a series of candidate boxes obtained by detecting and predicting the regions of the input image that may contain a cell to be detected; they roughly mark the region of each such cell. The morphological features are features directly related to cell morphology, each serving as one dimension of the description vector. The image extraction features are high-dimensional image features extracted by convolution and similar neural-network operations.
In one embodiment, as shown in fig. 2, step S1 includes:
step S11: acquiring the input image, and carrying out feature extraction on the input image for a plurality of times to obtain feature map information;
step S12: constructing a plurality of candidate frames in the input image according to the feature map information;
step S13: classifying the candidate frames to obtain the rough contour features, and dividing the input image according to the candidate frames corresponding to the rough contour features to obtain the rough contour image.
Specifically, to achieve a better detection effect, this embodiment introduces an extraction process similar to an RPN (region proposal network) structure. After the input image is acquired, a pretrained convolutional network performs feature extraction on it multiple times to obtain the feature map information. A region proposal network then constructs a plurality of candidate boxes of different sizes from the feature map information, the contents of the candidate boxes are classified to judge whether a cell to be detected is present, and the rough contour features are obtained by screening. Once it is determined which candidate boxes contain cells to be detected, the input image is segmented based on those candidate boxes, and each segmented image serves as a rough contour image.
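By way of illustration only, the following minimal PyTorch sketch mimics such an RPN-style stage under simplifying assumptions: a ResNet-18 backbone stands in for the pretrained convolutional network, a single 1x1 convolution scores one candidate per feature-map location, and the top-scoring locations are cropped as fixed-size rough contour images. The sizes, module choices, and names below are the editor's assumptions, not details taken from the patent.

```python
import torch
import torchvision

# Backbone stand-in for the pretrained convolutional network (step S11).
# weights=None keeps the sketch offline; in practice pretrained weights
# would be loaded.
backbone = torchvision.models.resnet18(weights=None)
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-2])

# A 1x1 conv head scoring "cell present?" per feature-map location,
# a crude stand-in for candidate-box classification (steps S12-S13).
objectness_head = torch.nn.Conv2d(512, 1, kernel_size=1)

image = torch.rand(1, 3, 512, 512)  # stand-in for the scanned input image
with torch.no_grad():
    fmap = feature_extractor(image)           # feature map information
    scores = objectness_head(fmap).sigmoid()  # per-location foreground score

# Map the top-K locations back to fixed-size boxes in image coordinates
# (stride 32 for this backbone) and crop them as rough contour images.
stride, half = 32, 32
W = scores.shape[-1]
crops = []
for idx in scores.flatten().topk(k=5).indices.tolist():
    y, x = divmod(idx, W)
    cy, cx = y * stride, x * stride
    crops.append(image[..., max(0, cy - half):cy + half,
                            max(0, cx - half):cx + half])
print([tuple(c.shape) for c in crops])
```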
In one embodiment, in step S2, as shown in fig. 3, a method for constructing a description vector includes:
step A21: performing cell contour segmentation and cell nucleus contour segmentation on the rough contour image to obtain a cell contour image and a cell nucleus image;
step A22: performing morphological feature extraction on the cell contour image and the cell nucleus image to obtain a plurality of morphological features, and constructing the description vector based on these morphological features;
the morphological features include: cytoplasmic color, nuclear membrane regularity, nuclear-to-cytoplasmic ratio, chromatin coarseness within the nucleus, and inter-nuclear distance;
the nuclear membrane regularity includes the mean of the distance from the nucleus center to the nuclear membrane edge, the standard deviation of that distance, and the maximum curvature of the nuclear membrane.
Specifically, prior-art artificial intelligence detection methods rely only on the image features extracted by the convolution layers, which leads to unstable detection results; the morphological features introduced here address that problem.
Further, when describing cells morphologically, the prior art may extract far more features, including dozens such as color, texture, image features of color and of texture, cell area, aspect ratio, and perimeter. If too many feature dimensions are used when constructing the description vector, the model tends to overfit during subsequent training and prediction, reducing detection accuracy. To address this, the inventors assessed the influence of each feature on the prediction result through extensive regression studies and, drawing on the experience of senior pathology experts, selected the features with strong interpretability and high relevance. As shown in fig. 4, after the input image is segmented and classified via the candidate boxes, a rough contour image bounded by the external detection box 101 and containing the cell to be detected is obtained. Contour extraction is then performed on the rough contour image to obtain a cell contour image 102 and a cell nucleus image 103; a plurality of sampling points are selected on the cell nucleus image 103, and the minimum distance from each sampling point to the cell contour image 102 is calculated as the nuclear membrane edge distance. In terms of feature calculation: the cytoplasmic color is classed as red or blue, encoded as 0 or 1; nuclear membrane regularity is expressed by three variables, namely the mean and standard deviation of the distance from the nucleus center to the nuclear membrane edge and the maximum curvature of the nuclear membrane; the nuclear-to-cytoplasmic ratio is the ratio of the nucleus area to the cell area; chromatin coarseness is the mean of the pixel values within the nucleus; and the inter-nuclear distance is the mean and standard deviation of the distances between the center of the current cell and the centers of the other cells in the patch, defaulting to 0 and 0 if there is only one cell. These treatments yield a better morphological characterization of the cells.
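A minimal NumPy/OpenCV sketch of how the retained features might be computed from segmented masks is given below. It covers the nuclear-to-cytoplasmic ratio, nuclear membrane regularity (mean and standard deviation of the center-to-edge distance plus maximum curvature), and chromatin coarseness; cytoplasmic color and inter-nuclear distance are omitted for brevity. All names and the finite-difference curvature approximation are the editor's assumptions, not the patent's formulas.

```python
import numpy as np
import cv2

def morphological_features(cell_mask, nucleus_mask, nucleus_pixels):
    """Sketch of part of the description vector. cell_mask and nucleus_mask
    are 0/1 uint8 masks from the contour segmentation (step A21);
    nucleus_pixels are the grayscale values inside the nucleus."""
    # Nuclear-to-cytoplasmic ratio: nucleus area over cell area.
    nc_ratio = float(nucleus_mask.sum()) / max(float(cell_mask.sum()), 1.0)

    # Nuclear membrane regularity: sample the nucleus contour and measure
    # the distance from the centroid to each contour point.
    contours, _ = cv2.findContours(nucleus_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    pts = max(contours, key=cv2.contourArea).reshape(-1, 2).astype(float)
    center = pts.mean(axis=0)
    dists = np.linalg.norm(pts - center, axis=1)

    # Maximum curvature of the membrane, approximated by finite differences
    # along the sampled contour.
    d1 = np.gradient(pts, axis=0)
    d2 = np.gradient(d1, axis=0)
    curv = np.abs(d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0]) / \
           (np.linalg.norm(d1, axis=1) ** 3 + 1e-8)

    # Chromatin coarseness: mean pixel value inside the nucleus.
    return {"nc_ratio": nc_ratio,
            "edge_dist_mean": float(dists.mean()),
            "edge_dist_std": float(dists.std()),
            "max_curvature": float(curv.max()),
            "chromatin_mean": float(np.mean(nucleus_pixels))}

# Synthetic usage: concentric discs stand in for nucleus and cell masks.
nuc = np.zeros((64, 64), np.uint8); cv2.circle(nuc, (32, 32), 10, 1, -1)
cel = np.zeros((64, 64), np.uint8); cv2.circle(cel, (32, 32), 20, 1, -1)
print(morphological_features(cel, nuc, np.random.randint(0, 256, int(nuc.sum()))))
```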
In one embodiment, in step S2, as shown in fig. 5, a method for constructing an image extraction feature includes:
step B21: dividing the rough contour features to obtain a plurality of candidate units;
step B22: performing bilinear interpolation on each candidate unit, and calculating to obtain a central pixel value corresponding to the candidate unit;
step B23: an image extraction feature is constructed in accordance with the center pixel value.
Specifically, to achieve a better feature extraction effect, this embodiment introduces an image feature extraction method similar to ROI Align. After the rough contour features are received, they are divided by a window of the corresponding size into a plurality of candidate units that do not overlap one another. Each candidate unit is then sampled by bilinear interpolation, and its center pixel value is calculated. From the set of center pixel values, the originally input rough contour features can be back-propagated and mapped onto the original image while the coordinate offsets produced during sampling are corrected, thereby constructing the image extraction features.
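For reference, torchvision ships an ROI Align operator that performs exactly this bin-plus-bilinear-interpolation sampling. The sketch below applies it to a dummy feature map with a made-up candidate box, only to show the shape of the pooled result; it is an analogue of the described method, not the patent's implementation.

```python
import torch
from torchvision.ops import roi_align

# Dummy backbone feature map and one candidate box given as
# (batch_index, x1, y1, x2, y2) in feature-map coordinates.
fmap = torch.rand(1, 256, 50, 50)
boxes = torch.tensor([[0.0, 4.0, 4.0, 28.0, 28.0]])

# Each box is split into 7x7 candidate units; each unit is sampled at
# sub-pixel positions by bilinear interpolation (steps B21-B22).
pooled = roi_align(fmap, boxes, output_size=(7, 7),
                   spatial_scale=1.0, sampling_ratio=2, aligned=True)
print(pooled.shape)  # (1, 256, 7, 7), flattened into the image features (B23)
```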
In one embodiment, as shown in fig. 6, step S3 includes:
step S31: splicing the image extraction characteristics and the description vector to obtain a combined vector;
step S32: predicting according to the combined vector to obtain dimension weight corresponding to each characteristic dimension in the combined vector;
step S33: performing a weighted calculation according to the dimension weights, the feature dimensions, and the combined vector to obtain the fusion vector.
Specifically, to achieve a better effect in the subsequent prediction process, this embodiment first splices the image extraction features and the description vector, as multiple dimensions of a combined vector, to obtain the combined vector; dimension weights are then computed for the feature dimensions of the combined vector, and weighting is applied. Meanwhile, to better retain the original feature information, the unweighted combined vector is also incorporated when constructing the fusion vector. With this arrangement, the model achieves a good prediction effect.
Further, in step S31, the combined vector is formed by sequentially combining one set of image extraction features and two sets of description vectors; in implementation this takes the form "combined vector = image extraction features + description vector + description vector". This combination increases the share of the morphology-related description vector in the combined vector, so that the model pays more attention to the morphological characteristics of the cells.
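Concretely, if the flattened image extraction features and the description vector were plain tensors, the 1:2 splice would reduce to a single concatenation, as in this sketch (all sizes invented):

```python
import torch

img_feat = torch.rand(256)  # flattened image extraction features (size invented)
desc_vec = torch.rand(9)    # morphological description vector (size invented)

# One set of image features followed by two copies of the description
# vector, doubling morphology's share of the combined vector.
combined = torch.cat([img_feat, desc_vec, desc_vec])
print(combined.shape)  # torch.Size([274])
```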
In one embodiment, in step S32, a weight calculation model is used to calculate the dimension weights; as shown in fig. 7, the weight calculation model includes:
a first full-connection layer C1, which receives the combined vector and predicts a first weight feature;
a noise linear rectification module C2, connected with the first full-connection layer C1, which processes the first weight feature to obtain a second weight feature;
a second full-connection layer C3, connected with the noise linear rectification module C2, which processes the second weight feature to obtain a third weight feature; and
an activation function C4, connected with the second full-connection layer C3, which remaps the third weight feature to obtain the dimension weights;
the numbers of first weight features, second weight features, third weight features, and dimension weights each correspond to the number of feature dimensions.
Specifically, to better determine the weight of each feature dimension, this embodiment designs the weight calculation model so that the combined vector is mapped into a weight space by the first full-connection layer C1, the noise linear rectification module C2, the second full-connection layer C3, and the activation function C4, arranged in sequence; the noise linear rectification module C2 and the activation function C4 fit the value domain of the full-connection layers to the target domain. To achieve a good fit, the noise linear rectification module C2 is implemented with a ReLU function and the activation function C4 with a sigmoid function.
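Read this way, the weight calculation model is a small gating network. A minimal PyTorch sketch under that reading follows; the layer widths are assumptions, since the patent does not specify them.

```python
import torch
from torch import nn

dim = 274  # combined-vector length, carried over from the sketch above

# First FC layer (C1) -> ReLU as the noise linear rectification module (C2)
# -> second FC layer (C3) -> sigmoid activation (C4), producing one weight
# per feature dimension of the combined vector.
weight_model = nn.Sequential(
    nn.Linear(dim, dim),  # C1: first weight feature
    nn.ReLU(),            # C2: second weight feature
    nn.Linear(dim, dim),  # C3: third weight feature
    nn.Sigmoid(),         # C4: remap to (0, 1) as the dimension weights
)
dim_weights = weight_model(torch.rand(dim))
print(dim_weights.shape)  # one weight per feature dimension
```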
In one embodiment, as shown in fig. 8, step S33 includes:
step S331: calculating according to the dimension weight and each feature dimension to obtain a plurality of weighted features;
step S332: splicing the weighted features and the combined vectors to obtain spliced features;
step S333: and inputting the spliced characteristics into the full-connection layer to obtain a fusion vector.
Specifically, to achieve a better fusion effect, in this embodiment, after the dimension weights are obtained, each feature dimension is first weighted by its dimension weight to obtain the weighted features. The weighted features and the combined vector are then spliced to obtain the spliced features, which contain both the dimension-weighted feature dimensions and the feature dimensions left unadjusted. The spliced features are then input into a full-connection layer to obtain the fusion vector, realizing a better fusion effect.
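Under the same assumptions as the sketches above, steps S331 to S333 reduce to an element-wise product, a concatenation, and one fully connected layer:

```python
import torch
from torch import nn

combined = torch.rand(274)     # combined vector from step S31
dim_weights = torch.rand(274)  # dimension weights from step S32

weighted = dim_weights * combined            # S331: weighted features
spliced = torch.cat([weighted, combined])    # S332: keep the unweighted copy
fusion_fc = nn.Linear(spliced.numel(), 274)  # S333: fully connected layer
fusion_vector = fusion_fc(spliced)
print(fusion_vector.shape)  # the fusion vector
```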
In one embodiment, as shown in fig. 9, step S4 includes:
step S41: predicting a plurality of cell areas corresponding to cells to be detected based on the fusion vector;
step S42: and classifying the cell areas to obtain cell labels, and then outputting the cells to be detected.
Specifically, to achieve a better prediction effect, in this embodiment, after the fusion vector constructed above is obtained, the input image is predicted through a full-connection layer based on the fusion vector, yielding the cell regions associated with the cells to be detected. The cropped cell regions are then classified by a classifier to generate a cell label for each cell to be detected; the cell labels are predefined labels corresponding to the relevant medical requirements and characterize the possible states of the cells. Each cell label is combined with the image of its cell region to obtain the cell to be detected, which is then output.
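A hedged sketch of what such a prediction head could look like: one linear layer regresses a box for the cell region and another scores the cell labels. The class count and box encoding are invented for illustration.

```python
import torch
from torch import nn

num_labels = 5  # invented number of cell label categories
fusion_vector = torch.rand(274)

region_head = nn.Linear(274, 4)          # S41: cell region as (x, y, w, h)
label_head = nn.Linear(274, num_labels)  # S42: cell label scores

region = region_head(fusion_vector)
label = label_head(fusion_vector).softmax(dim=-1).argmax()
print(region.tolist(), int(label))  # predicted region and label index
```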
A storage medium having stored therein computer instructions adapted to be executed on a computer device, which when executed by the computer device, performs the cervical liquid based cell detection method described above.
The foregoing merely describes the preferred embodiments of the present invention and does not limit its embodiments or scope. Those skilled in the art will appreciate that equivalent substitutions and obvious variations made using the description and drawings of the present invention are intended to fall within the scope of the present invention.

Claims (10)

1. A cervical fluid-based cell detection method based on feature fusion, comprising:
step S1: acquiring an input image, extracting rough contour features from the input image, and dividing the rough contour features to obtain a rough contour image;
step S2: carrying out morphological feature extraction on the rough contour image, constructing a description vector, and further extracting the rough contour feature to obtain an image extraction feature;
step S3: fusing the image extraction features and the description vector to obtain a fusion vector;
step S4: and predicting based on the fusion vector to obtain the cell position and cell label of each cell to be detected.
2. The cervical fluid-based cell detection method according to claim 1, wherein the step S1 comprises:
step S11: acquiring the input image, and carrying out feature extraction on the input image for a plurality of times to obtain feature map information;
step S12: constructing a plurality of candidate frames in the input image according to the feature map information;
step S13: classifying the candidate frames to obtain the rough contour features, and dividing the input image according to the candidate frames corresponding to the rough contour features to obtain the rough contour image.
3. The cervical fluid-based cell detection method according to claim 1, wherein in the step S2, the method for constructing the description vector comprises:
step A21: performing cell contour segmentation and cell nucleus contour segmentation on the rough contour image to obtain a cell contour image and a cell nucleus image;
step A22: performing morphological feature extraction on the cell contour image and the cell nucleus image to obtain a plurality of morphological features, and constructing the description vector based on these morphological features;
the morphological features include: cytoplasmic color, nuclear membrane regularity, nuclear-to-cytoplasmic ratio, chromatin coarseness within the nucleus, and inter-nuclear distance;
the nuclear membrane regularity comprises the mean of the distance from the nucleus center to the nuclear membrane edge, the standard deviation of that distance, and the maximum curvature of the nuclear membrane.
4. The cervical fluid-based cell detection method according to claim 1, wherein in the step S2, the method for constructing the image extraction features comprises:
step B21: dividing the rough contour features to obtain a plurality of candidate units;
step B22: performing bilinear interpolation on each candidate unit, and calculating to obtain a central pixel value corresponding to the candidate unit;
step B23: the image extraction features are constructed in accordance with the center pixel values.
5. The cervical fluid-based cell detection method according to claim 1, wherein the step S3 comprises:
step S31: splicing the image extraction features and the description vectors to obtain a combined vector;
step S32: predicting according to the combined vector to obtain dimension weight corresponding to each characteristic dimension in the combined vector;
step S33: performing a weighted calculation according to the dimension weights, the feature dimensions, and the combined vector to obtain the fusion vector.
6. The cervical fluid-based cell detection method according to claim 5, wherein in the step S31, the combined vector is formed by sequentially combining one set of the image extraction features and two sets of the description vectors.
7. The cervical fluid-based cell detection method according to claim 5, wherein in the step S32, the dimension weight is calculated using a weight calculation model, the weight calculation model comprising:
a first full-connection layer, which receives the combined vector and predicts a first weight feature;
a noise linear rectification module, connected with the first full-connection layer, which processes the first weight feature to obtain a second weight feature;
a second full-connection layer, connected with the noise linear rectification module, which processes the second weight feature to obtain a third weight feature; and
an activation function, connected with the second full-connection layer, which remaps the third weight feature to obtain the dimension weight;
wherein the numbers of first weight features, second weight features, third weight features, and dimension weights each correspond to the number of feature dimensions.
8. The cervical fluid-based cell detection method according to claim 5, wherein the step S33 comprises:
step S331: calculating according to the dimension weight and each feature dimension to obtain a plurality of weighted features;
step S332: splicing the weighted features and the combined vector to obtain spliced features;
step S333: and inputting the splicing characteristics into a full-connection layer to obtain the fusion vector.
9. The cervical fluid-based cell detection method according to claim 1, wherein the step S4 comprises:
step S41: predicting a plurality of cell areas corresponding to the cells to be detected based on the fusion vector;
step S42: classifying the cell areas to obtain the cell labels, and then outputting the cells to be detected.
10. A storage medium having stored therein computer instructions adapted to be executed on a computer device, wherein the computer device, when executing the computer instructions, performs the cervical fluid-based cell detection method according to any one of claims 1 to 9.
CN202310738261.2A 2023-06-20 2023-06-20 Cervical fluid-based cell detection method based on feature fusion and storage medium Pending CN116977805A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310738261.2A CN116977805A (en) 2023-06-20 2023-06-20 Cervical fluid-based cell detection method based on feature fusion and storage medium


Publications (1)

Publication Number Publication Date
CN116977805A 2023-10-31

Family

ID=88472078

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310738261.2A Pending CN116977805A (en) 2023-06-20 2023-06-20 Cervical fluid-based cell detection method based on feature fusion and storage medium

Country Status (1)

Country Link
CN (1) CN116977805A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination