CN116311240A - Application method and system in cell classification process - Google Patents

Application method and system in cell classification process

Info

Publication number
CN116311240A
Authority
CN
China
Prior art keywords
cell
image
training
classification
cell classification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310272806.5A
Other languages
Chinese (zh)
Inventor
杨志明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ideepwise Artificial Intelligence Robot Technology Beijing Co ltd
Original Assignee
Ideepwise Artificial Intelligence Robot Technology Beijing Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ideepwise Artificial Intelligence Robot Technology Beijing Co ltd filed Critical Ideepwise Artificial Intelligence Robot Technology Beijing Co ltd
Publication of CN116311240A


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/69 Microscopic objects, e.g. biological cells or cellular parts
    • G06V20/698 Matching; Classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses an application method and system in a cell classification process. In the embodiment of the application, the cell classification network adopts a multi-level feature pyramid structure in its network architecture and an adaptive contrast loss function as its loss function; after training, a cell classification model is obtained that outputs abnormal, normal and impurity categories. In the embodiment of the application, the multi-level feature fusion module lets each feature layer fuse semantic feature information and spatial feature information simultaneously, so that positive cells with different morphological features can be identified; the loss weights are adjusted automatically using the similarity between cell features, so that the model learns more discriminative high-dimensional features, thereby improving classification accuracy.

Description

Application method and system in cell classification process
Technical Field
The invention relates to the technical field of computer image processing, and in particular to an application method and system, based on multi-level feature fusion and adaptive contrast loss, for use in a cell classification process.
Background
In recent years, machine learning techniques based on supervised learning have contributed significantly to the field of artificial intelligence. In the medical cell field, many researchers have carried out related technical exploration and implementation, improving recognition and diagnosis rates and accuracy in the medical field. For example, researchers in the cervical liquid-based cytology field have carried out research and exploration in detection, classification and segmentation; detection and classification of cells have also been explored in the fields of exfoliated urine cells, exfoliated pleural and ascitic fluid cells, and the like; and pathological diagnosis has been explored and implemented in the field of histopathology. The above techniques are typically built on large volumes of supervised, high-quality labeled data. In practice, it is very difficult to produce high-quality labels for large amounts of training data drawn from wide-ranging, broadly distributed sources, so researchers need to study small amounts of labeled data more intensively and improve recognition accuracy on that basis. In addition, in the medical cell field, the similarity between cells is high and the morphological differences between cells are small; information such as cell size, nuclear staining intensity, nuclear chromatin texture and nuclear-to-cytoplasmic ratio are the features that artificial-intelligence cell classification methods must consider.
The feature pyramid network (FPN) fuses shallow detail features and deep semantic features in a top-down manner, which helps the model learn multi-scale features of a target and addresses the problem of varying cell sizes in the cell classification field. The core idea of contrastive learning is to compare positive and negative samples in a feature space and learn feature representations of the samples. Contrastive learning does not learn a signal from a single data sample at a time, but learns by comparing different samples, i.e., maximizing the consistency between different transformed views of the same image (e.g., cropping, flipping, color transformation) while minimizing the consistency between transformed views of different images. It can therefore cope with the small training sets and narrow data distributions caused by labeling difficulty in the cell classification field, and maximizes the utilization of labeled data.
Disclosure of Invention
In view of this, the embodiment of the application provides an application method in a cell classification process with multi-level feature fusion and adaptive contrast loss, which builds a cell classification network structure with multi-level feature fusion and applies self-supervised learning techniques to supervised cell classification tasks, so that the cell image classification process is optimized in subsequent applications.
The embodiment of the application also provides an application system in the cell classification process with multi-level feature fusion and adaptive contrast loss; the system uses a cell classification network model with multi-level feature fusion and applies self-supervised learning techniques to supervised cell classification tasks, so that the cell image classification process is optimized in subsequent applications.
The embodiment of the application is realized in the following way:
An application method in a cell classification process with multi-level feature fusion and adaptive contrast loss comprises the following steps:
constructing a cell classification network model of a multi-level feature fusion module based on a convolutional neural network;
in the training process, the training data are randomly divided into a plurality of batches of size B, and the training images X1 of each batch are image-enhanced to obtain enhanced images X2, wherein X1 is the set of training images in a batch and X2 is the set obtained by data enhancement of that set.
Image sequence splicing concat(X1, X2) is performed on the original cell image set X1 and the enhanced image set X2, and the spliced image sequence is input into the model as one batch for feature extraction. After the backbone extracts the features of the cell images, the multi-level feature fusion module produces the multi-scale fused features, which are input into the cell classification layer to obtain the classification features of the data;
and carrying out self-adaptive loss calculation and parameter iteration between classification features of the cell original image and the enhanced image based on the self-adaptive contrast loss to obtain a cell classification model based on multi-level feature fusion and the self-adaptive contrast loss.
When the method is applied, unlabeled cell image data are input into the cell classification model, classification and recognition are performed, and the classification result of the cell image is output.
The initialized cell classification model is a convolutional neural network (CNN), which performs convolution calculations on the image-transformed cell image data.
In the application method, the initialized cell classification model network structure comprises a backbone network, a multi-level feature fusion module and a classification layer, wherein:
the construction of the cell classification network model with the multi-level feature fusion module comprises the following steps:
the multi-level feature fusion module operates on three feature layers, where S8, S16 and S32 denote the feature maps downsampled 8-fold, 16-fold and 32-fold, respectively. Each feature layer fuses upper- and lower-level features simultaneously, integrating both top-level semantic feature information and bottom-level spatial feature information. Specifically:
in the S8 feature layer, S32 and S16 are respectively up-sampled and fused with S8, and then weighted and added with S8 to obtain abundant low-level features;
in the S16 feature layer, respectively carrying out up-sampling on the S32, down-sampling on the S8, fusing with the S16, and carrying out weighted addition on the S16 to obtain rich middle layer features;
and in the S32 feature layer, respectively downsampling S8 and S16, fusing with S32, and carrying out weighted addition with S32 to obtain rich high-level features.
And finally, the features obtained by the three branches are weighted and added.
Before application, the method further comprises: performing model training on the constructed cell classification model to obtain a trained cell classification model.
The adaptive loss function employed for the constructed cell classification model comprises cross entropy loss and adaptive contrast loss.
The model training process comprises the following steps:
inputting X1 and X2 into a feature extraction module in FIG. 1 to respectively obtain feature vectors F1 and F2., calculating cosine similarity of the feature vectors of the Xi1 and positive samples (the image Xi2 enhanced by the Xi 1) and all negative samples (all other images in the X1) for any one image Xi1 in the X1, and then calculating the adaptive contrast loss CL according to a formula.
CL = -\log\left[ \frac{(1-\mathrm{Sim}^{+})\, e^{\mathrm{Sim}^{+}}}{(1-\mathrm{Sim}^{+})\, e^{\mathrm{Sim}^{+}} + \sum_{i=1}^{N^{-}} \mathrm{Sim}_{i}^{-}\, e^{\mathrm{Sim}_{i}^{-}}} \right] (Formula 1)
In addition, F1 and F2 are input into the classifier in FIG. 1, and the cross entropy losses CEL1 and CEL2 of X1 and X2 are calculated. The adaptive loss function of the embodiment of the present application is composed of these three losses, namely:
Loss = CL + CEL1 + CEL2 (Formula 2)
Before application, the method further comprises: experimentally verifying the cell classification model and determining that it meets the verification requirements.
An application system in a cell classification process with multi-level feature fusion and adaptive contrast loss, the system comprising a training unit, a storage unit and a processing unit, wherein:
the training unit is used for training to obtain a classification neural network consisting of a backbone network, a multi-level feature fusion module and a cell classification unit;
the storage unit is used for storing the classification neural network obtained through training;
and the processing unit is used for, when a cell image is received, obtaining the classification neural network from the storage unit; the backbone network extracts the features of the cell image, the multi-level feature fusion module generates multi-scale features, and the multi-scale features are input into the cell classification unit for cell classification to obtain the cell classification branch features and the cell classification result of the cell image.
The training unit is further configured to perform the training to obtain the classification neural network, including: firstly, cell image samples are obtained, and the cell categories in each cell image sample are labeled (abnormal, normal or impurity) to obtain a training data set; the training data set is randomly divided into a plurality of batches of size B, the training images X1 of each batch are image-enhanced to obtain the enhanced images X2, the original cell image set X1 and the enhanced image set X2 are spliced as an image sequence concat(X1, X2), and the spliced image sequence is input into the model as one batch for feature extraction; after the backbone network in the classification neural network extracts the features of the cell images from the cell image samples, multi-scale features are generated by the multi-level feature fusion module and input into the cell classification unit to obtain the classification features of the original cell images and of the enhanced cell images, respectively; then, category loss calculation and contrast loss calculation are performed using the adaptive contrast loss, so as to train the cell classification model.
Drawings
FIG. 1 is a schematic diagram of a training process of a cell classification model according to an embodiment of the present application;
FIG. 2 is a flow chart of a network framework in a cell classification process of multi-level feature fusion and adaptive contrast loss provided in an embodiment of the present application;
FIG. 3 is a flowchart of a method for multi-level feature fusion module according to an embodiment of the present application;
FIG. 4 is a schematic diagram illustrating a process of processing unlabeled cell image data using a cell classification model according to an embodiment of the present disclosure;
fig. 5 is a schematic diagram of an application system in a cell classification process of multi-level feature fusion and adaptive contrast loss according to an embodiment of the present application.
Detailed Description
The following description of the technical solutions in the embodiments of the present application will be made clearly and completely with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims of this application and in the above-described figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be capable of operation in sequences other than those illustrated or described herein, for example. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those elements but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The technical scheme of the present application is described in detail below with specific examples. The following embodiments may be combined with each other, and some embodiments may not be repeated for the same or similar concepts or processes.
When a pathologist reads a slide, features such as the nuclear-to-cytoplasmic ratio, cytoplasmic and nuclear atypia, cell keratinization, nuclear elongation and nuclear roundness are all important factors for judging cytopathic changes. The nuclei of positive cells are deeply stained and irregular in shape and structure, with coarse, compact chromatin and a larger nuclear-to-cytoplasmic ratio, whereas the nuclei of negative cells are relatively smaller and lighter in color. To enhance the model's ability to extract sample feature information, the embodiment of the application designs a multi-level feature fusion network to strengthen the model's ability to fuse multi-level features.
Therefore, the embodiment of the application applies the multi-level feature fusion module; this structure integrates top-level semantic feature information and bottom-level spatial feature information within each feature layer while fusing upper- and lower-level features, which helps retain multi-scale features and enrich semantic features.
Before the cell classification model is actually applied, the training loss function of the initialized cell classification model is further adjusted; specifically, an adaptive contrast loss function is adopted. A common contrastive loss function is the NCE loss, which computes the loss of a given sample against all of its positive and negative samples. For the learning model, the smaller the loss against positive samples and the larger the loss against negative samples, the better. The adaptive contrast loss improves on the NCE loss by adjusting the loss weight according to the similarity of two samples, encouraging the model to extract more robust high-dimensional features.
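For orientation, the following is a minimal PyTorch sketch of the standard NCE-style contrastive loss described above; the temperature tau and the tensor shapes are illustrative assumptions, since the patent does not specify them.

import torch
import torch.nn.functional as F

def nce_loss(anchor, positive, negatives, tau=0.07):
    """Standard NCE-style contrastive loss: pull the anchor toward its
    positive view and push it away from all negatives (cosine similarity)."""
    sim_pos = F.cosine_similarity(anchor, positive, dim=-1) / tau      # (B,)
    sim_neg = F.cosine_similarity(
        anchor.unsqueeze(1), negatives, dim=-1) / tau                  # (B, N)
    logits = torch.cat([sim_pos.unsqueeze(1), sim_neg], dim=1)         # (B, 1+N)
    # The positive sits at index 0; NCE loss is cross entropy against it.
    labels = torch.zeros(anchor.size(0), dtype=torch.long, device=anchor.device)
    return F.cross_entropy(logits, labels)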
In this way, the cell classification model is learned through the network structure of multi-level feature fusion and the training strategy of the self-adaptive loss function, so that the cell image classification process is optimized.
Fig. 1 is a schematic diagram of a training process of a cell classification model according to an embodiment of the present application, and fig. 2 is a schematic diagram of the overall framework of the cell classification network structure.
the method comprises the following specific steps:
step 101, constructing a cell classification network model of a multi-level feature fusion module based on a convolutional neural network.
Step 102, in the training process, the training data is randomly divided into a plurality of batches of size B, and the training images X1 of each batch are image-enhanced to obtain the enhanced images X2, wherein X1 is the set of training images in a batch and X2 is the set obtained by data enhancement of that set.
In this step, the embodiment of the present application applies 5 data enhancement methods: image flipping, image rotation, color enhancement, Gaussian noise addition and Gaussian blur.
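A sketch of these five enhancement operations using torchvision is given below; all parameter values (flip probability, rotation angle, jitter strengths, noise scale, blur kernel) are illustrative assumptions, since the patent does not specify them.

import torch
import torchvision.transforms as T

# The five enhancement operations named above, applied in an arbitrary
# (assumed) order on a PIL image; ToTensor converts to a float tensor
# so that Gaussian noise can be added elementwise.
augment = T.Compose([
    T.RandomHorizontalFlip(p=0.5),                       # image flipping
    T.RandomRotation(degrees=15),                        # image rotation
    T.ColorJitter(brightness=0.2, contrast=0.2,
                  saturation=0.2),                       # color enhancement
    T.ToTensor(),
    T.GaussianBlur(kernel_size=3),                       # Gaussian blur
    T.Lambda(lambda x: x + 0.01 * torch.randn_like(x)),  # Gaussian noise
])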
Step 103, image sequence splicing is performed on the original cell image set X1 and the enhanced image set X2, and the spliced image sequence is input into the model as one batch for feature extraction. The backbone extracts the features of the cell images, the multi-level feature fusion module obtains the multi-scale fused features, and these are input into the cell classification layer to obtain the cell classification features.
In this step, the labels of the input labeled cell images comprise 3 categories in total: abnormal, normal and impurity.
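By way of illustration, a minimal PyTorch sketch of step 103 follows, assuming a model object that wraps the backbone, the multi-level feature fusion module and the classification layer and returns both classification logits and feature vectors; this interface is an assumption, not part of the disclosure.

import torch

# x1 is the original batch (B, 3, H, W) and x2 its enhanced counterpart;
# the two sets are concatenated and pushed through the model in a single
# forward pass, then split back into per-view outputs.
def forward_batch(model, x1, x2):
    batch = torch.cat([x1, x2], dim=0)   # concat(X1, X2) -> (2B, 3, H, W)
    logits, features = model(batch)      # backbone -> fusion -> classifier
    l1, l2 = logits.chunk(2)             # logits for X1 and X2
    f1, f2 = features.chunk(2)           # feature vectors F1 and F2
    return (l1, f1), (l2, f2)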
Step 104, adaptive loss calculation and parameter iteration are performed between the classification features of the original and enhanced cell images based on the adaptive contrast loss, to obtain a cell classification model based on multi-level feature fusion and adaptive contrast loss.
Step 105, in application, unlabeled cell image data are input into the cell classification model, classification and recognition are performed, and the classification result of the cell image is output.
In this step, after calculation and analysis by the cell classification model, the unlabeled cell image data is assigned one of 3 categories: abnormal, normal or impurity.
In the method, the cell classification model network structure comprises a backbone network, a multi-level feature fusion module and a classification layer.
The multi-level feature fusion module reflects the fact that, in current cell interpretation, characteristics such as the nuclear-to-cytoplasmic ratio, the cytoplasm and the cell nucleus must be considered together. These include not only local features (such as nuclear features) but also global features over a larger area (such as the nuclear-to-cytoplasmic ratio), so a model that can extract local and global features simultaneously is more conducive to identifying abnormal cells. In addition, models typically perform downsampling operations when extracting features, so very few features of the smaller nuclei in exfoliated urine cells remain in the deep feature maps, or even none at all, which is unfavorable for recognizing small objects.
Specifically, as shown in fig. 3, the multi-level feature fusion module operates on three feature layers, where S8, S16 and S32 denote the feature maps downsampled 8-fold, 16-fold and 32-fold, respectively. Each feature layer fuses upper- and lower-level features simultaneously, integrating both top-level semantic feature information and bottom-level spatial feature information. For example, in the S8 feature layer, S32 and S16 are respectively up-sampled and fused with S8, then weighted and added with S8 to obtain rich low-level features; in the S16 feature layer, S32 is up-sampled and S8 is down-sampled, fused with S16, then weighted and added with S16 to obtain rich middle-level features; in the S32 feature layer, S8 and S16 are respectively down-sampled and fused with S32, then weighted and added with S32 to obtain rich high-level features. Finally, the features obtained by the three branches are weighted and added.
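The following PyTorch sketch illustrates one possible reading of this module; the 1x1 channel projections, bilinear resizing and learnable scalar fusion weights are assumptions, as the patent does not specify them.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiLevelFeatureFusion(nn.Module):
    """A sketch of the three-branch fusion described above; s8/s16/s32 are
    feature maps at 1/8, 1/16 and 1/32 resolution."""
    def __init__(self, c8, c16, c32, out_ch):
        super().__init__()
        # Project every level to a common channel width.
        self.p8 = nn.Conv2d(c8, out_ch, 1)
        self.p16 = nn.Conv2d(c16, out_ch, 1)
        self.p32 = nn.Conv2d(c32, out_ch, 1)
        # One learnable scalar per branch for the final weighted addition.
        self.w = nn.Parameter(torch.ones(3))

    def _resize(self, x, ref):
        # Bilinear interpolation handles both up- and down-sampling here.
        return F.interpolate(x, size=ref.shape[-2:], mode="bilinear",
                             align_corners=False)

    def forward(self, s8, s16, s32):
        s8, s16, s32 = self.p8(s8), self.p16(s16), self.p32(s32)
        # Low-level branch: bring S16 and S32 up to S8 and add.
        f8 = s8 + self._resize(s16, s8) + self._resize(s32, s8)
        # Mid-level branch: bring S32 up and S8 down to S16 and add.
        f16 = s16 + self._resize(s32, s16) + self._resize(s8, s16)
        # High-level branch: bring S8 and S16 down to S32 and add.
        f32 = s32 + self._resize(s8, s32) + self._resize(s16, s32)
        # Final weighted addition of the three branches at a common scale.
        return (self.w[0] * f8
                + self.w[1] * self._resize(f16, f8)
                + self.w[2] * self._resize(f32, f8))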
Because the multi-level feature fusion module fuses upper- and lower-level features within each feature layer, local and global features are extracted simultaneously, which helps the model learn the morphological features that distinguish positive cells from negative cells and improves the sensitivity of cell recognition.
The adaptive contrast loss part, shown in fig. 1, takes effect in the training phase.
Specifically, the embodiment of the application adds contrastive learning to the supervised training of exfoliated urine cells and proposes an adaptive contrast loss function to encourage the model to learn more robust high-dimensional features. The main idea is to adaptively adjust the loss weights, giving larger losses to inaccurately classified samples and smaller losses to accurately classified samples. The loss function is calculated as follows:
CL = -\log\left[ \frac{(1-\mathrm{Sim}^{+})\, e^{\mathrm{Sim}^{+}}}{(1-\mathrm{Sim}^{+})\, e^{\mathrm{Sim}^{+}} + \sum_{i=1}^{N^{-}} \mathrm{Sim}_{i}^{-}\, e^{\mathrm{Sim}_{i}^{-}}} \right] (Formula 1)
wherein Sim⁺ denotes the similarity between the input image and its positive sample, Simᵢ⁻ denotes the similarity between the input image and the i-th negative sample, and N⁻ denotes the number of negative samples. For a given sample, the smaller the loss computed against the positive sample and the larger the loss computed against the negative samples, the better. The adaptive contrast loss function provided by the embodiment of the application automatically adjusts the weights through the similarity between cells: for the positive sample, the higher the similarity, the smaller the loss weight, i.e., the larger the feature similarity Sim⁺, the smaller the weight value 1 − Sim⁺; for negative samples, the higher the similarity, the larger the loss weight, i.e., the larger Simᵢ⁻, the larger the weight value Simᵢ⁻. Training thus pulls sample features closer to the positive samples and away from the negative samples, and the loss function dynamically updates the loss weights, which improves accuracy on hard-to-classify samples.
Specifically, the training procedure based on the adaptive contrast loss, performed before application, is as follows.
The training data are randomly divided into N batches of size B.
The training images X1 of each batch are image-enhanced to obtain the enhanced images X2.
X1 and X2 are spliced to form one training batch and input into the cell classification network to obtain the feature vectors F1 and F2 corresponding to X1 and X2. For any image X1i in X1, the cosine similarity between the feature vector of X1i and those of its positive sample (the enhanced image X2i of X1i) and of all negative samples (all other images in X1) is calculated, and the adaptive contrast loss CL is then calculated according to Formula 1.
F1 and F2 are input into the classifier in FIG. 1, and the cross entropy losses CEL1 and CEL2 of X1 and X2 are calculated. The loss function of the algorithm is composed of the above three losses, namely:
Loss = CL + CEL1 + CEL2 (Formula 2)
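A sketch of one training step under Formula 2 follows, reusing the forward_batch and adaptive_contrast_loss sketches above; the optimizer handling is illustrative.

import torch.nn.functional as F

def train_step(model, optimizer, x1, x2, labels):
    (l1, f1), (l2, f2) = forward_batch(model, x1, x2)
    cl = adaptive_contrast_loss(f1, f2)   # contrast loss CL between views
    cel1 = F.cross_entropy(l1, labels)    # CEL1 on the original images X1
    cel2 = F.cross_entropy(l2, labels)    # CEL2 on the enhanced images X2
    loss = cl + cel1 + cel2               # Loss = CL + CEL1 + CEL2 (Formula 2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()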
Before application, the method further comprises: experimentally verifying the cell classification model and determining that it meets the verification requirements.
Fig. 4 is a schematic diagram of a process of processing unlabeled cell image data by using a cell classification model according to an embodiment of the present application, where specific steps include:
step 201, inputting unlabeled cell images into a cell classification model.
Step 202, after the backbone network extracts the features of the cell image, multi-scale features are generated by the multi-level feature fusion module and input into the cell classification unit for cell classification, to obtain the cell classification branch features.
Step 203, the cell classification result of the cell image is obtained through calculation and analysis by the cell classification model. The output is one of abnormal, normal and impurity.
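A minimal inference sketch for steps 201-203 follows, under the same assumed model interface as above; the ordering of the CLASSES list is an assumption.

import torch

# The three categories named in the patent; index order is assumed.
CLASSES = ["abnormal", "normal", "impurity"]

@torch.no_grad()
def classify(model, image):
    """image: (3, H, W) tensor of a single unlabeled cell image."""
    model.eval()
    logits, _ = model(image.unsqueeze(0))  # backbone -> fusion -> classifier
    return CLASSES[logits.argmax(dim=1).item()]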
Fig. 5 is a schematic structural diagram of an application system in a cell classification process of multi-level feature fusion and adaptive contrast loss according to an embodiment of the present application, including a training unit, a storage unit and a processing unit, wherein,
the training unit is used for training to obtain a classified neural network, and the classified neural network consists of a backbone network backbone, a multi-level feature fusion module and a cell classification unit;
the storage unit is used for storing the classified neural network obtained through training;
and the processing unit is used for acquiring the classified neural network from the storage unit when the cell image is received, extracting the characteristics of the cell image by the backbone network back, generating multi-scale characteristics by the multi-level characteristic fusion module, inputting the multi-scale characteristics into the cell classification unit for cell classification to obtain cell classification branch characteristics, and obtaining a cell classification result of the cell image.
The training unit is further used for adaptive contrast loss: before the processing unit is applied, it adjusts the parameters of the constructed cell classification model by means of adaptive contrast loss learning.
The specific method is as follows: firstly, cell image samples are obtained, and the cell categories in each cell image sample are labeled (abnormal, normal or impurity) to obtain a training data set; the training data set is randomly divided into a plurality of batches of size B, the training images X1 of each batch are image-enhanced to obtain the enhanced images X2, the original cell image set X1 and the enhanced image set X2 are spliced into an image sequence, and the spliced image sequence is input into the model as one batch for feature extraction; after the backbone network in the classification neural network extracts the features of the cell images from the cell image samples, multi-scale features are generated by the multi-level feature fusion module and input into the cell classification unit to obtain the classification features of the original cell images and of the enhanced cell images, respectively; then, category loss calculation and contrast loss calculation are performed using the adaptive contrast loss, so as to train the cell classification model.
According to the method and system provided by the embodiment of the application, each feature layer fuses upper- and lower-level features through the multi-level feature fusion module, so that multi-scale features are retained, semantic features are enriched, and positive cells with different morphological features are identified. The adaptive contrast loss function exploits the advantages of unsupervised learning to address the difficulty of acquiring labeled data samples; it can also dynamically update the loss weights, encouraging the model to learn more robust high-dimensional features and helping improve classification accuracy.
Those skilled in the art will appreciate that the features recited in the various embodiments and/or the claims of the present disclosure may be combined in various ways, even if such combinations are not explicitly recited in the present application. In particular, the features recited in the various embodiments and/or claims of the present application may be combined in various ways without departing from the spirit and teachings of the application, and all such combinations fall within the scope of the disclosure.
The foregoing description of the embodiments is provided to facilitate understanding of the method and core idea of the present application, and is not intended to limit the present application. It will be apparent to those skilled in the art that variations can be made in the present embodiments and in the scope of the application in light of the spirit and principles of this application, and any modifications, equivalents, improvements, etc. are intended to be included within the scope of this application.

Claims (10)

1. An application method in a cell classification process, comprising:
a. constructing a cell classification network model of a multi-level feature fusion module based on a convolutional neural network;
b. in the training process, the training data are randomly divided into a plurality of batches of size B, and the training images X1 of each batch are image-enhanced to obtain enhanced images X2, wherein X1 is the set of training images in a batch and X2 is the set obtained by data enhancement of that set;
c. image sequence splicing concat(X1, X2) is performed on the original cell image set X1 and the enhanced image set X2, and the spliced image sequence is input into the model as one batch for feature extraction, obtaining the image feature information of X1 and X2;
d. the training batch image set is input into the cell classification model, the backbone extracts the features of the cell images, the multi-level feature fusion module obtains the multi-scale fused features, and these are input into the cell classification layer to obtain the classification features of the data;
e. adaptive loss calculation and parameter iteration are performed between the classification features of the original cell image set X1 and the enhanced image set X2 based on the adaptive contrast loss, to obtain a cell classification model based on multi-level feature fusion and adaptive contrast loss;
f. in application, unlabeled cell image data are input into the cell classification model, classification and recognition are performed, and the classification result of the cell image is output.
2. The method of claim 1, wherein the initialized cell classification model is a convolutional neural network (CNN), which performs convolution calculations on the image-transformed cell image data.
3. The method of claim 1, wherein the initialized cell classification model network structure comprises a backbone network, a multi-level feature fusion module, and a classification layer.
4. The method of claim 3, wherein a cell classification network model of a multi-level feature fusion module is built, the multi-level feature fusion module is performed on three level feature layers, and S8, S16 and S32 represent feature maps of 8 times, 16 times and 32 times downsampling, respectively; each feature layer integrates upper and lower level features at the same time, integrates the top semantic feature information and the bottom space feature information, and specifically comprises the following steps:
a. in the S8 feature layer, S32 and S16 are respectively up-sampled and fused with S8, and then weighted and added with S8 to obtain abundant low-level features;
b. in the S16 feature layer, respectively carrying out up-sampling on the S32, down-sampling on the S8, fusing with the S16, and carrying out weighted addition on the S16 to obtain rich middle layer features;
c. in the S32 feature layer, respectively downsampling S8 and S16, fusing with S32, and weighting and adding with S32 to obtain rich high-level features;
d. finally, the features obtained by the three branches are weighted and added.
5. The method of claim 1, wherein the method further comprises: and performing model training on the constructed cell classification model to obtain a trained cell classification model.
6. The method of claim 5, wherein the adaptive loss function employed on the constructed cell classification model comprises cross entropy loss and adaptive contrast loss, and the model training process comprises:
a. following step c of claim 1, X1 and X2 are input into the cell classification network to obtain the feature vectors F1 and F2, respectively; for any image X1i in X1, the cosine similarity between the feature vector of X1i and those of its positive sample (the enhanced image X2i of X1i) and of all negative samples (all other images in X1) is calculated, and the adaptive contrast loss CL is then calculated according to Formula 1;
CL = -\log\left[ \frac{(1-\mathrm{Sim}^{+})\, e^{\mathrm{Sim}^{+}}}{(1-\mathrm{Sim}^{+})\, e^{\mathrm{Sim}^{+}} + \sum_{i=1}^{N^{-}} \mathrm{Sim}_{i}^{-}\, e^{\mathrm{Sim}_{i}^{-}}} \right] (Formula 1)
b. inputting F1 and F2 into a cell classification model, and calculating cross entropy losses CEL1 and CEL2 of X1 and X2;
c. the adaptive loss function is composed of the above three losses, namely:
Loss = CL + CEL1 + CEL2 (Formula 2).
7. The method of claim 1 or 5, wherein the method further comprises: and carrying out experimental verification on the cell classification model, and determining that the cell classification model meets the verification requirement.
8. An application system in a cell classification process with multi-level feature fusion and adaptive contrast loss, the system comprising a training unit, a storage unit and a processing unit, wherein,
the training unit is used for training to obtain a classification neural network consisting of a backbone network, a multi-level feature fusion module and a cell classification unit;
the storage unit is used for storing the classification neural network obtained through training;
and the processing unit is used for, when a cell image is received, obtaining the classification neural network from the storage unit; the backbone network extracts the features of the cell image, the multi-level feature fusion module generates multi-scale features, and the multi-scale features are input into the cell classification unit for cell classification to obtain the cell classification branch features and the cell classification result of the cell image.
9. The system of claim 8, wherein the training to obtain the classification neural network comprises: firstly, obtaining cell image samples, and labeling the cell categories (abnormal, normal or impurity) in each cell image sample to obtain a training data set; randomly dividing the training data set into a plurality of batches of size B, performing image enhancement on the training images X1 of each batch to obtain enhanced images X2, splicing the original cell image set X1 and the enhanced image set X2 into an image sequence, and inputting the spliced image sequence into the model as one batch for feature extraction; after the backbone network in the classification neural network extracts the features of the cell images from the cell image samples, generating multi-scale features through the multi-level feature fusion module and inputting them into the cell classification unit to obtain the classification features of the original cell images and of the enhanced cell images, respectively; and then performing category loss calculation and contrast loss calculation using the adaptive contrast loss, so as to train the cell classification model.
10. The system of claim 8, wherein the cell image is a liquid-based slide-based cell image.
CN202310272806.5A 2023-01-19 2023-03-20 Application method and system in cell classification process Pending CN116311240A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202310093367 2023-01-19
CN2023100933671 2023-01-19

Publications (1)

Publication Number Publication Date
CN116311240A true CN116311240A (en) 2023-06-23

Family

ID=86801045

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310272806.5A Pending CN116311240A (en) 2023-01-19 2023-03-20 Application method and system in cell classification process

Country Status (1)

Country Link
CN (1) CN116311240A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117079055A (en) * 2023-09-04 2023-11-17 成都川油瑞飞科技有限责任公司 Shale gas well data acquisition method and system


Similar Documents

Publication Publication Date Title
CN111931684B (en) Weak and small target detection method based on video satellite data identification features
CN110163234B (en) Model training method and device and storage medium
CN111259786B (en) Pedestrian re-identification method based on synchronous enhancement of appearance and motion information of video
CN112131978B (en) Video classification method and device, electronic equipment and storage medium
CN113033249A (en) Character recognition method, device, terminal and computer storage medium thereof
CN110046550B (en) Pedestrian attribute identification system and method based on multilayer feature learning
CN114220124A (en) Near-infrared-visible light cross-modal double-flow pedestrian re-identification method and system
CN110532920A (en) Smallest number data set face identification method based on FaceNet method
CN115272330B (en) Defect detection method, system and related equipment based on battery surface image
CN112966691A (en) Multi-scale text detection method and device based on semantic segmentation and electronic equipment
CN114283350B (en) Visual model training and video processing method, device, equipment and storage medium
CN110826609B (en) Double-current feature fusion image identification method based on reinforcement learning
CN107622280B (en) Modularized processing mode image saliency detection method based on scene classification
CN113762138A (en) Method and device for identifying forged face picture, computer equipment and storage medium
CN114693624A (en) Image detection method, device and equipment and readable storage medium
CN116740362B (en) Attention-based lightweight asymmetric scene semantic segmentation method and system
CN116311240A (en) Application method and system in cell classification process
CN114639101A (en) Emulsion droplet identification system, method, computer equipment and storage medium
CN116342624A (en) Brain tumor image segmentation method combining feature fusion and attention mechanism
CN114492581A (en) Method for classifying small sample pictures based on transfer learning and attention mechanism element learning application
CN111582057B (en) Face verification method based on local receptive field
CN113837015A (en) Face detection method and system based on feature pyramid
CN113743497B (en) Fine granularity identification method and system based on attention mechanism and multi-scale features
CN115439791A (en) Cross-domain video action recognition method, device, equipment and computer-readable storage medium
Pei et al. FGO-Net: Feature and Gaussian Optimization Network for visual saliency prediction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination