CN117649660A - Global information fusion-based cell division equilibrium degree evaluation method and terminal



Publication number
CN117649660A
Authority
CN
China
Prior art keywords
cell
global
area
network
information fusion
Prior art date
Legal status
Granted
Application number
CN202410116623.9A
Other languages
Chinese (zh)
Other versions
CN117649660B (en)
Inventor
周龙阳
谭威
陈长胜
彭松林
云新
熊祥
Current Assignee
Wuhan Mutual United Technology Co ltd
Original Assignee
Wuhan Mutual United Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Wuhan Mutual United Technology Co ltd filed Critical Wuhan Mutual United Technology Co ltd
Priority to CN202410116623.9A
Publication of CN117649660A
Application granted
Publication of CN117649660B
Active legal status
Anticipated expiration legal status


Landscapes

  • Investigating Or Analysing Biological Materials (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a cell division balance evaluation method and a terminal based on global information fusion. The method comprises the following steps: extracting global features within an embryo ROI region; performing target detection on the embryo ROI region to detect the regions where single cells are located; extracting cell features from the regions where all single cells are located; fusing the global features with all the cell features; and expanding the fused features along the channel dimension and inputting them into a classification network to obtain a cell balance confidence, the cells being judged balanced when the confidence exceeds a set threshold. Features are extracted from the region where each single cell is located and fused with the features of the whole tissue region; during training, single-cell position annotations and cell balance annotations supervise the training process, so that the feature information contains embryo balance information and the goal of cell balance detection is achieved with a better detection effect.

Description

Global information fusion-based cell division equilibrium degree evaluation method and terminal
Technical Field
The invention relates to the technical field of cell detection, in particular to a cell division equilibrium evaluation method and a terminal based on global information fusion.
Background
Cell balance is an important index for evaluating the quality of blastomeres. Cell area reflects cell size, so comparing the areas of the cells largely captures differences in cell size and thereby enables evaluation of cell balance. An incubator not only provides a stable in-vitro culture environment for cells, but can also periodically and continuously acquire images of the whole cell division process. Combined with the photographing times recorded by the incubator, a cytologist must judge each cell image from personal evaluation experience to obtain the division balance of the cells, which greatly increases the cytologist's workload; a computer-vision method that helps the cytologist quickly measure the balance during cell division therefore has very important research significance. At present there are image segmentation methods that calculate the areas of cells to assess cell division balance, but intelligent prediction of cell division balance still faces the following problems in practical applications:
(1) Image segmentation methods with semantic information can segment cells to a certain extent and obtain the size of each cell from pixel-level information. However, because the environment in which cells grow is confined during cell division, multiple cells may stack together after two or more divisions, causing overlap. Image segmentation can classify each pixel only once and therefore cannot handle this overlapping problem.
(2) Ellipse fitting techniques fit most cells well, but squeezing between some cells can make their shapes irregular. In this case, fitting the cell area with an ellipse produces a large error, which is unfavorable for the subsequent assessment of cell balance.
Disclosure of Invention
The invention provides a cell division balance evaluation method and a terminal based on global information fusion, aiming to solve the technical problem that existing feature extraction methods and ellipse fitting techniques produce large detection errors for cell balance.
In order to solve the technical problems, the invention provides a cell division balance evaluation method based on global information fusion, which comprises the following steps:
step S1: extracting global features in an embryo ROI area;
step S2: performing target detection on the embryo ROI area and detecting the areas where single cells are located;
step S3: extracting cell characteristics of the areas where all single cells are located;
step S4: fusing the global features with all the cell features;
step S5: expanding the fused features along the channel dimension and inputting them into a classification network to obtain a cell balance confidence; the cells are judged to be balanced when the confidence exceeds a set threshold.
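For orientation only, steps S1 to S5 can be read as the following minimal end-to-end sketch in Python/PyTorch; every module name (backbone, detector, crop_fn, fuse_fn, classifier) and the 0.5 threshold are illustrative assumptions rather than details fixed by the invention.

```python
import torch

def evaluate_cell_balance(roi_image, backbone, detector, crop_fn, fuse_fn,
                          classifier, threshold=0.5):
    """Illustrative sketch of steps S1-S5; all callables are assumed interfaces."""
    f_global = backbone(roi_image)                       # S1: global features of the embryo ROI
    cell_boxes = detector(f_global)                      # S2: regions where single cells are located
    f_cells = [crop_fn(f_global, box) for box in cell_boxes]   # S3: per-cell features
    f_fused = fuse_fn(f_global, f_cells, cell_boxes)     # S4: fuse global and cell features
    logits = classifier(torch.flatten(f_fused, start_dim=1))   # S5: expand along the channel dimension
    confidence = torch.sigmoid(logits)
    return bool(confidence > threshold)                  # balanced when the confidence exceeds the threshold
```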
Preferably, the embryo ROI area and single cell area are extracted through a target detection network.
Preferably, the extraction method of the target detection network comprises the following steps:
step S111: extracting features through a feature extraction network;
step S112: judging whether the set area is matched with the cell tissue or not through an area extraction network, and returning an offset value to a preset area to obtain an accurate position;
step S113: collecting global features output by a feature extraction network and regional information output by the regional extraction network through a regional pooling network;
step S114: and carrying out regression adjustment and constraint on the characteristics output by the regional pooling network to obtain the target.
Preferably, the feature extraction in step S1 uses ResNet50 as the feature extraction network.
Preferably, step S1 extracts global features through 3×3 convolution layers and pooling layers, and a residual connection layer is introduced to increase the depth of the network.
Preferably, in step S2, when performing target detection of a single-cell region, the target detection is performed through nine rectangular frames with size ratios of 1:1, 1:2, 2:1, 2:2, 2:4, 4:2, 3:3, 3:6 and 6:3, and the region where the single cell is located is obtained through a linear model and a regression model.
Preferably, the method of step S4 comprises: downsampling the global feature F_global to obtain a new downsampled global feature F_down ∈ R^(w_d × h_d × c), where w_d is the width and h_d is the height of the feature map after F_global is downsampled; scaling the cell features by the same magnification as the global feature and marking their corresponding positions in F_down; and multiplying the cell features by the mapping vector V and then adding them to F_down to obtain the fused feature F_fused.
Preferably, the fusion expression is:

F_fused = F_down + Σ_{i=1}^{K} V · f_i

where F_down denotes the downsampled global feature to which the cell features correspond, K denotes the number of detected single cells, V denotes the mapping vector of the global feature F_global, and f_i denotes the feature at the position of the i-th cell.
The invention also provides a terminal, which comprises a memory and a processor;
the memory is used for storing a computer program and a cell division balance evaluation method based on global information fusion;
the processor is used for executing the computer program and the cell division balance evaluation method based on global information fusion so as to realize the method.
The beneficial effects of the invention at least comprise:
1) The invention uses a neural network model to effectively extract high-level cell features from the image, replacing the complex pipeline of traditional methods: the global features of the cell tissue and the features of single cells are extracted, and the cell balance is judged directly by an end-to-end neural network;
2) The invention extracts the overall features of the embryo region, detects the single-cell regions with a target detection network framework and fuses them into the overall features, and trains a ResNet feature extractor containing cell balance information under the constraint of the target regions and the labels, solving the problem that squeezed cells cannot be fitted by traditional fitting schemes and offering universality for all abnormal cells;
3) As an additional technical feature, the extraction network with nine rectangular extraction frames provided by the invention can effectively distinguish overlapped cells: the target detection frames are allowed to overlap, instead of each single pixel being assigned to a fixed cell region, which resolves the pixel assignment problem in overlapping regions that a semantic segmentation network cannot divide correctly and gives higher detection accuracy for single-cell regions.
Drawings
FIG. 1 is a schematic flow chart of a method according to an embodiment of the invention;
FIG. 2 is a flow chart of a method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a cell balance network according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a target extraction network according to an embodiment of the present invention;
fig. 5 is a schematic diagram of nine detection frames of a target extraction network according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of the detection results of the area where a single cell is located according to an embodiment of the present invention;
fig. 7 is a schematic diagram of a feature fusion network according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is evident that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by a person skilled in the art without any inventive effort, are intended to be within the scope of the present invention, based on the embodiments of the present invention.
During cell division, parts of cells are covered because of overlap, so the cell area cannot be judged accurately by image detection alone. Existing edge detection methods can fit most cells, but because of heavy intrusion between cells during division, squeezing can produce irregular shapes such as heart shapes. How to find other effective information to evaluate cell size is the key technical problem to be solved by the invention.
Therefore, as shown in fig. 1 to 3, the embodiment of the invention provides a cell division balance evaluation method based on global information fusion, which comprises the following steps:
step S1: global features within the embryo ROI area are extracted.
Illustratively, for embryo ROI region extraction of the cell tissue, high-level cell features are extracted by artificial intelligence techniques, the positions of cell clusters in the image are detected, and the microscope image is preprocessed. Because the cell balance evaluation problem only concerns the embryo ROI region, noise outside the embryo region can be effectively eliminated by a deep-learning target detection algorithm, realizing the preprocessing of the captured images. Let the number of captured cell images be n, with I_i denoting the cell division image corresponding to the i-th frame (i = 1, ..., n). Specifically, the method comprises the following steps:
1) Cell images are collected during the whole cell division process and the position of the cell center in each image is annotated; the annotated images are divided into a training set, a validation set and a test set, and the cell images in the training set are preprocessed by flipping, rotation, translation and the like to expand the data set;
2) Training the model by using the marked training set;
3) The hyperparameters of the model are adjusted according to the change of accuracy on the validation set, and finally the network model that performs best on the test set is saved;
4) The saved model is used to detect the captured images I_1, ..., I_n and output the predicted cell-center positions.
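Purely as an illustration of step 4), the saved detector could be applied to the captured images I_1, ..., I_n as in the sketch below; the torchvision-style detector interface, the score threshold and the file handling are assumptions.

```python
import torch
from PIL import Image
import torchvision.transforms.functional as TF

def crop_embryo_rois(image_paths, detector, score_thresh=0.7):
    """Run the saved detection model on each captured frame and keep the embryo ROI crop."""
    detector.eval()
    rois = []
    with torch.no_grad():
        for path in image_paths:
            img = TF.to_tensor(Image.open(path).convert("RGB"))
            pred = detector([img])[0]                # torchvision detectors return boxes/labels/scores
            keep = pred["scores"] > score_thresh
            if keep.any():
                x1, y1, x2, y2 = pred["boxes"][keep][0].round().int().tolist()
                rois.append(img[:, y1:y2, x1:x2])    # cropping removes noise outside the embryo region
    return rois
```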
The embodiment of the invention adopts a general target detection network as the network framework, such as YOLO or Faster R-CNN, to detect the embryo region in the image. The following description takes Faster R-CNN as an example; the network structure is shown in FIG. 4.
Specifically, the cell image is input into the Faster R-CNN network, the embryo region is selected, and interference from fragments, granular cells and other impurities in the image is removed. Faster R-CNN mainly comprises four network parts: a feature extraction network, a region extraction network, a region pooling network and a classification network:
1) Feature extraction network: ResNet50 is used as the feature extraction network. ResNet effectively trains deeper neural networks by introducing residual blocks. The network mainly comprises 3×3 convolution layers and pooling layers for extracting features of the input image, and residual connection layers for increasing the depth of the network and extracting more semantic information;
2) Region extraction network: the region extraction network mainly judges whether a preset region matches the embryo region and regresses an offset value for the preset region to obtain an accurate position;
3) Region pooling network: the region pooling network collects the global features output by the feature extraction network and the region information output by the region extraction network, and new features are obtained after the information is integrated;
4) Classification network: since tissue detection is a single-class detection problem, the features are finally input into the classification network, a further regression adjustment and constraint is performed, and the detection result is output.
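Since tissue detection is a single-class problem, a detector of this kind can be instantiated directly from torchvision, as sketched below; the anchor settings and training details are left at library defaults and are not those disclosed by the patent.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# one foreground class (embryo tissue) plus background
model = fasterrcnn_resnet50_fpn(weights=None, num_classes=2)

images = [torch.rand(3, 512, 512)]
targets = [{"boxes": torch.tensor([[100., 100., 400., 400.]]),  # illustrative embryo box
            "labels": torch.tensor([1])}]

model.train()
losses = model(images, targets)   # dict with RPN and detection-head losses
print(sum(losses.values()))
```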
Illustratively, in the embodiment of the invention, when global feature extraction is performed on the embryo ROI region, ResNet50 is used as the feature extraction network: 3×3 convolution layers and pooling layers are mainly used to extract features from the input image; meanwhile, in order to increase the depth of the network and extract more semantic information, the residual block structure is introduced so that the network learns identity mappings more easily, which effectively trains a deeper neural network. Finally the global feature F_global ∈ R^(w × h × c) is obtained, where w is the width resolution and h is the height resolution of the picture after it passes through the feature extraction network, and c is the number of channels, set to 128 in the invention.
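A minimal sketch of such a feature extractor is given below; the truncation point of ResNet50 and the 1×1 projection used to reach c = 128 channels are assumptions introduced only so the output matches the stated shape F_global ∈ R^(w × h × 128).

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class GlobalFeatureExtractor(nn.Module):
    """ResNet50 trunk producing a 128-channel global feature map for the embryo ROI."""
    def __init__(self, out_channels=128):
        super().__init__()
        trunk = resnet50(weights=None)
        self.body = nn.Sequential(*list(trunk.children())[:-2])      # drop avgpool and fc head
        self.project = nn.Conv2d(2048, out_channels, kernel_size=1)  # assumed channel reduction

    def forward(self, x):                  # x: (N, 3, H, W) embryo ROI image
        return self.project(self.body(x))  # F_global: (N, 128, h, w)

f_global = GlobalFeatureExtractor()(torch.randn(1, 3, 512, 512))
print(f_global.shape)                      # torch.Size([1, 128, 16, 16])
```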
Step S2: and (3) carrying out target detection on the embryo ROI area, and detecting the area of the single cell.
In the embodiment of the invention, the above-mentioned region extraction network is adopted to obtain the position information and offset information of the cell regions, the image of each region is input into the feature extractor, and K cell features f_1, f_2, ..., f_K are extracted.
Specifically, in the embodiment of the present invention, in order to effectively extract the characteristics of the overlapping region between cells, 9 rectangular frames are introduced for target detection, and a schematic diagram of the 9 detection frames is shown in fig. 5, which specifically includes nine rectangular frames with pixel sizes of 1:1, 1:2, 2:1, 2:2, 2:4, 4:2, 3:3, 3:6, and 6:3.
The cell area is detected by nine rectangular detection frames, and whether a single cell is in the target area is judged, wherein the rectangular frames contain three attributes, namely, center point coordinates, width and height.
In specific operation, sliding detection is performed over the global feature F_global at each super-pixel position in the image, and it is judged whether the position belongs to one of the 9 rectangular frames, achieving the target detection effect. Because some detection results may be offset, the rectangular frames are fine-tuned: in the embodiment of the invention a linear model is used to translate and scale the three parameters of a rectangular frame, and finally a regression model is used for regression prediction.
In the target task of cell balance detection, only the parameters of the detection frames are needed rather than a judgment for every pixel in the tissue region, so the nine-rectangular-frame detection mode provided by the embodiment of the invention can be adopted. The detection frames are allowed to overlap, so the region where a single cell is located can be extracted more accurately, the situation that an overlapping region is missed is avoided, and the features of overlapping regions are extracted effectively; the extraction result is shown in fig. 6.
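The nine rectangular frames can be laid out over the feature map as in the following sketch; only the nine width:height ratios come from the patent, while the stride and the 16-pixel base unit are assumed values.

```python
import torch

# the nine width:height size ratios named in the embodiment
RATIOS = [(1, 1), (1, 2), (2, 1), (2, 2), (2, 4), (4, 2), (3, 3), (3, 6), (6, 3)]

def make_anchors(feat_w, feat_h, stride=16, unit=16):
    """Place the nine rectangular frames at every super-pixel position (sliding detection)."""
    anchors = []
    for iy in range(feat_h):
        for ix in range(feat_w):
            cx, cy = (ix + 0.5) * stride, (iy + 0.5) * stride   # centre-point attribute
            for rw, rh in RATIOS:
                anchors.append([cx, cy, rw * unit, rh * unit])  # width and height attributes
    return torch.tensor(anchors)

# a linear model would then translate/scale (cx, cy, w, h) and a regression model refine each frame
anchors = make_anchors(32, 32)
print(anchors.shape)   # torch.Size([9216, 4])  ->  32 * 32 * 9 frames
```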
Step S3: extracting the cell characteristics of the areas where all the single cells are located.
Step S4: the global features and all cellular features are fused.
Specifically, as shown in fig. 7, in the embodiment of the invention the global feature F_global is downsampled for feature fusion to obtain a new downsampled global feature F_down ∈ R^(w_d × h_d × c), where w_d is the width and h_d is the height of the feature map after F_global is downsampled; a mapping vector V with the same shape as the global feature F_global is introduced. The features of the extracted regions are scaled by the same magnification as the global feature F_global, and the positions obtained by the cell extraction network are located in F_down. The cell features of the extracted regions are multiplied by the mapping vector V and then added to the downsampled global feature to obtain the fused feature F_fused, with the following formula:
in the method, in the process of the invention,representing that the cell characteristic corresponds to the downsampled global characteristic +.>K represents the detected singleNumber of individual cells,/->Representing global features->Mapping vector of>Representing the cellular characteristics of the ith cell.
Through this step, the embodiment of the invention directly omits the fitting stage of the traditional scheme. First, the positions of the regions where the single cells are located are detected; features are then extracted from each single-cell region and fused with the features of the whole tissue region. During training, single-cell position annotations and cell balance annotations supervise the training process, so the feature information contains the embryo balance information, and the goal of cell balance detection is achieved with a better detection effect.
Step S5: expanding the fused features along the channel dimension and inputting them into a classification network to obtain a cell balance confidence; the cells are judged to be balanced when the confidence exceeds the set threshold.
Specifically, the fused feature F_fused is expanded along the channel dimension and input into a classification network for cell balance judgment. The classification network consists of several linear layers and pooling layers, and finally regresses the cell balance confidence p. The confidence p is compared with the set confidence threshold t; if p > t, the cells are considered balanced.
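A sketch of such a classification head is shown below; the hidden width, the pooling choice and the 0.5 threshold t are assumptions, as the patent only states that linear layers and pooling layers regress the confidence p, which is then compared with t.

```python
import torch
import torch.nn as nn

class BalanceClassifier(nn.Module):
    """Pool the fused feature, expand it along the channel dimension and regress confidence p."""
    def __init__(self, in_channels=128, hidden=256):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)                         # pooling layer
        self.head = nn.Sequential(nn.Linear(in_channels, hidden),   # linear layers
                                  nn.ReLU(),
                                  nn.Linear(hidden, 1))

    def forward(self, f_fused, threshold=0.5):
        x = self.pool(f_fused).flatten(start_dim=1)                 # expansion along the channel dimension
        p = torch.sigmoid(self.head(x))                             # cell balance confidence p
        return p, p > threshold                                     # balanced when p > t

p, balanced = BalanceClassifier()(torch.randn(1, 128, 8, 8))
```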
The overall form of the network is end-to-end. First the tissue region is detected by the target detection network, which effectively avoids interference from noise in other regions of the microscope image with the cell balance judgment task. The tissue region is then taken as input and passed through the cell balance neural network. The cell balance network first takes the whole tissue region as input and extracts the features of the whole tissue; the semantic information in these features allows a preliminary judgment of how the cells of the whole tissue are stacked and a preliminary localization of the cell positions in the tissue. The semantic information of single cells is then extracted through the cell feature extraction network; because the sizes of the fixed search boxes are consistent, the stacking condition and the area of each single cell can be known from this semantic information while the consistency of feature information between cells is guaranteed. This information is integrated and handed to the final classification neural network to judge whether the cell balance of the tissue reaches the standard. Throughout the process, the positions of the search boxes and the balance index gradually approach the ideal result under the supervision of the data set.
The training set for training the embodiment of the present invention is briefly described as follows:
1) Cell images are collected during the whole cell division process, and the labels of the cells in each image are annotated, including a prediction box for the cell tissue, a prediction box for each single cell, and a label for whether the cells are balanced (indicated by 0 and 1). The annotated images are divided into a training set, a validation set and a test set; the cell images in the training set are preprocessed by flipping, rotation, translation and the like to expand the data set (see the sketch following this list);
2) Training the model by using the marked training set;
3) The hyperparameters of the model are adjusted according to the change of accuracy on the validation set, and finally the network model that performs best on the test set is saved;
4) The saved model is used to detect the captured images and output the predicted balance result for each image.
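The preprocessing in step 1) (flipping, rotation and translation to expand the data set) could be expressed with standard torchvision transforms as below; the specific parameter values and the 8:1:1 split are assumptions.

```python
import torchvision.transforms as T
from torch.utils.data import random_split

# flip / rotate / translate augmentations used to expand the training set (parameters assumed)
augment = T.Compose([
    T.RandomHorizontalFlip(p=0.5),
    T.RandomVerticalFlip(p=0.5),
    T.RandomAffine(degrees=15, translate=(0.1, 0.1)),
    T.ToTensor(),
])

# each annotated sample carries a tissue box, per-cell boxes and a 0/1 balance label;
# an 8:1:1 split into training / validation / test sets is one reasonable choice
def split_dataset(dataset):
    n = len(dataset)
    n_train, n_val = int(0.8 * n), int(0.1 * n)
    return random_split(dataset, [n_train, n_val, n - n_train - n_val])
```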
The embodiment of the invention also provides a terminal, which comprises a memory and a processor; a memory for storing a computer program and a cell division balance evaluation method based on global information fusion; and the processor is used for executing the computer program and the cell division balance evaluation method based on global information fusion so as to realize the method.
To illustrate the effectiveness of the method provided by the embodiment of the invention, 1,000 cell images were selected for an ablation experiment comparing the influence of adding the fusion network versus not using it. The experiments were carried out at different division stages of the cells and the judgment accuracy was tested; the experimental results are shown in Table 1:
TABLE 1
As can be seen from the table, compared with the scheme without the fusion module, the detection accuracy of the present scheme is greatly improved, and the improvement is more pronounced for embryos with larger cell numbers, because cell overlap worsens as the number of cells grows and the scheme of the invention is particularly effective at resolving cell overlap.
The foregoing embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features of the above embodiments are described, and only the preferred embodiments of the invention are described in detail, which should not be construed as limiting the scope of the invention. As long as there is no contradiction in a combination of these technical features, it should be considered within the scope of this specification.
It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the invention, and these all fall within the protection scope of the invention. Accordingly, the scope of protection of the present invention shall be determined by the appended claims.

Claims (9)

1. A cell division balance evaluation method based on global information fusion, characterized by comprising the following steps:
step S1: extracting global features in an embryo ROI area;
step S2: performing target detection on the embryo ROI area and detecting the areas where single cells are located;
step S3: extracting cell characteristics of the areas where all single cells are located;
step S4: fusing the global features with all the cell features;
step S5: expanding the fused features along the channel dimension and inputting them into a classification network to obtain a cell balance confidence; the cells are judged to be balanced when the confidence exceeds a set threshold.
2. The method for evaluating the cell division balance based on global information fusion according to claim 1, wherein the method comprises the following steps: the embryo ROI area and the single cell area are extracted through a target detection network.
3. The method for evaluating the cell division balance based on global information fusion according to claim 2, wherein: the method for extracting the target detection network comprises the following steps:
step S111: extracting features through a feature extraction network;
step S112: judging whether the set area is matched with the cell tissue or not through an area extraction network, and returning an offset value to a preset area to obtain an accurate position;
step S113: collecting global features output by a feature extraction network and regional information output by the regional extraction network through a regional pooling network;
step S114: and carrying out regression adjustment and constraint on the characteristics output by the regional pooling network to obtain the target.
4. The method for evaluating the cell division balance based on global information fusion according to claim 1, wherein: in step S1, the feature extraction uses ResNet50 as the feature extraction network.
5. The method for evaluating the cell division balance based on global information fusion according to claim 4, wherein: step S1 extracts global features through 3×3 convolution layers and pooling layers, and a residual connection layer is introduced to increase the depth of the network.
6. The method for evaluating the cell division balance based on global information fusion according to claim 1, wherein the method comprises the following steps: in the step S2, when target detection of a single cell area is performed, target detection is performed through nine rectangular frames with pixel sizes of 1:1, 1:2, 2:1, 2:2, 2:4, 4:2, 3:3, 3:6 and 6:3, and the area where the single cell is located is obtained through a linear model and a regression model.
7. The method for evaluating the cell division balance based on global information fusion according to claim 1, wherein: the method of step S4 comprises: downsampling the global feature F_global to obtain a new downsampled global feature F_down ∈ R^(w_d × h_d × c), where w_d is the width and h_d is the height of the feature map after F_global is downsampled; scaling the cell features by the same magnification as the global feature and marking their corresponding positions in F_down; and multiplying the cell features by the mapping vector V and then adding them to F_down to obtain the fused feature F_fused.
8. The method for evaluating the cell division balance based on global information fusion according to claim 7, wherein: the fusion expression is:

F_fused = F_down + Σ_{i=1}^{K} V · f_i

where F_down denotes the downsampled global feature to which the cell features correspond, K denotes the number of detected single cells, V denotes the mapping vector of the global feature F_global, and f_i denotes the feature at the position of the i-th cell.
9. A terminal, characterized by: comprising a memory and a processor;
the memory is used for storing a computer program and a cell division balance evaluation method based on global information fusion;
the processor is configured to execute the computer program and the cell division balance evaluation method based on global information fusion, so as to implement the method of any one of claims 1 to 8.
CN202410116623.9A 2024-01-29 2024-01-29 Global information fusion-based cell division equilibrium degree evaluation method and terminal Active CN117649660B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410116623.9A CN117649660B (en) 2024-01-29 2024-01-29 Global information fusion-based cell division equilibrium degree evaluation method and terminal

Publications (2)

Publication Number Publication Date
CN117649660A true CN117649660A (en) 2024-03-05
CN117649660B CN117649660B (en) 2024-04-19

Family

ID=90045391

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410116623.9A Active CN117649660B (en) 2024-01-29 2024-01-29 Global information fusion-based cell division equilibrium degree evaluation method and terminal

Country Status (1)

Country Link
CN (1) CN117649660B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110122138A1 (en) * 2006-08-28 2011-05-26 Guenter Schmidt Context driven image mining to generate image-based biomarkers
CN110310253A (en) * 2019-05-09 2019-10-08 杭州迪英加科技有限公司 Digital slices classification method and device
CN111539308A (en) * 2020-04-20 2020-08-14 浙江大学 Embryo quality comprehensive evaluation device based on deep learning
CN112069874A (en) * 2020-07-17 2020-12-11 中山大学 Method, system, equipment and storage medium for identifying cells in embryo optical lens image
CN113378796A (en) * 2021-07-14 2021-09-10 合肥工业大学 Cervical cell full-section classification method based on context modeling
US20210375458A1 (en) * 2020-05-29 2021-12-02 Boston Meditech Group Inc. System and method for computer aided diagnosis of mammograms using multi-view and multi-scale information fusion
US20230162353A1 (en) * 2021-11-23 2023-05-25 City University Of Hong Kong Multistream fusion encoder for prostate lesion segmentation and classification
CN116580394A (en) * 2023-05-19 2023-08-11 杭州电子科技大学 White blood cell detection method based on multi-scale fusion and deformable self-attention
CN116681958A (en) * 2023-08-04 2023-09-01 首都医科大学附属北京妇产医院 Fetal lung ultrasonic image maturity prediction method based on machine learning
CN116758539A (en) * 2023-08-17 2023-09-15 武汉互创联合科技有限公司 Embryo image blastomere identification method based on data enhancement

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MENG LONG;等: "Cervical cell TCT image detection and segmentation based on multi-scale feature fusion", 2021 IEEE 5TH ADVANCED INFORMATION TECHNOLOGY, ELECTRONIC AND AUTOMATION CONTROL CONFERENCE (IAEAC), 5 April 2021 (2021-04-05), pages 192 - 196 *
LUO Huilan; ZHANG Yun: "Semantic segmentation combining context features with CNN multi-layer feature fusion", Journal of Image and Graphics, no. 12, 31 December 2019 (2019-12-31), pages 148 - 157 *

Also Published As

Publication number Publication date
CN117649660B (en) 2024-04-19

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant