CN113077457A - System for predicting whether embryo can be encapsulated or not based on delayed camera system and deep learning algorithm - Google Patents


Info

Publication number
CN113077457A
Authority
CN
China
Prior art keywords
embryo
module
cell
sequence
camera system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110426107.2A
Other languages
Chinese (zh)
Inventor
李科珍
马丁
徐树公
张麒
艾继辉
廖秋月
冯雪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tongji Medical College of Huazhong University of Science and Technology
Original Assignee
Tongji Medical College of Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tongji Medical College of Huazhong University of Science and Technology filed Critical Tongji Medical College of Huazhong University of Science and Technology
Priority to CN202110426107.2A priority Critical patent/CN113077457A/en
Publication of CN113077457A publication Critical patent/CN113077457A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • G06T7/0014Biomedical image inspection using an image reference approach
    • G06T7/0016Biomedical image inspection using an image reference approach involving temporal comparison
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30044Fetus; Embryo

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a system for predicting whether an embryo can be encapsulated, i.e., develop into a blastocyst, based on a time-lapse camera system and a deep learning algorithm. The system uses a time-domain model and a spatial-domain model to separately identify the timing information and the morphological information in the video and then integrates the two, so that the prediction accuracy is markedly improved.

Description

System for predicting whether embryo can be encapsulated or not based on delayed camera system and deep learning algorithm
Technical Field
The invention relates to the technical field of artificial-intelligence learning and analysis of medical images, and in particular to a system for predicting whether an embryo can develop into a blastocyst based on a time-lapse camera system and a deep learning algorithm.
Background
Currently, the criteria widely used in the clinic for embryo evaluation and screening are still the traditional morphological methods: embryologists evaluate the morphology and development stage of embryos at several fixed time points after fertilization to determine their developmental competence. However, morphological assessment is greatly influenced by the subjectivity of the embryologist. More importantly, embryo development is a dynamically changing process, and observation at discontinuous time points cannot capture comprehensive development information, so key events in the development process may be missed. In recent years, time-lapse camera systems (TLS) have provided a new approach to embryo monitoring and assessment: an image of the embryo is taken every 5-20 minutes inside the incubator and the images are integrated into a dynamic video, so the entire process of embryo development can be recorded completely and in detail without disturbing the embryo culture environment.
Although TLS helps to improve the success rate of embryo screening and transplantation, the massive data it produces also presents a huge challenge to researchers. Based on the embryo development information recorded by TLS, researchers have extracted and analyzed the relationships between various morphological, morphokinetic and division-pattern parameters and embryo developmental potential, and have established various embryo screening algorithms (ESAs). However, the effectiveness and applicability of these ESAs remain greatly controversial. The Eeva(TM) system was the first automated ESA developed to predict blastocyst formation; combined with day-3 (D3) morphology, it improved embryo screening compared with D3 morphology alone. However, the accuracy of the Eeva system for automatic annotation of early embryos remains to be improved: Eeva showed no advantage in predicting blastocyst formation and in embryo screening when compared with manual annotation of the same embryo videos. Another study examined the validity of six published ESAs and found that they had no diagnostic value for data outside the model, indicating that existing ESAs may not be clinically useful. Clearly, it is necessary to exploit the vast amount of information acquired by TLS to develop a method that accurately predicts whether an embryo will form a blastocyst.
Disclosure of Invention
The invention aims to overcome the defects of the background art and provides a system for predicting whether an embryo can develop into a blastocyst based on a time-lapse camera system and a deep learning algorithm.
To achieve this purpose, the invention designs a system for predicting whether an embryo can develop into a blastocyst based on a time-lapse camera system and a deep learning algorithm, which comprises a single-/multi-cell recognition module, a pronucleus recognition module, a pronucleus-disappearance recognition module, a five-class cell recognition module, a time-domain prediction module, a spatial-domain prediction module and a time-space-domain combined prediction module;
the single-/multi-cell recognition module is used for recognizing single cells and multiple cells in a time-lapse camera system video;
the pronucleus recognition module is used for recognizing pronuclei in the single-cell images of the time-lapse video and outputting a sequence;
the pronucleus-disappearance recognition module is used for locating the last frame in which a pronucleus is present in the output sequence of the pronucleus recognition module, thereby determining the frame number at which the pronuclei disappear;
the five-class cell recognition module is used for automatically classifying the cell development stages, so that development timing information can be acquired by analyzing the frame numbers, and the changes in frame number, of the different cell stages in the embryo video;
the time-domain prediction module is used for learning the cell classification labels output by the five-class cell recognition module, acquiring information on how the cell development stages change over the video, and judging whether the embryo has the potential to develop into a blastocyst;
the spatial-domain prediction module is used for recognizing embryo morphological information in the time-lapse video with a deep learning algorithm and judging whether the embryo has the potential to develop into a blastocyst;
and the time-space-domain combined prediction module is used for predicting whether the embryo forms a blastocyst by taking a weighted average of the predicted blastocyst-formation probabilities output by the time-domain and spatial-domain prediction modules, thereby integrating timing and morphological information.
Further, the single-/multi-cell recognition module uses a binary classification model built on a DenseNet201 network, in which one class represents a single cell and the other class represents multiple cells, so that single cells and multiple cells in the time-lapse video can be recognized. Compared with other deep neural network structures, DenseNet not only speeds up gradient back-propagation, making the network easier to train, but also has fewer parameters and is computationally efficient.
Further, the pronucleus recognition module uses a DenseNet121 network to recognize pronuclei in the single-cell images of the time-lapse video. The module is a binary classifier: one class represents the presence of pronuclei, for which the module outputs 1; the other class represents the absence of pronuclei, for which the module outputs 0. After passing through the pronucleus recognition module, the time-lapse video is therefore converted into a sequence of 1s and 0s.
Further, the pronucleus-disappearance recognition module determines the frame number at which the pronuclei of each time-lapse video disappear by locating the frame number of the last 1 in the output sequence of the pronucleus recognition module.
Further, the pronucleus-disappearance recognition module corrects the output sequence of the pronucleus recognition module: a run of values that differs from the runs on its left and right is treated as a disordered run, while a run whose values agree with its neighbours is treated as an ordered run, and disordered runs are removed from the output sequence. The output sequence of the pronucleus recognition module is thus corrected, improving the accuracy of the pronucleus-disappearance recognition module.
Further, the five classes of the five-class cell recognition module correspond respectively to five stages of embryo development: 1 cell, 2 cells, 3 cells, 4 cells, and 5 or more cells.
Further, the five-class cell recognition module combines the Focal Loss function with the DenseNet201 network to classify the video frames into the five classes. Through the five-class cell recognition module, the invention automatically classifies the cell development stages and obtains development timing information by analyzing the frame numbers, and the changes in frame number, of the different cell stages in the embryo video; this solves the problem that embryo timing information is difficult to acquire automatically because the number of cells increases and the cell morphology changes during embryo cleavage. The DenseNet201 network is combined with Focal Loss, whose introduction addresses the class-imbalance problem in which one class has far fewer samples than the others, thereby improving model performance.
Further, the time-domain prediction module uses a long short-term memory (LSTM) network to learn, in order, the cell classification labels output by the five-class cell recognition module; after the five-class cell module is combined with the LSTM network, information on how the cell development stages change over the video can be acquired, and whether the embryo has the potential to develop into a blastocyst can be judged. The LSTM network has the advantage of learning long-term contextual dependencies between data.
Further, the spatial-domain prediction module uses a DenseNet201 network to extract a 1000-dimensional high-dimensional spatial feature vector from each frame, and these vectors are then input in order into a Gradient Boosting classifier to obtain the embryo morphological information of the images in the video and to judge whether the embryo has the potential to develop into a blastocyst.
Further, the time-space-domain combined prediction module is a multi-stage network module combining the time-domain and spatial-domain prediction modules; it takes a weighted average of the predicted blastocyst-formation probabilities output by the two modules and integrates timing and morphological information to predict whether the embryo forms a blastocyst.
Compared with the prior art, the invention has the following advantages:
Firstly, the system for predicting whether an embryo can develop into a blastocyst based on a time-lapse camera system and a deep learning algorithm learns, with deep learning models, the TLS videos of the first 3 days of embryo development and, by separately identifying the time-domain and spatial-domain information of the video and then integrating the two, achieves accurate prediction on day 3 (D3) of whether the embryo will form a blastocyst on day 5/6 (D5/6). The model extracts the timing information (time domain) and the morphological information (spatial domain) of embryo development and fuses the two complementary sources of information, so the prediction is more accurate, with an accuracy markedly higher than that of the traditional prediction by embryologists.
Secondly, the invention identifies the pronucleus-disappearance frame number in the time-lapse video through the single-/multi-cell recognition module, the pronucleus recognition module and the pronucleus-disappearance recognition module, and uses this frame number as a unified starting point for video analysis, so embryos from different fertilization methods (IVF and ICSI) can be analyzed together, improving the applicability of the invention.
Thirdly, the modules of the invention mainly adopt classification models built on the DenseNet201 network. Compared with other deep neural network structures, DenseNet not only speeds up gradient back-propagation, making the network easier to train, but also has fewer parameters and is computationally efficient.
Fourthly, the invention automatically classifies the cell development stages through the five-class cell recognition module and obtains development timing information by analyzing the frame numbers, and the changes in frame number, of the different cell stages in the embryo video, solving the problem that embryo timing information is difficult to acquire automatically because the number of cells increases and the cell morphology changes during embryo cleavage. The DenseNet201 network is combined with Focal Loss, whose introduction addresses the class-imbalance problem in which one class has far fewer samples than the others, thereby improving model performance.
Drawings
Fig. 1 is the overall framework of the STEM module of the present invention;
FIG. 2 shows the accuracy of the pronucleus recognition module;
FIG. 3 shows the accuracy of the pronucleus-disappearance recognition module;
FIG. 4 shows the accuracy of the five-class cell recognition module;
FIG. 5 shows the AUC curves of STEM, wherein (a) shows the AUC curves of the time-domain prediction module, the spatial-domain prediction module and the time-space-domain integration module, and (b) shows the accuracy of the time-space-domain integration module under the different weighted-average weights;
FIG. 6 compares the efficacy of the STEM module with that of embryologists in predicting embryo blastocyst formation.
Detailed Description
The following describes embodiments of the present invention in detail. They are illustrative only and are not intended to limit the invention. The advantages of the invention will be apparent and readily appreciated from the description.
The system for predicting whether an embryo can develop into a blastocyst based on a time-lapse camera system and a deep learning algorithm comprises a single-/multi-cell recognition module, a pronucleus recognition module, a pronucleus-disappearance recognition module, a five-class cell recognition module, a time-domain prediction module, a spatial-domain prediction module and a time-space-domain combined prediction module (STEM);
The single-/multi-cell recognition module is used for recognizing single cells and multiple cells in the time-lapse video; it uses a binary classification model built on a DenseNet201 network, in which one class represents a single cell and the other class represents multiple cells.
The pronucleus recognition module is used for recognizing pronuclei in the single-cell images of the time-lapse video and outputting a sequence; it uses a DenseNet121 network and is a binary classifier: one class represents the presence of pronuclei, for which the module outputs 1; the other class represents the absence of pronuclei, for which the module outputs 0. After passing through the pronucleus recognition module, the time-lapse video is therefore converted into a sequence of 1s and 0s.
The pronucleus-disappearance recognition module is used for locating the last frame in which a pronucleus is present in the output sequence of the pronucleus recognition module, thereby determining the frame number at which the pronuclei disappear; it does so by locating the frame number of the last 1 in the output sequence. The module also corrects the output sequence of the pronucleus recognition module: a run of values that differs from its left and right neighbours is treated as a disordered run, a run whose values agree with its neighbours is treated as an ordered run, and disordered runs are removed from the output sequence.
The five-class cell recognition module is used for automatically classifying the cell development stages, so that development timing information can be acquired by analyzing the frame numbers, and the changes in frame number, of the different cell stages in the embryo video. The five classes correspond respectively to five stages of embryo development: 1 cell, 2 cells, 3 cells, 4 cells, and 5 or more cells. The module combines the Focal Loss function with the DenseNet201 network to classify the video frames into the five classes.
The time-domain prediction module is used for learning the cell classification labels output by the five-class cell recognition module, acquiring information on how the cell development stages change over the video, and judging whether the embryo has the potential to develop into a blastocyst; it uses a long short-term memory (LSTM) network to learn the labels in order.
The spatial-domain prediction module is used for recognizing embryo morphological information in the time-lapse video with a deep learning algorithm to judge whether the embryo has the potential to develop into a blastocyst; it uses a DenseNet201 network to extract a 1000-dimensional high-dimensional spatial feature vector from each frame and inputs these vectors in order into a Gradient Boosting classifier to obtain the embryo morphological information of the images in the video.
The time-space-domain combined prediction module takes a weighted average of the predicted blastocyst-formation probabilities output by the time-domain and spatial-domain prediction modules and integrates timing and morphological information to predict whether the embryo forms a blastocyst.
The system for predicting whether an embryo can develop into a blastocyst based on a time-lapse camera system and a deep learning algorithm is built on the deep learning framework PyTorch 1.4 with Python 3.7. The experimental platform is Ubuntu 18.04, equipped with 2 Intel Xeon CPUs, 2 NVIDIA GTX 1080 Ti 11 GB GPUs and 256 GB of memory. The time-lapse camera system is Primo Vision; an embryo image is taken every 5 min, and after 3 days of monitoring from fertilization, all images of each embryo are integrated in order into an embryo video of 750-800 frames. To evaluate the performance of the algorithm, indicators such as sensitivity, specificity, accuracy and area under the curve (AUC) were used, where the receiver operating characteristic (ROC) curve plots sensitivity against 1-specificity at various threshold settings and AUC is the area under the ROC curve.
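As an illustration only (not part of the patent text), the evaluation indicators listed above can be computed from predicted probabilities and true labels with a short pure-Python sketch; the 0.5 decision threshold and the rank-based (Mann-Whitney) form of AUC are standard definitions, and all variable names here are the editor's own:

```python
def metrics(y_true, y_prob, threshold=0.5):
    """Sensitivity, specificity, accuracy and AUC for binary labels."""
    y_pred = [1 if p >= threshold else 0 for p in y_prob]
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sens = tp / (tp + fn)          # true-positive rate
    spec = tn / (tn + fp)          # true-negative rate
    acc = (tp + tn) / len(y_true)
    # AUC as the fraction of (positive, negative) pairs ranked correctly,
    # with ties counted as half (Mann-Whitney U statistic).
    pos = [p for t, p in zip(y_true, y_prob) if t == 1]
    neg = [p for t, p in zip(y_true, y_prob) if t == 0]
    wins = sum(1.0 if a > b else 0.5 if a == b else 0.0
               for a in pos for b in neg)
    auc = wins / (len(pos) * len(neg))
    return sens, spec, acc, auc
```

For real use, library implementations such as scikit-learn's `roc_auc_score` would normally be preferred; the sketch only makes the definitions concrete.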
This example collected embryo data from IVF and ICSI cycles at the reproductive medicine center affiliated with Tongji Medical College of Huazhong University of Science and Technology, from February 2014 to July 2017. All embryos cultured to D3 in the TLS and then to D5/6 in a normal incubator were included. A total of 13327 embryos were collected; after the TLS videos were downloaded, a preliminary quality review was performed and 12912 embryo videos were finally retained for subsequent analysis.
Firstly, establishing a video preprocessing module
In assisted reproduction technology there are two fertilization methods, IVF and ICSI, and the timing of embryo cleavage differs between them. The invention therefore establishes a video preprocessing module to identify the pronucleus-disappearance frame number, so that the TLS videos can be aligned to take pronucleus disappearance as a unified starting point.
1319 TLS embryo videos were randomly selected and divided into a training set (80%, 1055 cases) and a test set (20%, 264 cases). An experienced embryologist annotated the frame numbers of events such as pronucleus appearance, pronucleus disappearance and the first cleavage in each video.
(1) The single-/multi-cell recognition module. A DenseNet201 network was used to recognize single cells (before the first cleavage) and multiple cells (after the first cleavage) in the embryo image frames of the training set. Each frame of the video is assigned a binary value of 1 or 0, where 1 represents a single-cell frame and 0 a multi-cell frame. After the module learned the single-cell and multi-cell frames annotated by the embryologist, its accuracy was verified on the 264-case test set and reached 99.4%.
Table 1. Results of the single-/multi-cell recognition module [table provided as an image in the original publication].
(2) The pronucleus recognition module. After a video is processed by the single-/multi-cell module, the frames labelled as single cells are output and used to train the pronucleus recognition module. This module is also trained with a DenseNet201 network; the processed single-cell frames are output as a string of 0s and 1s, where 0 marks a frame without pronuclei and 1 a frame with pronuclei. As shown in FIG. 2, the accuracy of the pronucleus recognition module on the test set reaches 92.9%.
(3) The pronucleus-disappearance recognition module. After the pronucleus recognition module, a sequence of 0s and 1s is output; the computer locates the last 1 in the sequence, which corresponds to the last frame in which pronuclei are present, thereby determining the pronucleus-disappearance frame number. Because the pronucleus recognition module has a certain error rate, the output sequence contains disorder that is inconsistent with the rule that pronuclei appear and then disappear. The invention therefore introduces a sequence-correction mechanism: a run of values that differs from the runs on its left and right is treated as a disordered run, and a run whose values agree with its neighbours is treated as an ordered run. The correction procedure is as follows:
for i = 1 to k:   (k is set to 6)
    let num be the length of a disordered run
    let j_left and j_right be the lengths of the ordered runs on either side of the disordered run
    if num == i and ((j_left >= i+1 and j_right >= i+1) or (j_right >= i and j_left >= i+1)):
        overwrite the values of the disordered run with the value of the ordered runs
The result after the current pass (i) is the starting data of the next pass (i+1).
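A runnable Python sketch of the correction passes and the last-1 locator described above (the run-handling details, such as skipping runs at the sequence boundary and recomputing runs once per pass, are the editor's reading of the text, not an exact reproduction of the patented code):

```python
def runs_of(seq):
    """Maximal runs of identical values, as (value, start, length) triples."""
    out, s = [], 0
    for t in range(1, len(seq) + 1):
        if t == len(seq) or seq[t] != seq[s]:
            out.append((seq[s], s, t - s))
            s = t
    return out

def correct_sequence(seq, k=6):
    """Remove short disordered runs from a binary pronucleus sequence."""
    seq = list(seq)
    for i in range(1, k + 1):
        rs = runs_of(seq)
        for j in range(1, len(rs) - 1):       # boundary runs have one neighbour only
            val, start, num = rs[j]
            j_left, j_right = rs[j - 1][2], rs[j + 1][2]
            if num == i and ((j_left >= i + 1 and j_right >= i + 1)
                             or (j_right >= i and j_left >= i + 1)):
                # in a 0/1 sequence both flanking runs carry the value 1 - val
                seq[start:start + num] = [1 - val] * num
        # runs are recomputed at the start of the next pass, so the output of
        # pass i is the input of pass i + 1
    return seq

def pn_disappearance_frame(seq):
    """Index of the last frame in which a pronucleus is present (the last 1)."""
    return max(idx for idx, v in enumerate(seq) if v == 1)
```

For example, a single spurious 0 inside a long run of 1s is flipped back on the first pass, and the last 1 of the corrected sequence then gives the pronucleus-disappearance frame.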
As shown in FIG. 3, the module's accuracy in identifying pronucleus disappearance on the 264-case validation set is 97.7%, where an identification is counted as correct if the absolute difference between the module's predicted frame and the embryologist's annotated frame does not exceed 10 frames.
Secondly, establishing a time-domain prediction module
Timing information during embryo development, such as the time at which the embryo divides into each stage and the duration of each stage, is closely related to the embryo's developmental potential. The invention therefore establishes a time-domain prediction module to predict the probability of blastocyst formation from the timing information of embryo development.
After the 1319 videos used to train the video preprocessing module were set aside, the remaining of the 12912 embryo videos were screened by video length to remove videos with interrupted or prematurely terminated monitoring. 10540 videos with a total frame count of at least 750 frames were retained; after processing by the video preprocessing module, pronucleus disappearance could be identified in 10432 of them, and these videos were used to build the prediction modules. Embryo fates were labelled by an experienced embryologist: an embryo with a blastocoel cavity at D5/6 is defined as having formed a blastocyst and labelled 1; an embryo without a cavity is defined as not having formed a blastocyst and labelled 0. The videos were randomly divided into a training set (80%, 8346 cases: 5061 with and 3285 without blastocyst formation) and a test set (20%, 2086 cases: 1265 with and 821 without blastocyst formation).
(1) The five-class cell recognition module. Because the number of cells increases and the cell morphology changes continuously during embryo cleavage, the timing information of embryo development is difficult to learn automatically with a deep learning network. The invention first provides a five-class cell recognition module to classify the embryo development stage automatically, thereby providing development-stage information for the time-domain prediction module. The 577-case data set used by this module was randomly chosen from the 1319 cases used for the preprocessing module, and an experienced embryologist annotated five stages of embryo development: 1 cell, 2 cells, 3 cells, 4 cells, and 5 or more cells. On the 463-case training set (80%), the DenseNet201 network was combined with Focal Loss to classify the video frames into the five classes. Because 3-cell frames account for only 6% of the stage pictures in the training set, far fewer than the other stages (42%, 16%, 18% and 18% respectively), the invention introduces Focal Loss to address this data imbalance; the formula is as follows:
FL(p_t) = -α_t (1 - p_t)^γ log(p_t)
where α_t is the total number of training samples divided by the number of training samples in class t; γ is set to 2; and p_t is the probability, predicted by the network, that a sample belongs to the positive class.
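The formula above can be made concrete with a small pure-Python sketch; the class frequencies are the ones quoted in the text (1-cell 42%, 2-cell 16%, 3-cell 6%, 4-cell 18%, >=5-cell 18%), while the specific probability values fed in are illustrative only:

```python
import math

def focal_loss(p_t, alpha_t, gamma=2.0):
    """FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t)."""
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)

# class frequencies quoted in the text; alpha_t = total / per-class count
freq = [0.42, 0.16, 0.06, 0.18, 0.18]        # 1c, 2c, 3c, 4c, >=5c
alpha = [1.0 / f for f in freq]              # rare 3-cell class gets the largest weight

# the (1 - p_t)**gamma factor down-weights well-classified samples (p_t near 1)
# relative to hard ones (p_t near 0)
easy = focal_loss(0.9, alpha[2])
hard = focal_loss(0.1, alpha[2])
```

This is what lets the rare 3-cell stage contribute meaningfully to the gradient despite its 6% share of the training frames.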
The five-class module was verified on a 114-case test set; the accuracy of identifying the 1-cell, 2-cell, 3-cell, 4-cell and >=5-cell stages is 97.2%, 88.3%, 89.7%, 92.8% and 97.5% respectively, and the overall accuracy is 94.6%, as shown in FIG. 4.
(2) The time-domain prediction module. Based on the video preprocessing module and the five-class cell recognition module, the invention provides a time-domain prediction module to predict the probability of embryo blastocyst formation. In the 8346-case training set obtained after video-length screening and pronucleus-disappearance recognition, 600 frames are first captured from each video, from 100 frames before pronucleus disappearance to 499 frames after it, which makes the start and end points of the analysis consistent for IVF and ICSI embryos. The 600 frames are input into the five-class cell recognition module, each frame is output as a value for the corresponding cell number, and these values are then input in order into the LSTM network for training. When verified on the 2086-case test set, the time-domain prediction module outputs the predicted probability of blastocyst formation: a probability of at least 0.5 predicts blastocyst formation, and a probability below 0.5 predicts no blastocyst formation. The prediction accuracy is 76.9% and the AUC is 0.77, as shown in FIG. 5(a).
III. Establishment of the spatial domain prediction module
Morphological features during embryo development have long been important indicators for embryologists to assess developmental potential. The invention therefore uses deep learning to identify morphological information during embryo development and thereby predict the probability of blastocyst formation.
Clinically, embryologists typically assess embryo morphology after fertilization, at pronucleus disappearance, at early cleavage, and on D2 and D3. Accordingly, 35 frames are selected relative to the pronucleus-disappearance frame for training the spatial domain module: frames -75 to -69, -3 to +3, +33 to +39, +249 to +255, and +493 to +499. On the 8346 training videos, the 1000-dimensional fully connected layer of the DenseNet201 network is taken as the network output to extract a 1000-dimensional high-dimensional spatial feature from each frame, and the 35 × 1000 features of each embryo are then input in order to a Gradient Boosting classifier to extract spatial information. The spatial domain prediction module achieves an accuracy of 70.0% and an AUC of 0.76 on the 2086-video test set, as shown in FIG. 5(a).
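The 35-frame selection can be sketched as follows (`morphology_frames` is a hypothetical helper; the five 7-frame windows use the offsets stated in the description):

```python
def morphology_frames(pn):
    """Indices of the 35 frames used by the spatial domain module,
    relative to the pronucleus-disappearance frame pn: five 7-frame
    windows around fertilization, pronucleus disappearance, early
    cleavage, D2 and D3."""
    offsets = []
    for start, stop in [(-75, -69), (-3, 3), (33, 39),
                        (249, 255), (493, 499)]:
        offsets.extend(range(start, stop + 1))
    return [pn + o for o in offsets]

idx = morphology_frames(150)
```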
IV. Establishment of the time-space domain combined prediction module (STEM)
The time domain and spatial domain modules evaluate the developmental potential of the embryo from timing information and morphological information respectively and predict its outcome. Since both are important indicators in embryo screening and evaluation, the invention integrates the time domain and spatial domain modules to improve the accuracy and clinical utility of the prediction module.
The 2086 validation videos are input to the time domain and spatial domain prediction modules, which output a time-domain-based and a spatial-domain-based predicted blastocyst-formation probability respectively. The two probabilities are combined as a weighted average, with the weight any value between 0 and 1; the result is the integrated module's predicted probability, where a value ≥ 0.5 predicts blastocyst formation and a value < 0.5 predicts no blastocyst formation, and the accuracy of the integrated module is computed against the actual embryo outcome. After sweeping all weight values between 0 and 1 at intervals of 0.01, the integrated module STEM reaches its highest accuracy, 78.2%, with an AUC of 0.82, when the time domain module is weighted 0.66 and the spatial domain module 0.34, as shown in FIG. 5.
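The weight sweep described above can be sketched as follows (a toy example with synthetic probabilities and labels, not the patent's data; `best_weight` is a hypothetical helper):

```python
def best_weight(p_time, p_space, labels, step=0.01):
    """Sweep the time-domain weight w over [0, 1] in steps of `step`.
    The combined probability is w*p_time + (1-w)*p_space, thresholded
    at 0.5; returns (best_w, best_accuracy) against the true labels."""
    n = len(labels)
    best = (0.0, -1.0)
    for k in range(int(round(1 / step)) + 1):
        w = k * step
        correct = sum(
            ((w * pt + (1 - w) * ps) >= 0.5) == bool(y)
            for pt, ps, y in zip(p_time, p_space, labels)
        )
        acc = correct / n
        if acc > best[1]:
            best = (w, acc)
    return best

# Toy case: time-domain probabilities are informative, spatial are not,
# so any nonzero time-domain weight classifies all four embryos correctly.
w, acc = best_weight([0.9, 0.8, 0.2, 0.1], [0.5, 0.5, 0.5, 0.5], [1, 1, 0, 0])
```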
Table 2. STEM results on the 2086-video test set

Item | Accuracy | Sensitivity | Specificity
Time domain prediction module | 76.9% | 84.7% | 64.7%
Spatial domain prediction module | 70.0% | 70.4% | 69.5%
Integrated module STEM | 78.2% | 85.9% | 66.3%
V. Predictive efficacy of the module STEM
To evaluate how well the module STEM predicts blastocyst formation, STEM was compared with the predictions of four embryologists. Four embryologists, each with over 10 years of experience, analyzed the 2086 embryo videos of the test set and predicted whether each embryo could form a blastocyst. Their accuracies were 67.8%, 64.5%, 65.5% and 64.9% respectively; FIG. 6 compares the embryologists' predictive efficacy with that of STEM. The McNemar test shows that the module STEM is more sensitive and more specific than each of the four embryologists.
Table 3. Results of the module STEM and the four embryologists on the 2086-video test set:
[Table 3 is reproduced as an image in the original filing: Figure BDA0003029624650000111]
Note: in the table, P is the McNemar test result comparing the sensitivity of a single embryologist with that of STEM, and P' is the McNemar test result comparing their specificity. Test values < 0.05 indicate a significant difference.
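For illustration, the McNemar statistic referenced in the note can be computed from the discordant pair counts as follows (a sketch using the standard continuity-corrected chi-square approximation; b and c are the counts of embryos where exactly one of the two raters is correct, and the numbers below are synthetic, not the patent's data):

```python
from math import erf, sqrt

def mcnemar_p(b, c):
    """Two-sided McNemar test with continuity correction on the
    discordant counts b and c. Uses the chi-square(1) approximation;
    for one degree of freedom, P(X > x) = 2 * (1 - Phi(sqrt(x)))."""
    if b + c == 0:
        return 1.0
    chi2 = (abs(b - c) - 1) ** 2 / (b + c)
    z = sqrt(chi2)
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(z / sqrt(2.0))))

# Strongly asymmetric discordant counts give a small p-value.
p = mcnemar_p(60, 20)
```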
The above description is only one embodiment of the present invention. It should be noted that any changes or substitutions readily conceived by those skilled in the art within the technical scope of the present invention fall within its protection scope; details not described here are prior art.

Claims (10)

1. A system for predicting whether an embryo can be encapsulated or not based on a delayed camera system and a deep learning algorithm, characterized in that: the system comprises a single-cell and multi-cell identification module, a pronucleus recognition module, a pronucleus disappearance recognition module, a five-classification cell identification module, a time domain prediction module, a spatial domain prediction module and a time-space domain combined prediction module;
the single-cell and multi-cell identification module is used for identifying single cells and multiple cells in delayed camera system videos;
the pronucleus recognition module is used for recognizing pronuclei in the single-cell images of a delayed camera system video and outputting a sequence;
the pronucleus disappearance recognition module is used for locating the last frame in which a pronucleus is present in the output sequence of the pronucleus recognition module and determining the frame number at which the pronuclei disappear;
the five-classification cell identification module is used for automatically classifying the cell development stages, so as to acquire development timing information by analyzing the frame numbers of the different cell stages in the embryo video and their changes;
the time domain prediction module is used for learning the cell classification labels output by the five-classification cell identification module, acquiring the change information of different stages of cell development in the video, and judging whether the embryo has the potential to develop into a blastocyst;
the spatial domain prediction module is used for identifying embryo morphological information in delayed camera system videos by using a deep learning algorithm and judging whether the embryo has the potential to develop into a blastocyst;
and the time-space domain combined prediction module is used for predicting whether the embryo forms a blastocyst by taking a weighted average of the predicted blastocyst-formation probabilities output by the time domain prediction module and the spatial domain prediction module, thereby integrating timing and morphological information.
2. The system for predicting whether an embryo can be encapsulated based on the delayed camera system and the deep learning algorithm according to claim 1, wherein: the single-cell and multi-cell identification module uses a two-class classification module built on the DenseNet201 network, in which one class represents a single cell and the other represents multiple cells, so as to identify single cells and multiple cells in delayed camera system videos.
3. The system for predicting whether an embryo can be encapsulated based on the delayed camera system and the deep learning algorithm according to claim 2, wherein: the pronucleus recognition module uses a DenseNet121 network to recognize pronuclei in the single-cell images of the delayed camera system video; the module is a two-class module in which one class represents the presence of a pronucleus, for which the module outputs 1, and the other class represents pronucleus disappearance, for which the module outputs 0; after passing through the pronucleus recognition module, the delayed camera system video is output as a sequence of 1s and 0s.
4. The system for predicting whether an embryo can be encapsulated based on the delayed camera system and the deep learning algorithm according to claim 3, wherein: the pronucleus disappearance recognition module determines the frame number of pronucleus disappearance for each delayed camera system video by locating, by computer, the frame number of the last 1 in the output sequence of the pronucleus recognition module.
5. The system for predicting whether an embryo can be encapsulated based on the delayed camera system and the deep learning algorithm according to claim 4, wherein: the pronucleus disappearance recognition module corrects the output sequence of the pronucleus recognition module: a value that differs from both its left and right neighbors is regarded as out of order and is reset to the value shared by its neighbors, thereby removing disorder from the output sequence.
6. The system for predicting whether an embryo can be encapsulated based on the delayed camera system and the deep learning algorithm according to any one of claims 1-5, wherein: the five classes of the five-classification cell identification module correspond to the five stages of embryo development: 1 cell, 2 cells, 3 cells, 4 cells, and 5 or more cells.
7. The system for predicting whether an embryo can be encapsulated based on the delayed camera system and the deep learning algorithm according to claim 6, wherein: the five-classification cell identification module combines the Focal Loss function with the DenseNet201 network to classify video frames into the five classes.
8. The system for predicting whether an embryo can be encapsulated based on the delayed camera system and the deep learning algorithm according to any one of claims 1-5, wherein: the time domain prediction module uses a long short-term memory (LSTM) network to learn, in order, the cell classification labels output by the five-classification cell identification module; by combining the five-classification cell module with the LSTM network, it acquires the change information of the different cell development stages in the video and judges whether the embryo has the potential to develop into a blastocyst.
9. The system for predicting whether an embryo can be encapsulated based on the delayed camera system and the deep learning algorithm according to any one of claims 1-5, wherein: the spatial domain prediction module uses a DenseNet201 network to extract a 1000-dimensional high-dimensional spatial feature from each frame, and then inputs these features in order to a Gradient Boosting classifier, so as to obtain the embryo morphological information of the frames in the video and judge whether the embryo has the potential to develop into a blastocyst.
10. The system for predicting whether an embryo can be encapsulated based on the delayed camera system and the deep learning algorithm according to any one of claims 1-5, wherein: the time-space domain combined prediction module is a multi-stage network module combining the time domain prediction module and the spatial domain prediction module; it predicts whether the embryo forms a blastocyst by taking a weighted average of the predicted blastocyst-formation probabilities output by the time domain and spatial domain modules, thereby integrating timing and morphological information.
CN202110426107.2A 2021-04-20 2021-04-20 System for predicting whether embryo can be encapsulated or not based on delayed camera system and deep learning algorithm Pending CN113077457A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110426107.2A CN113077457A (en) 2021-04-20 2021-04-20 System for predicting whether embryo can be encapsulated or not based on delayed camera system and deep learning algorithm


Publications (1)

Publication Number Publication Date
CN113077457A true CN113077457A (en) 2021-07-06

Family

ID=76618147

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110426107.2A Pending CN113077457A (en) 2021-04-20 2021-04-20 System for predicting whether embryo can be encapsulated or not based on delayed camera system and deep learning algorithm

Country Status (1)

Country Link
CN (1) CN113077457A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109214375A (en) * 2018-11-07 2019-01-15 浙江大学 A kind of embryo's pregnancy outcome prediction meanss based on block sampling video features
CN111681230A (en) * 2020-06-10 2020-09-18 华中科技大学同济医学院附属同济医院 System and method for scoring high-signal of white matter of brain
CN111783854A (en) * 2020-06-18 2020-10-16 武汉互创联合科技有限公司 Intelligent embryo pregnancy state prediction method and system
CN111787877A (en) * 2017-12-15 2020-10-16 维特罗莱夫公司 Systems and methods for estimating embryo viability


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
QIUYUE LIAO,ET AL.: "Development of deep learning algorithms for predicting blastocyst formation and quality by time-lapse monitoring", 《COMMUNICATIONS BIOLOGY》 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116844160A (en) * 2023-09-01 2023-10-03 武汉互创联合科技有限公司 Embryo development quality assessment system based on main body identification
CN116844160B (en) * 2023-09-01 2023-11-28 武汉互创联合科技有限公司 Embryo development quality assessment system based on main body identification

Similar Documents

Publication Publication Date Title
JP7072067B2 (en) Systems and methods for estimating embryo viability
CN106650796B (en) Cell fluorescence image classification method and system based on artificial intelligence
JP2022551683A (en) Methods and systems for non-invasive genetic testing using artificial intelligence (AI) models
US20230018456A1 (en) Methods and systems for determining optimal decision time related to embryonic implantation
CN116153495A (en) Prognosis survival prediction method for immunotherapy of esophageal cancer patient
WO2023121575A1 (en) Determining the age and arrest status of embryos using a single deep learning model
US20230027723A1 (en) Stain-free detection of embryo polarization using deep learning
Silva-Rodríguez et al. Predicting the success of blastocyst implantation from morphokinetic parameters estimated through CNNs and sum of absolute differences
Erlich et al. Pseudo contrastive labeling for predicting IVF embryo developmental potential
CN113077457A (en) System for predicting whether embryo can be encapsulated or not based on delayed camera system and deep learning algorithm
AU2024200572A1 (en) Automated evaluation of quality assurance metrics for assisted reproduction procedures
EP4214675A1 (en) Methods and systems for predicting neurodegenerative disease state
CN110443282B (en) Embryo development stage classification method in embryo time sequence image
TW201913565A (en) Evaluation method for embryo images and system thereof
Malmsten et al. Automated cell stage predictions in early mouse and human embryos using convolutional neural networks
Yuzkat et al. Detection of sperm cells by single-stage and two-stage deep object detectors
EP4352691A1 (en) Methods and systems for embryo classification
CA3156826A1 (en) Imaging system and method of use thereof
Bhanumathi et al. Underwater Fish Species Classification Using Alexnet
Eswaran et al. Deep Learning Algorithms for Timelapse Image Sequence-Based Automated Blastocyst Quality Detection
RU2800079C2 (en) Systems and methods of assessing the viability of embryos
Miled et al. Embryo Development Stage Onset Detection by Time Lapse Monitoring Based on Deep Learning.
RU2810125C1 (en) Automated assessment of quality assurance indicators for assisted reproduction procedures
AU2019101174A4 (en) Systems and methods for estimating embryo viability
Diakiw et al. An artificial intelligence model that was trained on pregnancy outcomes for embryo viability assessment is highly correlated with Gardner score

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210706