WO2020198779A1 - Method and system for selecting embryos - Google Patents

Method and system for selecting embryos

Info

Publication number
WO2020198779A1
Authority
WO
WIPO (PCT)
Prior art keywords
model
image
images
embryo
training
Prior art date
Application number
PCT/AU2020/000027
Other languages
French (fr)
Inventor
Jonathan Michael MacGillivray HALL
Donato PERUGINI
Michelle PERUGINI
Original Assignee
Presagen Pty Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from AU2019901152A0
Application filed by Presagen Pty Ltd
Priority to CN202080041427.XA (CN113906472A)
Priority to JP2021560476A (JP2022528961A)
Priority to EP20783755.0A (EP3948772A4)
Priority to US17/600,739 (US20220198657A1)
Priority to AU2020251045A (AU2020251045A1)
Publication of WO2020198779A1


Classifications

    • G06T 7/0012 Biomedical image inspection
    • G06N 20/20 Ensemble learning
    • G06F 18/24133 Classification based on distances to prototypes
    • G06N 3/045 Neural networks; Combinations of networks
    • G06N 3/08 Neural networks; Learning methods
    • G06T 7/11 Image analysis; Region-based segmentation
    • G06T 7/12 Image analysis; Edge-based segmentation
    • G06T 7/136 Image analysis; Segmentation involving thresholding
    • G06T 7/149 Image analysis; Segmentation involving deformable models, e.g. active contour models
    • G06V 10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G06V 10/54 Extraction of image or video features relating to texture
    • G06V 10/82 Image or video recognition or understanding using neural networks
    • G06V 20/695 Microscopic objects, e.g. biological cells or cellular parts; Preprocessing, e.g. image segmentation
    • A61B 17/435 Gynaecological or obstetrical instruments or methods for reproduction or fertilisation, for embryo or ova transplantation
    • G06F 17/145 Square transforms, e.g. Hadamard, Walsh, Haar, Hough, Slant transforms
    • G06T 2200/28 Indexing scheme involving image processing hardware
    • G06T 2207/10016 Image acquisition modality: Video; Image sequence
    • G06T 2207/10024 Image acquisition modality: Color image
    • G06T 2207/10056 Image acquisition modality: Microscopic image
    • G06T 2207/20061 Algorithmic details: Hough transform
    • G06T 2207/20081 Algorithmic details: Training; Learning
    • G06T 2207/20084 Algorithmic details: Artificial neural networks [ANN]
    • G06T 2207/30044 Subject of image: Fetus; Embryo

Definitions

  • The present disclosure relates to In-vitro Fertilisation (IVF).
  • In particular, the present disclosure relates to methods for selecting embryos.
  • An In-Vitro Fertilisation (IVF) procedure starts with an ovarian stimulation phase which stimulates egg production.
  • Eggs (oocytes) are then retrieved from the patient and fertilized in-vitro with sperm, which penetrates the Zona Pellucida, the glycoprotein layer surrounding the egg (oocyte), to form a zygote.
  • An embryo develops over a period of around 5 days, after which time the embryo has formed a blastocyst (formed of the trophoblast, blastocoele and inner cell mass) suitable for transfer back into the patient.
  • The blastocyst is still surrounded by the Zona Pellucida, from which the blastocyst will hatch to then implant in the endometrial wall.
  • We will refer to the region bounded by the inner surface of the Zona Pellucida as the IntraZonal Cavity (IZC).
  • One commonly used scoring system is the Gardner Scale, in which morphological features such as inner cell mass quality, trophectoderm quality, and embryo developmental advancement are evaluated and graded according to an alphanumeric scale. The embryologist then selects one (or more) of the embryos, which is then transferred back to the patient.
  • Embryo selection is currently a manual process that involves a subjective assessment of embryos by an embryologist through visual inspection.
  • One of the key challenges in embryo grading is the high level of subjectivity and intra- and inter-operator variability that exists between embryologists of different skill levels. This means that standardization is difficult even within a single laboratory and impossible across the industry as a whole.
  • The process relies heavily on the expertise of the embryologist, and despite their best efforts, the success rates for IVF are still relatively low (around 20%).
  • Whilst the reasons for low pregnancy outcomes are complex, tools to more accurately select the most viable embryos are expected to result in increases in successful pregnancy outcomes.
  • One tool that has been used to assist selection is pre-implantation genetic screening (PGS), a technique that takes a biopsy and then screens the extracted cells. Whilst this can be useful to identify genetic risks which may lead to a failed pregnancy, it also has the potential to harm the embryo during the biopsy process. It is also expensive and has limited or no availability in many large developing markets such as China.
  • Another tool that has been considered is the use of time-lapse imaging over the course of embryo development. However this requires expensive specialized hardware that is cost prohibitive for many clinics. Further, there is no evidence that it can reliably improve embryo selection.
  • A method for computationally generating an Artificial Intelligence (AI) model configured to estimate an embryo viability score from an image, comprising:
  • Each image is captured during a pre-determined time window after In-Vitro Fertilisation (IVF), the pre-determined time window is 24 hours or less, and the metadata associated with the image comprises at least a pregnancy outcome label; pre-processing each image comprising at least segmenting the image to identify a Zona Pellucida region;
  • An Artificial Intelligence (AI) model configured to generate an embryo viability score from an input image is generated by training at least one Zona Deep Learning Model using a deep learning method, comprising training a deep learning model on a set of Zona Pellucida images in which the Zona Pellucida regions are identified, and the associated pregnancy outcome labels are at least used to assess the accuracy of a trained model;
  • The set of Zona Pellucida images comprises images in which regions bounded by the Zona Pellucida region are masked.
  • Generating the AI model further comprises training one or more additional AI models, wherein each additional AI model is either a computer vision model trained using a machine learning method that uses a combination of one or more computer vision descriptors extracted from an image to estimate an embryo viability score, a deep learning model trained on images localised to the embryo comprising both Zona Pellucida and IZC regions, or a deep learning model trained on a set of IntraZonal Cavity (IZC) images in which all regions apart from the IZC are masked; and either using an ensemble method to combine at least two of the at least one Zona deep learning model and the one or more additional AI models to generate the AI model embryo viability score from an input image, or using a distillation method to train an AI model to generate the AI model embryo viability score using the at least one Zona deep learning model and the one or more additional AI models.
  • The AI model is generated using an ensemble model, comprising selecting at least two contrasting AI models from the at least one Zona deep learning model and the one or more additional AI models, wherein the selection of AI models is performed to generate a set of contrasting AI models, and applying a voting strategy to the at least two contrasting AI models that defines how the selected at least two contrasting AI models are combined to generate an outcome score for an image.
  • The pre-determined time window is a 24 hour time period beginning 5 days after fertilisation.
  • The pregnancy outcome label is a ground-truth pregnancy outcome measurement performed within 12 weeks after embryo transfer.
  • The ground-truth pregnancy outcome measurement is whether a foetal heartbeat is detected.
  • The method further comprises cleaning the plurality of images, comprising identifying images with likely incorrect pregnancy outcome labels, and excluding or re-labelling the identified images.
  • Cleaning the plurality of images comprises estimating the likelihood that a pregnancy outcome label associated with an image is incorrect and comparing against a threshold value, and then excluding or re-labelling images with a likelihood exceeding the threshold value.
  • Estimating the likelihood that a pregnancy outcome label associated with an image is incorrect may be performed by using a plurality of AI classification models and a k-fold cross validation method, in which the plurality of images are split into k mutually exclusive validation datasets, and each of the plurality of AI classification models is trained on k-1 validation datasets in combination and then used to classify images in the remaining validation dataset, and the likelihood is determined based on the number of AI classification models which misclassify the pregnancy outcome label of an image.
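  • The label-cleaning step above can be sketched as follows. This is a minimal illustration assuming scikit-learn style classifiers and a pre-computed feature array per image; the choice of classifier, the number of models and the exclusion threshold are illustrative assumptions, not values specified in the disclosure.

```python
# Minimal sketch of label cleaning via k-fold cross validation; classifier,
# model count and threshold are illustrative assumptions.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.ensemble import RandomForestClassifier

def estimate_label_error_likelihood(features, labels, n_models=5, k=5, seed=0):
    """Return, for each image, the fraction of models that disagree with its label."""
    rng = np.random.RandomState(seed)
    misclassified = np.zeros((n_models, len(labels)), dtype=float)
    for m in range(n_models):
        skf = StratifiedKFold(n_splits=k, shuffle=True, random_state=rng.randint(10**6))
        for train_idx, val_idx in skf.split(features, labels):
            clf = RandomForestClassifier(n_estimators=100, random_state=m)
            clf.fit(features[train_idx], labels[train_idx])
            preds = clf.predict(features[val_idx])
            misclassified[m, val_idx] = (preds != labels[val_idx]).astype(float)
    return misclassified.mean(axis=0)  # likelihood that the label is incorrect

# Images whose likelihood exceeds a chosen threshold are excluded or re-labelled:
# likelihood = estimate_label_error_likelihood(X, y)
# keep = likelihood <= 0.8
```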
  • Training each AI model or generating the ensemble model comprises assessing the performance of an AI model using a plurality of metrics comprising at least one accuracy metric and at least one confidence metric, or a metric combining accuracy and confidence.
  • Pre-processing the image further comprises cropping the image by localising an embryo in the image using a deep learning or computer vision method.
  • Pre-processing the image further comprises one or more of padding the image, normalising the colour balance, normalising the brightness, and scaling the image to a predefined resolution.
  • Padding the image may be performed to generate a square aspect ratio for the image.
  • The method further comprises generating one or more augmented images for use in training an AI model. Preparing each image may also comprise generating one or more augmented images by making a copy of an image with a change, or the augmentation may be performed on the image itself. Augmentation may be performed prior to training or during training (on the fly).
  • Any number of augmentations may be performed, with varying amounts of 90 degree rotations of the image, mirror flip, a non-90 degree rotation where a diagonal border is filled in to match a background colour, image blurring, adjusting an image contrast using an intensity histogram, and applying one or more small random translations in the horizontal and/or vertical direction, random rotations, JPEG noise, random image resizing, random hue jitter, random brightness jitter, contrast limited adaptive histogram equalisation, random flip/mirror, image sharpening, image embossing, random brightness and contrast, RGB colour shift, random hue and saturation, channel shuffle, swapping RGB to BGR or RBG (or other orderings), coarse dropout, motion blur, median blur, Gaussian blur, and random shift-scale-rotate (i.e. all three combined). A sketch of such an augmentation pipeline is given below.
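  • The following is a minimal sketch of such an augmentation pipeline, assuming the albumentations library (argument names vary slightly between versions); the probabilities and limits are illustrative, not values specified in the disclosure.

```python
# Illustrative augmentation pipeline using albumentations; probabilities and
# limits are arbitrary choices for the sketch.
import albumentations as A
import cv2

augment = A.Compose([
    A.RandomRotate90(p=0.5),                      # multiples of 90 degree rotation
    A.HorizontalFlip(p=0.5),                      # mirror flip
    A.Rotate(limit=20, border_mode=cv2.BORDER_CONSTANT, p=0.3),  # non-90 rotation, border filled
    A.ShiftScaleRotate(shift_limit=0.05, scale_limit=0.1, rotate_limit=15, p=0.3),
    A.CLAHE(p=0.2),                               # contrast limited adaptive histogram equalisation
    A.RandomBrightnessContrast(p=0.3),            # brightness / contrast jitter
    A.HueSaturationValue(p=0.2),                  # hue and saturation jitter
    A.RGBShift(p=0.2),                            # RGB colour shift
    A.ChannelShuffle(p=0.1),                      # channel shuffle / RGB-to-BGR style swaps
    A.ImageCompression(p=0.2),                    # JPEG noise
    A.CoarseDropout(p=0.2),                       # coarse dropout
    A.OneOf([A.MotionBlur(), A.MedianBlur(blur_limit=3), A.GaussianBlur()], p=0.2),
])

# augmented = augment(image=image)["image"]  # applied on the fly during training
```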
  • Pre-processing an image may further comprise annotating the image using one or more feature descriptor models, and masking all areas of the image except those within a given radius of the descriptor key point.
  • The one or more feature descriptor models may comprise a Gray-Level Co-Occurrence Matrix (GLCM) Texture Analysis, a Histogram of Oriented Gradients (HOG), an Oriented Features from Accelerated Segment Test (FAST) and Rotated Binary Robust Independent Elementary Features (BRIEF), a Binary Robust Invariant Scalable Key-points (BRISK), a Maximally Stable Extremal Regions (MSER) or a Good Features To Track (GFTT) feature detector; the GLCM descriptors are sketched below.
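  • As an illustration, the GLCM texture descriptors mentioned above (ASM, homogeneity, correlation, contrast and entropy, as also shown in Figure 7) can be computed as follows. This is a minimal sketch assuming scikit-image 0.19 or later, where the relevant functions are named graycomatrix and graycoprops.

```python
# Minimal sketch of GLCM texture descriptors for a grayscale patch.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_descriptors(gray_uint8):
    """Compute ASM, homogeneity, correlation, contrast and entropy for a grayscale patch."""
    glcm = graycomatrix(gray_uint8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    features = {prop: graycoprops(glcm, prop).mean()
                for prop in ("ASM", "homogeneity", "correlation", "contrast")}
    p = glcm[glcm > 0]
    features["entropy"] = float(-np.sum(p * np.log2(p)))  # entropy is not in graycoprops
    return features
```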
  • Each AI model generates an outcome score, wherein the outcome is an n-ary outcome having n states.
  • Training an AI model comprises a plurality of training-validation cycles, and further comprises randomly allocating the plurality of images to one of a training set, a validation set or a blind validation set, such that the training dataset comprises at least 60% of the images, the validation dataset comprises at least 10% of the images, and the blind validation dataset comprises at least 10% of the images. After allocating the images to the training set, validation set and blind validation set, the frequency of each of the n-ary outcome states in each of the training set, validation set and blind validation set is calculated and tested for similarity, and if the frequencies are not similar the allocation is discarded and the randomisation repeated until a randomisation is obtained in which the frequencies are similar (a sketch of this check is given below).
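  • A minimal sketch of this allocate-and-check step, assuming binary viable/non-viable labels; the 60/20/20 split, the 2% tolerance and the retry limit are illustrative assumptions.

```python
# Randomly split indices into train/validation/blind-validation sets and repeat
# until the outcome-state frequencies of the three sets are similar.
import numpy as np

def allocate_with_similar_frequencies(labels, fractions=(0.6, 0.2, 0.2),
                                      tolerance=0.02, max_tries=1000, seed=0):
    rng = np.random.RandomState(seed)
    labels = np.asarray(labels)
    n = len(labels)
    bounds = np.cumsum([int(f * n) for f in fractions[:-1]])
    for _ in range(max_tries):
        order = rng.permutation(n)
        splits = np.split(order, bounds)            # train, validation, blind validation
        freqs = [labels[s].mean() for s in splits]  # frequency of the 'viable' state
        if max(freqs) - min(freqs) <= tolerance:    # frequencies are "similar"
            return splits
    raise RuntimeError("no allocation with similar outcome frequencies found")
```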
  • Training a computer vision model comprises performing a plurality of training-validation cycles, and during each cycle the images are clustered based on the computer vision descriptors using an unsupervised clustering algorithm to generate a set of clusters, each image is assigned to a cluster using a distance measure based on the values of the computer vision descriptors of the image, and a supervised learning method is used to determine whether a particular combination of these features corresponds to an outcome measure, using frequency information of the presence of each computer vision descriptor in the plurality of images.
  • The deep learning model may be a convolutional neural network (CNN), and for an input image each deep learning model generates an outcome probability.
  • The deep learning method may use a loss function configured to modify the optimisation surface so as to emphasise global minima.
  • The loss function may include a residual term, defined in terms of the network weights, which encodes the collective difference between the predicted value from the model and the target outcome for each image, and includes it as an additional contribution to the normal cross entropy loss function.
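  • One possible reading of such a loss is sketched below in PyTorch: the standard cross entropy plus a residual term penalising the collective difference between the predicted probabilities and the target outcomes over a batch. The weighting alpha and the exact form of the residual term are assumptions for illustration, not the disclosure's formulation.

```python
# Cross entropy with an additional "collective difference" residual term;
# the residual form and the weighting alpha are illustrative assumptions.
import torch
import torch.nn.functional as F

def cross_entropy_with_residual(logits, targets, alpha=0.1):
    ce = F.cross_entropy(logits, targets)
    probs = torch.softmax(logits, dim=1)
    # collective difference between the predicted probability of the true class and 1.0
    residual = (1.0 - probs.gather(1, targets.unsqueeze(1))).mean()
    return ce + alpha * residual
```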
  • The method may be performed on a cloud based computing system using a webserver, a database, and a plurality of training servers, wherein the webserver receives one or more model training parameters from a user and initiates a training process on one or more of the plurality of training servers, comprising uploading training code to one of the plurality of training servers; the training server requests the plurality of images and associated metadata from a data repository, and performs the steps of preparing each image, generating a plurality of computer vision models and generating a plurality of deep learning models; and each training server is configured to periodically save the models to a storage service, and accuracy information to one or more log files, to allow a training process to be restarted.
  • The ensemble model may be trained to bias residual inaccuracies to minimise false negatives.
  • The outcome is a binary outcome of either viable or non-viable, and randomisation may comprise calculating the frequency of images with a viable classification and a non-viable classification in each of the training set, validation set and blind validation set, and testing if they are similar.
  • The outcome measure is a measure of embryo viability using the viability classification associated with each image.
  • Each outcome probability may be a probability that the image is viable.
  • Each image may be a phase contrast image.
  • A method for computationally generating an embryo viability score from an image, comprising:
  • an Artificial Intelligence (AI) model configured to generate an embryo viability score from an image according to the method of the first aspect;
  • pre-processing the image according to the pre-processing steps used to generate the AI model; providing the pre-processed image to the AI model to obtain an estimate of the embryo viability score;
  • A method for obtaining an embryo viability score from an image, comprising:
  • A cloud based computational system configured to computationally generate an Artificial Intelligence (AI) model configured to estimate an embryo viability score from an image according to the method of the first aspect.
  • A cloud based computational system configured to computationally generate an embryo viability score from an image, wherein the computational system comprises:
  • a computational system configured to generate an embryo viability score from an image,
  • wherein the computational system comprises at least one processor, and at least one memory comprising instructions to configure the at least one processor to: receive an image captured during a pre-determined time window after In-Vitro Fertilisation (IVF); upload, via a user interface, the image captured during a pre-determined time window after In-Vitro Fertilisation (IVF) to a cloud based Artificial Intelligence (AI) model configured to generate an embryo viability score from an image, wherein the AI model is generated according to the method of the first aspect;
  • Figure 1A is a schematic flowchart of the generation of an Artificial Intelligence (AI) model configured to estimate an embryo viability score from an image according to an embodiment;
  • Figure 1B is a schematic block diagram of a cloud based computation system configured to computationally generate and use an AI model configured to estimate an embryo viability score from an image according to an embodiment;
  • Figure 2 is a schematic diagram of an IVF procedure using an AI model configured to estimate an embryo viability score from an image to assist in selecting an embryo for implantation according to an embodiment;
  • Figure 3A is a schematic architecture diagram of a cloud based computation system configured to generate and use an AI model configured to estimate an embryo viability score from an image according to an embodiment;
  • Figure 3B is a schematic flowchart of a model training process on a training server according to an embodiment;
  • Figure 4 is a schematic diagram of binary thresholding for boundary-finding on images of human embryos according to an embodiment;
  • Figure 5 is a schematic diagram of a boundary-finding method on images of human embryos according to an embodiment;
  • Figure 6A is an example of the use of a Geometrical Active Contour (GAC) model as applied to a fixed region of an image for image segmentation according to an embodiment;
  • Figure 6B is an example of the use of a morphological snake as applied to a fixed region of an image for image segmentation according to an embodiment;
  • Figure 6C is a schematic architecture diagram of a U-Net architecture for a semantic segmentation model according to an embodiment;
  • Figure 6D is an image of a day 5 embryo;
  • Figure 6E is a padded version of Figure 6D creating a square image;
  • Figure 6F shows a Zona Image based on Figure 6E in which the IZC is masked according to an embodiment;
  • Figure 6G shows an IZC image based on Figure 6E in which the Zona Pellucida and background are masked according to an embodiment;
  • Figure 7 is a plot of a Gray Level Co-occurrence Matrix (GLCM) showing GLCM correlation of sample feature descriptors: ASM, homogeneity, correlation, contrast and entropy, calculated on a set of six Zona Pellucida regions and six cytoplasm regions according to an embodiment;
  • Figure 8 is a schematic architecture diagram of a deep learning method, including convolutional layers, which transform the input image to a prediction after training, according to an embodiment;
  • Figure 9 is a plot of the accuracy of an embodiment of an ensemble model in identifying embryo viability according to an embodiment;
  • Figure 10 is a bar chart showing the accuracy of an embodiment of the ensemble model compared to world-leading embryologists (clinicians) in accurately identifying embryo viability;
  • Figure 11 is a bar chart showing the accuracy of an embodiment of the ensemble model in correctly identifying embryo viability where the embryologists' assessment was incorrect, compared with embryologists correctly identifying embryo viability where the ensemble model assessment was incorrect;
  • Figure 12 is a plot of the distribution of inference scores for viable embryos (successful clinical pregnancy) using the embodiment of the ensemble model, when applied to the blind validation dataset of Study 1;
  • Figure 13 is a plot of the distribution of inference scores for non-viable embryos (unsuccessful clinical pregnancy) using the embodiment of the ensemble model, when applied to the blind validation dataset of Study 1;
  • Figure 14 is a histogram of the rank obtained from the embryologist scores across the total blind dataset;
  • Figure 15 is a histogram of the rank obtained from the embodiment of the ensemble model inferences across the total blind dataset;
  • Figure 16 is a histogram of the ensemble model inferences, prior to being placed into rank bandings from 1 to 5;
  • Figure 17 is a plot of the distribution of inference scores for viable embryos (successful clinical pregnancy) using the ensemble model, when applied to the blind validation dataset of Study 2;
  • Figure 18 is a plot of the distribution of inference scores for non-viable embryos (unsuccessful clinical pregnancy) using the ensemble model, when applied to the blind validation dataset of Study 2;
  • Figure 19 is a plot of the distribution of inference scores for viable embryos (successful clinical pregnancy) using the ensemble model, when applied to the blind validation dataset of Study 3;
  • Figure 20 is a plot of the distribution of inference scores for non-viable embryos (unsuccessful clinical pregnancy) using the ensemble model, when applied to the blind validation dataset of Study 3.
  • Like reference characters designate like or corresponding parts throughout the figures.
  • Figure 1A is a schematic flow chart of the generation of an AI model 100 using a cloud based computation system 1 according to an embodiment.
  • A plurality of images and associated metadata is received (or obtained) from one or more data sources 101.
  • Each image is captured during a pre-determined time window after In-Vitro Fertilisation (IVF).
  • The images and metadata can be sourced from IVF clinics and may be images captured using optical light microscopy, including phase contrast images.
  • The metadata includes a pregnancy outcome label (e.g. heart beat detected at first scan post IVF) and may include a range of other clinical and patient information.
  • The images are then pre-processed 102, with the pre-processing including segmenting the image to identify a Zona Pellucida region of the image.
  • The segmentation may also include identification of the IntraZonal Cavity (IZC) which is surrounded by the Zona Pellucida region.
  • Pre-processing an image may also involve one or more (or all) of object detection, alpha channel removal, padding, cropping/localising, normalising the colour balance, normalising the brightness, and/or scaling the image to a predefined resolution, as discussed below.
  • Pre-processing the image may also include calculating/determining computer vision feature descriptors from an image, and performing one or more image augmentations, or generating one or more augmented images.
  • At least one Zona Deep Learning model is trained on a set of Zona Pellucida images 103 in order to generate the Artificial Intelligence (AI) model 100 configured to generate an embryo viability score from an input image 104.
  • The set of Zona Pellucida images are images in which the Zona Pellucida regions are identified (e.g. during segmentation in step 102).
  • In some embodiments the set of Zona Pellucida images are images in which all regions of the image apart from the Zona Pellucida region are masked (i.e. so the deep learning model is only trained on information from/relating to the Zona Pellucida region).
  • The pregnancy outcome labels are used at least in the assessment of a trained model (i.e. to assess accuracy/performance) and may also be used in model training (e.g. by the loss function to drive model optimisation).
  • Multiple Zona Deep Learning Models may be trained, with the best performing model selected as the AI model 100.
  • One or more additional AI models may also be trained on the pre-processed images 106. These may be additional deep learning models trained directly on the embryo image, and/or on a set of IZC images in which all regions of the image apart from the IZC are masked, or Computer Vision (CV) models trained to combine computer vision features/descriptors generated in the pre-processing step 102 to generate an embryo viability score from an image.
  • Each of the Computer Vision models uses a combination of one or more computer vision descriptors extracted from an image to estimate an embryo viability score of an embryo in an image, and a machine learning method performs a plurality of training-validation cycles to generate the CV model. Similarly, each of the deep learning models is trained in a plurality of training-validation cycles so that each deep learning model learns how to estimate an embryo viability score of an embryo in an image.
  • Each training-validation cycle comprises a (further) randomisation of the plurality of images within each of the training set, validation set and blind validation set. That is, the images within each set are randomly sampled each cycle, so that each cycle a different subset of images is analysed, or the images are analysed in a different ordering. Note however that as they are randomly sampled, this does allow two or more sets to be identical, provided this occurred through a random selection process.
  • The multiple AI models are then combined into the single AI model 100, using ensemble, distillation or other similar techniques 107, to generate the AI model 100 in step 104.
  • An ensemble approach involves selecting models from the set of available models and using a voting strategy that defines how an outcome score is generated from the individual outcomes of the selected models.
  • The models are selected to ensure that the results contrast to generate a distribution of results. These are preferably as independent as possible to ensure a good distribution of results. A minimal sketch of such a voting strategy is given below.
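  • The following sketch illustrates one way such a voting strategy could be realised: each selected (contrasting) model produces a viability probability for the image, and the ensemble combines them. Mean-confidence and majority voting are shown; the predict_viability model API, the strategies and the threshold are illustrative assumptions, not the disclosure's chosen settings.

```python
# Combine per-model viability scores into a single outcome score.
import numpy as np

def ensemble_score(image, models, strategy="mean", threshold=0.5):
    scores = np.array([m.predict_viability(image) for m in models])  # hypothetical model API
    if strategy == "mean":
        return float(scores.mean())                       # mean confidence voting
    if strategy == "majority":
        return float((scores > threshold).mean() >= 0.5)  # majority vote on binarised scores
    raise ValueError(f"unknown voting strategy: {strategy}")
```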
  • In a distillation approach the multiple AI models are used as teachers to train a single student model, with the student model becoming the final AI model 100.
  • A final AI model is selected. This may be one of the Zona Deep Learning models trained in step 103, or it may be a model obtained using an ensemble, distillation or similar combination step (step 107) where the training included at least one Zona Deep Learning model (from 103) and one or more additional AI models (Deep Learning and/or CV; step 106).
  • Once a final AI model 100 is generated (104), this is deployed for operational use to estimate an embryo viability score from an input image 105, e.g. on a cloud server that is configured to receive a phase contrast image of a day 5 embryo captured at an IVF clinic using a light microscope.
  • Deployment comprises saving or exporting the trained model, such as by writing the model weights and associated model metadata to a file which is transferred to the operational computation system and uploaded to recreate the trained model.
  • Deployment may also comprise moving, copying, or replicating the trained model onto an operational computational system, such as one or more cloud based servers, or locally based computer servers at IVF clinics.
  • Deployment may also comprise reconfiguring the computational system the AI model was trained on to accept new images and generate viability estimates using the trained model, for example by adding an interface to receive images, run the trained model on the received images, and send the results back to the source, or store the results for later retrieval.
  • The deployed system is configured to receive an input image, and perform any pre-processing steps used to generate the AI model (i.e. so new images are pre-processed in the same way as the training images).
  • In some embodiments the images may be pre-processed prior to uploading to the cloud system (i.e. local pre-processing).
  • In some embodiments the pre-processing may be distributed between the local system and the remote (e.g. cloud) system.
  • The deployed model is executed or run over the image to generate an embryo viability score that is then provided to the user.
  • Figure 1B is a schematic block diagram of a cloud based computation system 1 configured to computationally generate an AI model 100 configured to estimate an embryo viability score from an image (i.e. an embryo viability assessment model), and then use this AI model 100 to generate an embryo viability score (i.e. an outcome score) which is an estimate (or assessment) of the viability of an embryo in a received image.
  • The input 10 comprises data such as the images of the embryo and pregnancy outcome information (e.g. heart beat detected at first ultrasound scan post IVF, live birth or not, or successful implantation) which can be used to generate a viability classification.
  • Models may be trained using a variety of methods and information, including the use of segmented datasets (e.g. Zona images, IZC images) and pregnancy outcome data.
  • A best performing model may be selected according to some criteria, such as based on the pregnancy outcome information, or multiple AI models may be combined using an ensemble model which selects AI models and generates an outcome based on a voting strategy, or a distillation method may be used in which the multiple AI models are used as teachers to train a student AI model, or some other similar method may be used to combine the multiple models into a single model.
  • A cloud based model management and monitoring tool, which we refer to as the model monitor 21, is used to create (or generate) the AI models. This uses a series of linked services, such as Amazon Web Services (AWS), which manage the training, logging and tracking of models specific to image analysis and the model.
  • A cloud based delivery platform 30 is used which provides a user interface 42 to the system for a user 40.
  • Figure 2 is a schematic diagram of an IVF procedure 200 using a previously trained AI model to generate an embryo viability score to assist in selecting an embryo for implantation according to an embodiment.
  • Harvested eggs are fertilised 202. These are then in-vitro cultured for several days and then an image of the embryo is captured, for example using a phase contrast microscope 204. As discussed below, it was generally found that images taken 5 days after in-vitro fertilisation produced better results than images taken at earlier days.
  • In this embodiment the model is trained and used on day 5 embryos; however it is to be understood that a model could be trained and used on images of embryos taken during a specific time window with reference to a specific epoch.
  • In one embodiment the time window is 24 hours, but other time windows such as 12 hours, 36 hours, or 48 hours could be used.
  • Generally smaller time windows of 24 hours or less are preferable to ensure greater similarity in appearance.
  • This could be a specific day, which is a 24 hour window starting at the beginning of the day (0:00) to the end of the day (23:59), or specific days such as days 4 or 5 (a 48 hour window starting at the start of day 4).
  • Alternatively the time window could define a window size and epoch, such as 24 hours centred on day 5 (i.e. 4.5 days to 5.5 days).
  • The time window could be open ended with a lower bound, such as at least 5 days.
  • Whilst it is preferable to use images of embryos from a time window of 24 hours around day 5, it is to be understood that earlier stage embryos could be used, including day 3 or day 4 images.
  • Typically several eggs will be fertilised at the same time and thus a set of multiple images will be obtained for consideration of which embryo is the best (i.e. most viable) to implant.
  • The user uploads the captured image to the platform 30 via user interface 42, for example using "drag and drop" functionality.
  • The user can upload a single image or multiple images, for example to assist in selecting which embryo to implant from a set of multiple embryos being considered for implantation.
  • The platform 30 receives the one or more images 312, which are stored in a database 36 that includes an image repository.
  • The cloud based delivery platform comprises on-demand cloud servers 32 that can do the image pre-processing before the image is provided to the trained AI (embryo viability assessment) model 100, which executes on one of the on-demand cloud servers 32 to generate an embryo viability score 314.
  • A report including the embryo viability score is generated 316 and this is sent or otherwise provided to the user 40, such as through the user interface 42.
  • The user (e.g. embryologist) uses the embryo viability score to assist in selecting an embryo for implantation, and the selected embryo is then implanted 205.
  • Pregnancy outcome data, such as detection (or not) of a heartbeat in the first ultrasound scan after implantation (normally around 6-10 weeks post fertilisation), may be provided to the system.
  • The image may be captured using a range of imaging systems, such as those found in existing IVF clinics. This has the advantage of not requiring IVF clinics to purchase new imaging systems or use specific imaging systems. Imaging systems are typically light microscopes configured to capture single phase contrast images of embryos.
  • A range of imaging systems may be used, in particular optical light microscope systems using a range of imaging sensors and image capture techniques. These may include phase contrast microscopy, polarised light microscopy, differential interference contrast (DIC) microscopy, dark-field microscopy, and bright field microscopy. Images may be captured using a conventional optical microscope fitted with a camera or image sensor, or the image may be captured by a camera with an integrated optical system capable of taking a high resolution or high magnification image, including smart phone systems.
  • Image sensors may be a CMOS sensor chip or a charge coupled device (CCD), each with associated electronics.
  • The optical system may be configured to collect specific wavelengths or use filters, including band pass filters, to collect (or exclude) specific wavelengths.
  • Some image sensors may be configured to operate or be sensitive to light in specific wavelengths, or at wavelengths beyond the optical range including in the Infrared (IR) or near IR.
  • In some embodiments the imaging sensor is a multispectral camera which collects an image at multiple distinct wavelength ranges.
  • Illumination systems may also be used to illuminate the embryo with light of a particular wavelength, in a particular wavelength band, or at a particular intensity. Stops and other components may be used to restrict or modify illumination to certain parts of the image (or image plane).
  • The image used in embodiments described herein may be sourced from video and time-lapse imaging systems.
  • A video stream is a periodic sequence of image frames where the interval between image frames is defined by the capture frame rate (e.g. 24 or 48 frames per second).
  • A time-lapse system captures a sequence of images with a very slow frame rate (e.g. 1 image per hour) to obtain a sequence of images as the embryo grows (post-fertilisation).
  • The image used in embodiments described herein may be a single image extracted from a video stream or a time-lapse sequence of images of an embryo.
  • The image to use may be selected as the image with a capture time nearest to a reference time point, such as 5.0 or 5.5 days post fertilisation.
  • Pre-processing may include an image quality assessment, so that an image may be excluded if it fails a quality assessment.
  • A further image may be captured if the original image fails a quality assessment.
  • In this case the image selected is the first image which passes the quality assessment nearest the reference time.
  • Alternatively a reference time window may be defined (e.g. 30 minutes following the start of day 5.0), along with image quality criteria.
  • In this case the image selected is the image with the highest quality during the reference time window.
  • The image quality criteria used in performing quality assessment may be based on a pixel colour distribution, a brightness range, and/or an unusual image property or feature that indicates poor quality or equipment failure.
  • The thresholds may be determined by analysing a reference set of images. This may be based on manual assessment or automated systems which extract outliers from distributions.
  • Figure 3A is a schematic architecture diagram of a cloud based computation system 1 configured to generate and use an AI model 100 configured to estimate an embryo viability score from an image according to an embodiment.
  • As shown in Figure 1B, the AI model generation method is handled by the model monitor 21.
  • The model monitor 21 allows a user 40 to provide image data and metadata 14 to a data management platform which includes a data repository.
  • A data preparation step is performed, for example to move the images to specific folders, and to rename and perform pre-processing on the images, such as object detection, segmentation, alpha channel removal, padding, cropping/localising, normalising, scaling, etc.
  • Feature descriptors may also be calculated, and augmented images generated in advance. However, additional pre-processing including augmentation may also be performed during training (i.e. on the fly). Images may also undergo quality assessment, to allow rejection of clearly poor images and allow capture of replacement images.
  • Similarly, patient records or other clinical data are processed (prepared) to extract an embryo viability classification (e.g. viable or non-viable), which is linked or associated with each image to enable use in training the AI models and/or in assessment.
  • The prepared data is loaded 16 onto a cloud provider (e.g. AWS) template server 28 with the most recent version of the training algorithms.
  • The template server is saved, and multiple copies made across a range of training server clusters 37, which may be CPU, GPU, ASIC, FPGA, or TPU (Tensor Processing Unit) based, and which form training servers 35.
  • The model monitor web server 31 then applies for a training server 37 from the plurality of cloud based training servers 35 for each job submitted by the user 40.
  • Each training server 35 runs the pre-prepared code (from template server 28) for training an AI model, using a library such as PyTorch, TensorFlow or equivalent, and may use a computer vision library such as OpenCV. PyTorch and OpenCV are open-source libraries with low-level commands for constructing CV machine learning models.
  • The training servers 37 manage the training process. This may include dividing the images into training, validation, and blind validation sets, for example using a random allocation process. Further, during a training-validation cycle the training servers 37 may also randomise the set of images at the start of the cycle, so that each cycle a different subset of images is analysed, or the images are analysed in a different ordering. If pre-processing was not performed earlier or was incomplete (e.g. during data management) then additional pre-processing may be performed, including object detection, segmentation and generation of masked data sets (e.g. Zona or IZC images).
  • Pre-processing may also include padding, normalising, etc. as required. That is, the pre-processing step 102 may be performed prior to training, during training, or some combination of the two (i.e. distributed pre-processing).
  • The number of training servers 35 being run can be managed from the browser interface. As the training progresses, logging information about the status of the training is recorded 62 onto a distributed logging service such as CloudWatch 60. Key patient and accuracy information is also parsed out of the logs and saved into a relational database 36.
  • The models are also periodically saved 51 to a data storage service (e.g. AWS Simple Storage Service (S3) or a similar cloud storage service) 50 so they can be retrieved and loaded at a later date (for example to restart in case of an error or other stoppage).
  • The user 40 is sent email updates 44 regarding the status of the training servers if their jobs are complete, or if an error is encountered.
  • Within each training cluster 37 a number of processes take place. Once a cluster is started via the web server 31, a script is automatically run, which reads the prepared images and patient records, and begins the specific PyTorch/OpenCV training code requested 71.
  • The input parameters for the model training 28 are supplied by the user 40 via the browser interface 42 or via a configuration script.
  • The training process 72 is then initiated for the requested model parameters, and can be a lengthy and intensive task. Therefore, so as not to lose progress while the training is in progress, the logs are periodically saved 62 to the logging service (e.g. AWS CloudWatch) 60 and the current version of the model (while training) is saved 51 to the data storage service 50.
  • An embodiment of a schematic flowchart of a model training process on a training server is shown in Figure 3B.
  • Multiple models can be combined together, for example using ensemble, distillation or similar approaches, in order to incorporate a range of deep learning models (e.g. PyTorch) and/or targeted computer vision models (e.g. OpenCV) to generate a robust AI model 100, which is provided to the cloud based delivery platform 30.
  • The cloud-based delivery platform 30 system then allows users 10 to drag and drop images directly onto the web application 34, which prepares the image and passes the image to the trained AI model 100.
  • The web application 34 also allows clinics to store data such as images and patient information in database 36, create a variety of reports on the data, create audit reports on the usage of the tool for their organisation, group or specific users, and manage billing and user accounts (e.g. create users, delete users, reset passwords, change access levels, etc.).
  • The cloud-based delivery platform 30 also enables product admin to access the system to create new customer accounts and users, reset passwords, as well as access customer/user accounts (including data and screens) to facilitate technical support.
  • Effective models can still be developed using a shorter time window such as 12 hours, or images taken at other days such as day 3 or day 4, or a minimum time period after fertilisation such as at least 5 days (e.g. an open ended time window).
  • What is important is that the images used for training of an AI model, and for subsequent classification by the trained AI model, are taken during similar and preferably the same time windows (e.g. the same day, such as day 4 or day 5, or the same 12 or 24 hour time window).
  • Each image undergoes a pre-processing (image preparation) procedure 102, including at least segmenting the image to identify a Zona Pellucida region.
  • A range of pre-processing steps or techniques may be applied. They may be performed after adding the images to the data store 14 or during training by a training server 37.
  • In one embodiment an object detection (localisation) module is used to detect and localise the image on the embryo. Object detection/localisation comprises estimating the bounding box containing an embryo. This can be used for cropping and/or segmentation of the image.
  • The image may also be padded with a given boundary, and then the colour balance and brightness are normalised. The image is then cropped so that the outer region of the embryo is close to the boundary of the image.
  • Image segmentation is a computer vision technique that is useful for preparing the image for certain models, picking out relevant areas for the model training to focus on, such as the Zona Pellucida and the IntraZonal Cavity (IZC).
  • The image may be masked to generate images of just the Zona Pellucida (i.e. crop to the border of the Zona Pellucida and mask the IZC - see Figure 6F) or just the IZC (i.e. crop to the border of the IZC to exclude the Zona Pellucida - see Figure 6G).
  • The background may be left in the image or it may be masked as well.
  • Embryo viability models may then be trained using just the masked images, for example Zona images which are masked to just contain the Zona Pellucida and background of the image, and/or IZC images which are masked to just contain the IZC (a sketch of generating such masked images is given below).
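  • A minimal sketch of producing such masked Zona and IZC images, assuming a per-pixel label map from the segmentation step with the (assumed) convention 0 = background, 1 = Zona Pellucida, 2 = IZC:

```python
# Produce masked Zona and IZC images from an RGB image and a per-pixel label map;
# the label convention (0/1/2) is an assumption for illustration.
import numpy as np

def make_masked_images(image, label_map, keep_background_in_zona=True):
    """Return (zona_image, izc_image) with all other regions set to zero."""
    zona_keep = (label_map == 1)
    if keep_background_in_zona:
        zona_keep |= (label_map == 0)          # Zona image: IZC masked, background kept
    izc_keep = (label_map == 2)                # IZC image: Zona and background masked
    zona_image = np.where(zona_keep[..., None], image, 0)
    izc_image = np.where(izc_keep[..., None], image, 0)
    return zona_image, izc_image
```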
  • Scaling involves rescaling the image to a predefined scale to suit the particular model being trained.
  • Augmentation involves making small changes to a copy of the image, such as rotations of the image, in order to control for the orientation of the embryo in the dish.
  • The use of segmentation prior to deep learning was found to have a significant effect on the performance of the deep learning method. Similarly, augmentation was important for generating a robust model.
  • A range of image pre-processing techniques may be used for the preparation of human embryo images prior to training an AI model. These include the following (a sketch of several of these steps is given after this list):
  • Alpha Channel Stripping comprises stripping an image of an alpha channel (if present) to ensure it is coded in a 3-channel format (e.g. RGB), for example to remove transparency maps;
  • Padding/Bolstering each image with a padded border to generate a square aspect ratio, prior to segmentation, cropping or boundary-finding. This process ensured that image dimensions were consistent, comparable, and compatible for deep learning methods, which typically require square dimension images as input, while also ensuring that no key components of the image were cropped;
  • Normalizing the RGB (red-green-blue) or gray-scale images to a fixed mean value for all the images. For example this includes taking the mean of each RGB channel, and dividing each channel by its mean value. Each channel was then multiplied by a fixed value of 100/255, in order to ensure the mean value of each image in RGB space was (100, 100, 100). This step ensured that color biases among the images were suppressed, and that the brightness of each image was normalized;
  • Thresholding images using binary, Otsu, or adaptive methods. This includes morphological processing of the image using dilation (opening), erosion (closing) and scale gradients, and using a scaled mask to extract the outer and inner boundaries of a shape;
  • Object Detection/Cropping the image to localise the image on the embryo and ensure that there are no artefacts around the edges of the image. This may be performed using an Object Detector which uses an object detection model (discussed below) which is trained to estimate a bounding box which contains the embryo (including the Zona Pellucida);
  • Extracting the geometric properties of the boundaries using an elliptical Hough transform of the image contours, for example the best ellipse fit from an elliptical Hough transform calculated on the binary threshold map of the image. This method acts by selecting the hard boundary of the embryo in the image, and by cropping the square boundary of the new image so that the longest radius of the new ellipse is encompassed by the new image width and height, and so that the centre of the ellipse is the centre of the new image;
  • Segmentation may be performed by calculating the best-fit contour around an unelliptical image using a Geometrical Active Contour (GAC) model, or morphological snake, within a given region. The inner and outer regions of the snake can be treated differently depending on the focus of the trained model on the Zona Pellucida region or the cytoplasmic (IntraZonal Cavity) region, which may contain a blastocyst. Alternatively a Semantic Segmentation model may be trained which identifies a class for each pixel in an image. In one embodiment a semantic segmentation model was developed using a U-Net architecture with a pretrained ResNet-50 encoder to segment the Zona Pellucida and IZC, trained using a BinaryCrossEntropy loss function;
  • Tensor conversion comprising transforming each image to a tensor rather than a visually displayable image, as this data format is more usable by deep learning models. Tensor normalisation was performed with the standard pre-trained ImageNet values: mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225).
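  • Several of the steps listed above (alpha channel stripping, padding to a square aspect ratio, normalising each channel mean towards 100, and tensor conversion with ImageNet statistics) can be sketched as follows; the padding colour, target resolution and ordering of steps are illustrative choices rather than the disclosure's specification.

```python
# Sketch of pixel-level image preparation; colours, resolution and ordering
# are illustrative assumptions.
import cv2
import numpy as np
from torchvision import transforms

def prepare_image(img):
    if img.ndim == 3 and img.shape[2] == 4:             # alpha channel stripping
        img = cv2.cvtColor(img, cv2.COLOR_BGRA2BGR)
    h, w = img.shape[:2]                                # pad to square aspect ratio
    size = max(h, w)
    top, left = (size - h) // 2, (size - w) // 2
    img = cv2.copyMakeBorder(img, top, size - h - top, left, size - w - left,
                             cv2.BORDER_CONSTANT, value=(0, 0, 0))
    img = img.astype(np.float32)
    means = img.reshape(-1, 3).mean(axis=0)             # per-channel mean
    img = img / np.maximum(means, 1e-6) * 100.0         # bring each channel mean to 100
    img = np.clip(img, 0, 255).astype(np.uint8)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)          # ImageNet stats assume RGB ordering
    to_tensor = transforms.Compose([                    # tensor conversion + ImageNet stats
        transforms.ToTensor(),
        transforms.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
    ])
    return to_tensor(cv2.resize(img, (224, 224)))       # scale to the model's input resolution

# tensor = prepare_image(cv2.imread("embryo.png", cv2.IMREAD_UNCHANGED))
```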
  • Figure 4 is a schematic diagram of binary thresholding 400 for boundary-finding on images of human embryos according to an embodiment.
  • Figure 4 shows 8 binary thresholds applied to the same image, namely levels 60, 70, 80, 90, 100, 110 (images 401, 402, 403, 404, 405, 406, respectively), adaptive Gaussian 407 and Otsu's Gaussian 408.
  • Figure 5 is a schematic diagram of a boundary-finding method 500 on an image of a human embryo according to an embodiment.
  • The figure shows the outer boundary 501, the inner boundary 502, and the image with detected inner and outer boundaries 503.
  • The inner boundary 502 may approximately correspond to the IZC boundary, and the outer boundary 501 may approximately correspond to the outer edge of the Zona Pellucida region.
  • Figure 6A is an example of the use of a Geometrical Active Contour (GAC) model as appl ied to a fixed region of an image 600 for image segmentation according to an embodiment.
  • the blue solid line 601 is the outer boundary of the Zona Pellucida region and the dashed green line 602 denotes the inner boundary defining the edge of the Zona Pel lucida region and the cytoplasmic (IntraZonal Cavity or IZC) region.
  • Figure 6B is an example of the use of a morphological snake as applied to a fixed region of an image for image segmentation.
  • the blue sol id l ine 611 is the outer boundary of the Zona Pellucida region and the dashed green l ine 612 denotes the inner boundary defining the edge of the Zona Pel lucida region and the cytoplasmic (inner) region.
  • the boundary 612 (defining the cytoplasmic IntraZonal Cavity region) has an irregular shape with a bump or projecting portion in the lower right hand quadrant.
  • an object detector uses an object detection model which is trained to estimate a bounding box which contains the embryo.
  • the goal of object detection is to identify the largest bounding box that contains all of the pixels associated with that object. This requires the model to predict both the location of an object and a category/label (i.e. what's in the box), and thus detection models typically contain both an object classifier head and a bounding box regression head.
  • One approach is the Region-Convolutional Neural Net (or R-CNN), in which an expensive search process is applied to search for image patch proposals (potential bounding boxes). These bounding boxes are then used to crop the regions of the image of interest. The cropped images are then run through a classifying model to classify the contents of the image region. This process is complicated and computationally expensive.
  • An alternative is Fast R-CNN which uses a CNN that proposes feature regions rather than searching for image patch proposals. This model uses a CNN to estimate a fixed number of candidate boxes, typically set to be between 100 and 2000.
  • A further alternative is Faster R-CNN which uses anchor boxes to limit the search space of required boxes.
  • Faster R-CNN uses a small network which jointly learns to predict the feature regions of interest, and this can speed up the runtime compared to R-CNN or Fast R-CNN as the expensive region search can be replaced.
  • For every feature activation coming out of the backbone model an anchor point is considered. For every anchor point, 9 (or more, or fewer, depending on the problem) anchor boxes are generated. The anchor boxes correspond to common object sizes in the training dataset. As there are multiple anchor points with multiple anchor boxes, this results in tens of thousands of region proposals. The proposals are then filtered via a process called Non-Maximal Suppression (NMS) that selects the largest box that has confident smaller boxes contained within it. This ensures that there is only 1 box for each object. As the NMS relies on the confidence of each bounding box prediction, a threshold must be set for when to consider objects as part of the same object instance. As the anchor boxes will not fit the objects perfectly, the job of the regression head is to predict the offsets to these anchor boxes which morph them into the best fitting bounding box.
  • the detector can also specialise and only estimate boxes for a subset of objects, e.g. only people for pedestrian detectors.
  • Object categories that are not of interest are encoded into the 0-class which corresponds with the background class.
  • patches/boxes for the background class are usually sampled at random from image regions which contain no bounding box information. This step allows the model to become invariant to those undesirable objects, e.g. it can learn to ignore them rather than classifying them incorrectly.
  • the other common box format is (cx, cy, height, width), where the bounding box/rectangle is encoded as a centre point of the box (cx, cy) and the box size (height, width).
  • Different detection methods will use different encodings/formats depending on the task and situation.
  • the regression head may be trained using an L1 loss and the classification head may be trained using a CrossEntropy loss.
  • An objectness loss (is this background or an object?) may also be used.
  • the final loss is computed as the sum of these losses.
  • an embryo detection model based upon Faster R-CNN was used.
  • approximately 2000 images were hand labelled with the ground truth bounding boxes.
  • the boxes were labelled such that the full embryo, including the Zona Pellucida region, was inside the bounding box.
  • Where two embryos were present (a double transfer), both embryos were labelled in order to allow the model to differentiate between double transfer and single transfer.
  • the model was configured to raise an error to the user if a double transfer was detected.
  • Embryos with multiple lobes are labelled as being a single embryo.
  • semantic segmentation may be used.
  • Segmentation is the task of trying to predict a category or label for every pixel.
  • Tasks like semantic segmentation are referred to as pixel-wise dense prediction tasks as an output is required for every input pixel.
  • Semantic segmentation models are set up differently to standard models as they require a full image output.
  • a semantic segmentation (or any dense prediction) model will have an encoding module and a decoding module.
  • the encoding module is responsible for creating a low-dimensional representation of the image (sometimes called a feature representation). This feature representation is then decoded into the final output image via the decoding module.
  • the predicted label map (for semantic segmentation) is then compared against the ground truth label maps that assign a category to each pixel, and the loss is computed.
  • the standard loss function for segmentation models is either BinaryCrossEntropy or standard CrossEntropy loss (depending on whether the problem is multi-class or not). These implementations are identical to their image classification cousins, except that the loss is applied pixel-wise (across the image channel dimension of the tensor).
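  • A minimal PyTorch illustration of such a pixel-wise loss for a three-class problem (class count and tensor sizes are examples only):

      import torch
      import torch.nn as nn

      criterion = nn.CrossEntropyLoss()            # applied per pixel for dense prediction
      logits = torch.randn(2, 3, 64, 64)           # (batch, classes, H, W), e.g. background/Zona/IZC
      targets = torch.randint(0, 3, (2, 64, 64))   # (batch, H, W) class index per pixel
      loss = criterion(logits, targets)            # averaged over every pixel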
  • One common architecture is the Fully Convolutional Network (FCN), in which a pretrained model such as a ResNet encodes the image and predicts a low resolution label map. This low resolution label map is then up-sampled to the original image resolution and the loss is computed.
  • semantic segmentation masks are very low frequency and do not need all the extra parameters of a larger decoder.
  • More complicated versions of this model exist, which use multi-stage upsampling to improve segmentation results. Simply stated, the loss is computed at multiple resolutions in a progressive manner to refine the predictions at each scale.
  • One downside of this type of model is that if the input data is high resolution, or contains high frequency information (i.e. smaller/thinner objects), the low-resolution label map will fail to capture these smaller structures (especially when the encoding model does not use dilated convolutions).
  • the input image/image features are progressively downsampled as the model gets deeper. However, as the image/features are downsampled key high frequency details can be lost.
  • an alternative U-Net architecture may be used that instead uses skip connections between the symmetric components of the encoder and decoder.
  • every encoding block has a corresponding block in the decoder.
  • the features at each stage are then passed to the decoder alongside the lowest resolution feature representation.
  • the input feature representation is upsampled to match the resolution of its corresponding encoding block.
  • the feature representation from the encoding block and the upsampled lower resolution features are then concatenated and passed through a 2D convolution layer.
  • An example of a U-Net architecture 620 is shown in Figure 6C.
  • the main difference between FCN style models and U-Net style models is that in the FCN model, the encoder is responsible for predicting a low resolution label map that is then upsampled (possibly progressively), whereas the U-Net model does not have a fully complete label map prediction until the final layer. Ultimately, there exist many variants of these models that trade off the differences between them (e.g. hybrids).
  • U-net architectures may also use pre-trained weights, such as ResNet-18 or ResNet-50, for use in cases where there is insufficient data to train models from scratch.
  • Figure 6D is an image of a day 5 embryo 630 comprising a Zona Pellucida region 631 surrounding the IntraZonal Cavity (IZC, 632).
  • the embryo is starting to hatch with the IZC emerging (hatching) from the Zona Pellucida.
  • the embryo is surrounded by background pixels 633.
  • Figure 6E is a padded image 640 created from Figure 6D by adding padding pixels 641, 642 to create a square image more easily processed by the deep learning methods.
  • Figure 6F shows a Zona Image 650 in which the IZC is masked 652 to leave the Zona Pellucida 631 and background pixels 633.
  • Figure 6G shows an IZC image 660 in which the Zona Pellucida and background is masked 661 leaving only the IZC region 632.
  • AI models could be separated into two groups: first, those that included additional image segmentation, and second those that required the entire unsegmented image. Models that were trained on images that masked the IZC, exposing the Zona region, were denoted as Zona models. Models that were trained on images that masked the Zona (denoted IZC models), and models that were trained on full-embryo images (i.e. the second group), were also considered in training.
  • the name of the new image is set equal to the hash of the original image contents, as a png (lossless) file.
  • the data parser will output images in a multi-threaded way, for any images that do not already exist in the output directory (which it will create if it does not already exist), so if it is a lengthy process, it can be restarted from the same point even if it is interrupted.
  • the data preparation step may also include processing the metadata to remove images associated with inconsistent or contradictory records, and identify any mistaken clinical records. For example a script may be run on a spreadsheet to conform the metadata into a predefined format. This ensures the data used to generate and train the models is of high quality, and has uniform characteristics (e.g. size, colour, scale etc.).
  • the data is cleaned by identifying images with likely incorrect pregnancy outcome labels (i.e. mis-labelled data), and excluding or re-labelling the identified images. In one embodiment this is performed by estimating the likelihood that a pregnancy outcome label associated with an image is incorrect and comparing the likelihood against a threshold value. If the likelihood exceeds the threshold value then the image is excluded or relabelled. Estimating the likelihood a pregnancy outcome label is incorrect may be performed by using a plurality of AI classification models and a k-fold cross validation method.
  • the images are split into k mutually exclusive validation datasets.
  • Each of the plurality of AI classification models is trained on k-1 validation datasets in combination and then used to classify images in the remaining validation dataset.
  • the likelihood is then determined based on the number of AI classification models which misclassify the pregnancy outcome label of an image.
  • a deep learning model may further be used to learn the likelihood value.
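  • A simplified sketch of such a k-fold cleaning step, assuming pre-computed image features and a scikit-learn classifier as a stand-in for the AI classification models:

      import numpy as np
      from sklearn.model_selection import StratifiedKFold
      from sklearn.ensemble import RandomForestClassifier

      def mislabel_likelihood(features, labels, n_models=5, n_splits=5):
          # Count how many k-fold-trained classifiers disagree with each image's label.
          votes = np.zeros(len(labels))
          for seed in range(n_models):
              skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
              for train_idx, val_idx in skf.split(features, labels):
                  clf = RandomForestClassifier(random_state=seed)
                  clf.fit(features[train_idx], labels[train_idx])
                  preds = clf.predict(features[val_idx])
                  votes[val_idx] += (preds != labels[val_idx])
          return votes / n_models   # compare against a threshold to exclude or re-label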
  • the AI model is a deep learning model trained on a set of Zona Pellucida images in which all regions of the images except the Zona Pellucida are masked during pre-processing.
  • multiple AI models are trained and then combined using an ensemble or distillation method.
  • the AI models may be one or more deep learning models and/or one or more computer vision (CV) models.
  • the deep learning models may be trained on full embryo images, Zona images or IZC images.
  • the computer vision (CV) models may be generated using a machine learning method using a set of feature descriptors calculated from each image.
  • Each of the individual models is configured to estimate an embryo viability score of an embryo in an image, and the AI model combines selected models to produce an overall embryo viability score that is returned by the AI model.
  • Training is performed using randomised datasets.
  • Sets of complex image data can suffer from uneven distribution, especially if the data set is smaller than around 10,000 images, where exemplars of key viable or non-viable embryos are not distributed evenly through the set. Therefore, several (e.g. 20) randomizations of the data are considered at one time, and then split into the training, validation and blind test subsets defined below. All randomizations are used for a single training example, to gauge which exhibits the best distribution for training. As a corollary, it is also beneficial to ensure that the ratio between the number of viable and non-viable embryos is the same across every subset.
  • Embryo images are quite diverse, and thus ensuring even distribution of images across test and training sets can be used to improve performance.
  • the ratio of viable to non-viable embryos in each of the training set, validation set and blind validation set is calculated and tested to ensure that the ratios are similar. For example this may include testing if the range of the ratios is less than a threshold value, or within some variance taking into account the number of images. If the ranges are not similar then the randomisation is discarded and a new randomisation is generated and tested until a randomisation is obtained in which the ratios are similar.
  • the calculation step may comprise calculating the frequency of each of the n-ary outcome states in each of the training set, validation set and blind validation set, and testing that the frequencies are similar, and if the frequencies are not similar then discarding the allocation and repeating the randomisation until a randomisation is obtained in which the frequencies are similar.
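  • A sketch of the randomise-and-test loop, assuming binary (viable/non-viable) labels in a NumPy array and an illustrative tolerance on the ratios:

      import numpy as np

      def similar_ratios(subsets, tolerance=0.05):
          # The viable:total ratio of each subset should agree to within the tolerance.
          ratios = [labels.mean() for labels in subsets]
          return max(ratios) - min(ratios) < tolerance

      def randomise_until_similar(labels, rng, splits=(0.7, 0.15, 0.15)):
          while True:
              idx = rng.permutation(len(labels))
              a = int(splits[0] * len(labels)); b = a + int(splits[1] * len(labels))
              subsets = [labels[idx[:a]], labels[idx[a:b]], labels[idx[b:]]]
              if similar_ratios(subsets):
                  return idx[:a], idx[a:b], idx[b:]   # training, validation, blind validation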
  • Training further comprises performing a plurality of training-validation cycles.
  • each randomization of the total useable dataset is split into typically 3 separate datasets known as the training, validation and blind validation datasets.
  • more than 3 could be used, for example the validation and blind validation datasets could be stratified into multiple sub test sets of varying difficulty.
  • the first set is the training dataset and comprises at least 60% and preferably 70-80% of images. These images are used by deep learning models and computer vision models to create an embryo viability assessment model to accurately identify viable embryos.
  • the second set is the validation dataset, which is typically around (or at least) 10% of images. This dataset is used to validate or test the accuracy of the model created using the training dataset. Even though these images are independent of the training dataset used to create the model, the validation dataset still has a small positive bias in accuracy because it is used to monitor and optimize the progress of the model training.
  • the third dataset is the blind validation dataset which is typically around 10-20% of the images.
  • a third blind validation dataset is used to conduct a final unbiased accuracy assessment of the final model. This validation occurs at the end of the modelling and validation process, when a final model has been created and selected.
  • pre-processing the data further comprises augmenting images, in which a change is made to the image. This may be performed prior to training, or during training (i.e. on the fly). Augmentation may comprise directly augmenting (altering) an image or making a copy of an image with a small change.
  • Any number of augmentations may be performed, with varying amounts of 90 degree rotations of the image, mirror flip, a non-90 degree rotation where a diagonal border is filled in to match a background colour, image blurring, adjusting an image contrast using an intensity histogram, and applying one or more small random translations in both the horizontal and/or vertical direction, random rotations, adding JPEG (or compression) noise, random image resizing, random hue jitter, random brightness jitter, contrast limited adaptive histogram equalization, random flip/mirror, image sharpening, image embossing, random brightness and contrast, RGB colour shift, random hue and saturation, channel shuffle (swap RGB to BGR or RBG or other), coarse dropout, motion blur, median blur, Gaussian blur, and random shift-scale-rotate (i.e. a combined random shift, scale and rotation).
  • the same set of augmented images may be used for multiple training-validation cycles, or new augmentations may be generated on the fly during each cycle.
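  • A small selection of the augmentations above expressed with torchvision (parameter values are illustrative only):

      from torchvision import transforms

      augment = transforms.Compose([
          transforms.RandomHorizontalFlip(),                           # mirror flip
          transforms.RandomVerticalFlip(),
          transforms.RandomRotation(degrees=15, fill=0),               # non-90-degree rotation, border filled
          transforms.RandomAffine(degrees=0, translate=(0.05, 0.05)),  # small random translations
          transforms.ColorJitter(brightness=0.1, contrast=0.1, hue=0.05),
          transforms.GaussianBlur(kernel_size=3),                      # image blurring
      ])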
  • An additional augmentation used for CV model training is the alteration of the 'seed' of the random number generator for extracting feature descriptors.
  • the techniques for obtaining computer vision descriptors contain an element of randomness in extracting a sample of features. This random number can be altered and included among the augmentations to provide a more robust training for CV models.
  • Computer vision models rely on identifying key features of the image and expressing them in terms of descriptors. These descriptors may encode qualities such as pixel variation, gray level, roughness of texture, fixed corner points or orientation of image gradients, which are implemented in the OpenCV or similar libraries. By selecting such features to search for in each image, a model can be built by finding which arrangement of the features is a good indicator for embryo viability. This procedure is best carried out by machine learning processes such as Random Forest or Support Vector Machines, which are able to separate the images in terms of their descriptions from the computer vision analysis.
  • a range of computer vision descriptors are used, encompassing both small and large scale features, which are combined with traditional machine learning methods to produce "CV models" for embryo selection. These may optionally be later combined with deep learning (DL) models, for example into an Ensemble model or used in distillation to train a student model.
  • Suitable computer vision image descriptors include:
  • Zona-Pellucida detection through Hough transformation: finds inner and outer ellipses to approximate the Zona Pellucida and IntraZonal Cavity split, and records the mean and difference in radii as features;
  • Gray-Level Co-Occurrence Matrix (GLCM) Texture Analysis detects roughness of different regions by comparing neighbouring pixels in the region.
  • the sample feature descriptors used are: angular second moment (ASM), homogeneity, correlation, contrast and entropy.
  • the selection of the region is obtained by randomly sampling a given number of square sub-regions of the image, of a given size, and records the results of each of the five descriptors for each region as the total set of features;
  • Histogram of Oriented Gradients (HOG) detects objects and features using scale-invariant feature transform descriptors and shape contexts. This method has precedence for being used in embryology and other medical imaging, but does not itself constitute a machine learning model;
  • Binary Robust Invariant Scalable Key-points (BRISK): a FAST-based detector in combination with an assembly of intensity comparisons of pixels, which is achieved by sampling each neighbourhood around a feature specified at a key-point;
  • Maximally Stable Extremal Regions (MSER);
  • Good Features To Track (GFTT).
  • Figure 7 is a plot 700 of a Gray Level Co-occurrence Matrix (GLCM) showing GLCM correlation of sample feature descriptors 702: ASM, homogeneity, correlation, contrast and entropy, calculated on a set of six Zona Pellucida regions (labelled 711 to 716; cross hatch) and six cytoplasm/IZC regions (labelled 721 to 726; dotted) in image 701.
  • a computer vision (CV) model is constructed by the following method.
  • One (or more) of the computer vision image descriptor techniques listed above is selected, and the features are extracted from all of the images in the training dataset. These features are arranged into a combined array and then supplied to a KMeans unsupervised clustering algorithm; this array is called the Codebook, for a 'bag of visual words'.
  • the number of clusters is a free parameter of the model .
  • the clustered features from this point on represent the 'custom features' that are used, through whichever combination of algorithms, to which each individual image in the validation or test set will be compared. Each image has features extracted and is clustered individually.
  • the 'distance' (in feature-space) to each of the clusters in the codebook is measured using a KDTree query algorithm, which gives the closest clustered feature.
  • the results from the tree query can then be represented as a histogram, showing the frequency at which each feature occurs in that image.
  • the histogram and the ground-truth outcomes are used to carry out supervised learning.
  • the methods used to obtain the final selection model include Random Forest or Support Vector Machine (SVM).
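  • A condensed sketch of this 'bag of visual words' pipeline, assuming the per-image descriptors have already been extracted; the cluster count and the Random Forest are illustrative choices:

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.ensemble import RandomForestClassifier
      from scipy.spatial import cKDTree

      def build_codebook(all_descriptors, n_clusters=50):
          # Cluster descriptors from every training image into the Codebook.
          kmeans = KMeans(n_clusters=n_clusters, random_state=0).fit(np.vstack(all_descriptors))
          return kmeans.cluster_centers_

      def image_histogram(descriptors, codebook):
          # KD-tree query: how often is each codebook feature the closest match in this image?
          _, nearest = cKDTree(codebook).query(descriptors)
          return np.bincount(nearest, minlength=len(codebook))

      def train_cv_model(per_image_descriptors, outcomes):
          codebook = build_codebook(per_image_descriptors)
          X = np.array([image_histogram(d, codebook) for d in per_image_descriptors])
          return RandomForestClassifier().fit(X, outcomes), codebook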
  • Deep Learning models are based on neural network methods, typically a convolutional neural network (CNN) that consists of a plurality of connected layers, with each layer of 'neurons' containing a non-linear activation function, such as a 'rectifier', 'sigmoid' etc. Contrasting with feature based methods (i.e. CV models), Deep Learning and neural networks instead 'learn' features rather than relying on hand designed feature descriptors. This allows them to learn 'feature representations' that are tailored to the desired task.
  • a variety of deep learning models are available, each with different architectures (i.e. different number of layers and connections between layers) such as residual networks (e.g. ResNet-18, ResNet-50 and ResNet-101), densely connected networks (e.g. DenseNet-121 and DenseNet-161), and other variations (e.g. InceptionV4 and Inception-ResNetV2).
  • Deep Learning models may be assessed based on stabilisation (how stable the accuracy value was on the validation set over the training process), transferability (how well the accuracy on the training data correlated with the accuracy on the validation set) and prediction accuracy (which models provided the best validation accuracy, for both viable and non-viable embryos, the total combined accuracy, and the balanced accuracy, defined as the weighted average accuracy across both class types of embryos). Training involves trying different combinations of model parameters and hyper-parameters, including input image resolution, choice of optimizer, learning rate value and scheduling, momentum value, dropout, and initialization of the weights (pre-training).
  • a loss function may be defined to assess the performance of a model, and during training a Deep Learning model is optimised by varying learning rates to drive the update mechanism for the network's weight parameters to minimize an objective/loss function.
  • Deep learning models may be implemented using a variety of libraries and software languages.
  • the PyTorch library is used to implement neural networks in the Python language.
  • the PyTorch library additionally allows tensors to be created that utilize hardware (GPU, TPU) acceleration, and includes modules for building multiple layers for neural networks.
  • While deep learning is one of the most powerful techniques for image classification, it can be improved by providing guidance through the use of the segmentation or augmentation described above. The use of segmentation prior to deep learning was found to have a significant effect on the performance of the deep learning method, and assisted in generating contrasting models.
  • the plurality of deep learning models includes at least one model trained on segmented images, and one model trained on images not subject to segmentation. Similarly, augmentation was important for generating robust models.
  • the effectiveness of an approach is determined by the architecture of the Deep Neural Network (DNN).
  • the DNN learns the features itself throughout the convolutional layers, before employing a classifier. That is, without adding in proposed features by hand, the DNN can be used to check existing practices in the literature, as well as developing previously unguessed descriptors, especially those that are difficult for the human eye to detect and measure.
  • the architecture of the DNN is constrained by the size of images as input, the hidden layers, which have dimensions of the tensors describing the DNN, and a linear classifier, with the number of class labels as output.
  • Most architectures employ a number of down-sampling ratios, with small (3x3 pixel) filters to capture the notion of left/right, up/down and centre.
  • the top layer typically includes one or more fully-connected neural network layers, which act as a classifier, similar to an SVM.
  • a Softmax layer is used to normalize the resulting tensor as containing probabilities after the fully connected classifier. Therefore, the output of the model is a list of probabilities that the image is either non-viable or viable.
  • Figure 8 is a schematic architecture diagram of a deep learning method, including convolutional layers, which transform the input image to a prediction, after training, according to an embodiment.
  • Figure 8 shows a series of layers based on a ResNet-152 architecture according to an embodiment. The components are annotated as follows.
  • CONV indicates a convolutional 2D layer, which computes cross-correlations of the input from the layer below.
  • Each element or neuron within the convolutional layer processes the input from its receptive field only, e.g. 3x3 or 7x7 pixels. This reduces the number of learnable parameters required to describe the layer, and allows deeper neural networks to be formed than those constructed from fully-connected layers where every neuron is connected to every other neuron in the subsequent layer, which is highly memory intensive and prone to overfitting.
  • Convolutional layers are also spatial translation invariant, which is useful for processing images where the subject matter cannot be guaranteed to be precisely centred.
  • "POOL” refers the max pool ing layers, which is a down-sampl ing method whereby only representative neuron weights are selected within a given region, to reduce the complexity of the network and also reduce overfitting. For example, for weights within a 4x4 square region of a convolutional layer, the maximum value of each 2x2 corner block is computed, and these representative values are then used to reduce the size of the square region to 2x2 in dimension.
  • the final layers at the end of the network is typical ly a ful ly connected (FC) layer, which acts as a classifier.
  • This layer takes the final input and outputs an array of the same number of dimensions as the classification categories. For two categories, e.g. 'viable Day 5 embryo' and 'non-viable Day 5 embryo', the final layer will output an array of length 2, which indicates the proportion that the input image contains features that align with each category respectively.
  • a final softmax layer is often added, which transforms the final numbers in the output array to percentages that fit between 0 and 1, and both together add up to a total of 1, so that the final output can be interpreted as a confidence limit for the image to be classified in one of the categories.
  • One suitable DNN architecture is ResNet (https://ieeexplore.ieee.org/document/7780459) such as ResNet152, ResNet101, ResNet50 or ResNet-18.
  • ResNet advanced the field significantly in 2016 by using an extremely large number of hidden layers, and introducing 'skip connections' also known as 'residual connections'. Only the difference from one layer to the next is calculated, which is more time-cost efficient, and if very little change is detected at a particular layer, that layer is skipped over, thus creating a network that will very quickly tune itself to a combination of small and large features in the image.
  • ResNet-18, ResNet-50, ResNet-101, DenseNet-121 and DenseNet-161 generally outperformed the other architectures.
  • Another suitable DNN architecture is DenseNet
  • DenseNet is an extension of ResNet, where now every layer can skip over to any other layer, with the maximal number of skip connections. This architecture requires much more memory, and so is less efficient, but can exhibit improved performance over ResNet. With a large number of model parameters, it is also easy to overtrain/overfit, so model architectures are often combined with methods to control for this. In particular DenseNet-121 and DenseNet-161 were used.
  • Another suitable architecture is Inception(-ResNet) (https://www.aaai.org/ocs/index.php/AAAI/AAAI17/paper/viewPaper/14806), such as InceptionV4 and InceptionResNetV2.
  • Inception represents a more complicated convolutional unit, whereby instead of simply using a fixed size filter (e.g. 3x3 pixels) as described in Section 3.2, several sized filters are calculated in parallel (5x5, 3x3, 1x1 pixels), with weights that are free parameters, so that the neural network may prioritize which filter is most suitable at each layer in the DNN.
  • weights of the network are adjusted, and the running total accuracy so far is assessed.
  • weights are updated during the batch for example using gradient accumulation.
  • the training set is shuffled (i.e. a new randomisation with the set is obtained), and the training starts again from the top, for the next epoch.
  • a number of epochs may be run, depending on the size of the data set, the complexity of the data and the complexity of the model being trained.
  • An optimal number of epochs is typically in the range of 2 to 100, but may be more depending on the specific case.
  • the model is run on the validation set, without any training taking place, to provide a measure of the progress in how accurate the model is, and to guide the user whether more epochs should be run, or if more epochs will result in overtraining.
  • the validation set guides the choice of the overall model parameters, or hyperparameters, and is therefore not a truly blind set.
  • augmentations may also be included for each image (all), or not (noaug). Furthermore, the augmentations for each image may be combined to provide a more robust final result for the image.
  • combination/voting strategies may be used including: mean-confidence (taking the mean value of the inference of the model across all the augmentations), median-confidence, majority-mean-confidence (taking the majority viability assessment, and only providing the mean confidence of those that agree, and if no majority, take the mean), max-confidence, weighted average, majority-max-confidence, etc.
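  • The majority-mean-confidence strategy, for example, could be sketched as follows for scores in [0, 1] (a threshold of 0.5 is assumed):

      import numpy as np

      def majority_mean_confidence(scores, threshold=0.5):
          # Majority viable/non-viable vote; mean confidence of the agreeing scores,
          # falling back to the overall mean when there is no majority.
          scores = np.asarray(scores)
          viable = scores >= threshold
          if viable.sum() > len(scores) / 2:
              return scores[viable].mean()
          if (~viable).sum() > len(scores) / 2:
              return scores[~viable].mean()
          return scores.mean()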
  • Another method used in the field of machine learning is transfer learning, where a previously trained model is used as the starting point to train a new model. This is also referred to as pre-training.
  • Pre-training is used extensively, which allows new models to be built rapidly.
  • One embodiment of pre-training is ImageNet pre-training.
  • Most model architectures are provided with a set of pre-trained weights, using the standard image database ImageNet. While it is not specific for medical images, and includes one thousand different types of objects, it provides a method for a model to have already learnt to identify shapes. The classifier of the thousand objects is completely removed, and a new classifier for viability replaces it. This kind of pre-training outperforms other initialization strategies.
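  • In PyTorch/torchvision this kind of ImageNet pre-training amounts to loading the published weights and swapping the classifier head, for example:

      import torch.nn as nn
      from torchvision import models

      model = models.resnet50(pretrained=True)          # ImageNet-pretrained backbone
      model.fc = nn.Linear(model.fc.in_features, 2)     # new 2-class head: viable / non-viable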
  • Another form of pre-training is custom pre-training, which uses a previously-trained embryo model, either from a study with a different set of outcomes, or on different images (PGS instead of viability, or randomly assigned outcomes). These models only provide a small benefit to the classification.
  • the weights need to be initialized.
  • the initialization method can make a difference to the success of the training.
  • All weights set to 0 or 1, for example, will perform very poorly.
  • a uniform arrangement of random numbers, or a Gaussian distribution of random numbers, also represent commonly used options.
  • These are also often combined with a normalization method, such as the Xavier or Kaiming algorithms. This addresses an issue where nodes in the neural network can become 'trapped' in a certain state, by becoming saturated (close to 1), or dead (close to 0), where it is difficult to measure in which direction to adjust the weights associated with that particular neuron. This is especially prevalent when introducing a hyperbolic-tangent or a sigmoid function, and is addressed by the Xavier initialization.
  • the neural network weights are randomized in such a way that the inputs of each layer to the activation function will not fall too close to either the saturated or dead extreme ends.
  • the use of ReLU is better behaved, and different initializations provide a smaller benefit, such as the Kaiming initialization.
  • the Kaiming initialization is better suited to the case where ReLU is used as the neuron's non-linear activation profile. This effectively achieves the same goal as the Xavier initialization.
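  • A short sketch of applying these initializations in PyTorch (the per-layer choices are illustrative):

      import torch.nn as nn

      def init_weights(module):
          # Kaiming initialization for convolutions feeding ReLU; Xavier for the linear classifier.
          if isinstance(module, nn.Conv2d):
              nn.init.kaiming_normal_(module.weight, nonlinearity="relu")
          elif isinstance(module, nn.Linear):
              nn.init.xavier_uniform_(module.weight)
              nn.init.zeros_(module.bias)

      # model.apply(init_weights)  # applied recursively to every layer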
  • a range of free parameters is used to optimize the model training on the validation set.
  • One of the key parameters is the learning rate, which determines by how much the underlying neuron weights are adjusted after each batch.
  • overtraining, or overfitting the data, should be avoided. This happens when the model contains too many parameters to fit, and essentially 'memorizes' the data, trading generalizability for accuracy on the training or validation sets. This is to be avoided, since the generalizability is the true measure of whether the model has correctly identified true underlying parameters that indicate embryo health, among the noise of the data, and not compromised this in order to fit the training set perfectly.
  • Suitable techniques for avoiding this include learning rate scheduling such as CosineAnnealing, incorporating the aforementioned methods of tensor initialization or pre-training, and the addition of noise, such as Dropout layers, or Batch Normalization.
  • Batch Normalisation is used to counteract vanishing or exploding gradients, which improves the stability of training large models resulting in improved generalisation.
  • Dropout regularization effectively simplifies the network by introducing a random chance to set all incoming weights to zero within a rectifier's receptive range. By introducing noise, it effectively ensures the remaining rectifiers are correctly fitting to the representation of the data, without relying on over-specialization. This allows the DNN to generalize more effectively and become less sensitive to specific values of network weights.
  • Batch Normalization improves the training stability of very deep neural networks, which allows for faster learning and better generalization by shifting the input weights to zero mean and unit variance as a precursor to the rectification stage.
  • the methodology for altering the neuron weights to achieve an acceptable classification includes the need to specify an optimization protocol. That is, for a given definition of 'accuracy' or 'loss' (discussed below), exactly how much the weights should be adjusted, and how the value of the learning rate should be used, has a number of techniques that need to be specified.
  • Suitable optimisation techniques include Stochastic Gradient Descent (SGD) with momentum (and/or Nesterov accelerated gradients), Adaptive Gradient with Delta (Adadelta), Adaptive Moment Estimation (Adam), Root-Mean-Square Propagation (RMSProp), and the Limited-Memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm.
  • SGD based techniques generally outperformed other optimisation techniques.
  • Typical learning rates for phase contrast microscope images of human embryos were between 0.01 and 0.0001.
  • the learning rate will depend upon batch size, which is dependent upon hardware capacity.
  • Stochastic Gradient Descent with momentum (and/or Nesterov accelerated gradients) represents the most simple and commonly used optimizer.
  • Gradient descent algorithms typically compute the gradient (slope) of the effect of a given weight on the accuracy. While this is slow if it is required to calculate the gradient for the whole dataset to perform an update to the weights, stochastic gradient descent performs an update for each training image, one at a time. While this can result in fluctuations in the overall objective accuracy or loss achieved, it has a tendency to generalize better than other methods, as it is able to jump into new regions of the loss parameter landscape, and find new minimum loss functions. For a noisy loss landscape in difficult problems such as embryo selection, SGD performs well.
  • SGD can have trouble navigating asymmetrical loss function surface curves that are more steep on one side than the other. This can be compensated for by adding a parameter called momentum, which helps accelerate SGD in the relevant direction and dampens high fluctuations in the accuracy, by adding an extra fraction to the update of the weight, derived from the previous state.
  • An extension of this method is to include the estimated position of the weight in the next state as well, and this extension is known as the Nesterov accelerated gradient.
  • Adaptive Gradient with Delta (Adadelta) is an algorithm for adapting the learning rate to the weights themselves, performing smaller updates for parameters that are frequently occurring, and larger updates for infrequently occurring features, and is well-suited to sparse data. While this can suddenly reduce the learning rate after a few epochs across the entire dataset, the addition of a delta parameter restricts the window allowed for the accumulated past gradients to some fixed size. This process makes a default learning rate redundant, however, and the freedom of an additional free parameter provides some control in finding the best overall selection model.
  • Adaptive Moment Estimation (Adam) stores an exponentially decaying average of both past squared and non-squared gradients, incorporating them both into the weight update. This has the effect of providing 'friction' for the direction of the weight update, and is suitable for problems that have relatively shallow or flat loss minima, without strong fluctuations.
  • training with Adam has a tendency to perform well on the training set, but often overtrains, and is not as suitable as SGD with momentum.
  • Root-Mean-Square Propagation (RMSProp) is related to the adaptive gradient optimizers above, and is almost identical to Adadelta, except that the update term to the weights divides the learning rate by an exponentially decaying average of the squared gradients.
  • The Limited-Memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm, while computationally intensive, actually estimates the curvature of the loss landscape, rather than attempting to compensate for the lack of such an estimate with additional terms as other methods do. It has a tendency to outperform Adam when the data set is small, but doesn't necessarily outperform SGD in terms of speed and accuracy.
  • the learning rate of the convolution layers can be specified to be much larger or smaller than the learning rate of the classifier. This is useful in the case of pre-trained models, where changes to the filters underneath the classifier should be kept more 'frozen', and the classifier be retrained, so that the pre-training is not undone by additional retraining.
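  • In PyTorch this can be expressed with optimizer parameter groups; the specific learning rates below are illustrative only:

      import torch
      from torchvision import models

      model = models.resnet50(pretrained=True)
      optimizer = torch.optim.SGD([
          {"params": [p for n, p in model.named_parameters() if not n.startswith("fc.")],
           "lr": 1e-4},                                   # keep pretrained filters nearly 'frozen'
          {"params": model.fc.parameters(), "lr": 1e-2},  # retrain the classifier more aggressively
      ], momentum=0.9, nesterov=True)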
  • While the optimizer specifies how to update the weights given a specific loss or accuracy measure, in some embodiments the loss function is modified to incorporate distribution effects. These may include cross-entropy (CE) loss, weighted CE, residual CE, inference distribution or a custom loss function.
  • Cross Entropy Loss is a commonly used loss function, which has a tendency to outperform the simple mean-squared-difference between the ground truth and the predicted value. If the result of the network is passed through a Softmax layer, such as is the case here, then the distribution of the cross entropy results in better accuracy. This is because it naturally maximizes the likelihood of classifying the input data correctly, by not weighting distant outliers too heavily.
  • the loss function should be weighted proportionally so that misclassifying an element of the less numerous class is penalized more heavily. This is achieved by pre-multiplying the right hand side of Eq.(2) with the factor N/(C x N[class]), where N[class] is the total number of images for each class, N is the total number of samples in the dataset and C is the number of classes. It is also possible to manually bias the weight towards the viable embryos in order to reduce the number of false negatives compared to false positives, if necessary.
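  • A weighted cross-entropy of this form can be set up in PyTorch as below; the class counts are illustrative, and the weight follows the standard inverse-class-frequency form N/(C x N[class]):

      import torch
      import torch.nn as nn

      n_per_class = torch.tensor([3400.0, 1250.0])     # e.g. non-viable, viable image counts (illustrative)
      weights = n_per_class.sum() / (len(n_per_class) * n_per_class)
      criterion = nn.CrossEntropyLoss(weight=weights)  # misclassifying the rarer class costs more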
  • an Inference Distribution may be used. While it is important to seek a high level of accuracy in classifying embryos, it is also important to seek a high level of transferability in the model. That is, it is often beneficial to understand the distribution of the scores, and while seeking a high accuracy is an important goal, separating the viable and non-viable embryos confidently with a margin of certainty is an indicator that the model will generalize well to a test set.
  • a Custom Loss function is used.
  • a new term, called a residual term, is added to the loss function which maintains differentiability and is defined in terms of the network's weights. It encodes the collective difference in the predicted value from the model and the target outcome for each image, and includes it as an additional contribution to the normal cross entropy loss function.
  • the formula for the residual term is as follows, for N images:
  • With the custom loss function, well-spaced clusters of viable and non-viable embryo scores are thus considered consistent with an improved loss rating. It is noted that this custom loss function is not specific to the embryo detection application, and could be used in other Deep Learning Models.
  • the models are combined to generate a more robust final AI model 100. That is, deep learning and/or computer vision models are combined together to contribute to the overall prediction of the embryo viability.
  • an ensemble method is used. First, models that perform well are selected. Then, each model 'votes' on one of the images (using augmentations or otherwise), and the voting strategy that leads to the best result is selected.
  • Example voting strategies include maximum- confidence, mean-value, majority-mean-value, median-value, mean-confidence, median-confidence, majority-mean-confidence, weighted average, majority-max-confidence, etc. Once the voting strategy has been selected, the evaluation method for the combination of augmentations must also be selected, which describes how each of the rotations should be treated by the ensemble, as before.
  • the final AI model 100 can thus be defined as a collection of trained AI models, using deep learning and/or computer vision models, together with a mode, which encodes the voting strategy that defines how the individual AI model results will be combined, and an evaluation mode that defines how the augmentations (if any) will be combined.
  • This procedure effectively assesses whether the distributions of the embryo scores on a test set for two different models are similar or not.
  • the contrasting criterion drives model selection towards models with diverse prediction outcome distributions, due to different input images or segmentation. This method ensured translatability by avoiding selection of models that performed well only on specific clinic datasets, thus preventing over-fitting. Additionally, model selection may also use a diversity criterion.
  • the diversity criterion drives model selection to include different models' hyperparameters and configurations. The reason is that, in practice, similar model settings result in similar prediction outcomes and hence may not be useful for the final ensemble model.
  • this can be implemented by using a counting approach and specifying a threshold similarity, such as 50%, 75% or 90% overlapping images in the two sets.
  • the scores in a set of images could be totalled and the two sets (totals) compared, and ranked similar if the difference between the two totals is less than a threshold amount.
  • Statistical based comparisons could also be used, for example taking into account the number of images in the set, or otherwise comparing the distribution of images in each of the sets.
  • a distillation method could be used to combine the individual AI models.
  • the AI models are used as teacher models to train a student model.
  • Selection of the individual AI models may be performed using the diversity and contrasting criteria as discussed for ensemble methods. Further, other methods for selecting the best model from a range of models or for combining outputs from multiple models into a single output may be used.
  • An embodiment of an ensemble based embryo viability assessment model was generated and two validation (or benchmarking) studies were performed in IVF clinics to assess the performance of the embryo viability assessment model described herein compared to working embryologists. For ease of reference this will be referred to as the ensemble model.
  • These validation studies showed that the embryo viability assessment model showed a greater than 30% improvement in accuracy in identifying the viability of embryos when compared directly with world-leading embryologists.
  • the studies thus validate the ability of embodiments of the ensemble model described herein to inform and support embryologists' selection decision, which is expected to contribute to improved IVF outcomes for couples.
  • the first study was a pilot study conducted with an Australian clinic (Monash IVF) and the second study was conducted across multiple clinics and geographical sites.
  • the studies assessed the ability of an embodiment of an ensemble based embryo viability assessment model as described, to predict Day 5 embryo viability, as measured by clinical pregnancy.
  • each patient in the IVF process may have multiple embryos to select from.
  • An embodiment of an embryo viability assessment model as described herein was used to assess and score the viability of each of these embryos.
  • only embryos that were implanted and for which the pregnancy outcome is known (e.g. foetal heartbeat detected at the first ultrasound scan) were included in the validation datasets.
  • the total data set thus comprises images of embryos that have been implanted into the patient, with associated known outcomes, for which the accuracy (and thus the performance) of the model can be validated.
  • some of the images used for validation comprise the embryologist's score as to the viability of the embryo.
  • an embryo that is scored as 'non-viable' may still be implanted if it is nevertheless still the most favorable embryo choice, and/or upon the request of the patient. This data enables a direct comparison of how the ensemble model performs compared with the embryologist.
  • Both the ensemble model and the embryologists’ accuracies are measured as the percentage of the number of embryos that were scored as viable and had a successful pregnancy outcome (true positives), in addition to the number of embryos that were scored non-viable and had an unsuccessful pregnancy outcome (true negatives), divided by the total number of scored embryos.
  • This approach is used to validate whether the ensemble model performs comparably or better when directly compared with leading embryologists. It is noted that not all images have corresponding embryologist scores in the dataset.
  • the following interpretation of the embryologist scores for each clinic is used: embryos with a degree of expansion that is at least a blastocyst ('BL' in Ovation Fertility notation, or 'XB' in Midwest Fertility Specialists notation) are considered likely to be viable.
  • Embryos that are listed as the cellular stage (e.g. 10 cell), as compacting from the cellular stage to the morula, or as cavitating morula (where the blastocoel cavity is less than 50% of the total volume at Day 5 after IVF) are considered likely to be non-viable.
  • Fertility Associates (Auckland, Hamilton, Wellington, Wales and Dunedin, New Zealand), Oregon Reproductive Medicine (Portland, OR, USA) and Alpha Fertility Centre (Petaling Jaya, Selangor, Malaysia).
  • Selection of the AI model for use in the trial proceeded as follows.
  • Initial filtering is performed to select models which exhibit stability (accuracy stable over the training process), transferability (accuracy stable between training and validation sets) and prediction accuracy.
  • Prediction accuracy examined which models provided the best validation accuracy, for both viable and non-viable embryos, the total combined accuracy, and the balanced accuracy, defined as the weighted average accuracy across both class types of embryos.
  • the use of ImageNet pretrained weights demonstrated improved performance on these quantities. Evaluation of loss functions indicated that weighted CE and residual CE loss functions generally outperformed other models.
  • the final ensemble based AI model was an ensemble of the highest performing individual models selected on the basis of diversity and contrasting results.
  • Voting strategies evaluated included mean, median, max, majority mean voting, maximum-confidence, mean-value, majority-mean-value, median-value, mean-confidence, median-confidence, majority-mean-confidence, weighted average, majority-max-confidence, etc.
  • the majority mean voting strategy is used as in testing it outperformed other voting strategies, giving the most stable model across all datasets.
  • the final ensemble based AI model includes eight deep learning models, of which four are Zona models and four are full-embryo models.
  • the final model configuration used in this embodiment is as follows:
  • Measures of accuracy used in the assessment of model behaviour on data included sensitivity, specificity, overall accuracy, distributions of predictions, and comparison to embryologists' scoring methods.
  • sensitivity was defined as the number of embryos that the AI model identified as viable divided by the total number of known viable embryos that resulted in a positive clinical pregnancy.
  • specificity for non-viable embryos was defined as the number of embryos that the AI model identified as non-viable divided by the total number of known non-viable embryos that resulted in a negative clinical pregnancy outcome.
  • Overall accuracy of the AI model was determined using a weighted average of sensitivity and specificity, and the percentage improvement in accuracy of the AI model over the embryologist was defined as the difference in accuracy as a proportion of the original embryologist accuracy:
  • (AI_accuracy - embryologist_accuracy) / embryologist_accuracy
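  • A small helper capturing these definitions (using a simple average of sensitivity and specificity as the balanced accuracy; the exact weighting used in the study may differ):

      def accuracy_metrics(tp, tn, fp, fn, embryologist_accuracy):
          sensitivity = tp / (tp + fn)                  # viable embryos correctly identified
          specificity = tn / (tn + fp)                  # non-viable embryos correctly identified
          balanced = (sensitivity + specificity) / 2
          improvement = (balanced - embryologist_accuracy) / embryologist_accuracy
          return sensitivity, specificity, balanced, improvement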
  • Monash IVF provided the ensemble model with approximately 10,000 embryo images and related pregnancy and live birth data for each image. Additional data provided included patient age, BMI, whether the embryo was implanted fresh or was frozen prior, and any fertility related medical conditions. Data for some of the images contained the embryologist's score for the viability of the embryo. Preliminary training, validation and analysis showed that the model's accuracy is significantly higher for day 5 embryos compared with day 4 embryos. Hence all day 4 embryos were removed, leaving approximately 5,000 images. The usable dataset for training and validation was 4650 images. This initial dataset was split into 3 separate datasets. A further 632 images were then provided which were used as a second blind validation dataset. The final datasets for training and validation include:
  • Validation dataset: 390 images, of which 70 (17.9%) had a successful pregnancy outcome;
  • Blind validation dataset 1: 368 images, of which 76 (20.7%) had a successful pregnancy outcome and 121 images included an embryologist score on the viability of the embryo;
  • Blind validation dataset 2: 632 images, of which 194 (30.7%) had a successful pregnancy outcome and 477 images included an embryologist score on the viability of the embryo. Not all images have corresponding embryologist scores in the dataset.
  • the ensemble based AI model was applied to the three validation datasets.
  • the overall accuracy results for the ensemble model in identifying viable embryos are shown in Table 2.
  • the accuracy results for the two blind validation datasets are the key accuracy indicators; however, results for the validation dataset are shown for completeness.
  • the accuracy for identifying viable embryos is calculated as a percentage of the number of viable embryos (i.e. images that had a successful pregnancy outcome) that the ensemble model could identify as viable (a viability score of 50% or greater by the model) divided by the total number of viable embryos in the dataset.
  • the accuracy for identifying non-viable embryos is calculated as a percentage of the number of non-viable embryos (i.e. images that had an unsuccessful pregnancy outcome) that the ensemble model could identify as non-viable (a viability score of under 50% by the model) divided by the total number of non-viable embryos in the dataset.
  • Figure 9 is a plot of the accuracy of an embodiment of an ensemble model in identifying embryo viability 900 according to an embodiment.
  • Accuracy was calculated by summing the number of embryos that were identified as viable and led to a successful outcome, plus the number of embryos that were identified as non-viable and led to an unsuccessful outcome, divided by the total number of embryos.
  • the ensemble model showed 74.1% accuracy in identifying viable embryos 920 and 65.3% accuracy in identifying non-viable embryos 930. This represents a significant accuracy improvement in this large dataset of embryos already pre-selected by embryologists and implanted into patients, where only 27% resulted in a successful pregnancy outcome.
  • a subset of the images used for validation had an associated embryologist's score relating to the viability of the embryo (598 images).
  • an embryo that is scored as 'non-viable' by an embryologist may still be implanted if it is considered the most favorable embryo choice for that patient, and/or upon the request of the patient, despite a low likelihood of success.
  • Embryo scores were used as a ground truth of the embryologists' assessment of viability and allow for a direct comparison of the ensemble model performance compared with leading embryologists.
  • the worst-case accuracy for the blind validation dataset 1 or 2 is 63.2% for identifying viable embryos in blind dataset 1, 57.5% for identifying non-viable embryos in blind dataset 2, and 63.9% total accuracy for blind dataset 2.
  • Table 3 shows the total mean accuracy across both blind datasets 1 and 2, which is 74.1% for identifying viable embryos, 65.3% for identifying non-viable embryos, and 67.7% total accuracy across both viable and non-viable embryos.
  • Table 4 shows the results comparing the model's accuracy with that of the embryologists.
  • the accuracy values differ from those in the table above because not all embryo images in the datasets have embryo scores, and thus the results below are accuracy values on a subset of each dataset.
  • the table shows that the model's accuracy in identifying viable embryos is higher than that of the embryologists.
  • Table 5 shows a comparison of the number of times that the model was able to correctly identify the viability of an embryo and the embryologist was not able to, and vice versa. The results show there were fewer occurrences where embryologists were correct and the model was incorrect compared with the cases where the model was correct and embryologists were incorrect. These results are illustrated in Figure 11. This result further validates the high level of performance and accuracy of the ensemble model's embryo viability assessment.
  • Figure 11 is a bar plot showing the accuracy of an embodiment of the ensemble model (bar 1110) compared to world-leading embryologists (clinicians) (bar 1120) in correctly identifying embryo viability where the embryologists' assessment was incorrect, compared with embryologists correctly identifying embryo viability where the ensemble model assessment was incorrect.
  • the usable dataset of 2217 images (and linked outcomes) for developing the ensemble model is split into three subsets in the same manner as the pilot study: the training dataset, validation dataset and blind validation dataset.
  • These studies include data sourced from the clinics: Ovation Fertility Austin, San Antonio IVF, Midwest Fertility Specialists, and Institute for Reproductive Health and Fertility Associates NZ. This comprised:
  • Training dataset: 1744 images - 886 non-viable, 858 viable;
  • Validation dataset: 193 images - 96 non-viable, 97 viable.
  • Figure 12 is a plot of the distribution of inference scores 1200 for viable embryos (successful clinical pregnancy) using the embodiment of the ensemble based AI model, when applied to the blind validation dataset of Study 1.
  • the inferences are normalized between 0 and 1, and can be interpreted as confidence scores. Instances where the model is correct are marked in boxes filled with thick downward diagonal lines (True Positives 1220), whereas instances where the model is incorrect are marked in boxes filled with thin upward diagonal lines (False Negatives 1210).
  • Figure 13 is a plot of the distribution of inference scores for non-viable embryos (unsuccessful clinical pregnancy) 1300 using the embodiment of the ensemble based AI model, when applied to the blind validation dataset of Study 1.
  • the inferences are normalized between 0 and 1, and can be interpreted as confidence scores. Instances where the model is correct are marked in boxes filled with thick downward diagonal lines (True Negatives 1320), whereas instances where the model is incorrect are marked in boxes filled with thin upward diagonal lines (False Positives 1310).
  • Figure 13 contains a tall peak in the False Positives 1310 (boxes filled with thin upward diagonal lines), which is not as prominent in the equivalent histogram for the False Negatives in Figure 12.
  • the reason for this effect could be the presence of patient health factors, such as uterine scarring, that cannot be identified from the embryo image itself. The presence of these factors means that even an ideal embryo may not lead to a successful implantation. This also limits the upper value of the accuracy in predicting successful clinical pregnancy using embryo image analysis alone.
  • models are selected for inclusion in the final ensemble based AI model such that the ensemble based AI model accuracy on the set of viable embryo images is higher than the accuracy on the set of non-viable embryo images, if possible. If models cannot be found that combine together to provide a bias towards viability accuracy, then an additional parameter is sometimes supplied during training, which increases the penalty for misclassifying a viable embryo.
  • the embryologist score contains a numeral, or terminology representing a ranking of the embryos in terms of their advancement or arrestment (number of cells, compacting, morula, cavitation, early blastocyst, full blastocyst or hatched blastocyst).
  • a comparison of the ranking of the embryos can be made by equating the embryologist assessment with a numerical score from 1 to 5, while dividing the AI inferences into 5 equal bands (from the minimum inference to the maximum inference), labeled 1 to 5.
  • a comparison of ranking accuracy is made as follows (a sketch of the banding step is given below).
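The banding of raw inference scores into five equal-width rank bands, as described in the comparison above, could be implemented along the following lines. This is a sketch under the assumption that scores are binned from the minimum to the maximum observed inference; names are illustrative.

```python
import numpy as np

def to_rank_bands(inferences, n_bands=5):
    """Map raw AI inference scores onto integer ranks 1..n_bands using equal-width
    bands spanning the observed score range (illustrative sketch only)."""
    inferences = np.asarray(inferences, dtype=float)
    edges = np.linspace(inferences.min(), inferences.max(), n_bands + 1)
    # Digitise against the internal band edges so every score falls into a band 1..n_bands.
    ranks = np.digitize(inferences, edges[1:-1], right=True) + 1
    return np.clip(ranks, 1, n_bands)
```

A score in the lowest fifth of the observed range maps to rank 1 and a score in the highest fifth maps to rank 5, which can then be compared directly against the embryologist's 1-5 ranking.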
  • FIG. 14 is a histogram of the rank obtained from the embryologist scores across the total blind dataset 1400, and Figure 15 is a histogram of the rank obtained from the embodiment of the ensemble based AI model inferences across the total blind dataset 1500.
  • Figures 14 and 15 differ from each other in the shape of the distribution. While there is dominance in the embryologist scores around a rank value of 3, dropping off steeply for lower scores of 1 and 2, the ensemble based AI model has a more even distribution of scores around values of 2 and 3, with a rank of 4 being the dominant score.
  • Figure 16 has been extracted directly from the inference scores obtained from the ensemble based AI model, which are shown as a histogram in Figure 13 for comparison.
  • the ranks in Figure 12 are a coarser version of the scores in Figure 13.
  • the finer distribution in Figure 16 shows that there is a clear separation between the scores below 50% (predicted non-viable) 1610 and those above (predicted viable) 1620. This suggests the ensemble based AI model provides greater granularity around embryo ranking than the standard scoring method, enabling a more definitive selection to be achieved.
  • Application of the model to the EmbryoScope images without any additional treatment results in an uneven prediction, where a high proportion of the images are predicted to be non-viable, leading to a high rate of False Negatives and a low sensitivity, as shown in Table 15.
  • a coarse, first-pass application to bring the image closer to its expected form results in a significant rebalancing of the inferences, and an increase in accuracy.
  • While this dataset is small, it nevertheless provides evidence that computer vision techniques that reduce the variability in the form of the image can be used to improve the generalizability of the ensemble based AI model.
  • a comparison with the embryologists was also conducted. While no scores were provided directly by Alpha Fertility Centre, it was found that the conservative assumption that embryos are predicted to be likely viable (to avoid False Negatives) leads to a very similar accuracy to the true embryologist accuracy in the case of Study 1. Therefore, by making this assumption, the comparison between the ensemble based AI model accuracy and the embryologist accuracy can be carried out in the same way, as shown in Table 16. In this Study, a percentage improvement of 33.33% was found, similar to the total improvement obtained from Study 1, 31.85%.
  • Figure 19 is a plot of the distribution of inference scores for viable embryos (successful clinical pregnancy) using the ensemble based AI model 1900 (False Negatives 1910: boxes filled with thin upward diagonal lines; True Positives 1920: boxes filled with thick downward diagonal lines).
  • Figure 20 is a plot of the distribution of inference scores for non-viable embryos (unsuccessful clinical pregnancy) using the ensemble based AI model 2000 (False Positives: boxes filled with thin upward diagonal lines; True Negatives 2020: boxes filled with thick downward diagonal lines).
  • the ensemble based AI model is capable of achieving a high accuracy when compared to embryologists from each of the clinics, with a mean improvement of 31.85% in a cross-clinic blind validation study - similar to the improvement rate in the Australian pilot study.
  • the distribution of the inference scores obtained from the ensemble based AI model exhibited a clear separation between the correct and incorrect predictions for both viable and non-viable embryos, which provides evidence that the model is translating correctly to future blind datasets.
  • a comparative study with embryologist scores was expanded to consider the effect of the order of the embryo rank.
  • by converting both the ensemble based AI model inferences and the embryologist rank into an integer between 1 and 5, a direct comparison could be made as to how the ensemble based AI model will differ in ranking the embryos from most viable to least viable, compared to the embryologist.
  • the ensemble based AI model was applied to a second blind validation set, which exhibited accuracy within a few percent of Study 1.
  • the ability of the ensemble based AI model to perform on damaged or distorted images was also assessed. It was found that images that do not conform to the standard phase-contrast microscope images, or are low quality, blurred, compressed or poorly cropped, are likely to be assessed as non-viable, and the ensemble based AI model's confidence in the embryo image prediction is reduced.
  • Embodiments of methods and systems for the computational generation of AI models configured to generate an embryo viability score from an image using one or more deep learning models have been described.
  • a new AI model for estimating embryo viability can be generated by segmenting images to identify Zona Pellucida and IZC regions, which annotate the images into key morphological components.
  • At least one Zona Deep Learning model is then trained on the Zona Pellucida masked images.
  • a plurality of AI models, including deep learning models and/or computer vision models, are generated, and models that exhibit stability, transferability from the validation set to the blind test set, and prediction accuracy are selected and retained.
  • These AI models may be combined, for example using an ensemble model that selects models based on contrasting and diversity criteria, and which are combined using a confidence based voting strategy.
  • Once a suitable AI model is trained, it can then be deployed to estimate the viability of newly collected images.
  • This can be provided as a cloud service allowing IVF clinics or embryologists to upload captured images and get a viability score to assist in deciding whether to implant an embryo, or, where multiple embryos are available, selecting which embryo (or embryos) are most likely to be viable.
  • Deployment may comprise exporting the model coefficients and model metadata to a file and then loading them onto another computing system to process new images, or reconfiguring the computational system to receive new images and generate a viability estimate (a sketch of the export/load step is given below).
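A minimal sketch of the export/load step mentioned above is shown below, assuming a PyTorch model (the description elsewhere mentions PyTorch as one possible library). The file names, metadata fields and DenseNet backbone are illustrative assumptions, not the actual deployment format.

```python
import json
import torch
import torchvision

def export_model(model, metadata, weights_path="model_weights.pt", meta_path="model_meta.json"):
    """Write model coefficients and associated metadata to files (illustrative sketch)."""
    torch.save(model.state_dict(), weights_path)   # model coefficients
    with open(meta_path, "w") as f:
        json.dump(metadata, f)                     # e.g. pre-processing settings, model version

def load_model(weights_path="model_weights.pt", meta_path="model_meta.json"):
    """Recreate the trained model on another computing system (illustrative sketch)."""
    with open(meta_path) as f:
        metadata = json.load(f)
    model = torchvision.models.densenet161(num_classes=2)               # assumed architecture
    model.load_state_dict(torch.load(weights_path, map_location="cpu"))
    model.eval()                                                         # inference mode for new images
    return model, metadata
```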
  • Implementations of the ensemble based AI model involve numerous choices, and the embodiments described herein include several novel and advantageous features.
  • Image preprocessing steps such as segmentation to identify Zona Pellucida and IZC regions, object detection, normalisation of images, cropping of images, and image cleaning such as removal of old images or non-conforming images (e.g. containing artefacts) can be performed.
  • segmentation to identify the Zona Pellucida has a significant effect, with the final ensemble based AI model featuring four Zona models. Further, deep learning models were generally found to outperform computer vision models, with the final model comprising an ensemble of 8 deep learning AI models.
  • However, an AI model could still be generated using a single AI model based on Zona images, or an ensemble (or similar combination) of AI models comprising a mixture of Deep Learning and CV models.
  • the use of some deep learning models in which segmentation is performed prior to deep learning is thus preferred, and assists in producing contrasting deep learning models for use in the ensemble based AI model.
  • Image augmentation was also found to improve robustness.
  • DenseNet-161 was the best performing deep learning architecture (although other variants can be used). Similarly, Stochastic Gradient Descent generally outperformed all other optimisation protocols for altering neuron weights in almost all trials (followed by Adam). The use of a custom loss function which modified the optimisation surface to make global minima more obvious improved robustness. Randomisation of the data sets before training, and in particular checking that the distribution of the dataset is even (or similar) across the test and training sets, was also found to have a significant effect. Images of viable embryos are quite diverse, and thus checking the randomisation provides robustness against these diversity effects. Using a selection process to choose contrasting models (i.e. models whose prediction distributions differ) also contributed to the performance of the final ensemble (a sketch of the architecture and optimiser setup is given below).
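The following sketch illustrates the kind of training configuration described above: a DenseNet-161 backbone optimised with Stochastic Gradient Descent, with Adam as an alternative. Hyperparameter values are placeholders and are not those used in the described experiments.

```python
import torch
import torchvision

# DenseNet-161 with a two-class output (viable / non-viable); other variants could be used.
model = torchvision.models.densenet161(num_classes=2)

# Stochastic Gradient Descent, reported above as the best performing optimisation protocol.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=1e-4)
# Adam was reported as the next best performer and could be swapped in:
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)
```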
  • AI models using computer vision and deep learning methods can be generated using one or more of these advantageous features, and could be applied to other image sets besides embryos.
  • the embryo model 100 could be replaced with an alternative model, trained and used on other image data, whether of a medical nature or not.
  • the methods could also be used more generally for deep learning based models, including ensemble based deep learning models. These could be trained and implemented using systems such as those illustrated in Figures 3A and 3B and described above.
  • Models trained as described herein can be usefully deployed to classify new images and thus assist embryologists in making implantation decisions, thus increasing success rates (i.e. pregnancies).
  • Extensive testing of an embodiment of the ensemble based AI model was performed, in which the ensemble based AI model was configured to generate an embryo viability score for an embryo from an image of the embryo taken five days after in-vitro fertilisation. The testing showed the model was able to clearly separate viable and non-viable embryos (see Figure 13), and Tables 10 to 12 and Figures 14 to 16 illustrate that the model outperformed the embryologists.
  • an embodiment of an ensemble based AI model was found to have high accuracy in both identifying viable embryos (74.1%) and non-viable embryos (65.3%), and to significantly outperform experienced embryologists.
  • processing may be implemented within one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, or other electronic units designed to perform the functions described herein, or a combination thereof.
  • middleware and computing platforms may be used.
  • the processor module comprises one or more Central Processing Units (CPUs) or Graphics Processing Units (GPUs) configured to perform some of the steps of the methods.
  • a computing apparatus may comprise one or more CPUs and/or GPUs.
  • a CPU may comprise an Input/Output Interface, an Arithmetic and Logic Unit (ALU) and a Control Unit and Program Counter element which is in communication with input and output devices through the Input/Output Interface.
  • the Input/Output Interface may comprise a network interface and/or communications module for communicating with an equivalent communications module in another device using a predefined communications protocol (e.g. Bluetooth, Zigbee, IEEE 802.15, IEEE 802.11, TCP/IP, UDP, etc.).
  • the computing apparatus may comprise a single CPU (core) or multiple CPUs (multiple cores), or multiple processors.
  • the computing apparatus is typically a cloud based computing apparatus using GPU clusters, but may be a parallel processor, a vector processor, or be a distributed computing device.
  • Memory is operatively coupled to the processor(s) and may comprise RAM and ROM components, and may be provided within or external to the device or processor module.
  • the memory may be used to store an operating system and additional software modules or instructions.
  • the processor(s) may be configured to load and execute the software modules or instructions stored in the memory.
  • Software modules, also known as computer programs, computer codes, or instructions, may contain a number of source code or object code segments or instructions, and may reside in any computer readable medium such as RAM memory, flash memory, ROM memory, EPROM memory, registers, a hard disk, a removable disk, a CD-ROM, a DVD-ROM, a Blu-ray disc, or any other form of computer readable medium.
  • the computer-readable media may comprise non-transitory computer-readable media (e.g., tangible media).
  • computer-readable media may comprise transitory computer-readable media (e.g., a signal). Combinations of the above should also be included within the scope of computer-readable media.
  • the computer readable medium may be integral to the processor.
  • the processor and the computer readable medium may reside in an ASIC or related device.
  • the software codes may be stored in a memory unit and the processor may be configured to execute them.
  • the memory unit may be implemented within the processor or external to the processor, in which case it can be communicatively coupled to the processor via various means as is known in the art.
  • modules and/or other appropriate means for performing the methods and techniques described herein can be downloaded and/or otherwise obtained by a computing device.
  • a device can be coupled to a server to facilitate the transfer of means for performing the methods described herein.
  • various methods described herein can be provided via storage means (e.g., RAM, ROM, a physical storage medium such as a compact disc (CD) or floppy disk, etc.), such that a computing device can obtain the various methods upon coupling or providing the storage means to the device.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Data Mining & Analysis (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Image Analysis (AREA)

Abstract

An Artificial Intelligence (AI) computational system for generating an embryo viability score from a single image of an embryo to aid selection of an embryo for implantation in an In-Vitro Fertilisation (IVF) procedure is described. The AI model uses a deep learning method applied to images in which the Zona Pellucida region in the image is identified using segmentation, and is trained using ground truth labels such as detection of a heartbeat at a six week ultrasound scan.

Description

METHOD AND SYSTEM FOR SELECTING EMBRYOS
PRIORITY DOCUMENTS
[0001] The present application claims priority from Australian Provisional Patent Application No.
2019901152 titled "METHOD AND SYSTEM FOR SELECTING EMBRYOS" and filed on 4 April 2019, the content of which is hereby incorporated by reference in its entirety.
TECHNICAL FIELD
[0002] The present disclosure relates to In-vitro Fertilisation (IVF). In a particular form the present disclosure relates to methods for selecting embryos.
BACKGROUND
[0003] An In-Vitro Fertilisation (IVF) procedure starts with an ovarian stimulation phase which stimulates egg production. Eggs (oocytes) are then retrieved from the patient and fertilized in-vitro with sperm, which penetrates the Zona Pellucida, a glycoprotein layer surrounding the egg (oocyte), to form a zygote. An embryo develops over a period of around 5 days, after which time the embryo has formed a blastocyst (formed of the trophoblast, blastocoele and inner cell mass) suitable for transfer back into the patient. At around 5 days the blastocyst is still surrounded by the Zona Pellucida, from which the blastocyst will hatch to then implant in the endometrial wall. We will refer to the region bounded by the inner surface of the Zona Pellucida as the InnerZonal Cavity (IZC). The selection of the best embryo at the point of transfer is critical to ensure a positive pregnancy outcome. An embryologist visually assesses the embryos using a microscope to make this selection. Some clinics record images of the embryos at the point of selection and an embryologist may score each embryo based on various metrics and their visual assessment down the microscope. For example one commonly used scoring system is the Gardner Scale, in which morphological features such as inner cell mass quality, trophectoderm quality, and embryo developmental advancement are evaluated and graded according to an alphanumeric scale. The embryologist then selects one (or more) of the embryos, which is then transferred back to the patient.
[0004] Thus embryo selection is currently a manual process that involves a subjective assessment of embryos by an embryologist through visual inspection. One of the key challenges in embryo grading is the high level of subjectivity and intra- and inter-operator variability that exists between embryologists of different skill levels. This means that standardization is difficult even within a single laboratory and impossible across the industry as a whole. Thus the process relies heavily on the expertise of the embryologist, and despite their best efforts, the success rates for IVF are still relatively low (around 20%). Whilst the reasons for low pregnancy outcomes are complex, tools to more accurately select the most viable embryos are expected to result in increases in successful pregnancy outcomes.
[0005] To date, several tools have been developed to assist embryologists in selecting viable embryos, including pre-implantation genetic screening (PGS) and time lapse photography. However each approach has crucial limitations. PGS involves the genetic assessment of several cells from the embryo by taking a biopsy, and then screening the extracted cells. Whilst this can be useful to identify genetic risks which may lead to a failed pregnancy, it also has the potential to harm the embryo during the biopsy process. It is also expensive and has limited or no availability in many large developing markets such as China. Another tool that has been considered is the use of time-lapse imaging over the course of embryo development. However this requires expensive specialized hardware that is cost prohibitive for many clinics. Further, there is no evidence that it can reliably improve embryo selection. At best it can assist in determining whether an embryo at an early stage will develop through to a mature blastocyst, but it has not been demonstrated to reliably predict pregnancy outcomes and is therefore limited in its utility for embryo selection.
[0006] There is thus a need to provide an improved tool for assisting an embryologist to perform selection of an embryo for implantation, or at least to provide a useful alternative to existing tools and systems.
SUMMARY
[0007] According to a first aspect, there is provided a method for computationally generating an Artificial Intelligence (AI) model configured to estimate an embryo viability score from an image, the method comprising:
receiving a plurality of images and associated metadata, wherein each image is captured during a pre-determined time window after In-Vitro Fertilisation (IVF) and the pre-determined time window is 24 hours or less, and the metadata associated with the image comprises at least a pregnancy outcome label; pre-processing each image, comprising at least segmenting the image to identify a Zona Pellucida region;
generating an Artificial Intelligence (AI) model configured to generate an embryo viability score from an input image by training at least one Zona Deep Learning Model using a deep learning method, comprising training a deep learning model on a set of Zona Pellucida images in which the Zona Pellucida regions are identified, and the associated pregnancy outcome labels are at least used to assess the accuracy of a trained model; and
deploying the AI model. [0008] In a further form the set of Zona Pellucida images comprises images in which regions bounded by the Zona Pellucida region are masked.
[0009] In a further form, generating the AI model further comprises training one or more additional AI models, wherein each additional AI model is either a computer vision model trained using a machine learning method that uses a combination of one or more computer vision descriptors extracted from an image to estimate an embryo viability score, a deep learning model trained on images localised to the embryo comprising both Zona Pellucida and IZC regions, or a deep learning model trained on a set of IntraZonal Cavity (IZC) images in which all regions apart from the IZC are masked; and either using an ensemble method to combine at least two of the at least one Zona deep learning model and the one or more additional AI models to generate the AI model embryo viability score from an input image, or using a distillation method to train an AI model to generate the AI model embryo viability score using the at least one Zona deep learning model and the one or more additional AI models to generate the AI model.
[0010] In one form, the AI model is generated using an ensemble model comprising selecting at least two contrasting AI models from the at least one Zona deep learning model and the one or more additional AI models, wherein selection of AI models is performed to generate a set of contrasting AI models, and applying a voting strategy to the at least two contrasting AI models that defines how the selected at least two contrasting AI models are combined to generate an outcome score for an image.
[0011] In a further form, selecting at least two contrasting AI models comprises:
generating a distribution of embryo viability scores from a set of images for each of the at least one Zona deep learning model and the one or more additional AI models, and
comparing the distributions and discarding a model if the associated distributions are too similar to another distribution, so as to select AI models with contrasting distributions.
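One way to realise the distribution comparison described in this form is sketched below, using the Wasserstein distance as an assumed similarity measure between per-model score distributions on a common image set. The described method only requires some measure of distribution similarity, so the specific metric and threshold here are illustrative.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def select_contrasting_models(model_scores, min_distance=0.05):
    """Keep models whose embryo viability score distributions differ from those already
    selected, discarding models that are too similar (illustrative sketch only).
    model_scores: dict mapping model name -> array of scores on a common image set."""
    selected = []
    for name, scores in model_scores.items():
        scores = np.asarray(scores, dtype=float)
        if all(wasserstein_distance(scores, np.asarray(model_scores[kept], dtype=float)) >= min_distance
               for kept in selected):
            selected.append(name)
    return selected
```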
[0012] In one form, the pre-determined time window is a 24 hour time period beginning 5 days after fertilisation. In one form, the pregnancy outcome label is a ground-truth pregnancy outcome measurement performed within 12 weeks after embryo transfer. In a further form, the ground-truth pregnancy outcome measurement is whether a foetal heartbeat is detected.
[0013] In one form the method further comprises cleaning the plurality of images, comprising identifying images with likely incorrect pregnancy outcome labels, and excluding or re-labelling the identified images.
[0014] In a further form, cleaning the plurality of images comprises estimating the likelihood that a pregnancy outcome label associated with an image is incorrect, comparing it against a threshold value, and then excluding or relabelling images with a likelihood exceeding the threshold value. [0015] In a further form, estimating the likelihood that a pregnancy outcome label associated with an image is incorrect is performed by using a plurality of AI classification models and a k-fold cross validation method, in which the plurality of images are split into k mutually exclusive validation datasets, and each of the plurality of AI classification models is trained on k-1 validation datasets in combination and then used to classify images in the remaining validation dataset, and the likelihood is determined based on the number of AI classification models which misclassify the pregnancy outcome label of an image.
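A sketch of the k-fold label-cleaning step described in this form is given below. It trains several classifiers on k-1 folds, has them classify the held-out fold, and uses the fraction of models that disagree with the recorded label as the likelihood that the label is incorrect. The classifier type and feature representation are assumptions made for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold

def mislabel_likelihood(features, labels, n_models=5, n_splits=5, seed=0):
    """Estimate, per image, the likelihood that its outcome label is incorrect
    (illustrative sketch of the k-fold cleaning idea described above)."""
    features, labels = np.asarray(features), np.asarray(labels)
    disagreements = np.zeros(len(labels))
    for m in range(n_models):
        folds = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed + m)
        for train_idx, val_idx in folds.split(features, labels):
            clf = RandomForestClassifier(n_estimators=100, random_state=seed + m)
            clf.fit(features[train_idx], labels[train_idx])
            disagreements[val_idx] += (clf.predict(features[val_idx]) != labels[val_idx])
    return disagreements / n_models  # images above a chosen threshold would be excluded or relabelled
```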
[0016] In one form, training each AI model or generating the ensemble model comprises assessing the performance of an AI model using a plurality of metrics comprising at least one accuracy metric and at least one confidence metric, or a metric combining accuracy and confidence.
[0017] In one form, pre-processing the image further comprises cropping the image by localising an embryo in the image using a deep learning or computer vision method.
[0018] In one form, pre-processing the image further comprises one or more of padding the image, normalising the colour balance, normalising the brightness, and scaling the image to a predefined resolution.
[0019] In one form, padding the image may be performed to generate a square aspect ratio for the image. In one form, the method further comprises generating one or more augmented images for use in training an AI model. Preparing each image may also comprise generating one or more augmented images by making a copy of an image with a change, or the augmentation may be performed on the image itself. It may be performed prior to training or during training (on the fly). Any number of augmentations may be performed, with varying amounts of: 90 degree rotations of the image, mirror flip, a non-90 degree rotation where a diagonal border is filled in to match a background colour, image blurring, adjusting an image contrast using an intensity histogram, and applying one or more small random translations in the horizontal and/or vertical direction, random rotations, JPEG noise, random image resizing, random hue jitter, random brightness jitter, contrast limited adaptive histogram equalization, random flip/mirror, image sharpening, image embossing, random brightness and contrast, RGB colour shift, random hue and saturation, channel shuffle, swap RGB to BGR or RBG or other, coarse dropout, motion blur, median blur, Gaussian blur, and random shift-scale-rotate (i.e. all three combined). A sketch of such an augmentation pipeline is given below.
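An augmentation pipeline covering a few of the operations listed above (flips, rotations, blur, brightness/contrast/hue jitter, random resizing) could look like the sketch below, using torchvision transforms. The exact augmentation set and parameter values used in the described system are not reproduced; these are placeholders.

```python
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),                         # mirror flip
    transforms.RandomVerticalFlip(),
    transforms.RandomRotation(degrees=15, fill=0),             # non-90-degree rotation, border filled
    transforms.ColorJitter(brightness=0.1, contrast=0.1, hue=0.05),
    transforms.GaussianBlur(kernel_size=3),                    # image blurring
    transforms.RandomResizedCrop(size=224, scale=(0.9, 1.0)),  # random resize / small translation
    transforms.ToTensor(),
])
```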
[0020] In one form, during training of an AI model one or more augmented images are generated for each image in the training set, and during assessment of the validation set the results for the one or more augmented images are combined to generate a single result for the image. The results may be combined using one of the mean-confidence, median-confidence, majority-mean-confidence or max-confidence methods, or other voting strategies for combining model predictions (a sketch of these combination strategies is given below). [0021] In one form pre-processing an image may further comprise annotating the image using one or more feature descriptor models, and masking all areas of the image except those within a given radius of the descriptor key point. The one or more feature descriptor models may comprise a Gray-Level Co-Occurrence Matrix (GLCM) Texture Analysis, a Histogram of Oriented Gradients (HOG), an Oriented Features from Accelerated Segment Test (FAST) and Rotated Binary Robust Independent Elementary Features (BRIEF), a Binary Robust Invariant Scalable Key-points (BRISK), a Maximally Stable Extremal Regions (MSER) or a Good Features To Track (GFTT) feature detector.
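The combination of results for the augmented copies of a single image, as described above, could be sketched as follows. The majority-mean interpretation in particular is an assumption; the described system may combine results differently.

```python
import numpy as np

def combine_augmented_scores(scores, method="mean"):
    """Combine the per-augmentation scores for one image into a single score
    (illustrative sketch of mean/median/majority-mean/max-confidence strategies)."""
    scores = np.asarray(scores, dtype=float)
    if method == "mean":
        return float(scores.mean())
    if method == "median":
        return float(np.median(scores))
    if method == "max":
        return float(scores[np.argmax(np.abs(scores - 0.5))])   # most confident single prediction
    if method == "majority_mean":
        viable = scores >= 0.5
        majority = viable if viable.sum() >= (~viable).sum() else ~viable
        return float(scores[majority].mean())                    # mean of the majority vote
    raise ValueError(f"unknown method: {method}")
```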
[0022] In one form each AI model generates an outcome score wherein the outcome is an n-ary outcome having n states, and training an AI model comprises a plurality of training-validation cycles and further comprises randomly allocating the plurality of images to one of a training set, a validation set or a blind validation set, such that the training dataset comprises at least 60% of the images, the validation dataset comprises at least 10% of the images, and the blind validation dataset comprises at least 10% of the images, and after allocating the images to the training set, validation set and blind validation set, calculating the frequency of each of the n-ary outcome states in each of the training set, validation set and blind validation set, and testing that the frequencies are similar, and if the frequencies are not similar then discarding the allocation and repeating the randomisation until a randomisation is obtained in which the frequencies are similar (a sketch of this allocation check is given below).
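The allocation-and-check loop described in this form could be sketched as follows for a binary (viable/non-viable) outcome; the split fractions, tolerance and the binary simplification are assumptions.

```python
import numpy as np

def split_with_similar_frequencies(labels, fractions=(0.7, 0.15, 0.15),
                                   tolerance=0.05, max_tries=1000, seed=0):
    """Randomly allocate images to training / validation / blind validation sets and repeat the
    randomisation until the outcome frequencies in the three sets are similar (illustrative sketch)."""
    labels = np.asarray(labels)
    rng = np.random.default_rng(seed)
    n = len(labels)
    n_train, n_val = int(fractions[0] * n), int(fractions[1] * n)
    for _ in range(max_tries):
        order = rng.permutation(n)
        sets = (order[:n_train], order[n_train:n_train + n_val], order[n_train + n_val:])
        freqs = [labels[idx].mean() for idx in sets]   # frequency of the 'viable' state in each set
        if max(freqs) - min(freqs) <= tolerance:       # frequencies are similar enough
            return sets
    raise RuntimeError("no allocation with similar outcome frequencies found")
```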
[0023] In one form, training a computer vision model comprises performing a plurality of training-validation cycles, and during each cycle the images are clustered based on the computer vision descriptors using an unsupervised clustering algorithm to generate a set of clusters, and each image is assigned to a cluster using a distance measure based on the values of the computer vision descriptors of the image, and a supervised learning method is used to determine whether a particular combination of these features corresponds to an outcome measure, using frequency information of the presence of each computer vision descriptor in the plurality of images.
[0024] In one form the deep learning model may be a convolutional neural network (CNN), and for an input image each deep learning model generates an outcome probability.
[0025] In one form the deep learning method may use a loss function configured to modify an optimization surface to emphasise global minima. The loss function may include a residual term, defined in terms of the network weights, which encodes the collective difference between the predicted value from the model and the target outcome for each image, and includes it as an additional contribution to the normal cross entropy loss function (a sketch of one such modified loss is given below).
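One assumed form of such a modified loss is sketched below: the standard cross entropy plus a residual term encoding the collective difference between the predicted viability probabilities and the target outcomes. The exact residual term used in the described method is not reproduced here.

```python
import torch
import torch.nn.functional as F

def residual_cross_entropy(logits, targets, weight=0.1):
    """Cross entropy plus a residual penalty on the collective prediction error
    (illustrative sketch of the modified loss described above)."""
    ce = F.cross_entropy(logits, targets)                    # normal cross entropy loss
    probs = F.softmax(logits, dim=1)[:, 1]                   # predicted probability of 'viable'
    residual = torch.mean((probs - targets.float()) ** 2)    # collective difference from the targets
    return ce + weight * residual
```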
[0026] In one form the method may be performed on a cloud based computing system using a Webserver, a database, and a plurality of training servers, wherein the Webserver receives one or more model training parameters from a user, and the Webserver initiates a training process on one or more of the plurality of training servers, comprising uploading training code to one of the plurality of training servers, and the training server requests the plurality of images and associated metadata from a data repository, and performs the steps of preparing each image, generating a plurality of computer vision models and generating a plurality of deep learning models, and each training server is configured to periodically save the models to a storage service, and accuracy information to one or more log files, to allow a training process to be restarted. In a further form the ensemble model may be trained to bias residual inaccuracies to minimize false negatives.
[0027] In one form the outcome is a binary outcome of either viable or non-viable, and randomisation may comprise calculating the frequency of images with a viable classification and a non-viable classification in each of the training set, validation set and blind validation set and testing if they are similar. In one form the outcome measure is a measure of embryo viability using the viability classification associated with each image. In one form each outcome probability may be a probability that the image is viable. In one form each image may be a phase contrast image.
[0028] According to a second aspect, there is provided a method for computationally generating an embryo viability score from an image, the method comprising:
generating, in a computational system, an Artificial Intelligence (AI) model configured to generate an embryo viability score from an image according to the method of the first aspect;
receiving, from a user via a user interface of the computational system, an image captured during a pre-determined time window after In-Vitro Fertilisation (IVF);
pre-processing the image according to the pre-processing steps used to generate the AI model; providing the pre-processed image to the AI model to obtain an estimate of the embryo viability score; and
sending the embryo viability score to the user via the user interface.
[0029] According to a third aspect, there is provided a method for obtaining an embryo viability score from an image, comprising:
uploading, via a user interface, an image captured during a pre-determined time window after In-Vitro Fertilisation (IVF) to a cloud based Artificial Intelligence (AI) model configured to generate an embryo viability score from an image, wherein the AI model is generated according to the method of the first aspect; and
receiving an embryo viability score from the cloud based AI model via the user interface.
[0030] According to a fourth aspect, there is provided a cloud based computational system configured to computationally generate an Artificial Intelligence (AI) model configured to estimate an embryo viability score from an image according to the method of the first aspect. [0031] According to a fifth aspect, there is provided a cloud based computational system configured to computationally generate an embryo viability score from an image, wherein the computational system comprises:
an Artificial Intelligence (AI) model configured to generate an embryo viability score from an image, wherein the AI model is generated according to the method of the first aspect;
receiving, from a user via a user interface of the computational system, an image captured during a pre-determined time window after In-Vitro Fertilisation (IVF);
providing the image to the AI model to obtain an embryo viability score; and
sending the embryo viability score to the user via the user interface.
[0032] According to a sixth aspect, there is provided a computational system configured to generate an embryo viability score from an image, wherein the computational system comprises at least one processor, and at least one memory comprising instructions to configure the at least one processor to: receive an image captured during a pre-determined time window after In-Vitro Fertilisation (IVF); upload, via a user interface, the image captured during the pre-determined time window after In-Vitro Fertilisation (IVF) to a cloud based Artificial Intelligence (AI) model configured to generate an embryo viability score from an image, wherein the AI model is generated according to the method of the first aspect;
receive an embryo viability score from the cloud based AI model; and
display the embryo viability score via the user interface.
BRIEF DESCRIPTION OF DRAWINGS
[0033] Embodiments of the present disclosure will be discussed with reference to the accompanying drawings wherein:
[0034] Figure 1A is a schematic flowchart of the generation of an Artificial Intelligence (AI) model configured to estimate an embryo viability score from an image according to an embodiment;
[0035] Figure 1B is a schematic block diagram of a cloud based computation system configured to computationally generate and use an AI model configured to estimate an embryo viability score from an image according to an embodiment;
[0036] Figure 2 is a schematic diagram of an IVF procedure using an AI model configured to estimate an embryo viability score from an image to assist in selecting an embryo for implantation according to an embodiment; [0037] Figure 3A is a schematic architecture diagram of a cloud based computation system configured to generate and use an AI model configured to estimate an embryo viability score from an image according to an embodiment;
[0038] Figure 3B is a schematic flowchart of a model training process on a training server according to an embodiment;
[0039] Figure 4 is a schematic diagram of binary thresholding for boundary-finding on images of human embryos according to an embodiment;
[0040] Figure 5 is a schematic diagram of a boundary-finding method on images of human embryos according to an embodiment;
[0041] Figure 6A is an example of the use of a Geometrical Active Contour (GAC) model as applied to a fixed region of an image for image segmentation according to an embodiment;
[0042] Figure 6B is an example of the use of a morphological snake as applied to a fixed region of an image for image segmentation according to an embodiment;
[0043] Figure 6C is a schematic architecture diagram of a U-Net architecture for a semantic segmentation model according to an embodiment;
[0044] Figure 6D is an image of a day 5 embryo;
[0045] Figure 6E is a padded version of Figure 6D creating a square image;
[0046] Figure 6F shows a Zona Image based on Figure 6E in which the IZC is masked according to an embodiment;
[0047] Figure 6G shows an IZC image based on Figure 6E in which the Zona Pellucida and background are masked according to an embodiment;
[0048] Figure 7 is a plot of a Gray Level Co-occurrence Matrix (GLCM) showing GLCM correlation of sample feature descriptors: ASM, homogeneity, correlation, contrast and entropy, calculated on a set of six Zona Pellucida regions and six cytoplasm regions according to an embodiment;
[0049] Figure 8 is a schematic architecture diagram of a deep learning method, including convolutional layers, which transform the input image to a prediction after training, according to an embodiment; [0050] Figure 9 is a plot of the accuracy of an embodiment of an ensemble model in identifying embryo viability according to an embodiment;
[0051] Figure 10 is a bar chart showing the accuracy of an embodiment of the ensemble model compared to world-leading embryologists (clinicians) in accurately identifying embryo viability;
[0052] Figure 11 is a bar chart showing the accuracy of an embodiment of the ensemble model compared to world-leading embryologists (clinicians) in correctly identifying embryo viability where the embryologists' assessment was incorrect, compared with embryologists correctly identifying embryo viability where the ensemble model assessment was incorrect;
[0053] Figure 12 is a plot of the distribution of inference scores for viable embryos (successful clinical pregnancy) using the embodiment of the ensemble model, when applied to the blind validation dataset of Study 1;
[0054] Figure 13 is a plot of the distribution of inference scores for non-viable embryos (unsuccessful clinical pregnancy) using the embodiment of the ensemble model, when applied to the blind validation dataset of Study 1;
[0055] Figure 14 is a histogram of the rank obtained from the embryologist scores across the total blind dataset;
[0056] Figure 15 is a histogram of the rank obtained from the embodiment of the ensemble model inferences across the total blind dataset;
[0057] Figure 16 is a histogram of the ensemble model inferences, prior to being placed into rank bandings from 1 to 5;
[0058] Figure 17 is a plot of the distribution of inference scores for viable embryos (successful clinical pregnancy) using the ensemble model, when applied to the blind validation dataset of Study 2;
[0059] Figure 18 is a plot of the distribution of inference scores for non-viable embryos (unsuccessful clinical pregnancy) using the ensemble model, when applied to the blind validation dataset of Study 2;
[0060] Figure 19 is a plot of the distribution of inference scores for viable embryos (successful clinical pregnancy) using the ensemble model, when applied to the blind validation dataset of Study 3; and
[0061] Figure 20 is a plot of the distribution of inference scores for non-viable embryos (unsuccessful clinical pregnancy) using the ensemble model, when applied to the blind validation dataset of Study 3. [0062] In the following description, like reference characters designate like or corresponding parts throughout the figures.
DESCRIPTION OF EMBODIMENTS
[0063] With reference to Figures 1A, 1B and 2, embodiments of a cloud based computation system 1 configured to computationally generate and use an Artificial Intelligence (AI) model 100 configured to estimate an embryo viability score from a single image of an embryo will now be discussed. We will also refer to this AI model 100 as an embryo viability assessment model. Figure 1A is a schematic flow chart of the generation of an AI model 100 using a cloud based computation system 1 according to an embodiment. A plurality of images and associated metadata is received (or obtained) from one or more data sources 101. Each image is captured during a pre-determined time window after In-Vitro Fertilisation (IVF), such as a 24 hour period starting at day 5 post fertilisation. The images and metadata can be sourced from IVF clinics and may be images captured using optical light microscopy, including phase contrast images. The metadata includes a pregnancy outcome label (e.g. heart beat detected at first scan post IVF) and may include a range of other clinical and patient information.
[0064] The images are then pre-processed 102, with the pre-processing including segmenting the image to identify a Zona Pellucida region of the image. The segmentation may also include identification of the IntraZonal Cavity (IZC) which is surrounded by the Zona Pellucida region. Pre-processing an image may also involve one or more (or all) of object detection, alpha channel removal, padding, cropping/localising, normalising the colour balance, normalising the brightness, and/or scaling the image to a predefined resolution as discussed below (a minimal sketch of some of these steps is given after this passage). Pre-processing the image may also include calculating/determining computer vision feature descriptors from an image, and performing one or more image augmentations, or generating one or more augmented images.
[0065] At least one Zona Deep Learning model is trained on a set of Zona Pellucida images 103 in order to generate the Artificial Intelligence (AI) model 100 configured to generate an embryo viability score from an input image 104. The set of Zona Pellucida images are images in which the Zona Pellucida regions are identified (e.g. during segmentation in step 102). In some embodiments the set of Zona Pellucida images are images in which all regions of the image apart from the Zona Pellucida region are masked (i.e. so the deep learning model is only trained on information from/relating to the Zona Pellucida region). The pregnancy outcome labels are used at least in the assessment of a trained model (i.e. to assess accuracy/performance) and may also be used in model training (e.g. by the loss function to drive model optimisation). Multiple Zona Deep Learning Models may be trained, with the best performing model selected as the AI model 100.
[0066] In another embodiment, one or more additional AI models are trained on the pre-processed images 106. These may be additional deep learning models trained directly on the embryo image, and/or on a set of IZC images in which all regions of the image apart from the IZC are masked, or Computer Vision (CV) models trained to combine computer vision features/descriptors generated in the pre-processing step 102 to generate an embryo viability score from an image. Each of the Computer Vision models uses a combination of one or more computer vision descriptors extracted from an image to estimate an embryo viability score of an embryo in an image, and a machine learning method performs a plurality of training-validation cycles to generate the CV model. Similarly each of the deep learning models is trained in a plurality of training-validation cycles so that each deep learning model learns how to estimate an embryo viability score of an embryo in an image. During training, images may be randomly assigned to each of a training set, a validation set and a blind validation set, and each training-validation cycle comprises a (further) randomisation of the plurality of images within each of the training set, validation set and blind validation set. That is, the images within each set are randomly sampled each cycle, so that each cycle a different subset of images is analysed, or the images are analysed in a different ordering. Note however that as they are randomly sampled this does allow two or more sets to be identical, provided this occurred through a random selection process.
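A minimal sketch of a few of the pre-processing operations mentioned in the passage above (padding to a square aspect ratio, normalising the brightness, and scaling to a fixed resolution) is given below using OpenCV, which the description elsewhere names as one possible computer vision library. Segmentation of the Zona Pellucida region is not shown, and the parameter values are placeholders.

```python
import cv2

def preprocess(image, size=224):
    """Pad to a square aspect ratio, normalise brightness and scale to a fixed resolution
    (illustrative sketch of some pre-processing steps; segmentation not shown)."""
    h, w = image.shape[:2]
    side = max(h, w)
    top, left = (side - h) // 2, (side - w) // 2
    padded = cv2.copyMakeBorder(image, top, side - h - top, left, side - w - left,
                                cv2.BORDER_CONSTANT, value=0)          # square padding
    gray = cv2.cvtColor(padded, cv2.COLOR_BGR2GRAY) if padded.ndim == 3 else padded
    normalised = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX)    # brightness range
    return cv2.resize(normalised, (size, size), interpolation=cv2.INTER_AREA)
```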
[0067] The multiple AI models are then combined into the single AI model 100, using ensemble, distillation or other similar techniques 107 to generate the AI model 100 in step 104. An ensemble approach involves selecting models from the set of available models and using a voting strategy that defines how an outcome score is generated from the individual outcomes of the selected models. In some embodiments, the models are selected to ensure that their results contrast, to generate a distribution of results. These are preferably as independent as possible to ensure a good distribution of results. In a distillation method, the multiple AI models are used as teachers to train a single student model, with the student model becoming the final AI model 100.
[0068] In step 104 a final AI model is selected. This may be one of the Zona Deep Learning models trained in step 103, or it may be a model obtained using an ensemble, distillation or similar combination step (step 107) where the training included at least one Zona Deep Learning model (from 103) and one or more additional AI models (Deep Learning and/or CV; step 106). Once a final AI model 100 is generated (104), this is deployed for operational use to estimate an embryo viability score from an input image 105, e.g. on a cloud server that is configured to receive a phase contrast image of a day 5 embryo captured at an IVF clinic using a light microscope. This is further illustrated in Figure 2 and discussed below. In some embodiments deployment comprises saving or exporting the trained model, such as by writing the model weights and associated model metadata to a file which is transferred to the operational computation system and uploaded to recreate the trained model. Deployment may also comprise moving, copying, or replicating the trained model onto an operational computational system, such as one or more cloud based servers, or locally based computer servers at IVF clinics. In one embodiment deployment may comprise reconfiguring the computational system the AI model was trained on to accept new images and generate viability estimates using the trained model, for example by adding an interface to receive images, run the trained model on the received images, and to send the results back to the source, or to store the results for later retrieval. The deployed system is configured to receive an input image, and perform any pre-processing steps used to generate the AI model (i.e. so new images are pre-processed in the same way as the training images). In some embodiments the images may be pre-processed prior to uploading to the cloud system (i.e. local pre-processing). In some embodiments the pre-processing may be distributed between the local system and the remote (e.g. cloud) system. The deployed model is executed or run over the image to generate an embryo viability score that is then provided to the user.
[0069] Figure 1B is a schematic block diagram of a cloud based computation system 1 configured to computationally generate an AI model 100 configured to estimate an embryo viability score from an image (i.e. an embryo viability assessment model), and then use this AI model 100 to generate an embryo viability score (i.e. an outcome score) which is an estimate (or assessment) of the viability of a received image. The input 10 comprises data such as the images of the embryo and pregnancy outcome information (e.g. heart beat detected at first ultrasound scan post IVF, live birth or not, or successful implantation) which can be used to generate a viability classification. This is provided as input to the model creation process 20 which creates and trains AI models. These include the Zona Deep Learning model (103) and in some embodiments also include additional deep learning and/or computer vision models (106). Models may be trained using a variety of methods and information including the use of segmented datasets (e.g. Zona images, IZC images) and pregnancy outcome data. Where multiple AI models are trained, a best performing model may be selected according to some criteria, such as based on the pregnancy outcome information, or multiple AI models may be combined using an ensemble model which selects AI models and generates an outcome based on a voting strategy, or a distillation method may be used in which the multiple AI models are used as teachers to train a student AI model, or some other similar method may be used to combine the multiple models into a single model. A cloud based model management and monitoring tool, which we refer to as the model monitor 21, is used to create (or generate) the AI models. This uses a series of linked services, such as Amazon Web Services (AWS), which manages the training, logging and tracking of models specific to image analysis and the model. Other similar services on other cloud platforms may be used. These may use deep learning methods 22, computer vision methods 23, classification methods 24, statistical methods 25 and physics based models 26. The model generation may also use domain expertise 12 as input, such as from embryologists, computer scientists, scientific/technical literature, etc., for example on what features to extract and use in a Computer Vision model. The output of the model creation process is an instance of an AI model (100), which we will also refer to as a validated embryo assessment model.
[0070] A cloud based delivery platform 30 is used which provides a user interface 42 to the system for a user 40. This is further illustrated with reference to Figure 2, which is a schematic diagram of an IVF procedure 200 using a previously trained AI model to generate an embryo viability score to assist in selecting an embryo for implantation according to an embodiment. At day 0, harvested eggs are fertilised 202. These are then in-vitro cultured for several days and then an image of the embryo is captured, for example using a phase contrast microscope 204. As discussed below, it was generally found that images taken 5 days after in-vitro fertilisation produced better results than images taken at earlier days. Thus preferably the model is trained and used on day 5 embryos; however it is to be understood that a model could be trained and used on embryos taken during a specific time window with reference to a specific epoch. In one embodiment the time window is 24 hours, but other time windows such as 12 hours, 36 hours, or 48 hours could be used. Generally smaller time windows of 24 hours or less are preferable to ensure greater similarity in appearance. In one embodiment this could be a specific day, which is a 24 hour window starting at the beginning of the day (0:00) to the end of the day (23:59), or specific days such as days 4 or 5 (a 48 hour window starting at the start of day 4). Alternatively the time window could define a window size and epoch, such as 24 hours centred on day 5 (i.e. 4.5 days to 5.5 days). The time window could be open ended with a lower bound, such as at least 5 days. As noted above, whilst it is preferable to use images of embryos from a time window of 24 hours around day 5, it is to be understood that earlier stage embryos could be used, including day 3 or day 4 images.
[0071] Typically several eggs will be fertilised at the same time and thus a set of multiple images will be obtained for consideration of which embryo is the best (i.e. most viable) to implant. The user uploads the captured image to the platform 30 via user interface 42, for example using "drag and drop" functionality. The user can upload a single image or multiple images, for example to assist in selecting which embryo from a set of multiple embryos being considered for implantation. The platform 30 receives the one or more images 312 which are stored in a database 36 that includes an image repository. The cloud based delivery platform comprises on-demand cloud servers 32 that can do the image pre-processing (e.g. object detection, segmentation, padding, normalisation, cropping, centring, etc.) and then provide the processed image to the trained AI (embryo viability assessment) model 100, which executes on one of the on-demand cloud servers 32 to generate an embryo viability score 314. A report including the embryo viability score is generated 316 and this is sent or otherwise provided to the user 40, such as through the user interface 42. The user (e.g. embryologist) receives the embryo viability score via the user interface and can then use the viability score to assist in a decision of whether to implant the embryo, or which is the best embryo in the set to implant. The selected embryo is then implanted 205. To assist in further refinement of the AI model, pregnancy outcome data, such as detection (or not) of a heartbeat in the first ultrasound scan after implantation (normally around 6-10 weeks post fertilisation), may be provided to the system. This allows the AI model to be retrained and updated as more data becomes available.
[0072] The image may be captured using a range of imaging systems, such as those found in existing IVF clinics. This has the advantage of not requiring IVF clinics to purchase new imaging systems or use specific imaging systems. Imaging systems are typically light microscopes configured to capture single phase contrast images of embryos. However it will be understood that other imaging systems may be used, in particular optical light microscope systems using a range of imaging sensors and image capture techniques. These may include phase contrast microscopy, polarised light microscopy, differential interference contrast (DIC) microscopy, dark-field microscopy, and bright field microscopy. Images may be captured using a conventional optical microscope fitted with a camera or image sensor, or the image may be captured by a camera with an integrated optical system capable of taking a high resolution or high magnification image, including smart phone systems. Image sensors may be a CMOS sensor chip or a charge coupled device (CCD), each with associated electronics. The optical system may be configured to collect specific wavelengths or use filters including band pass filters to collect (or exclude) specific wavelengths. Some image sensors may be configured to operate or be sensitive to light in specific wavelengths, or at wavelengths beyond the optical range including in the Infrared (IR) or near IR. In some embodiments the imaging sensor is a multispectral camera which collects an image at multiple distinct wavelength ranges. Illumination systems may also be used to illuminate the embryo with light of a particular wavelength, in a particular wavelength band, or at a particular intensity. Stops and other components may be used to restrict or modify illumination to certain parts of the image (or image plane).
[0073] Further the image used in embodiments described herein may be sourced from video and time lapse imaging systems. A video stream is a periodic sequence of image frames where the interval between image frames is defined by the capture frame rate (e.g. 24 or 48 frames per second). Simi larly a time- lapse system captures a sequence of images with a very slow frame rate (e.g. 1 image per hour) to obtain a sequence of images as the embryo grows (post-ferti lisation). Accordingly it wi l l be understood that the image used in embodiments described herein may be a single image extracted from a video stream or a time lapse sequence of images of an embryo. Where an image is extracted from a video stream or a time lapse sequence, the image to use may be selected as the image with a capture time nearest to a reference time point such as 5.0 or 5.5 days post ferti lisation.
[0074] In some embodiments pre-processing may include an image quality assessment so that an image may be excluded if it fails the quality assessment. A further image may be captured if the original image fails the quality assessment. In embodiments where the image is selected from a video stream or time lapse sequence, the image selected is the first image nearest the reference time which passes the quality assessment. Alternatively a reference time window may be defined (e.g. 30 minutes following the start of day 5.0) along with image quality criteria. In this embodiment the image with the highest quality during the reference time window is selected. The image quality criteria used in performing the quality assessment may be based on a pixel colour distribution, a brightness range, and/or an unusual image property or feature that indicates poor quality or equipment failure. The thresholds may be determined by analysing a reference set of images. This may be based on manual assessment or automated systems which extract outliers from distributions.
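Purely by way of illustration (and not limiting the embodiments above), a quality assessment based on brightness range and pixel distribution could be sketched as follows; the threshold values BRIGHTNESS_RANGE and MIN_STD are hypothetical and would in practice be derived from a reference set of images as described.

```python
# Minimal sketch of an image quality check; threshold values are illustrative
# assumptions, not values from the described embodiments.
import cv2
import numpy as np

BRIGHTNESS_RANGE = (40, 220)   # acceptable mean brightness (hypothetical)
MIN_STD = 10.0                 # near-uniform images suggest a failed capture

def passes_quality_assessment(path: str) -> bool:
    """Return True if the image meets simple brightness/contrast criteria."""
    img = cv2.imread(path)
    if img is None:
        return False
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    mean_brightness = float(gray.mean())
    if not (BRIGHTNESS_RANGE[0] <= mean_brightness <= BRIGHTNESS_RANGE[1]):
        return False
    # a very narrow pixel distribution often indicates equipment failure
    return float(gray.std()) >= MIN_STD
```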
[0075] The generation of the Al embryo viabi l ity assessment model 100 can be further understood with reference to Figure 3A which is a schematic architecture diagram of cloud based computation system 1 configured to generate and use an Al model 100 configured to estimate an embryo viabi l ity score from an image according to an embodiment. With reference to Figure IB the Al model generation method is handled by the model monitor 21.
[0076] The model monitor 21 allows a user 40 to provide image data and metadata 14 to a data management platform which includes a data repository. A data preparation step is performed, for example to move the images to a specific folder, and to rename and perform pre-processing on the images such as object detection, segmentation, alpha channel removal, padding, cropping/localising, normalising, scaling, etc. Feature descriptors may also be calculated, and augmented images generated in advance. However additional pre-processing including augmentation may also be performed during training (i.e. on the fly). Images may also undergo quality assessment, to allow rejection of clearly poor images and allow capture of replacement images. Similarly patient records or other clinical data are processed (prepared) to extract an embryo viability classification (e.g. viable or non-viable) which is linked or associated with each image to enable use in training the AI models and/or in assessment. The prepared data is loaded 16 onto a cloud provider (e.g. AWS) template server 28 with the most recent version of the training algorithms. The template server is saved, and multiple copies made across a range of training server clusters 37, which may be CPU, GPU, ASIC, FPGA, or TPU (Tensor Processing Unit)-based, and which form the training servers 35. The model monitor web server 31 then applies for a training server 37 from the plurality of cloud based training servers 35 for each job submitted by the user 40. Each training server 35 runs the pre-prepared code (from template server 28) for training an AI model, using a library such as PyTorch, TensorFlow or equivalent, and may use a computer vision library such as OpenCV. PyTorch and OpenCV are open-source libraries with low-level commands for constructing CV machine learning models.
[0077] The training servers 37 manage the training process. This may include dividing the images into training, validation, and blind validation sets, for example using a random allocation process. Further, during a training-validation cycle the training servers 37 may also randomise the set of images at the start of the cycle so that in each cycle a different subset of images is analysed, or is analysed in a different ordering. If pre-processing was not performed earlier or was incomplete (e.g. during data management) then additional pre-processing may be performed including object detection, segmentation and generation of masked data sets (e.g. just Zona Pellucida images, or just IZC images), calculation/estimation of CV feature descriptors, and generating data augmentations. Pre-processing may also include padding, normalising, etc. as required. That is, the pre-processing step 102 may be performed prior to training, during training, or some combination (i.e. distributed pre-processing). The number of training servers 35 being run can be managed from the browser interface. As the training progresses, logging information about the status of the training is recorded 62 onto a distributed logging service such as CloudWatch 60. Key patient and accuracy information is also parsed out of the logs and saved into a relational database 36. The models are also periodically saved 51 to a data storage (e.g. AWS Simple Storage Service (S3) or similar cloud storage service) 50 so they can be retrieved and loaded at a later date (for example to restart in case of an error or other stoppage). The user 40 is sent email updates 44 regarding the status of the training servers if their jobs are complete, or an error is encountered.
[0078] Within each training cluster 37, a number of processes take place. Once a cluster is started via the web server 31, a script is automatically run, which reads the prepared images and patient records, and begins the specific PyTorch/OpenCV training code requested 71. The input parameters for the model training 28 are supplied by the user 40 via the browser interface 42 or via a configuration script. The training process 72 is then initiated for the requested model parameters, and can be a lengthy and intensive task. Therefore, so as not to lose progress while the training is in progress, the logs are periodically saved 62 to the logging (e.g. AWS CloudWatch) service 60 and the current version of the model (while training) is saved 51 to the data (e.g. S3) storage service 50 for later retrieval and use. An embodiment of a schematic flowchart of a model training process on a training server is shown in Figure 3B. With access to a range of trained AI models on the data storage service, multiple models can be combined together, for example using ensemble, distillation or similar approaches, in order to incorporate a range of deep learning models (e.g. PyTorch) and/or targeted computer vision models (e.g. OpenCV) to generate a robust AI model 100 which is provided to the cloud based delivery platform 30.
[0079] The cloud-based delivery platform 30 system then allows users 10 to drag and drop images directly onto the web application 34, which prepares the image and passes the image to the trained/validated AI model 100 to obtain an embryo viability score which is immediately returned in a report (as illustrated in Figure 2). The web application 34 also allows clinics to store data such as images and patient information in database 36, create a variety of reports on the data, create audit reports on the usage of the tool for their organisation, group or specific users, and manage billing and user accounts (e.g. create users, delete users, reset passwords, change access levels, etc.). The cloud-based delivery platform 30 also enables product admin to access the system to create new customer accounts and users, reset passwords, as well as access customer/user accounts (including data and screens) to facilitate technical support.
[0080] The various steps and variations in generation of embodiments of an Al model configured to estimate an embryo viabi l ity score from an image wi l l now be discussed in further detai l. With reference to Figure 1A, the model is trained and uses images captured 5 days post fertil isation (i.e. a 24 hour period from day 5:00:00 to day 5:23:59). Studies on a val idated model indicate that model performance is significantly improved using images taken at day 5 post ferti l isation compared to images taken at day 4 post ferti l isation. However as noted above effective models can sti l l be developed using a shorter time window such as 12 hours, or images taken at other days such as day 3 or day 4, or a minimum time period after ferti lisation such as at least 5 days (e.g. open ended time window). What is perhaps more important than the exact time window (e.g. 4 day or 5 days) is that images used for training of an Al model, and then subsequent classification by the trained Al model, are taken during simi lar and preferably the same time windows (e.g. the same 12 or 24 hour time window).
[0081] Prior to analysis, each image undergoes a pre-processing (image preparation) procedure 102 including at least segmenting the image to identify a Zona Pellucida region. A range of pre-processing steps or techniques may be applied. These may be performed after adding to the data store 14 or during training by a training server 37. In some embodiments an object detection (localisation) module is used to detect and localise the embryo in the image. Object detection/localisation comprises estimating the bounding box containing an embryo. This can be used for cropping and/or segmentation of the image. The image may also be padded with a given boundary, and then the color balance and brightness are normalized. The image is then cropped so that the outer region of the embryo is close to the boundary of the image. This is achieved using computer vision techniques for boundary selection, including the use of AI object detection models. Image segmentation is a computer vision technique that is useful for preparing the image for certain models to pick out relevant areas for the model training to focus on, such as the Zona Pellucida and the IntraZonal Cavity (IZC). The image may be masked to generate images of just the Zona Pellucida (i.e. crop to the border of the Zona Pellucida and mask the IZC - see Figure 6F) or just the IZC (i.e. crop to the border of the IZC to exclude the Zona Pellucida - see Figure 6G). The background may be left in the image or it may be masked as well. Embryo viability models may then be trained using just the masked images, for example Zona images which are masked to just contain the Zona Pellucida and background of the image, and/or IZC images which are masked to just contain the IZC. Scaling involves rescaling the image to a predefined scale to suit the particular model being trained. Augmentation involves making small changes to a copy of the image, such as rotations of the image, in order to control for the direction of the embryo dish. The use of segmentation prior to deep learning was found to have a significant effect on the performance of the deep learning method. Similarly augmentation was important for generating a robust model.
[0082] A range of image pre-processing techniques may be used for the preparation of human embryo images prior to training an AI model. These include:
Alpha Channel Stripping comprises stripping an image of an alpha channel (if present) to ensure it is coded in a 3-channel format (e.g. RGB), for example to remove transparency maps;
Padding/Bolstering each image with a padded border, to generate a square aspect ratio, prior to segmentation, cropping or boundary-finding. This process ensured that image dimensions were consistent, comparable, and compatible for deep learning methods, which typically require square dimension images as input, while also ensuring that no key components of the image were cropped;
Normalizing the RGB (red-green-blue) or gray-scale images to a fixed mean value for all the images (a brief sketch of this step appears after this list). For example this includes taking the mean of each RGB channel, and dividing each channel by its mean value. Each channel was then multiplied by a fixed value of 100/255, in order to ensure the mean value of each image in RGB space was (100, 100, 100). This step ensured that color biases among the images were suppressed, and that the brightness of each image was normalized;
Thresholding images using binary, Otsu, or adaptive methods. Includes morphological processing of the image using dilation (opening), erosion (closing) and scale gradients, and using a scaled mask to extract the outer and inner boundaries of a shape;
Object Detection/Cropping the image to localise the image on the embryo and ensure that there are no artefacts around the edges of the image. This may be performed using an Object Detector which uses an object detection model (discussed below) which is trained to estimate a bounding box which contains the embryo (including the Zona Pellucida);
Extracting the geometric properties of the boundaries using an elliptical Hough transform of the image contours, for example the best ellipse fit from an elliptical Hough transform calculated on the binary threshold map of the image. This method acts by selecting the hard boundary of the embryo in the image, and by cropping the square boundary of the new image so that the longest radius of the new ellipse is encompassed by the new image width and height, and so that the center of the ellipse is the center of the new image;
Zooming the image by ensuring a consistently centred image with a consistent border size around the elliptical region;
Segmenting the image to identify the Zona Pellucida region and the cytoplasmic IntraZonal Cavity (IZC) region. Segmentation may be performed by calculating the best-fit contour around an unelliptical image using a Geometrical Active Contour (GAC) model, or morphological snake, within a given region. The inner and outer regions of the snake can be treated differently depending on the focus of the trained model on the Zona Pellucida region or the cytoplasmic (IntraZonal Cavity) region, that may contain a blastocyst. Alternatively a Semantic Segmentation model may be trained which identifies a class for each pixel in an image. In one embodiment a semantic segmentation model was developed using a U-Net architecture with a pretrained ResNet-50 encoder to segment the Zona Pellucida and IZC. The model was trained using a BinaryCrossEntropy loss function;
Annotating the image by selecting feature descriptors, and masking all areas of the image except those within a given radius of the descriptor key point;
Resizing/scaling the entire set of images to a specified resolution; and
Tensor conversion comprising transforming each image to a tensor rather than a visually displayable image, as this data format is more usable by deep learning models. In one embodiment, tensor normalization was obtained from standard pre-trained ImageNet values with a mean of (0.485, 0.456, 0.406) and standard deviation of (0.229, 0.224, 0.225).
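As an illustrative sketch only, the channel-mean normalisation and tensor conversion steps listed above could be expressed as follows; the handling of an all-zero channel is an assumption added for robustness.

```python
# Sketch of per-channel mean normalisation (target mean 100/255) and tensor
# conversion with the standard ImageNet statistics quoted above.
import numpy as np
import torch
from torchvision import transforms

def normalise_rgb(image: np.ndarray) -> np.ndarray:
    """Divide each RGB channel by its mean and rescale so the mean is 100/255."""
    img = image.astype(np.float32)
    means = img.reshape(-1, 3).mean(axis=0)
    means[means == 0] = 1.0                   # guard against empty channels
    return img / means * (100.0 / 255.0)

# tensor conversion and ImageNet normalisation for deep learning models
to_tensor = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.485, 0.456, 0.406),
                         std=(0.229, 0.224, 0.225)),
])
```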
[0083] Figure 4 is a schematic diagram of binary thresholding 400 for boundary-finding on images of human embryos according to an embodiment. Figure 4 shows 8 binary thresholds applied to the same image, namely levels 60, 70, 80, 90, 100, 110 (images 401, 402, 403, 404, 405, 406, respectively), adaptive Gaussian 407 and Otsu's Gaussian 408. Figure 5 is a schematic diagram of a boundary-finding method 500 on an image of a human embryo according to an embodiment. The first panel shows the outer boundary 501, the inner boundary 502, and the image with detected inner and outer boundaries 503. The inner boundary 502 may approximately correspond to the IZC boundary, and the outer boundary 501 may approximately correspond to the outer edge of the Zona Pellucida region.
[0084] Figure 6A is an example of the use of a Geometrical Active Contour (GAC) model as appl ied to a fixed region of an image 600 for image segmentation according to an embodiment. The blue solid line 601 is the outer boundary of the Zona Pellucida region and the dashed green line 602 denotes the inner boundary defining the edge of the Zona Pel lucida region and the cytoplasmic (IntraZonal Cavity or IZC) region. Figure 6B is an example of the use of a morphological snake as applied to a fixed region of an image for image segmentation. Again the blue sol id l ine 611 is the outer boundary of the Zona Pellucida region and the dashed green l ine 612 denotes the inner boundary defining the edge of the Zona Pel lucida region and the cytoplasmic (inner) region. In this second image the boundary 612 (defining the cytoplasmic IntraZonal Cavity region) has an irregular shape with a bump or projecting portion in the lower right hand quadrant.
[0085] In another embodiment an object detector uses an object detection model which is trained to estimate a bounding box which contains the embryo. The goal of object detection is to identify the largest bounding box that contains all of the pixels associated with that object. This requires the model to model both the location of an object and a category/label (i.e. what is in the box), and thus detection models typically contain both an object classifier head and a bounding box regression head.
[0086] One approach is the Region-Convolutional Neural Net (or R-CNN), in which an expensive search process is applied to search for image patch proposals (potential bounding boxes). These bounding boxes are then used to crop the regions of the image of interest. The cropped images are then run through a classifying model to classify the contents of the image region. This process is complicated and computationally expensive. An alternative is Fast R-CNN which uses a CNN that proposes feature regions rather than a search for image patch proposals. This model uses a CNN to estimate a fixed number of candidate boxes, typically set to be between 100 and 2000. An even faster alternative approach is Faster R-CNN which uses anchor boxes to limit the search space of required boxes. By default, a standard set of 9 anchor boxes (each of different size) is used. Faster R-CNN uses a small network which jointly learns to predict the feature regions of interest, and this can speed up the runtime compared to R-CNN or Fast R-CNN as the expensive region search can be replaced.
[0087] Every feature activation coming out of the backbone model is considered an anchor point. For every anchor point, the 9 (or more, or fewer, depending on the problem) anchor boxes are generated. The anchor boxes correspond to common object sizes in the training dataset. As there are multiple anchor points with multiple anchor boxes, this results in tens of thousands of region proposals. The proposals are then filtered via a process called Non-Maximal Suppression (NMS) that selects the largest box that has confident smaller boxes contained within it. This ensures that there is only 1 box for each object. As NMS relies on the confidence of each bounding box prediction, a threshold must be set for when to consider objects as part of the same object instance. As the anchor boxes will not fit the objects perfectly, the job of the regression head is to predict the offsets to these anchor boxes which morph them into the best fitting bounding box.
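As a minimal illustration of the NMS filtering step described above (not part of the described embodiments), torchvision provides an NMS operator; the IoU threshold of 0.5 and the example boxes are illustrative choices.

```python
# Sketch of Non-Maximal Suppression over region proposals.
import torch
from torchvision.ops import nms

boxes = torch.tensor([[10., 10., 120., 120.],
                      [12., 14., 118., 121.],
                      [200., 200., 260., 270.]])   # (x1, y1, x2, y2) format
scores = torch.tensor([0.92, 0.85, 0.60])           # objectness confidences

keep = nms(boxes, scores, iou_threshold=0.5)        # indices of boxes to keep
print(boxes[keep])                                   # one box per object instance
```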
[0088] The detector can also special ise and only estimate boxes for a subset of objects e.g. only people for pedestrian detectors. Object categories that are not of interest are encoded into the 0-class which corresponds with the background class. During training, patches/boxes for the background class are usual ly sampled at random from image regions which contain no bounding box information. This step al lows the model to become invariant to those undesirable objects e.g. it can learn to ignore them rather than classifying them incorrectly. Bounding boxes are usual ly represented in two different formats: The most common is (xl, yl, x2, y2) where the point pl=(xl, yl) is the top left hand corner of the box and p2=(x2, y2) is the bottom right hand side. The other common box format is (cx, cy, height, width), where the bounding box/rectangle is encoded as a centre point of the box (cx, cy) and the box size (height, width). Different detection methods wi l l use different encodings/formats depending on the task and situation.
[0089] The regression head may be trained using an L1 loss and the classification head may be trained using a CrossEntropy loss. An objectness loss (is this background or an object?) may also be used. The final loss is computed as the sum of these losses. The individual losses may also be weighted, such as:

loss = λ1·regression_loss + λ2·classification_loss + λ3·objectness_loss (1)
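A minimal sketch of computing the weighted sum in equation (1) is given below, assuming the loss choices named above (L1 for the regression head, cross-entropy for classification, binary cross-entropy for objectness); the default weights of 1.0 are illustrative.

```python
# Sketch of the combined detection loss of equation (1).
import torch
import torch.nn.functional as F

def detection_loss(box_preds, box_targets,
                   cls_logits, cls_targets,
                   obj_logits, obj_targets,
                   lambdas=(1.0, 1.0, 1.0)):
    regression_loss = F.l1_loss(box_preds, box_targets)
    classification_loss = F.cross_entropy(cls_logits, cls_targets)
    objectness_loss = F.binary_cross_entropy_with_logits(obj_logits, obj_targets)
    l1, l2, l3 = lambdas
    return l1 * regression_loss + l2 * classification_loss + l3 * objectness_loss
```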
[0090] In one embodiment, an embryo detection model based upon Faster R-CNN was used. In this embodiment approximately 2000 images were hand labelled with the ground truth bounding boxes. The boxes were labelled such that the full embryo, including the Zona Pellucida region, was inside the bounding box. In the case of there being more than one embryo present, a.k.a. a double transfer, both embryos were labelled in order to allow the model to differentiate between double transfer and single transfer. As it is impossible to reconcile which embryo is which in a double transfer, the model was configured to raise an error to the user if a double transfer was detected. Embryos with multiple 'lobes' are labelled as being a single embryo.
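The following sketch shows only the general interface of a Faster R-CNN detector used to localise and crop an image; it uses torchvision's COCO-pretrained model as a stand-in for the embryo detector described above (which would be fine-tuned on the hand-labelled embryo boxes), and the file name, score threshold and double-transfer heuristic are assumptions for illustration.

```python
# Sketch of running a Faster R-CNN detector and cropping to the best box.
import torch
import torchvision
from PIL import Image
from torchvision.transforms.functional import to_tensor

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

image = Image.open("embryo.png").convert("RGB")      # hypothetical file name
with torch.no_grad():
    predictions = model([to_tensor(image)])[0]        # boxes, labels, scores

if len(predictions["boxes"]) == 0:
    raise RuntimeError("No embryo detected")          # flag for manual review
if int((predictions["scores"] > 0.5).sum()) > 1:
    print("Warning: more than one confident detection (possible double transfer)")

best = int(predictions["scores"].argmax())
x1, y1, x2, y2 = predictions["boxes"][best].round().int().tolist()
cropped = image.crop((x1, y1, x2, y2))                # localise on the embryo
```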
[0091] As an alternative to GAC segmentation, semantic segmentation may be used. Semantic segmentation is the task of trying to predict a category or label for every pixel. Tasks like semantic segmentation are referred to as pixel-wise dense prediction tasks as an output is required for every input pixel. Semantic segmentation models are set up differently to standard models as they require a full image output. Typically, a semantic segmentation (or any dense prediction) model will have an encoding module and a decoding module. The encoding module is responsible for creating a low-dimensional representation of the image (sometimes called a feature representation). This feature representation is then decoded into the final output image via the decoding module. During training, the predicted label map (for semantic segmentation) is then compared against the ground truth label maps that assign a category to each pixel, and the loss is computed. The standard loss function for segmentation models is either BinaryCrossEntropy or standard CrossEntropy loss (depending on whether the problem is multi-class or not). These implementations are identical to their image classification cousins, except that the loss is applied pixel-wise (across the image channel dimension of the tensor).
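Purely as an illustration of the pixel-wise loss just described, a binary (e.g. Zona Pellucida vs background) mask could be scored as follows; the tensor shapes are illustrative.

```python
# Sketch of a pixel-wise BinaryCrossEntropy loss for a dense prediction model.
import torch
import torch.nn.functional as F

logits = torch.randn(4, 1, 256, 256)                       # N x C x H x W output
target = torch.randint(0, 2, (4, 1, 256, 256)).float()     # ground-truth mask

# the loss is applied at every pixel and averaged over the image
loss = F.binary_cross_entropy_with_logits(logits, target)
```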
[0092] The Fully Convolutional Network (FCN) style architecture is commonly used in the field for generic semantic segmentation tasks. In this architecture, a pretrained model (such as a ResNet) is first used to encode a low resolution image (at approximately 1/32 of the original resolution, though this can be 1/8 if dilated convolutions are used). This low resolution label map is then up-sampled to the original image resolution and the loss is computed. The intuition behind predicting a low resolution label map is that semantic segmentation masks are very low frequency and do not need all the extra parameters of a larger decoder. More complicated versions of this model exist, which use multi-stage upsampling to improve segmentation results. Simply stated, the loss is computed at multiple resolutions in a progressive manner to refine the predictions at each scale.
[0093] One down side of this type of model, is that if the input data is high resolution, or contains high frequency information (i.e. smal ler/thinner objects), the low-resolution label map wi l l fai l to capture these smal ler structures (especial ly when the encoding model does not use di lated convolutions). In a standard encoder/Convolutional Neural Network, the input image/image features are progressively downsampled as the model gets deeper. However, as the image/features are downsampled key high frequency detai ls can be lost. Thus to address this, an alternative U-Net architecture may be used that instead uses skip connections between the symmetric components of the encoder and decoder. Simply put, every encoding block has a corresponding block in the decoder. The features at each stage are then passed to the decoder alongside the lowest resolution feature representation. For each of the decoding blocks, the input feature representation is upsampled to match the resolution of its corresponding encoding block. The feature representation from the encoding block and the upsampled lower resolution features are then concatenated and passed through a 2D convolution layer. By concatenating the features in this way, the decoder can learn to refine the inputs at each block, choosing which details to integrate (low-res details or high-res detai ls) depending on its input.
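A minimal sketch of one U-Net decoding block of the kind just described is shown below; the channel sizes and bilinear upsampling choice are illustrative assumptions rather than the specific architecture used in the embodiments.

```python
# Sketch of a U-Net decoder block: upsample the low-resolution features,
# concatenate the skip connection from the matching encoder block, then refine
# with a 2D convolution.
import torch
import torch.nn as nn

class DecoderBlock(nn.Module):
    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch + skip_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x, skip):
        x = self.up(x)                        # match the encoder resolution
        x = torch.cat([x, skip], dim=1)       # fuse low-res and high-res detail
        return self.conv(x)

block = DecoderBlock(in_ch=256, skip_ch=128, out_ch=128)
out = block(torch.randn(1, 256, 16, 16), torch.randn(1, 128, 32, 32))
```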
[0094] An example of a U-Net architecture 620 is shown in Figure 6C. The main difference between FCN style models and U-Net style models is that in the FCN model, the encoder is responsible for predicting a low resolution label map that is then upsampled (possibly progressively). Whereas, the U-Net model does not have a ful ly complete label map prediction until the final layer. Ultimately, there do exist many variants of these models that trade off the differences between them (e.g. Hybrids). U-net architectures may also use pre-trained weights, such as ResNet-18 or ResNet-50, for use in cases where there is insufficient data to train models from scratch.
[0095] In some embodiments segmentation was performed using a U-Net architecture with a pre-trained ResNet-50 encoder trained using BinaryCrossEntropy to identify the Zona Pellucida region and the IntraZonal Cavity region. This U-Net architecture based segmenter generally outperformed active contour based segmentation, particularly on poorer quality images. Figures 6D to 6G illustrate segmentation according to an embodiment. Figure 6D is an image of a day 5 embryo 630 comprising a Zona Pellucida region 631 surrounding the IntraZonal Cavity (IZC, 632). In this embodiment the embryo is starting to hatch with the IZC emerging (hatching) from the Zona Pellucida. The embryo is surrounded by background pixels 633. Figure 6E is a padded image 640 created from Figure 6D by adding padding pixels 641, 642 to create a square image more easily processed by the deep learning methods. Figure 6F shows a Zona image 650 in which the IZC is masked 652 to leave the Zona Pellucida 631 and background pixels 633, and Figure 6G shows an IZC image 660 in which the Zona Pellucida and background are masked 661 leaving only the IZC region 632. Once segmented, image sets could be generated in which all regions other than a desired region were masked. AI models could then be trained on these specific image sets. That is, AI models could be separated into two groups: first, those that included additional image segmentation, and second, those that required the entire unsegmented image. Models that were trained on images that masked the IZC, exposing the Zona region, were denoted as Zona models. Models that were trained on images that masked the Zona (denoted IZC models), and models that were trained on full-embryo images (i.e. the second group), were also considered in training.
[0096] In one embodiment, to ensure uniqueness of each image, so that copies of records do not bias the results, the name of the new image is set equal to the hash of the original image contents, as a png (lossless) fi le. When run, the data parser wil l output images in a multi-threaded way, for any images that do not already exist in the output directory (which, if it doesn't exist, wi ll create it), so if it is a lengthy process, it can be restarted from the same point even if it is interrupted. The data preparation step may also include processing the metadata to remove images associated with inconsistent or contradictory records, and identify any mistaken clinical records. For example a script may be run on a spreadsheet to conform the metadata into a predefined format. This ensures the data used to generate and train the models is of high quality, and has uniform characteristics (e.g. size, colour, scale etc.).
[0097] In some embodiments the data is cleaned by identifying images with likely incorrect pregnancy outcome labels (i.e. mis-labelled data), and excluding or re-labelling the identified images. In one embodiment this is performed by estimating the likelihood that a pregnancy outcome label associated with an image is incorrect and comparing the likelihood against a threshold value. If the likelihood exceeds the threshold value then the image is excluded or relabelled. Estimating the likelihood that a pregnancy outcome label is incorrect may be performed using a plurality of AI classification models and a k-fold cross validation method. In this approach the images are split into k mutually exclusive validation datasets. Each of the plurality of AI classification models is trained on k-1 validation datasets in combination and then used to classify images in the remaining validation dataset. The likelihood is then determined based on the number of AI classification models which misclassify the pregnancy outcome label of an image. In some embodiments a deep learning model may further be used to learn the likelihood value.
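A sketch of this k-fold mislabel screen is given below; it assumes per-image feature vectors and labels are already available, and the choice of two classifier types and a likelihood threshold of 0.8 are illustrative only.

```python
# Sketch of estimating mislabel likelihood via k-fold cross validation: several
# classifiers are trained on k-1 folds and vote on the held-out fold; images
# misclassified by most models are flagged as likely mislabelled.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

def mislabel_likelihood(X, y, k=5):
    model_factories = [lambda: RandomForestClassifier(n_estimators=100),
                       lambda: SVC()]
    miscounts = np.zeros(len(y))
    for train_idx, val_idx in StratifiedKFold(n_splits=k, shuffle=True).split(X, y):
        for make_model in model_factories:
            model = make_model().fit(X[train_idx], y[train_idx])
            preds = model.predict(X[val_idx])
            miscounts[val_idx] += (preds != y[val_idx])
    return miscounts / len(model_factories)   # fraction of models that disagree

# flagged = mislabel_likelihood(X, y) > 0.8   # exclude or re-label these images
```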
[0098] Once the data is suitably pre-processed it can then be used to train one or more AI models. In one embodiment the AI model is a deep learning model trained on a set of Zona Pellucida images in which all regions of the images except the Zona Pellucida are masked during pre-processing. In one embodiment multiple AI models are trained and then combined using an ensemble or distillation method. The AI models may be one or more deep learning models and/or one or more computer vision (CV) models. The deep learning models may be trained on full embryo images, Zona images or IZC images. The computer vision (CV) models may be generated using a machine learning method using a set of feature descriptors calculated from each image. Each of the individual models is configured to estimate an embryo viability score of an embryo in an image, and the AI model combines selected models to produce an overall embryo viability score that is returned by the AI model.
[0099] Training is performed using randomised datasets. Sets of complex image data, can suffer from uneven distribution, especial ly if the data set is smal ler than around 10,000 images, where exemplars of key viable or non-viable embryos are not distributed evenly through the set. Therefore, several (e.g. 20) randomizations of the data are considered at one time, and then split into the training, val idation and blind test subsets defined below. Al l randomizations are used for a single training example, to gauge which exhibits the best distribution for training. As a corol lary, it is also beneficial to ensure that the ratio between the number of viable and non-viable embryos is the same across every subset. Embryo images are quite diverse, and thus ensuring even distribution of images across test and training sets can be used to improve performance. Thus after performing a randomisation the ratio of images with a viable classification to images with a non-viable classification in each of the training set, validation set and blind val idation set is calculated and tested to ensure that the ratios are simi lar. For example this may include testing if the range of the ratios is less than a threshold value, or within some variance taking into account the number of images. If the ranges are not simi lar then the randomisation is discarded and a new randomisation is generated and tested unti l a randomisation is obtained in which the ratios are simi lar. More general ly if the outcome is a n-ary outcome having n states then after randomisation is performed the calculation step may comprise calculating the frequency of each of the n-ary outcome states in each of the training set, validation set and bl ind val idation set, and testing that the frequencies are simi lar, and if the frequencies are not simi lar then discarding the al location and repeating the randomisation unti l a randomisation is obtained in which the frequencies are simi lar.
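By way of illustration only, a randomised split with a class-ratio check of the kind described above could be sketched as follows; the 70/20/10 split fractions and the 0.05 tolerance are hypothetical values.

```python
# Sketch of generating a randomised train/validation/blind-validation split and
# checking that the viable:non-viable ratio is similar across the subsets.
import numpy as np

def split_with_similar_ratios(labels, fractions=(0.7, 0.2, 0.1),
                              tolerance=0.05, max_attempts=20):
    labels = np.asarray(labels)          # 1 = viable, 0 = non-viable
    n = len(labels)
    for _ in range(max_attempts):
        order = np.random.permutation(n)
        cut1 = int(fractions[0] * n)
        cut2 = cut1 + int(fractions[1] * n)
        subsets = (order[:cut1], order[cut1:cut2], order[cut2:])
        ratios = [labels[idx].mean() for idx in subsets]   # fraction viable
        if max(ratios) - min(ratios) < tolerance:
            return subsets                                  # acceptable randomisation
    raise RuntimeError("No randomisation with similar class ratios found")
```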
[00100] Training further comprises performing a plurality of training-val idation cycles. In each train-val idate cycle each randomization of the total useable dataset is spl it into typical ly 3 separate datasets known as the training, val idation and blind validation datasets. In some variants more than 3 could be used, for example the val idation and bl ind val idation datasets could be stratified into multiple sub test sets of varying difficulty.
[00101] The first set is the training dataset and comprises at least 60% and preferably 70-80% of images. These images are used by deep learning models and computer vision models to create an embryo viabi lity assessment model to accurately identify viable embryos. The second set is the Validation dataset, which is typical ly around (or at least) 10% of images: This dataset is used to validate or test the accuracy of the model created using the training dataset. Even though these images are independent of the training dataset used to create the model, the val idation dataset sti l l has a smal l positive bias in accuracy because it is used to monitor and optimize the progress of the model training. Hence, training tends to be targeted towards models that maximize the accuracy of this particular val idation dataset, which may not necessari ly be the best model when appl ied more general ly to other embryo images. The third dataset is the Bl ind val idation dataset which is typical ly around 10-20% of the images. To address the positive bias with the validation dataset described above, a third bl ind val idation dataset is used to conduct a final unbiased accuracy assessment of the final model. This val idation occurs at the end of the model ling and val idation process, when a final model has been created and selected. It is important to ensure that the final model 's accuracy is relatively consistent with the val idation dataset to ensure that the model is general izable to al l embryos images. The accuracy of the val idation dataset wi ll l ikely be higher than the bl ind val idation dataset for the reasons discussed above. Results of the bl ind val idation dataset are a more reliable measure of the accuracy of the model .
[00102] In some embodiments pre-processing the data further comprises augmenting images, in which a change is made to the image. This may be performed prior to training, or during training (i.e. on the fly). Augmentation may comprise directly augmenting (altering) an image, or making a copy of an image with a small change. Any number of augmentations may be performed, including varying amounts of 90 degree rotations of the image, mirror flips, a non-90 degree rotation where a diagonal border is filled in to match a background colour, image blurring, adjusting an image contrast using an intensity histogram, and applying one or more small random translations in the horizontal and/or vertical direction, random rotations, adding JPEG (or compression) noise, random image resizing, random hue jitter, random brightness jitter, contrast limited adaptive histogram equalization, random flip/mirror, image sharpening, image embossing, random brightness and contrast, RGB colour shift, random hue and saturation, channel shuffle (swap RGB to BGR or RBG or other), coarse dropout, motion blur, median blur, Gaussian blur, and random shift-scale-rotate (i.e. all three combined). The same set of augmented images may be used for multiple training-validation cycles, or new augmentations may be generated on the fly during each cycle. An additional augmentation used for CV model training is the alteration of the 'seed' of the random number generator for extracting feature descriptors. The techniques for obtaining computer vision descriptors contain an element of randomness in extracting a sample of features. This random number can be altered and included among the augmentations to provide more robust training for CV models.
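For illustration, a small on-the-fly augmentation pipeline covering a subset of the transforms listed above could be assembled with torchvision; the specific parameter values are illustrative assumptions.

```python
# Sketch of an on-the-fly augmentation pipeline (rotations, flips, colour
# jitter, blur, random resized crop).
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.RandomRotation(degrees=180, fill=0),   # fill the diagonal border
    transforms.ColorJitter(brightness=0.1, contrast=0.1, hue=0.05),
    transforms.GaussianBlur(kernel_size=3),
    transforms.RandomResizedCrop(224, scale=(0.9, 1.0)),
    transforms.ToTensor(),
])
```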
[00103] Computer vision models rely on identifying key features of the image and expressing them in terms of descriptors. These descriptors may encode qualities such as pixel variation, gray level, roughness of texture, fixed corner points or orientation of image gradients, which are implemented in the OpenCV or similar libraries. By selecting such features to search for in each image, a model can be built by finding which arrangement of the features is a good indicator for embryo viability. This procedure is best carried out by machine learning processes such as Random Forest or Support Vector Machines, which are able to separate the images in terms of their descriptions from the computer vision analysis.
[00104] A range of computer vision descriptors are used, encompassing both small and large scale features, which are combined with traditional machine learning methods to produce "CV models" for embryo selection. These may optionally be later combined with deep learning (DL) models, for example into an Ensemble model, or used in distillation to train a student model. Suitable computer vision image descriptors include:
Zona Pellucida through Hough transformation: finds inner and outer ellipses to approximate the Zona Pellucida and IntraZonal Cavity split, and records the mean and difference in radii as features;
Gray-Level Co-Occurrence Matrix (GLCM) Texture Analysis: detects roughness of different regions by comparing neighbouring pixels in the region. The sample feature descriptors used are: angular second moment (ASM), homogeneity, correlation, contrast and entropy. The selection of the region is obtained by randomly sampling a given number of square sub-regions of the image, of a given size, and recording the results of each of the five descriptors for each region as the total set of features;
Histogram of Oriented Gradients (HOG): detects objects and features using scale-invariant feature transform descriptors and shape contexts. This method has precedence for being used in embryology and other medical imaging, but does not itself constitute a machine learning model;
Oriented Features from Accelerated Segment Test (FAST) and Rotated Binary Robust Independent Elementary Features (BRIEF) (ORB): an industry standard alternative to SIFT and SURF features, which relies on a FAST key-point detector (specific pixel) and BRIEF descriptor combination, and which has been modified to include rotation invariance;
Binary Robust Invariant Scalable Key-points (BRISK): a FAST-based detector in combination with an assembly of intensity comparisons of pixels, which is achieved by sampling each neighbourhood around a feature specified at a key-point;
Maximally Stable Extremal Regions (MSER): a local morphological feature detection algorithm, through extracting covariant regions, which are stable connected components related to one or more gray-level sets extracted from the image;
Good Features To Track (GFTT): a feature detector that uses an adaptive window size to detect textures of corners, identified using Harris Corner Detection or Shi-Tomasi Corner Detection, and extracting points that exhibit a high standard deviation in their spatial intensity profile.
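As an illustration of extracting one of these descriptor families, GLCM texture features over randomly sampled sub-regions could be computed with scikit-image as sketched below; the region count and size are illustrative, entropy is omitted as it is not a built-in GLCM property, and older scikit-image releases spell the functions greycomatrix/greycoprops.

```python
# Sketch of GLCM texture descriptors (ASM, homogeneity, correlation, contrast)
# from randomly sampled square sub-regions of an 8-bit grayscale image.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_image, n_regions=6, size=64, rng=np.random.default_rng(0)):
    h, w = gray_image.shape
    features = []
    for _ in range(n_regions):
        y = int(rng.integers(0, h - size))
        x = int(rng.integers(0, w - size))
        patch = gray_image[y:y + size, x:x + size]
        glcm = graycomatrix(patch, distances=[1], angles=[0], levels=256,
                            symmetric=True, normed=True)
        for prop in ("ASM", "homogeneity", "correlation", "contrast"):
            features.append(graycoprops(glcm, prop)[0, 0])
    return np.array(features)
```

Note that the random number generator seed (the rng argument) is the same quantity that can be varied as an augmentation for CV model training, as described above.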
[00105] Figure 7 is a plot 700 of a Gray Level Co-occurrence Matrix (GLCM) showing GLCM correlation of sample feature descriptors 702: ASM, homogeneity, correlation, contrast and entropy, calculated on a set of six Zona Pellucida regions (labelled 711 to 716; cross hatch) and six cytoplasm/IZC regions (labelled 721 to 726; dotted) in image 701.
[00106] A computer vision (CV) model is constructed by the following method. One (or more) of the computer vision image descriptor techniques listed above is selected, and the features are extracted from all of the images in the training dataset. These features are arranged into a combined array, and then supplied to a KMeans unsupervised clustering algorithm; this array is called the Codebook, for a 'bag of visual words'. The number of clusters is a free parameter of the model. The clustered features from this point on represent the 'custom features' that are used, through whichever combination of algorithms, to which each individual image in the validation or test set will be compared. Each image has features extracted and is clustered individually. For a given image with clustered features, the 'distance' (in feature-space) to each of the clusters in the codebook is measured using a KDTree query algorithm, which gives the closest clustered feature. The results from the tree query can then be represented as a histogram, showing the frequency at which each feature occurs in that image. Finally, the question of whether a particular combination of these features corresponds to a measure of embryo viability needs to be assessed, using machine learning. Here, the histogram and the ground-truth outcomes are used to carry out supervised learning. The methods used to obtain the final selection model include Random Forest or Support Vector Machine (SVM). A sketch of this pipeline is given after the following paragraph.

[00107] A plurality of deep learning models may also be generated. Deep learning models are based on neural network methods, typically convolutional neural networks (CNN) that consist of a plurality of connected layers, with each layer of 'neurons' containing a non-linear activation function, such as a 'rectifier', 'sigmoid' etc. Contrasting with feature based methods (i.e. CV models), deep learning and neural networks instead 'learn' features rather than relying on hand designed feature descriptors. This allows them to learn 'feature representations' that are tailored to the desired task. These methods are suitable for image analysis, as they are able to pick up both small details and overall morphological shapes in order to arrive at an overall classification. A variety of deep learning models are available, each with different architectures (i.e. different number of layers and connections between layers), such as residual networks (e.g. ResNet-18, ResNet-50 and ResNet-101), densely connected networks (e.g. DenseNet-121 and DenseNet-161), and other variations (e.g. InceptionV4 and Inception-ResNetV2). Deep learning models may be assessed based on stabilisation (how stable the accuracy value was on the validation set over the training process), transferability (how well the accuracy on the training data correlated with the accuracy on the validation set) and prediction accuracy (which models provided the best validation accuracy, for both viable and non-viable embryos, the total combined accuracy, and the balanced accuracy, defined as the weighted average accuracy across both class types of embryos). Training involves trying different combinations of model parameters and hyper-parameters, including input image resolution, choice of optimizer, learning rate value and scheduling, momentum value, dropout, and initialization of the weights (pre-training).
A loss function may be defined to assess the performance of a model, and during training a deep learning model is optimised by varying learning rates to drive the update mechanism for the network's weight parameters to minimize an objective/loss function.
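Referring back to paragraph [00106], the bag-of-visual-words CV model construction could be sketched as follows; per-image descriptor arrays (e.g. ORB or GLCM features stacked per image) are assumed to have been extracted already, and the cluster count and forest size are illustrative free parameters.

```python
# Sketch of the bag-of-visual-words CV model: cluster descriptors from the
# training images into a codebook with KMeans, histogram each image against the
# codebook via a KDTree query, then fit a supervised classifier (Random Forest)
# on the histograms against the ground-truth outcomes.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from scipy.spatial import cKDTree

def build_codebook(per_image_descriptors, n_clusters=50):
    all_descriptors = np.vstack(per_image_descriptors)
    kmeans = KMeans(n_clusters=n_clusters, n_init=10).fit(all_descriptors)
    return kmeans.cluster_centers_                       # the "Codebook"

def image_histogram(descriptors, codebook):
    _, nearest = cKDTree(codebook).query(descriptors)    # closest clustered feature
    return np.bincount(nearest, minlength=len(codebook)) / len(descriptors)

def train_cv_model(per_image_descriptors, outcomes, n_clusters=50):
    codebook = build_codebook(per_image_descriptors, n_clusters)
    X = np.array([image_histogram(d, codebook) for d in per_image_descriptors])
    clf = RandomForestClassifier(n_estimators=200).fit(X, outcomes)
    return codebook, clf
```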
[00108] Deep learning models may be implemented using a variety of libraries and software languages. In one embodiment, the PyTorch library is used to implement neural networks in the Python language. The PyTorch library additionally allows tensors to be created that utilize hardware (GPU, TPU) acceleration, and includes modules for building multiple layers for neural networks. While deep learning is one of the most powerful techniques for image classification, it can be improved by providing guidance through the use of the segmentation or augmentation described above. The use of segmentation prior to deep learning was found to have a significant effect on the performance of the deep learning method, and assisted in generating contrasting models. Thus preferably at least some deep learning models were trained on segmented images, such as images in which the Zona Pellucida has been identified, or in which the image is masked to hide all regions except the Zona Pellucida region. In some embodiments the plurality of deep learning models includes at least one model trained on segmented images, and one model trained on images not subject to segmentation. Similarly augmentation was important for generating robust models.

[00109] The effectiveness of an approach is determined by the architecture of the Deep Neural Network (DNN). However, unlike the feature descriptor methods, the DNN learns the features itself throughout the convolutional layers, before employing a classifier. That is, without adding in proposed features by hand, the DNN can be used to check existing practices in the literature, as well as developing previously unguessed descriptors, especially those that are difficult for the human eye to detect and measure.
[00110] The architecture of the DNN is constrained by the size of images as input, the hidden layers, which have dimensions of the tensors describing the DNN, and a l inear classifier, with the number of class labels as output. Most architectures employ a number of down-sampl ing ratios, with smal l (3x3 pixel) filters to capture notion of left/right, up-down and centre. Stacks of a) Convolutional 2d layers, b) Rectified Linear Units (ReLU), and c) Max Pooling layers al low the number of parameters through the DN N to remain tractable, whi le al lowing the fi lters to pass over the high level (topological) features of an image, mapping them onto the intermediate and final ly microscopic features embedded in the image. The top layer typical ly includes one or more ful ly-connected neural network layers, which act as a classifier, simi lar to SVM . Typical ly, a Softmax layer is used to normal ize the resulting tensor as containing probabi l ities after the fully connected classifier. Therefore, the output of the model is a l ist of probabi l ities that the image is either non-viable or viable.
[00111] Figure 8 is a schematic architecture diagram of a deep learning method, including convolutional layers, which transform the input image to a prediction, after training, according to an embodiment. Figure 8 shows a series of layers based on a ResNet-152 architecture according to an embodiment. The components are annotated as follows. "CONV" indicates a convolutional 2D layer, which computes cross-correlations of the input from the layer below. Each element or neuron within the convolutional layer processes the input from its receptive field only, e.g. 3x3 or 7x7 pixels. This reduces the number of learnable parameters required to describe the layer, and allows deeper neural networks to be formed than those constructed from fully-connected layers where every neuron is connected to every other neuron in the subsequent layer, which is highly memory intensive and prone to overfitting.
Convolutional layers are also spatial translation invariant, which is useful for processing images where the subject matter cannot be guaranteed to be precisely centred. "POOL" refers to the max pooling layers, which is a down-sampling method whereby only representative neuron weights are selected within a given region, to reduce the complexity of the network and also reduce overfitting. For example, for weights within a 4x4 square region of a convolutional layer, the maximum value of each 2x2 corner block is computed, and these representative values are then used to reduce the size of the square region to 2x2 in dimension. "RELU" indicates the use of rectified linear units, which act as a nonlinear activation function. As a common example, the ramp function takes the following form for an input x from a given neuron, and is analogous to the activation of neurons in biology:

f(x) = max(0, x) (2)
The final layer at the end of the network, after the input has passed through all of the convolutional layers, is typically a fully connected (FC) layer, which acts as a classifier. This layer takes the final input and outputs an array of the same number of dimensions as the classification categories. For two categories, e.g. 'viable Day 5 embryo' and 'non-viable Day 5 embryo', the final layer will output an array of length 2, which indicates the proportion that the input image contains features that align with each category respectively. A final softmax layer is often added, which transforms the final numbers in the output array to percentages that fit between 0 and 1 and together add up to a total of 1, so that the final output can be interpreted as a confidence limit for the image to be classified in one of the categories.
[00112] One suitable DNN architecture is ResNet (https://ieeexplore.ieee.org/document/7780459) such as ResNet-152, ResNet-101, ResNet-50 or ResNet-18. ResNet advanced the field significantly in 2016 by using an extremely large number of hidden layers, and introducing 'skip connections', also known as 'residual connections'. Only the difference from one layer to the next is calculated, which is more time-cost efficient, and if very little change is detected at a particular layer, that layer is skipped over, thus creating a network that will very quickly tune itself to a combination of small and large features in the image. Another suitable DNN architecture is DenseNet
(https://ieeexplore.ieee.org/document/8099726), such as DenseNet-161, DenseNet-201, DenseNet-169, DenseNet-121. DenseNet is an extension of ResNet, where now every layer can skip over to any other layer, with the maximal number of skip connections. This architecture requires much more memory, and so is less efficient, but can exhibit improved performance over ResNet. With a large number of model parameters, it is also easy to overtrain/overfit, so all model architectures are often combined with methods to control for this; DenseNet-121 and DenseNet-161 in particular were used. Another suitable DNN architecture is Inception (-ResNet) (https://www.aaai.org/ocs/index.php/AAAI/AAAI17/paper/viewPaper/14806), such as InceptionV4 and InceptionResNetV2. Inception represents a more complicated convolutional unit, whereby instead of simply using a fixed size filter (e.g. 3x3 pixels) as described above, several sized filters are calculated in parallel (5x5, 3x3, 1x1 pixels), with weights that are free parameters, so that the neural network may prioritize which filter is most suitable at each layer in the DNN. An extension of this kind of architecture is to combine it with skip connections in the same way as ResNet, to create an Inception-ResNet. In particular ResNet-18, ResNet-50, ResNet-101, DenseNet-121 and DenseNet-161 generally outperformed the other architectures.
[00113] As discussed above both computer vision and deep learning methods are trained using a plural ity of Train-Val idate Cycles on pre-processed data. The Train-Val idate cycle fol lows the fol lowing framework: [00114] The training data is pre-processed and split into batches (the number of data in each batch is a free model parameter but controls how fast and how stably the algorithm learns). Augmentation may be performed prior to splitting or during training.
[00115] After each batch, the weights of the network are adjusted, and the running total accuracy so far is assessed. In some embodiments weights are updated during the batch, for example using gradient accumulation. When all images have been assessed, one epoch has been carried out; the training set is shuffled (i.e. a new randomisation of the set is obtained), and the training starts again from the top, for the next epoch.
[00116] During training a number of epochs may be run, depending on the size of the data set, the complexity of the data and the complexity of the model being trained. An optimal number of epochs is typically in the range of 2 to 100, but may be more depending on the specific case.
[00117] After each epoch, the model is run on the validation set, without any training taking place, to provide a measure of the progress in how accurate the model is, and to guide the user whether more epochs should be run, or if more epochs will result in overtraining. The validation set guides the choice of the overall model parameters, or hyperparameters, and is therefore not a truly blind set.
However, it is important that the distribution of images of the validation set is very similar to the ultimate blind test set that will be run after training.
[00118] In reporting the validation set results, augmentations may also be included for each image (all), or not (noaug). Furthermore, the augmentations for each image may be combined to provide a more robust final result for the image. Several combination/voting strategies may be used including: mean- confidence (taking the mean value of the inference of the model across all the augmentations), median- confidence, majority-mean-confidence (taking the majority viability assessment, and only providing the mean confidence of those that agree, and if no majority, take the mean), max-confidence, weighted average, majority-max-confidence, etc.
[00119] Another method used in the field of machine learning is transfer learning, where a previously trained model is used as the starting point to train a new model. This is also referred to as pre-training. Pre-training is used extensively, which allows new models to be built rapidly. There are two kinds of pre-training. One embodiment of pre-training is ImageNet pre-training. Most model architectures are provided with a set of pre-trained weights, using the standard image database ImageNet. While it is not specific for medical images, and includes one thousand different types of objects, it provides a method for a model to have already learnt to identify shapes. The classifier of the thousand objects is completely removed, and a new classifier for viability replaces it. This kind of pre-training outperforms other initialization strategies. Another embodiment of pre-training is custom pre-training which uses a previously-trained embryo model, either from a study with a different set of outcomes, or on different images (PGS instead of viability, or randomly assigned outcomes). These models only provide a small benefit to the classification.
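By way of a minimal illustration of ImageNet pre-training with a replaced classifier (the choice of ResNet-50 here is illustrative), the following sketch loads ImageNet weights and attaches a new two-class (viable / non-viable) head.

```python
# Sketch of ImageNet pre-training: load a pretrained ResNet, discard the
# 1000-class ImageNet classifier and attach a new viability classifier.
import torch.nn as nn
from torchvision import models

model = models.resnet50(pretrained=True)          # ImageNet weights
model.fc = nn.Linear(model.fc.in_features, 2)     # new 2-class viability head
```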
[00120] For non pre-trained models, or new layers added after pre-training such as the classifier, the weights need to be initial ized. The initial ization method can make a difference to the success of the training. Al l weights set to 0 or 1, for example, wi ll perform very poorly. A uniform arrangement of random numbers, or a Gaussian distribution of random numbers, also represent commonly used options. These are also often combined with a normal ization method, such as Xavier or Kaiming algorithms. This addresses an issue where nodes in the neural network can become 'trapped' in a certain state, by becoming saturated (close to 1), or dead (close to 0), where it is difficult to measure in which direction to adjust the weights associated with that particular neuron. This is especially prevalent when introducing a hyperbol ic-tangent or a sigmoid function, and is addressed by the Xavier initial ization.
[00121] In the Xavier initialization protocol, the neural network weights are randomized in such a way that the inputs of each layer to the activation function will not fall too close to either the saturated or dead extreme ends. The use of ReLU, however, is better behaved, and different initializations provide a smaller benefit, such as the Kaiming initialization. The Kaiming initialization is better suited to the case where ReLU is used as the neuron's non-linear activation profile. This achieves the same process as the Xavier initialization effectively.
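A minimal sketch of applying these initializations to newly added (non pre-trained) layers is given below; the pairing of Kaiming initialization with convolutions feeding ReLUs and Xavier with the final linear layer is an illustrative choice consistent with the discussion above.

```python
# Sketch of weight initialization for non pre-trained layers.
import torch.nn as nn

def init_weights(module):
    if isinstance(module, nn.Conv2d):
        nn.init.kaiming_normal_(module.weight, nonlinearity="relu")
    elif isinstance(module, nn.Linear):
        nn.init.xavier_uniform_(module.weight)
        nn.init.zeros_(module.bias)

# model.apply(init_weights)   # apply recursively to all sub-modules
```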
[00122] In deep learning, a range of free parameters is used to optimize the model training on the validation set. One of the key parameters is the learning rate, which determines by how much the underlying neuron weights are adjusted after each batch. When training a selection model, overtraining, or overfitting of the data, should be avoided. This happens when the model contains too many parameters to fit, and essentially 'memorizes' the data, trading generalizability for accuracy on the training or validation sets. This is to be avoided, since generalizability is the true measure of whether the model has correctly identified the true underlying parameters that indicate embryo health among the noise of the data, rather than compromising this in order to fit the training set perfectly.
[00123] During the Validation and Test phases, success rates can sometimes drop suddenly due to overfitting during the Training phase. This can be ameliorated through a variety of tactics, including slowed or decaying learning rates (e.g. halve the learning rate every n epochs), the use of Cosine Annealing, incorporating the aforementioned methods of tensor initialization or pre-training, and the addition of noise, such as Dropout layers, or Batch Normalization. Batch Normalization is used to counteract vanishing or exploding gradients, which improves the stability of training large models, resulting in improved generalization. Dropout regularization effectively simplifies the network by introducing a random chance to set all incoming weights to zero within a rectifier's receptive range. By introducing noise, it effectively ensures the remaining rectifiers are correctly fitting to the representation of the data, without relying on over-specialization. This allows the DNN to generalize more effectively and become less sensitive to specific values of network weights. Similarly, Batch Normalization improves the training stability of very deep neural networks, which allows for faster learning and better generalization, by shifting the input weights to zero mean and unit variance as a precursor to the rectification stage.
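For illustration, a minimal PyTorch sketch of a classifier head that places Batch Normalization before the rectifier and Dropout after it is given below; the layer sizes and dropout probability are assumptions, not values prescribed by the method.

```python
import torch.nn as nn

# Illustrative classifier head: BatchNorm shifts inputs to zero mean / unit
# variance before the ReLU, and Dropout injects noise to reduce overfitting.
classifier_head = nn.Sequential(
    nn.Linear(2048, 512),
    nn.BatchNorm1d(512),
    nn.ReLU(),
    nn.Dropout(p=0.1),
    nn.Linear(512, 2),   # viable / non-viable logits
)
```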
[00124] In performing deep learning, the methodology for altering the neuron weights to achieve an acceptable classification includes the need to specify an optimization protocol. That is, for a given definition of 'accuracy' or 'loss' (discussed below), exactly how much the weights should be adjusted, and how the value of the learning rate should be used, must be specified, and a number of techniques exist for doing so. Suitable optimisation techniques include Stochastic Gradient Descent (SGD) with momentum (and/or Nesterov accelerated gradients), Adaptive Gradient with Delta (Adadelta), Adaptive Moment Estimation (Adam), Root-Mean-Square Propagation (RMSProp), and the Limited-Memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm. Of these, SGD-based techniques generally outperformed the other optimisation techniques. Typical learning rates for phase contrast microscope images of human embryos were between 0.01 and 0.0001. However, the learning rate will depend upon batch size, which is dependent upon hardware capacity. For example, larger GPUs allow larger batch sizes and higher learning rates.
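A short sketch of configuring SGD with momentum and a step-wise learning-rate decay in PyTorch is given below; the particular values echo the configurations reported later in this document, and `model`, `train_one_epoch` and `num_epochs` are hypothetical placeholders assumed to be defined elsewhere.

```python
import torch.optim as optim
from torch.optim.lr_scheduler import StepLR, CosineAnnealingLR

# SGD with momentum (optionally Nesterov), a learning rate within the range
# described above, and a scheduler halving the learning rate every 3 epochs.
optimizer = optim.SGD(model.parameters(), lr=5e-5, momentum=0.9, nesterov=True)
scheduler = StepLR(optimizer, step_size=3, gamma=0.5)
# Alternatively, cosine annealing could be used:
# scheduler = CosineAnnealingLR(optimizer, T_max=num_epochs)

for epoch in range(num_epochs):
    train_one_epoch(model, optimizer)   # hypothetical training loop
    scheduler.step()
```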
[00125] Stochastic Gradient Descent (SGD) with momentum (and/or Nesterov accelerated gradients) represents the simplest and most commonly used optimizer. Gradient descent algorithms typically compute the gradient (slope) of the effect of a given weight on the accuracy. While this is slow if the gradient must be calculated over the whole dataset to perform an update to the weights, stochastic gradient descent performs an update for each training image, one at a time. While this can result in fluctuations in the overall objective accuracy or loss achieved, it has a tendency to generalize better than other methods, as it is able to jump into new regions of the loss parameter landscape, and find new minimum loss functions. For a noisy loss landscape in difficult problems such as embryo selection, SGD performs well. SGD can have trouble navigating asymmetrical loss function surface curves that are steeper on one side than the other; this can be compensated for by adding a parameter called momentum. This helps accelerate SGD in the relevant direction and dampens high fluctuations in the accuracy, by adding an extra fraction to the update of the weight, derived from the previous state. An extension of this method is to also include the estimated position of the weight in the next state, and this extension is known as the Nesterov accelerated gradient.
[00126] Adaptive Gradient with Delta (Adadelta) is an algorithm for adapting the learning rate to the weights themselves, performing smaller updates for parameters that are frequently occurring, and larger updates for infrequently occurring features, and is well suited to sparse data. While adaptive gradient methods can suddenly reduce the learning rate after a few epochs across the entire dataset, Adadelta adds a delta parameter to restrict the window allowed for the accumulated past gradients to some fixed size. This process makes a default learning rate redundant, however, and the freedom of an additional free parameter provides some control in finding the best overall selection model.
[00127] Adaptive Moment Estimation (Adam) stores exponentially decaying averages of both past squared and non-squared gradients, incorporating them both into the weight update. This has the effect of providing 'friction' for the direction of the weight update, and is suitable for problems that have relatively shallow or flat loss minima, without strong fluctuations. In the embryo selection model, training with Adam has a tendency to perform well on the training set, but often overtrains, and is not as suitable as SGD with momentum.
[00128] Root-Mean-Square Propagation (RMSProp) is related to the adaptive gradient optimizers above, and is almost identical to Adadelta, except that the update term to the weights divides the learning rate by an exponentially decaying average of the squared gradients.
[00129] Limited-Memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) Algorithm. While computationally intensive, the L-BFGS algorithm actually estimates the curvature of the loss landscape, rather than attempting to compensate for the lack of such an estimate with additional terms, as other methods do. It has a tendency to outperform Adam when the dataset is small, but doesn't necessarily outperform SGD in terms of speed and accuracy.
[00130] In addition to the above methods, it is also possible to include non-uniform learning rates. That is, the learning rate of the convolution layers can be specified to be much larger or smaller than the learning rate of the classifier. This is useful in the case of pre-trained models, where changes to the filters underneath the classifier should be kept more 'frozen', and the classifier retrained, so that the pre-training is not undone by additional retraining.
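One way non-uniform learning rates could be expressed is through per-parameter-group learning rates, as in the PyTorch sketch below; the assumption that the model exposes its classifier as `fc`, and the specific rates, are illustrative.

```python
import torch.optim as optim

# Separate parameter groups: the pre-trained backbone is kept nearly frozen
# with a tiny learning rate, while the new classifier head trains faster.
backbone_params = [p for name, p in model.named_parameters()
                   if not name.startswith("fc")]
optimizer = optim.SGD(
    [
        {"params": backbone_params, "lr": 1e-6},
        {"params": model.fc.parameters(), "lr": 1e-4},
    ],
    momentum=0.9,
)
```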
[00131] While the optimizer specifies how to update the weights given a specific loss or accuracy measure, in some embodiments the loss function is modified to incorporate distribution effects. These may include cross-entropy (CE) loss, weighted CE, residual CE, inference distribution or a custom loss function.
[00132] Cross Entropy Loss is a commonly used loss function, which has a tendency to outperform the simple mean-squared-difference between the ground truth and the predicted value. If the result of the network is passed through a Softmax layer, as is the case here, then the distribution of the cross entropy results in better accuracy. This is because it naturally maximizes the likelihood of classifying the input data correctly, by not weighting distant outliers too heavily. For an input array, batch, representing a batch of images, and class representing viable or non-viable, the cross entropy loss is defined as:

loss(x, class) = −log( exp(x[class]) / Σ_j exp(x[j]) )

where C is the number of classes (the sum over j running from 1 to C). In the binary case this can be simplified to:

loss(p, y) = −( y log(p) + (1 − y) log(1 − p) ) (4)
An optimised version is:

loss(x, class) = −x[class] + log( Σ_j exp(x[j]) )
[00133] If the data contains a class bias, that is, more viable than non-viable examples (or vice-versa), the loss function should be weighted proportionally so that misclassifying an element of the less numerous class is penalized more heavily. This is achieved by pre-multiplying the right hand side of Eq. (2) with the factor:

weight[class] = N / (C × N[class])
where N[class] is the total number of images for each class, N is the total number of samples in the dataset and C is the number of classes. It is also possible to manually bias the weight towards the viable embryos in order to reduce the number of false negatives compared to false positives, if necessary.
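A minimal PyTorch sketch of class-weighted cross entropy consistent with the weighting described above is shown below; the class counts are taken from the training split reported later in this document and are purely illustrative.

```python
import torch
import torch.nn as nn

# Balanced class weights: weight[class] = N / (C * N[class]).
n_nonviable, n_viable = 886, 858          # illustrative counts
n_total, n_classes = n_nonviable + n_viable, 2
class_weights = torch.tensor([
    n_total / (n_classes * n_nonviable),  # non-viable class
    n_total / (n_classes * n_viable),     # viable class
])
criterion = nn.CrossEntropyLoss(weight=class_weights)
```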
[00134] In some embodiments an Inference Distribution may be used. While it is important to seek a high level of accuracy in classifying embryos, it is also important to seek a high level of transferability in the model. That is, it is often beneficial to understand the distribution of the scores, and while seeking a high accuracy is an important goal, a confident separation of the viable and non-viable embryos with a margin of certainty is an indicator that the model will generalize well to a test set. Since the accuracy on the test set is often used to quote comparisons with important clinical benchmarks, such as the accuracy of the embryologist classification on the same embryo, ensuring generalizability should also be incorporated into the batch-by-batch assessment of the success of the model, each epoch.
[00135] In some embodiments a Custom Loss function is used. In one embodiment, we have customized how we define the loss function so that the optimization surface is changed to make global minima more obvious and so improve the robustness of the model. To achieve this, a new term, called a residual term, is added to the loss function while maintaining differentiability, and is defined in terms of the network's weights. It encodes the collective difference between the predicted value from the model and the target outcome for each of the N images in a batch, and includes it as an additional contribution to the normal cross entropy loss function.
For this Custom Loss function, well-spaced clusters of viable and non-viable embryo scores are thus considered consistent with an improved loss rating. It is noted that this custom loss function is not specific to the embryo detection application, and could be used in other Deep Learning models.
[00136] In some embodiments the models are combined to generate a more robust final AI model 100. That is, deep learning and/or computer vision models are combined together to contribute to the overall prediction of the embryo viability.
[00137] In one embodiment an ensemble method is used. First, models that perform well are selected. Then, each model 'votes' on one of the images (using augmentations or otherwise), and the voting strategy that leads to the best result is selected. Example voting strategies include maximum-confidence, mean-value, majority-mean-value, median-value, mean-confidence, median-confidence, majority-mean-confidence, weighted average, majority-max-confidence, etc. Once the voting strategy has been selected, the evaluation method for the combination of augmentations must also be selected, which describes how each of the rotations should be treated by the ensemble, as before. In this embodiment the final AI model 100 can thus be defined as a collection of trained AI models, using deep learning and/or computer vision models, together with a mode, which encodes the voting strategy that defines how the individual AI model results will be combined, and an evaluation mode that defines how the augmentations (if present) will be combined.
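By way of illustration only, an ensemble of this kind might be represented as a collection of trained models plus a voting mode, as in the Python/NumPy sketch below; the class name, the assumption that each model returns a viability confidence, and the 0.5 threshold are assumptions for illustration.

```python
import numpy as np

class EnsembleModel:
    """A collection of trained models combined with a voting strategy (mode)."""

    def __init__(self, models, voting="majority_mean_confidence"):
        self.models = models
        self.voting = voting

    def predict(self, image):
        scores = np.array([m(image) for m in self.models])  # each returns P(viable)
        votes = scores >= 0.5
        if self.voting == "mean_confidence":
            return scores.mean()
        if self.voting == "median_confidence":
            return float(np.median(scores))
        if self.voting == "majority_mean_confidence":
            n_viable = votes.sum()
            if 2 * n_viable == len(votes):      # no majority: fall back to the mean
                return scores.mean()
            majority = n_viable > len(votes) / 2
            return scores[votes == majority].mean()
        raise ValueError(f"unknown voting strategy: {self.voting}")
```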
[00138] Selection of the models was performed in such a way that their results contrast with one another, i.e. their results are as independent as possible, and the scores are well distributed. This selection procedure is carried out by examining which images in the test set have been correctly identified by each model. If the sets of correctly identified images are very similar when comparing two models, or the scores provided by each model are similar to each other for a given image, then the models are not considered contrasting models. If, however, there is little overlap between the two sets of correctly identified images, or the scores provided for each image are markedly different from each other, then the models are considered contrasting. This procedure effectively assesses whether the distributions of the embryo scores on a test set for two different models are similar or not. The contrasting criterion drives model selection towards models with diverse prediction outcome distributions, due to different input images or segmentation. This method ensured translatability by avoiding selection of models that performed well only on specific clinic datasets, thus preventing over-fitting. Additionally, model selection may also use a diversity criterion. The diversity criterion drives model selection to include different model hyperparameters and configurations. The reason is that, in practice, similar model settings result in similar prediction outcomes and hence may not be useful for the final ensemble model.
[00139] In one embodiment this can be implemented by using a counting approach and specifying a threshold similarity, such as 50%, 75% or 90% overlapping images in the two sets. In other embodiments, the scores in a set of images (e.g. the viable set) could be totalled and the two sets (totals) compared, and ranked as similar if the two totals differ by less than a threshold amount. Statistical based comparisons could also be used, for example taking into account the number of images in the set, or otherwise comparing the distribution of images in each of the sets.
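A minimal sketch of the counting approach, assuming each model's correctly identified test images are available as a set of image identifiers, is shown below; the overlap measure and the threshold handling are illustrative choices.

```python
def overlap_fraction(correct_a, correct_b):
    """Fraction of overlap between the sets of test images each model got right."""
    correct_a, correct_b = set(correct_a), set(correct_b)
    union = correct_a | correct_b
    return len(correct_a & correct_b) / len(union) if union else 1.0

def are_contrasting(correct_a, correct_b, threshold=0.75):
    # Two models are treated as contrasting when the overlap of their
    # correctly identified images is below the chosen threshold (e.g. 50/75/90%).
    return overlap_fraction(correct_a, correct_b) < threshold
```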
[00140] In other embodiments a distillation method could be used to combine the individual AI models. In this approach the AI models are used as teacher models to train a student model. Selection of the individual AI models may be performed using the diversity and contrasting criteria as discussed for ensemble methods. Further, other methods for selecting the best model from a range of models, or for combining outputs from multiple models into a single output, may be used.
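A common way to realise such teacher-student distillation is to train the student against the soft targets produced by the teacher ensemble, as in the hedged PyTorch sketch below; the temperature, the KL-divergence formulation and the averaging of teacher outputs are standard assumptions for illustration, not details taken from the disclosure.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_probs, temperature=2.0):
    # The student is trained to match the averaged soft outputs of the teachers.
    # teacher_probs: mean of the teacher models' softmax outputs for the batch.
    log_p_student = F.log_softmax(student_logits / temperature, dim=1)
    return F.kl_div(log_p_student, teacher_probs,
                    reduction="batchmean") * temperature ** 2
```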
[00141] An embodiment of an ensemble based embryo viability assessment model was generated and two validation (or benchmarking) studies were performed in IVF clinics to assess the performance of the embryo viability assessment model described herein compared to working embryologists. For ease of reference this will be referred to as the ensemble model. These validation studies showed that the embryo viability assessment model achieved a greater than 30% improvement in accuracy in identifying the viability of embryos when compared directly with world-leading embryologists. The studies thus validate the ability of embodiments of the ensemble model described herein to inform and support embryologists' selection decisions, which is expected to contribute to improved IVF outcomes for couples.
[00142] The first study was a pilot study conducted with an Australian clinic (Monash IVF) and the second study was conducted across multiple clinics and geographical sites. The studies assessed the ability of an embodiment of an ensemble based embryo viability assessment model as described, to predict Day 5 embryo viability, as measured by clinical pregnancy.
[00143] For each clinical study, each patient in the IVF process may have multiple embryos to select from. An embodiment of an embryo viability assessment model as described herein was used to assess and score the viability of each of these embryos. However, only embryos that are implanted and for which the pregnancy outcome is known (e.g. foetal heartbeat detected at the first ultrasound scan) can be used to validate the accuracy of the model. The total dataset thus comprises images of embryos that have been implanted into the patient, with associated known outcomes, for which the accuracy (and thus the performance) of the model can be validated. [00144] To provide further rigor with respect to the validation, some of the images used for validation comprise the embryologist's score as to the viability of the embryo. In some cases, an embryo that is scored as 'non-viable' may still be implanted if it is nevertheless still the most favorable embryo choice, and/or upon the request of the patient. This data enables a direct comparison of how the ensemble model performs compared with the embryologist. Both the ensemble model and the embryologists' accuracies are measured as the percentage of the number of embryos that were scored as viable and had a successful pregnancy outcome (true positives), in addition to the number of embryos that were scored non-viable and had an unsuccessful pregnancy outcome (true negatives), divided by the total number of scored embryos. This approach is used to validate whether the ensemble model performs comparably or better when directly compared with leading embryologists. It is noted that not all images have corresponding embryologist scores in the dataset.
[00145] In order to make a direct comparison of the accuracy of a selection model with the current manual method employed by embryologists, the following interpretation of the embryologist scores for each clinic is used, for a degree of expansion that is at least a blastocyst ('BL' in Ovation Fertility notation, or 'XB' in Midwest Fertility Specialists notation). Embryos that are listed as the cellular stage (e.g. 10 cell), as compacting from the cellular stage to the morula, or as cavitating morula (where the blastocoel cavity is less than 50% of the total volume at Day 5 after IVF) are considered likely to be non-viable.
[00146] The letter grades that denote the quality of the IntraZonal Cavity (first letter) and trophectoderm (second letter) are arranged into bands of embryo quality, as discerned by the embryologist. A division is then made to denote whether an embryo was judged likely to be non-viable or viable, using Table 1 below. Bands 1 through 3 are considered likely to be viable, and bands 4 and greater are considered likely to be non-viable. In band 6, the embryo is considered likely to be non-viable if either letter score is worse than 'C'. In band 7, a score of '1CC' from Midwest Fertility Specialists indicates an early blastocyst with early (large) trophectoderm cells and without a discernible IntraZonal Cavity, and is considered likely to be non-viable.
TABLE 1
Ovation Fertility and Midwest Fertility Specialists embryologist score bands for likely viability.
[00147] A set of approximately 20,000 embryo images taken at Day 5 after IVF was obtained along with related pregnancy and pre-implantation genetic screening (PGS) outcomes, and demographic information, including patient age and clinic geographical location. The clinics that contributed data to this study are: Repromed (Adelaide, SA, Australia) as part of Monash IVF Group (Melbourne, VIC, Australia), Ovation Fertility (Austin, TX, USA), San Antonio IVF (San Antonio, TX, USA), Midwest Fertility Specialists (Carmel, IN, USA), Institute for Reproductive Health (Cincinnati, OH, USA), Fertility Associates (Auckland, Hamilton, Wellington, Christchurch and Dunedin, New Zealand), Oregon Reproductive Medicine (Portland, OR, USA) and Alpha Fertility Centre (Petaling Jaya, Selangor, Malaysia).
[00148] The generation of an AI model for use in the trial proceeded as follows. First, a range of model architectures (or model types) is generated and each AI model is trained with various settings of model parameters and hyper-parameters, including input image resolution, choice of optimizer, learning rate value and scheduling, momentum value, dropout, and initialization of the weights (pre-training). Initial filtering is performed to select models which exhibit stability (accuracy stable over the training process), transferability (accuracy stable between training and validation sets) and prediction accuracy. Prediction accuracy examined which models provided the best validation accuracy, for both viable and non-viable embryos, the total combined accuracy, and the balanced accuracy, defined as the weighted average accuracy across both class types of embryos. In one embodiment, the use of ImageNet pre-trained weights demonstrated improved performance on these quantities. Evaluation of loss functions indicated that the weighted CE and residual CE loss functions generally outperformed other models.
[00149] Next, the models were separated into two groups: first, those that included additional image segmentation (Zona or IZC identification), and second, those that use the entire unsegmented image (i.e. full-embryo models). Models that were trained on images that masked the IZC, exposing the zona region, were denoted as zona models. Models that were trained on images that masked the zona (denoted IZC models), and models that were trained on full-embryo images, were also considered in training. A group of models encompassing contrasting architectures and pre-processing methods was selected in order to provide diversity and maximize performance on the validation set.
[00150] The final ensemble based AI model was an ensemble of the highest performing individual models, selected on the basis of diversity and contrasting results. Well-performing individual models that exhibited different methodologies, or extracted different biases from the features obtained through machine learning, were combined using a range of voting strategies based on the confidence of each model. Voting strategies evaluated included mean, median, max, majority mean voting, maximum-confidence, mean-value, majority-mean-value, median-value, mean-confidence, median-confidence, majority-mean-confidence, weighted average, majority-max-confidence, etc. In one embodiment the majority mean voting strategy is used, as in testing it outperformed other voting strategies, giving the most stable model across all datasets.
[00151] In this embodiment the final ensemble based AI model includes eight deep learning models, of which four are zona models and four are full-embryo models. The final model configuration used in this embodiment is as follows:
One full-embryo ResNet-152 model, trained using SGD with momentum=0.9, CE loss, learning rate 5.0e-5, step-wise scheduler halving the learning rate every 3 epochs, batch size of 32, input resolution of 224 x 224, and a dropout value of 0.1;
One zona ResNet-152 model, trained using SGD with momentum=0.99, CE loss, learning rate 1.0e-5, step-wise scheduler dividing the learning rate by 10 every 3 epochs, batch size of 8, input resolution of 299 x 299, and a dropout value of 0.1;
Three zona ResNet-152 models, trained using SGD with momentum=0.99, CE loss, learning rate 1.0e-5, step-wise scheduler dividing the learning rate by 10 every 6 epochs, batch size of 8, input resolution of 299 x 299, and a dropout value of 0.1, one of which was trained with random rotation of any angle;
One full-embryo DenseNet-161 model, trained using SGD with momentum=0.9, CE loss, learning rate 1.0e-4, step-wise scheduler halving the learning rate every 5 epochs, batch size of 32, input resolution of 224 x 224, a dropout value of 0, and trained with random rotation of any angle;
One full-embryo DenseNet-161 model, trained using SGD with momentum=0.9, CE loss, learning rate 1.0e-4, step-wise scheduler halving the learning rate every 5 epochs, batch size of 32, input resolution of 299 x 299, and a dropout value of 0; and
One full-embryo DenseNet-161 model, trained using SGD with momentum=0.9, Residual CE loss, learning rate 1.0e-4, step-wise scheduler halving the learning rate every 5 epochs, batch size of 32, input resolution of 299 x 299, a dropout value of 0, and trained with random rotation of any angle. [00152] The architecture diagram corresponding to ResNet-152, which features heavily in the final model configuration, is shown in Figure 8. The final ensemble model was subsequently validated and tested on blind test datasets as described in the results section.
[00153] Measures of accuracy used in the assessment of model behaviour on data included sensitivity, specificity, overall accuracy, distributions of predictions, and comparison to embryologists' scoring methods. For the AI model, an embryo viability score of 50% and above was considered viable, and below 50% non-viable. Accuracy in identification of viable embryos (sensitivity) was defined as the number of embryos that the AI model identified as viable divided by the total number of known viable embryos that resulted in a positive clinical pregnancy. Accuracy in identification of non-viable embryos (specificity) was defined as the number of embryos that the AI model identified as non-viable divided by the total number of known non-viable embryos that resulted in a negative clinical pregnancy outcome. Overall accuracy of the AI model was determined using a weighted average of sensitivity and specificity, and the percentage improvement in accuracy of the AI model over the embryologist was defined as the difference in accuracy as a proportion of the original embryologist accuracy (i.e. (AI_accuracy - embryologist_accuracy) / embryologist_accuracy).
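These definitions translate directly into simple calculations, sketched below in Python for illustration; the function names are not part of the described method.

```python
def sensitivity(true_positives, false_negatives):
    # Accuracy on known viable embryos (positive clinical pregnancy).
    return true_positives / (true_positives + false_negatives)

def specificity(true_negatives, false_positives):
    # Accuracy on known non-viable embryos (negative clinical pregnancy).
    return true_negatives / (true_negatives + false_positives)

def improvement_over_embryologist(ai_accuracy, embryologist_accuracy):
    # Percentage improvement as defined above.
    return (ai_accuracy - embryologist_accuracy) / embryologist_accuracy

# Example using the pilot-study figures quoted below:
# improvement_over_embryologist(0.667, 0.51) is approximately 0.308,
# i.e. a 30.8% improvement.
```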
[00154] Pilot Study
[00155] Monash IVF provided the ensemble model with approximately 10,000 embryo images and related pregnancy and live birth data for each image. Additional data provided included patient age, BMI, whether the embryo was implanted fresh or was frozen prior, and any fertility related medical conditions. Data for some of the images contained the embryologist's score for the viability of the embryo. Preliminary training, validation and analysis showed that the model's accuracy is significantly higher for day 5 embryos compared with day 4 embryos. Hence all day 4 embryos were removed, leaving approximately 5,000 images. The usable dataset for training and validation was 4650 images. This initial dataset was split into 3 separate datasets. A further 632 images were then provided, which were used as a second blind validation dataset. The final datasets for training and validation include:
• Training dataset: 3892 images;
• Validation dataset: 390 images, of which 70 (17.9%) had a successful pregnancy outcome and 149 images included an embryologist score on the viability of the embryo;
• Blind validation dataset 1: 368 images, of which 76 (20.7%) had a successful pregnancy outcome and 121 images included an embryologist score on the viability of the embryo; and
• Blind validation dataset 2: 632 images, of which 194 (30.7%) had a successful pregnancy outcome and 477 images included an embryologist score on the viability of the embryo. [00156] Not all images have corresponding embryologist scores in the dataset. The sizes of the datasets, as well as the subsets that include embryologist scores, are listed below.
[00157] The ensemble based AI model was applied to the three validation datasets. The overall accuracy results for the ensemble model in identifying viable embryos are shown in Table 2. The accuracy results for the two blind validation datasets are the key accuracy indicators; however, results for the validation dataset are shown for completeness. The accuracy for identifying viable embryos is calculated as a percentage of the number of viable embryos (i.e. images that had a successful pregnancy outcome) that the ensemble model could identify as viable (a viability score of 50% or greater by the model) divided by the total number of viable embryos in the dataset. Similarly, the accuracy for identifying non-viable embryos is calculated as a percentage of the number of non-viable embryos (i.e. images that had an unsuccessful pregnancy outcome) that the ensemble model could identify as non-viable (a viability score of under 50% by the model) divided by the total number of non-viable embryos in the dataset.
[00158] In the first stage of validation conducted with Monash IVF, the ensemble model's trained embryo viability assessment model was applied to two blind datasets of embryo images with known pregnancy outcomes, with a combined total of 1000 images (patients). Figure 9 is a plot of the accuracy of an embodiment of an ensemble model in identifying embryo viability 900 according to an embodiment. The results show that the ensemble model 910 had an overall accuracy of 67.7% in identifying embryo viability across the two blind validation datasets. Accuracy was calculated by summing the number of embryos that were identified as viable and led to a successful outcome, plus the number of embryos that were identified as non-viable and led to an unsuccessful outcome, divided by the total number of embryos. The ensemble model showed 74.1% accuracy in identifying viable embryos 920 and 65.3% accuracy in identifying non-viable embryos 930. This represents a significant accuracy improvement in this large dataset of embryos already pre-selected by embryologists and implanted into patients, where only 27% resulted in a successful pregnancy outcome.
[00159] To provide further rigor with respect to the validation, a subset of the images used for validation had an associated embryologist's score relating to the viability of the embryo (598 images). In some cases, an embryo that is scored as 'non-viable' by an embryologist may still be implanted if it is considered the most favorable embryo choice for that patient, and/or upon the request of the patient, despite a low likelihood of success. Embryo scores were used as a ground truth of the embryologists' assessment of viability and allow for a direct comparison of the ensemble model performance compared with leading embryologists. [00160] The worst-case accuracy for blind validation dataset 1 or 2 is 63.2% for identifying viable embryos in blind dataset 1, 57.5% for identifying non-viable embryos in blind dataset 2, and 63.9% total accuracy for blind dataset 2.
[00161] Table 3 shows the total mean accuracy across both blind datasets 1 and 2, which is 74.1% for identifying viable embryos, 65.3% for identifying non-viable embryos, and 67.7% total accuracy across both viable and non-viable embryos.
[00162] The accuracy values in both tables are high considering 27% of embryos result in a successful pregnancy outcome, and the ensemble model's difficult task of further classifying embryo images that have already been analyzed and selected as viable, or more favorable than other embryos in the same batch, by embryologists.
TABLE 2
Accuracy of the embryo viability assessment model when applied to the three types of validation datasets. Results show the accuracy in identifying viable embryos, non-viable embryos, and the total accuracy for both viable and non-viable embryos.
TABLE 3
Total mean accuracy of the embryo viability assessment model when applied to the blind validation datasets 1 and 2 only. Results show the accuracy in identifying viable embryos, non-viable embryos, and the total accuracy for both viable and non-viable embryos.
[00163] Table 4 shows the results comparing the model's accuracy with those of the embryologists. The accuracy values differ from those in the table above because not all embryo images in the datasets have embryo scores, and thus the results below are accuracy values on a subset of each dataset. The table shows that the model's accuracy in identifying viable embryos is higher than that of the embryologist. These results are illustrated in the bar chart 1000 in Figure 10, with ensemble results 1010 on the left and embryologist results 1020 on the right.
TABLE 4
Comparison of the accuracy in identifying viable/non-viable embryos for the ensemble model versus world-leading embryologists.
[00164] Table 5 shows a comparison of the number of times that the model was able to correctly identify the viability of an embryo and the embryologist was not able to, and vice versa. The results show there were fewer occurrences where embryologists were correct and the model was incorrect compared with the cases where the model was correct and embryologists were incorrect. These results are illustrated in Figure 11. This result further validates the high level of performance and accuracy of the ensemble model's embryo viability assessment model.
TABLE 5
Comparison of the number of occurrences where the ensemble model correctly identified embryo viability and the world-leading embryologists did not, and vice versa.
[00165] Overall, the ensemble model achieved a total of 66.7% accuracy in identifying the viability of embryos, whereas embryologists achieved 51% accuracy based on their scoring method (Figure 10). The additional 15.7% accuracy represents a significant 30.8% performance (accuracy) improvement for the ensemble model compared with embryologists (p=0.021, n=2, Student's t-test). Specifically, results show that the ensemble model was able to correctly classify embryo viability 148 times when embryologists were incorrect, and conversely embryologists correctly classified embryo viability only 54 times where the ensemble model was incorrect. Figure 11 is a bar plot showing the accuracy of an embodiment of the ensemble model (bar 1110) compared to world-leading embryologists (clinicians) (bar 1120) in correctly identifying embryo viability where the embryologists' assessment was incorrect, compared with embryologists correctly identifying embryo viability where the ensemble model assessment was incorrect. These results show a clear advantage of the ensemble model in identifying viable and non-viable embryos when compared with world-leading embryologists. A further validation study was performed for embryo images from Ovation Fertility with similar results.
[00166] The successful validations demonstrate that the ensemble model's approach and technology can be applied to embryo images to create a model that can accurately identify viable embryos and ultimately lead to improved IVF outcomes for couples. The model was then further tested in a larger cross clinic study.
[00167] Cross clinic study
[00168] In a more general cross-clinic study following the Australian pilot study, over 10,000 embryo images were sourced from multiple demographics. Of these images, over 8,000 can be related to the embryologist's score for the viability of the embryo. For training, each image needs to be labeled as viable or non-viable to allow the deep learning and computer vision algorithms to identify patterns and features relating to the viability of the embryos.
[00169] In the first cross-clinic study, the usable dataset of 2217 images (and linked outcomes) for developing the ensemble model is split into three subsets in the same manner as the pilot study: the training dataset, validation dataset and blind validation dataset. These studies include data sourced from the clinics: Ovation Fertility Austin, San Antonio IVF, Midwest Fertility Specialists, Institute for Reproductive Health and Fertility Associates NZ. This comprised:
• Training dataset: 1744 images - 886 non-viable, 858 viable;
• Validation dataset: 193 images - 96 non-viable, 97 viable; and
• Blind validation dataset 1: 280 images - 139 non-viable, 141 viable.
[00170] After completion of the training, validation and blind validation phases, a second study is conducted on a completely separate demographic, sourced from the clinic Oregon Reproductive Medicine. This dataset comprised:
• Blind validation dataset 2: 286 images - 106 non-viable, 180 viable.
[00171] A third study utilizes the EmbryoScope images sourced from the clinic Alpha Fertility Centre:
• EmbryoScope validation dataset: 62 images - 32 non-viable, 30 viable.
[00172] In producing the trained ensemble based AI model, the same training dataset is used for each model that is trained, so that they can be compared in a consistent manner.
[00173] The final results for the ensemble based AI model, as applied to the mixed demographic blind validation dataset, are as follows. A summary of the total accuracy can be found in Table 6.
TABLE 6
Accuracy of the ensemble based AI model, when applied to the blind validation dataset of Study 1 of the cross clinic study. Results show the accuracy in identifying viable embryos, non-viable embryos, and the total accuracy for both viable and non-viable embryos combined.
[00174] The distribution of the inferences, displayed as histograms, is shown in Figures 12 and 13. Figure 12 is a plot of the distribution of inference scores 1200 for viable embryos (successful clinical pregnancy) using the embodiment of the ensemble based AI model, when applied to the blind validation dataset of Study 1. The inferences are normalized between 0 and 1, and can be interpreted as confidence scores. Instances where the model is correct are marked in boxes filled with thick downward diagonal lines (True Positives 1220), whereas instances where the model is incorrect are marked in boxes filled with thin upward diagonal lines (False Negatives 1210). Figure 13 is a plot of the distribution of inference scores for non-viable embryos (unsuccessful clinical pregnancy) 1300 using the embodiment of the ensemble based AI model, when applied to the blind validation dataset of Study 1. The inferences are normalized between 0 and 1, and can be interpreted as confidence scores. Instances where the model is correct are marked in boxes filled with thick downward diagonal lines (True Negatives 1320), whereas instances where the model is incorrect are marked in boxes filled with thin upward diagonal lines (False Positives 1310). There is clear separation between the two groups. These histograms show good separation between the correctly and incorrectly identified embryo images, which provides evidence that the model will translate well to a blind validation set.
[00175] Figure 13 contains a tall peak in the False Positives 1310 (boxes filled with thin upward diagonal lines), which is not as prominent in the equivalent histogram for the False Negatives in Figure 12. The reason for this effect could be the presence of patient health factors, such as uterine scarring, that cannot be identified through the embryo image itself. The presence of these factors means that even an ideal embryo may not lead to a successful implantation. This also limits the upper value of the accuracy in predicting successful clinical pregnancy using embryo image analysis alone.
[00176] In the selection of an embryo, it is widely considered preferable to allow a non-viable embryo to be implanted (False Positive) than to jeopardize a potentially healthy embryo (False Negative). Therefore, in obtaining the final ensemble based AI model, effort has been made, where possible, to bias residual inaccuracies so as to preferentially minimize the False Negatives. Therefore, the final model will have a higher sensitivity than specificity, i.e. a higher accuracy at selecting viable embryos than non-viable embryos. To bias the model to prioritize minimizing the False Negatives, models are selected for inclusion in the final ensemble based AI model such that the ensemble based AI model accuracy on the set of viable embryo images is higher than the accuracy on the set of non-viable embryo images, if possible. If models cannot be found such that they combine together to provide a bias towards the viability accuracy, then an additional parameter is sometimes supplied during training, which increases the penalty for misclassifying a viable embryo.
[00177] While the total accuracy is useful for roughly assessing the overall efficacy of the model, complexities regarding different demographics have necessarily been averaged. Therefore, it is instructive to consider a breakdown of the results into various key groups, described below.
[00178] Study 1: Demographic cross-sections
[00179] To explore the behavior of the ensemble based AI model, the following demographic groups are considered. First, the accuracy on the dataset provided by Fertility Associates NZ is lower than those of the US-based clinics. This is likely due to the diversity inherent in the data from this clinic, which encompasses a number of different cities, camera filters and brightness levels, over which the ensemble based AI model must take an average. It is anticipated that further training of the AI on much larger datasets will be able to account for the camera diversity by incorporating it into a fine-tuning training dataset. The accuracies including and excluding the NZ data are shown in Tables 7 and 8.
[00180] Because of the smaller number of images from the clinics Midwest Fertility Associates and San Antonio IVF, the sample sizes are too small individually to provide a reliable accuracy measure. Therefore, their outcomes have been combined together with the results from Ovation Fertility Austin in Table 7.
TABLE 7
Accuracy of the ensemble based AI model, when applied to the blind validation dataset of Study 1, as broken down by clinic.
[00181] A study of the effect of patient age on the accuracy of the ensemble based AI model was also conducted, shown in Table 8. It was found that embryo images corresponding to patients equal to or over 35 years were classified more accurately. If the age cutoff is lifted to 38 years, the accuracy improved again, indicating that the ensemble based AI model is more sensitive to morphological characteristics that become more prominent with age.
TABLE 8
Accuracy of the ensemble based AI model, when applied to the blind validation dataset of Study 1, as broken down into age, or hatched/non-hatched, bandings.
[00182] Whether the embryo had been treated with a hatched or non-hatched protocol prior to transfer was also considered. It was found that while hatched embryos, which exhibit more gross morphological features, were more easily identified by the AI than non-hatched embryos, the specificity was reduced in the former case. This is likely a result of the fact that an ensemble based AI model trained on a mixed dataset of hatched and non-hatched embryos will have a tendency to associate successfully hatched embryos with viability.
[00183] Study 1: Embryologist ranking comparison
[00184] A summary of the accuracies of the ensemble based AI model and the embryologist can be found in Tables 9 and 10 for the same demographic breakdown considered in Section 5A. Only embryo images that have a corresponding embryologist score are considered in this Study.
[00185] The percentage improvement of the ensemble based AI model over the embryologist in accuracy is quoted, as defined by the difference in accuracy as a proportion of the original embryologist accuracy: (AI_accuracy - embryologist_accuracy) / embryologist_accuracy. It is found that while the improvement across the total number of images was 31.85%, the improvement is highly variable across specific demographics, as the improvement factor is highly sensitive to the performance of the embryologist on each given dataset.
[00186] In the case of Fertility Associates NZ, the embryologists performed significantly better than in other demographics, leading to an improvement of only 12.37% using the ensemble based AI model. In cases where the ensemble based AI model performed very well, such as Ovation Fertility Austin, the improvement was as high as 77.71%. The performance of the ensemble based AI model compared to the embryologist is also reflected in the total number of images correctly assessed where its comparator incorrectly assessed the same image, as seen in the last two columns of both Tables 9 and 10.
TABLE 9
Embryologist comparison for images that have embryologist scores, as broken down by clinic.
[00187] If the embryologist score contains a numeral, or terminology representing a ranking of the embryos in terms of their advancement or arrestment (number of cells, compacting, morula, cavitation, early blastocyst, full blastocyst or hatched blastocyst), an alternative study comparing the efficacy of the ensemble based AI model and the embryologists' assessment can be conducted. A comparison of the ranking of the embryos can be made by equating the embryologist assessment with a numerical score from 1 to 5, while dividing the AI inferences into 5 equal bands (from the minimum inference to the maximum inference), labeled 1 to 5. With both the ensemble based AI model and the embryologist scores expressed as an integer from 1 to 5, a comparison of ranking accuracy is made as follows.
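A minimal sketch of mapping a raw AI inference into one of the five equal bands is shown below; the function name and the handling of the maximum score are illustrative assumptions.

```python
def inference_to_rank(score, min_score, max_score, n_bands=5):
    """Map a raw inference into one of n_bands equal bands (rank 1..n_bands)."""
    if max_score == min_score:
        return 1
    fraction = (score - min_score) / (max_score - min_score)
    # The maximum score falls on the upper edge, so clamp it into the top band.
    return min(int(fraction * n_bands) + 1, n_bands)
```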
[00188] If a given embryo image is given the same rank by the ensemble based AI model and the embryologist, this is noted as a concordance. If, however, the ensemble based AI model provides a higher rank than the embryologist and the ground-truth outcome was recorded as viable, or the ensemble based AI model provides a lower rank than the embryologist and the ground-truth outcome was recorded as non-viable, then this outcome is noted as model correct. Similarly, if the ensemble based AI model provides a lower rank than the embryologist and the ground-truth outcome was recorded as viable, or the ensemble based AI model provides a higher rank and the outcome was recorded as non-viable, this outcome is noted as model incorrect. A summary of the proportions of images assessed as concordant, model correct or model incorrect can be found in Tables 11 and 12 for the same demographic breakdown considered above. The ensemble based AI model is considered to have performed well on a dataset if the model correct proportion is high, and the concordance and model incorrect proportions are low.
TABLE 10
Embryologist comparison for images that have embryologist scores, as broken down by clinic.
TABLE 11
Embryologist ranking study, where the proportions of rank concordance, model correct or model incorrect are expressed as percentages of the total images in each clinic.
TABLE 12
Embryologist ranking study, where the proportions of rank concordance, model correct or model incorrect are expressed as percentages of the total images in each demographic.
[00189] A visual representation of the distribution of the rankings from the embryologist and the ensemble based AI model across the total blind dataset of Study 1 can be seen in the histograms in Figures 14 and 15, respectively. Figure 14 is a histogram of the rank obtained from the embryologist scores across the total blind dataset 1400 and Figure 15 is a histogram of the rank obtained from the embodiment of the ensemble based AI model inferences across the total blind dataset 1500.
[00190] Figures 14 and 15 differ from each other in the shape of the distribution. While the embryologist scores are dominated by a rank value of 3, dropping off steeply for lower scores of 1 and 2, the ensemble based AI model has a more even distribution of scores around values of 2 and 3, with a rank of 4 being the dominant score. Figure 16 has been extracted directly from the inference scores obtained from the ensemble based AI model, which are shown as a histogram in Figure 13 for comparison. The ranks in Figure 12 are a coarser version of the scores in Figure 13. The finer distribution in Figure 16 shows that there is a clear separation between the scores below 50% (predicted non-viable) 1610 and those above (predicted viable) 1620. This suggests the ensemble based AI model provides greater granularity around embryo ranking than the standard scoring method, enabling a more definitive selection to be achieved.
[00191] Study 2 - secondary blind validation
[00192] In Study 2, embryo images were sourced from a separate clinic, Oregon Reproductive Medicine, to be used as a secondary blind validation. The total number of images with linked clinical pregnancy outcomes was 286, similar in size to the blind validation dataset in Study 1. The final results for the ensemble based AI model, as applied to the mixed demographic blind validation set, can be found in Table 13. In this blind validation, there is a drop in accuracy of only (66.43% - 62.64% = 3.79%) compared to Study 1, which indicates that the model is translating across to the secondary blind set. However, the drop in accuracy is not uniform over the non-viable and viable embryos. The specificity is reduced, while the sensitivity remains stable. In this trial, 183 low quality images sourced from an old (>1- years) Pixelink® camera were removed (failing quality criteria) before the commencement of the study, to prevent them from influencing the ability of the ensemble based AI model to correctly predict embryo viability.
TABLE 13
Accuracy of the ensemble based AI model, when applied to the blind validation dataset of Study 2 from Oregon Reproductive Medicine. Results show the accuracy in identifying viable embryos, non-viable embryos, and the total accuracy for both viable and non-viable embryos combined.
[00193] To explore this point further, a separate study was conducted in which embryo images were successively distorted, by introducing uneven cropping, scaling (blurring) or the addition of compression noise (such as jpeg artefacts). In each case it was found that the confidence of the ensemble based AI model prediction reduces as the artefacts are increased. Furthermore, it was found that there is a tendency for the ensemble based AI model to assign a non-viable prediction to a distorted image. This makes sense from the point of view of the ensemble based AI model, which cannot distinguish between an image of a damaged embryo and a damaged image of a normal embryo. In both cases, a distortion is identified by the ensemble based AI model, and the likelihood of assigning the image a non-viable prediction increases.
[00194] As a confirmation of this analysis, the ensemble based AI model was applied to only the 183 Pixelink camera images removed from the main high quality image set from Oregon Reproductive Medicine, and the results are shown in Table 14.
TABLE 14
Accuracy of the ensemble based AI model, when applied to the low quality Pixelink images of Study 2 from Oregon Reproductive Medicine. Results show the accuracy in identifying viable embryos, non-viable embryos, and the total accuracy for both viable and non-viable embryos combined.
[00195] It is clear from Table 14 that in the case of distorted images and poor quality images (i.e. failing a quality assessment), not only will the ensemble based AI model performance drop, but a larger proportion of the images will be assigned a non-viable prediction. Further analysis of the ensemble based AI model behaviour on alternative camera setups, and a method for handling such artefacts to improve the result, is discussed below. The distribution of the inferences, displayed as histograms 1700 and 1800, is shown in Figures 17 and 18. Just as in Study 1, Figures 17 and 18 both show a clear separation between the correct (1720; 1820; boxes filled with thick downward diagonal lines) and incorrect predictions (1710; 1810; boxes filled with thin upward diagonal lines) for both the viable and non-viable embryos. The shapes of the distributions in Figures 17 and 18 are also similar to each other, although there is a higher rate of False Positives than False Negatives. [00196] Study 3 - EmbryoScope validation
[00197] In Study 3, the potential performance of the ensemble based AI model on a dataset sourced from a completely different camera setup is explored. A limited number of EmbryoScope images were obtained from Alpha Fertility Centre, with the intention of testing the ensemble based AI model, which has been trained predominantly on phase contrast microscope images. The EmbryoScope images have a clear bright ring around the embryo coming from the incubator's lamp, and a dark region outside this ring, which is not present in a typical phase contrast microscope image from Study 1. Application of the model to the EmbryoScope images without any additional treatment results in an uneven prediction, where a high proportion of the images are predicted to be non-viable, leading to a high rate of False Negatives and a low sensitivity, as shown in Table 15. However, using computer vision imaging techniques, a coarse, first-pass application to bring the image closer to its expected form results in a significant rebalancing of the inferences, and an increase in accuracy.
TABLE 15
Accuracy of the ensemble based AI model, when applied to the blind validation dataset of Study 3 from Alpha Fertility Centre. Results show the accuracy in identifying viable embryos, non-viable embryos, and the total accuracy for both viable and non-viable embryos.
[00198] While this dataset is small, it nevertheless provides evidence that computer vision techniques that reduce the variability in the form of the image can be used to improve the generalizability of the ensemble based AI model. A comparison with the embryologist was also conducted. While no scores were provided directly by Alpha Fertility Centre, it was found that the conservative assumption that embryos are predicted to be likely viable (to avoid False Negatives) leads to a very similar accuracy to the true embryologist accuracy in the case of Study 1. Therefore, by making this assumption, the comparison between the ensemble based AI model accuracy and the embryologist accuracy can be carried out in the same way, as shown in Table 16. In this Study, a percentage improvement of 33.33% was found, similar to the total improvement of 31.85% obtained from Study 1.
TABLE 16 Embryologist comparison. In this case where no embryologist scores were recorded, it is assumed that all embryos are conservatively predicted as likely viable, as a substitute measure. The expected embryologist accuracy is similar to those of the clinics in Study 1.
[00199] The distribution of inferences can also be obtained in this study, as shown in Figures 19 and 20. Figure 19 is a plot of the distribution of inference scores for viable embryos (successful clinical pregnancy) using the ensemble based AI model 1900 (False Negatives 1910 in boxes filled with thin upward diagonal lines; True Positives 1920 in boxes filled with thick downward diagonal lines). Figure 20 is a plot of the distribution of inference scores for non-viable embryos (unsuccessful clinical pregnancy) using the ensemble based AI model 2000 (False Positives 2010 in boxes filled with thin upward diagonal lines; True Negatives 2020 in boxes filled with thick downward diagonal lines). While the limited size of the study (62 images) does not allow the distribution to be very clear, it can nevertheless be observed that, in this case, the separation between the correct (1920; 2020) and incorrect predictions (1910; 2010) for both viable and non-viable embryos is much less distinct. This is to be expected for images that exhibit quite different additional features as artefacts from the EmbryoScope camera setup. These additional artefacts effectively add noise to the images, making it more difficult to extract the relevant features that indicate embryo health.
[00200] Furthermore, the accuracy in the viable category is significantly lower than in the non-viable category, leading to a high rate of False Negatives. However, it was found that this effect was much reduced after even a preliminary computer vision treatment of the images, providing evidence for the improvement obtainable by handling images from different camera sources. In addition, it is expected that the addition of EmbryoScope images during a subsequent training or fine-tuning phase will also lead to improved performance.
[00201] Summary
[00202] The efficacy of AI models, including deep learning and computer vision models, to predict the viability of embryos based on microscope images was explored in an Australian pilot study, and three cross-clinic studies to develop a general ensemble based AI model. [00203] The pilot study involving a single Australian clinic was able to produce an overall accuracy of 67.7% in identifying embryo viability, with 74.1% accuracy for viable embryos and 65.3% accuracy for non-viable embryos. This improves upon the embryologists' classification rate by 30.8%. The success of these results prompted a more thorough cross-clinic study.
[00204] In 3 separate cross-clinic studies, a general AI selection model was developed, validated, and tested on a range of demographics from different clinics across the US, New Zealand and Malaysia. In Study 1, it was found that the ensemble based AI model is capable of achieving a high accuracy when compared to embryologists from each of the clinics, with a mean improvement of 31.85% in a cross-clinic blind validation study - similar to the improvement rate in the Australian pilot study. In addition, the distribution of the inference scores obtained from the ensemble based AI model exhibited a clear separation between the correct and incorrect predictions for both viable and non-viable embryos, which provides evidence that the model is translating correctly to future blind datasets.
[00205] A comparative study with embryologist scores was expanded to consider the effect of the order of the embryo rank. By transforming the ensemble based AI model inferences and the embryologist rank into an integer between 1 and 5, a direct comparison could be made as to how the ensemble based AI model will differ in ranking the embryos from most viable to least viable, compared to the embryologist. It was found that the ensemble based AI model again outperformed the embryologist, with 40.08% of the images receiving an improved ranking, only 25.19% of the images receiving a worse ranking, and 34.73% of the images unchanged in their ranking.
[00206] The ensemble based AI model was applied to a second blind validation set, which exhibited accuracy within a few percent of Study 1. The ability of the ensemble based AI model to perform on damaged or distorted images was also assessed. It was found that images that do not conform to the standard phase-contrast microscope images, or are low quality, blurred, compressed or poorly cropped, are likely to be assessed as non-viable, and the confidence of the ensemble based AI model in its prediction for the embryo image is reduced.
[00207] In order to understand the issue of different camera hardware and how that affects the outcome of a study, a dataset of EmbryoScope images was obtained, and it was found that the ensemble based AI model, when naively applied to this dataset, does not reach the high accuracy achieved on the original set in Study 1. However, a preliminary data cleaning treatment of the images to handle artefacts and reduce noise systematically present in the EmbryoScope images markedly improved the results, bringing the accuracy of the ensemble based AI model much closer to its optimal value on Study 1.
Because of the ability of the ensemble based AI model to be improved by incorporating larger and more diverse datasets into the training process, and thus fine-tuning the models so that they can self-improve over time, the 3 Studies in this document provide compelling evidence for the efficacy of AI models as vital tools for the robust and consistent assessment of embryo viability in the near future.
[00208] Further, whilst the examples above use phase contrast images from light microscopes and EmbryoScope systems, further testing has shown that the method may be used on images captured using a range of imaging systems. This testing has shown that the method is robust to a range of image sensors and images (i.e. beyond just EmbryoScopes and phase contrast images), including images extracted from video and time lapse systems. When using images extracted from video and time lapse systems, a reference capture time point may be defined, and the image extracted from such systems may be the image closest in time to this reference capture time point, or the first image captured after the reference time. Quality assessment may be performed on images to ensure a selected image passes minimum quality criteria.
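As an illustration of the frame selection step described above, the following is a minimal sketch in Python of choosing one image from a time-lapse series; the (timestamp, image) frame structure, the 120 hour reference offset and the quality check are assumptions made for this example only and are not specified by the method.

```python
from datetime import timedelta

def passes_quality_criteria(image):
    # Placeholder for minimum quality checks (e.g. focus, exposure, cropping);
    # the actual criteria are an assumption of this sketch.
    return True

def select_frame(frames, fertilisation_time, reference_offset_hours=120.0):
    """Select one frame from a time-lapse series.

    frames: list of (timestamp, image) pairs ordered by capture time.
    Returns the frame closest to the reference capture time point, or the
    first acceptable frame captured after that time.
    """
    reference_time = fertilisation_time + timedelta(hours=reference_offset_hours)
    # Frame closest in time to the reference capture time point.
    closest = min(frames, key=lambda f: abs((f[0] - reference_time).total_seconds()))
    if passes_quality_criteria(closest[1]):
        return closest
    # Otherwise fall back to the first acceptable frame after the reference time.
    for timestamp, image in frames:
        if timestamp >= reference_time and passes_quality_criteria(image):
            return (timestamp, image)
    return closest
```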
[00209] Embodiments of methods and systems for the computational generation of AI models configured to generate an embryo viability score from an image using one or more deep learning models have been described. Given a new set of embryo images for training, a new AI model for estimating embryo viability can be generated by segmenting images to identify Zona Pellucida and IZC regions, which annotate the images into key morphological components. At least one Zona Deep Learning model is then trained on the Zona Pellucida masked images. In some embodiments a plurality of AI models including deep learning models and/or computer vision models are generated, and models that exhibit stability, transferability from the validation set to the blind test set, and prediction accuracy are selected and retained. These AI models may be combined, for example using an ensemble model that selects models based on contrasting and diversity criteria, and which are combined using a confidence based voting strategy. Once a suitable AI model is trained, it can then be deployed to estimate the viability of newly collected images. This can be provided as a cloud service allowing IVF clinics or embryologists to upload captured images and get a viability score to assist in deciding whether to implant an embryo, or where multiple embryos are available, selecting which embryo (or embryos) is most likely to be viable. Deployment may comprise exporting the model coefficients and model metadata to a file and then loading onto another computing system to process new images, or reconfiguring the computational system to receive new images and generate a viability estimate.
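By way of illustration only, the following is a minimal sketch of one possible confidence based voting strategy for combining the per-model viability scores of such an ensemble; the confidence weighting scheme and the 0.5 decision threshold are assumptions made for this example rather than details fixed by the embodiments described.

```python
import numpy as np

def ensemble_viability_score(model_scores, threshold=0.5):
    """Combine per-model viability probabilities into a single ensemble score.

    model_scores: iterable of probabilities (one per selected AI model) that
    the embryo in the image is viable.
    """
    scores = np.asarray(list(model_scores), dtype=float)
    # Treat each model's distance from the decision boundary as its confidence.
    confidence = np.abs(scores - threshold)
    if confidence.sum() == 0.0:
        return float(scores.mean())
    # Confidence-weighted vote: more decisive models contribute more.
    weights = confidence / confidence.sum()
    return float(np.sum(weights * scores))

# Example usage with eight hypothetical model outputs (e.g. four Zona models
# plus four full-embryo models).
score = ensemble_viability_score([0.81, 0.77, 0.62, 0.90, 0.55, 0.70, 0.66, 0.84])
label = "viable" if score >= 0.5 else "non-viable"
```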
[00210] Implementations of an ensemble based AI model include numerous choices, and embodiments described herein include several novel and advantageous features. Image pre-processing steps such as segmentation to identify Zona Pellucida and IZC regions, object detection, normalisation of images, cropping of images, and image cleaning such as removal of old images or non-conforming images (e.g. containing artefacts) can be performed.
[00211] In relation to the deep learning models, the use of segmentation to identify the Zona Pellucida has a significant effect, with the final ensemble based AI model featuring four Zona models. Further, deep learning models were generally found to outperform computer vision models, with the final model comprising an ensemble of 8 deep learning AI models. However useful results can still be generated using a single AI model based on Zona images, or an ensemble (or similar) AI model comprising a combination of Deep Learning and CV models. The use of some deep learning models in which segmentation is performed prior to deep learning is thus preferred, and assists in producing contrasting deep learning models for use in the ensemble based AI model. Image augmentation was also found to improve robustness. Several architectures that performed well included ResNet-152 and DenseNet-161 (although other variants can be used). Similarly, Stochastic Gradient Descent generally outperformed all other optimisation protocols for altering neuron weights in almost all trials (followed by Adam). The use of a custom loss function which modified the optimisation surface to make global minima more obvious improved robustness. Randomisation of the data sets before training, and in particular checking that the distribution of the dataset is even (or similar) across the test and training sets, was also found to have a significant effect. Images of viable embryos are quite diverse, and thus checking the randomisation provides robustness against the diversity effects. Using a selection process to choose contrasting models (i.e. their results are as independent as possible, and the scores are well distributed) for building the ensemble based AI model also improved performance. This can be assessed by examining the overlap in the set of viable images for two models. Prioritisation of the reduction of false negatives (i.e. data cleansing) also assists in improving the accuracy. As described herein, in the case of the embryo viability assessment model, models using images taken 5 days after in-vitro fertilisation outperformed models using earlier images (e.g. day 4 or before).
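As an illustrative sketch of such a custom loss function, the following PyTorch-style example adds a residual term, encoding the collective difference between the predicted viability probabilities and the target outcomes over a batch, to the normal cross entropy loss; the quadratic form of the residual term and the weighting factor alpha are assumptions of this example and are not fixed by the description.

```python
import torch
import torch.nn.functional as F

def cross_entropy_with_residual(logits, targets, alpha=0.1):
    """Cross entropy augmented with a residual term over the batch.

    logits:  tensor of shape (batch, 2) of raw class scores
             (class 0 = non-viable, class 1 = viable).
    targets: tensor of shape (batch,) of integer class labels.
    alpha:   weight of the residual contribution (assumed value).
    """
    ce = F.cross_entropy(logits, targets)
    # Predicted probability of the 'viable' class for each image.
    p_viable = torch.softmax(logits, dim=1)[:, 1]
    # Residual term: collective difference between predictions and targets,
    # added as an extra contribution to the normal cross entropy loss.
    residual = torch.mean((p_viable - targets.float()) ** 2)
    return ce + alpha * residual
```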
[00212] AI models using computer vision and deep learning methods can be generated using one or more of these advantageous features, and could be applied to other image sets besides embryos. With reference to Figure 1, the embryo model 100 could be replaced with an alternative model, trained and used on other image data, whether of a medical nature or not. The methods could also be applied more generally to deep learning based models, including ensemble based deep learning models. These could be trained and implemented using systems such as those illustrated in Figures 3A and 3B and described above.
[00213] Models trained as described herein can be usefully deployed to classify new images and thus assist embryologists in making implantation decisions, thus increasing success rates (i.e. pregnancies). Extensive testing of an embodiment of the ensemble based AI model was performed in which the ensemble based AI model was configured to generate an embryo viability score of an embryo from an image of the embryo taken five days after in-vitro fertilisation. The testing showed the model was able to clearly separate viable and non-viable embryos (see Figure 13), and Tables 10 to 12 and Figures 14 to 16 illustrate that the model outperformed embryologists. In particular, as illustrated in the above studies, an embodiment of an ensemble based AI model was found to have high accuracy in both identifying viable embryos (74.1%) and non-viable embryos (65.3%), and to significantly outperform experienced embryologists in assessing the viability of images by more than 30%.
[00214] Those of skill in the art would understand that information and signals may be represented using any of a variety of technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
[00215] Those of skill in the art would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software or instructions, middleware, platforms, or combinations thereof. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
[00216] The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two, including cloud based systems. For a hardware implementation, processing may be implemented within one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, or other electronic units designed to perform the functions described herein, or a combination thereof. Various middleware and computing platforms may be used.
[00217] In some embodiments the processor module comprises one or more Central Processing Units (CPUs) or Graphical Processing Units (GPUs) configured to perform some of the steps of the methods. Similarly a computing apparatus may comprise one or more CPUs and/or GPUs. A CPU may comprise an Input/Output Interface, an Arithmetic and Logic Unit (ALU) and a Control Unit and Program Counter element which is in communication with input and output devices through the Input/Output Interface. The Input/Output Interface may comprise a network interface and/or communications module for communicating with an equivalent communications module in another device using a predefined communications protocol (e.g. Bluetooth, Zigbee, IEEE 802.15, IEEE 802.11, TCP/IP, UDP, etc.). The computing apparatus may comprise a single CPU (core) or multiple CPUs (multiple cores), or multiple processors. The computing apparatus is typically a cloud based computing apparatus using GPU clusters, but may be a parallel processor, a vector processor, or a distributed computing device. Memory is operatively coupled to the processor(s) and may comprise RAM and ROM components, and may be provided within or external to the device or processor module. The memory may be used to store an operating system and additional software modules or instructions. The processor(s) may be configured to load and execute the software modules or instructions stored in the memory.
[00218] Software modules, also known as computer programs, computer codes, or instructions, may contain a number of source code or object code segments or instructions, and may reside in any computer readable medium such as a RAM memory, flash memory, ROM memory, EPROM memory, registers, hard disk, a removable disk, a CD-ROM, a DVD-ROM, a Blu-ray disc, or any other form of computer readable medium. In some aspects the computer-readable media may comprise non-transitory computer-readable media (e.g., tangible media). In addition, for other aspects computer-readable media may comprise transitory computer-readable media (e.g., a signal). Combinations of the above should also be included within the scope of computer-readable media. In another aspect, the computer readable medium may be integral to the processor. The processor and the computer readable medium may reside in an ASIC or related device. The software codes may be stored in a memory unit and the processor may be configured to execute them. The memory unit may be implemented within the processor or external to the processor, in which case it can be communicatively coupled to the processor via various means as is known in the art.
[00219] Further, it should be appreciated that modules and/or other appropriate means for performing the methods and techniques described herein can be downloaded and/or otherwise obtained by a computing device. For example, such a device can be coupled to a server to facilitate the transfer of means for performing the methods described herein. Alternatively, various methods described herein can be provided via storage means (e.g., RAM, ROM, a physical storage medium such as a compact disc (CD) or floppy disk, etc.), such that a computing device can obtain the various methods upon coupling or providing the storage means to the device. Moreover, any other suitable technique for providing the methods and techniques described herein to a device can be utilized.
[00220] The methods disclosed herein comprise one or more steps or actions for achieving the described method. The method steps and/or actions may be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of steps or actions is specified, the order and/or use of specific steps and/or actions may be modified without departing from the scope of the claims.
[00221] Throughout the specification and the claims that follow, unless the context requires otherwise, the words "comprise" and "include" and variations such as "comprising" and "including" will be understood to imply the inclusion of a stated integer or group of integers, but not the exclusion of any other integer or group of integers.
[00222] The reference to any prior art in this specification is not, and should not be taken as, an acknowledgement or any form of suggestion that such prior art forms part of the common general knowledge.
[00223] It will be appreciated by those skilled in the art that the disclosure is not restricted in its use to the particular application or applications described. Neither is the present disclosure restricted in its preferred embodiment with regard to the particular elements and/or features described or depicted herein. It will be appreciated that the disclosure is not limited to the embodiment or embodiments disclosed, but is capable of numerous rearrangements, modifications and substitutions without departing from the scope as set forth and defined by the following claims.

Claims

1. A method for computationally generating an Artificial Intelligence (AI) model configured to estimate an embryo viability score from an image, the method comprising:
receiving a plurality of images and associated metadata, wherein each image is captured during a pre-determined time window after In-Vitro Fertilisation (IVF) and the pre-determined time window is 24 hours or less, and the metadata associated with the image comprises at least a pregnancy outcome label; pre-processing each image comprising at least segmenting the image to identify a Zona Pellucida region;
generating an Artificial Intelligence (AI) model configured to generate an embryo viability score from an input image by training at least one Zona Deep Learning Model using a deep learning method, comprising training a deep learning model on a set of Zona Pellucida images in which the Zona Pellucida regions are identified, and the associated pregnancy outcome labels are at least used to assess the accuracy of a trained model; and
deploying the AI model.
2. The method as claimed in claim 1, wherein the set of Zona Pellucida images comprises images in which regions bounded by the Zona Pellucida region are masked.
3. The method as claimed in claim 1 or 2, wherein generating the AI model further comprises training one or more additional AI models, wherein each additional AI model is either a computer vision model trained using a machine learning method that uses a combination of one or more computer vision descriptors extracted from an image to estimate an embryo viability score, a deep learning model trained on images localised to the embryo comprising both Zona Pellucida and IZC regions, or a deep learning model trained on a set of IntraZonal Cavity (IZC) images in which all regions apart from the IZC are masked, and either using an ensemble method to combine at least two of the at least one Zona deep learning model and the one or more additional AI models to generate the AI model embryo viability score from an input image, or using a distillation method to train an AI model to generate the AI model embryo viability score using the at least one Zona deep learning model and the one or more additional AI models to generate the AI model.
4. The method as claimed in claim 3, wherein the AI model is generated using an ensemble model comprising selecting at least two contrasting AI models from the at least one Zona deep learning model and the one or more additional AI models, wherein selection of AI models is performed to generate a set of contrasting AI models, and applying a voting strategy to the at least two contrasting AI models that defines how the selected at least two contrasting AI models are combined to generate an outcome score for an image.
5. The method as claimed in claim 4, wherein selecting at least two contrasting AI models comprises:
generating a distribution of embryo viability scores from a set of images for each of the at least one Zona deep learning model and the one or more additional AI models; and
comparing the distributions and discarding a model if the associated distribution is too similar to another distribution, so as to select AI models with contrasting distributions.
6. The method as claimed in any preceding claim wherein the pre-determined time window is a 24 hour time period beginning 5 days after fertilisation.
7. The method as claimed in any preceding claim, wherein the pregnancy outcome label is a ground-truth pregnancy outcome measurement performed within 12 weeks after embryo transfer.
8. The method as claimed in claim 7 wherein the ground-truth pregnancy outcome measurement is whether a foetal heartbeat is detected.
9. The method as claimed in any preceding claim further comprising cleaning the plurality of images comprising identifying images with likely incorrect pregnancy outcome labels, and excluding or re-labelling the identified images.
10. The method as claimed in claim 9 wherein cleaning the plurality of images comprises estimating the likelihood that a pregnancy outcome label associated with an image is incorrect and comparing against a threshold value, and then excluding or re-labelling images with a likelihood exceeding the threshold value.
11. The method as claimed in claim 10 wherein estimating the likelihood that a pregnancy outcome label associated with an image is incorrect is performed by using a plurality of AI classification models and a k-fold cross validation method in which the plurality of images are split into k mutually exclusive validation datasets, and each of the plurality of AI classification models is trained on k-1 validation datasets in combination and then used to classify images in the remaining validation dataset, and the likelihood is determined based on the number of AI classification models which misclassify the pregnancy outcome label of an image.
12. The method as claimed in any preceding claim wherein training each AI model or generating the ensemble model comprises assessing the performance of an AI model using a plurality of metrics comprising at least one accuracy metric and at least one confidence metric, or a metric combining accuracy and confidence.
13. The method as claimed in any preceding claim wherein pre-processing the image further comprises cropping the image by localising an embryo in the image using a deep learning or computer vision method.
14. The method as claimed in any preceding claim wherein pre-processing the image further comprises one or more of padding the image, normalising the colour balance, normalising the brightness, and scaling the image to a predefined resolution.
15. The method as claimed in any preceding claim, further comprising generating one or more augmented images for use in training an AI model.
16. The method as claimed in the preceding claim wherein an augmented image is generated by applying one or more rotations, reflections, resizing, blurring, contrast variation, jitter, or random compression noise to an image.
17. The method as claimed in claim 15 or 16, wherein during training of an AI model one or more augmented images are generated for each image in the training set, and during assessment of the validation set, the results for the one or more augmented images are combined to generate a single result for the image.
18. The method as claimed in any preceding claim, wherein pre-processing the image further comprises annotating the image using one or more feature descriptor models, and masking all areas of the image except those within a given radius of the descriptor key point.
19. The method as claimed in any preceding claim, wherein each AI model generates an outcome score wherein the outcome is an n-ary outcome having n states, and training an AI model comprises a plurality of training-validation cycles, each further comprising randomly allocating the plurality of images to one of a training set, a validation set or a blind validation set, such that the training dataset comprises at least 60% of the images, the validation dataset comprises at least 10% of the images, and the blind validation dataset comprises at least 10% of the images, and after allocating the images to the training set, validation set and blind validation set, calculating the frequency of each of the n-ary outcome states in each of the training set, validation set and blind validation set, and testing that the frequencies are similar, and if the frequencies are not similar then discarding the allocation and repeating the randomisation until a randomisation is obtained in which the frequencies are similar.
20. The method as claimed in claim 3 wherein training a computer vision model comprises performing a plurality of training-validation cycles, and during each cycle the images are clustered based on the computer vision descriptors using an unsupervised clustering algorithm to generate a set of clusters, and each image is assigned to a cluster using a distance measure based on the values of the computer vision descriptors of the image, and a supervised learning method is used to determine whether a particular combination of these features corresponds to an outcome measure, based on frequency information of the presence of each computer vision descriptor in the plurality of images.
21. The method as claimed in any preceding claim, wherein each deep learning model is a convolutional neural network (CNN) and for an input image each deep learning model generates an outcome probability.
22. The method as claimed in any preceding claim, wherein the deep learning method uses a loss function configured to modify an optimisation surface to emphasise global minima.
23. The method as claimed in the preceding claim, wherein the loss function includes a residual term defined in terms of the network weights, which encodes the collective difference in the predicted value from the model and the target outcome for each image, and includes it as an additional contribution to the normal cross entropy loss function.
24. The method as claimed in any preceding claim, wherein the method is performed on a cloud based computing system using a Webserver, a database, and a plurality of training servers, wherein the Webserver receives one or more model training parameters from a user, and the Webserver initiates a training process on one or more of the plurality of training servers, comprising uploading training code to one of the plurality of training servers, and the training server requests the plurality of images and associated metadata from a data repository, and performs the steps of preparing each image, generating a plurality of computer vision models and generating a plurality of deep learning models, and each training server is configured to periodically save the models to a storage service, and accuracy information to one or more log files, to allow a training process to be restarted.
25. The method as claimed in any preceding claim, wherein the ensemble model is trained to bias residual inaccuracies to minimize false negatives.
26. The method as claimed in any preceding claim wherein the embryo viability score is a binary outcome of either viable or non-viable.
27. The method as claimed in any preceding claim, wherein each image is a phase contrast image.
28. A method for computationally generating an embryo viability score from an image, the method comprising: generating, in a computational system, an Artificial Intelligence (AI) model configured to generate an embryo viability score from an image according to the method of any one of claims 1 to 27; receiving, from a user via a user interface of the computational system, an image captured during a pre-determined time window after In-Vitro Fertilisation (IVF);
pre-processing the image according to the pre-processing steps used to generate the AI model; providing the pre-processed image to the AI model to obtain an estimate of the embryo viability score; and
sending the embryo viability score to the user via the user interface.
29. A method for obtaining an embryo viability score from an image, comprising:
uploading, via a user interface, an image captured during a pre-determined time window after In-Vitro Fertilisation (IVF) to a cloud based Artificial Intelligence (AI) model configured to generate an embryo viability score from an image wherein the AI model is generated according to the method of any one of claims 1 to 27;
receiving an embryo viability score from the cloud based AI model via the user interface.
30. A cloud based computational system configured to computationally generate an Artificial Intelligence (AI) model configured to estimate an embryo viability score from an image according to the method of any one of claims 1 to 27.
31. A cloud based computational system configured to computationally generate an embryo viability score from an image, wherein the computational system comprises:
an Artificial Intelligence (AI) model configured to generate an embryo viability score from an image wherein the AI model is generated according to the method of any one of claims 1 to 27;
receiving, from a user via a user interface of the computational system, an image captured during a pre-determined time window after In-Vitro Fertilisation (IVF);
providing the image to the AI model to obtain an embryo viability score; and
sending the embryo viability score to the user via the user interface.
32. A computational system configured to generate an embryo viability score from an image, wherein the computational system comprises at least one processor, and at least one memory comprising instructions to configure the at least one processor to:
receive an image captured during a pre-determined time window after In-Vitro Fertilisation (IVF); upload, via a user interface, the image captured during the pre-determined time window after In-Vitro Fertilisation (IVF) to a cloud based Artificial Intelligence (AI) model configured to generate an embryo viability score from an image wherein the AI model is generated according to the method of any one of claims 1 to 27;
receive an embryo viability score from the cloud based AI model; and display the embryo viability score via the user interface.
PCT/AU2020/000027 2019-04-04 2020-04-02 Method and system for selecting embryos WO2020198779A1 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
CN202080041427.XA CN113906472A (en) 2019-04-04 2020-04-02 Method and system for selecting embryos
JP2021560476A JP2022528961A (en) 2019-04-04 2020-04-02 Methods and systems for selecting embryos
EP20783755.0A EP3948772A4 (en) 2019-04-04 2020-04-02 Method and system for selecting embryos
US17/600,739 US20220198657A1 (en) 2019-04-04 2020-04-02 Method and system for selecting embryos
AU2020251045A AU2020251045A1 (en) 2019-04-04 2020-04-02 Method and system for selecting embryos

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
AU2019901152A AU2019901152A0 (en) 2019-04-04 Method and system for selecting embryos
AU2019901152 2019-04-04

Publications (1)

Publication Number Publication Date
WO2020198779A1 true WO2020198779A1 (en) 2020-10-08

Family

ID=72664320

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/AU2020/000027 WO2020198779A1 (en) 2019-04-04 2020-04-02 Method and system for selecting embryos

Country Status (6)

Country Link
US (1) US20220198657A1 (en)
EP (1) EP3948772A4 (en)
JP (1) JP2022528961A (en)
CN (1) CN113906472A (en)
AU (1) AU2020251045A1 (en)
WO (1) WO2020198779A1 (en)

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112635060A (en) * 2020-12-29 2021-04-09 北京航空航天大学合肥创新研究院 Viability evaluation method and device, viability evaluation equipment and storage medium
WO2022150914A1 (en) * 2021-01-12 2022-07-21 Trio Fertility Research Inc. Systems and methods for non-invasive preimplantation embryo genetic screening
US11403069B2 (en) 2017-07-24 2022-08-02 Tesla, Inc. Accelerated mathematical engine
US11409692B2 (en) 2017-07-24 2022-08-09 Tesla, Inc. Vector computational unit
WO2022192436A1 (en) * 2021-03-09 2022-09-15 Thread Robotics Inc. System and method for automated gamete selection
US11487288B2 (en) 2017-03-23 2022-11-01 Tesla, Inc. Data synthesis for autonomous control systems
US11537811B2 (en) 2018-12-04 2022-12-27 Tesla, Inc. Enhanced object detection for autonomous vehicles based on field view
US11561791B2 (en) 2018-02-01 2023-01-24 Tesla, Inc. Vector computational unit receiving data elements in parallel from a last row of a computational array
US11562231B2 (en) 2018-09-03 2023-01-24 Tesla, Inc. Neural networks for embedded devices
US11567514B2 (en) 2019-02-11 2023-01-31 Tesla, Inc. Autonomous and user controlled vehicle summon to a target
WO2023023263A1 (en) * 2021-08-18 2023-02-23 Cercle.Ai, Inc. Wisdom based decision system
US11610117B2 (en) 2018-12-27 2023-03-21 Tesla, Inc. System and method for adapting a neural network model on a hardware platform
US11636333B2 (en) 2018-07-26 2023-04-25 Tesla, Inc. Optimizing neural network structures for embedded systems
US11665108B2 (en) 2018-10-25 2023-05-30 Tesla, Inc. QoS manager for system on a chip communications
US11681649B2 (en) 2017-07-24 2023-06-20 Tesla, Inc. Computational array microprocessor system using non-consecutive data formatting
WO2023121575A1 (en) * 2021-12-23 2023-06-29 Kodmed Saglik Ve Bilisim Teknolojileri A.S Determining the age and arrest status of embryos using a single deep learning model
US11694344B2 (en) 2021-11-05 2023-07-04 Thread Robotics Inc. System and method for automated cell positioning
US11734562B2 (en) 2018-06-20 2023-08-22 Tesla, Inc. Data pipeline and deep learning system for autonomous driving
US11748620B2 (en) 2019-02-01 2023-09-05 Tesla, Inc. Generating ground truth for machine learning from time series elements
CN116823831A (en) * 2023-08-29 2023-09-29 武汉互创联合科技有限公司 Embryo image fragment removing system based on cyclic feature reasoning
US11790664B2 (en) 2019-02-19 2023-10-17 Tesla, Inc. Estimating object properties using visual image data
US11816585B2 (en) 2018-12-03 2023-11-14 Tesla, Inc. Machine learning models operating at different frequencies for autonomous vehicles
US11841434B2 (en) 2018-07-20 2023-12-12 Tesla, Inc. Annotation cross-labeling for autonomous control systems
US11893774B2 (en) 2018-10-11 2024-02-06 Tesla, Inc. Systems and methods for training machine models with augmented data
US11893393B2 (en) 2017-07-24 2024-02-06 Tesla, Inc. Computational array microprocessor system with hardware arbiter managing memory requests

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10725438B1 (en) * 2019-10-01 2020-07-28 11114140 Canada Inc. System and method for automated water operations for aquatic facilities using image-based machine learning
US20220012873A1 (en) * 2020-07-10 2022-01-13 Embryonics LTD Predicting Embryo Implantation Probability
BR112023001951A2 (en) * 2020-08-03 2023-02-28 Emgenisys Inc REAL-TIME VIDEO-BASED EMBRYO EVALUATION
US20220051788A1 (en) * 2020-08-17 2022-02-17 Fertility Guidance Technologies Methods and systems for dynamic automation of quality control and information management for an in vitro fertilization (ivf) laboratory
TWI806006B (en) * 2021-02-20 2023-06-21 緯創資通股份有限公司 Thermal image positioning method and system thereof
US20220383497A1 (en) * 2021-05-28 2022-12-01 Daniel Needleman Automated analysis and selection of human embryos
WO2024019963A1 (en) * 2022-07-17 2024-01-25 Fertility Basics, Inc. Moderated communication system for infertility treatment
KR102558551B1 (en) * 2022-09-15 2023-07-24 주식회사 카이헬스 Method for providing information of in vitro fertilization and device using the same
CN115272303B (en) * 2022-09-26 2023-03-10 睿贸恒诚(山东)科技发展有限责任公司 Textile fabric defect degree evaluation method, device and system based on Gaussian blur
CN116561627B (en) * 2023-05-11 2024-04-16 中南大学 Method, apparatus, processor and storage medium for determining embryo transfer type
CN116778481B (en) * 2023-08-17 2023-10-31 武汉互创联合科技有限公司 Method and system for identifying blastomere image based on key point detection
CN116844160B (en) * 2023-09-01 2023-11-28 武汉互创联合科技有限公司 Embryo development quality assessment system based on main body identification
CN116958710B (en) * 2023-09-01 2023-12-08 武汉互创联合科技有限公司 Embryo development stage prediction method and system based on annular feature statistics

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150346187A1 (en) 2012-05-31 2015-12-03 Progyny, Inc. In Vitro Embryo Blastocyst Prediction Methods
US20170089820A1 (en) * 2009-08-22 2017-03-30 The Board Of Trustees Of The Leland Stanford Junior University Imaging and evaluating embryos, oocytes, and stem cells
CA3068194A1 (en) * 2017-07-10 2019-01-17 Sony Corporation Information processing apparatus, information processing method, program, and observation system
US20190042958A1 (en) 2016-01-28 2019-02-07 Gerard Letterie Automated image analysis to assess reproductive potential of human oocytes and pronuclear embryos
CN109409182A (en) * 2018-07-17 2019-03-01 宁波华仪宁创智能科技有限公司 Embryo's automatic identifying method based on image procossing
US11335000B2 (en) 2017-10-26 2022-05-17 Sony Corporation Fertile ovum quality evaluation method, fertile ovum quality evaluation system, program, and information processing apparatus

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170089820A1 (en) * 2009-08-22 2017-03-30 The Board Of Trustees Of The Leland Stanford Junior University Imaging and evaluating embryos, oocytes, and stem cells
US20150346187A1 (en) 2012-05-31 2015-12-03 Progyny, Inc. In Vitro Embryo Blastocyst Prediction Methods
US20190042958A1 (en) 2016-01-28 2019-02-07 Gerard Letterie Automated image analysis to assess reproductive potential of human oocytes and pronuclear embryos
CA3068194A1 (en) * 2017-07-10 2019-01-17 Sony Corporation Information processing apparatus, information processing method, program, and observation system
US11335000B2 (en) 2017-10-26 2022-05-17 Sony Corporation Fertile ovum quality evaluation method, fertile ovum quality evaluation system, program, and information processing apparatus
CN109409182A (en) * 2018-07-17 2019-03-01 宁波华仪宁创智能科技有限公司 Embryo's automatic identifying method based on image procossing

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3948772A4

Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11487288B2 (en) 2017-03-23 2022-11-01 Tesla, Inc. Data synthesis for autonomous control systems
US11893393B2 (en) 2017-07-24 2024-02-06 Tesla, Inc. Computational array microprocessor system with hardware arbiter managing memory requests
US11403069B2 (en) 2017-07-24 2022-08-02 Tesla, Inc. Accelerated mathematical engine
US11409692B2 (en) 2017-07-24 2022-08-09 Tesla, Inc. Vector computational unit
US11681649B2 (en) 2017-07-24 2023-06-20 Tesla, Inc. Computational array microprocessor system using non-consecutive data formatting
US11561791B2 (en) 2018-02-01 2023-01-24 Tesla, Inc. Vector computational unit receiving data elements in parallel from a last row of a computational array
US11797304B2 (en) 2018-02-01 2023-10-24 Tesla, Inc. Instruction set architecture for a vector computational unit
US11734562B2 (en) 2018-06-20 2023-08-22 Tesla, Inc. Data pipeline and deep learning system for autonomous driving
US11841434B2 (en) 2018-07-20 2023-12-12 Tesla, Inc. Annotation cross-labeling for autonomous control systems
US11636333B2 (en) 2018-07-26 2023-04-25 Tesla, Inc. Optimizing neural network structures for embedded systems
US11562231B2 (en) 2018-09-03 2023-01-24 Tesla, Inc. Neural networks for embedded devices
US11893774B2 (en) 2018-10-11 2024-02-06 Tesla, Inc. Systems and methods for training machine models with augmented data
US11665108B2 (en) 2018-10-25 2023-05-30 Tesla, Inc. QoS manager for system on a chip communications
US11816585B2 (en) 2018-12-03 2023-11-14 Tesla, Inc. Machine learning models operating at different frequencies for autonomous vehicles
US11537811B2 (en) 2018-12-04 2022-12-27 Tesla, Inc. Enhanced object detection for autonomous vehicles based on field view
US11908171B2 (en) 2018-12-04 2024-02-20 Tesla, Inc. Enhanced object detection for autonomous vehicles based on field view
US11610117B2 (en) 2018-12-27 2023-03-21 Tesla, Inc. System and method for adapting a neural network model on a hardware platform
US11748620B2 (en) 2019-02-01 2023-09-05 Tesla, Inc. Generating ground truth for machine learning from time series elements
US11567514B2 (en) 2019-02-11 2023-01-31 Tesla, Inc. Autonomous and user controlled vehicle summon to a target
US11790664B2 (en) 2019-02-19 2023-10-17 Tesla, Inc. Estimating object properties using visual image data
WO2022143811A1 (en) * 2020-12-29 2022-07-07 北京航空航天大学合肥创新研究院 Survivability evaluation method and apparatus, evaluation device, and storage medium
CN112635060A (en) * 2020-12-29 2021-04-09 北京航空航天大学合肥创新研究院 Viability evaluation method and device, viability evaluation equipment and storage medium
WO2022150914A1 (en) * 2021-01-12 2022-07-21 Trio Fertility Research Inc. Systems and methods for non-invasive preimplantation embryo genetic screening
US11481900B2 (en) 2021-03-09 2022-10-25 Thread Robotics Inc. System and method for automated gamete selection
WO2022192436A1 (en) * 2021-03-09 2022-09-15 Thread Robotics Inc. System and method for automated gamete selection
US11734822B2 (en) 2021-03-09 2023-08-22 Thread Robotics Inc. System and method for automated gamete selection
WO2023023263A1 (en) * 2021-08-18 2023-02-23 Cercle.Ai, Inc. Wisdom based decision system
US11694344B2 (en) 2021-11-05 2023-07-04 Thread Robotics Inc. System and method for automated cell positioning
WO2023121575A1 (en) * 2021-12-23 2023-06-29 Kodmed Saglik Ve Bilisim Teknolojileri A.S Determining the age and arrest status of embryos using a single deep learning model
CN116823831B (en) * 2023-08-29 2023-11-14 武汉互创联合科技有限公司 Embryo image fragment removing system based on cyclic feature reasoning
CN116823831A (en) * 2023-08-29 2023-09-29 武汉互创联合科技有限公司 Embryo image fragment removing system based on cyclic feature reasoning

Also Published As

Publication number Publication date
EP3948772A1 (en) 2022-02-09
CN113906472A (en) 2022-01-07
AU2020251045A1 (en) 2021-11-11
JP2022528961A (en) 2022-06-16
US20220198657A1 (en) 2022-06-23
EP3948772A4 (en) 2022-06-01

Similar Documents

Publication Publication Date Title
US20220198657A1 (en) Method and system for selecting embryos
US20220343178A1 (en) Method and system for performing non-invasive genetic testing using an artificial intelligence (ai) model
Vijayalakshmi Deep learning approach to detect malaria from microscopic images
Li et al. A comprehensive review of computer-aided whole-slide image analysis: from datasets to feature extraction, segmentation, classification and detection approaches
Ghosh et al. Automatic detection and classification of diabetic retinopathy stages using CNN
Pan et al. Classification of malaria-infected cells using deep convolutional neural networks
CN110211087B (en) Sharable semiautomatic marking method for diabetic fundus lesions
US20200311916A1 (en) Systems and methods for estimating embryo viability
Moses et al. Deep CNN-based damage classification of milled rice grains using a high-magnification image dataset
US8600143B1 (en) Method and system for hierarchical tissue analysis and classification
WO2018052586A1 (en) Method and system for multi-scale cell image segmentation using multiple parallel convolutional neural networks
JP7294695B2 (en) Program, Information Recording Medium, Classification Apparatus, and Classification Method Based on Trained Model
CN113011450B (en) Training method, training device, recognition method and recognition system for glaucoma recognition
Memeu A rapid malaria diagnostic method based on automatic detection and classification of plasmodium parasites in stained thin blood smear images
CN112464983A (en) Small sample learning method for apple tree leaf disease image classification
Malmsten et al. Automated cell division classification in early mouse and human embryos using convolutional neural networks
Vardhan et al. Detection of healthy and diseased crops in drone captured images using Deep Learning
Bejerano et al. Rice (Oryza Sativa) Grading classification using Hybrid Model Deep Convolutional Neural Networks-Support Vector Machine Classifier
Kaoungku et al. Colorectal Cancer Histology Image Classification Using Stacked Ensembles
CN116958710B (en) Embryo development stage prediction method and system based on annular feature statistics
JP2018125019A (en) Image processing apparatus and image processing method
Yigbeta et al. Enset (Enset ventricosum) Plant Disease and Pests Identification Using Image Processing and Deep Convolutional Neural Network.
Bhandari et al. Improved Diabetic Retinopathy Severity Classification Using Squeeze-and-excitation and Sparse Light Weight Multi-level Attention U-net With Transfer Learning From Xception
Vijayaraghavan Detection and Revelation of Multiple Ocular Diseases using Transfer Learning Techniques
CN116704181A (en) Image segmentation method, weight prediction method and equipment based on living CT data

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20783755

Country of ref document: EP

Kind code of ref document: A1

DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)
ENP Entry into the national phase

Ref document number: 2021560476

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2020783755

Country of ref document: EP

Effective date: 20211104

ENP Entry into the national phase

Ref document number: 2020251045

Country of ref document: AU

Date of ref document: 20200402

Kind code of ref document: A