CN116844143B - Embryo development stage prediction and quality assessment system based on edge enhancement - Google Patents

Embryo development stage prediction and quality assessment system based on edge enhancement

Info

Publication number
CN116844143B
CN116844143B (application CN202311123764.5A)
Authority
CN
China
Prior art keywords
embryo
image
classification
module
edge
Prior art date
Legal status
Active
Application number
CN202311123764.5A
Other languages
Chinese (zh)
Other versions
CN116844143A (en)
Inventor
Peng Songlin
Dai Wen
Tan Wei
Chen Changsheng
Xiong Xiang
Yun Xin
Current Assignee
Wuhan Mutual United Technology Co ltd
Original Assignee
Wuhan Mutual United Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Wuhan Mutual United Technology Co ltd
Priority to CN202311123764.5A
Publication of CN116844143A
Application granted
Publication of CN116844143B
Legal status: Active

Classifications

    • G06V 20/60: Scenes; scene-specific elements: type of objects
    • G06N 3/0455: Neural network architectures: auto-encoder networks; encoder-decoder networks
    • G06N 3/0464: Neural network architectures: convolutional networks [CNN, ConvNet]
    • G06N 3/08: Neural networks: learning methods
    • G06V 10/26: Image preprocessing: segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06V 10/30: Image preprocessing: noise filtering
    • G06V 10/34: Image preprocessing: smoothing or thinning of the pattern; morphological operations; skeletonisation
    • G06V 10/765: Pattern recognition or machine learning: classification using rules for classification or partitioning the feature space
    • G06V 10/806: Pattern recognition or machine learning: fusion of extracted features
    • G06V 10/82: Pattern recognition or machine learning: using neural networks
    • G06V 10/993: Detection or correction of errors: evaluation of the quality of the acquired pattern
    • G06V 2201/03: Indexing scheme: recognition of patterns in medical or anatomical images
    • Y02P 90/30: Climate change mitigation in the production of goods: computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an embryo development stage prediction and quality assessment system based on edge enhancement. A preprocessing module removes noise from the embryo image to be evaluated, enhances contour information, and stores an image edge map. An embryo body segmentation module performs a two-class segmentation of the image to be evaluated, separating one class comprising the blastula cavity, inner cell mass, trophoblast and zona pellucida from a second class comprising all other regions. An embryo two-classification module classifies the image into stage 1-2 or stage 3-5 based on the segmentation result. An embryo fine classification module, based on the two-classification result and the image edge map, refines stage 1-2 into stage 1 or stage 2 and stage 3-5 into stage 3, 4 or 5. An embryo quality assessment module performs quality grading on stage 3-5 images based on the fine classification result.

Description

Embryo development stage prediction and quality assessment system based on edge enhancement
Technical Field
The invention relates to the technical field of embryo quality assessment, in particular to an embryo development stage prediction and quality assessment system based on edge enhancement.
Background
Embryo development quality directly affects the pregnancy rate. Embryologists judge embryo quality in two main ways, embryo morphology and genetics. Judging embryo quality by genetic means requires demanding experimental conditions, whereas judging embryos from morphological information is a simple, rapid and effective method. At present, most embryologists judge embryo quality from the morphological characteristics of blastocyst-stage embryos and screen out high-quality embryos for transplantation. Among these morphological characteristics, the blastula cavity, inner cell mass and trophoblast are extremely important factors when a doctor scores an embryo, so building a computer vision model that helps doctors rapidly and accurately predict these structures is a highly significant research direction. However, accurate identification and quality assessment of embryo characteristics using machine learning faces the following problems:
(1) In the early stage of development the embryo body region is small and occupies too small a proportion of the original picture; at the same time, fragments and other regions irrelevant to the embryo body may be present in the image and interfere with the information interaction during development stage prediction. Moreover, predicting the development stage in a single step is a 5-class problem, and the more classes there are, the harder the decision boundary is to determine accurately and the higher the probability of misclassification. Designing an efficient classification framework is therefore worth investigating.
(2) Determining the development stage and quality grade from individual structures relies on the masks produced by per-structure segmentation tasks, and each such segmentation task is itself a significant challenge; typically a local structure accounts for only a small fraction of the image, which makes feature interaction on it even harder. Accurate prediction and quality assessment of development stages essentially depend on the detailed information mined from the image, and for embryonic cells the edge information is the most pronounced detail feature. An effective way of mining edge information is therefore needed to improve classification and grading accuracy.
Disclosure of Invention
The invention provides an embryo development stage prediction and quality assessment system based on edge enhancement, which aims to solve the technical problem of inaccurate embryo quality assessment.
In order to solve the above technical problem, the invention provides an embryo development stage prediction and quality assessment system based on edge enhancement, which comprises a preprocessing module, an embryo body segmentation module, an embryo two-classification module, an embryo fine classification module and an embryo quality assessment module;
the preprocessing module is used for removing noise of the embryo image to be evaluated, enhancing contour information and storing an image edge map of the embryo image to be evaluated;
the embryo body segmentation module is used for performing a two-class segmentation of the embryo image to be evaluated, dividing it into one class comprising the blastula cavity, inner cell mass, trophoblast and zona pellucida and another class comprising all other regions;
the embryo two-classification module is used for classifying the embryo image to be evaluated into stage 1-2 or stage 3-5 based on the segmentation result of the embryo body segmentation module;
the embryo fine classification module is used for performing feature interaction through a self-attention mechanism based on the classification result of the embryo two-classification module and the image edge map, performing edge-weighted feature fusion, classifying stage 1-2 into stage 1 or stage 2, and classifying stage 3-5 into stage 3, 4 or 5; the edge-weighted feature fusion is expressed as:

H_{i,j} = I_{i,j} · E_{i,j}

where I represents the embryo image to be evaluated, E represents the image edge map, and i and j represent the corresponding pixel coordinates in the image;
and the embryo quality assessment module is used for performing quality grading identification on stage 3-5 images based on the classification result of the embryo fine classification module and outputting the corresponding quality grades of the inner cell mass and the trophoblast.
Preferably, the embryo body segmentation module adopts Res-U-Net as the backbone network, predicts the class of each pixel, compares it with the ground-truth label, and constrains training with an average cross entropy loss to obtain the segmentation result.
Preferably, the embryo body segmentation module performs the two-class segmentation by the following steps: progressively downsampling the image with a 5-layer encoder to extract rich high-level features; progressively upsampling with 4 decoders to restore the original image size, strengthening the mapping of body features through skip connections, and obtaining the final embryo body segmentation result through nonlinear activation.
Preferably, the average cross entropy loss is expressed as:

L_seg = -(1/N) Σ_{j=1}^{N} Σ_{k=1}^{K} y_{i,j,k} log ŷ_{i,j,k}

where K = 2 is the number of pixel classes, y_{i,j} is the true class of pixel j in the i-th sample, ŷ_{i,j} is the class probability distribution the model predicts for that pixel, and N is the total number of pixels in the image.
Preferably, the embryo two-classification module uses ResNet-50 as the backbone network and, combined with a self-attention mechanism, performs masked self-attention information interaction and aggregation on the embryo body region, then regresses the classification result of the image from the extracted features.
Preferably, the embryo two-classification module performs classification by the following steps: inputting the segmented embryo image to be evaluated into the embryo two-classification module; raising the feature dimension pixel by pixel through a multi-layer perceptron; mining information between graphs with an attention mechanism; performing feature interaction with several residual blocks to obtain fully interacted information within the segmented region; in the other branch, extracting features from the image to be evaluated with convolution and several residual blocks followed by one average pooling; splicing the mask of the segmented region onto the corresponding convolution features of the image to be evaluated at the positions given by the mask, keeping one copy of the convolution features without mask information, and regressing the classification result of the image after feature fusion.
Preferably, the embryo fine classification module adopts focused graph convolution and a ResNet-50-based backbone network, and the convolutions in the feature extractor use edge-enhanced convolution kernels.
Preferably, the embryo fine classification module performs classification by the following steps: obtaining the mask retained by the embryo body segmentation module; preprocessing the embryo image to be evaluated and the edge map with the mask, retaining the pixel values of the body part and zero-filling the remaining pixels; passing the preprocessed embryo image and edge map through 2 residual blocks and a self-attention mechanism each, then performing edge-weighted feature fusion; after fusion, extracting high-level semantic information in which the edge information is fully fused and interacted, sequentially through a 3-layer residual block and an average pooling layer, completing the fine classification of the embryo.
Preferably, when the classification result of the embryo two-classification module is stage 1-2, the embryo fine classification module is constrained with a two-class cross entropy loss function; when the classification result of the embryo two-classification module is stage 3-5, the embryo fine classification module is constrained with a three-class cross entropy loss function.
Preferably, the method for feature interaction and fusion through the self-attention mechanism is as follows: the query, key and value vectors are computed as

q_i = W_1^(l) f_i + b_1,  k_j = W_2^(l) f_j + b_2,  v_j = W_3^(l) f_j + b_3

where i and j denote the corresponding pixel coordinates in the image, f_i and f_j denote the features of the corresponding pixels, W_1, W_2, W_3 are learnable weight matrices, q_i is the query vector, k_j and v_j are the key and value vectors, and b_1, b_2, b_3 are nonlinear biases; the self-attention weight is computed as

α_ij = softmax_j( q_i^T k_j / √d_k )

where d_k is the dimension of the key vector and softmax_j(·) denotes the softmax calculation over the j column vectors;

the features after self-attention are updated as:

f_i^(l+1) = MLP( [ f_i^(l) ‖ Σ_{j∈ε(i)} α_ij v_j ] )

where [·‖·] denotes the feature splicing operation, ε denotes the edges when constructing the graph convolution, l denotes the number of self-attention layers, and MLP denotes a multi-layer perceptron.
The beneficial effects of the invention include at least the following:
1) Interference from regions of the embryo image irrelevant to the embryo body is eliminated, improving the precision of coarse classification;
2) The strategy of segmenting the body first and then classifying and grading enables multi-task learning that mines the edge detail information of the embryo body, eliminates information interaction from non-body regions, and increases grading accuracy;
3) The different tasks do not rely on masks of per-structure segmentation results, attending to and relying only on the overall edge information, and an effective edge-weighted convolution kernel and edge-enhanced feature fusion structure is designed, improving classification and grading precision;
4) The invention completes the labeling of the blastula cavity, inner cell mass, trophoblast and zona pellucida, and assigns accurate label information to the embryo sample data according to the labeling, so as to construct a large, accurately labeled embryo sample dataset.
Drawings
FIG. 1 is a block diagram of a system for edge-enhanced embryo developmental stage prediction and quality assessment in accordance with the present invention;
FIG. 2 is a schematic diagram of an embryo body segmentation network structure;
FIG. 3 is an image of embryos at different developmental stages;
FIG. 4 is a network architecture diagram of a developmental stage embryo two classification module;
FIG. 5 is a network architecture diagram of a coupled developmental stage classification and quality assessment module;
FIG. 6 is a schematic illustration of embryo body segmentation;
FIG. 7 is a graph of the results of a rough prediction of embryo developmental stages;
FIG. 8 is a graph of the prediction results of fine prediction and quality assessment of embryo development stage.
Detailed Description
The following clearly and completely describes the technical solutions in the embodiments of the present invention with reference to the accompanying drawings. The described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on the embodiments of the present invention without inventive effort shall fall within the protection scope of the present invention.
As shown in fig. 1, an embodiment of the invention provides an embryo development stage prediction and quality assessment system based on edge enhancement, comprising a preprocessing module, an embryo body segmentation module, an embryo two-classification module, an embryo fine classification module and an embryo quality assessment module;
the preprocessing module is used for removing noise from the embryo image to be evaluated, enhancing contour information, and storing the Canny-detected image edge map so as to guide edge enhancement in the deep learning network;
the embryo body segmentation module is used for performing a two-class segmentation of the embryo image to be evaluated, dividing it into one class comprising the blastula cavity, inner cell mass, trophoblast and zona pellucida and another class comprising all other regions;
the embryo two-classification module is used for classifying the embryo image to be evaluated into stage 1-2 or stage 3-5 based on the segmentation result of the embryo body segmentation module;
the embryo fine classification module is used for classifying stage 1-2 into stage 1 or stage 2 and stage 3-5 into stage 3, 4 or 5 based on the classification result of the embryo two-classification module and the image edge map;
and the embryo quality assessment module is used for performing quality grading identification on stage 3-5 images based on the classification result of the embryo fine classification module and outputting the corresponding quality grades of the inner cell mass and the trophoblast.
The following describes the construction process of each module of the embryo development stage prediction and quality assessment system.
(I) Constructing an accurately labeled embryo sample dataset:
the oocyte may enter into blastula stage formally according to culture condition and embryo quality on day 5 after fertilization, blastula stage embryo images of D5-D6 including blastula cavity, inner cell mass, trophoblast, zona pellucida, fragments, etc. are collected, and labeling work of embryo images is completed by professional doctors under guidance of multiple embryologists. The studies prove that the method is effective and widely applied by Gardner evaluation, and mainly observes the expansion degree of blastula, the inner cell mass and the development state of trophoblast for embryo identification development stage and quality grade. The developmental stage of embryonic cells can be represented by stages 1-6, and the quality classification of the inner cell mass and trophoblasts is assessed for fully expanded blasts by A, B, C grades, respectively. First, pixel-level labeling is performed on embryo images by using LabelImg software, three types of embryo images are mainly classified into a blastula cavity, an inner cell mass and a trophoblast, specifically, outlines of all types are labeled by using polygons, and a label for representing the type is assigned to each outline area. And secondly, marking the accurate development stage and quality rating for each embryo image by randomly crossing the label images of the divided different areas by different doctors, determining that the development stage and quality rating of the final embryo image are consistent by the label at most, constructing an embryo sample data set with the accurate label, and storing the embryo sample data set with the accurate label into the image data set.
(II) Image preprocessing:
Because the collected embryo images contain regions irrelevant to the embryo body as well as noise, deep-learning-based embryo stage recognition and quality assessment results may be biased. It is therefore necessary to preprocess the input embryo image. Specifically, the original input image is first edge-enhanced and denoised using conventional morphological image denoising methods such as dilation and opening/closing operations. Then different regions are divided according to a statistical analysis of the gradient information of the image and mask information is generated for them; at the same time, edge detection such as the Canny operator is used to extract the contour information of the embryo image, which is stored as the contour map of the corresponding image.
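A minimal sketch of this preprocessing step, assuming an OpenCV implementation; the kernel size and Canny thresholds are illustrative assumptions rather than values fixed by this description.

```python
import cv2

def preprocess_embryo_image(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, (500, 500))  # uniform size used in the embodiment below

    # Morphological denoising: opening removes small bright noise points,
    # closing fills small dark holes (dilation/erosion are their building blocks).
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    img = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)
    img = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)

    # Canny edge map, stored alongside the image to guide edge enhancement
    # in the downstream network (the 50/150 thresholds are assumptions).
    edge_map = cv2.Canny(img, 50, 150)
    return img, edge_map
```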
(III) Embryo body segmentation:
In the early stage of development the embryo body region is small and occupies too small a proportion of the original image; meanwhile, fragments and other regions irrelevant to the embryo body may exist in the image and interfere with the information interaction during classification. It is therefore necessary to segment the embryo body before coarse classification. In addition, embryo body segmentation is a 2-class segmentation problem; 2-class segmentation is less difficult than 5-class segmentation, reduces the interference caused by noise, generalizes better, and improves the reliability of the model.
The embryo body segmentation network adopts Res-U-Net as the backbone, as shown in fig. 2. The input is the original image; a 5-layer encoder progressively downsamples the image and extracts rich high-level features. Then 4 decoders progressively upsample back to the original image size, skip connections strengthen the mapping of body features, and the final embryo body segmentation result is obtained through nonlinear activation. The embryo body comprises the blastula cavity, inner cell mass, trophoblast and zona pellucida, whose labels are merged into one class, while all other region labels are merged into a second class; a two-class segmentation is performed, the class of each pixel is predicted and compared with the ground-truth label, and the average cross entropy loss of the whole image is computed as:

L_seg = -(1/N) Σ_{j=1}^{N} Σ_{k=1}^{K} y_{i,j,k} log ŷ_{i,j,k}

where K = 2 is the number of pixel classes, y_{i,j} is the true class of pixel j in the i-th sample, ŷ_{i,j} is the class probability distribution the model predicts for that pixel, and N is the total number of pixels in the image.
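A condensed sketch of such a two-class Res-U-Net-style segmenter, assuming PyTorch; the channel widths, residual-block design and toy 512×512 input are illustrative assumptions, while the 5-level encoder, 4-level decoder, skip connections and pixel-wise average cross entropy follow the description above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResBlock(nn.Module):
    def __init__(self, c_in, c_out):
        super().__init__()
        self.conv1 = nn.Conv2d(c_in, c_out, 3, padding=1)
        self.conv2 = nn.Conv2d(c_out, c_out, 3, padding=1)
        self.skip = nn.Conv2d(c_in, c_out, 1) if c_in != c_out else nn.Identity()

    def forward(self, x):
        h = F.relu(self.conv1(x))
        return F.relu(self.conv2(h) + self.skip(x))

class ResUNet(nn.Module):
    def __init__(self, widths=(32, 64, 128, 256, 512)):
        super().__init__()
        self.enc = nn.ModuleList(
            ResBlock(c_in, c_out)
            for c_in, c_out in zip((1,) + widths[:-1], widths))   # 5-level encoder
        self.dec = nn.ModuleList(
            ResBlock(widths[i] + widths[i - 1], widths[i - 1])
            for i in range(len(widths) - 1, 0, -1))               # 4-level decoder
        self.head = nn.Conv2d(widths[0], 2, 1)   # two classes: embryo body / other

    def forward(self, x):
        skips = []
        for i, block in enumerate(self.enc):
            x = block(x)
            if i < len(self.enc) - 1:
                skips.append(x)              # skip-connection source
                x = F.max_pool2d(x, 2)       # progressive downsampling
        for block in self.dec:
            x = F.interpolate(x, scale_factor=2, mode="bilinear",
                              align_corners=False)   # progressive upsampling
            x = block(torch.cat([x, skips.pop()], dim=1))
        return self.head(x)                  # per-pixel 2-class logits

net = ResUNet()
logits = net(torch.randn(1, 1, 512, 512))        # grayscale input
target = torch.randint(0, 2, (1, 512, 512))      # per-pixel body/other labels
loss = F.cross_entropy(logits, target)           # averaged pixel-wise CE, K = 2
```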
(IV) Development stage coarse classification module:
The accuracy of embryo development stage classification depends on mining both high-level global semantic features and low-level detail features. First, as shown in fig. 3, the distinction between stage 1-2 and stage 3-5 embryo images tends to be macroscopic, reflecting an image-level classification problem whose decision ultimately depends on global features. Second, adjacent development stages are usually distinguished by details. Preserving low-level detail features and high-level global semantic features at the same time in a single-step classification, however, is a very challenging problem in machine learning and computer vision. Therefore a two-stage decoupled coarse-then-fine framework is adopted: global features are first used for a two-class split, and detail features are then used to classify adjacent stages, improving the classification precision for adjacent stages.
The development stage coarse classification module takes ResNet-50 as the backbone network and, combined with a self-attention mechanism, performs masked self-attention information interaction and aggregation on the embryo body region. Within the segmented embryo body region, a multi-layer perceptron progressively raises the feature dimension, the per-pixel features being denoted f_i; the self-attention mechanism then performs feature interaction and fusion, which can be expressed as:

q_i = W_1^(l) f_i + b_1,  k_j = W_2^(l) f_j + b_2,  v_j = W_3^(l) f_j + b_3

where i and j denote the corresponding pixel coordinates in the image, f_i and f_j denote the features of the corresponding pixels, W_1, W_2, W_3 are learnable weight matrices, q_i is the query vector, k_j and v_j are the key and value vectors, and b_1, b_2, b_3 are nonlinear biases. The self-attention weight is then

α_ij = softmax_j( q_i^T k_j / √d_k )

where d_k is the dimension of the key vector. The features after self-attention are updated as

f_i^(l+1) = MLP( [ f_i^(l) ‖ Σ_{j∈ε(i)} α_ij v_j ] )

where [·‖·] denotes the feature splicing operation, ε denotes the edges when constructing the graph convolution, and l denotes the number of self-attention layers; feature interaction guided by the focused edge information, and the information interaction and mining of edge features, are strengthened by the attention mechanism. An edge-weighted feature fusion module then multiplies the corresponding elements of the original image I and the edge map E:

H_{i,j} = I_{i,j} · E_{i,j}

where i and j denote the corresponding pixel coordinates in the image.
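A minimal sketch of one such self-attention interaction layer and the element-wise edge-weighted fusion, assuming PyTorch; treating the masked body pixels as a flat sequence of feature vectors, and reusing one layer for both branches, are simplifying assumptions.

```python
import torch
import torch.nn as nn

class SelfAttentionLayer(nn.Module):
    """One q/k/v self-attention layer over pixel features f of shape (B, N, dim)."""
    def __init__(self, dim):
        super().__init__()
        self.w_q = nn.Linear(dim, dim)   # q_i = W1 f_i + b1
        self.w_k = nn.Linear(dim, dim)   # k_j = W2 f_j + b2
        self.w_v = nn.Linear(dim, dim)   # v_j = W3 f_j + b3
        self.mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())

    def forward(self, f):
        q, k, v = self.w_q(f), self.w_k(f), self.w_v(f)
        alpha = torch.softmax(q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5,
                              dim=-1)      # alpha_ij = softmax_j(q_i^T k_j / sqrt(d_k))
        agg = alpha @ v                    # sum_j alpha_ij v_j
        return self.mlp(torch.cat([f, agg], dim=-1))   # MLP([f_i || agg_i])

def edge_weighted_fusion(f_img, f_edge):
    """Element-wise product of image-branch and edge-branch features."""
    return f_img * f_edge

layer = SelfAttentionLayer(dim=128)
f_img = layer(torch.randn(1, 1000, 128))    # features of masked body pixels
f_edge = layer(torch.randn(1, 1000, 128))   # features of the edge-map branch
fused = edge_weighted_fusion(f_img, f_edge)
```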
Specifically, as shown in fig. 4, the development stage coarse classification network takes as input the irregular image of the segmented body region, raises its dimension to 128 through a multi-layer perceptron, performs information mining between graphs with the attention mechanism, and then performs feature interaction with several residual blocks, obtaining fully interacted information of the body region. The other branch extracts features from the original image with convolution and several residual blocks, followed by one average pooling. The mask of the body region is spliced onto the corresponding original-image convolution features at the positions given by the mask, one copy of the convolution features without mask information is kept, and the coarse classification result of the image is regressed after feature fusion, as sketched below.
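A sketch of this two-branch layout, assuming PyTorch and a recent torchvision; fusing the branches by global pooling plus concatenation is a simplification of the mask-position splicing described above.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class CoarseClassifier(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(1, dim), nn.ReLU())       # pixel-wise rise to dim 128
        self.attn = nn.MultiheadAttention(dim, num_heads=1, batch_first=True)
        backbone = resnet50(weights="IMAGENET1K_V1")                 # ImageNet pre-training
        self.stem = nn.Sequential(*list(backbone.children())[:-2])   # conv + residual stages
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(2048 + dim, 2)   # fused features -> stage 1-2 vs stage 3-5

    def forward(self, image, body_pixels):
        # Branch 1: masked body pixels -> MLP -> attention-based interaction.
        f = self.mlp(body_pixels)                     # (B, N, 128)
        f, _ = self.attn(f, f, f)                     # information mining between pixels
        f = f.mean(dim=1)                             # aggregated region feature
        # Branch 2: convolution + residual blocks on the full image, then pooling.
        g = self.pool(self.stem(image)).flatten(1)    # (B, 2048)
        return self.fc(torch.cat([g, f], dim=1))      # regress the 2-class logits

model = CoarseClassifier()
logits = model(torch.randn(2, 3, 224, 224),   # image branch
               torch.randn(2, 1000, 1))       # gray values of segmented body pixels
```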
(V) Coupled development stage classification and quality assessment module:
Development stage classification essentially depends on the detail information mined from the image, and for embryonic cells the edge information is the most pronounced detail feature. Edge-sensitive convolution is a common approach; the bilateral filter in deep edge-aware filters (DEAF) smooths the image by combining spatial distance with pixel-value similarity, reducing the influence of noise while retaining the detail information of edges. Conventional convolution operates on a grid of pixels and ignores the relations and connections between pixels when processing image data. In an image, however, the connections and structure between pixels are very important, especially in regions with structural information such as edges and textures. Edge-conditioned convolution (ECC) better captures edge features and structural information by performing the convolution on the edges of a graph built over the image. Therefore, image edge enhancement and edge graph convolution are used to fully mine the edge features of the image for embryo development stage classification, which helps improve classification accuracy. Moreover, the edge graph convolution uses edge information to guide the information transfer in the graph; it does not depend on the mask of each per-structure segmentation result but attends only to the overall edge information. Each per-structure segmentation task is itself a significant challenge, and a local structure usually accounts for only a small part of the image, making feature interaction on it even harder. Retaining only the embryo body structure and then mining the edge information inside it therefore preserves richer feature interaction information and improves the accuracy of classification and grading.
According to the coarse classification result, the stage 1-2 images are finely classified and output as stage 1 or stage 2; the stage 3-5 images are finely classified and quality-graded, and output as stage 3, 4 or 5 together with the corresponding quality grade A, B or C of the inner cell mass and trophoblast. Taking stage 3-5 as an example, the fine classification and grading tasks are performed simultaneously; the network structure is shown in fig. 5, using a shared-weight common feature extractor, after which different constraints are applied to different branches to obtain accurate classification and grading results.
The coupled network builds its backbone on ResNet-50 as in fig. 5, where the convolutions in the feature extractor use edge-enhanced convolution kernels to strengthen the gray-level variation at edges, i.e. to detect the high-frequency components of edges. The network input consists of 3 items: the original cell image, the detected edge map, and the mask retained from the embryo body segmentation. The original image and edge map are preprocessed with the mask, the pixel values of the body part are retained, and the remaining pixels are zero-filled. Each preprocessed image passes through 2 residual blocks and a self-attention mechanism; taking the preprocessed original image as an example, the features extracted by the residual blocks are denoted f_i, and the self-attention mechanism performs feature interaction and fusion as:

q_i = W_1^(l) f_i + b_1,  k_j = W_2^(l) f_j + b_2,  v_j = W_3^(l) f_j + b_3

where i and j denote the corresponding pixel coordinates in the image, f_i and f_j denote the features of the corresponding pixels, W_1, W_2, W_3 are learnable weight matrices, q_i is the query vector, k_j and v_j are the key and value vectors, and b_1, b_2, b_3 are nonlinear biases. The self-attention weight is

α_ij = softmax_j( q_i^T k_j / √d_k )

where d_k is the dimension of the key vector and softmax_j(·) denotes the softmax calculation over the j column vectors. The features after self-attention are updated as

f_i^(l+1) = MLP( [ f_i^(l) ‖ Σ_{j∈ε(i)} α_ij v_j ] )

where [·‖·] denotes the feature splicing operation, ε denotes the edges when constructing the graph convolution, and l denotes the number of self-attention layers. Feature interaction guided by the focused edge information deepens the information interaction and mining of edge features through the attention mechanism; an edge-weighted feature fusion module then multiplies the corresponding elements of the original-image branch I and the edge-map branch E:

H_{i,j} = I_{i,j} · E_{i,j}

where i and j denote the corresponding pixel coordinates in the image. After fusion, high-level semantic information in which the edge information has been fully fused and interacted is extracted sequentially through a 3-layer residual block and an average pooling layer. Three task branches are then led out: development stage classification, and the quality grades of the inner cell mass and the trophoblast. For stage 1-2 images the network directly selects the single classification branch.
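A one-function sketch of the mask preprocessing step described above, assuming PyTorch tensors and a binary body mask.

```python
import torch

def mask_preprocess(image, edge_map, body_mask):
    """Retain the pixel values of the body part; zero-fill all other pixels.

    image, edge_map: (B, 1, H, W) tensors; body_mask: (B, 1, H, W) in {0, 1}.
    """
    return image * body_mask, edge_map * body_mask
```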
The stage 1-2 image loss uses a two-class cross entropy. The stage 3-5 image loss weights the classification loss and the grading losses, expressed as:

L = λ_1 L_cls + λ_2 L_TE + λ_3 L_ICM

where L_cls is the stage 3-5 classification loss, L_TE the trophoblast grading loss, and L_ICM the inner cell mass grading loss, all three being three-class cross entropy losses, and λ_1, λ_2, λ_3 the task weights.
The specific implementation process is as follows:
1. Image data preprocessing
The collected dataset is divided into training, validation and test sets at a ratio of 6:2:2. Noise points in the images are removed and the image edge information is enhanced using morphological methods such as dilation/erosion and opening/closing operations; the enhanced images are uniformly scaled to 500×500, and the Canny edge-detection edge map is computed and retained.
2. Model training stage
The network model of the image segmentation module uses Res-U-Net, initialized with a model pre-trained on ImageNet; the learning rate during training is 1e-4, the weight decay is 5e-4, and the whole training set is iterated 200 times. The pixel-level labels of the inner cell mass, trophoblast, blastula cavity and zona pellucida in the embryo annotations are merged into one class representing the embryo body, the other regions are merged into a second class, a 2-class segmentation is performed, and the body part of the embryo is extracted for classification. The loss function of the training process is the 2-class pixel-wise average cross entropy over the whole image:

L_seg = -(1/N) Σ_{j=1}^{N} Σ_{k=1}^{K} y_{i,j,k} log ŷ_{i,j,k}

where K = 2 is the number of pixel classes, y_{i,j} is the true class of pixel j in the i-th sample, ŷ_{i,j} is the class probability distribution the model predicts for that pixel, and N is the total number of pixels in the image.
The coarse classification model of the development stage is trained as shown in fig. 4, initialized with a model pre-trained on ImageNet; the learning rate during training is 1e-4, the weight decay is 5e-4, and the whole training set is iterated 200 times. The embryo development stage labels are merged: stages 1-2 become class 0 and stages 3-5 become class 1, and a two-class training yields the coarse classification result. With model prediction p and sample label y ∈ {0, 1}, the training loss is the two-class cross entropy:

L = -[ y log p + (1 - y) log(1 - p) ]
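A sketch of this training configuration, assuming PyTorch; the optimizer choice (Adam), the single-logit head and the `model`/`train_loader` placeholders are assumptions, while the learning rate, weight decay, epoch count and two-class cross entropy follow the description.

```python
import torch
import torch.nn.functional as F

# `model` maps an image batch to one logit per image; `train_loader` yields
# (image, label) pairs with label 0 for stage 1-2 and 1 for stage 3-5.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=5e-4)

for epoch in range(200):                   # iterate the whole training set 200 times
    for image, label in train_loader:
        logit = model(image).squeeze(1)    # prediction p expressed as a logit
        loss = F.binary_cross_entropy_with_logits(logit, label.float())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```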
the network model trained by the coupled development stage classification and development quality classification module is shown in fig. 5, a segmentation mask is selected as an input for an inner cell mass, a trophoblast region and an original image, the same pre-training model on an ImageNet is used for initializing parameter setting, the parameter setting learning rate in the training process is 1e-4, the weight attenuation rate is 5e-4, and all training sets are iterated 200 times in total. The phase 1-2 image loss function uses two classes of cross entropy loss. And the 3-5 phase image loss function weights the classification loss and the grading loss, expressed as:
representing stage 3-5 classification loss, < ->Indicating a loss of trophoblast classification,/->The inner cell mass classification loss is three-classification cross entropy loss.
3. Model test stage
The embryo images in the test set are morphologically processed and then fed into the trained network models of the embryo body segmentation module, the development stage coarse classification module, and the coupled development stage fine classification and development quality grading module for testing, yielding the coarse embryo development stage predictions, embryo body segmentation, fine embryo development stage predictions, and embryo development quality grading results; the embryo body segmentation results are shown in fig. 6, the coarse development stage predictions in fig. 7, and the fine development stage predictions and grading results in fig. 8.
The invention also provides a computer readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the edge-enhancement-based embryo development stage prediction and quality assessment system.
The foregoing embodiments may be combined in any way; for brevity, not all possible combinations of the features of the foregoing embodiments are described. Nevertheless, as long as a combination of these technical features contains no contradiction, it should be considered within the scope of this specification. The embodiments described above are only preferred embodiments of the invention and should not be construed as limiting its scope.
It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the invention, which are all within the scope of the invention. Accordingly, the scope of protection of the present invention is to be determined by the appended claims.

Claims (8)

1. An embryo development stage prediction and quality assessment system based on edge enhancement, characterized in that: the system comprises a preprocessing module, an embryo body segmentation module, an embryo two-classification module, an embryo fine classification module and an embryo quality assessment module;
the preprocessing module is used for removing noise of the embryo image to be evaluated, enhancing contour information and storing an image edge map of the embryo image to be evaluated;
the embryo body segmentation module is used for performing a two-class segmentation of the embryo image to be evaluated, dividing it into one class comprising the blastula cavity, inner cell mass, trophoblast and zona pellucida and another class comprising all other regions;
the embryo two-classification module is used for classifying the embryo image to be evaluated into stage 1-2 or stage 3-5 based on the segmentation result of the embryo body segmentation module;
the embryo fine classification module is used for performing feature interaction through a self-attention mechanism based on the classification result of the embryo two-classification module and the image edge map, performing edge-weighted feature fusion, classifying stage 1-2 into stage 1 or stage 2, and classifying stage 3-5 into stage 3, 4 or 5; the edge-weighted feature fusion is expressed as:

H = YT * BY;

where YT denotes the features of the embryo image to be evaluated obtained through the self-attention mechanism, BY denotes the features of the image edge map obtained through the self-attention mechanism, and * denotes element-wise multiplication;

the embryo fine classification module performs classification by the following steps: obtaining the mask retained by the embryo body segmentation module; preprocessing the embryo image to be evaluated and the edge map with the mask, retaining the pixel values of the body part and zero-filling the remaining pixels; passing the preprocessed embryo image and edge map through 2 residual blocks and a self-attention mechanism each, then performing edge-weighted feature fusion; after fusion, extracting high-level semantic information in which the edge information is fully fused and interacted, sequentially through a 3-layer residual block and an average pooling layer, completing the fine classification of the embryo;

the feature interaction and fusion through the self-attention mechanism is performed as follows: the self-attention weight α_ij is computed as

α_ij = softmax_j( q_i^T k_j / √d_k )

where d_k denotes the dimension of the key vector k_j;

q_i = W_1^(l) f_i + b_1,  k_j = W_2^(l) f_j + b_2,  v_j = W_3^(l) f_j + b_3

where i and j denote corresponding pixels in the image, f_i and f_j denote the features of the corresponding pixels i and j, W_1, W_2, W_3 are learnable weight matrices, q_i denotes the query vector, k_j and v_j denote the key and value vectors, b_1, b_2, b_3 denote nonlinear biases, and softmax(·) denotes the softmax calculation;

the features after self-attention are updated as:

f_i^(l+1) = MLP( [ f_i^(l) ‖ Σ_{j∈ε(i)} α_ij v_j ] )

where [·‖·] denotes the feature splicing operation, ε denotes the edges when constructing the graph convolution, l = 6 denotes the number of self-attention layers, and MLP denotes a multi-layer perceptron;
and the embryo quality assessment module is used for performing quality grading identification on stage 3-5 images based on the classification result of the embryo fine classification module and outputting the corresponding quality grades of the inner cell mass and the trophoblast.
2. An edge-enhanced embryo developmental stage prediction and quality assessment system in accordance with claim 1, wherein: the embryo body segmentation module adopts Res-U-Net as the backbone network, predicts the class of each pixel, compares it with the ground-truth label, and constrains training with an average cross entropy loss to obtain the segmentation result.
3. An edge-enhanced embryo developmental stage prediction and quality assessment system in accordance with claim 2, wherein: the embryo body segmentation module performs the two-class segmentation by the following steps: progressively downsampling the image with a 5-layer encoder to extract rich high-level features; progressively upsampling with 4 decoders to restore the original image size, strengthening the mapping of body features through skip connections, and obtaining the final embryo body segmentation result through nonlinear activation.
4. An edge-enhanced embryo developmental stage prediction and quality assessment system in accordance with claim 2, wherein: the average cross entropy loss is expressed as:

L_seg = -(1/N) Σ_{j=1}^{N} Σ_{k=1}^{K} y_{i,j,k} log ŷ_{i,j,k}

where K = 2 is the number of pixel classes, y_{i,j} is the true class of pixel j in the i-th sample, ŷ_{i,j} is the class probability distribution the model predicts for that pixel, and N is the total number of pixels in the image.
5. An edge-enhanced embryo developmental stage prediction and quality assessment system in accordance with claim 1, wherein: the embryo two-classification module uses ResNet-50 as the backbone network and, combined with a self-attention mechanism, performs masked self-attention information interaction and aggregation on the embryo body region, then regresses the classification result of the image from the extracted features.
6. An edge-enhanced embryo developmental stage prediction and quality assessment system in accordance with claim 5, wherein: the embryo two-classification module performs classification by the following steps: inputting the segmented embryo image to be evaluated into the embryo two-classification module; raising the feature dimension pixel by pixel through a multi-layer perceptron; mining information between graphs with an attention mechanism; performing feature interaction with several residual blocks to obtain fully interacted information within the segmented region; in the other branch, extracting features from the image to be evaluated with convolution and several residual blocks followed by one average pooling; splicing the mask of the segmented region onto the corresponding convolution features of the image to be evaluated at the positions given by the mask, keeping one copy of the convolution features without mask information, and regressing the classification result of the image after feature fusion.
7. An edge-enhanced embryo developmental stage prediction and quality assessment system in accordance with claim 1, wherein: the embryo fine classification module adopts focused graph convolution and a ResNet-50-based backbone network, and the convolutions in the feature extractor use edge-enhanced convolution kernels.
8. An edge-enhanced embryo developmental stage prediction and quality assessment system in accordance with claim 7, wherein: when the classification result of the embryo two-classification module is stage 1-2, the embryo fine classification module is constrained with a two-class cross entropy loss function; when the classification result of the embryo two-classification module is stage 3-5, the embryo fine classification module is constrained with a three-class cross entropy loss function.
CN202311123764.5A 2023-09-01 2023-09-01 Embryo development stage prediction and quality assessment system based on edge enhancement Active CN116844143B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311123764.5A CN116844143B (en) 2023-09-01 2023-09-01 Embryo development stage prediction and quality assessment system based on edge enhancement

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311123764.5A CN116844143B (en) 2023-09-01 2023-09-01 Embryo development stage prediction and quality assessment system based on edge enhancement

Publications (2)

Publication Number Publication Date
CN116844143A CN116844143A (en) 2023-10-03
CN116844143B (en) 2023-12-05

Family

ID=88165618

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311123764.5A Active CN116844143B (en) 2023-09-01 2023-09-01 Embryo development stage prediction and quality assessment system based on edge enhancement

Country Status (1)

Country Link
CN (1) CN116844143B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117612164B (en) * 2024-01-19 2024-04-30 武汉互创联合科技有限公司 Cell division equilibrium degree detection method based on double edge detection

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107832801A (en) * 2017-11-23 2018-03-23 Cell image classification model building method
CN111539308A (en) * 2020-04-20 2020-08-14 浙江大学 Embryo quality comprehensive evaluation device based on deep learning
DE102020208765A1 (en) * 2020-07-14 2022-01-20 Robert Bosch Gesellschaft mit beschränkter Haftung Image classifier with variable receptive fields in convolutional layers
CN113256668A (en) * 2021-06-13 2021-08-13 中科云尚(南京)智能技术有限公司 Image segmentation method and device
CN114283407A (en) * 2021-12-24 2022-04-05 江苏康尚生物医疗科技有限公司 Self-adaptive automatic leukocyte segmentation and subclass detection method and system
CN114758360A (en) * 2022-04-24 2022-07-15 北京医准智能科技有限公司 Multi-modal image classification model training method and device and electronic equipment
CN114926797A (en) * 2022-05-18 2022-08-19 中国地质大学(武汉) Transformer double-branch road extraction method and device based on edge constraint and feature adaptation
CN115205730A (en) * 2022-06-10 2022-10-18 西安工业大学 Target tracking method combining feature enhancement and template updating
CN115100467A (en) * 2022-06-22 2022-09-23 北京航空航天大学 Pathological full-slice image classification method based on nuclear attention network
CN115409990A (en) * 2022-09-28 2022-11-29 北京医准智能科技有限公司 Medical image segmentation method, device, equipment and storage medium
CN115272342A (en) * 2022-09-29 2022-11-01 深圳大学 Cell differentiation degree evaluation method based on bright field image, storage medium and system
CN115937082A (en) * 2022-09-30 2023-04-07 湘潭大学 Embryo quality intelligent evaluation system and method based on deep learning
CN116310323A (en) * 2023-02-26 2023-06-23 深圳大学 Aircraft target instance segmentation method, system and readable storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Convolutional neural network for cell classification using microscope images of intracellular actin networks; Ronald Wihal Oei et al.; open access; full text *
Rumor detection method based on improved generative adversarial networks; Li Ao; Dan Zhiping; Dong Fangmin; Liu Longwen; Feng Yang; Journal of Chinese Information Processing (09); full text *
Research on feature analysis and automatic recognition methods for cervical cell images; Xu Xuan; China Master's Theses Database (Information Science and Technology); full text *

Also Published As

Publication number Publication date
CN116844143A (en) 2023-10-03

Similar Documents

Publication Publication Date Title
CN107680678B (en) Thyroid ultrasound image nodule diagnosis system based on multi-scale convolution neural network
Deng et al. Vision based pixel-level bridge structural damage detection using a link ASPP network
Vijayakumar et al. Capsule network on font style classification
CN103049763B (en) Context-constraint-based target identification method
CN109711448A (en) Based on the plant image fine grit classification method for differentiating key field and deep learning
JP2018534694A (en) Convolutional neural network with subcategory recognition for object detection
Gao et al. Generative adversarial networks for road crack image segmentation
CN116844143B (en) Embryo development stage prediction and quality assessment system based on edge enhancement
Lopez Droguett et al. Semantic segmentation model for crack images from concrete bridges for mobile devices
CN111553414A (en) In-vehicle lost object detection method based on improved Faster R-CNN
Xing et al. Traffic sign recognition using guided image filtering
Nair et al. An Enhanced Approach for Binarizing and Segmenting Degraded Ayurvedic Medical Prescription.
Prabaharan et al. RETRACTED ARTICLE: An improved convolutional neural network for abnormality detection and segmentation from human sperm images
CN117095180B (en) Embryo development stage prediction and quality assessment method based on stage identification
Zhang et al. Investigation of pavement crack detection based on deep learning method using weakly supervised instance segmentation framework
CN115019133A (en) Method and system for detecting weak target in image based on self-training and label anti-noise
CN115719475A (en) Three-stage trackside equipment fault automatic detection method based on deep learning
CN110008899B (en) Method for extracting and classifying candidate targets of visible light remote sensing image
CN113177554B (en) Thyroid nodule identification and segmentation method, system, storage medium and equipment
Yu et al. SignHRNet: Street-level traffic signs recognition with an attentive semi-anchoring guided high-resolution network
Gooda et al. Automatic detection of road cracks using EfficientNet with residual U-net-based segmentation and YOLOv5-based detection
Yin et al. Road Damage Detection and Classification based on Multi-level Feature Pyramids.
CN116883650A (en) Image-level weak supervision semantic segmentation method based on attention and local stitching
Castillo et al. Object detection in digital documents based on machine learning algorithms
Lin et al. Intelligent identification of pavement cracks based on PSA-Net

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant