CN116091421A - Method for automatic segmentation and area calculation of blastomere images of in-vitro fertilized embryos - Google Patents


Info

Publication number
CN116091421A
Authority
CN
China
Prior art keywords
embryo
blastomere
mask
image
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211645122.7A
Other languages
Chinese (zh)
Inventor
李伟忠
严朝煜
Current Assignee
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date
Filing date
Publication date
Application filed by Sun Yat Sen University
Priority to CN202211645122.7A
Publication of CN116091421A
Legal status: Pending

Classifications

    • G06T 7/0012 - Biomedical image inspection
    • G06N 3/08 - Neural networks; learning methods
    • G06T 5/30 - Erosion or dilatation, e.g. thinning
    • G06T 5/50 - Image enhancement or restoration using more than one image, e.g. averaging, subtraction
    • G06T 5/70
    • G06T 7/11 - Region-based segmentation
    • G06T 7/187 - Segmentation involving region growing, region merging or connected component labelling
    • G06T 7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 2207/20081 - Training; Learning
    • G06T 2207/20084 - Artificial neural networks [ANN]
    • G06T 2207/20221 - Image fusion; Image merging
    • G06T 2207/30024 - Cell structures in vitro; Tissue sections in vitro
    • G06T 2207/30204 - Marker

Abstract

The invention relates to the technical field of embryo light-microscope image processing, and in particular discloses a method for automatic segmentation and area calculation of blastomere images of in-vitro fertilized embryos, comprising the following steps: performing blastomere detection on the embryo light-microscope image with a deep-learning-based object detection model to obtain blastomere candidate boxes; fusing the blastomere candidate boxes to obtain an embryo candidate box; extracting a region of interest based on the embryo candidate box; performing image enhancement on the region of interest; automatically segmenting the embryo light-microscope image with an interactive image segmentation algorithm to obtain blastomere masks and an embryo mask; tracing the blastomeres and the embryo on the embryo light-microscope image through the blastomere masks and the embryo mask respectively; and calculating the areas of the blastomere masks and the embryo mask to obtain the blastomere areas and the embryo area. The method accurately segments the blastomere and embryo shapes, achieves precise edge tracing and area calculation for both, and offers high recognition accuracy.

Description

Method for automatic segmentation and area calculation of blastomere images of in-vitro fertilized embryos
Technical Field
The invention relates to the technical field of embryo light-microscope image processing, and in particular to a method for automatic segmentation and area calculation of blastomere images of in-vitro fertilized embryos.
Background
During embryo transfer, the quality of the embryos and blastomeres in embryo light-microscope images must be evaluated. To reduce the manual workload, researchers have developed a series of intelligent assisted-detection techniques based on deep learning or machine learning. At present, blastomeres in light-microscope images of in-vitro fertilized embryos are detected mainly by ellipse-fitting methods based on machine learning and by object detection methods based on deep learning.
In terms of machine learning:
Conaghan et al., in "Improving embryo selection using a computer-automated time-lapse image analysis test plus day 3 morphology: results from a prospective multicenter trial", propose cell-tracking software that approximates the blastomeres of day-3 embryos with ellipses for assisted diagnosis, improving the specificity of physician diagnosis from 79.5% to 86.6%.
Patil et al., in "Application of Vessel Enhancement Filtering for Automated Classification of Human In-Vitro Fertilized (IVF) Images", propose a machine-learning algorithm to detect the blastomeres of day-2 and day-3 embryos, divided into four steps: boundary extraction, filtering, rounding and validation.
Syulistyo et al., in "Ellipse detection on embryo image using modification of arc Particle Swarm Optimization (ArcPSO) based arc segment", propose a machine-learning algorithm to detect the blastomeres of day-3 embryos, divided into three steps: line-segment extraction, filtering and ellipse finding, with an accuracy of 42% in the multi-blastomere detection task.
The methods proposed by Conaghan et al., Patil et al. and Syulistyo et al. all fit the blastomeres with circles or ellipses. However, blastomeres at the cleavage stage are not all centrally symmetric ellipses in the embryo light-microscope image, and interference from the zona pellucida (the "transparent band"), bubbles, fragments and other material is present, so these methods can only roughly estimate the number of blastomeres and cannot deliver accurate blastomere edges or area calculation. Moreover, these algorithms provide no countermeasures for the interference caused by the different embryo-to-image ratios, contrasts, illumination intensities and signal-to-noise ratios found across devices and across batches of the same device.
In terms of deep learning:
Wang Jianbo et al. propose a method, system, device and storage medium for identifying cells in embryo microscope images, in which the positions and number of blastomeres are detected with minimum bounding boxes; however, the method cannot perform edge segmentation or area statistics on the blastomeres, and when blastomeres overlap heavily the minimum bounding box is no longer strictly tangent to the blastomere.
The model proposed by He et al. in "Machine learning for automated cell segmentation in embryos" requires manual edge tracing for training labels, can only segment embryos with four blastomeres, and its counting accuracy is only 70%, which cannot meet clinical requirements.
Therefore, because of interference from material such as the zona pellucida, bubbles and fragments, the differing embryo-to-image ratios, contrasts, illumination intensities and signal-to-noise ratios across devices and across batches of the same device, and highly overlapping blastomeres, existing recognition and detection methods cannot achieve accurate edge tracing and area calculation of the blastomeres; their accuracy is low and cannot meet clinical high-precision requirements.
Disclosure of Invention
In view of the current state of the art, the invention aims to provide a method for automatic segmentation and area calculation of in-vitro fertilized embryo blastomere images that can accurately segment the blastomere and embryo shapes, achieve precise edge tracing and area calculation of the blastomeres and the embryo, and offer high recognition accuracy.
To achieve the above purpose, the invention adopts the following technical solution:
an automatic segmentation and area calculation method for an in vitro fertilized embryo blastomere image comprises the following steps:
performing blastomere detection on the embryo optical lens picture by adopting a target detection model based on deep learning to obtain a blastomere candidate frame;
fusing the blastomere candidate frames to obtain embryo candidate frames;
extracting a region of interest based on the embryo candidate frame;
performing image enhancement processing on the region of interest;
after the region of interest of the embryo microscope picture is subjected to image enhancement processing, an interactive image segmentation algorithm is adopted to automatically segment the embryo microscope picture, and an blastomere mask and an embryo mask are obtained;
respectively carrying out blastomere tracing and embryo tracing on an embryo optical lens picture through the blastomere mask and the embryo mask;
and calculating the areas of the blastomere mask and the embryo mask to obtain the blastomere area and the embryo area.
In some embodiments, the step of image enhancement includes:
performing tone-scale adjustment on the region of interest;
reducing noise in the tone-scale-adjusted region of interest;
enhancing the contrast of the noise-reduced region of interest.
In some embodiments, the step of tone-scale adjustment comprises:
converting the region of interest into HSV channels;
calculating the mean of the H channel, denoted h;
calculating the mean of the V channel, denoted v;
presetting a judgment threshold T; if
Figure BDA0004003578230000031
then the tone-scale adjustment is performed according to the tone-scale adjustment formula; otherwise, tone-scale adjustment is skipped;
The tone-scale adjustment formula is:
I' = 255 × ((I − s) / (h − s))^(1/m)
wherein I is the image data before tone-scale adjustment, I' is the image data after adjustment, and s, h and m are constants satisfying s ∈ [0, 170], h ∈ (s, 255) and m ∈ [1, 3]; values of I − s smaller than 0 are set to 0.
In some embodiments, the step of reducing noise comprises:
constructing a total-variation noise-reduction function and presetting a noise-reduction parameter interval;
iterating over the noise-reduction parameter interval on the input image with the optimal J-invariance method to obtain the optimal noise-reduction parameter;
and reducing noise in the input image with the optimal noise-reduction parameter.
In some embodiments, the step of performing blastomere detection on the embryo light-microscope image with a deep-learning-based object detection model to obtain the blastomere candidate boxes includes:
reading the embryo light-microscope image data;
performing neighborhood histogram equalization on the read data;
inputting the processed picture into the deep-learning-based object detection model to detect the blastomeres and obtain preliminary blastomere rectangular boxes;
and expanding each preliminary blastomere rectangular box by a preset offset and recording the vertex coordinates; if an expanded coordinate exceeds the original size of the embryo light-microscope image, the corresponding original coordinate is kept, thereby obtaining the blastomere candidate boxes.
In some embodiments, the step of extracting a region of interest based on the embryo candidate box comprises:
expanding the embryo candidate box about its center according to an expansion formula to obtain an expanded candidate box, taking the expanded candidate box as the region of interest, and recording the corresponding vertex coordinates;
the expansion formula is: L' = L × k
wherein L is the side length of the embryo candidate box, L' is the side length of the expanded candidate box, and k is the expansion coefficient, k ∈ [1.2, 2].
In some embodiments, the step of automatically segmenting the embryo light-microscope image with an interactive image segmentation algorithm to obtain the blastomere masks and the embryo mask comprises:
inputting the embryo light-microscope image whose region of interest has been enhanced, and running the GrabCut algorithm with each blastomere candidate box as the target candidate box to obtain a blastomere mask.
In some embodiments, the step of automatically segmenting the embryo light-microscope image with an interactive image segmentation algorithm to obtain the blastomere masks and the embryo mask comprises:
superimposing the blastomere masks, binarizing the image, labelling the connected components and keeping only the connected component with the largest area, then filling the holes to obtain a fusion mask;
inputting the enhanced region of interest, running the GrabCut algorithm with the embryo candidate box as the target candidate box to obtain an initial mask, binarizing the image, labelling the connected components and keeping only the connected component with the largest area, then filling the holes to obtain the embryo initial mask;
performing a logical AND of the fusion mask and the embryo initial mask to obtain the embryo intermediate mask;
constructing a pixel-level label map based on the embryo intermediate mask;
and using the label map as a precise label map, running the GrabCut algorithm on the enhanced embryo light-microscope image of the region of interest to obtain the embryo mask.
In some embodiments, the step of constructing a pixel-level label map based on the embryo intermediate mask comprises:
eroding the embryo intermediate mask and inverting the result, which is marked as background;
eroding the embryo intermediate mask, which is marked as foreground;
marking the transition region between background and foreground as probable foreground;
and constructing the pixel-level label map from the foreground, background and probable foreground.
In some embodiments, the step of superimposing the blastomere masks, binarizing, keeping only the largest connected component and filling holes to obtain the fusion mask includes:
superimposing the blastomere masks and setting every pixel greater than 0 to 255 to obtain a binary image;
labelling the connected components of the binary image;
keeping the pixels of the largest-area connected component and setting the remaining pixels to 0;
and filling the holes to obtain the fusion mask.
The invention has the beneficial effects that:
According to the invention, a deep-learning-based object detection model is first used to obtain the blastomere candidate boxes and the embryo candidate box, preliminarily locating the blastomeres and the embryo, accurately extracting the region of interest and fixing the proportion of the image occupied by the embryo, which minimises the interference caused by impurities in the culture medium, the embryo-to-image ratio, the shooting light intensity, the contrast and so on. Image enhancement is then applied only to the region of interest, which on the one hand reduces the differences between pictures and on the other hand enlarges the difference between the embryo and the background within the same picture, increasing the robustness of the subsequent interactive image segmentation algorithm. As a result, the interactive image segmentation algorithm can accurately segment the blastomere and embryo shapes, achieving precise segmentation, edge tracing and area calculation of the blastomeres and the embryo without the training cost of deep-learning segmentation methods and with high interpretability, and the recognition accuracy is high.
Drawings
FIG. 1 is a flow chart of a method for automatic segmentation and area calculation of in vitro fertilized embryo blastomere images according to the present invention.
Fig. 2 is a schematic flow chart of extracting a region of interest according to the present invention.
Fig. 3 is a flow chart of the image enhancement process of the present invention.
FIG. 4 is a flow chart of the invention for obtaining embryo masks.
FIG. 5 is a schematic diagram of the operation flow of a method for automatic segmentation and area calculation of in vitro fertilized embryo blastomere images according to the present invention.
Detailed Description
The invention is further illustrated by the following examples in conjunction with the accompanying drawings:
referring to fig. 1 and 5, the invention discloses a method for automatically segmenting an in vitro fertilized embryo blastomere image and calculating an area, which comprises the following steps:
S100: performing blastomere detection on the embryo light-microscope image with a deep-learning-based object detection model to obtain blastomere candidate boxes;
S200: fusing the blastomere candidate boxes to obtain an embryo candidate box;
S300: extracting a region of interest based on the embryo candidate box;
S400: performing image enhancement on the region of interest;
S500: after the region of interest of the embryo light-microscope image has been enhanced, automatically segmenting the embryo light-microscope image with an interactive image segmentation algorithm to obtain blastomere masks and an embryo mask;
S600: tracing the blastomeres and the embryo on the embryo light-microscope image through the blastomere masks and the embryo mask respectively;
S700: calculating the areas of the blastomere masks and the embryo mask to obtain the blastomere areas and the embryo area.
According to the invention, a deep-learning-based object detection model is first used to obtain the blastomere candidate boxes and the embryo candidate box, preliminarily locating the blastomeres and the embryo, accurately extracting the region of interest and fixing the proportion of the image occupied by the embryo, which minimises the interference caused by impurities in the culture medium, the embryo-to-image ratio, the shooting light intensity, the contrast and so on. Image enhancement is then applied only to the region of interest, which on the one hand reduces the differences between pictures and on the other hand enlarges the difference between the embryo and the background within the same picture, increasing the robustness of the subsequent interactive image segmentation algorithm. As a result, the interactive image segmentation algorithm can accurately segment the blastomere and embryo shapes, achieving precise segmentation, edge tracing and area calculation of the blastomeres and the embryo without the training cost of deep-learning segmentation methods and with high interpretability, and the recognition accuracy is high.
In some embodiments, step S700 may be performed before S600 or simultaneously with S600.
In some embodiments, referring to fig. 3, the image enhancement comprises the steps of:
S410: performing tone-scale adjustment on the region of interest;
S420: reducing noise in the tone-scale-adjusted region of interest;
S430: enhancing the contrast of the noise-reduced region of interest.
Enhancing the region of interest through tone-scale adjustment, noise reduction and contrast enhancement reduces the differences between pictures and enlarges the difference between the embryo and the background within the same picture, which facilitates subsequent high-precision segmentation of the embryo light-microscope image by the interactive image segmentation algorithm.
In some embodiments, the step of tone-scale adjustment includes:
S411: converting the region of interest into HSV channels;
S412: calculating the mean of the H channel, denoted h, and the mean of the V channel, denoted v;
S413: presetting a judgment threshold T; if
Figure BDA0004003578230000071
the tone-scale adjustment is performed according to the tone-scale adjustment formula; otherwise, tone-scale adjustment is skipped;
the tone-scale adjustment formula is:
I' = 255 × ((I − s) / (h − s))^(1/m)
wherein I is the image data before tone-scale adjustment, I' is the image data after adjustment, and s, h and m are constants satisfying s ∈ [0, 170], h ∈ (s, 255) and m ∈ [1, 3]; values of I − s smaller than 0 are set to 0.
Preferably, the judgment threshold T ∈ [100, 150].
A conventional tone-scale algorithm requires the adjustment amplitude to be set manually; in the invention, a judgment condition based on hue and brightness automatically filters the embryo light-microscope images, so that tone-scale adjustment is applied only to the pictures that satisfy the condition.
Among tone-scale adjustment methods, conventional approaches such as histogram equalization and histogram stretching change the histogram trend and variance of the original picture while adjusting the tone scale.
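Assuming the tone-scale formula is the standard levels mapping I' = 255 × ((I − s)/(h − s))^(1/m) with negative values of I − s clamped to 0, the per-channel adjustment might be sketched as follows (the constants s, h and m are example values chosen from the stated ranges, not values given in the patent):

```python
import numpy as np

def levels_adjust(channel, s=30, h=220, m=1.5):
    """Levels-style tone-scale mapping: clamp I - s at 0, rescale by the
    range width h - s, then apply the gamma-like exponent 1/m.
    Constraints from the text: s in [0, 170], h in (s, 255), m in [1, 3]."""
    x = channel.astype(np.float64) - s
    x[x < 0] = 0.0                       # "values of I - s smaller than 0 are set to 0"
    x = np.clip(x / (h - s), 0.0, 1.0)   # normalise to [0, 1]
    return (255.0 * x ** (1.0 / m)).astype(np.uint8)
```

With s = 0, h = 255 and m = 1 the mapping is the identity, which makes the effect of each constant easy to check in isolation.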
In some embodiments, the step of reducing noise comprises:
S421: constructing a total-variation (TV) noise-reduction function and presetting a noise-reduction parameter interval;
S422: iterating over the noise-reduction parameter interval on the input image with the optimal J-invariance method to obtain the optimal noise-reduction parameter;
S423: reducing noise in the input image with the optimal noise-reduction parameter.
Preferably, the noise reduction parameter interval is [0.1,0.9], and the preset step size is 0.02.
In particular, the optimal J-invariance method may follow the steps described in "Noise2Self: Blind Denoising by Self-Supervision" (J. Batson & L. Royer, International Conference on Machine Learning, pp. 524-533, 2019).
The noise-reduction method is automatically parameterised and is therefore more robust than conventional single-parameter noise reduction. Total-variation noise reduction is adopted because, compared with algorithms such as bilateral filtering and Gaussian blur, it offers the strongest possible noise suppression while preserving edges in the embryo light-microscope photographing task.
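The parameter search can be sketched with scikit-image, whose `calibrate_denoiser` implements the self-supervised J-invariant calibration of Batson & Royer over `denoise_tv_chambolle`; the toy image is illustrative, while the weight interval [0.1, 0.9] and step 0.02 follow the text:

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle, calibrate_denoiser

rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0                              # toy bright region
noisy = clean + rng.normal(scale=0.2, size=clean.shape)

# Search TV weights in the preset interval [0.1, 0.9] with step 0.02
params = {'weight': np.arange(0.1, 0.9, 0.02)}
best_denoise = calibrate_denoiser(noisy, denoise_tv_chambolle,
                                  denoise_parameters=params)
denoised = best_denoise(noisy)                         # apply the calibrated denoiser
```

The calibration needs no clean reference image, which matches the patent's goal of automatic parameter selection across devices and batches.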
In some embodiments, the step of contrast enhancement comprises:
S431: selecting a lowest probability threshold q1 and a highest probability threshold q2 from the interval [0, 1], with 0 < q1 < q2 < 1;
S432: computing the pixel histogram of the input image and obtaining the pixel values t1 and t2 corresponding to q1 and q2;
S433: performing histogram stretching on the pixel histogram of the input image with t1 and t2.
Together, the machine-learning-based tone-scale adjustment, noise reduction and contrast enhancement form an automatic standardisation flow that can cope with the interference caused by the different embryo-to-image ratios, contrasts, illumination intensities and signal-to-noise ratios found in photographs taken by different devices and by different batches of the same device.
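Steps S431-S433 amount to quantile-based histogram stretching; a minimal sketch (the quantile thresholds q1 and q2 are example values within the stated interval):

```python
import numpy as np

def stretch_contrast(img, q1=0.02, q2=0.98):
    """Histogram stretching between the pixel values t1, t2 at the q1 and
    q2 quantiles (0 < q1 < q2 < 1); output is rescaled to [0, 255]."""
    t1, t2 = np.quantile(img, [q1, q2])
    out = (img.astype(np.float64) - t1) / max(t2 - t1, 1e-9) * 255.0
    return np.clip(out, 0, 255).astype(np.uint8)
```

Using quantiles rather than the raw minimum and maximum makes the stretch robust to the few extreme pixels produced by highlights or debris.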
In some embodiments, obtaining the blastomere candidate boxes comprises the steps of:
S110: reading the embryo light-microscope image data;
S120: performing neighborhood histogram equalization on the read data;
S130: inputting the processed picture into a deep-learning-based object detection model to detect the blastomeres and obtain preliminary blastomere rectangular boxes, where the object detection model may be a YOLO-series or RCNN-series model;
S140: expanding each preliminary blastomere rectangular box by a preset offset and recording the vertex coordinates; if an expanded coordinate exceeds the original size of the embryo light-microscope image, the corresponding original coordinate is kept, thereby obtaining the blastomere candidate boxes; preferably, the preset offset is a value chosen from the pixel interval [10, 50].
By combining the deep-learning-based object detection model with the expansion of step S140, the invention locates the blastomeres while, to a certain extent, avoiding the risk caused by inaccuracy of the model's minimum bounding boxes.
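The expansion and clamping of step S140 can be sketched as follows (the corner-coordinate box convention is an assumption; the offset value is an example from the stated pixel interval):

```python
def expand_box(x1, y1, x2, y2, offset, img_w, img_h):
    """Expand a detection box (corner coordinates) by a fixed pixel offset
    (e.g. chosen from [10, 50]); coordinates that would leave the image
    keep the original image border, as described in step S140."""
    return (max(x1 - offset, 0), max(y1 - offset, 0),
            min(x2 + offset, img_w), min(y2 + offset, img_h))
```

The same clamped-expansion idea applies to the ROI extraction of step S310, except that there the box grows multiplicatively by the coefficient k about its center.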
In some embodiments, referring to fig. 2, the step of extracting the region of interest based on the embryo candidate box comprises:
S310: expanding the embryo candidate box about its center according to an expansion formula to obtain an expanded candidate box, taking the expanded candidate box as the region of interest, and recording the corresponding vertex coordinates;
the expansion formula is: L' = L × k
wherein L is the side length of the embryo candidate box, L' is the side length of the expanded candidate box, and k is the expansion coefficient, k ∈ [1.2, 2].
Extracting the region of interest by combining the object detection model with a fixed magnification factor reduces the amount of data the subsequent steps must process and, to a certain extent, avoids the risk caused by inaccuracy of the model's minimum bounding boxes.
In some embodiments, referring to fig. 4, the step of automatically segmenting the embryo light-microscope image with an interactive image segmentation algorithm to obtain the blastomere masks and the embryo mask comprises:
S510: inputting the embryo light-microscope image whose region of interest has been enhanced, and running the GrabCut algorithm with each blastomere candidate box as the target candidate box to obtain a blastomere mask.
In the blastomere segmentation task, if only the minimum bounding box of each blastomere is used as the target box for segmentation, the result is poor because other blastomeres are also present in the background. The invention enhances only the region of interest and runs the GrabCut algorithm for a single blastomere on the whole image, which widens the gap between background and foreground so that the GrabCut algorithm can separate even highly overlapping blastomeres.
In some embodiments, referring to fig. 4 and 5, the step of automatically segmenting the embryo light-microscope image with an interactive image segmentation algorithm to obtain the blastomere masks and the embryo mask comprises:
S521: superimposing the blastomere masks, binarizing the image, labelling the connected components and keeping only the connected component with the largest area, then filling the holes to obtain a fusion mask;
S522: inputting the enhanced region of interest, running the GrabCut algorithm with the embryo candidate box as the target candidate box to obtain an initial mask, binarizing the image, labelling the connected components and keeping only the connected component with the largest area, then filling the holes to obtain the embryo initial mask;
S523: performing a logical AND of the fusion mask and the embryo initial mask to obtain the embryo intermediate mask;
S524: constructing a pixel-level label map based on the embryo intermediate mask;
S525: using the label map as a precise label map, running the GrabCut algorithm on the enhanced embryo light-microscope image of the region of interest to obtain the embryo mask.
The existing GrabCut algorithm requires either a manually set target candidate box or a manually drawn pixel-level precise label map. When the zona pellucida is unclear, or highlights appear at the edges, or fragments adhere to the blastomeres, or impurities such as extra-embryonic granulosa cells and sperm adhere to the embryo, a simple GrabCut based only on a target candidate box cannot distinguish foreground from background well, and the segmentation overshoots or undershoots the boundary. The invention segments the whole embryo region and the blastomeres separately and fuses the results to generate the background label, which lets the background label penetrate more accurately to the inner side of the zona pellucida; the resulting pixel-level label map lets GrabCut refine the segmentation recursively near the blurred embryo edge, so the embryo is identified accurately and can subsequently be traced and measured precisely.
In some embodiments, the step of constructing a pixel-level label map based on the intermediate embryo mask comprises:
1) eroding the intermediate embryo mask and inverting the colors, marking the result as background;
2) eroding the intermediate embryo mask, marking the result as foreground;
3) marking the transition region between background and foreground as probable foreground;
4) constructing the pixel-level label map from the foreground, background, and probable foreground.
By eroding the intermediate embryo mask before labeling, the method further ensures that the background label penetrates to the inner side of the zona pellucida, overcoming the interference with accurate embryo segmentation caused by factors such as an unclear zona pellucida, highlights at the edge, fragments adhering to blastomeres, and impurities such as extra-embryonic granulosa cells and sperm adhering to the embryo.
Further, the step of constructing a pixel-level label map based on the intermediate embryo mask comprises:
1) selecting a value in the interval [3, 13] as the side length and generating a square of pixel value 1 as the kernel, denoted k;
2) eroding the intermediate embryo mask with 1 iteration and kernel k, then inverting the colors and marking the result as background;
3) eroding the intermediate embryo mask with i iterations and kernel k, marking the result as foreground, i ∈ [3, 15];
4) marking the transition region between background and foreground as probable foreground;
5) constructing the pixel-level label map from the foreground, background, and probable foreground.
By applying different numbers of erosion iterations before labeling the background and the foreground, over- and under-segmentation between foreground and background is further avoided.
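The parameterised construction above can be sketched as follows. This is a hedged illustration, not the patent's code: it uses scipy.ndimage erosion in place of an OpenCV implementation, fixes side = 5 and fg_iters = 4 as example values inside the stated intervals, and encodes the labels with OpenCV's GrabCut conventions (0 = background, 1 = foreground, 3 = probable foreground):

```python
import numpy as np
from scipy import ndimage

GC_BGD, GC_FGD, GC_PR_FGD = 0, 1, 3  # OpenCV GrabCut label codes

def build_label_map(intermediate_mask, side=5, fg_iters=4):
    """Pixel-level label map from the intermediate embryo mask.

    side in [3, 13] is the kernel edge length; fg_iters in [3, 15] is the
    number of foreground erosion iterations (example values here).
    """
    k = np.ones((side, side), dtype=bool)  # square kernel of ones
    m = intermediate_mask > 0
    # background: erode once with kernel k, then invert the colours
    background = ~ndimage.binary_erosion(m, structure=k, iterations=1)
    # foreground: erode fg_iters times with the same kernel
    foreground = ndimage.binary_erosion(m, structure=k, iterations=fg_iters)
    # everything between the two is the probable-foreground transition region
    labels = np.full(m.shape, GC_PR_FGD, dtype=np.uint8)
    labels[background] = GC_BGD
    labels[foreground] = GC_FGD
    return labels
```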
In some embodiments, the step of superimposing the blastomere masks, binarizing the image, labeling the connected components while keeping only the largest one, and then filling holes to obtain the fusion mask comprises:
1) superimposing the blastomere masks and setting pixels greater than 0 to 255 to obtain a binary image;
2) labeling the connected components of the binary image;
3) keeping the pixels of the largest connected component and setting the remaining pixels to 0;
4) filling holes to obtain the fusion mask.
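A sketch of steps 1)–4), again with scipy.ndimage as a stand-in for an OpenCV implementation (function and variable names are illustrative):

```python
import numpy as np
from scipy import ndimage

def fuse_blastomere_masks(masks):
    """Fuse per-blastomere 0/255 masks into a single fusion mask."""
    # 1) superimpose the masks and binarise: any pixel > 0 becomes foreground
    stacked = np.sum(np.stack(masks), axis=0)
    binary = stacked > 0
    # 2) label the connected components of the binary image
    labeled, n = ndimage.label(binary)
    if n == 0:
        return np.zeros(binary.shape, dtype=np.uint8)
    # 3) keep only the largest component, zero everything else
    sizes = ndimage.sum(binary, labeled, index=range(1, n + 1))
    keep = labeled == (1 + int(np.argmax(sizes)))
    # 4) fill interior holes to obtain the fusion mask
    return (ndimage.binary_fill_holes(keep) * 255).astype(np.uint8)
```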
Further, the step of inputting the region of interest after image enhancement, running the GrabCut algorithm with the embryo candidate box as the target candidate box to obtain an initial mask, binarizing the image, labeling the connected components while keeping only the largest one, and then filling holes to obtain the initial embryo mask comprises:
1) inputting the region of interest after image enhancement and running the GrabCut algorithm with the embryo candidate box as the target candidate box for n iterations to obtain an initial mask, n ∈ [5, 20];
2) binarizing the initial mask by setting pixels greater than 0 to 255 to obtain a binary image;
3) labeling the connected components of the binary image;
4) keeping the pixels of the largest connected component and setting the remaining pixels to 0;
5) filling holes to obtain the initial embryo mask.
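In OpenCV, step 1) would be a call such as cv2.grabCut(roi, mask, embryo_box, bgd_model, fgd_model, n, cv2.GC_INIT_WITH_RECT) with n ∈ [5, 20]; GrabCut returns a label map whose values 1 and 3 mark definite and probable foreground. The post-processing of steps 2)–5) can then be sketched without OpenCV (scipy.ndimage as a stand-in, names illustrative):

```python
import numpy as np
from scipy import ndimage

GC_BGD, GC_FGD, GC_PR_BGD, GC_PR_FGD = 0, 1, 2, 3  # OpenCV GrabCut codes

def initial_embryo_mask(grabcut_labels):
    """Binarise a GrabCut label map, keep the largest component, fill holes."""
    # definite and probable foreground pixels become True
    binary = np.isin(grabcut_labels, (GC_FGD, GC_PR_FGD))
    labeled, n = ndimage.label(binary)
    if n == 0:
        return np.zeros(binary.shape, np.uint8)
    sizes = ndimage.sum(binary, labeled, index=range(1, n + 1))
    keep = labeled == (1 + int(np.argmax(sizes)))
    return (ndimage.binary_fill_holes(keep) * 255).astype(np.uint8)
```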
Further, the step of tracing the blastomeres and the embryo on the embryo light-microscope image with the blastomere masks and the embryo mask respectively comprises:
1) performing edge detection on the image based on the blastomere masks, then tracing the edges;
2) performing edge detection on the image based on the embryo mask, then tracing the edges.
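Edge detection followed by tracing can be sketched morphologically: the boundary of a mask is the mask minus its erosion, and tracing colours those boundary pixels on a copy of the image (scipy.ndimage as a stand-in; the function name and green tracing colour are illustrative):

```python
import numpy as np
from scipy import ndimage

def trace_outline(image_rgb, mask, color=(0, 255, 0)):
    """Overlay the boundary of a 0/255 mask onto an RGB image copy."""
    m = mask > 0
    # boundary = mask pixels whose erosion removed them
    boundary = m & ~ndimage.binary_erosion(m)
    traced = image_rgb.copy()
    traced[boundary] = color
    return traced
```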
Correspondingly, the invention also provides a computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the steps of the above method when executing the computer program. The invention likewise provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the steps of the above method.
The foregoing describes merely preferred embodiments of the invention; those skilled in the art will appreciate that variations and modifications may be made without departing from the principles of the invention, and such variations and modifications are regarded as falling within the scope of the invention.

Claims (10)

1. A method for automatic segmentation and area calculation of in vitro fertilized embryo blastomere images, characterized by comprising the following steps:
performing blastomere detection on an embryo light-microscope image with a deep-learning-based object detection model to obtain blastomere candidate boxes;
fusing the blastomere candidate boxes to obtain an embryo candidate box;
extracting a region of interest based on the embryo candidate box;
performing image enhancement on the region of interest;
after the image enhancement of the region of interest, automatically segmenting the embryo light-microscope image with an interactive image segmentation algorithm to obtain blastomere masks and an embryo mask;
tracing the blastomeres and the embryo on the embryo light-microscope image with the blastomere masks and the embryo mask respectively;
and calculating the areas of the blastomere masks and the embryo mask to obtain the blastomere areas and the embryo area.
2. The method for automatic segmentation and area calculation of in vitro fertilized embryo blastomere images according to claim 1, wherein the image enhancement step comprises:
performing levels adjustment on the region of interest;
denoising the levels-adjusted region of interest;
enhancing the contrast of the denoised region of interest.
3. The method for automatic segmentation and area calculation of in vitro fertilized embryo blastomere images according to claim 2, wherein the levels adjustment step comprises:
converting the region of interest to HSV channels;
computing the mean of the H channel, denoted H;
computing the mean of the V channel, denoted V;
presetting a judgment threshold T; if the judgment condition (given in the source only as formula image FDA0004003578220000011) holds, performing the levels adjustment according to the levels formula, otherwise skipping the levels adjustment;
the levels formula (given in the source only as formula image FDA0004003578220000021) maps the image data I before adjustment to the image data I' after adjustment, where s, h and m are constants satisfying s ∈ [0, 170], h ∈ (s, 255), m ∈ [1, 3], and values of I − s below 0 are set to 0.
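Because the judgment condition and the levels formula appear in the source only as images, they cannot be reproduced exactly. As a hedged sketch, a conventional black-point/white-point/gamma levels transform that satisfies the stated constraints (s ∈ [0, 170], h ∈ (s, 255), m ∈ [1, 3], values of I − s below 0 set to 0) might look like this; the exact form in the patent may differ:

```python
import numpy as np

def levels_adjust(I, s=30, h=220, m=1.5):
    """Hypothetical levels transform consistent with the stated constraints.

    s acts as a black point, h as a white point, and m as a gamma exponent;
    these roles are an assumption, not the patent's published formula.
    """
    I = I.astype(float)
    shifted = np.clip(I - s, 0, None)            # values of I - s below 0 -> 0
    out = 255.0 * (shifted / (h - s)) ** (1.0 / m)
    return np.clip(out, 0, 255).astype(np.uint8)
```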
4. The method for automatic segmentation and area calculation of in vitro fertilized embryo blastomere images according to claim 2, wherein the denoising step comprises:
constructing a total-variation denoising function and presetting a denoising-parameter interval;
searching the denoising-parameter interval on the input image with an optimal J-invariance method to obtain the optimal denoising parameters;
denoising the input image with the optimal denoising parameters.
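As a toy illustration of total-variation denoising (smoothed-TV gradient descent with periodic boundaries; not the patent's implementation, and all parameter values are illustrative). Selecting the weight by a J-invariance search over a preset interval, as the claim describes, is available in libraries such as scikit-image's calibrate_denoiser and is not re-implemented here:

```python
import numpy as np

def tv_denoise(f, lam=0.2, n_iter=100, step=0.1, eps=1e-6):
    """Minimise 0.5*||u - f||^2 + lam * smoothed-TV(u) by gradient descent."""
    u = f.astype(float).copy()
    for _ in range(n_iter):
        # forward differences (periodic boundary via np.roll)
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        norm = np.sqrt(ux ** 2 + uy ** 2 + eps)   # eps smooths the TV term
        px, py = ux / norm, uy / norm
        # divergence of the normalised gradient field
        div = px - np.roll(px, 1, axis=1) + py - np.roll(py, 1, axis=0)
        u -= step * ((u - f) - lam * div)         # fidelity minus smoothing
    return u
```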
5. The method for automatic segmentation and area calculation of in vitro fertilized embryo blastomere images according to claim 1, wherein the step of performing blastomere detection on the embryo light-microscope image with a deep-learning-based object detection model to obtain blastomere candidate boxes comprises:
reading the embryo light-microscope image data;
performing neighborhood histogram equalization on the read data;
inputting the processed image into the deep-learning-based object detection model for blastomere detection to obtain preliminary blastomere rectangles;
expanding each preliminary blastomere rectangle by a preset offset and recording the vertex coordinates; if an expanded coordinate exceeds the original size of the embryo light-microscope image, the corresponding original coordinate is kept, thereby obtaining the blastomere candidate boxes.
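The expansion rule in the last step, where an out-of-bounds expanded coordinate falls back to the original coordinate rather than being clipped to the border, can be sketched in pure Python (function and parameter names illustrative):

```python
def expand_box(box, offset, img_w, img_h):
    """Expand a preliminary blastomere rectangle by a preset offset.

    box is (x1, y1, x2, y2) with a top-left origin; any expanded coordinate
    that leaves the image keeps its original value instead.
    """
    x1, y1, x2, y2 = box
    x1e, y1e = x1 - offset, y1 - offset
    x2e, y2e = x2 + offset, y2 + offset
    # keep the original coordinate when the expanded one exceeds the image
    x1e = x1e if x1e >= 0 else x1
    y1e = y1e if y1e >= 0 else y1
    x2e = x2e if x2e <= img_w - 1 else x2
    y2e = y2e if y2e <= img_h - 1 else y2
    return (x1e, y1e, x2e, y2e)
```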
6. The method for automatic segmentation and area calculation of in vitro fertilized embryo blastomere images according to claim 1, wherein the step of extracting a region of interest based on the embryo candidate box comprises:
expanding the embryo candidate box about its center according to the expansion formula to obtain an expanded candidate box, taking the expanded candidate box as the region of interest, and recording the corresponding vertex coordinates;
the expansion formula is: L' = L × k,
where L is the side length of the embryo candidate box, L' is the side length of the expanded candidate box, and k is the expansion coefficient, k ∈ [1.2, 2].
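The center-based expansion L' = L × k can be sketched as follows (k = 2 is used in the usage check only to make the arithmetic obvious; the claim restricts k to [1.2, 2], and the function name is illustrative):

```python
def expand_roi(box, k=1.5):
    """Expand a square candidate box (x1, y1, x2, y2) about its center."""
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    half = (x2 - x1) * k / 2.0   # L' / 2 where L' = L * k
    return (cx - half, cy - half, cx + half, cy + half)
```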
7. The method for automatic segmentation and area calculation of in vitro fertilized embryo blastomere images according to claim 1, wherein the step of automatically segmenting the embryo light-microscope image with an interactive image segmentation algorithm to obtain blastomere masks and an embryo mask comprises:
inputting the embryo light-microscope image of the region of interest after image enhancement and running the GrabCut algorithm with each blastomere candidate box as the target candidate box to obtain the blastomere masks.
8. The method for automatic segmentation and area calculation of in vitro fertilized embryo blastomere images according to claim 1, wherein the step of automatically segmenting the embryo light-microscope image with an interactive image segmentation algorithm to obtain blastomere masks and an embryo mask comprises:
superimposing the blastomere masks, binarizing the image, labeling the connected components and keeping only the largest one, then filling holes to obtain a fusion mask;
inputting the region of interest after image enhancement, running the GrabCut algorithm with the embryo candidate box as the target candidate box to obtain an initial mask, binarizing the image, labeling the connected components and keeping only the largest one, then filling holes to obtain the initial embryo mask;
performing a logical AND of the fusion mask and the initial embryo mask to obtain an intermediate embryo mask;
constructing a pixel-level label map based on the intermediate embryo mask;
using this label map as the precise label map, running the GrabCut algorithm again on the enhanced embryo light-microscope image of the region of interest to obtain the embryo mask.
9. The method for automatic segmentation and area calculation of in vitro fertilized embryo blastomere images according to claim 8, wherein the step of constructing a pixel-level label map based on the intermediate embryo mask comprises:
eroding the intermediate embryo mask and inverting the colors, marking the result as background;
eroding the intermediate embryo mask, marking the result as foreground;
marking the transition region between background and foreground as probable foreground;
constructing the pixel-level label map from the foreground, background, and probable foreground.
10. The method for automatic segmentation and area calculation of in vitro fertilized embryo blastomere images according to claim 8, wherein the step of superimposing the blastomere masks, binarizing the image, labeling the connected components while keeping only the largest one, and then filling holes to obtain the fusion mask comprises:
superimposing the blastomere masks and setting pixels greater than 0 to 255 to obtain a binary image;
labeling the connected components of the binary image;
keeping the pixels of the largest connected component and setting the remaining pixels to 0;
filling holes to obtain the fusion mask.
CN202211645122.7A 2022-12-16 2022-12-16 Method for automatically dividing and calculating area of blastomere image of in-vitro fertilized embryo Pending CN116091421A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211645122.7A CN116091421A (en) 2022-12-16 2022-12-16 Method for automatically dividing and calculating area of blastomere image of in-vitro fertilized embryo


Publications (1)

Publication Number Publication Date
CN116091421A true CN116091421A (en) 2023-05-09

Family

ID=86200229

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211645122.7A Pending CN116091421A (en) 2022-12-16 2022-12-16 Method for automatically dividing and calculating area of blastomere image of in-vitro fertilized embryo

Country Status (1)

Country Link
CN (1) CN116091421A (en)


Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116609332A (en) * 2023-07-20 2023-08-18 佳木斯大学 Novel tissue embryo pathological section panorama scanning system
CN116609332B (en) * 2023-07-20 2023-10-13 佳木斯大学 Novel tissue embryo pathological section panorama scanning system
CN116739949A (en) * 2023-08-15 2023-09-12 武汉互创联合科技有限公司 Blastomere edge enhancement processing method of embryo image
CN116739949B (en) * 2023-08-15 2023-11-03 武汉互创联合科技有限公司 Blastomere edge enhancement processing method of embryo image
CN116758539A (en) * 2023-08-17 2023-09-15 武汉互创联合科技有限公司 Embryo image blastomere identification method based on data enhancement
CN116778481A (en) * 2023-08-17 2023-09-19 武汉互创联合科技有限公司 Method and system for identifying blastomere image based on key point detection
CN116758539B (en) * 2023-08-17 2023-10-31 武汉互创联合科技有限公司 Embryo image blastomere identification method based on data enhancement
CN116778481B (en) * 2023-08-17 2023-10-31 武汉互创联合科技有限公司 Method and system for identifying blastomere image based on key point detection
CN116757967A (en) * 2023-08-18 2023-09-15 武汉互创联合科技有限公司 Embryo image fragment removing method, computer device and readable storage medium
CN116757967B (en) * 2023-08-18 2023-11-03 武汉互创联合科技有限公司 Embryo image fragment removing method, computer device and readable storage medium

Similar Documents

Publication Publication Date Title
CN116091421A (en) Method for automatically dividing and calculating area of blastomere image of in-vitro fertilized embryo
CN107256558B (en) Unsupervised automatic cervical cell image segmentation method and system
EP3455782B1 (en) System and method for detecting plant diseases
CN106651872B (en) Pavement crack identification method and system based on Prewitt operator
CN102426649B (en) Simple steel seal digital automatic identification method with high accuracy rate
CN108460757A (en) A kind of mobile phone TFT-LCD screens Mura defects online automatic detection method
CN109241973B (en) Full-automatic soft segmentation method for characters under texture background
CN104794502A (en) Image processing and mode recognition technology-based rice blast spore microscopic image recognition method
CN113723573B (en) Tumor tissue pathological classification system and method based on adaptive proportion learning
CN110648330B (en) Defect detection method for camera glass
Shahin et al. A novel white blood cells segmentation algorithm based on adaptive neutrosophic similarity score
US20200193139A1 (en) Systems and methods for automated cell segmentation and labeling in immunofluorescence microscopy
CN111539980B (en) Multi-target tracking method based on visible light
CN110310291A (en) A kind of rice blast hierarchy system and its method
CN109598681B (en) No-reference quality evaluation method for image after repairing of symmetrical Thangka
CN109166092A (en) A kind of image defect detection method and system
CN109975196B (en) Reticulocyte detection method and system
CN106372593B (en) Optic disk area positioning method based on vascular convergence
CN112330613A (en) Method and system for evaluating quality of cytopathology digital image
CN117197064A (en) Automatic non-contact eye red degree analysis method
CN116596899A (en) Method, device, terminal and medium for identifying circulating tumor cells based on fluorescence image
CN113763404B (en) Foam image segmentation method based on optimization mark and edge constraint watershed algorithm
CN111429461A (en) Novel segmentation method for overlapped exfoliated epithelial cells
CN110458042B (en) Method for detecting number of probes in fluorescent CTC
CN114092441A (en) Product surface defect detection method and system based on dual neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination