CN110796127A - Embryo pronucleus detection system based on occlusion sensing, storage medium and terminal

Embryo pronucleus detection system based on occlusion sensing, storage medium and terminal

Info

Publication number
CN110796127A
CN110796127A
Authority
CN
China
Prior art keywords
layer
embryo
prokaryotic
occlusion
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010007935.8A
Other languages
Chinese (zh)
Inventor
杨波
蒲逊
汪燕
邓唐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Communication Research Planning & Designing Co Ltd
Original Assignee
Sichuan Communication Research Planning & Designing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Communication Research Planning & Designing Co Ltd
Priority to CN202010007935.8A
Publication of CN110796127A
Pending legal-status Critical Current

Classifications

    • G06V 20/69 - Microscopic objects, e.g. biological cells or cellular parts
    • G06V 20/695 - Microscopic objects: preprocessing, e.g. image segmentation
    • G06V 20/698 - Microscopic objects: matching; classification
    • G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/26 - Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06V 10/267 - Segmentation by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V 2201/03 - Recognition of patterns in medical or anatomical images
    • G06F 18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415 - Classification based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus false rejection rate
    • G06N 3/045 - Neural network architectures: combinations of networks
    • G06N 3/082 - Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Biology (AREA)
  • Biophysics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an embryo pronucleus detection system based on occlusion perception, a storage medium and a terminal, belonging to the technical field of image processing. The system comprises an occlusion perception network, which comprises an occlusion perception detection unit and a classification unit. The occlusion perception detection unit comprises a first ROI (region of interest) pooling layer, together with an ROI pooling module, an occlusion processing module and a summing layer connected in sequence, and the output end of the first ROI pooling layer is connected with the summing layer. The output end of the occlusion perception detection unit is connected with the classification unit; the occlusion perception detection unit detects the pronuclear global feature map of an occluded pronucleus, and the classification unit judges whether an occluded pronucleus is present in the pronuclear global feature map. The invention can detect whether an occluded pronucleus exists when pronuclei occlude one another, thereby greatly reducing the false detection rate of embryo pronuclei under occlusion.

Description

Embryo pronucleus detection system based on occlusion sensing, storage medium and terminal
Technical Field
The invention relates to the technical field of image processing, and in particular to an embryo pronucleus detection system based on occlusion perception, a storage medium and a terminal.
Background
With the rapid development of modern medicine and continuing research into the mechanisms of embryo development, in vitro fertilization and embryo transfer technology has matured, and the demand for embryo transfer has grown substantially. Doctors must take embryos out of the culture environment every day for observation and record the results; on average hundreds of embryos must be observed daily. This work depends heavily on hospital staff, the whole process is performed manually, and the overall efficiency is low. In addition, the existing follicle monitoring approach uses a volumetric probe, which cannot image from all directions, and pronuclei may occlude one another. Traditional manual observation of embryo characteristics therefore cannot meet the growing demand for embryo transfer. To improve the efficiency of embryo transfer and the utilization of physicians' time, to identify embryo morphological characteristics accurately and efficiently, to provide accurate data for embryo quality evaluation, and to effectively support decisions on embryo transfer, an embryo pronucleus detection method that can automatically and accurately detect the pronuclear stage of an embryo under occlusion is very important.
In the field of embryo detection based on image data analysis, the invention patent with application number CN201610325368.4, entitled "An embryo division detection method based on cell motion information and gray level characteristics", discloses an automatic embryo division detection method. Its main idea is to detect embryo pronuclei using the motion information and gray-level characteristics of embryonic cells. Although the method addresses the poor interference resistance and narrow applicability of traditional change-detection methods such as differencing and the K-T transform, computes motion information inside the embryo from the pixel correspondence between adjacent frames to measure the degree of internal change and thus determine the embryo division period, and uses the gray-level characteristics of pronuclei and cells to overcome interference from illumination and motion, it does not consider the influence of impurities in the culture medium or occlusion between pronuclei. During culture, impurities such as cell secretions readily appear, the probe cannot image from all directions, and pronuclei are easily occluded; under these conditions the existing detection method is not applicable. On this basis, an embryo pronucleus detection method and system based on occlusion perception is necessary.
Disclosure of Invention
The invention aims to overcome the defect that the prior art cannot address the high false detection rate of embryo pronuclei caused by mutual occlusion of pronuclei, and provides an embryo pronucleus detection system based on occlusion perception, a storage medium and a terminal.
An embryo pronucleus detection system based on occlusion perception comprises an occlusion perception network, wherein the occlusion perception network comprises an occlusion perception detection unit and a classification unit. The occlusion perception detection unit comprises a first ROI (region of interest) pooling layer, together with an ROI pooling module, an occlusion processing module and a summing layer connected in sequence, and the output end of the first ROI pooling layer is connected with the summing layer. The output end of the occlusion perception detection unit is connected with the classification unit; the occlusion perception detection unit detects the pronuclear global feature map of an occluded pronucleus, and the classification unit judges whether an occluded pronucleus is present in the pronuclear global feature map.
Specifically, the ROI pooling module comprises a plurality of ROI pooling layers and samples the parts of the embryo subject image to obtain a plurality of first feature maps. The occlusion processing module comprises a plurality of occlusion processing units; each occlusion processing unit comprises a first convolution module, a second softmax layer and a product layer connected in sequence, each first convolution module comprises a plurality of convolution layers, and the input end of the first convolution layer in each first convolution module is connected with the input end of the product layer. The occlusion processing unit predicts the visibility of the pronucleus in each first feature map and performs a dot product between the first feature map and the visibility of the corresponding pronucleus region to obtain a second feature map for each pronucleus. The summing layer comprises a first fully-connected layer and a second fully-connected layer and adds the elements of the second feature maps one by one to obtain the pronuclear global feature map, from which it is further judged whether an occluded pronucleus exists.
In particular, the classification unit comprises a first fully-connected layer and a first softmax layer, the first fully-connected layer output being connected with the first softmax layer.
Specifically, the occlusion perception network further comprises a regression unit; the regression unit comprises a second fully-connected layer and a regression layer connected in sequence and is used to acquire the position information of each pronucleus in the embryo subject image.
Specifically, the occlusion perception network further comprises a feature extraction unit, and the output end of the feature extraction unit is connected with the occlusion perception detection unit. The feature extraction unit comprises a plurality of convolution modules connected in sequence and extracts features from the input embryo subject image to determine the target pronucleus regions in the embryo subject image.
Specifically, the system further comprises a segmentation network; the segmentation network comprises an output layer, a plurality of identity_block modules and a plurality of conv_block modules, and the output ends of the identity_block modules and the conv_block modules are connected with the output layer. Each identity_block module comprises a plurality of identity_block units, and each conv_block module comprises a plurality of conv_block units. The identity_block units and conv_block units successively down-sample and up-sample the input original embryo image, and the output layer finally outputs the optimal embryo subject image MASK.
Specifically, the identity_block unit comprises a first feature extraction layer, a second feature extraction layer and a first full addition layer connected in sequence, where the first feature extraction layer is a first convolution layer or a first depthwise separable convolution layer, and the second feature extraction layer is a second convolution layer or a second depthwise separable convolution layer.
Specifically, the conv_block unit comprises a fifth feature extraction layer together with a third feature extraction layer, a fourth feature extraction layer and a second full addition layer connected in sequence; the input end of the fifth feature extraction layer is connected with the input end of the third feature extraction layer, and the output end of the fifth feature extraction layer is connected with the input end of the second full addition layer. The third feature extraction layer is a third convolution layer, a third depthwise separable convolution layer or a first transposed convolution layer; the fourth feature extraction layer is a fourth convolution layer or a fourth depthwise separable convolution layer; and the fifth feature extraction layer is a fifth convolution layer, a fifth depthwise separable convolution layer or a second transposed convolution layer.
The invention also includes a storage medium having stored thereon a method of operating the occlusion perception based embryo pronucleus detection system, the method comprising: detecting the pronuclear global feature map of an occluded pronucleus with the occlusion perception detection unit; and judging, with the classification unit, whether an occluded pronucleus exists in the pronuclear global feature map.
The invention also comprises a terminal for operating the method of the occlusion perception based embryo pronucleus detection system, the method comprising the following steps: detecting the pronuclear global feature map of an occluded pronucleus with the occlusion perception detection unit; and judging, with the classification unit, whether an occluded pronucleus exists in the pronuclear global feature map.
Compared with the prior art, the invention has the beneficial effects that:
(1) The occlusion perception detection unit detects the global feature map of an occluded pronucleus, and the classification unit judges whether an occluded pronucleus exists in that global feature map; the invention can therefore effectively judge whether an occluded pronucleus is present, greatly reducing the embryo pronucleus false detection rate under occlusion.
(2) The ROI pooling module samples each part of a target pronucleus region in the input embryo subject image, the occlusion processing module predicts the visibility of each part and thereby obtains the local features of each part of each target pronucleus region, and the summing layer combines these local features into a global feature map for each target pronucleus region; this makes it possible to effectively judge whether an occluded pronucleus exists and greatly reduces the embryo pronucleus false detection rate under occlusion.
(3) The feature extraction unit can extract the features of the input embryo subject image so as to determine the target pronucleus regions in the embryo subject image.
(4) The regression unit can acquire the position information of each pronucleus in the embryo subject image and determine the position of each pronucleus in the original embryo image.
(5) The segmentation network can predict the MASK of the embryo subject image and thereby separate the embryo subject image from the background image.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and constitute a part of this application, illustrate embodiments of the invention and, together with the description, serve to explain the invention without limiting it. In the figures:
FIG. 1 is a diagram of the segmentation network framework in embodiment 1 of the present invention;
FIG. 2 is a schematic structural diagram of the identity_block unit and the conv_block unit in embodiment 1 of the present invention;
FIG. 3 is a schematic diagram of an occlusion aware network according to embodiment 1 of the present invention;
FIG. 4 is a schematic diagram of a feature extraction unit in embodiment 1 of the present invention;
FIG. 5 is a schematic diagram of an occlusion perception detection unit according to embodiment 1 of the present invention;
FIG. 6 is a schematic diagram of an occlusion handling module according to embodiment 1 of the present invention;
FIG. 7 is a flowchart of a method in accordance with embodiment 2 of the present invention;
FIG. 8 is a schematic diagram of the annotation process of the original embryo image in embodiment 2 of the present invention;
FIG. 9 is a schematic diagram of extracting the embryo subject image in embodiment 2 of the present invention;
FIG. 10 is a flowchart illustrating the process of determining whether an occluded pronucleus exists in embodiment 2 of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that directions or positional relationships indicated by "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", and the like are directions or positional relationships described based on the drawings, and are only for convenience of description and simplification of description, and do not indicate or imply that the device or element referred to must have a specific orientation, be configured and operated in a specific orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should be noted that, unless otherwise explicitly stated or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
In addition, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Example 1
This embodiment provides an embryo pronucleus detection system based on occlusion perception. The system comprises a segmentation network, based on a symmetric network design and a residual structure, and an occlusion perception network based on R-CNN. The segmentation network segments the embryo subject image before pronucleus recognition, removing interference from impurities in the culture medium and narrowing the detection range of the pronucleus detection model; this not only eliminates interference from outside the embryo but also, to some extent, improves detection speed. The R-CNN-based occlusion perception network can effectively detect the features of an occluded pronucleus when pronuclei occlude one another, greatly reducing the false detection rate under occlusion.
Further, as shown in fig. 1, the segmentation network includes an output layer, a plurality of identity_block modules and a plurality of conv_block modules, where the output ends of the identity_block modules and the conv_block modules are connected to the output layer; each identity_block module comprises a plurality of identity_block units, and each conv_block module comprises a plurality of conv_block units. The identity_block and conv_block units successively down-sample and up-sample the input original embryo image, and the output layer finally outputs the optimal embryo subject image MASK. Specifically, the identity_block unit includes a first feature extraction layer, a second feature extraction layer and a first full addition layer connected in sequence, the first feature extraction layer being a first convolution layer or a first depthwise separable convolution layer and the second feature extraction layer being a second convolution layer or a second depthwise separable convolution layer. The conv_block unit comprises a fifth feature extraction layer together with a third feature extraction layer, a fourth feature extraction layer and a second full addition layer connected in sequence, the input end of the fifth feature extraction layer being connected with the third feature extraction layer and its output end with the second full addition layer; the third feature extraction layer is a third convolution layer, a third depthwise separable convolution layer or a first transposed convolution layer, the fourth feature extraction layer is a fourth convolution layer or a fourth depthwise separable convolution layer, and the fifth feature extraction layer is a fifth convolution layer, a fifth depthwise separable convolution layer or a second transposed convolution layer. The identity_block and conv_block units sample the original embryo image, and the output layer determines the accuracy with which the original embryo image output is judged. More specifically, as shown in fig. 2, the identity_block unit first decides, according to the initial parameter conv_type, whether to apply a general convolution (Conv2D) or a depthwise separable convolution (SeparableConv2D) to the original embryo image. A general convolution performs the spatial and channel convolution at the same time and has many parameters, whereas a depthwise separable convolution splits the operation into two steps, a spatial convolution followed by a channel-wise convolution, consistent with the Inception hypothesis that cross-channel correlations and spatial correlations in convolution layers can be decoupled and mapped separately, to better effect. Two convolutions of the same type are then applied in succession to the input tensor (input_tensor), with a default kernel of 3×3, and the result is finally summed (add) with the input tensor and used as the output of the identity_block module. For the conv_block unit, the conv_type parameter first decides whether to apply a general convolution (Conv2D), a depthwise separable convolution (SeparableConv2D) or a transposed convolution (Conv2DTranspose), and one such convolution, with a default 3×3 kernel, is applied to the input tensor (input_tensor).
If conv_type specifies a depthwise separable convolution, a depthwise separable convolution is applied again; otherwise a general convolution with a 3×3 kernel is used. The shortcut branch applies a single 1×1 convolution with a default stride of 2 to the input tensor (the convolution type again being determined by conv_type), and the two feature maps are finally summed and returned as the result. The output layer uses a Sigmoid activation function, so that each pixel of the output embryo subject Mask lies in [0, 1] and represents the probability that the pixel belongs to the embryo region; from this the optimal embryo subject Mask is determined.
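For illustration only, the following is a minimal sketch of the identity_block and conv_block units described above, written against the Keras API the description appears to assume (Conv2D, SeparableConv2D, Conv2DTranspose). The function signatures, filter counts, padding choices and the assumption that the identity_block input already has `filters` channels are illustrative, not the exact disclosed implementation.

```python
from tensorflow.keras import layers

def identity_block(input_tensor, filters, conv_type="conv2d", kernel_size=3):
    # Two same-type 3x3 convolutions, then a residual add with the unchanged input.
    # Assumes input_tensor already has `filters` channels so the add is valid.
    Conv = layers.SeparableConv2D if conv_type == "separable" else layers.Conv2D
    x = Conv(filters, kernel_size, padding="same", activation="relu")(input_tensor)
    x = Conv(filters, kernel_size, padding="same", activation="relu")(x)
    return layers.add([x, input_tensor])

def conv_block(input_tensor, filters, conv_type="conv2d", kernel_size=3, strides=2):
    # Main branch: one strided (or transposed) convolution followed by a 3x3 convolution.
    # Shortcut branch: a 1x1 convolution with the same default stride of 2.
    if conv_type == "transpose":          # up-sampling variant
        main = layers.Conv2DTranspose(filters, kernel_size, strides=strides,
                                      padding="same", activation="relu")(input_tensor)
        shortcut = layers.Conv2DTranspose(filters, 1, strides=strides,
                                          padding="same")(input_tensor)
    else:                                  # down-sampling variant
        Conv = layers.SeparableConv2D if conv_type == "separable" else layers.Conv2D
        main = Conv(filters, kernel_size, strides=strides, padding="same",
                    activation="relu")(input_tensor)
        shortcut = Conv(filters, 1, strides=strides, padding="same")(input_tensor)
    SecondConv = layers.SeparableConv2D if conv_type == "separable" else layers.Conv2D
    main = SecondConv(filters, kernel_size, padding="same", activation="relu")(main)
    return layers.add([main, shortcut])
```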
Further, as shown in fig. 3, the occlusion perception network includes a feature extraction unit, an occlusion perception detection unit, a classification unit and a regression unit connected in sequence; k in the classification unit and the regression unit of fig. 3 denotes the total number of categories, and since the occlusion perception network of the present invention only distinguishes two categories, background and pronucleus, k = 2. As shown in fig. 4, the feature extraction unit comprises the first five convolution modules of the VGG-16 network and extracts features from the input embryo subject image to determine the target pronucleus regions in the embryo subject image. It should further be noted that in fig. 4 the leftmost layer of each of the conv2, conv3, conv4 and conv5 convolution modules is a max-pooling layer, and the last layer of the conv5 module corresponds to conv5_3, the third layer of the fifth convolution module of the VGG-16 network.
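As a further illustration, a sketch of such a feature extraction unit built from the Keras VGG-16 model is given below; the layer name block5_conv3 is Keras's name for conv5_3, and the use of ImageNet weights is an assumption rather than part of the original disclosure.

```python
import tensorflow as tf

# Pretrained VGG-16 backbone; weights="imagenet" is an assumption (weights=None also works).
vgg = tf.keras.applications.VGG16(include_top=False, weights="imagenet")
# Keras names the conv5_3 layer "block5_conv3"; cut the network at that layer.
feature_extractor = tf.keras.Model(inputs=vgg.input,
                                   outputs=vgg.get_layer("block5_conv3").output)
```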
Furthermore, as shown in fig. 5, the occlusion perception detection unit includes a first ROI pooling layer, together with an occlusion-aware ROI pooling module, an occlusion processing module and a summing layer connected in sequence; the output end of the first ROI pooling layer is connected to the summing layer, which combines the candidate-region feature map output by the first ROI pooling layer with the per-part pronucleus features and occlusion information output by the occlusion processing module to obtain the pronuclear global feature map. The ROI pooling module comprises a plurality of ROI pooling layers; the occlusion processing module comprises a plurality of occlusion processing units, each consisting of a first convolution module, a second softmax layer and a product layer connected in sequence, with the first convolution module comprising a plurality of convolution layers; and the summing layer comprises a first fully-connected layer and a second fully-connected layer. To address the false detection and missed detection caused by occlusion between pronuclei, the ROI pooling stage receives the embryo subject image with target pronucleus regions marked by anchor boxes, and a part-wise occlusion-aware ROI pooling module replaces the ROI pooling module of the original Faster R-CNN. The ROI pooling module of Faster R-CNN, inherited from Fast R-CNN, divides each candidate region uniformly into M×N blocks with an M×N grid and applies max pooling to each block, so that candidate regions of different sizes are unified into feature vectors of the same dimension; in this way the structural information at different positions of the pronucleus is aggregated and passed to the corresponding occlusion processing module, which then estimates the occlusion state of the pronucleus. More specifically, as shown in fig. 6, each occlusion processing unit comprises three convolution layers, a softmax layer and a product layer connected in sequence, and the input end of the first convolution layer is connected to the input end of the product layer. Finally, the features of the four pronucleus parts are added element by element and passed to the classification and window-regression heads of the Faster R-CNN module: the summed features are classified through a fully-connected layer and a softmax layer to judge whether an occluded pronucleus exists, and regressed through a fully-connected layer and a regression layer to obtain the position information of the pronucleus in the embryo subject image.
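The following simplified sketch illustrates one way such an occlusion processing unit and occlusion-aware head could be wired up. The convolution widths, the 7×7 part-feature size, the four-part split and the use of a single sigmoid visibility score (where the description names a softmax layer) are assumptions made for the sketch, not the exact disclosed implementation.

```python
from tensorflow.keras import layers

def occlusion_unit(part_feature):
    """part_feature: (batch, 7, 7, 512) tensor pooled from one pronucleus part."""
    x = layers.Conv2D(256, 3, padding="same", activation="relu")(part_feature)
    x = layers.Conv2D(128, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(1, 7)(x)                    # collapse 7x7 map to a single logit
    visibility = layers.Activation("sigmoid")(x)  # occluded part -> score near 0
    return part_feature * visibility              # broadcast element-wise product

def occlusion_aware_head(global_roi_feature, part_features, num_classes=2):
    # Scale each part feature by its predicted visibility, then sum the parts
    # element-wise with the global ROI feature to form the pronuclear global feature map.
    weighted_parts = [occlusion_unit(p) for p in part_features]
    fused = layers.add(weighted_parts + [global_roi_feature])
    x = layers.Flatten()(fused)
    x = layers.Dense(4096, activation="relu")(x)
    cls = layers.Dense(num_classes, activation="softmax", name="cls")(x)  # pronucleus vs background
    box = layers.Dense(4 * num_classes, name="bbox")(x)                   # window regression
    return cls, box
```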
The segmentation network of the invention separates the embryo subject image from the background image, removing interference from impurities and simplifying the subsequent processing, while the occlusion perception network can detect occluded pronuclei, greatly reducing the embryo pronucleus false detection rate under occlusion.
This embodiment, a further refinement of embodiment 1, provides a storage medium on which computer instructions are stored; when the instructions are executed, they perform the steps of the method of the occlusion perception based embryo pronucleus detection system of embodiment 1.
Based on this understanding, the technical solution of this embodiment, or parts of it, may essentially be embodied as a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server or a network device) to execute all or part of the steps of the method of the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
This embodiment also provides a terminal, which includes a memory and a processor; the memory stores computer instructions executable on the processor, and when the processor executes the instructions it performs the steps of the method of the occlusion perception based embryo pronucleus detection system of embodiment 1. The processor may be a single-core or multi-core central processing unit, an application-specific integrated circuit, or one or more integrated circuits configured to implement the present invention.
Each functional unit in the embodiments provided by the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
Example 2
Taking into account that pronuclei are easily occluded, the invention uses the detection method of the embryo pronucleus detection system based on occlusion-aware R-CNN to detect embryo pronuclei under occlusion more effectively and accurately. The specific idea is as follows. The invention analyses the characteristics of embryo pronuclei under occlusion and proposes an embryo pronucleus detection method based on embryo subject segmentation and occlusion-aware R-CNN. First, the embryo image is preprocessed and the embryo subject is segmented from the image, eliminating interference from impurities outside the embryo and simplifying the subsequent pronucleus detection. Embryo pronuclei are then detected with the occlusion-aware R-CNN, a two-stage detection framework based on Faster R-CNN. In the first stage, the RPN generates target candidate boxes, and an aggregation loss function is introduced so that the anchors matched to a real target during training are drawn as close to it as possible. In the second stage, the Faster R-CNN head further classifies and regresses the target candidate boxes: the pronucleus target is divided into 4 parts according to a prior on pronucleus morphology, the features of the 4 parts are extracted separately so that complementary local features are obtained, these are combined with the global features of the pronucleus target by weighted summation, and further classification and regression are then performed, reducing the influence of mutual occlusion between pronuclei on pronucleus detection.
As shown in fig. 7, the embryo pronucleus detection method based on occlusion perception, applied to the system of embodiment 1, specifically includes the following steps:
S01: establish and train the segmentation network and the occlusion perception network; that is, construct the segmentation network and the occlusion perception network and then train them.
Furthermore, the segmentation network, based on a symmetric network design and a residual structure, comprises identity_block modules, conv_block modules and an output layer; the identity_block units of the identity_block modules and the conv_block units of the conv_block modules sample the original embryo image, and the output layer determines the accuracy with which the original embryo image output is judged. The occlusion perception network is based on R-CNN, with the VGG-16 model as its backbone, and comprises an occlusion-aware ROI pooling module, an occlusion processing module and a summing layer for detecting the pronuclear features of occluded parts.
Further, the training of the segmentation network specifically comprises the following steps:
S011: train the segmentation network with label images. Specifically, training the segmentation network comprises a label-image generation step and a step of dividing the data into a training set and a validation set. In the label-image generation step the original embryo images are annotated: the embryo and the background are delineated with the image annotation software labelme, which automatically generates JSON files, and the JSON files are then converted into label images in PNG format. The annotation process is illustrated in fig. 8, which shows, from left to right, the original embryo image, the annotated image and the label image. In the data-division step, the original label images are augmented by rotation, blurring, translation and similar operations to enlarge the training and validation data, and the data set is then divided into a training set (80%) and a validation set (20%).
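A hedged sketch of this data-preparation step is shown below; the directory layout, augmentation parameters and the 80/20 split threshold are illustrative assumptions.

```python
import glob
import random
import cv2
import numpy as np

def augment(image, mask, max_angle=15, max_shift=20):
    """Apply a random rotation, translation and blur to an image/mask pair."""
    h, w = image.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2, h / 2), random.uniform(-max_angle, max_angle), 1.0)
    m[:, 2] += np.random.uniform(-max_shift, max_shift, size=2)   # random translation
    image = cv2.warpAffine(image, m, (w, h))
    mask = cv2.warpAffine(mask, m, (w, h), flags=cv2.INTER_NEAREST)
    image = cv2.GaussianBlur(image, (5, 5), sigmaX=random.uniform(0.1, 1.5))
    return image, mask

# Assumed directory layout; masks are assumed to mirror the image file names.
image_files = sorted(glob.glob("data/images/*.png"))
random.shuffle(image_files)
split = int(0.8 * len(image_files))
train_files, val_files = image_files[:split], image_files[split:]   # 80% / 20%
```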
S012: define the model functions. Specifically, this includes defining the model-saving function, the dynamic learning-rate adjustment function and the optimizer configuration function, and configuring the model training function. More specifically, defining the model-saving function ModelCheckpoint includes defining the storage paths of the model and of the outputs produced during training, specifying the quantity monitored during training (val_loss), and defining parameters such as whether only the model weights are saved. Defining the dynamic learning-rate adjustment function ReduceLROnPlateau includes specifying the monitored metric val_loss and defining parameters such as the learning-rate reduction factor and the lower bound on the learning rate, so that the learning rate is reduced when the monitored metric stops improving. Defining the optimizer configuration function means configuring the compiler: selecting the optimizer, the loss function and the performance metrics used during training and testing. Configuring the model training function fit_generator means setting the amount of data fed in per training and testing step, the total number of training epochs and the callback functions.
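By way of example, the configuration described in S012 might look like the following Keras sketch; the file paths, optimizer, loss, epoch count and the existence of model, train_generator, val_generator and steps are assumptions, and recent Keras versions fold fit_generator into fit.

```python
from tensorflow.keras.callbacks import ModelCheckpoint, ReduceLROnPlateau

# Save weights whenever the monitored validation loss improves.
checkpoint = ModelCheckpoint("weights/embryo_seg_{epoch:02d}.h5",
                             monitor="val_loss", save_best_only=True,
                             save_weights_only=True)
# Halve the learning rate when val_loss stops improving, down to a lower bound.
lr_schedule = ReduceLROnPlateau(monitor="val_loss", factor=0.5,
                                patience=3, min_lr=1e-6)

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_generator, validation_data=val_generator,
          steps_per_epoch=steps, epochs=100,
          callbacks=[checkpoint, lr_schedule])
```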
Further, training the occlusion perception network specifically comprises calculating the loss function value, updating the Faster R-CNN neural network according to the loss value, and training the updated network again until it satisfies a preset convergence condition, at which point the network model has been trained successfully. More specifically, calculating the loss value involves matching the pronucleus annotation boxes with the anchor boxes in the training samples: anchor boxes matched to an annotation box are positive samples, and unmatched anchor boxes are negative samples. The classification-prediction error is computed for all negative samples, the negative samples are sorted by error in descending order, and the batch of negative samples with the largest errors is kept as the negative samples of the training data set while the rest are discarded, so that the ratio of positive to negative samples is 1:3. This keeps the numbers of positive and negative samples relatively balanced, which helps the network train smoothly. Finally, the loss function value is computed from the positive samples and the selected negative samples.
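A minimal sketch of this 1:3 hard-negative selection is given below; the array shapes and the use of a per-anchor classification loss are assumptions.

```python
import numpy as np

def select_training_anchors(cls_loss, is_positive, neg_pos_ratio=3):
    """cls_loss: per-anchor classification loss; is_positive: boolean match mask."""
    pos_idx = np.where(is_positive)[0]
    neg_idx = np.where(~is_positive)[0]
    # Keep at most 3 negatives per positive (at least one positive assumed).
    n_neg = min(len(neg_idx), neg_pos_ratio * max(len(pos_idx), 1))
    # Sort negatives by descending loss and keep only the hardest ones.
    hardest_neg = neg_idx[np.argsort(-cls_loss[neg_idx])][:n_neg]
    return pos_idx, hardest_neg
```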
S02: extract the embryo subject image, separating it from the background image; this specifically comprises the following steps:
S021: generate the embryo subject image MASK with the trained segmentation network. Specifically, the model input size is first defined by resampling the image to 320×240 pixels, and a standard convolution is applied to the input picture to expand the number of channels to 32. The down-sampling stage then begins: down-sampling is performed 4 times, the embryo-body feature map from each stage is stored in an intermediate variable for the add operation during up-sampling, each stage is computed with a conv_block unit and an identity_block unit, and the channel counts after the 4 down-sampling stages are 64, 128, 256 and 512 respectively. An up-sampling process is then performed 4 times: each time, conv_block up-sampling is applied first (conv_type = transpose), the shallow feature map is combined with the current result (add) with a 1×1 convolution used to adjust the number of channels, and the identity_block operation is then applied. Through the up-sampling operations the embryo-body feature map is restored to the input size; finally a 3×3 convolution compresses the channels to 1, a Sigmoid function is used to judge the accuracy of the embryo subject image MASK, and the optimal embryo subject image MASK is output.
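For illustration, the forward pass of S021 could be assembled as follows, reusing the identity_block and conv_block helpers sketched earlier. The exact ordering of the skip-connection add and the 1×1 channel-adjusting convolution, and the decoder channel widths, are assumptions.

```python
from tensorflow.keras import layers, Model

def build_segmentation_net(input_shape=(240, 320, 3)):
    # Requires the identity_block / conv_block sketches defined above.
    inputs = layers.Input(input_shape)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)  # expand to 32 channels

    skips = []
    for filters in (64, 128, 256, 512):                      # 4 down-sampling stages
        skips.append(x)                                      # keep shallow features for the decoder
        x = conv_block(x, filters, conv_type="separable")    # stride-2 down-sampling
        x = identity_block(x, filters, conv_type="separable")

    for filters, skip in zip((256, 128, 64, 32), reversed(skips)):   # 4 up-sampling stages
        x = conv_block(x, filters, conv_type="transpose")            # transposed-conv up-sampling
        skip = layers.Conv2D(filters, 1, padding="same")(skip)       # 1x1 conv to match channels
        x = layers.add([x, skip])
        x = identity_block(x, filters)

    mask = layers.Conv2D(1, 3, padding="same", activation="sigmoid")(x)  # embryo subject MASK
    return Model(inputs, mask)
```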
S022: obtain the embryo subject image from the embryo subject image MASK. Specifically, the original embryo image is cropped with the minimum bounding rectangle determined from the embryo subject image MASK to obtain the embryo subject image. Separating the embryo subject image from the background image in this way removes interference from impurities in the culture medium, narrows the detection range of the pronucleus detection model, eliminates interference from outside the embryo and further improves detection speed. Fig. 9 shows, from left to right, the original embryo image, the label image and the subject-segmented embryo image.
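A short OpenCV sketch of this cropping step follows; thresholding the MASK at 0.5 and taking the largest contour as the embryo body are assumptions.

```python
import cv2
import numpy as np

def crop_embryo_body(original_image, mask_prob, threshold=0.5):
    """Crop the original image with the minimum bounding rectangle of the predicted MASK."""
    mask = (mask_prob > threshold).astype(np.uint8)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return original_image                      # no embryo found; fall back to the full frame
    body = max(contours, key=cv2.contourArea)      # largest connected region = embryo body
    x, y, w, h = cv2.boundingRect(body)            # minimum axis-aligned rectangle
    return original_image[y:y + h, x:x + w]
```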
S03: judge whether an occluded pronucleus exists, based on the embryo subject image; as shown in FIG. 10, this specifically comprises the following steps:
S031: determine the target pronucleus regions in the input embryo subject image. The feature extraction unit of the occlusion perception network extracts features from the input embryo subject image, and anchor boxes of several different areas are laid over the feature map; the anchor boxes are associated with a high-level convolutional layer of the feature extraction unit in the occlusion perception network model so as to capture more semantic and global information. The region proposal network lays anchor boxes of 5 areas at each position of the input feature map, with areas of 8×8, 16×16, 32×32, 64×64 and 128×128, and the width-to-height ratio of every anchor box is 1 (the approximate proportion of a pronucleus). The target pronucleus regions in the embryo subject image are determined in this way.
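The anchor layout of S031 can be sketched as follows; the feature stride of 16 (for a VGG-16 conv5 feature map) is an assumption.

```python
import numpy as np

def generate_anchors(feat_h, feat_w, stride=16, sizes=(8, 16, 32, 64, 128)):
    """Square anchors (aspect ratio 1) of five sizes centred on every feature-map cell."""
    ys, xs = np.meshgrid(np.arange(feat_h), np.arange(feat_w), indexing="ij")
    centers = np.stack([(xs + 0.5) * stride, (ys + 0.5) * stride], axis=-1).reshape(-1, 2)
    anchors = []
    for s in sizes:
        half = s / 2.0
        boxes = np.hstack([centers - half, centers + half])   # (x1, y1, x2, y2)
        anchors.append(boxes)
    return np.concatenate(anchors, axis=0)
```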
S032: estimate the occlusion state of the pronuclei to judge whether an occluded pronucleus exists. Specifically, to address the false detection and missed detection caused by occlusion between pronuclei, a part-wise occlusion-aware ROI pooling module replaces the ROI pooling module of the original Faster R-CNN: the structural information at different positions of the pronucleus is aggregated and passed to the corresponding Faster R-CNN module, and the occlusion state is estimated by a small neural network. The specific process comprises the following steps:
S0321: divide each target pronucleus region of the input embryo subject image into several parts and sample each part with the part-wise occlusion-aware ROI pooling module, i.e. unify the feature dimensions of each part of the target pronucleus region by max pooling to obtain several small, fixed-size first feature maps (both width and height equal to 7).
S0322: predict the pronucleus visibility in each first feature map, and take the dot product of the first feature map and the corresponding visibility to obtain the second feature map, in which the pronucleus feature dimension is 512×7×7. Specifically, the occlusion processing module consists of three convolution layers followed by a softmax layer, whose output feeds an element-wise (Eltwise) multiplication layer, and the occlusion processing module is trained with a log loss. Let o_{i,j} denote the predicted visibility score of the j-th part of the i-th candidate window in the embryo subject image, and o*_{i,j} the corresponding ground-truth visibility score determined from the anchor-box annotation: if the intersection-over-union between the j-th part of the i-th candidate window and the corresponding calibration window in the embryo subject image is greater than or equal to 0.5, then o*_{i,j} = 1, otherwise o*_{i,j} = 0. The loss function of the occlusion processing module is thus defined as

L_occ({o_{i,j}}, {o*_{i,j}}) = - Σ_{i,j} [ o*_{i,j} log(o_{i,j}) + (1 - o*_{i,j}) log(1 - o_{i,j}) ],

where i is the index of the anchor box and j the index of the pronucleus part.
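A small NumPy sketch of this visibility loss, under the reconstruction above, is:

```python
import numpy as np

def occlusion_loss(o_pred, o_true, eps=1e-7):
    """Binary cross-entropy between predicted and ground-truth part visibility.

    o_pred, o_true: arrays of shape (num_proposals, num_parts), values in [0, 1].
    """
    o_pred = np.clip(o_pred, eps, 1.0 - eps)
    per_part = -(o_true * np.log(o_pred) + (1.0 - o_true) * np.log(1.0 - o_pred))
    return per_part.sum()   # summed over proposals and parts, as in L_occ
```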
S0323: add the elements of the second feature maps one by one and classify the result to judge whether an occluded pronucleus is present, so that occluded pronuclei are detected effectively and the embryo pronucleus false detection rate under occlusion is greatly reduced.
Further, in step S0323 the elements of the second feature maps are added one by one to obtain the pronuclear global feature map, and regression is performed on the pronuclear global feature map to obtain the position information of the pronucleus in the embryo subject image.
The above detailed description explains the invention in detail, but the invention is not limited to this description; it will be apparent to those skilled in the art that various modifications and substitutions can be made without departing from the spirit of the invention.

Claims (10)

1. An embryo pronucleus detection system based on occlusion perception, characterized in that: the system comprises an occlusion perception network, wherein the occlusion perception network comprises an occlusion perception detection unit and a classification unit; the occlusion perception detection unit comprises a first ROI (region of interest) pooling layer, together with an ROI pooling module, an occlusion processing module and a summing layer connected in sequence, wherein the output end of the first ROI pooling layer is connected with the summing layer;
the output end of the occlusion perception detection unit is connected with the classification unit, the occlusion perception detection unit is used for detecting the pronuclear global feature map of an occluded pronucleus, and the classification unit is used for judging whether an occluded pronucleus is present in the pronuclear global feature map.
2. The occlusion perception-based embryo pronucleus detection system according to claim 1, characterized in that: the ROI pooling module comprises a plurality of ROI pooling layers and is used for sampling the parts of the embryo subject image to obtain a plurality of first feature maps;
the occlusion processing module comprises a plurality of occlusion processing units, each occlusion processing unit comprises a first convolution module, a second softmax layer and a product layer connected in sequence, each first convolution module comprises a plurality of convolution layers, and the input end of the first convolution layer in each first convolution module is connected with the input end of the product layer; the occlusion processing unit is used for predicting the visibility of the pronucleus in the first feature map and performing a dot product between the first feature map and the visibility of the corresponding pronucleus region to obtain a second feature map for each pronucleus;
the summing layer comprises a first fully-connected layer and a second fully-connected layer and is used for adding the elements of the second feature maps one by one to obtain the pronuclear global feature map, from which it is further judged whether an occluded pronucleus exists.
3. The occlusion perception-based embryo pronucleus detection system according to claim 2, characterized in that: the classification unit comprises a first fully-connected layer and a first softmax layer, and the output of the first fully-connected layer is connected with the first softmax layer.
4. The occlusion perception-based embryo pronucleus detection system according to claim 2, characterized in that: the occlusion perception network further comprises a regression unit, wherein the regression unit comprises a second fully-connected layer and a regression layer connected in sequence and is used for acquiring the position information of each pronucleus in the embryo subject image.
5. The occlusion perception-based embryo pronucleus detection system according to claim 1, characterized in that: the occlusion perception network further comprises a feature extraction unit, and the output end of the feature extraction unit is connected with the occlusion perception detection unit;
the feature extraction unit comprises a plurality of convolution modules connected in sequence and is used for extracting the features of the input embryo subject image to determine the target pronucleus regions in the embryo subject image.
6. The occlusion perception-based embryo pronucleus detection system according to claim 1, characterized in that: the system further comprises a segmentation network, wherein the segmentation network comprises an output layer, a plurality of identity_block modules and a plurality of conv_block modules, and the output ends of the identity_block modules and the conv_block modules are connected with the output layer;
the identity_block module comprises a plurality of identity_block units, and the conv_block module comprises a plurality of conv_block units; the identity_block units and conv_block units are used for successively down-sampling and up-sampling the input original embryo image, and the output layer finally outputs the optimal embryo subject image MASK.
7. The occlusion perception-based embryo pronucleus detection system according to claim 6, characterized in that: the identity_block unit comprises a first feature extraction layer, a second feature extraction layer and a first full addition layer connected in sequence, wherein the first feature extraction layer is a first convolution layer or a first depthwise separable convolution layer, and the second feature extraction layer is a second convolution layer or a second depthwise separable convolution layer.
8. The occlusion perception-based embryo pronucleus detection system according to claim 6, characterized in that: the conv_block unit comprises a fifth feature extraction layer together with a third feature extraction layer, a fourth feature extraction layer and a second full addition layer connected in sequence, wherein the input end of the fifth feature extraction layer is connected with the input end of the third feature extraction layer, and the output end of the fifth feature extraction layer is connected with the input end of the second full addition layer;
the third feature extraction layer is a third convolution layer, a third depthwise separable convolution layer or a first transposed convolution layer, and the fourth feature extraction layer is a fourth convolution layer or a fourth depthwise separable convolution layer; the fifth feature extraction layer is a fifth convolution layer, a fifth depthwise separable convolution layer or a second transposed convolution layer.
9. A storage medium, characterized in that: the storage medium has stored thereon a method of operating the occlusion perception based embryo pronucleus detection system of any one of claims 1-7, the method comprising:
detecting the pronuclear global feature map of an occluded pronucleus with the occlusion perception detection unit;
and judging, with the classification unit, whether an occluded pronucleus exists in the pronuclear global feature map.
10. A terminal, characterized in that: the terminal is used for operating the method of the occlusion perception based embryo pronucleus detection system of any one of claims 1-7, the method comprising the following steps:
detecting the pronuclear global feature map of an occluded pronucleus with the occlusion perception detection unit;
and judging, with the classification unit, whether an occluded pronucleus exists in the pronuclear global feature map.
CN202010007935.8A 2020-01-06 2020-01-06 Embryo prokaryotic detection system based on occlusion sensing, storage medium and terminal Pending CN110796127A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010007935.8A CN110796127A (en) 2020-01-06 2020-01-06 Embryo prokaryotic detection system based on occlusion sensing, storage medium and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010007935.8A CN110796127A (en) 2020-01-06 2020-01-06 Embryo prokaryotic detection system based on occlusion sensing, storage medium and terminal

Publications (1)

Publication Number Publication Date
CN110796127A true CN110796127A (en) 2020-02-14

Family

ID=69448534

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010007935.8A Pending CN110796127A (en) 2020-01-06 2020-01-06 Embryo prokaryotic detection system based on occlusion sensing, storage medium and terminal

Country Status (1)

Country Link
CN (1) CN110796127A (en)


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106022250A (en) * 2016-05-17 2016-10-12 华中科技大学 Embryo splitting detection method based on cell movement information and gray property
CN106485268A (en) * 2016-09-27 2017-03-08 东软集团股份有限公司 A kind of image-recognizing method and device
CN107423760A (en) * 2017-07-21 2017-12-01 西安电子科技大学 Based on pre-segmentation and the deep learning object detection method returned
CN107909027A (en) * 2017-11-14 2018-04-13 电子科技大学 It is a kind of that there is the quick human body target detection method for blocking processing
CN108898047A (en) * 2018-04-27 2018-11-27 中国科学院自动化研究所 The pedestrian detection method and system of perception are blocked based on piecemeal
CN109409182A (en) * 2018-07-17 2019-03-01 宁波华仪宁创智能科技有限公司 Embryo's automatic identifying method based on image procossing
CN109635629A (en) * 2018-10-23 2019-04-16 南京行者易智能交通科技有限公司 A kind of bus platform crowd density detection method and device based on deep learning
CN110032985A (en) * 2019-04-22 2019-07-19 清华大学深圳研究生院 A kind of automatic detection recognition method of haemocyte

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111814741A (en) * 2020-07-28 2020-10-23 四川通信科研规划设计有限责任公司 Method for detecting embryo-sheltered pronucleus and blastomere based on attention mechanism
CN111814741B (en) * 2020-07-28 2022-04-08 四川通信科研规划设计有限责任公司 Method for detecting embryo-sheltered pronucleus and blastomere based on attention mechanism

Similar Documents

Publication Publication Date Title
CN111814741B (en) Method for detecting embryo-sheltered pronucleus and blastomere based on attention mechanism
CN111798462B (en) Automatic delineation method of nasopharyngeal carcinoma radiotherapy target area based on CT image
CN111798416B (en) Intelligent glomerulus detection method and system based on pathological image and deep learning
US11562491B2 (en) Automatic pancreas CT segmentation method based on a saliency-aware densely connected dilated convolutional neural network
CN111784671A (en) Pathological image focus region detection method based on multi-scale deep learning
CN111931751B (en) Deep learning training method, target object identification method, system and storage medium
CN113780296A (en) Remote sensing image semantic segmentation method and system based on multi-scale information fusion
CN110853005A (en) Immunohistochemical membrane staining section diagnosis method and device
CN112819821B (en) Cell nucleus image detection method
EP4290451A1 (en) Deep neural network-based method for detecting living cell morphology, and related product
CN110751636A (en) Fundus image retinal arteriosclerosis detection method based on improved coding and decoding network
CN112215217B (en) Digital image recognition method and device for simulating doctor to read film
CN115546605A (en) Training method and device based on image labeling and segmentation model
CN111476307A (en) Lithium battery surface defect detection method based on depth field adaptation
CN112102259A (en) Image segmentation algorithm based on boundary guide depth learning
CN110838094A (en) Pathological section staining style conversion method and electronic equipment
CN115546187A (en) Agricultural pest and disease detection method and device based on YOLO v5
CN113536896B (en) Insulator defect detection method and device based on improved Faster RCNN and storage medium
CN115359264A (en) Intensive distribution adhesion cell deep learning identification method
CN115205520A (en) Gastroscope image intelligent target detection method and system, electronic equipment and storage medium
CN114140437A (en) Fundus hard exudate segmentation method based on deep learning
CN110796127A (en) Embryo prokaryotic detection system based on occlusion sensing, storage medium and terminal
CN113160261B (en) Boundary enhancement convolution neural network for OCT image corneal layer segmentation
CN112634226B (en) Head CT image detection device, method, electronic device and storage medium
CN115131628A (en) Mammary gland image classification method and equipment based on typing auxiliary information

Legal Events

Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 2020-02-14