CN110570350A - two-dimensional follicle detection method and device, ultrasonic equipment and readable storage medium - Google Patents
- Publication number: CN110570350A
- Application number: CN201910860200.7A
- Authority
- CN
- China
- Prior art keywords
- follicle
- image
- feature map
- segmentation
- map
- Prior art date
- Legal status: Pending (an assumption, not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G06N3/045 — Computing arrangements based on biological models; neural networks; combinations of networks
- G06T3/02 — Geometric image transformations in the plane of the image; affine transformations
- G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T7/0012 — Image analysis; biomedical image inspection
- G06T7/11 — Segmentation; region-based segmentation
- G06T7/13 — Segmentation; edge detection
- G06T2207/10132 — Image acquisition modality: ultrasound image
- G06T2207/20081 — Special algorithmic details: training; learning
- G06T2207/20132 — Image segmentation details: image cropping
- G06T2207/20221 — Image combination: image fusion; image merging
Abstract
The application provides a two-dimensional follicle detection method, comprising: cropping and masking all images in a follicle ultrasound image set to obtain an effective image set, wherein the images in the effective image set contain only the ultrasound image information area; adding a label to each image in the effective image set to obtain an image training set, wherein the label is a binary mask map generated from the follicle contour; constructing a U-Net network model with residual calculation units and attention calculation units to obtain an improved U-Net network model, wherein the residual calculation units replace all convolution calculation units; training the improved U-Net network model with the image training set to obtain a follicle segmentation model; and inputting a follicle ultrasound image to be detected into the follicle segmentation model to obtain a follicle segmentation map. The method improves measurement accuracy while increasing working efficiency. The application also provides a two-dimensional follicle detection device, an ultrasound device and a computer-readable storage medium, all of which have the above beneficial effects.
Description
Technical Field
The present application relates to the field of follicle detection technology, and in particular to a two-dimensional follicle detection method, a two-dimensional follicle detection device, an ultrasound device, and a computer-readable storage medium.
Background
Well-developed follicles and an accurately determined ovulation date lay the foundation for prenatal and postnatal care. In the diagnosis of infertility, the follicles often need to be monitored to check whether follicle growth, development, discharge and so on are normal. In departments such as reproductive medicine and gynecology, doctors need to measure the sizes of several follicles at the same time. In the existing detection scheme, two-dimensional scanning is usually used to obtain the maximum section of each follicle in one pass, the follicles to be measured are selected manually, and the size of each follicle is measured in turn by the ellipse method. Selecting and measuring follicles manually in this way is a tedious and highly repetitive process: not only is the working efficiency low, but manual diagnosis and manual transcription into the report readily introduce errors.
Therefore, how to solve the above technical problem is an issue that those skilled in the art need to address.
Disclosure of Invention
The application aims to provide a two-dimensional follicle detection method, a two-dimensional follicle detection device, an ultrasound device and a computer-readable storage medium, which can improve measurement accuracy while increasing working efficiency. The specific scheme is as follows:
The application provides a two-dimensional follicle detection method, comprising:
cropping and masking all images in a follicle ultrasound image set to obtain an effective image set, wherein the images in the effective image set contain only the ultrasound image information area;
generating labels from the effective image set to obtain an image training set, wherein each label is a binary mask map generated from a follicle contour;
constructing a U-Net network model with residual calculation units and attention calculation units to obtain an improved U-Net network model, wherein the residual calculation units replace all convolution calculation units in the U-Net network model;
training the improved U-Net network model with the image training set to obtain a follicle segmentation model;
and inputting a follicle ultrasound image to be detected into the follicle segmentation model to obtain a follicle segmentation map.
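The five steps above can be sketched as a toy pipeline. All function names here are hypothetical, and a plain intensity threshold stands in for the improved U-Net, since the point is only to illustrate the claimed data flow:

```python
import numpy as np

# Hypothetical stand-ins for the five claimed steps; a global intensity
# threshold replaces the improved U-Net purely to show the data flow.
def preprocess(frame, sector_mask):           # step 1: crop + mask
    return frame * sector_mask

def make_label(effective_img):                # step 2: binary mask label
    return (effective_img > 0.5).astype(np.uint8)

def train_model(train_set):                   # steps 3-4: model training (toy)
    thr = np.mean([img.mean() for img, _ in train_set])
    return lambda img: (img > thr).astype(np.uint8)

def segment(frame, sector_mask, model):       # step 5: inference
    return model(preprocess(frame, sector_mask))

rng = np.random.default_rng(0)
sector = np.ones((8, 8))
train = []
for _ in range(4):
    eff = preprocess(rng.random((8, 8)), sector)
    train.append((eff, make_label(eff)))
model = train_model(train)
seg = segment(rng.random((8, 8)), sector, model)   # binary follicle map
```

In the real method each stage is of course far more involved; the sketch only fixes the interfaces between them.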
Optionally, generating the labels from the effective image set to obtain the image training set comprises:
generating the labels from the effective image set to obtain a plurality of training images;
applying affine transformation and elastic transformation to the plurality of training images to obtain a plurality of extended training images;
and taking the images in the effective image set and all the extended training images as the image training set.
Optionally, inputting the follicle ultrasound image to be detected into the follicle segmentation model to obtain the follicle segmentation map comprises:
before performing residual calculation on a feature map with a residual calculation unit, zero-padding the edge pixels of the feature map so that the feature map has the same size before and after the residual calculation;
inputting the shallow feature map generated by the encoding path and the corresponding-resolution feature map generated by the decoding path into the attention calculation unit, and outputting an attention feature map;
fusing the resolution feature map and the corresponding attention feature map in the decoding path to obtain a decoded feature map;
and performing residual calculation on the decoded feature map with the residual calculation unit to obtain a new resolution feature map, repeating the zero-padding step until the operations of all layers of the decoding path are completed, and outputting the follicle segmentation map.
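A minimal single-channel sketch of the zero-padding and residual step (a toy version, not the patent's exact unit, whose flow is in Fig. 4): padding the edges with zeros makes a 3×3 "same" convolution size-preserving, and the identity shortcut is added back before the final activation.

```python
import numpy as np

def conv3x3(x, w):
    # "same" convolution: zero-pad the edge pixels so output size equals input size
    xp = np.pad(x, 1)
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + 3, j:j + 3] * w)
    return out

def residual_unit(x, w1, w2):
    # two 3x3 convolutions plus the identity shortcut, with ReLU activations
    h = np.maximum(conv3x3(x, w1), 0)
    h = conv3x3(h, w2)
    return np.maximum(h + x, 0)  # the shortcut eases gradient flow in deep networks

x = np.random.rand(8, 8)
w = np.full((3, 3), 1.0 / 9.0)
y = residual_unit(x, w, w)       # same spatial size as the input
```

With all-zero weights the unit reduces to the identity on non-negative inputs, which is exactly why residual units let the network grow deeper without degrading.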
Optionally, inputting the shallow feature map generated by the encoding path and the corresponding-resolution feature map generated by the decoding path into the attention calculation unit and outputting the attention feature map comprises:
fusing the shallow feature map generated by the encoding path and the corresponding-resolution feature map generated by the decoding path to obtain a fused image;
obtaining a specified coefficient from the fused image by means of a ReLU function, a convolution operation and normalization;
and obtaining the attention feature map by applying the specified coefficient to the corresponding-resolution feature map generated by the decoding path.
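The three steps can be sketched on single-channel maps. Scalar weights stand in for the 1×1 convolutions and a sigmoid stands in for the normalization that squeezes the coefficient into (0, 1); these are assumptions, since the patent does not fix the exact operations (Fig. 5 shows its flow):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(shallow, dec, w_fuse=(0.5, 0.5), w_coef=1.0):
    # 1) fuse the shallow (encoder) map with the same-resolution decoder map
    fused = w_fuse[0] * shallow + w_fuse[1] * dec   # stand-in for a 1x1 convolution
    # 2) ReLU, scalar "convolution", sigmoid normalization -> coefficient in (0, 1)
    alpha = sigmoid(w_coef * np.maximum(fused, 0))
    # 3) weight the decoder feature map by the coefficient
    return alpha * dec

shallow = np.random.rand(4, 4)
dec = np.random.rand(4, 4)
att = attention_gate(shallow, dec)
```

Because the coefficient lies in (0, 1), the gate can only down-weight: regions the fused evidence does not support contribute less to the decoded feature map.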
Optionally, training the improved U-Net network model with the image training set to obtain the follicle segmentation model comprises:
training the improved U-Net network model with the image training set to obtain an initial follicle segmentation model;
evaluating the initial follicle segmentation model with the Dice coefficient;
and when the Dice coefficient is greater than a preset threshold, taking the initial follicle segmentation model as the follicle segmentation model.
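The Dice coefficient compares a predicted mask A with its label B as 2|A∩B|/(|A|+|B|), reaching 1 for a perfect match. A minimal implementation of the evaluation step:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    # Dice = 2|A ∩ B| / (|A| + |B|) on binary masks; eps guards the all-empty case
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

pred = np.zeros((4, 4), dtype=int);  pred[:2, :2] = 1   # 4 foreground pixels
label = np.zeros((4, 4), dtype=int); label[:2, :] = 1   # 8 pixels, 4 overlapping
score = dice_coefficient(pred, label)                   # 2*4 / (4 + 8)
```

Against a preset threshold, the model is accepted once `score` exceeds it.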
Optionally, after the follicle ultrasound image to be detected is input into the follicle segmentation model to obtain the follicle segmentation map, the method further comprises:
determining follicle information from the follicle segmentation map, wherein the follicle information comprises the follicle morphology, the number of follicles, and the measurement results of the follicles after ellipse fitting.
Optionally, determining the follicle information from the follicle segmentation map comprises:
determining the follicle area of every follicle from the follicle contours in the follicle segmentation map;
deleting the information corresponding to follicles smaller than a preset area, and obtaining the follicle information in the follicle segmentation map, wherein the follicle information comprises the follicle morphology, the number of follicles, and the measurement results of the follicles after ellipse fitting.
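The patent does not name a specific fitting algorithm, so as an illustration the sketch below derives equivalent-ellipse axes from second-order image moments (for a uniform ellipse the full axis length along a principal direction is 4·sqrt(variance)); a library routine such as OpenCV's `fitEllipse` would be the usual substitute. The area check mirrors the "smaller than a preset area" filter:

```python
import numpy as np

def ellipse_measurements(mask):
    # equivalent-ellipse axes from second-order moments of the follicle region
    ys, xs = np.nonzero(mask)
    area = xs.size
    evals = np.sort(np.linalg.eigvalsh(np.cov(np.stack([xs, ys]))))[::-1]
    major, minor = 4.0 * np.sqrt(evals)   # full axis lengths, in pixels
    return area, major, minor

yy, xx = np.mgrid[0:21, 0:21]
disk = ((xx - 10) ** 2 + (yy - 10) ** 2 <= 25).astype(np.uint8)  # radius-5 disk
area, major, minor = ellipse_measurements(disk)
small = area < 20   # follicles under a preset area would be discarded
```

For the radius-5 disk both axes come out near 10 pixels, i.e. the diameter, as expected of an equivalent-ellipse fit.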
Optionally, after determining the follicle information from the follicle segmentation map, the method further comprises:
when a cursor performs a first preset operation at a first target position in the follicle segmentation map, judging whether a follicle exists at the first target position;
and if a follicle exists at the first target position, acquiring and displaying the morphology of that follicle and its measurement results after ellipse fitting.
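A minimal sketch of the hit test, under the assumption (ours, not the patent's) that the segmentation result is kept as one binary mask per follicle:

```python
import numpy as np

def follicle_at(follicle_masks, x, y):
    # return the index of the follicle whose mask covers the cursor, else None
    for idx, m in enumerate(follicle_masks):
        if m[y, x]:
            return idx
    return None

m0 = np.zeros((10, 10), dtype=bool); m0[2:5, 2:5] = True
m1 = np.zeros((10, 10), dtype=bool); m1[6:9, 6:9] = True
hit = follicle_at([m0, m1], 3, 3)     # cursor inside the first follicle
miss = follicle_at([m0, m1], 0, 0)    # cursor on background
```

A `None` result means no follicle exists at the target position, so nothing is displayed.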
Optionally, after determining the follicle information from the follicle segmentation map, the method further comprises:
when the cursor performs a second preset operation at a second target position in the follicle segmentation map, acquiring and displaying the follicle information of the follicle segmentation map.
The application further provides a two-dimensional follicle detection device, comprising:
a preprocessing module for cropping and masking all images in the follicle ultrasound image set to obtain an effective image set, wherein the images in the effective image set contain only the ultrasound image information area;
an image training set obtaining module for generating labels from the effective image set to obtain an image training set, wherein each label is a binary mask map generated from a follicle contour;
an improved U-Net network model building module for building a U-Net network model with residual calculation units and attention calculation units to obtain an improved U-Net network model, wherein the residual calculation units replace all convolution calculation units in the U-Net network model;
a follicle segmentation model obtaining module for training the improved U-Net network model with the image training set to obtain a follicle segmentation model;
and a segmentation module for inputting the follicle ultrasound image to be detected into the follicle segmentation model to obtain a follicle segmentation map.
Optionally, the image training set obtaining module comprises:
a training image obtaining unit for generating the labels from the effective image set to obtain a plurality of training images;
an extension unit for applying affine transformation and elastic transformation to the plurality of training images to obtain a plurality of extended training images;
and an image training set obtaining unit for taking the images in the effective image set and all the extended training images as the image training set.
Optionally, the segmentation module comprises:
an edge padding unit for zero-padding the edge pixels of a feature map before the residual calculation unit performs residual calculation on it, so that the feature map has the same size before and after the residual calculation;
an attention feature map output unit for inputting the shallow feature map generated by the encoding path and the corresponding-resolution feature map generated by the decoding path into the attention calculation unit and outputting an attention feature map;
a decoded feature map obtaining unit for fusing the resolution feature map and the corresponding attention feature map in the decoding path to obtain a decoded feature map;
and a follicle segmentation map obtaining unit for performing residual calculation on the decoded feature map with the residual calculation unit to obtain a new resolution feature map, repeating the zero-padding step until the operations of all layers of the decoding path are completed, and outputting the follicle segmentation map.
Optionally, the attention feature map output unit comprises:
a fusion subunit for fusing the shallow feature map generated by the encoding path and the corresponding-resolution feature map generated by the decoding path to obtain a fused image;
a specified coefficient obtaining subunit for obtaining a specified coefficient from the fused image by means of a ReLU function, a convolution operation and normalization;
and an attention feature map obtaining subunit for obtaining the attention feature map by applying the specified coefficient to the corresponding-resolution feature map generated by the decoding path.
Optionally, the follicle segmentation model obtaining module comprises:
an initial follicle segmentation model obtaining unit for training the improved U-Net network model with the image training set to obtain an initial follicle segmentation model;
an evaluation unit for evaluating the initial follicle segmentation model with the Dice coefficient;
and a follicle segmentation model obtaining unit for taking the initial follicle segmentation model as the follicle segmentation model when the Dice coefficient is greater than the preset threshold.
Optionally, the device further comprises:
a follicle information determining module for determining follicle information from the follicle segmentation map, wherein the follicle information comprises the follicle morphology, the number of follicles, and the measurement results of the follicles after ellipse fitting.
Optionally, the follicle information determining module comprises:
a follicle area determining unit for determining the follicle area of every follicle from the follicle contours in the follicle segmentation map;
and a follicle information determining unit for deleting the information corresponding to follicles smaller than a preset area and obtaining the follicle information in the follicle segmentation map, wherein the follicle information comprises the follicle morphology, the number of follicles, and the measurement results of the follicles after ellipse fitting.
Optionally, the device further comprises:
a judging module for judging, when a cursor performs a first preset operation at a first target position in the follicle segmentation map, whether a follicle exists at the first target position;
and a first obtaining module for acquiring and displaying, if a follicle exists at the first target position, the morphology of that follicle and its measurement results after ellipse fitting.
Optionally, the device further comprises:
a second obtaining module for acquiring and displaying the follicle information of the follicle segmentation map when the cursor performs a second preset operation at a second target position in the follicle segmentation map.
The application provides an ultrasound device, comprising:
a memory for storing a computer program;
and a processor for implementing the steps of the above two-dimensional follicle detection method when executing the computer program.
The application further provides a computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the steps of the two-dimensional follicle detection method described above.
The application provides a two-dimensional follicle detection method, comprising: cropping and masking all images in a follicle ultrasound image set to obtain an effective image set, wherein the images in the effective image set contain only the ultrasound image information area; adding a label to each image in the effective image set to obtain an image training set, wherein the label is a binary mask map generated from the follicle contour; constructing a U-Net network model with residual calculation units and attention calculation units to obtain an improved U-Net network model, wherein the residual calculation units replace all convolution calculation units; training the improved U-Net network model with the image training set to obtain a follicle segmentation model; and inputting a follicle ultrasound image to be detected into the follicle segmentation model to obtain a follicle segmentation map.
It can be seen that the application obtains the effective image set by cropping and masking all images in the follicle ultrasound image set: given the particularity of medical images, cropping and masking eliminate the interference of irrelevant information, which makes the model perform better. A U-Net network model with residual calculation units and attention calculation units is constructed to obtain the improved U-Net network model. Medical images are mostly grayscale, with highly complex features and insufficiently defined boundaries; replacing all convolution operations in the U-Net network model with residual calculation units extracts image features better and increases the depth of the network. To refine the segmentation results, attention calculation units are integrated into the skip connections of the U-Net network model, so that the computation weight is biased toward the region to be segmented. The extra cost of the improved U-Net network model is therefore low, irrelevant regions in the input follicle ultrasound image to be measured can be suppressed, the working efficiency is improved, and the measurement accuracy is improved. The application also provides a two-dimensional follicle detection device, an ultrasound device and a computer-readable storage medium, all of which have the above beneficial effects and are not described again here.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only embodiments of the present application; for those skilled in the art, other drawings can be derived from them without creative effort.
Fig. 1 is a flowchart of a two-dimensional follicle detection method according to an embodiment of the present application;
Fig. 2 is a flowchart of image cropping and masking according to an embodiment of the present application;
Fig. 3 is a schematic structural diagram of the improved U-Net network model provided by an embodiment of the present application;
Fig. 4 is a schematic diagram of the residual calculation flow of the residual calculation unit according to an embodiment of the present application;
Fig. 5 is a schematic diagram of the calculation flow of the attention calculation unit according to an embodiment of the present application;
Fig. 6 is a flowchart of another two-dimensional follicle detection method provided by an embodiment of the present application;
Fig. 7 is a schematic structural diagram of a two-dimensional follicle detection device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to Fig. 1, Fig. 1 is a flowchart of a two-dimensional follicle detection method according to an embodiment of the present application, which specifically includes:
S101: cropping and masking all images in the follicle ultrasound image set to obtain an effective image set, wherein the images in the effective image set contain only the ultrasound image information area.
The follicles in a follicle ultrasound image are distributed in the ovary. A follicle is generally round or oval; its interior is an anechoic area that appears clear and pure, its boundary is well defined, and its wall is thin. In 20% of mature follicles, a cumulus image can be seen one day before ovulation, appearing as a short hyperechoic segment inside the follicle near the wall. Multiple follicles developing in the ovary may squeeze one another, so the follicles in a follicle ultrasound image may assume shapes other than an ellipse. Therefore, this embodiment provides a method that extracts the features of the follicle image with an improved U-Net network model and automatically monitors follicle information to assist the doctor's work.
In this embodiment, a follicle ultrasound image set is acquired, whose images contain follicles of various shapes and numbers. All images in the follicle ultrasound image set are cropped to remove irrelevant information such as the surrounding frame, and each cropped image is then multiplied point by point with a fan-shaped mask; the resulting image contains only the ultrasound image information area, and all the cropped and masked images form the effective image set. Regarding the fan-shaped mask: its shape and size correspond one-to-one to the follicle ultrasound image, so that every resulting image contains only the ultrasound image information area. Please refer to Fig. 2, which is a flowchart of the image cropping and masking provided in the embodiment of the present application. In Fig. 2, A is an image in the follicle ultrasound image set, A' is the cropped image, B is the fan-shaped mask, and C is the image in the effective image set containing the ultrasound image information area.
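The crop-then-mask step reduces to a slice and a point-by-point product. A toy sketch with made-up sizes (the real crop box and sector geometry come from the scanner):

```python
import numpy as np

def crop_and_mask(frame, crop_box, sector_mask):
    # crop away the surrounding frame, then keep only the fan-shaped scan area
    x0, y0, x1, y1 = crop_box
    cropped = frame[y0:y1, x0:x1]
    return cropped * sector_mask   # point-by-point product zeroes pixels outside the fan

frame = np.ones((100, 120))        # toy stand-in for image A
sector = np.zeros((60, 80))        # toy stand-in for the fan-shaped mask B
sector[:, 20:60] = 1.0
effective = crop_and_mask(frame, (10, 20, 90, 80), sector)   # image C
```

The mask must have exactly the cropped image's shape, which is the one-to-one correspondence mentioned above.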
S102: generating labels from the effective image set to obtain an image training set, wherein each label is a binary mask map generated from the follicle contour.
Generating the labels from the effective image set may mean that a user manually draws the follicle contours in the images of the effective image set, after which a binary mask map is generated from each contour and used as the label for network training. This yields the images of the effective image set together with their corresponding labels, and the image training set is generated from all of them.
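Turning a hand-drawn contour into a binary mask map is a rasterization step. A slow but self-contained sketch using even-odd ray casting (in practice a library fill routine such as OpenCV's `drawContours` would be used):

```python
import numpy as np

def contour_to_mask(contour, shape):
    # rasterize a closed contour into a binary label mask (even-odd rule);
    # per-pixel loops are deliberately simple for illustration
    mask = np.zeros(shape, dtype=np.uint8)
    pts = [tuple(map(float, p)) for p in contour]
    n = len(pts)
    for y in range(shape[0]):
        for x in range(shape[1]):
            inside = False
            for i in range(n):
                x1, y1 = pts[i]
                x2, y2 = pts[(i + 1) % n]
                if (y1 > y) != (y2 > y):          # edge crosses this scanline
                    xi = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                    if x < xi:
                        inside = not inside
            mask[y, x] = inside
    return mask

square = [(2, 2), (7, 2), (7, 7), (2, 7)]   # a hand-drawn contour, as (x, y) points
label = contour_to_mask(square, (10, 10))
```

Pixels inside the contour become 1 and everything else 0, which is exactly the binary mask map used as the training label.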
In one implementation, generating the labels from the effective image set to obtain the image training set comprises: adding labels to the images in the effective image set to obtain a plurality of training images; applying affine transformation and elastic transformation to the plurality of training images to obtain a plurality of extended training images; and taking the images in the effective image set and all the extended training images as the image training set.
Specifically, after the labels are generated for the effective image set, a plurality of training images are obtained, each consisting of an image in the effective image set and its corresponding label. Because of the particularity of medical images and the difficulty of acquiring follicle ultrasound images, the image data are limited and model robustness suffers. In this embodiment, the training images are therefore extended with affine transformation and elastic transformation to obtain the extended training images, and the images in the effective image set and all the extended training images are used as the image training set. The affine transformation subjects the images in the effective image set to translation, rotation, scaling, shearing and reflection. Since follicles are elastic and may be squeezed into different shapes, the extended training images obtained by affine and elastic transformation increase the diversity of the training samples, so that the follicle segmentation model can learn these deformation characteristics.
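The key point of the augmentation is that image and label must be warped by the same transform. A sketch of the affine part (elastic deformation works analogously, with a smooth random displacement field instead of a global matrix); nearest-neighbour sampling keeps the label binary:

```python
import numpy as np

def affine_augment(img, label, angle=0.0, scale=1.0, tx=0.0, ty=0.0):
    # warp image and label with the SAME affine transform so they stay aligned
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    cos, sin = np.cos(angle), np.sin(angle)
    ys, xs = np.mgrid[0:h, 0:w]
    # inverse mapping: for each output pixel, locate its source coordinate
    xsrc = ((xs - cx - tx) * cos + (ys - cy - ty) * sin) / scale + cx
    ysrc = (-(xs - cx - tx) * sin + (ys - cy - ty) * cos) / scale + cy
    xi = np.clip(np.round(xsrc).astype(int), 0, w - 1)   # nearest neighbour
    yi = np.clip(np.round(ysrc).astype(int), 0, h - 1)
    return img[yi, xi], label[yi, xi]

img = np.arange(16.0).reshape(4, 4)
lbl = (img > 7).astype(np.uint8)
aug_img, aug_lbl = affine_augment(img, lbl, angle=np.pi)   # 180-degree rotation
```

Each call with fresh random parameters yields one extended training image plus its still-aligned label.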
s103, constructing a U-Net network model with a residual error calculation unit and an attention calculation unit to obtain an improved U-Net network model, wherein the residual error calculation unit is used for replacing all convolution calculation units in the U-Net network model.
Image segmentation can be realized by automatically segmenting the image with a semantic segmentation network, where semantic segmentation classifies each pixel in the image and determines its category (for example, background or follicle). Taking the U-Net network model as an example, U-Net improves on the fully convolutional network structure by expanding the capacity of the network decoder, proposing an encoder-decoder structure consisting of a contracting path that captures context and a symmetric expanding path that enables precise localization. In this embodiment, an improved U-Net network model for two-dimensional automatic follicle segmentation is built: a U-Net network model with residual calculation units and an attention calculation unit, where the residual calculation units replace all convolution calculation units.
Specifically, please refer to fig. 3 for a structure of an improved U-Net network model, and fig. 3 is a schematic structural diagram of the improved U-Net network model according to an embodiment of the present application.
The left side is the encoder, which takes as input the preprocessed image, i.e. an image from the image training set. The image is downsampled four times through the residual calculations of a series of residual calculation units, and a high-order feature map is extracted and output. The decoder on the right takes this high-order feature map as input and upsamples it through further residual calculations. To optimize the segmentation result, an attention calculation unit is integrated into the long-skip connections of the U-Net network model: the shallow feature map generated by the encoding path is fused with the corresponding resolution feature map generated by the decoding path to obtain a fused image, and the fused image is input to the attention calculation unit, so that the calculation weight is biased toward the region to be segmented, yielding an attention feature map. The corresponding resolution feature map and the attention feature map are then fused in the decoding path to obtain a decoding feature map, and residual calculation is performed on the decoding feature map by a residual calculation unit to obtain a new resolution feature map, until the follicle segmentation map is output. In fig. 3, the black rectangles represent the feature maps generated by the current residual calculation unit, and the width of a rectangle indicates the number of feature maps; the white rectangles are feature maps generated by the attention mechanism; horizontal arrows indicate the forward pass, down arrows indicate 2 x 2 downsampling, and up arrows indicate 2 x 2 upsampling.
Specifically, an image from the image training set is taken as the input image and fed into the improved U-Net network model. At the first layer of the encoding path, two residual calculation units sequentially perform two residual calculations to obtain a first-layer shallow feature map; at the second layer of the encoding path, two residual calculation units sequentially perform two residual calculations on the first-layer shallow feature map to obtain a second-layer shallow feature map; at the third layer, two residual calculation units sequentially perform two residual calculations on the second-layer shallow feature map to obtain a third-layer shallow feature map; at the fourth layer, two residual calculation units sequentially perform two residual calculations on the third-layer shallow feature map to obtain a fourth-layer shallow feature map; and at the fifth layer of the encoding path, two residual calculation units sequentially perform two residual calculations on the fourth-layer shallow feature map to obtain a high-order feature map, which serves as the fifth resolution feature map. At the fourth layer of the decoding path, the fourth-layer shallow feature map and the high-order feature map are input to an attention calculation unit to obtain a fourth attention feature map; the fourth attention feature map is fused with the high-order feature map to obtain a fourth fused image, and two residual calculation units sequentially perform two residual calculations to obtain a fourth resolution feature map. At the third layer of the decoding path, the fourth resolution feature map and the third-layer shallow feature map are input to an attention calculation unit to obtain a third attention feature map; the third attention feature map is fused with the fourth resolution feature map to obtain a third fused image, and two residual calculation units sequentially perform two residual calculations to obtain a third resolution feature map. At the second layer of the decoding path, the third resolution feature map and the second-layer shallow feature map are input to an attention calculation unit to obtain a second attention feature map; the second attention feature map is fused with the third resolution feature map to obtain a second fused image, and two residual calculation units sequentially perform two residual calculations to obtain a second resolution feature map. At the first layer of the decoding path, the second resolution feature map and the first-layer shallow feature map are input to an attention calculation unit to obtain a first attention feature map; the first attention feature map is fused with the second resolution feature map to obtain a first fused image, and two residual calculation units sequentially perform two residual calculations to obtain a first resolution feature map and the output image, completing the segmentation of the image. By adding the attention calculation units, the low-dimensional information in the encoding path (the shallow feature maps) is efficiently spliced with the high-dimensional information in the decoding path (the resolution feature maps), so that finer features are obtained for segmentation, the region of interest is segmented from the feature image, and the accuracy of image segmentation is greatly improved.
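The level-by-level walk above is easier to check with a little shape bookkeeping. The sketch below (an illustrative helper, not part of the embodiment) traces the spatial size of the feature maps through four 2 x 2 downsamplings of the encoding path and four 2 x 2 upsamplings of the decoding path:

```python
def unet_shape_trace(size=256, levels=4):
    """Return the spatial sizes of feature maps along the encoding path
    (the input plus `levels` 2x2 downsamplings) and along the decoding
    path (`levels` 2x2 upsamplings back to the input resolution)."""
    encoder = [size // 2 ** i for i in range(levels + 1)]
    decoder = [encoder[-1] * 2 ** i for i in range(1, levels + 1)]
    return encoder, decoder
```

For a 256 x 256 input this yields encoder sizes 256, 128, 64, 32, 16 and decoder sizes 32, 64, 128, 256, matching the five encoder layers and four decoder layers described above.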
Referring to fig. 4 for the calculation process of a residual calculation unit: fig. 4 is a schematic diagram of the residual calculation flow of the residual calculation unit according to an embodiment of the present application, where x is the feature map generated by the previous layer in the encoding path or the decoding path and F(x) is the residual mapping computed with a nonlinear activation function, the ReLU function in this embodiment. It can be understood that in a conventional neural network with input a, the network is expected to learn the output H(a) directly, which makes model training difficult; in this embodiment, with the residual calculation unit, the expected output is expressed as H(a) = a + F(a), so the model only needs to learn the difference between the target value and a, which reduces the difficulty of learning.
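The residual calculation of fig. 4 can be sketched in a few lines of numpy/scipy. The two 3 x 3 kernels stand in for the unit's convolution layers; their values, and the single-channel simplification, are hypothetical assumptions for illustration only:

```python
import numpy as np
from scipy.signal import convolve2d

def relu(x):
    return np.maximum(x, 0.0)

def residual_unit(x, w1, w2):
    """Residual calculation: the branch F(x) is two 3x3 convolutions
    with a ReLU in between, and the output is ReLU(x + F(x)).  The
    network therefore only has to learn F(x) = H(x) - x, the
    difference between the target mapping and the input."""
    f = relu(convolve2d(x, w1, mode="same"))
    f = convolve2d(f, w2, mode="same")
    return relu(x + f)
```

With all-zero kernels F(x) vanishes and the unit reduces to the identity on non-negative input, which is exactly the easy-to-learn fallback that motivates residual learning.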
Please refer to fig. 5 for the calculation process of the attention calculation unit; fig. 5 is a schematic calculation flow diagram of the attention calculation unit according to an embodiment of the present application. Here g is a shallow feature map generated in the encoding path and X is a resolution feature map obtained by upsampling in the decoding path; in this scheme, the feature maps are zero-padded before convolution so that g and X have the same dimensions. The black rectangles are convolution-and-regularization operation units; Wg, Wx and psi represent the convolution kernels used in the convolution operations; ReLU and sigmoid are nonlinear activation functions; alpha is the attention coefficient; and X' is the output of the attention calculation unit. ReLU serves as a nonlinear activation function to increase the nonlinear expressive capability of the network, and regularization is added to speed up network convergence and to prevent exploding or vanishing gradients when the network is deep.
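For single-channel feature maps the 1 x 1 convolutions Wg, Wx and psi of fig. 5 reduce to scalar weights, so the attention computation can be sketched in plain numpy; the weight values below are hypothetical placeholders, since the embodiment learns them during training:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(g, x, wg=1.0, wx=1.0, psi=1.0):
    """Additive attention: fuse the shallow map g and the resolution
    map x, pass the fusion through ReLU and sigmoid to obtain
    per-pixel attention coefficients alpha in (0, 1), and return
    the gated output X' = alpha * x."""
    q = np.maximum(wg * g + wx * x, 0.0)   # ReLU on the fused map
    alpha = sigmoid(psi * q)               # attention coefficients
    return alpha * x
```

Because alpha is strictly below 1, every pixel of x is damped, with the least damping where g and x jointly respond, i.e. the region to be segmented.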
S104, training the improved U-Net network model by using the image training set to obtain a follicle segmentation model.
In an implementation, training the improved U-Net network model with an image training set to obtain a follicle segmentation model, includes: training the improved U-Net network model by using an image training set to obtain an initial follicle segmentation model; evaluating the initial follicle segmentation model by using a Dice coefficient; and when the Dice coefficient is larger than a preset threshold value, determining that the initial follicle segmentation model is the follicle segmentation model.
Follicle segmentation is a pixel-level binary classification problem between follicle and background, and the Dice coefficient performs well on binary classification problems with imbalanced samples, so this scheme adopts the loss function corresponding to the Dice coefficient as the network loss function to guide the training of the network.
The Dice coefficient is an evaluation index widely used in medical image segmentation; the Dice distance measures the similarity of two sets, and the higher the sample similarity, the larger the Dice coefficient and the smaller the corresponding loss, so this scheme adopts 1 - Dice as the network loss function. The Dice coefficient is defined as Dice(a, b) = 2|a ∩ b| / (|a| + |b|).
The corresponding loss function Dice loss is d = 1 - Dice(a, b) = 1 - 2|a ∩ b| / (|a| + |b|).
Here a is the label map and b is the network output map, and the training condition of the network is judged by comparing the similarity of the label map and the network output map. When the Dice coefficient is larger than a preset threshold, the initial follicle segmentation model is determined to be the follicle segmentation model; when the Dice coefficient is smaller than the preset threshold, training continues until the Dice coefficient is larger than the preset threshold, yielding the follicle segmentation model. The preset threshold is not limited in this embodiment and can be set by the user; it may be 70%, 75%, 80%, 85%, 90% or another value, as long as the purpose of this embodiment can be achieved.
When d is taken as the network loss function, the initial follicle segmentation model is evaluated with d: when d is smaller than a preset loss threshold, the initial follicle segmentation model is determined to be the follicle segmentation model; when d is larger than the preset loss threshold, training continues until d is smaller than the preset loss threshold, yielding the follicle segmentation model. The preset loss threshold is not limited in this embodiment and can be set by the user; it may be 10%, 15%, 20%, 25%, 30% or another value, as long as the purpose of this embodiment can be achieved.
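A minimal numpy sketch of the Dice coefficient 2|a ∩ b| / (|a| + |b|) and the loss d = 1 - Dice for binary masks; the epsilon smoothing term is a common assumption to guard against empty masks, not something specified in this embodiment:

```python
import numpy as np

def dice_coefficient(a, b, eps=1e-7):
    """Dice = 2|a ∩ b| / (|a| + |b|) for a binary label mask a and a
    binarized network output b; eps avoids division by zero when both
    masks are empty."""
    a = a.astype(bool)
    b = b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    return (2.0 * intersection + eps) / (a.sum() + b.sum() + eps)

def dice_loss(a, b):
    """The network loss d = 1 - Dice: higher similarity, smaller loss."""
    return 1.0 - dice_coefficient(a, b)
```

Identical masks give a Dice coefficient of 1 and a loss near 0; disjoint masks give a coefficient near 0 and a loss near 1.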
S105, inputting the ultrasound image of the follicle to be detected into the follicle segmentation model to obtain a follicle segmentation map.
An ultrasound image of the follicle to be detected is acquired and input into the follicle segmentation model to obtain a follicle segmentation map. The follicle segmentation map includes the contours of all segmented follicles; for example, when follicles a, b, c, d and e are segmented by the model in different regions, the follicle segmentation map includes contour a corresponding to follicle a, contour b corresponding to follicle b, contour c corresponding to follicle c, contour d corresponding to follicle d, and contour e corresponding to follicle e.
Based on the above technical solution, in this embodiment an effective image set is obtained by cropping and masking all images in the follicle ultrasound image set: given the particularity of medical images, cropping and masking eliminate the interference of irrelevant information, so the model can perform better. A U-Net network model with residual calculation units and an attention calculation unit is constructed to obtain the improved U-Net network model. Specifically, medical images are mainly grayscale, the features in the images are highly complex, and the boundaries are insufficiently defined; replacing all convolution operations in the U-Net network model with residual calculation units extracts image features better and increases the depth of the network. To optimize the segmentation result, the attention calculation unit is integrated into the long-skip connections of the U-Net network model, so that the calculation weight is biased toward the region to be segmented and a refined segmentation result is obtained. The extra cost of the improved U-Net network model is therefore low, irrelevant regions in the input follicle ultrasound image to be measured are suppressed, working efficiency is improved, and measurement accuracy is increased.
In an implementation, inputting the ultrasound image of the follicle to be detected into the follicle segmentation model to obtain the follicle segmentation map includes: before residual calculation is performed on a feature map by a residual calculation unit, padding the edge pixels of the feature map with 0 so that the feature map has the same size after the residual calculation; inputting the shallow feature map generated by the encoding path and the corresponding resolution feature map generated by the decoding path into the attention calculation unit and outputting an attention feature map; fusing the resolution feature map and the corresponding attention feature map in the decoding path to obtain a decoding feature map; and performing residual calculation on the decoding feature map with a residual calculation unit to obtain a new resolution feature map, repeating the edge-padding step until the operations of all layers of the decoding path are completed, and outputting the follicle segmentation map.
The residual calculation includes convolution calculations, and the size of the image changes after each convolution. If the input image is 256 x 256, the image obtained after a convolution is 254 x 254; by padding the edge pixels of the feature map with 0, the image becomes 258 x 258 and the image obtained after the convolution is 256 x 256, so the output image size stays consistent with the input image size.
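The size bookkeeping in this paragraph can be checked directly; a small numpy/scipy sketch of a "valid" 3 x 3 convolution with and without the 1-pixel zero border:

```python
import numpy as np
from scipy.signal import convolve2d

kernel = np.ones((3, 3))
image = np.zeros((256, 256))

# without padding, a 3x3 convolution shrinks each spatial dim by 2
shrunk = convolve2d(image, kernel, mode="valid")
assert shrunk.shape == (254, 254)

# a 1-pixel zero border (256 -> 258) restores the original size
padded = np.pad(image, 1)
restored = convolve2d(padded, kernel, mode="valid")
assert restored.shape == (256, 256)
```

The same arithmetic holds at every layer, which is why the edge padding keeps the network's output aligned with its input.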
In an implementation, inputting the shallow feature map generated by the encoding path and the corresponding resolution feature map generated by the decoding path into the attention calculation unit and outputting the attention feature map includes: fusing the shallow feature map generated by the encoding path and the corresponding resolution feature map generated by the decoding path to obtain a fused image; obtaining a specified coefficient from the fused image using a ReLU function, convolution operations and regularization processing; and obtaining the attention feature map from the corresponding resolution feature map generated by the decoding path according to the specified coefficient.
Referring to fig. 6, fig. 6 is a flowchart of another two-dimensional follicle detection method according to an embodiment of the present application, including:
S201, cropping and masking all images in the follicle ultrasound image set to obtain an effective image set, wherein the images in the effective image set are images including ultrasound image information areas.
S202, generating labels according to the effective image set to obtain an image training set, wherein each label is a binary mask map generated according to the follicle contour.
S203, constructing a U-Net network model with residual calculation units and an attention calculation unit to obtain an improved U-Net network model, wherein the residual calculation units replace all convolution calculation units in the U-Net network model.
S204, training the improved U-Net network model by using the image training set to obtain a follicle segmentation model.
S205, inputting the ultrasound image of the follicle to be detected into the follicle segmentation model to obtain a follicle segmentation map.
S206, determining follicle information according to the follicle segmentation map, wherein the follicle information includes: the follicle morphology, the number of follicles, and the measurement results of the follicles after ellipse fitting.
Image processing is performed on the follicle segmentation map to obtain the follicle morphology, the number of follicles, and the measurement results of the follicles after ellipse fitting. The follicle morphology includes the morphology of all segmented follicles; the number of follicles is the number of all follicles obtained by segmentation; and the measurement result of a follicle after ellipse fitting includes the major axis and minor axis of the fitted ellipse. Each follicle can be fitted with an ellipse using OpenCV ellipse fitting to obtain the measurement result. Further, the follicle information can be automatically filled into a report so that the user is aware of the ultrasound results.
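The fitting itself would normally use OpenCV's ellipse fitting on each contour; as a dependency-free stand-in for illustration, the sketch below estimates the major/minor axis lengths from the mask's second moments (the ellipse with the same moments as the region). This moment-based method is an assumption for the example, not the embodiment's exact procedure:

```python
import numpy as np

def ellipse_axes(mask):
    """Estimate the major and minor axis lengths of a segmented
    follicle as the axes of the ellipse with the same second moments
    as the mask region (a numpy stand-in for OpenCV ellipse fitting)."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    cov = np.cov(pts, rowvar=False)            # 2x2 covariance of pixels
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
    major, minor = 4.0 * np.sqrt(eigvals)      # full axis lengths
    return major, minor
```

For a circular follicle both axes come out close to the diameter, since a uniform disc of radius r has coordinate variance r^2 / 4 along any axis.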
Based on this technical scheme, the follicle information is obtained by further image processing of the follicle segmentation map; the ultrasound measurement results can be read efficiently, improving the convenience and accuracy of clinical diagnosis.
In an implementation, determining the follicle information according to the follicle segmentation map, where the follicle information includes the follicle morphology, the number of follicles, and the measurement results of the follicles after ellipse fitting, includes: determining the follicle areas of all follicles according to the follicle contours in the follicle segmentation map; and deleting the information corresponding to follicles smaller than a preset area to obtain the follicle information in the follicle segmentation map, where the follicle information includes the follicle morphology, the number of follicles, and the measurement results of the follicles after ellipse fitting.
It can be understood that a plurality of follicle contours are obtained in the follicle segmentation map, the area of each follicle can be determined from its contour, and whether the follicle area is smaller than the preset area is judged. When a follicle area is smaller than the preset area, that follicle is excluded from the final result and its corresponding information is deleted, making the obtained follicle information more reliable.
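A minimal sketch of this post-filtering using connected-component labelling (scipy here; a per-contour area check with OpenCV would be equivalent); `min_area` is the user-set preset area, not a value fixed by this embodiment:

```python
import numpy as np
from scipy.ndimage import label

def drop_small_follicles(segmentation, min_area):
    """Delete segmented regions whose pixel area is below the preset
    area, so spurious small detections are excluded from the final
    follicle information."""
    labelled, n = label(segmentation)          # label connected regions
    kept = np.zeros_like(segmentation)
    for i in range(1, n + 1):
        region = labelled == i
        if region.sum() >= min_area:           # keep only large regions
            kept[region] = 1
    return kept
```

Regions at or above the threshold survive unchanged; everything smaller is zeroed out along with its information.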
In an implementation, after determining the follicle information according to the follicle segmentation map, the method further includes: when a cursor executes a first preset operation at a first target position in the follicle segmentation map, judging whether a follicle exists at the first target position; and if a follicle exists at the first target position, acquiring and displaying the follicle morphology of the follicle at the first target position and the measurement result of the follicle after ellipse fitting.
In the two-dimensional real-time scanning or frozen state of the follicle image, the follicle segmentation map and corresponding information are obtained by the method of this embodiment. The first target position is not limited in this embodiment and may be a region including a follicle contour or a region not including a follicle contour. The first preset operation is not limited in this embodiment either and may be a left click, a left double click, a right double click or another operation, as long as the purpose of this embodiment can be achieved. When the cursor performs the first preset operation in a region including a follicle contour, the follicle morphology of that follicle and its measurement result after ellipse fitting are acquired and displayed. When the cursor performs the first preset operation in a region not including a follicle contour, the information that no follicle is present is acquired and displayed.
in an implementation manner, after determining the follicular information according to the follicular segmentation map, the method further includes: and when the cursor executes a second preset operation at a second target position in the follicle segmentation map, acquiring and displaying follicle information of the follicle segmentation map.
The second target position is not limited in this embodiment and may be any position; when the second preset operation is performed, all the follicle information of the follicle segmentation map is acquired and displayed. The first preset operation is different from the second preset operation.
In the following, a two-dimensional follicle detection device provided in an embodiment of the present application is introduced; the two-dimensional follicle detection device described below and the two-dimensional follicle detection method described above may be referred to in correspondence. Referring to fig. 7, fig. 7 is a schematic structural diagram of a two-dimensional follicle detection device provided in an embodiment of the present application, which includes:
the preprocessing module 100 is configured to perform cutting and masking on all images in the follicular ultrasound image set to obtain an effective image set, where the images in the effective image set are images including ultrasound image information areas;
An image training set obtaining module 200, configured to generate a label according to the effective image set to obtain an image training set, where the label is a binary mask image generated according to the follicle contour;
the improved U-Net network model building module 300 is used for building a U-Net network model with a residual error calculation unit and an attention calculation unit to obtain an improved U-Net network model, wherein the residual error calculation unit is used for replacing all convolution calculation units in the U-Net network model;
a follicle segmentation model obtaining module 400, configured to train the improved U-Net network model by using an image training set, to obtain a follicle segmentation model;
a segmentation module 500, configured to input the ultrasound image of the follicle to be detected into the follicle segmentation model, so as to obtain a follicle segmentation map.
In some specific embodiments, the image training set acquisition module 200 includes:
The training image obtaining unit is used for generating labels according to the effective image set to obtain a plurality of training images;
the extension unit is used for carrying out affine transformation and elastic transformation on the plurality of training images to obtain a plurality of extended training images;
And the image training set acquisition unit is used for taking the images in the effective image set and all the extended training images as an image training set.
In some specific embodiments, the segmentation module 500 includes:
The edge filling unit is used for filling edge pixels 0 into the feature map before residual calculation is carried out on the feature map by using the residual calculation unit so that the size of the feature map is the same as that of the feature map after the residual calculation;
An attention feature map output unit, which is used for inputting the shallow feature map generated by the coding path and the corresponding resolution feature map generated by the decoding path into the attention calculation unit and outputting the attention feature map;
the decoding characteristic diagram acquisition unit is used for fusing the resolution characteristic diagram and the corresponding attention characteristic diagram in a decoding path to obtain a decoding characteristic diagram;
And the follicle segmentation map acquisition unit is used for performing residual calculation on the decoding feature map by using the residual calculation unit to obtain a new resolution feature map, executing the step of filling edge pixels 0 in the feature map until the operation of all layers of the decoding path is completed, and outputting the follicle segmentation map.
in some specific embodiments, the attention feature map output unit includes:
the fusion subunit is used for fusing the shallow layer feature map generated by the encoding path and the corresponding resolution feature map generated by the decoding path to obtain a fused image;
the specified coefficient obtaining subunit is used for obtaining a specified coefficient by utilizing a Relu function, convolution operation and regularization processing according to the fused image;
And the attention feature map obtaining subunit is used for obtaining the attention feature map from the corresponding resolution feature map generated by the decoding path according to the specified coefficient.
in some embodiments, the follicle segmentation model obtaining module 400 includes:
the initial follicle segmentation model acquisition unit is used for training the improved U-Net network model by using an image training set to obtain an initial follicle segmentation model;
The evaluation unit is used for evaluating the initial follicle segmentation model by using the Dice coefficient;
And the follicle segmentation model obtaining unit is used for determining that the initial follicle segmentation model is the follicle segmentation model when the Dice coefficient is larger than a preset threshold value.
In some specific embodiments, the method further comprises:
A follicle information determining module, configured to determine follicle information according to the follicle segmentation map, where the follicle information includes: follicular morphology, number of follicles, measurement of follicles after ellipse fitting.
In some specific embodiments, the follicle information determination module includes:
a follicle area determination unit for determining the follicle area of all follicles from the follicle outline in the follicle segmentation map;
a follicle information determining unit, configured to delete information corresponding to a follicle smaller than a preset area, and obtain follicle information in a follicle segmentation map, where the follicle information includes: follicular morphology, number of follicles, measurement of follicles after ellipse fitting.
in some specific embodiments, the method further comprises:
the judging module is used for judging whether a follicle exists in a first target position when a cursor executes a first preset operation at the first target position in the follicle segmentation graph;
And the first acquisition module is used for acquiring and displaying the follicle morphology of the follicle at the first target position and the measurement result of the follicle after ellipse fitting if the follicle exists at the first target position.
In some specific embodiments, the method further comprises:
And the second acquiring module is used for acquiring and displaying the follicle information of the follicle segmentation map when the cursor executes a second preset operation at a second target position in the follicle segmentation map.
Since the embodiment of the two-dimensional follicle detection apparatus section corresponds to the embodiment of the two-dimensional follicle detection method section, for the embodiment of the two-dimensional follicle detection apparatus section, reference is made to the description of the embodiment of the two-dimensional follicle detection method section, and details are not repeated here.
The following describes an ultrasound apparatus provided in an embodiment of the present application, and the ultrasound apparatus described below and the two-dimensional follicle detection method described above are referred to in correspondence.
The present embodiment provides an ultrasound apparatus including:
A memory for storing a computer program;
A processor for implementing the steps of the two-dimensional follicle detection method when executing the computer program.
since the embodiment of the ultrasound device portion corresponds to the embodiment of the two-dimensional follicle detection method portion, please refer to the description of the embodiment of the two-dimensional follicle detection method portion for the embodiment of the ultrasound device portion, and details are not repeated here.
in the following, a computer-readable storage medium provided by an embodiment of the present application is introduced, and the computer-readable storage medium described below and the two-dimensional follicle detection method described above may be referred to in correspondence.
The present embodiment provides a computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the steps of the two-dimensional follicle detection method described above.
Since the embodiment of the computer-readable storage medium portion corresponds to the embodiment of the two-dimensional follicle detection method portion, please refer to the description of the embodiment of the two-dimensional follicle detection method portion for the embodiment of the computer-readable storage medium portion, and details are not repeated here.
the embodiments are described in a progressive manner in the specification, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
the steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The two-dimensional follicle detection method and device, the ultrasound apparatus, and the computer-readable storage medium provided by the present application have been described in detail above. Specific examples are used herein to explain the principles and embodiments of the present application, and these descriptions are intended only to help in understanding the method and its core idea. It should be noted that those skilled in the art can make several improvements and modifications to the present application without departing from its principles, and such improvements and modifications also fall within the scope of the claims of the present application.
Claims (12)
1. A two-dimensional follicle detection method, comprising:
cropping and masking all images in a follicle ultrasound image set to obtain an effective image set, wherein the images in the effective image set are images that include an ultrasound image information area;
generating a label according to the effective image set to obtain an image training set, wherein the label is a binary mask image generated according to a follicle contour;
constructing a U-Net network model having a residual calculation unit and an attention calculation unit to obtain an improved U-Net network model, wherein the residual calculation unit replaces all convolution calculation units in the U-Net network model;
training the improved U-Net network model by using the image training set to obtain a follicle segmentation model; and
inputting an ultrasound image of a follicle to be detected into the follicle segmentation model to obtain a follicle segmentation map.
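As a hedged illustration of the claimed residual calculation unit that replaces U-Net's plain convolution units, the sketch below shows one common shape such a unit could take in PyTorch. The layer choices (batch normalization, 1x1 projection shortcut) are assumptions for illustration, not the patent's disclosure:

```python
import torch
import torch.nn as nn

class ResidualUnit(nn.Module):
    """Sketch of a residual calculation unit that could stand in for a
    plain U-Net double-convolution block; layer choices are assumptions."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),  # zero padding keeps H x W
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
        )
        # 1x1 projection so the skip connection matches the channel count
        self.skip = nn.Conv2d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # residual connection: transformed features plus (projected) input
        return self.relu(self.conv(x) + self.skip(x))
```

With `padding=1` the 3x3 convolutions preserve spatial size, which is the zero-padding condition claim 3 relies on.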
2. The two-dimensional follicle detection method according to claim 1, wherein generating the label according to the effective image set to obtain the image training set comprises:
generating the label according to the effective image set to obtain a plurality of training images;
performing affine transformation and elastic transformation on the plurality of training images to obtain a plurality of expanded training images; and
taking the images in the effective image set and all the expanded training images as the image training set.
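A minimal sketch of the affine-plus-elastic expansion step, using SciPy. The displacement strength `alpha`, smoothing `sigma`, and rotation range are illustrative values, not taken from the patent:

```python
import numpy as np
from scipy.ndimage import affine_transform, gaussian_filter, map_coordinates

def augment(image, rng, alpha=10.0, sigma=4.0, max_rot=0.1):
    """Affine + elastic expansion of one training image (illustrative)."""
    h, w = image.shape
    # affine part: a small random rotation about the image center
    angle = rng.uniform(-max_rot, max_rot)
    c, s = np.cos(angle), np.sin(angle)
    matrix = np.array([[c, -s], [s, c]])
    center = np.array([h, w]) / 2.0
    offset = center - matrix @ center
    affined = affine_transform(image, matrix, offset=offset, order=1)
    # elastic part: a smoothed random displacement field
    dx = gaussian_filter(rng.uniform(-1, 1, (h, w)), sigma) * alpha
    dy = gaussian_filter(rng.uniform(-1, 1, (h, w)), sigma) * alpha
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    return map_coordinates(affined, [ys + dy, xs + dx], order=1)
```

The same transform parameters would be applied to the binary mask label so that image and label stay aligned.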
3. The two-dimensional follicle detection method according to claim 1, wherein inputting the ultrasound image of the follicle to be detected into the follicle segmentation model to obtain the follicle segmentation map comprises:
before performing residual calculation on a feature map with the residual calculation unit, padding the edges of the feature map with zero-valued pixels so that the feature map has the same size before and after the residual calculation;
inputting a shallow feature map generated by the encoding path and a corresponding resolution feature map generated by the decoding path into the attention calculation unit, and outputting an attention feature map;
fusing the resolution feature map and the corresponding attention feature map in the decoding path to obtain a decoded feature map; and
performing residual calculation on the decoded feature map with the residual calculation unit to obtain a new resolution feature map, repeating the zero-padding step until the operations of all layers of the decoding path are completed, and outputting the follicle segmentation map.
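The zero-padding condition, padding the feature-map edges with 0 so the residual calculation does not change its size, can be illustrated with a toy NumPy 3x3 convolution (a didactic sketch, not the patent's implementation):

```python
import numpy as np

def valid_conv3x3(x, k):
    """'Valid' 3x3 convolution: the output shrinks by 2 in each dimension."""
    h, w = x.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(x[i:i + 3, j:j + 3] * k)
    return out

x = np.arange(36.0).reshape(6, 6)
k = np.ones((3, 3)) / 9.0
shrunk = valid_conv3x3(x, k)      # 4 x 4: spatial size is lost
padded = np.pad(x, 1)             # fill edge pixels with 0 first
same = valid_conv3x3(padded, k)   # 6 x 6: size preserved, as the claim requires
```

Preserving size at every layer is what lets the decoder's feature maps be fused pixel-for-pixel with the encoder's shallow feature maps.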
4. The two-dimensional follicle detection method according to claim 3, wherein inputting the shallow feature map generated by the encoding path and the corresponding resolution feature map generated by the decoding path into the attention calculation unit and outputting the attention feature map comprises:
fusing the shallow feature map generated by the encoding path and the corresponding resolution feature map generated by the decoding path to obtain a fused image;
obtaining a specified coefficient from the fused image by using a ReLU function, a convolution operation, and normalization processing; and
obtaining the attention feature map from the specified coefficient and the corresponding resolution feature map generated by the decoding path.
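The attention calculation of this claim (fuse, ReLU, 1x1 convolution, normalization, then weight the decoder features) resembles an additive attention gate. A NumPy sketch follows; the 1x1 convolutions are randomly initialized here purely for illustration, and reading "normalization" as a sigmoid is an assumption:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(shallow, decoder, rng):
    """Illustrative attention calculation unit.

    shallow, decoder: (channels, h, w) feature maps of matching size.
    Returns the attention feature map: decoder features weighted by a
    per-pixel coefficient in (0, 1).
    """
    c, h, w = shallow.shape
    w_s = rng.standard_normal((c, c)) * 0.1   # 1x1 conv on the shallow map
    w_d = rng.standard_normal((c, c)) * 0.1   # 1x1 conv on the decoder map
    psi = rng.standard_normal(c) * 0.1        # 1x1 conv down to one channel
    # fuse the two maps, then ReLU -> 1x1 conv -> sigmoid normalization
    fused = (np.einsum('oc,chw->ohw', w_s, shallow)
             + np.einsum('oc,chw->ohw', w_d, decoder))
    alpha = sigmoid(np.einsum('c,chw->hw', psi, np.maximum(fused, 0.0)))
    return decoder * alpha[None]              # (c, h, w) attention feature map
```

In a trained network the 1x1 weights are learned, so `alpha` highlights follicle regions and suppresses background before the decoding-path fusion.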
5. The two-dimensional follicle detection method according to claim 1, wherein training the improved U-Net network model by using the image training set to obtain the follicle segmentation model comprises:
training the improved U-Net network model by using the image training set to obtain an initial follicle segmentation model;
evaluating the initial follicle segmentation model by using a Dice coefficient; and
when the Dice coefficient is larger than a preset threshold, determining that the initial follicle segmentation model is the follicle segmentation model.
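The Dice coefficient used as the acceptance criterion is straightforward to compute; a minimal NumPy version with a worked example (the threshold itself is whatever "preset threshold" the implementer chooses):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|).
    Values near 1 mean good overlap; eps avoids division by zero."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

pred = np.zeros((8, 8), int); pred[2:6, 2:6] = 1   # 16 predicted pixels
gt   = np.zeros((8, 8), int); gt[3:7, 3:7] = 1     # 16 true pixels, 9 overlap
score = dice_coefficient(pred, gt)                 # 2*9 / (16+16) = 0.5625
```

A model-acceptance check is then simply `score > threshold` averaged over a validation set.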
6. The two-dimensional follicle detection method according to claim 1, further comprising, after inputting the ultrasound image of the follicle to be detected into the follicle segmentation model to obtain the follicle segmentation map:
determining follicle information from the follicle segmentation map, wherein the follicle information comprises: follicle morphology, the number of follicles, and measurement results of the follicles after ellipse fitting.
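The claim does not fix a particular ellipse-fitting algorithm, so the sketch below is one plausible way to obtain "measurement results after ellipse fitting": estimating the axis lengths of a follicle region from its second-order image moments. The function name and the moment-based approach are assumptions:

```python
import numpy as np

def ellipse_axes_from_mask(mask):
    """Estimate (major_axis, minor_axis) pixel lengths of one follicle
    region by second-order moments (illustrative fitting method)."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs - xs.mean(), ys - ys.mean()])
    # eigenvalues of the covariance matrix, largest first
    evals = np.sort(np.linalg.eigvalsh(np.cov(pts)))[::-1]
    # a solid ellipse with semi-axis a has variance a^2/4 along that axis,
    # so the full axis length is 4 * sqrt(eigenvalue)
    return 4.0 * np.sqrt(evals[0]), 4.0 * np.sqrt(evals[1])

# synthetic follicle: filled ellipse with semi-axes 10 and 5 pixels
yy, xx = np.mgrid[0:41, 0:41]
mask = ((xx - 20) / 10.0) ** 2 + ((yy - 20) / 5.0) ** 2 <= 1.0
major, minor = ellipse_axes_from_mask(mask)
```

The recovered axes are approximately 20 and 10 pixels, matching the synthetic region up to discretization error.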
7. The two-dimensional follicle detection method according to claim 6, wherein determining the follicle information from the follicle segmentation map comprises:
determining the follicle areas of all follicles according to the follicle contours in the follicle segmentation map; and
deleting the information corresponding to follicles smaller than a preset area to obtain the follicle information in the follicle segmentation map, wherein the follicle information comprises: follicle morphology, the number of follicles, and measurement results of the follicles after ellipse fitting.
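The area-threshold filtering in this claim can be sketched with connected-component labelling; the `min_area` value and function name are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def filter_small_follicles(seg_mask, min_area=50):
    """Label connected follicle regions, measure each area, and delete
    regions below a preset area threshold (min_area is illustrative).
    Returns the cleaned mask and the remaining follicle count."""
    labels, n = ndimage.label(seg_mask)
    keep = np.zeros_like(seg_mask, dtype=bool)
    kept = 0
    for i in range(1, n + 1):
        region = labels == i
        if region.sum() >= min_area:   # follicle area in pixels
            keep |= region
            kept += 1
    return keep, kept
```

Discarding sub-threshold regions suppresses speckle-induced false positives before morphology and ellipse measurements are reported.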
8. The two-dimensional follicle detection method according to claim 6, further comprising, after determining the follicle information from the follicle segmentation map:
when a cursor performs a first preset operation at a first target position in the follicle segmentation map, judging whether a follicle exists at the first target position; and
if a follicle exists at the first target position, acquiring and displaying the follicle morphology of the follicle at the first target position and the measurement result of the follicle after ellipse fitting.
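The hit test behind this claim, deciding whether a follicle exists at the cursor position, reduces to a label lookup in the segmentation map. A minimal sketch (function name and return convention are assumptions):

```python
import numpy as np
from scipy import ndimage

def follicle_at_cursor(seg_mask, row, col):
    """On a first preset operation (e.g. a click) at (row, col), return the
    pixel region of the follicle under the cursor, or None if there is no
    follicle there.  The caller would then display that follicle's
    morphology and ellipse-fit measurements."""
    labels, _ = ndimage.label(seg_mask)
    lab = labels[row, col]
    if lab == 0:
        return None           # no follicle at the cursor position
    return labels == lab      # mask of the clicked follicle only
```
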
9. The two-dimensional follicle detection method according to claim 6, further comprising, after determining the follicle information from the follicle segmentation map:
when the cursor performs a second preset operation at a second target position in the follicle segmentation map, acquiring and displaying the follicle information of the follicle segmentation map.
10. A two-dimensional follicle detection device, comprising:
a preprocessing module, configured to crop and mask all images in a follicle ultrasound image set to obtain an effective image set, wherein the images in the effective image set are images that include an ultrasound image information area;
an image training set obtaining module, configured to generate a label according to the effective image set to obtain an image training set, wherein the label is a binary mask image generated according to a follicle contour;
an improved U-Net network model building module, configured to construct a U-Net network model having a residual calculation unit and an attention calculation unit to obtain an improved U-Net network model, wherein the residual calculation unit replaces all convolution calculation units in the U-Net network model;
a follicle segmentation model obtaining module, configured to train the improved U-Net network model by using the image training set to obtain a follicle segmentation model; and
a segmentation module, configured to input an ultrasound image of a follicle to be detected into the follicle segmentation model to obtain a follicle segmentation map.
11. An ultrasound device, comprising:
a memory for storing a computer program; and
a processor for implementing the steps of the two-dimensional follicle detection method according to any one of claims 1 to 9 when executing the computer program.
12. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, implements the steps of the two-dimensional follicle detection method according to any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910860200.7A CN110570350A (en) | 2019-09-11 | 2019-09-11 | two-dimensional follicle detection method and device, ultrasonic equipment and readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110570350A (en) | 2019-12-13 |
Family
ID=68779353
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910860200.7A Pending CN110570350A (en) | 2019-09-11 | 2019-09-11 | two-dimensional follicle detection method and device, ultrasonic equipment and readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110570350A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9147255B1 (en) * | 2013-03-14 | 2015-09-29 | Hrl Laboratories, Llc | Rapid object detection by combining structural information from image segmentation with bio-inspired attentional mechanisms |
CN108133235A (en) * | 2017-12-21 | 2018-06-08 | 中通服公众信息产业股份有限公司 | A kind of pedestrian detection method based on neural network Analysis On Multi-scale Features figure |
CN109191472A (en) * | 2018-08-28 | 2019-01-11 | 杭州电子科技大学 | Based on the thymocyte image partition method for improving U-Net network |
CN109886971A (en) * | 2019-01-24 | 2019-06-14 | 西安交通大学 | A kind of image partition method and system based on convolutional neural networks |
CN109903292A (en) * | 2019-01-24 | 2019-06-18 | 西安交通大学 | A kind of three-dimensional image segmentation method and system based on full convolutional neural networks |
2019-09-11: application CN201910860200.7A filed in China; published as CN110570350A; status: Pending
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111445443A (en) * | 2020-03-11 | 2020-07-24 | 北京深睿博联科技有限责任公司 | Method and device for detecting early acute cerebral infarction |
CN111445443B (en) * | 2020-03-11 | 2023-09-01 | 北京深睿博联科技有限责任公司 | Early acute cerebral infarction detection method and device |
CN111680706A (en) * | 2020-06-17 | 2020-09-18 | 南开大学 | Double-channel output contour detection method based on coding and decoding structure |
CN112164074A (en) * | 2020-09-22 | 2021-01-01 | 江南大学 | 3D CT bed fast segmentation method based on deep learning |
CN112184683A (en) * | 2020-10-09 | 2021-01-05 | 深圳度影医疗科技有限公司 | Ultrasonic image identification method, terminal equipment and storage medium |
CN112651978A (en) * | 2020-12-16 | 2021-04-13 | 广州医软智能科技有限公司 | Sublingual microcirculation image segmentation method and device, electronic equipment and storage medium |
CN112651978B (en) * | 2020-12-16 | 2024-06-07 | 广州医软智能科技有限公司 | Sublingual microcirculation image segmentation method and device, electronic equipment and storage medium |
CN113140291A (en) * | 2020-12-17 | 2021-07-20 | 慧影医疗科技(北京)有限公司 | Image segmentation method and device, model training method and electronic equipment |
US20220044358A1 (en) * | 2021-01-20 | 2022-02-10 | Beijing Baidu Netcom Science Technology Co., Ltd. | Image processing method and apparatus, device, and storage medium |
US11893708B2 (en) * | 2021-01-20 | 2024-02-06 | Beijing Baidu Netcom Science Technology Co., Ltd. | Image processing method and apparatus, device, and storage medium |
CN113487581A (en) * | 2021-07-16 | 2021-10-08 | 武汉中旗生物医疗电子有限公司 | Method, system, equipment and storage medium for automatically measuring diameter of fetus head and buttocks |
CN113673526A (en) * | 2021-07-23 | 2021-11-19 | 浙江大华技术股份有限公司 | Bubble detection method, terminal and computer-readable storage medium |
CN114972263A (en) * | 2022-05-27 | 2022-08-30 | 浙江大学 | Real-time ultrasound image follicle measurement method and system based on intelligent picture segmentation |
CN114972263B (en) * | 2022-05-27 | 2024-08-20 | 浙江大学 | Real-time ultrasonic image follicle measurement method and system based on intelligent picture segmentation |
CN115018805A (en) * | 2022-06-21 | 2022-09-06 | 推想医疗科技股份有限公司 | Segmentation model training method, image segmentation method, device, equipment and medium |
CN115082426A (en) * | 2022-07-20 | 2022-09-20 | 湖北经济学院 | Follicle detection method and device based on deep learning model |
CN115082426B (en) * | 2022-07-20 | 2022-11-04 | 湖北经济学院 | Follicle detection method and device based on deep learning model |
CN115049642A (en) * | 2022-08-11 | 2022-09-13 | 合肥合滨智能机器人有限公司 | Carotid artery blood vessel intima-media measurement and plaque detection method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110570350A (en) | two-dimensional follicle detection method and device, ultrasonic equipment and readable storage medium | |
US20220189142A1 (en) | Ai-based object classification method and apparatus, and medical imaging device and storage medium | |
CN111862044B (en) | Ultrasonic image processing method, ultrasonic image processing device, computer equipment and storage medium | |
CN110974306B (en) | System for discernment and location pancreas neuroendocrine tumour under ultrasonic endoscope | |
US20090252429A1 (en) | System and method for displaying results of an image processing system that has multiple results to allow selection for subsequent image processing | |
CN112508884B (en) | Comprehensive detection device and method for cancerous region | |
CN112446892A (en) | Cell nucleus segmentation method based on attention learning | |
CN115272887A (en) | Coastal zone garbage identification method, device and equipment based on unmanned aerial vehicle detection | |
CN113313680A (en) | Colorectal cancer pathological image prognosis auxiliary prediction method and system | |
CN116205967A (en) | Medical image semantic segmentation method, device, equipment and medium | |
CN116309459A (en) | Improved network-based lung nodule detection method, apparatus, device and storage medium | |
CN103169506A (en) | Ultrasonic diagnosis device and method capable of recognizing liver cancer automatically | |
CN112215217A (en) | Digital image recognition method and device for simulating doctor to read film | |
CN115100494A (en) | Identification method, device and equipment of focus image and readable storage medium | |
CN118334336A (en) | Colposcope image segmentation model construction method, image classification method and device | |
CN111652876B (en) | Method for detecting three-dimensional basin bottom ultrasonic image | |
CN115619941A (en) | Ultrasonic imaging method and ultrasonic equipment | |
CN116091522A (en) | Medical image segmentation method, device, equipment and readable storage medium | |
CN113657214B (en) | Building damage assessment method based on Mask RCNN | |
CN113408595B (en) | Pathological image processing method and device, electronic equipment and readable storage medium | |
CN112862785B (en) | CTA image data identification method, device and storage medium | |
CN112862786B (en) | CTA image data processing method, device and storage medium | |
KR102460899B1 (en) | Method and System for People Count based on Deep Learning | |
CN112862787A (en) | CTA image data processing method, device and storage medium | |
CN114764776A (en) | Image labeling method and device and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20191213 |