CN111145183A - Segmentation system and method for transparent separation cavity ultrasonic image - Google Patents

Segmentation system and method for transparent separation cavity ultrasonic image

Info

Publication number
CN111145183A
CN111145183A
Authority
CN
China
Prior art keywords
segmentation
unit
convolution
width
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911394001.8A
Other languages
Chinese (zh)
Other versions
CN111145183B (en)
Inventor
吴嘉
吴宇洲
谌奎芳
袁晓华
刘月兰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Central South University
Original Assignee
Central South University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Central South University filed Critical Central South University
Priority to CN201911394001.8A priority Critical patent/CN111145183B/en
Publication of CN111145183A publication Critical patent/CN111145183A/en
Application granted granted Critical
Publication of CN111145183B publication Critical patent/CN111145183B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10132Ultrasound image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30044Fetus; Embryo

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

The invention discloses a segmentation system for ultrasound images of the transparent separation cavity (cavum septi pellucidi, CSP), which comprises a convolution unit for performing convolution calculations; a modified linear unit for calculating activation values; a batch normalization unit for normalizing layer inputs by adjusting and scaling activations; a maximum pooling unit for down-sampling the encoder output; a transposed convolution unit for doubling the size of the feature map by up-sampling while halving the number of channels; and an attention mechanism unit, comprising a global average pooling process, attention weights and a context-vector generating function, for learning global information so as to enhance meaningful information and suppress meaningless information from the channel perspective. The invention overcomes the defects of the prior art and realizes automatic segmentation of the transparent separation cavity and measurement of its width.

Description

Segmentation system and method for transparent separation cavity ultrasonic image
Technical Field
The invention belongs to the technical field of image segmentation, and particularly relates to a segmentation system and a segmentation method for an ultrasonic image of a transparent separation cavity.
Background
Ultrasound is a relatively safe, non-invasive and low-cost imaging method that is widely used for screening and diagnosis in prenatal care. However, ultrasound images have characteristics, such as variable acquisition quality and variable scanning planes, that lead to observer variability and inconsistent diagnostic results. In the routine clinical examination of pregnant women, the transparent separation cavity (cavum septi pellucidi, CSP) is one of the most important physiological structures for assessing normal development of the central nervous system (CNS) of the fetal head. According to the guidelines of the American ultrasound society, the CSP has become one of the structures that must be examined to assess fetal CNS development. Manual measurement of the CSP is a difficult and time-consuming task even for experienced sonographers, since features of ultrasound images (e.g., low signal-to-noise ratio) often make the CSP boundary ambiguous and discontinuous.
Disclosure of Invention
The technical problem to be solved by the present invention is to provide a segmentation system and method for ultrasound images of the transparent separation cavity that overcome the defects of the prior art and realize automatic segmentation of the transparent separation cavity and measurement of its width.
The invention comprises the following technical scheme.
A segmentation system for transparent separation cavity ultrasound images, comprising:
a convolution unit for performing convolution calculations;
a modified linear unit for calculating activation values;
a batch normalization unit for normalizing layer inputs by adjusting and scaling activations;
a maximum pooling unit for down-sampling the encoder output;
a transposed convolution unit for doubling the size of the feature map by up-sampling and halving the number of channels;
an attention mechanism unit, comprising a global average pooling process, attention weights and a context-vector generating function, for learning global information so as to enhance meaningful information and suppress meaningless information from the channel perspective.
A segmentation method using the above segmentation system for transparent separation cavity ultrasound images, comprising the following steps:
A. setting up the CA-Unet main structure, which comprises an encoding path and a decoding path;
B. down-sampling the output of the encoding path with a maximum pooling operation, where each down-sampling halves the size of the feature map and doubles the number of channels of the output feature map; applying transposed convolution on the decoding path, where each step doubles the size of the feature map and halves the number of channels; establishing skip connections between the cropped encoder feature maps and the decoding path so that context information is added at the top layers of the network; and determining the pixel-wise segmentation probabilities of the different categories through a 1×1 convolution network and a Sigmoid activation function;
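As a concrete illustration of steps A and B, the following is a minimal PyTorch sketch of one encoder stage and one decoder stage of such an encoder-decoder network. The class and variable names (DoubleConv, UpBlock, enc1, head) and the channel counts are illustrative assumptions, not identifiers from the patent.

import torch
import torch.nn as nn

class DoubleConv(nn.Module):
    # Two 3x3 convolutions, each followed by batch normalization and ReLU.
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

class UpBlock(nn.Module):
    # Transposed convolution doubles the spatial size and halves the channels,
    # then the skip-connected encoder features are concatenated back in.
    def __init__(self, in_ch):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, in_ch // 2, kernel_size=2, stride=2)
        self.conv = DoubleConv(in_ch, in_ch // 2)  # in_ch = upsampled + skip channels

    def forward(self, x, skip):
        x = self.up(x)
        return self.conv(torch.cat([skip, x], dim=1))

pool = nn.MaxPool2d(kernel_size=2, stride=2)  # down-sampling, step size 2
enc1 = DoubleConv(1, 64)                      # grayscale ultrasound input
enc2 = DoubleConv(64, 128)                    # channels double after each pooling
up = UpBlock(128)
head = nn.Conv2d(64, 1, kernel_size=1)        # 1x1 conv; Sigmoid gives pixel probabilities

x = torch.randn(1, 1, 256, 256)
f1 = enc1(x)                                  # 64 x 256 x 256
f2 = enc2(pool(f1))                           # 128 x 128 x 128
out = torch.sigmoid(head(up(f2, f1)))         # 1 x 256 x 256 segmentation probabilities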
C. converting all spatial points of each channel into a single value using global average pooling; the input is a feature map of dimension C × H × W, where C, H and W denote the number of channels, the height and the width of the feature map respectively; global average pooling yields a C × 1 × 1 matrix, and the feature value of the l-th channel is calculated as:
$e_l = \frac{1}{H \times W} \sum_{i=1}^{H} \sum_{j=1}^{W} a_l(i, j)$
where (i, j) denotes the coordinates of a point a within the l-th feature plane; the plane has height H and width W, so the maximum values of (i, j) are H and W respectively; l ranges over {1, 2, ..., C} and indexes the feature maps. An activation function is then used to model the degree of correlation between channels;
D. after compressing the global information of each channel into a set of values, calculating a score function to find the input information that should be focused on; step C yields a vector e = {e_l, l ∈ {1, 2, ..., C}}, and:
$s = \xi\{W_s[\gamma(W_a \cdot e + b_a)] + b_s\}$
$\gamma(z) = \max(0, z)$
$\xi(z) = \frac{e^z - e^{-z}}{e^z + e^{-z}}$
where W_a is a weight parameter matrix of dimension R^{(C/sc)×C}, and sc denotes a scaling parameter that reduces the number of channels and thus the amount of computation. With the weight parameter matrices W_a ∈ R^{(C/sc)×C} and W_s ∈ R^{C×(C/sc)}, the bias terms are b_a ∈ R^{(C/sc)×1} and b_s ∈ R^{C×1}. γ denotes the ReLU function, which does not change the dimension; the result is then multiplied by W_s and the bias term b_s is added, which can be viewed as a fully connected layer; finally the output s is obtained through the Tanh activation function ξ, giving a score for each attention feature map;
E. multiplying the input feature map a and the attention weight s element by element to obtain x; replacing the original input a in the U-net main frame with the x produced by the channel attention module and feeding it into the U-net network structure to locate the target region, so that segmentation proceeds within the encoder-decoder framework;
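Steps C to E together form a squeeze-and-excitation-style channel attention module. The following is a minimal PyTorch sketch of that mechanism as described above (global average pooling, two fully connected layers with ReLU and then Tanh, and element-wise reweighting); the class name ChannelAttention and the default sc=16 are illustrative assumptions.

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    # Channel attention per steps C-E: squeeze each channel with global
    # average pooling, score the channels with two fully connected layers
    # (ReLU then Tanh), and reweight the input feature map element-wise.
    def __init__(self, channels, sc=16):      # sc: scaling (reduction) parameter
        super().__init__()
        self.gap = nn.AdaptiveAvgPool2d(1)    # step C: C x H x W -> C x 1 x 1
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // sc),  # W_a . e + b_a
            nn.ReLU(inplace=True),                # gamma
            nn.Linear(channels // sc, channels),  # W_s [...] + b_s
            nn.Tanh(),                            # xi, yielding the score s
        )

    def forward(self, a):
        n, c, _, _ = a.shape
        e = self.gap(a).view(n, c)            # vector e = {e_l}
        s = self.fc(e).view(n, c, 1, 1)       # attention weight per channel
        return a * s                          # step E: x = a * s (broadcast)

# Usage: reweight a 128-channel feature map before it continues down the decoder.
attn = ChannelAttention(128)
x = attn(torch.randn(2, 128, 64, 64))         # output has the same shape as the input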
F. segmenting the thalamus, the brain midline and the transparent separation cavity in the thalamic cross-section, and segmenting the transparent separation cavity and the corpus callosum in the thalamic sagittal plane, so as to ensure the integrity of these structures and the standardization of the chosen plane; then calculating the width of the transparent separation cavity, where the conversion between the width of the transparent separation cavity and the image size is:
$CSP_{width} = \frac{h_{width}}{c_{width}} \times e_{depth}$
where h_width denotes the pixel height of the CSP, e_depth corresponds to the actual imaging depth of each picture, and c_width denotes the width of the image, 852 pixels.
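A minimal sketch of this pixel-to-millimetre conversion, matching the formula above; the function name csp_width_mm and the example numbers are illustrative assumptions.

def csp_width_mm(h_width_px, e_depth_mm, c_width_px=852.0):
    # Convert the segmented CSP's pixel extent to millimetres:
    # h_width_px: pixel height of the segmented CSP;
    # e_depth_mm: actual imaging depth corresponding to the picture;
    # c_width_px: image extent in pixels along the same axis (852 here).
    return h_width_px / c_width_px * e_depth_mm

# Example: a CSP spanning 40 px in an image covering 120 mm of depth.
print(round(csp_width_mm(40, 120.0), 2))  # -> 5.63 (mm)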
Preferably, in step A, the encoding path is a VGG11 network with the fully connected layers removed, comprising 11 consecutive convolution layers; each stage of the encoder path first passes through two repeated 3×3 convolution operations, each with a batch normalization operation and a modified linear operation; a Dropout layer is included between the two blocks.
Preferably, the Dropout layer probability is set to 0.1.
Preferably, in step B, the maximum pooling step size is 2.
Preferably, in step B, each upsampling comprises two 3 × 3 convolution units, a batch normalization unit and a modified linear unit.
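As an illustration of the preferred VGG11 encoding path, one common way to obtain VGG11 without its fully connected layers is to reuse the convolutional part of torchvision's VGG11; this sketch, including the placement of the Dropout layer, is an assumption for illustration, not the patent's own code.

import torch.nn as nn
from torchvision.models import vgg11_bn

# VGG11 (batch-normalized variant) without its fully connected classifier
# head: .features keeps only the convolution / BN / ReLU / max-pool layers.
encoder = vgg11_bn(pretrained=False).features

# A Dropout layer with probability 0.1, as in the preferred embodiment
# (its exact placement between the two blocks is illustrative here).
encoder_with_dropout = nn.Sequential(encoder, nn.Dropout(p=0.1))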
The method has the advantage of addressing two problems: the large amount of information lost during the up-sampling and down-sampling in U-NET, and the insufficient depth of the U-NET network for capturing a large receptive field and context information. It reduces the influence of noise and realizes automatic segmentation and measurement of the transparent separation cavity. The batch normalization operation stabilizes the input distribution of each layer in the network, which accelerates model learning. The Dropout layer mitigates overfitting and achieves a regularization effect to some extent. Global average pooling masks spatially distributed information, thereby improving the accuracy of the score calculation.
Drawings
FIG. 1 is a general block diagram of an embodiment of the present invention.
Fig. 2 is a structure diagram of CA-Unet in an embodiment of the present invention.
FIG. 3 is a diagram of a channel attention module in accordance with an embodiment of the present invention.
Fig. 4 is an unprocessed original input image.
Fig. 5 is the ground-truth segmentation of fig. 4, produced by the conventional manual method.
FIG. 6 is an image of FIG. 4 after segmentation by the method of the present application.
Fig. 7 is a graph showing a comparison between fig. 5 and fig. 6.
Fig. 8 shows the result of comparing the segmentation accuracy of the physiological structure based on the thalamic cross section of the ultrasound image.
FIG. 9 shows the result of comparing the segmentation accuracy of the physiological structure of the thalamic sagittal plane based on the ultrasound image.
FIG. 10 is a comparison of recall for segmentation of a physiological structure based on the sagittal thalamic plane of an ultrasound image.
Fig. 11 is a comparison of physiological structure segmentation recall based on thalamic cross-sections of ultrasound images.
Fig. 12 shows the result of comparing DC coefficients for physiological structure segmentation based on the thalamic cross section of an ultrasound image.
Fig. 13 shows the comparison result of the physiological structure segmentation DC coefficient based on the thalamic sagittal plane of the ultrasound image.
Fig. 14 is a comparison of ROC curves versus AUC values for physiological structures based on the thalamic sagittal plane of ultrasound images.
Fig. 15 is a ROC curve versus AUC value comparison of physiological structures of a thalamic cross-section based on ultrasound images.
Detailed Description
As shown in the drawings, the present invention comprises:
a convolution unit 1 for performing convolution calculations;
a modified linear unit 2 for calculating activation values;
a batch normalization unit 3 for normalizing layer inputs by adjusting and scaling activations;
a maximum pooling unit 4 for down-sampling the encoder output;
a transposed convolution unit 5 for doubling the size of the feature map by up-sampling and halving the number of channels;
an attention mechanism unit 6, comprising a global average pooling process, attention weights and a context-vector generating function, for learning global information so as to enhance meaningful information and suppress meaningless information from the channel perspective.
A segmentation method using the segmentation system for transparent separation cavity ultrasound images comprises the following steps:
A. setting up the U-net main body structure, which comprises an encoding path and a decoding path;
B. down-sampling the output of the encoding path with a maximum pooling operation, where each down-sampling halves the size of the feature map and doubles the number of channels of the output feature map; applying transposed convolution on the decoding path, where each step doubles the size of the feature map and halves the number of channels; establishing skip connections between the cropped encoder feature maps and the decoding path so that context information is added at the top layers of the network; and determining the pixel-wise segmentation probabilities of the different categories through a 1×1 convolution network and a Sigmoid activation function;
C. converting all spatial points of each channel into a single value using global average pooling; the input is a feature map of dimension C × H × W, where C, H and W denote the number of channels, the height and the width of the feature map respectively; global average pooling yields a C × 1 × 1 matrix, and the feature value of the l-th channel is calculated as:
$e_l = \frac{1}{H \times W} \sum_{i=1}^{H} \sum_{j=1}^{W} a_l(i, j)$
where (i, j) denotes the coordinates of a point a within the l-th feature plane; the plane has height H and width W, so the maximum values of (i, j) are H and W respectively; l ranges over {1, 2, ..., C} and indexes the feature maps. An activation function is then used to model the degree of correlation between channels;
D. after compressing the global information of each channel into a set of values, calculating a score function to find the input information that should be focused on; step C yields a vector e = {e_l, l ∈ {1, 2, ..., C}}, and:
$s = \xi\{W_s[\gamma(W_a \cdot e + b_a)] + b_s\}$
$\gamma(z) = \max(0, z)$
$\xi(z) = \frac{e^z - e^{-z}}{e^z + e^{-z}}$
where W_a is a weight parameter matrix of dimension R^{(C/sc)×C}, and sc denotes a scaling parameter that reduces the number of channels and thus the amount of computation. With the weight parameter matrices W_a ∈ R^{(C/sc)×C} and W_s ∈ R^{C×(C/sc)}, the bias terms are b_a ∈ R^{(C/sc)×1} and b_s ∈ R^{C×1}. γ denotes the ReLU function, which does not change the dimension; the result is then multiplied by W_s and the bias term b_s is added, which can be viewed as a fully connected layer; finally the output s is obtained through the Tanh activation function ξ, giving a score for each attention feature map;
E. multiplying the input feature map a and the attention weight s element by element to obtain x; replacing the original input a in the U-net main frame with the x produced by the channel attention module and feeding it into the U-net network structure to locate the target region, so that segmentation proceeds within the encoder-decoder framework;
F. segmenting the thalamus, the brain midline and the transparent separation cavity in the thalamic cross-section, and segmenting the transparent separation cavity and the corpus callosum in the thalamic sagittal plane, so as to ensure the integrity of these structures and the standardization of the chosen plane; then calculating the width of the transparent separation cavity, where the conversion between the width of the transparent separation cavity and the image size is:
$CSP_{width} = \frac{h_{width}}{c_{width}} \times e_{depth}$
where h_width denotes the pixel height of the CSP, e_depth corresponds to the actual imaging depth of each picture, and c_width denotes the width of the image, 852 pixels.
In step A, the encoding path is a VGG11 network with the fully connected layers removed, comprising 11 consecutive convolution layers; each stage of the encoding path passes through two repeated 3×3 convolution operations, batch normalization operations and modified linear operations; a Dropout layer is included between the two blocks.
The Dropout layer sets the probability to 0.1.
In step B, the maximum pooling step size is 2.
In step B, each upsampling comprises two 3 × 3 convolution units 1, one batch normalization unit 3 and a modified linear unit 2.
Experimental example
Prenatal examination records of pregnant women carrying fetuses of 18-37 weeks gestational age were extracted from the 2013-2018 prenatal examination systems of the obstetrics and gynecology departments of Xiangya Hospital, the Second Xiangya Hospital and the Third Xiangya Hospital. In this experimental design, all patients were 22-48 years of age. All images were acquired with Philips iU22 and GE Voluson 730 Doppler ultrasound systems using a two-dimensional convex array probe with a frequency range of 2.5-5 MHz. 995 thalamic (TT) cross-sectional planes and 995 thalamic sagittal planes from fetal ultrasound images were selected for training and testing. Since the transparent separation cavity (CSP) should be detectable in these two standard clinical planes during fetal gestation of 18-37 weeks, a fetus is judged to lack the corresponding structure if the thalamus, CSP and brain midline cannot be detected in the TT cross-section, or if the CC and CSP cannot be detected in the TT sagittal plane. The image size is 1136 × 852 pixels. Ground-truth images of the biological structures were drawn and measured jointly by three sonographers with extensive experience in obstetric ultrasonography: one sonographer manually extracted the regions and measured the width of the CSP, and the other two performed a secondary examination and refined the corrections.
The effectiveness of the method was experimentally verified on a server with an Intel Xeon E5-2690 v4 CPU, 128 GB of system RAM and two Nvidia RTX 2080 Ti GPUs. The code was implemented on Ubuntu 16.04 LTS with Python 3.6 and PyTorch 0.4. In the experiment, using VGG11 as the encoder path, fetal ultrasound image segmentation was achieved with the proposed CA-Unet architecture equipped with channel attention. The input is an image of size 1136 × 852 pixels, covering the standard thalamic (TT) cross-section and sagittal plane. The output is the segmentation of the thalamus, brain midline and CSP in the TT cross-section; the CSP and CC are segmented in the TT sagittal plane to ensure the integrity of these structures and the standardization of the chosen plane. The width of the CSP is then calculated.
CSP width measurement method: according to professional medical measurement standards, the segmented CSP is extracted from the thalamic cross-section and its maximum transverse diameter is measured as the width of the CSP; the standard range of the CSP width is 2-10 mm. If the width of the CSP is larger than 10 mm, the CSP is judged to be widened; if the width is less than 2 mm, the CSP is judged to be too small; if no CSP is found, a preliminary determination that the CSP is missing should be made.
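A minimal sketch of this three-way clinical decision rule; the function name classify_csp and the label strings are illustrative assumptions.

from typing import Optional

def classify_csp(width_mm: Optional[float]) -> str:
    # Clinical rule from the text: the normal CSP width is 2-10 mm.
    if width_mm is None:
        return "CSP missing (preliminary determination)"
    if width_mm > 10:
        return "CSP widened"
    if width_mm < 2:
        return "CSP too small"
    return "CSP normal"

print(classify_csp(5.6))   # -> CSP normal
print(classify_csp(11.2))  # -> CSP widened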
The ultrasound images of the TT cross-section and the TT sagittal plane were divided into a training set and a test set. The model was trained on the training set with the actual structures labeled by physicians. On the test set, the automatic segmentation results of the present application were compared against the actual structural regions labeled by the physicians for evaluation, and the precision, recall, Dice coefficient and Hausdorff distance were then calculated from the segmentation results.
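The following is a minimal sketch of how these pixel-wise evaluation metrics can be computed from binary prediction and ground-truth masks; it is a generic illustration under the usual definitions, not code from the patent.

import numpy as np

def seg_metrics(pred, gt):
    # Pixel-wise precision, recall and Dice coefficient for binary masks.
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    precision = tp / pred.sum() if pred.sum() else 0.0
    recall = tp / gt.sum() if gt.sum() else 0.0
    denom = pred.sum() + gt.sum()
    dice = 2 * tp / denom if denom else 0.0
    return precision, recall, dice

pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 0, 0], [0, 1, 1]])
print(seg_metrics(pred, gt))  # -> (0.666..., 0.666..., 0.666...)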
Referring to FIGS. 4-7, from top to bottom, rows (a) and (c) show the TT cross-section and rows (b) and (d) show the TT sagittal plane; (a)(b) and (c)(d) come from different cutting planes of the same pregnant woman. In the TT cross-section of the fetal head, the thalamus (TT), the brain midline and the CSP were segmented; in the TT sagittal plane, the present application segments the CSP and the CC. From left to right are the original input image, the enlarged ground-truth segmentation annotated by the sonographer, and the comparison between the predicted segmentation and that ground truth. In fig. 7, the regions hatched with thin vertical and horizontal lines exist in the ground-truth segmentation but were not predicted, while thin oblique hatching marks mispredicted regions; the square-hatched portions of the brain midline and CSP and the vertically hatched portions of the TT and CC are the overlapping portions between ground truth and prediction. It can be seen that the present application performs well on this data set. In general, performance on the TT sagittal plane is better than on the TT cross-section, which may be due to the low signal-to-noise ratio of the TT axial plane.
Next, the performance of the present application is compared with popular segmentation methods, including U-net, TernausNet and U-net++. The table below qualitatively compares the segmentation of multiple physiological structures by precision, recall, Dice coefficient and Hausdorff distance, with the best result for each physiological structure highlighted in bold. As shown in the table, the model of the present application achieved a precision of 79.5%, exceeding U-net and U-net++ by 9.2% and 5% respectively. Its recall is 0.9% lower than that of U-net++, because a trade-off between precision and recall is required. The network of the present application outperforms the other networks in precision, Dice coefficient and Hausdorff distance. These experiments show that a U-net with a VGG11 encoder is superior to the plain U-net architecture. In addition, the channel attention module proposed by the present application exploits the context information carried along the channel dimension, and by Dice coefficient the proposed model performs 7.1% better than the U-net baseline.
The number of training parameters and the test time of each model are also summarized. Compared with the standard U-net structure, the model of the present application adds 1.78M parameters while improving performance by 10% in terms of the Dice coefficient. As shown in the table below, the segmentation results of the CA-Unet network of the present application are significantly better than those of the conventional U-net, TernausNet and U-net++.
(Table: comparison of segmentation precision, recall, Dice coefficient, Hausdorff distance, parameter count and test time for U-net, TernausNet, U-net++ and CA-Unet; the table image is not reproduced in this text version.)
Referring to figs. 8 to 13, the automatic segmentation results of the present application were quantitatively evaluated with the precision, recall and Dice coefficient metrics. The results, reported as histograms with error bars, show the performance on the different biological structures in the TT sagittal plane and the TT cross-section; each column shows the average segmentation performance, and the error bars indicate the stability of the model. The model of the present application performs well in precision and Dice coefficient on all physiological structures, although the improvement is less evident for TT segmentation because the boundary of the TT is not apparent in ultrasound images. Figures 14 and 15 show ROC curves on the two planes for comparison with prior methods; for each ROC curve, the area under the curve (AUC) is calculated. The mean AUC values of the model of the present application reach 80.3% and 81.6% on the two planes respectively, indicating that the proposed method achieves the highest AUC on both planes.
Finally, the CSP width (maximum transverse diameter) was measured at the axial position of the TT cross-section. The table below evaluates the measured CSP widths. Precision, recall and F1-measure were used to evaluate the classification performance for the normal, too-small and widened CSP classes. The F1 score is calculated as F1-score = 2 × P × R / (P + R). As shown in the table below, the F1-score is at least 77%. Automatic measurement of the CSP width saves the sonographer's annotation time and thereby improves working efficiency; in this practical setting, the accuracy of the determination is greatly improved.
(Table: precision, recall and F1-measure of the CSP width classification for the normal, too-small and widened classes; the table image is not reproduced in this text version.)

Claims (6)

1. A segmentation system for a transparent separation cavity ultrasound image, comprising:
a convolution unit (1) for performing convolution calculations;
a modified linear unit (2) for calculating activation values;
a batch normalization unit (3) for normalizing layer inputs by adjusting and scaling activations;
a max pooling unit (4) for down-sampling the encoder output;
a transposed convolution unit (5) for doubling the size of the feature map by up-sampling and halving the number of channels;
an attention mechanism unit (6), comprising a global average pooling process, attention weights and a context-vector generating function, for learning global information so as to enhance meaningful information and suppress meaningless information from the channel perspective.
2. A segmentation method using the segmentation system for transparent separation cavity ultrasound images according to claim 1, comprising the following steps:
A. setting up the U-net main body structure, which comprises an encoding path and a decoding path;
B. down-sampling the output of the encoding path with a maximum pooling operation, where each down-sampling halves the size of the feature map and doubles the number of channels of the output feature map; applying transposed convolution on the decoding path, where each step doubles the size of the feature map and halves the number of channels; establishing skip connections between the cropped encoder feature maps and the decoding path so that context information is added at the top layers of the network; and determining the pixel-wise segmentation probabilities of the different categories through a 1×1 convolution network and a Sigmoid activation function;
C. converting all spatial points of each channel into a single value using global average pooling; the input is a feature map of dimension C × H × W, where C, H and W denote the number of channels, the height and the width of the feature map respectively; global average pooling yields a C × 1 × 1 matrix, and the feature value of the l-th channel is calculated as:
$e_l = \frac{1}{H \times W} \sum_{i=1}^{H} \sum_{j=1}^{W} a_l(i, j)$
where (i, j) denotes the coordinates of a point a within the l-th feature plane; the plane has height H and width W, so the maximum values of (i, j) are H and W respectively; and l ranges over {1, 2, ..., C}, indexing the feature maps;
D. after compressing the global information of each channel into a set of values, calculating a score function to find the input information that should be focused on; step C yields a vector e = {e_l, l ∈ {1, 2, ..., C}}, and:
$s = \xi\{W_s[\gamma(W_a \cdot e + b_a)] + b_s\}$
$\gamma(z) = \max(0, z)$
$\xi(z) = \frac{e^z - e^{-z}}{e^z + e^{-z}}$
where W_a is a weight parameter matrix of dimension R^{(C/sc)×C}, and sc denotes a scaling parameter that reduces the number of channels and thus the amount of computation. With the weight parameter matrices W_a ∈ R^{(C/sc)×C} and W_s ∈ R^{C×(C/sc)}, the bias terms are b_a ∈ R^{(C/sc)×1} and b_s ∈ R^{C×1}. γ denotes the ReLU function, which does not change the dimension; the result is then multiplied by W_s and the bias term b_s is added, which can be viewed as a fully connected layer; finally the output s is obtained through the Tanh activation function ξ, giving a score for each attention feature map;
E. multiplying the input feature map a and the attention weight s element by element to obtain x; replacing the original input a in the U-net main frame with the x produced by the channel attention module and feeding it into the U-net network structure to locate the target region, so that segmentation proceeds within the encoder-decoder framework;
F. segmenting the thalamus, the brain midline and the transparent separation cavity in the thalamic cross-section, and segmenting the transparent separation cavity and the corpus callosum in the thalamic sagittal plane, so as to ensure the integrity of these structures and the standardization of the chosen plane; then calculating the width of the transparent separation cavity, where the conversion between the width of the transparent separation cavity and the image size is:
$CSP_{width} = \frac{h_{width}}{c_{width}} \times e_{depth}$
where h_width denotes the pixel height of the CSP, e_depth corresponds to the actual imaging depth of each picture, and c_width denotes the width of the image, 852 pixels.
3. The segmentation method for the transparent separation cavity ultrasound image segmentation system of claim 2, wherein: in step A, the encoding path is a VGG11 network with the fully connected layers removed, comprising 11 consecutive convolution layers; each stage of the encoding path passes through two repeated 3×3 convolution operations, batch normalization operations and modified linear operations; and a Dropout layer is included between the two blocks.
4. The segmentation method for the transparent separation cavity ultrasound image segmentation system of claim 3, wherein: the Dropout layer sets the probability to 0.1.
5. The segmentation method for the transparent separation cavity ultrasound image segmentation system of claim 2, wherein: in step B, the maximum pooling step size is 2.
6. The segmentation method for the transparent separation cavity ultrasound image segmentation system of claim 5, wherein: in step B, each upsampling comprises two 3 × 3 convolution units (1), a batch normalization unit (3) and a modified linear unit (2).
CN201911394001.8A 2019-12-30 2019-12-30 Segmentation system and method for transparent separation cavity ultrasonic image Active CN111145183B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911394001.8A CN111145183B (en) 2019-12-30 2019-12-30 Segmentation system and method for transparent separation cavity ultrasonic image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911394001.8A CN111145183B (en) 2019-12-30 2019-12-30 Segmentation system and method for transparent separation cavity ultrasonic image

Publications (2)

Publication Number Publication Date
CN111145183A true CN111145183A (en) 2020-05-12
CN111145183B CN111145183B (en) 2022-06-07

Family

ID=70521790

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911394001.8A Active CN111145183B (en) 2019-12-30 2019-12-30 Segmentation system and method for transparent separation cavity ultrasonic image

Country Status (1)

Country Link
CN (1) CN111145183B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111582215A (en) * 2020-05-17 2020-08-25 华中科技大学同济医学院附属协和医院 Scanning identification system and method for normal anatomical structure of biliary-pancreatic system
CN113487568A (en) * 2021-07-05 2021-10-08 陕西科技大学 Liver surface smoothness measuring method based on differential curvature

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110276763A (en) * 2018-03-15 2019-09-24 中南大学 It is a kind of that drawing generating method is divided based on the retinal vessel of confidence level and deep learning
US20190370972A1 (en) * 2018-06-04 2019-12-05 University Of Central Florida Research Foundation, Inc. Capsules for image analysis

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110276763A (en) * 2018-03-15 2019-09-24 中南大学 It is a kind of that drawing generating method is divided based on the retinal vessel of confidence level and deep learning
US20190370972A1 (en) * 2018-06-04 2019-12-05 University Of Central Florida Research Foundation, Inc. Capsules for image analysis

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YAN LI ET AL.: "Automatic Fetal Body and Amniotic Fluid Segmentation from Fetal Ultrasound Images by Encoder-Decoder Network with Inner Layers", 2017 39TH ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY (EMBC) *
CHEN RUIMIN ET AL.: "Automatic Extraction of Infrared Remote Sensing Information Based on Deep Learning", INFRARED *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111582215A (en) * 2020-05-17 2020-08-25 华中科技大学同济医学院附属协和医院 Scanning identification system and method for normal anatomical structure of biliary-pancreatic system
CN113487568A (en) * 2021-07-05 2021-10-08 陕西科技大学 Liver surface smoothness measuring method based on differential curvature
CN113487568B (en) * 2021-07-05 2023-09-19 陕西科技大学 Liver surface smoothness measuring method based on differential curvature

Also Published As

Publication number Publication date
CN111145183B (en) 2022-06-07

Similar Documents

Publication Publication Date Title
CN109523521B (en) Pulmonary nodule classification and lesion positioning method and system based on multi-slice CT image
CN111798462B (en) Automatic delineation method of nasopharyngeal carcinoma radiotherapy target area based on CT image
Cerrolaza et al. Deep learning with ultrasound physics for fetal skull segmentation
CN101692286B (en) Method for acquiring three-view drawing of medical image
CN111145183B (en) Segmentation system and method for transparent separation cavity ultrasonic image
CN115004223A (en) Method and system for automatic detection of anatomical structures in medical images
CN111696126B (en) Multi-view-angle-based multi-task liver tumor image segmentation method
CN113744271B (en) Neural network-based automatic optic nerve segmentation and compression degree measurement and calculation method
Wang et al. Automatic real-time CNN-based neonatal brain ventricles segmentation
Hu et al. Automated placenta segmentation with a convolutional neural network weighted by acoustic shadow detection
CN113240654A (en) Multi-dimensional feature fusion intracranial aneurysm detection method
CN112529886A (en) Attention DenseUNet-based MRI glioma segmentation method
CN112633416A (en) Brain CT image classification method fusing multi-scale superpixels
Irene et al. Segmentation and approximation of blood volume in intracranial hemorrhage patients based on computed tomography scan images using deep learning method
CN111481233B (en) Thickness measuring method for transparent layer of fetal cervical item
Yasrab et al. End-to-end first trimester fetal ultrasound video automated crl and nt segmentation
Chen et al. Direction-guided and multi-scale feature screening for fetal head–pubic symphysis segmentation and angle of progression calculation
Lu et al. Data enhancement and deep learning for bone age assessment using the standards of skeletal maturity of hand and wrist for chinese
EP3838134A1 (en) Biomarker for early detection of alzheimer disease
CN113409447B (en) Coronary artery segmentation method and device based on multi-slice combination
CN113160256B (en) MR image placenta segmentation method for multitasking countermeasure model
Bhalla et al. Automatic fetus head segmentation in ultrasound images by attention based encoder decoder network
WO2023133929A1 (en) Ultrasound-based human tissue symmetry detection and analysis method
Ayu et al. Amniotic Fluids Classification Using Combination of Rules-Based and Random Forest Algorithm
Joharah et al. Evaluation of fetal head circumference (HC) and biparietal diameter (BPD) in ultrasound images using multi-task deep convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant