CN110974302A - Automatic detection method and system for fetal head volume in ultrasonic image - Google Patents


Info

Publication number
CN110974302A
Authority
CN
China
Prior art keywords
layer
size
convolution
fetal
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910997674.6A
Other languages
Chinese (zh)
Other versions
CN110974302B (en)
Inventor
李肯立
李胜利
翟宇轩
朱宁波
文华轩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Lanxiang Zhiying Technology Co.,Ltd.
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN201910997674.6A
Publication of CN110974302A
Application granted
Publication of CN110974302B
Legal status: Active
Anticipated expiration


Classifications

    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00 - Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/08 - Detecting organic movements or changes, e.g. tumours, cysts, swellings
    • A61B 8/0808 - Detecting organic movements or changes for diagnosis of the brain
    • A61B 8/0866 - Detecting organic movements or changes involving foetal diagnosis; pre-natal or peri-natal diagnosis of the baby
    • A61B 8/48 - Diagnostic techniques
    • A61B 8/483 - Diagnostic techniques involving the acquisition of a 3D volume of data
    • A61B 8/52 - Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/5215 - Devices using data or image processing involving processing of medical diagnostic data
    • A61B 8/5223 - Devices using data or image processing involving processing of medical diagnostic data for extracting a diagnostic or physiological parameter

Abstract

The invention discloses an automatic detection method for fetal head volume in ultrasonic images, which aims to intelligently detect the fetal head volume from fetal craniocerebral three-dimensional ultrasonic volume data and comprises the following steps: first, acquiring a large data set consisting of fetal craniocerebral three-dimensional ultrasonic volume data and the fetal craniocerebral positions labeled by doctors; then, training a 3D FCN network with the acquired data set; and finally, inputting new fetal craniocerebral three-dimensional ultrasonic data into the trained 3D FCN network to detect the fetal head volume in each volume. The invention solves the technical problems of existing fetal craniocerebral volume detection methods: poor image definition and accuracy, a sonographer workload so large that it hinders wide application, and inconsistent detection results caused by the differing skill levels of the sonographers using them.

Description

Automatic detection method and system for fetal head volume in ultrasonic image
Technical Field
The invention belongs to the technical field of prenatal ultrasonic examination, and particularly relates to an automatic detection method and system for fetal head volume in an ultrasonic image.
Background
Fetal intracranial structural abnormalities are among the most common congenital malformations, with an incidence of 1%-3%, and they affect neurological function to varying degrees both in utero and after birth. Detecting the state of fetal craniocerebral development during pregnancy therefore has important clinical significance. Beyond structural evaluation, the developmental size of each intracranial structure is also clinically important, since it can be used to assess whether any craniocerebral structure (the whole brain, cerebellar hemispheres, vermis, and the like) is hypoplastic. Currently, the most common indexes for evaluating fetal craniocerebral development in conventional fetal ultrasound are the biparietal diameter, head circumference, and transverse cerebellar diameter. Strictly speaking, however, volume values reflect organ growth more accurately than diameter values, as has also been demonstrated in evaluations of other fetal organs.
The existing method for detecting fetal brain volume assumes that the fetal brain is a regular sphere and calculates the fetal head volume from two-dimensional radial-line measurements. This method has several non-negligible drawbacks. First, because the speed and direction of manual scanning are unstable, and because the long acquisition time invites uncontrollable factors such as motion artifacts from movements of the pregnant woman or the fetus, the resulting images have poor definition and the detection accuracy is low. Second, since the fetal brain is a three-dimensional structure, converting it into two-dimensional sections for a doctor to annotate is time-consuming, so the sonographer's workload is large, which limits the wide application of the method. Third, sonographers of different skill levels may reach different diagnostic conclusions with this method, leading to inconsistent detection results.
Disclosure of Invention
The invention aims to solve the technical problems of existing fetal craniocerebral volume detection methods: poor image definition and accuracy; a sonographer workload so large that it hinders wide application of the method; and inconsistent detection results caused by sonographers of different skill levels reaching different diagnoses.
To achieve the above object, according to one aspect of the present invention, there is provided an automatic detection method for fetal head volume in an ultrasound image, comprising the steps of:
(1) acquiring a data set;
(2) preprocessing the data set obtained in step (1) to obtain a preprocessed fetal craniocerebral three-dimensional ultrasonic data set;
(3) inputting the fetal craniocerebral three-dimensional ultrasonic data set preprocessed in step (2) into a trained 3D FCN network to obtain the voxels of the fetal craniocerebral three-dimensional ultrasonic data;
(4) calculating the fetal head volume from the voxels of the fetal craniocerebral three-dimensional ultrasonic data obtained in step (3).
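The four steps above can be sketched end to end as follows. This is a minimal illustration, not the patented implementation: a simple thresholding function stands in for the trained 3D FCN network, and all function names and parameter values are hypothetical.

```python
import numpy as np

def preprocess(volume):
    """Step (2): denoise (omitted here) and min-max normalize to [0, 1]."""
    v = volume.astype(float)
    return (v - v.min()) / (v.max() - v.min() + 1e-8)

def segment_head(volume, threshold=0.5):
    """Step (3): placeholder for the trained 3D FCN; returns a binary voxel mask."""
    return volume > threshold

def head_volume(mask, unit_voxel_volume):
    """Step (4): V = Vp * Uv, with Vp the number of foreground voxels."""
    return int(mask.sum()) * unit_voxel_volume

rng = np.random.default_rng(0)
raw = rng.random((16, 16, 16))  # step (1): toy stand-in for a 3-D ultrasound volume
vol = head_volume(segment_head(preprocess(raw)), unit_voxel_volume=0.001)
print(vol)
```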
Preferably, the data set includes fetal craniocerebral three-dimensional ultrasound data acquired from a three-dimensional ultrasound device, and fetal craniocerebral location information manually labeled by a sonographer for each fetal craniocerebral three-dimensional ultrasound data.
Preferably, the preprocessing of the data set obtained in step (1) in step (2) is by a median filtering method.
Preferably, the head volume V is calculated by the formula V = Vp × Uv, where Vp is the number of voxels obtained in step (3) and Uv is the volume of a unit voxel.
Preferably, the 3D FCN network is trained by:
A. acquiring a data set which comprises fetal craniocerebral three-dimensional ultrasonic data acquired from a three-dimensional ultrasonic device and fetal craniocerebral position information manually labeled by a sonographer for each fetal craniocerebral three-dimensional ultrasonic data;
B. denoising the data set obtained in step A by a median filtering method, normalizing the denoised data set, and randomly dividing the normalized data set into a training set, a verification set and a test set;
C. inputting the training set from the data set normalized in step B into a 3D FCN network to obtain an inference output of the fetal head volume, and inputting that inference output into the loss function of the 3D FCN network to obtain a loss value;
D. optimizing the loss function in the 3D FCN network by a stochastic gradient descent algorithm using the loss value obtained in step C, so as to update the 3D FCN network;
E. repeating steps C and D for the remaining data in the training set obtained in step B until the 3D FCN network converges, thereby obtaining the trained 3D FCN network.
Preferably, the loss function is L(x, y) = (x - y)^2, where x is the fetal head volume obtained from the fetal craniocerebral position information manually labeled by the sonographer (specifically, the number of labeled voxels multiplied by the unit-voxel volume), and y is the inference output of the fetal head volume.
Preferably, the network structure of the 3D FCN network is as follows:
the first layer is an input layer, whose input is a matrix of 128 × 128 × 128 × 1 voxels;
the second layer is a convolution layer with a convolution kernel size of 3 × 3 × 3, 32 convolution kernels and a step size of 1; this layer is padded in SAME mode and outputs a matrix of size 128 × 128 × 128 × 32;
the third layer is a pooling layer with a pooling window size of 2 × 2 × 2 and a step size of (2, 2, 2); this layer outputs a matrix of size 64 × 64 × 64 × 32;
the fourth layer is a convolution layer with a convolution kernel size of 3 × 3 × 3, 64 convolution kernels and a step size of 1; this layer is padded in SAME mode and outputs a matrix of size 64 × 64 × 64 × 64;
the fifth layer is a pooling layer with a pooling window size of 2 × 2 × 2 and a step size of (2, 2, 2); this layer outputs a matrix of size 32 × 32 × 32 × 64;
the sixth layer is a convolution layer with a convolution kernel size of 3 × 3 × 3, 128 convolution kernels and a step size of 1; this layer is padded in SAME mode and outputs a matrix of size 32 × 32 × 32 × 128;
the seventh layer is a convolution layer with a convolution kernel size of 3 × 3 × 3, 128 convolution kernels and a step size of 1; this layer is padded in SAME mode and outputs a matrix of size 32 × 32 × 32 × 128;
the eighth layer is a pooling layer with a pooling window size of 2 × 2 × 2 and a step size of (2, 2, 2); this layer outputs a matrix of size 16 × 16 × 16 × 128;
the ninth layer is a convolution layer with a convolution kernel size of 3 × 3 × 3, 256 convolution kernels and a step size of 1; this layer is padded in SAME mode and outputs a matrix of size 16 × 16 × 16 × 256;
the tenth layer is a convolution layer with a convolution kernel size of 3 × 3 × 3, 256 convolution kernels and a step size of 1; this layer is padded in SAME mode and outputs a matrix of size 16 × 16 × 16 × 256;
the eleventh layer is a deconvolution layer with a deconvolution kernel size of 4 × 4 × 4 and 128 deconvolution kernels; it performs a 2× upsampling operation and outputs a matrix of size 32 × 32 × 32 × 128;
the twelfth layer is a convolution layer with a convolution kernel size of 1 × 1 × 1, 128 convolution kernels and a step size of 1; this layer is padded in SAME mode and outputs a matrix of size 32 × 32 × 32 × 128;
the thirteenth layer is a convolution layer with a convolution kernel size of 3 × 3 × 3, 128 convolution kernels and a step size of 1; this layer is padded in SAME mode and outputs a matrix of size 32 × 32 × 32 × 128;
the fourteenth layer is a deconvolution layer with a deconvolution kernel size of 4 × 4 × 4 and 64 deconvolution kernels; it performs a 2× upsampling operation and outputs a matrix of size 64 × 64 × 64 × 64;
the fifteenth layer is a convolution layer with a convolution kernel size of 1 × 1 × 1, 64 convolution kernels and a step size of 1; this layer is padded in SAME mode and outputs a matrix of size 64 × 64 × 64 × 64;
the sixteenth layer is a convolution layer with a convolution kernel size of 3 × 3 × 3, 64 convolution kernels and a step size of 1; this layer is padded in SAME mode and outputs a matrix of size 64 × 64 × 64 × 64;
the seventeenth layer is a deconvolution layer with a deconvolution kernel size of 4 × 4 × 4 and 32 deconvolution kernels; it performs a 2× upsampling operation and outputs a matrix of size 128 × 128 × 128 × 32;
the eighteenth layer is a convolution layer with a convolution kernel size of 1 × 1 × 1, 32 convolution kernels and a step size of 1; this layer is padded in SAME mode and outputs a matrix of size 128 × 128 × 128 × 32;
the nineteenth layer is a convolution layer with a convolution kernel size of 3 × 3 × 3, 32 convolution kernels and a step size of 1; this layer is padded in SAME mode and outputs a matrix of size 128 × 128 × 128 × 32;
the twentieth layer is a convolution layer with a convolution kernel size of 1 × 1 × 1, 1 convolution kernel and a step size of 1; this layer is padded in SAME mode and outputs a matrix of size 128 × 128 × 128 × 1.
According to another aspect of the present invention, there is provided a system for automatic detection of fetal head volume in an ultrasound image, comprising:
a first module for obtaining a data set;
a second module for preprocessing the data set acquired by the first module to obtain a preprocessed fetal craniocerebral three-dimensional ultrasonic data set;
a third module for inputting the fetal craniocerebral three-dimensional ultrasonic data set preprocessed by the second module into a trained 3D FCN network to obtain the voxels of the fetal craniocerebral three-dimensional ultrasonic data; and
a fourth module for calculating the fetal head volume from the voxels of the fetal craniocerebral three-dimensional ultrasonic data obtained by the third module.
In general, compared with the prior art, the above technical solution contemplated by the present invention can achieve the following beneficial effects:
(1) because the invention uses a three-dimensional imaging technique, it can provide an independent image of any layer and eliminate the influence of overlapping anterior and posterior tissues on the image, thereby solving the technical problem of low image definition and accuracy in existing fetal craniocerebral image acquisition;
(2) because the fetal head volume is detected intelligently and automatically through deep learning, the method reduces the technical requirements on and the workload of doctors, solving the technical problem that existing fetal head volume measurement methods are difficult to apply widely because of the high demands they place on the doctor's professional level;
(3) because the fetal craniocerebral volume data sets adopted by the invention are screened by professional sonographers, and the data used for training follow a single, well-defined standard for volume evaluation, the technical problem of inconsistent detection results caused by differences between the evaluations of different doctors in existing fetal head volume measurement methods is solved.
Drawings
FIG. 1 is a flow chart of a method for automatic detection of fetal head volume in an ultrasound image of the present invention;
Fig. 2(a) and 2(b) show the data set acquired in step (1) of the automatic detection method of the present invention, wherein fig. 2(a) is the fetal craniocerebral three-dimensional ultrasonic data and fig. 2(b) is the fetal craniocerebral position information manually labeled by a sonographer.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
The invention aims to provide an automatic detection method for fetal head volume in ultrasonic images: by deep learning on a large amount of normal and abnormal fetal craniofacial three-dimensional ultrasonic image data from early pregnancy, a series of standard fetal facial sections can be identified automatically, intelligently and rapidly from the three-dimensional volume data.
The basic idea of the invention is to acquire the fetal head with three-dimensional ultrasound, analyze it with artificial-intelligence ultrasound, and automatically and rapidly calculate the volumes of multiple craniocerebral structures. In this way the fetal brain volume can be detected accurately, overcoming a technical defect of existing fetal brain volume detection methods, which can yield false-positive or false-negative measurements when the fetal head is deformed under pressure.
As shown in fig. 1, the present invention provides a method for automatically detecting the volume of a fetal head in an ultrasound image, comprising the following steps:
(1) acquiring a data set;
specifically, the data set includes fetal craniocerebral three-dimensional ultrasound data (as shown in fig. 2 (a)) acquired from three-dimensional ultrasound equipment manufactured by mainstream manufacturers on the market (including maire, union photograph, siemens and the like), and fetal craniocerebral position information manually labeled by sonographers for each fetal craniocerebral three-dimensional ultrasound data (as shown in fig. 2 (b)).
(2) Preprocessing the data set obtained in step (1) to obtain a preprocessed fetal craniocerebral three-dimensional ultrasonic data set.
Specifically, the data set obtained in step (1) is first denoised by a median filtering method, and the denoised data set is then normalized.
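A minimal sketch of this preprocessing, assuming a 3 × 3 × 3 median-filter kernel (the kernel size is not specified in the description) and min-max normalization:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def median_filter_3d(volume, k=3):
    """Median filter over a k x k x k neighborhood; edges are edge-padded."""
    pad = k // 2
    padded = np.pad(volume, pad, mode="edge")
    windows = sliding_window_view(padded, (k, k, k))     # shape (D, H, W, k, k, k)
    return np.median(windows.reshape(*volume.shape, -1), axis=-1)

def normalize(volume):
    """Min-max normalization to [0, 1]."""
    v = volume.astype(float)
    return (v - v.min()) / (v.max() - v.min() + 1e-8)

noisy = np.random.default_rng(1).random((8, 8, 8))       # toy 3-D volume
clean = normalize(median_filter_3d(noisy))
```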
(3) Inputting the fetal craniocerebral three-dimensional ultrasonic data set preprocessed in step (2) into a trained three-dimensional fully convolutional network (3D FCN network for short) to obtain the voxels of the fetal craniocerebral three-dimensional ultrasonic data.
(4) Calculating the fetal head volume from the voxels of the fetal craniocerebral three-dimensional ultrasonic data obtained in step (3).
Specifically, the head volume V is calculated by the formula V = Vp × Uv, where Vp is the number of voxels obtained in step (3) and Uv is the volume of a unit voxel (this value is known; typically each unit voxel corresponds to 2 cm³ to 3 cm³).
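A worked instance of the formula, with an assumed voxel count and a unit-voxel volume taken from the stated 2 cm³ to 3 cm³ range (both values are illustrative, not from the patent):

```python
Vp = 200     # number of fetal-head voxels reported by the 3D FCN (assumed)
Uv = 2.5     # cm^3 per unit voxel (assumed, within the stated 2-3 cm^3)
V = Vp * Uv  # head volume V = Vp x Uv
print(V)     # 500.0
```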
Specifically, the 3D FCN network in the present invention is obtained by training through the following steps:
A. a data set is acquired, comprising fetal craniocerebral three-dimensional ultrasonic data acquired from three-dimensional ultrasonic equipment made by mainstream manufacturers on the market (including Mindray, United Imaging, Siemens and the like), fetal craniocerebral position information manually labeled by a sonographer for each item of fetal craniocerebral three-dimensional ultrasonic data, and the voxel counts calculated after the sonographer labels the fetal cranium.
B. B, denoising the data set obtained in the step A by adopting a median filtering method, normalizing the denoised data set, and randomly dividing the normalized data set into a training set, a verification set and a test set;
specifically, the preprocessed data set is randomly divided into 3 parts, 70% of which is used as a training set (Trainset), 20% of which is used as a verification set (Validation set), and 10% of which is used as a Test set (Test set). In this example, there are a total of 800 data sets, with the training set comprising 560 data sets, the validation set comprising 160 data sets, and the test set comprising 80 data sets.
For the 3D FCN network used in the present invention, the network structure is as follows:
the first layer is an input layer, whose input is a matrix of 128 × 128 × 128 × 1 voxels;
the second layer is a convolution layer with a convolution kernel size of 3 × 3 × 3, 32 convolution kernels and a step size of 1; this layer is padded in SAME mode and outputs a matrix of size 128 × 128 × 128 × 32;
the third layer is a pooling layer with a pooling window size of 2 × 2 × 2 and a step size of (2, 2, 2); this layer outputs a matrix of size 64 × 64 × 64 × 32;
the fourth layer is a convolution layer with a convolution kernel size of 3 × 3 × 3, 64 convolution kernels and a step size of 1; this layer is padded in SAME mode and outputs a matrix of size 64 × 64 × 64 × 64;
the fifth layer is a pooling layer with a pooling window size of 2 × 2 × 2 and a step size of (2, 2, 2); this layer outputs a matrix of size 32 × 32 × 32 × 64;
the sixth layer is a convolution layer with a convolution kernel size of 3 × 3 × 3, 128 convolution kernels and a step size of 1; this layer is padded in SAME mode and outputs a matrix of size 32 × 32 × 32 × 128;
the seventh layer is a convolution layer with a convolution kernel size of 3 × 3 × 3, 128 convolution kernels and a step size of 1; this layer is padded in SAME mode and outputs a matrix of size 32 × 32 × 32 × 128;
the eighth layer is a pooling layer with a pooling window size of 2 × 2 × 2 and a step size of (2, 2, 2); this layer outputs a matrix of size 16 × 16 × 16 × 128;
the ninth layer is a convolution layer with a convolution kernel size of 3 × 3 × 3, 256 convolution kernels and a step size of 1; this layer is padded in SAME mode and outputs a matrix of size 16 × 16 × 16 × 256;
the tenth layer is a convolution layer with a convolution kernel size of 3 × 3 × 3, 256 convolution kernels and a step size of 1; this layer is padded in SAME mode and outputs a matrix of size 16 × 16 × 16 × 256;
the eleventh layer is a deconvolution layer with a deconvolution kernel size of 4 × 4 × 4 and 128 deconvolution kernels; it performs a 2× upsampling operation and outputs a matrix of size 32 × 32 × 32 × 128;
the twelfth layer is a convolution layer with a convolution kernel size of 1 × 1 × 1, 128 convolution kernels and a step size of 1; this layer is padded in SAME mode and outputs a matrix of size 32 × 32 × 32 × 128;
the thirteenth layer is a convolution layer with a convolution kernel size of 3 × 3 × 3, 128 convolution kernels and a step size of 1; this layer is padded in SAME mode and outputs a matrix of size 32 × 32 × 32 × 128;
the fourteenth layer is a deconvolution layer with a deconvolution kernel size of 4 × 4 × 4 and 64 deconvolution kernels; it performs a 2× upsampling operation and outputs a matrix of size 64 × 64 × 64 × 64;
the fifteenth layer is a convolution layer with a convolution kernel size of 1 × 1 × 1, 64 convolution kernels and a step size of 1; this layer is padded in SAME mode and outputs a matrix of size 64 × 64 × 64 × 64;
the sixteenth layer is a convolution layer with a convolution kernel size of 3 × 3 × 3, 64 convolution kernels and a step size of 1; this layer is padded in SAME mode and outputs a matrix of size 64 × 64 × 64 × 64;
the seventeenth layer is a deconvolution layer with a deconvolution kernel size of 4 × 4 × 4 and 32 deconvolution kernels; it performs a 2× upsampling operation and outputs a matrix of size 128 × 128 × 128 × 32;
the eighteenth layer is a convolution layer with a convolution kernel size of 1 × 1 × 1, 32 convolution kernels and a step size of 1; this layer is padded in SAME mode and outputs a matrix of size 128 × 128 × 128 × 32;
the nineteenth layer is a convolution layer with a convolution kernel size of 3 × 3 × 3, 32 convolution kernels and a step size of 1; this layer is padded in SAME mode and outputs a matrix of size 128 × 128 × 128 × 32;
the twentieth layer is a convolution layer with a convolution kernel size of 1 × 1 × 1, 1 convolution kernel and a step size of 1; this layer is padded in SAME mode and outputs a matrix of size 128 × 128 × 128 × 1.
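Assuming the stated sizes describe cubic 128 × 128 × 128 single-channel volumes (the stride of (2, 2, 2) and the 2× upsampling suggest this), the tensor sizes through the twenty layers can be traced with the following bookkeeping sketch; it checks the arithmetic of the architecture, it is not an implementation of the network:

```python
def trace_shapes(size=128):
    """Trace (D, H, W, channels) through the 20-layer encoder-decoder:
    SAME-padded stride-1 convolution keeps the spatial size,
    2x2x2 pooling halves it, and 2x deconvolution doubles it."""
    s, shapes = size, []
    def log(name, channels):
        shapes.append((name, (s, s, s, channels)))
    log("input", 1)
    log("conv", 32);  s //= 2; log("pool", 32)
    log("conv", 64);  s //= 2; log("pool", 64)
    log("conv", 128); log("conv", 128); s //= 2; log("pool", 128)
    log("conv", 256); log("conv", 256)
    s *= 2; log("deconv", 128); log("conv", 128); log("conv", 128)
    s *= 2; log("deconv", 64);  log("conv", 64);  log("conv", 64)
    s *= 2; log("deconv", 32);  log("conv", 32);  log("conv", 32)
    log("conv-out", 1)
    return shapes

for name, shape in trace_shapes():
    print(f"{name:9s} {shape}")
```

The trace confirms that the decoder restores the input resolution: the network ends at 128 × 128 × 128 × 1, matching the input layer.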
C. The training set (560 data sets in this example) from the data set normalized in step B is input into the 3D FCN network to obtain an inference output of the fetal head volume, and this inference output is input into the loss function of the 3D FCN network to obtain a loss value.
Specifically, the loss function is L(x, y) = (x - y)^2, where x is the fetal head volume obtained from the fetal craniocerebral position information manually labeled by the sonographer (specifically, the number of labeled voxels multiplied by the unit-voxel volume), and y is the inference output of the fetal head volume.
D. Optimizing the loss function in the 3D FCN network by the Stochastic Gradient Descent (SGD) algorithm using the loss value obtained in step C, so as to update the 3D FCN network;
E. Repeating steps C and D for the remaining data in the training set obtained in step B until the 3D FCN network converges, thereby obtaining the trained 3D FCN network.
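Steps C to E can be illustrated with a toy stochastic-gradient-descent loop that minimizes the squared-error loss L(x, y) = (x - y)^2. A one-parameter linear model stands in for the 3D FCN network, whose weights are far too large to show here; all data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
w_true = 3.0
features = rng.random((560, 1))        # one synthetic feature per training case
volumes = w_true * features[:, 0]      # synthetic "ground-truth" head volumes

w, lr = 0.0, 0.1
for epoch in range(50):
    for i in rng.permutation(len(features)):   # one SGD step per sample
        x_i, y_i = features[i, 0], volumes[i]
        pred = w * x_i                         # forward pass (inference output)
        grad = 2 * (pred - y_i) * x_i          # dL/dw for L = (pred - y)^2
        w -= lr * grad                         # SGD update
print(round(w, 3))
```

On this noise-free data the parameter converges to the true value 3.0, mirroring how the loss value from step C drives the update of step D until convergence in step E.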
F. Validating the trained 3D FCN network with the validation set from the data set obtained in step B;
G. Testing the trained 3D FCN network with the test set from the data set obtained in step B.
Test results
The three-dimensional ultrasonic images in the test set are input into the trained 3D FCN network, and the network automatically identifies the fetal head volume.
The method uses Mean Square Error (MSE) to measure the similarity of fetal craniocerebral ultrasonic images, and uses Mean Absolute Percentage Error (MAPE) to measure the detection rate of fetal head volume.
Specifically, the mean square error is calculated as:
MSE = (1/n) Σ (y_i - ŷ_i)²
where n is the number of samples in the data set, y_i is the actual value of the fetal head volume, and ŷ_i is the inference output of the fetal head volume. The mean absolute percentage error is calculated as:
MAPE = (1/n) Σ |(x_i - x̂_i) / x_i|
where n is the number of samples in the data set, x_i is the actual value of the fetal head volume, and x̂_i is the inference output of the fetal head volume. The head volume detection rate is 1 - MAPE. The Mean Square Error (MSE), Mean Absolute Percentage Error (MAPE), and head volume detection rate of the trained model on the new test set are shown in Table 1 below.
TABLE 1
[Table 1 appears only as an image in the source; it reports the MSE, MAPE, and head volume detection rate on the test set, and its numeric values are not recoverable here.]
As can be seen from Table 1, the head volume detection rate of the method of the invention is high, while the Mean Square Error (MSE) and Mean Absolute Percentage Error (MAPE) are low.
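For reference, the MSE, MAPE, and detection-rate computations can be reproduced as follows; the volume values are illustrative and are not taken from the patent:

```python
import numpy as np

# Illustrative actual vs. predicted fetal head volumes (hypothetical values).
actual = np.array([250.0, 300.0, 280.0])
predicted = np.array([245.0, 312.0, 276.0])

mse = np.mean((actual - predicted) ** 2)                 # (1/n) sum (y - y_hat)^2
mape = np.mean(np.abs((actual - predicted) / actual))    # (1/n) sum |(x - x_hat)/x|
detection_rate = 1 - mape                                # head volume detection rate
print(round(mse, 2), round(mape, 4), round(detection_rate, 4))
```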
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (8)

1. A method for automatically detecting the head volume of a fetus in an ultrasonic image is characterized by comprising the following steps:
(1) acquiring a data set;
(2) preprocessing the data set obtained in step (1) to obtain a preprocessed fetal craniocerebral three-dimensional ultrasonic data set;
(3) inputting the fetal craniocerebral three-dimensional ultrasonic data set preprocessed in step (2) into a trained 3D FCN network to obtain the voxels of the fetal craniocerebral three-dimensional ultrasonic data;
(4) calculating the fetal head volume from the voxels of the fetal craniocerebral three-dimensional ultrasonic data obtained in step (3).
2. The method of claim 1, wherein the data set includes fetal craniocerebral three-dimensional ultrasound data obtained from a three-dimensional ultrasound device and fetal craniocerebral position information manually labeled by a sonographer for each fetal craniocerebral three-dimensional ultrasound data.
3. The method for automatically detecting the volume of the fetal head in the ultrasonic image of claim 1, wherein the preprocessing of the data set obtained in the step (1) in the step (2) comprises a median filtering process and a normalization process which are performed sequentially.
4. The method for automatically detecting the volume of the fetal head in an ultrasonic image of claim 1, wherein the head volume V is calculated by the formula V = Vp × Uv, where Vp is the number of voxels obtained in step (3) and Uv is the volume of a unit voxel.
5. The method for automatically detecting the volume of the fetal head in an ultrasound image of claim 1, wherein the 3D FCN network is trained by the following steps:
A. acquiring a data set which comprises fetal craniocerebral three-dimensional ultrasonic data acquired from a three-dimensional ultrasonic device and fetal craniocerebral position information manually labeled by a sonographer for each fetal craniocerebral three-dimensional ultrasonic data;
B. denoising the data set obtained in step A by a median filtering method, normalizing the denoised data set, and randomly dividing the normalized data set into a training set, a verification set and a test set;
C. inputting the training set from the data set normalized in step B into a 3D FCN network to obtain an inference output of the fetal head volume, and inputting that inference output into the loss function of the 3D FCN network to obtain a loss value;
D. optimizing the loss function in the 3D FCN network by a stochastic gradient descent algorithm using the loss value obtained in step C, so as to update the 3D FCN network;
E. repeating steps C and D for the remaining data in the training set obtained in step B until the 3D FCN network converges, thereby obtaining the trained 3D FCN network.
6. The method of claim 5, wherein the loss function is L(x, y) = (x − y)², where x is the fetal head volume obtained from the fetal craniocerebral position information manually labeled by the sonographer, specifically equal to the product of the number of labeled voxels and the volume of a unit voxel, and y is the inferred output of the fetal head volume.
7. The method for automatically detecting the volume of the fetal head in an ultrasonic image of claim 5, wherein the network structure of the 3D FCN is as follows:
the first layer is an input layer, the input of which is a matrix of size 128 × 128 × 128 × 1;
the second layer is a convolution layer with a convolution kernel size of 3 × 3 × 3, 32 convolution kernels, and a step size of 1; this layer is padded using the SAME mode and outputs a matrix of size 128 × 128 × 128 × 32;
the third layer is a pooling layer with a pooling window size of 2 × 2 × 2 and a step size of (2, 2, 2); this layer outputs a matrix of size 64 × 64 × 64 × 32;
the fourth layer is a convolution layer with a convolution kernel size of 3 × 3 × 3, 64 convolution kernels, and a step size of 1; this layer is padded using the SAME mode and outputs a matrix of size 64 × 64 × 64 × 64;
the fifth layer is a pooling layer with a pooling window size of 2 × 2 × 2 and a step size of (2, 2, 2); this layer outputs a matrix of size 32 × 32 × 32 × 64;
the sixth layer is a convolution layer with a convolution kernel size of 3 × 3 × 3, 128 convolution kernels, and a step size of 1; this layer is padded using the SAME mode and outputs a matrix of size 32 × 32 × 32 × 128;
the seventh layer is a convolution layer with a convolution kernel size of 3 × 3 × 3, 128 convolution kernels, and a step size of 1; this layer is padded using the SAME mode and outputs a matrix of size 32 × 32 × 32 × 128;
the eighth layer is a pooling layer with a pooling window size of 2 × 2 × 2 and a step size of (2, 2, 2); this layer outputs a matrix of size 16 × 16 × 16 × 128;
the ninth layer is a convolution layer with a convolution kernel size of 3 × 3 × 3, 256 convolution kernels, and a step size of 1; this layer is padded using the SAME mode and outputs a matrix of size 16 × 16 × 16 × 256;
the tenth layer is a convolution layer with a convolution kernel size of 3 × 3 × 3, 256 convolution kernels, and a step size of 1; this layer is padded using the SAME mode and outputs a matrix of size 16 × 16 × 16 × 256;
the eleventh layer is a deconvolution layer with a deconvolution kernel size of 4 × 4 × 4 and 128 deconvolution kernels; this layer performs a 2× upsampling operation and outputs a matrix of size 32 × 32 × 32 × 128;
the twelfth layer is a convolution layer with a convolution kernel size of 1 × 1 × 1, 128 convolution kernels, and a step size of 1; this layer is padded using the SAME mode and outputs a matrix of size 32 × 32 × 32 × 128;
the thirteenth layer is a convolution layer with a convolution kernel size of 3 × 3 × 3, 128 convolution kernels, and a step size of 1; this layer is padded using the SAME mode and outputs a matrix of size 32 × 32 × 32 × 128;
the fourteenth layer is a deconvolution layer with a deconvolution kernel size of 4 × 4 × 4 and 64 deconvolution kernels; this layer performs a 2× upsampling operation and outputs a matrix of size 64 × 64 × 64 × 64;
the fifteenth layer is a convolution layer with a convolution kernel size of 1 × 1 × 1, 64 convolution kernels, and a step size of 1; this layer is padded using the SAME mode and outputs a matrix of size 64 × 64 × 64 × 64;
the sixteenth layer is a convolution layer with a convolution kernel size of 3 × 3 × 3, 64 convolution kernels, and a step size of 1; this layer is padded using the SAME mode and outputs a matrix of size 64 × 64 × 64 × 64;
the seventeenth layer is a deconvolution layer with a deconvolution kernel size of 4 × 4 × 4 and 32 deconvolution kernels; this layer performs a 2× upsampling operation and outputs a matrix of size 128 × 128 × 128 × 32;
the eighteenth layer is a convolution layer with a convolution kernel size of 1 × 1 × 1, 32 convolution kernels, and a step size of 1; this layer is padded using the SAME mode and outputs a matrix of size 128 × 128 × 128 × 32;
the nineteenth layer is a convolution layer with a convolution kernel size of 3 × 3 × 3, 32 convolution kernels, and a step size of 1; this layer is padded using the SAME mode and outputs a matrix of size 128 × 128 × 128 × 32;
the twentieth layer is a convolution layer with a convolution kernel size of 1 × 1 × 1, 1 convolution kernel, and a step size of 1; this layer is padded using the SAME mode and outputs a matrix of size 128 × 128 × 128 × 1.
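The shape arithmetic behind the twenty layers of claim 7 can be checked mechanically: SAME-padded stride-1 convolutions keep the spatial size, each 2 × 2 × 2 pooling with stride 2 halves it, and each 2× upsampling deconvolution doubles it. The pure-Python trace below encodes only that bookkeeping, not the network weights:

```python
# Trace spatial side length and channel count through the 20 layers.
# Layer 1 is the 128^3 x 1 input; the plan lists layers 2-20.
def trace_shapes(side=128):
    s, shapes = side, [(side, 1)]
    plan = [("conv", 32), ("pool", 32), ("conv", 64), ("pool", 64),
            ("conv", 128), ("conv", 128), ("pool", 128),
            ("conv", 256), ("conv", 256),
            ("deconv", 128), ("conv", 128), ("conv", 128),
            ("deconv", 64), ("conv", 64), ("conv", 64),
            ("deconv", 32), ("conv", 32), ("conv", 32), ("conv", 1)]
    for op, channels in plan:
        if op == "pool":      # 2x2x2 window, stride 2: halves the side
            s //= 2
        elif op == "deconv":  # 2x upsampling: doubles the side
            s *= 2
        # SAME-padded stride-1 conv: side unchanged
        shapes.append((s, channels))
    return shapes

shapes = trace_shapes()  # [(side, channels)] for all 20 layers
```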
8. An automatic detection system for fetal head volume in an ultrasound image, comprising:
a first module for acquiring a data set;
a second module for preprocessing the data set acquired by the first module to obtain a preprocessed fetal craniocerebral three-dimensional ultrasound data set;
a third module for inputting the fetal craniocerebral three-dimensional ultrasound data set preprocessed by the second module into the trained 3D FCN network to obtain voxels of the fetal craniocerebral three-dimensional ultrasound data;
and a fourth module for calculating the fetal head volume from the voxels of the fetal craniocerebral three-dimensional ultrasound data obtained by the third module.
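The four modules of claim 8 compose into a linear pipeline. In the sketch below, `module3_segment` is a threshold stub standing in for the trained 3D FCN, and all numeric values (volume size, threshold, unit voxel volume) are illustrative assumptions:

```python
import numpy as np

def module1_acquire():
    # stand-in for acquisition from a three-dimensional ultrasound device
    return np.random.default_rng(1).uniform(0.0, 255.0, size=(16, 16, 16))

def module2_preprocess(vol):
    # min-max normalization (median filtering omitted for brevity)
    lo, hi = vol.min(), vol.max()
    return (vol - lo) / (hi - lo + 1e-8)

def module3_segment(vol):
    # stub for the trained 3D FCN: a fixed threshold as placeholder
    return (vol > 0.5).astype(np.uint8)

def module4_volume(mask, unit_voxel_mm3=0.125):
    # V = Vp * Uv, as in claim 4
    return int(mask.sum()) * unit_voxel_mm3

vol_mm3 = module4_volume(
    module3_segment(module2_preprocess(module1_acquire())))
```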
CN201910997674.6A 2019-10-21 2019-10-21 Automatic detection method and system for fetal head volume in ultrasonic image Active CN110974302B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910997674.6A CN110974302B (en) 2019-10-21 2019-10-21 Automatic detection method and system for fetal head volume in ultrasonic image


Publications (2)

Publication Number Publication Date
CN110974302A true CN110974302A (en) 2020-04-10
CN110974302B CN110974302B (en) 2022-08-26

Family

ID=70082198

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910997674.6A Active CN110974302B (en) 2019-10-21 2019-10-21 Automatic detection method and system for fetal head volume in ultrasonic image

Country Status (1)

Country Link
CN (1) CN110974302B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111932513A (en) * 2020-08-07 2020-11-13 深圳市妇幼保健院 Method and system for imaging three-dimensional image of fetal sulcus gyrus in ultrasonic image

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6575907B1 (en) * 1999-07-12 2003-06-10 Biomedicom, Creative Biomedical Computing Ltd. Determination of fetal weight in utero
US20090093717A1 (en) * 2007-10-04 2009-04-09 Siemens Corporate Research, Inc. Automated Fetal Measurement From Three-Dimensional Ultrasound Data
CN107766874A (en) * 2017-09-07 2018-03-06 沈燕红 A kind of measuring method and measuring system of ultrasound volume biological parameter
CN109602434A (en) * 2018-03-09 2019-04-12 上海慈卫信息技术有限公司 A kind of fetal in utero cranial image detection method
CN109671086A (en) * 2018-12-19 2019-04-23 深圳大学 A kind of fetus head full-automatic partition method based on three-D ultrasonic



Also Published As

Publication number Publication date
CN110974302B (en) 2022-08-26

Similar Documents

Publication Publication Date Title
CN107730497B (en) Intravascular plaque attribute analysis method based on deep migration learning
Cerrolaza et al. Deep learning with ultrasound physics for fetal skull segmentation
Rueda et al. Feature-based fuzzy connectedness segmentation of ultrasound images with an object completion step
Li et al. Automated measurement network for accurate segmentation and parameter modification in fetal head ultrasound images
CN110555836A (en) Automatic identification method and system for standard fetal section in ultrasonic image
CN110613483B (en) System for detecting fetal craniocerebral abnormality based on machine learning
CN109543623B (en) Fetus development condition prediction device based on nuclear magnetic resonance imaging
US9607392B2 (en) System and method of automatically detecting tissue abnormalities
CN103249358A (en) Medical image processing device
CN110335235A (en) Processing unit, processing system and the medium of cardiologic medical image
CN112638273A (en) Biometric measurement and quality assessment
CN110974302B (en) Automatic detection method and system for fetal head volume in ultrasonic image
CN114565572A (en) Cerebral hemorrhage CT image classification method based on image sequence analysis
CN111932513A (en) Method and system for imaging three-dimensional image of fetal sulcus gyrus in ultrasonic image
CN112992353A (en) Method and device for accurately predicting due date, computer equipment and storage medium
CN209770401U (en) Medical image rapid analysis processing system
CN115760851B (en) Ultrasonic image data processing equipment, system and computer readable storage medium based on machine learning
CN103501701A (en) Diagnostic brain imaging
CN113409275B (en) Method for determining thickness of transparent layer behind fetal neck based on ultrasonic image and related device
WO2023133929A1 (en) Ultrasound-based human tissue symmetry detection and analysis method
CN111481233B (en) Thickness measuring method for transparent layer of fetal cervical item
CN111862014A (en) ALVI automatic measurement method and device based on left and right ventricle segmentation
CN111127305A (en) Method for automatically obtaining standard tangent plane based on three-dimensional volume of fetal craniofacial part in early pregnancy
Abonyi et al. Texture analysis of sonographic image of placenta in pregnancies with normal and adverse outcomes, a pilot study
Easley et al. Inter-observer variability of vaginal wall segmentation from MRI: A statistical shape analysis approach

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211201

Address after: No.1023-1063, shatai South Road, Guangzhou, Guangdong 510515

Applicant after: SOUTHERN MEDICAL University

Applicant after: Hunan University

Address before: 518028 ultrasound department, 4th floor, building 1, Shenzhen maternal and child health hospital, 2004 Hongli Road, Futian District, Shenzhen City, Guangdong Province

Applicant before: Li Shengli

GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230505

Address after: 518000, 6th Floor, Building A3, Nanshan Zhiyuan, No. 1001 Xueyuan Avenue, Changyuan Community, Taoyuan Street, Nanshan District, Shenzhen, Guangdong Province

Patentee after: Shenzhen Lanxiang Zhiying Technology Co.,Ltd.

Address before: No.1023-1063, shatai South Road, Guangzhou, Guangdong 510515

Patentee before: SOUTHERN MEDICAL University

Patentee before: HUNAN University