WO2020215484A1 - Method, device and storage medium for measuring the thickness of the fetal neck transparent layer - Google Patents


Info

Publication number
WO2020215484A1
WO2020215484A1 (PCT/CN2019/093710; CN2019093710W)
Authority
WO
WIPO (PCT)
Prior art keywords
distribution area
transparent layer
neck
pixel
thickness
Prior art date
Application number
PCT/CN2019/093710
Other languages
English (en)
French (fr)
Inventor
殷晨
李璐
赵明昌
Original Assignee
无锡祥生医疗科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN201910320623.XA external-priority patent/CN111820948B/zh
Priority claimed from CN201910451627.1A external-priority patent/CN110163907B/zh
Application filed by 无锡祥生医疗科技股份有限公司
Publication of WO2020215484A1


Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00: Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/08: Detecting organic movements or changes, e.g. tumours, cysts, swellings
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/11: Region-based segmentation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/13: Edge detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/60: Analysis of geometric attributes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/60: Analysis of geometric attributes
    • G06T 7/66: Analysis of geometric attributes of image moments or centre of gravity

Definitions

  • the invention relates to the technical field of ultrasonic image processing, in particular to a method, equipment and storage medium for measuring the thickness of the transparent layer of the fetal neck.
  • ultrasound examination plays an important role in the prenatal diagnosis and screening of fetal malformations.
  • Ultrasound examination, as one of the important methods for monitoring fetal intrauterine growth, has been widely used in clinical practice; from two-dimensional ultrasound to the four-dimensional ultrasound currently in use, both the operating skills of doctors and the functions and resolution of the instruments have been greatly improved.
  • Using ultrasound to measure the thickness of the fetal neck transparent layer during pregnancy can assist doctors in diagnosing fetal chromosomal abnormalities and fetal heart malformations; it is a key examination item during pregnancy.
  • the present invention aims to solve at least one of the technical problems existing in the prior art, and provides a method, device and storage medium for measuring the thickness of the fetal neck transparent layer, so as to realize automatic selection of the best section of the fetal neck transparent layer, automatic identification of the neck transparent layer, and automatic thickness measurement, improving the standardization, accuracy and efficiency of the measurement.
  • the first aspect of the present invention provides a method for measuring the thickness of the transparent layer of the fetal neck, including:
  • Segmenting a second distribution area including the neck transparent layer from the first distribution area of the neck transparent layer according to a gradient method, wherein the recognition accuracy of the second distribution area is greater than the recognition accuracy of the first distribution area;
  • traversing the pixel units in the second distribution area includes:
  • segmenting the second distribution area including the neck transparent layer from the first distribution area of the neck transparent layer according to a gradient method includes:
  • calculating the rotation angle required for the first distribution area to rotate to the horizontal position includes:
  • segmenting the second distribution area including the neck transparent layer from the first distribution area of the neck transparent layer according to a gradient method includes:
  • the coordinates of the upper and lower edges of the second distribution area are calculated through a dynamic programming algorithm according to the pixel coordinates corresponding to the smallest first loss value and the smallest second loss value.
  • the dynamic programming algorithm derives the optimal pixel point coordinates of the upper and lower edges of the entire second distribution area by backtracking in an iterative manner.
  • before traversing the pixel units in the second distribution area, the method includes:
  • the color pixel value in the second distribution area is filled with 255, and the color pixel value outside the second distribution area is filled with 0.
  • traversing the pixel units in the contour of the second distribution area by means of pixel values includes:
  • the thickness of the transparent layer of the neck of the fetus is calculated according to the coordinates of the thickest pixels on the upper and lower edges, including:
  • the second aspect of the present invention also provides another method for measuring the thickness of the transparent layer of the fetal neck, including:
  • the first convolution measurement model includes at least a classification neural network, a detection neural network, and a segmentation neural network;
  • Segmenting a second distribution area including the neck transparent layer from the first distribution area of the neck transparent layer according to a gradient method, wherein the recognition accuracy of the second distribution area is greater than the recognition accuracy of the first distribution area;
  • segmenting the first distribution area of the transparent layer of the neck from the dynamic ultrasound video through the trained first convolution measurement model includes:
  • the first distribution area of the neck transparent layer is segmented from the location area of the neck transparent layer through the segmentation neural network.
  • the classification neural network is provided with a cross-entropy loss function to identify the best single-frame ultrasound image for measuring the transparent layer of the fetal neck from the dynamic ultrasound video.
  • segmentation neural network is provided with a pixel-level cross-entropy loss function.
  • the third aspect of the present invention provides an ultrasonic device including:
  • a memory, used to store a computer program;
  • the processor is configured to execute a computer program so that the processor executes the above-mentioned method for measuring the thickness of the transparent layer of the fetal neck.
  • the fourth aspect of the present invention provides a computer-readable storage medium in which a computer program is stored.
  • when the computer program is executed by a processor, the steps of the method for measuring the thickness of the fetal neck transparent layer are implemented.
  • the method for measuring the thickness of the fetal neck transparent layer of the present invention can automatically identify the first distribution area of the fetal neck transparent layer from the ultrasound image through a convolutional neural network model, then segment the second distribution area of the best neck transparent layer from the first distribution area using a gradient method to improve the recognition accuracy of the neck transparent layer, and finally traverse the pixel units in the second distribution area to obtain the maximum width value of the second distribution area in the width direction, thereby realizing automatic measurement of the neck transparent layer thickness with high accuracy and greatly improving doctors' work efficiency.
  • the method for measuring the thickness of the fetal neck transparent layer of the present invention can rotate an identified first distribution area of the neck transparent layer that is not in the horizontal position, so the doctor is not required to acquire the ultrasound image in the horizontal position; this reduces the requirements on ultrasound images containing the fetal neck transparent layer and improves measurement speed and accuracy.
  • the ultrasound equipment of the present invention can automatically identify the first distribution area of the fetal neck transparent layer, then segment the second distribution area of the best neck transparent layer from the first distribution area through a gradient method, improving the recognition accuracy of the neck transparent layer; finally, by traversing the pixel units in the second distribution area, the maximum width value of the second distribution area along the width direction is obtained, realizing automatic measurement of the neck transparent layer thickness with high accuracy.
  • Fig. 1 is a schematic flowchart of the method for measuring the thickness of the transparent layer of the fetal neck of the present invention.
  • Fig. 2 is a schematic flow chart of the rotation to obtain a horizontal first distribution area of the present invention.
  • Fig. 3 is a schematic flow chart of the first distribution area rotation angle calculation of the present invention.
  • Fig. 4 is a schematic flow chart of calculating the upper and lower edge coordinates of the second distribution area according to the present invention.
  • FIG. 5 is a schematic flowchart of a method for measuring the thickness of the transparent layer of the fetal neck according to an embodiment of the present invention.
  • Figure 6a is an ultrasound image acquired by the present invention.
  • Fig. 6b is a schematic diagram showing that the first distribution area of the transparent layer of the neck has been identified in the ultrasound image obtained by the present invention.
  • Fig. 6c is a schematic diagram of the ultrasound image acquired by the present invention with the first centroid and the second centroid marked.
  • Fig. 6d is a schematic diagram of the first distribution area of the transparent layer of the neck that has been rotated to obtain a horizontal position.
  • Fig. 7 is a schematic diagram showing that the second distribution area of the transparent layer of the neck has been identified in the ultrasound image obtained by the present invention.
  • Fig. 8 is a mask area of an ultrasound image acquired by the present invention.
  • FIG. 9 is a schematic flow chart of another method for measuring the thickness of the transparent layer of the fetal neck of the present invention.
  • FIG. 10 is a schematic diagram of a convolutional neural network structure according to an embodiment of the present invention.
  • FIG. 11 is a schematic diagram of the structure of the segmentation neural network in the convolutional neural network model of the present invention.
  • at present, the thickness of the fetal neck transparent layer is mainly measured with ultrasound equipment: after the doctor obtains the ultrasound image, the transparent layer is located by eye and by experience and calibrated manually before its thickness is measured, so the accuracy and efficiency of the measurement are low.
  • the first aspect of the present invention provides a method for measuring the thickness of the transparent layer of the fetal neck, as shown in Fig. 1, including the following steps:
  • S300 Segment a second distribution area 700 including the neck transparent layer from the first distribution area 600 of the neck transparent layer according to a gradient method, and the recognition accuracy of the second distribution area is greater than that of the first distribution area;
  • S400 Traverse the pixel units in the second distribution area 700 to obtain a maximum width value of the second distribution area along the width direction.
  • the method for measuring the thickness of the fetal neck transparent layer of the present invention can automatically identify the first distribution area 600 of the fetal neck transparent layer from the ultrasound image through a convolutional neural network model, then segment the second distribution area 700 of the best neck transparent layer from the first distribution area 600 using a gradient method, which improves the recognition accuracy of the neck transparent layer. Finally, the pixel units in the second distribution area 700 are traversed to obtain the maximum width value of the second distribution area along the width direction, realizing automatic measurement of the neck transparent layer thickness with high accuracy and greatly improving the doctor's work efficiency. It can be understood that the first distribution area is an image of the approximate distribution area of the neck transparent layer, and the second distribution area is an image of the precise distribution area of the neck transparent layer.
  • the ultrasound image including at least the transparent layer of the neck of the fetus is acquired mainly by the transducer of the ultrasound imaging device.
  • the ultrasound imaging device at least includes a transducer, an ultrasound host, an input unit, a control unit, and a memory.
  • the ultrasonic imaging device may include a display screen, and the display screen of the ultrasonic imaging device may be a display of the identification system.
  • the transducer is used to transmit and receive ultrasonic waves.
  • the transducer is excited by transmitted pulses to transmit ultrasonic waves to the target tissue (for example, organs, tissues, or blood vessels in a human or animal body) and, after a certain delay, receives the ultrasonic echo reflected back from the target area carrying information about the target tissue; the ultrasonic echo is reconverted into an electrical signal to obtain an ultrasound image or video.
  • the transducer can be connected to the ultrasound host in a wired or wireless manner.
  • the input unit is used to input control instructions of the operator.
  • the input unit may be at least one of a keyboard, trackball, mouse, touch panel, handle, dial, joystick, and foot switch.
  • the input unit can also input non-contact signals, such as voice, gesture, line of sight, or brain wave signals.
  • the control unit can at least control scan information such as focus information, driving frequency information, driving voltage information, and imaging mode. According to the different imaging modes required by the user, the control unit processes the signals differently to obtain ultrasound image data of different modes, and then forms ultrasound images of different modes through logarithmic compression, dynamic range adjustment, digital scan conversion, etc., such as B images, C images, D images, Doppler blood flow images, elastic images containing tissue elastic properties, or other types of two-dimensional or three-dimensional ultrasound images.
  • the acquired ultrasound image containing at least the transparent layer of the fetal neck may also be an ultrasound image stored in a storage medium, for example, a cloud server, a U disk, or a hard disk.
  • the first distribution area 600 of the neck transparent layer in the ultrasound image is identified through the convolutional neural network model.
  • the convolutional neural network model is obtained by training a convolutional neural network on a number of ultrasound images in which the fetal neck transparent layer has been marked.
  • the trained convolutional neural network model can automatically identify the first distribution area of the neck transparent layer in the ultrasound image; the first distribution area 600 is the approximate distribution area identified by the convolutional neural network model.
  • the trained convolutional neural network model of the present invention includes an input layer, a hidden layer, and an output layer. The hidden layer includes several convolutional layers, down-sampling layers, and up-sampling layers: the input ultrasound image first passes through several convolutional layers and down-sampling layers for convolution and down-sampling operations, and then through several convolutional layers and up-sampling layers for convolution and up-sampling operations. The input layer and the hidden layer, the hidden layers themselves, and the hidden layer and the output layer are connected by weight parameters; the convolutional layers in the convolutional neural network model are used to automatically extract the features in the ultrasound image. Each time the ultrasound image is convolved or sampled, the network copies the features from a shallower convolutional layer to a deeper convolutional layer, and the copied features are added to the corresponding pixels of the deeper layer's features before entering the next convolutional layer.
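The skip connection described above can be illustrated with a minimal toy sketch (assuming 1-D feature maps and a fixed averaging kernel; the helper names are hypothetical, not from the patent): the shallow feature map is copied and added element-wise to the up-sampled deep feature map before the next convolution.

```python
def conv1d(x, k=(0.25, 0.5, 0.25)):
    """Toy 1-D convolution standing in for a convolutional layer
    (edge-replicate padding keeps the output the same length)."""
    pad = [x[0]] + list(x) + [x[-1]]
    return [k[0] * pad[i] + k[1] * pad[i + 1] + k[2] * pad[i + 2]
            for i in range(len(x))]

def down(x):
    """Down-sampling: keep every second sample."""
    return x[::2]

def up(x):
    """Up-sampling: nearest-neighbour repeat."""
    return [v for v in x for _ in (0, 1)]

def encoder_decoder(x):
    """Skip connection as described: shallow features are copied and added
    pixel-wise to the up-sampled deep features before the next conv layer."""
    shallow = conv1d(x)            # features from the shallower layer
    deep = conv1d(down(shallow))   # deeper features after down-sampling
    merged = [a + b for a, b in zip(up(deep), shallow)]  # skip addition
    return conv1d(merged)
```

On a constant input the averaging kernel is identity-like, so the skip addition simply doubles the signal, which makes the data flow easy to check.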
  • it can be judged whether the first distribution area 600 of the neck transparent layer is in the horizontal position; if not, the rotation angle required to rotate the first distribution area 600 to the horizontal position is calculated. Whether the first distribution area of the neck transparent layer is horizontal can be judged by a trained neural network, or manually by medical staff.
  • S2203 Divide the circumscribed rectangle R_n into a first circumscribed rectangle R_l and a second circumscribed rectangle R_r equally along the length direction;
  • θ represents the angle of rotation; it may be computed from the centroid (x_l, y_l) of the first circumscribed rectangle R_l and the centroid (x_r, y_r) of the second circumscribed rectangle R_r as θ = arctan((y_r - y_l)/(x_r - x_l)).
  • affine transformation, also known as affine mapping, refers in geometry to a linear transformation in a vector space followed by a translation, transforming it into another vector space.
  • a horizontal first distribution area 600 is obtained by performing affine transformation according to the calculated rotation angle.
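The rotation step can be sketched in pure Python as follows (a minimal illustration, assuming the angle is taken between the centroids of the left and right halves of the region's bounding box; the function names are hypothetical):

```python
import math

def rotation_angle(points):
    """Estimate the angle (radians) needed to level a region.

    points: iterable of (x, y) pixel coordinates of the first distribution
    area.  The bounding box is split into left and right halves; the angle
    between the two half-centroids approximates the region's tilt.
    """
    xs = [p[0] for p in points]
    x_mid = (min(xs) + max(xs)) / 2.0
    left = [p for p in points if p[0] <= x_mid]
    right = [p for p in points if p[0] > x_mid]
    cx_l = sum(p[0] for p in left) / len(left)
    cy_l = sum(p[1] for p in left) / len(left)
    cx_r = sum(p[0] for p in right) / len(right)
    cy_r = sum(p[1] for p in right) / len(right)
    return math.atan2(cy_r - cy_l, cx_r - cx_l)

def rotate_point(x, y, theta, cx=0.0, cy=0.0):
    """Rotate (x, y) by -theta about (cx, cy): the rotation part of the
    affine map that brings the region to the horizontal position."""
    dx, dy = x - cx, y - cy
    c, s = math.cos(-theta), math.sin(-theta)
    return cx + c * dx - s * dy, cy + s * dx + c * dy
```

In practice the whole image would be warped with an affine transform (e.g. an image library's rotation routine); rotating individual coordinates as above is enough to show the geometry.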
  • the present invention segments the second distribution area 700 of the best neck transparent layer from the first distribution area 600 of the neck transparent layer according to a gradient method; it can be understood that the accuracy of the second distribution area 700 is greater than that of the first distribution area 600. As shown in Fig. 4, this specifically includes:
  • N represents the width of the ultrasound image; p_1, p_2, p_3, …, p_{N-1}, p_N represent the coordinates of the first to the N-th pixel on the upper and lower edges of the ultrasound image. It should be understood that a pixel's coordinates include an x-axis component and a y-axis component.
  • S320 Calculate the first loss value corresponding to each pixel point coordinate in the first distribution area 600 according to the first loss function, and select the smallest first loss value;
  • p_j represents the coordinates of the j-th pixel; Z_l(p_j) represents using the bilateral effect of the image Laplacian to draw the pixel coordinates being searched toward the lower edge of the first distribution area 600.
  • the f_adj(p_j, p_{j-1}) function limits the distance between two adjacent pixels to a small value, thereby ensuring the continuity of the edge; q is the coordinate of the pixel before p, t is the coordinate of the pixel after p, and θ here is 90 degrees, which means that only the second derivative in the y direction is calculated.
  • S330 Calculate a second loss value corresponding to each pixel point coordinate in the first distribution area 600 according to the second loss function, and select the smallest second loss value;
  • Z_u(p_j) indicates that the two-sided effect of the image Laplacian is used to draw the pixel coordinates being searched toward the upper edge of the first distribution area 600; q is the coordinate of the pixel before p, θ here is 90 degrees, meaning the second derivative in the y direction is calculated, and the remaining term is the sigmoid function of the coordinate distance between two pixels.
  • S340 Calculate the coordinates of the upper and lower edges of the second distribution area 700 through a dynamic programming algorithm according to the pixel coordinates corresponding to the smallest first loss value and the smallest second loss value.
  • the dynamic programming algorithm derives the optimal pixel point coordinates of the upper and lower edges of the entire second distribution area 700 by backtracking in an iterative manner.
  • the dynamic programming algorithm is:
  • in this way, the optimal pixel point coordinates of the upper and lower edges of the entire second distribution area 700 can be deduced, and the contour of the second distribution area 700 can be highlighted.
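The dynamic-programming edge trace can be sketched as follows. This is a simplified illustration, not the patent's exact loss functions: the node cost rewards a strong second derivative in the y direction (the Laplacian term), the adjacency term penalises jumps between neighbouring columns (keeping the edge continuous), and backtracking recovers the optimal path.

```python
def trace_edge(img, w_adj=1.0):
    """Trace one edge across the image columns with dynamic programming.

    img: 2-D list of pixel values (rows of equal length).  Returns one
    edge row y for every column x.
    """
    h, w = len(img), len(img[0])
    ys = list(range(1, h - 1))  # rows where the y-Laplacian is defined
    # node loss: a strong |second derivative in y| marks a likely edge
    node = [[-abs(img[y - 1][x] - 2 * img[y][x] + img[y + 1][x]) for y in ys]
            for x in range(w)]
    dp, back = [node[0][:]], []
    for x in range(1, w):
        col, ptr = [], []
        for i in range(len(ys)):
            # adjacency term f_adj: keep |y_j - y_{j-1}| small for continuity
            j = min(range(len(ys)),
                    key=lambda k: dp[-1][k] + w_adj * abs(i - k))
            col.append(node[x][i] + dp[-1][j] + w_adj * abs(i - j))
            ptr.append(j)
        dp.append(col)
        back.append(ptr)
    # backtrack from the best final node to recover the whole edge
    i = min(range(len(ys)), key=lambda k: dp[-1][k])
    path = [ys[i]]
    for ptr in reversed(back):
        i = ptr[i]
        path.append(ys[i])
    return path[::-1]
```

Running it once with the lower-edge loss and once with the upper-edge loss would yield the two edge polylines whose pointwise gap is the local thickness.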
  • the highlighting may use lines or curves to outline the contour of the second distribution area 700, or display the contour of the second distribution area 700 highlighted.
  • the thickness of the transparent layer of the fetal neck is the thickness of the thickest position of the transparent layer.
  • the present invention traverses the pixel units in the second distribution area to obtain the maximum width value of the second distribution area in the width direction.
  • the contour of the second distribution area is first highlighted, and the pixel units in the contour of the second distribution area are traversed in a pixel value manner to obtain the maximum width value of the second distribution area in the width direction. That is, the maximum distance between the upper and lower edges of the second distribution area 700 is determined as the thickness of the fetal neck transparent layer.
  • the color pixel values in the second distribution area 700 are filled with 255, and the color pixel values outside the second distribution area 700 are filled with 0, so as to obtain the mask area of the second distribution area 700.
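The 255/0 filling step can be sketched as follows (a trivial illustration; representing the region as a set of (x, y) pixels is an assumption for the example):

```python
def make_mask(height, width, region):
    """Fill pixels inside the second distribution area with 255 and
    pixels outside it with 0, yielding the mask area."""
    return [[255 if (x, y) in region else 0 for x in range(width)]
            for y in range(height)]
```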
  • traversing the pixel units in the contour of the second distribution area by means of pixel values includes:
  • S410 Divide the second distribution area 700 into a plurality of divided pixel units with equal width values along the length direction of the second distribution area 700;
  • S420 Traverse the several equally spaced divided pixel units and select the divided pixel unit with the largest area; the equal width value is 1 pixel;
  • W_x represents the weight, which is the average value of the original-image pixels in the current segmented pixel unit area S_x; p_{x,y} is the pixel value at the original coordinates; the range of x is the width of the second distribution area 700; E_x represents the numerical result of the calculation.
  • S440 Calculate the thickness of the transparent layer of the neck of the fetus according to the coordinates of the thickest pixel points on the upper and lower edges.
  • the x position corresponding to the largest E_x, together with its corresponding upper and lower edge vertical coordinates, gives the coordinates of the thickest part of the neck transparent layer. Since the coordinates were rotated, they are inversely transformed back into the original space to obtain the final actual coordinates, and the distance between the upper and lower points is calculated according to the two-point distance formula to obtain the final thickness.
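Putting the traversal together, a simplified sketch (assuming the mask and original image are 2-D lists of equal size, and skipping the inverse rotation back to the original space; `max_thickness` is a hypothetical name):

```python
import math

def max_thickness(mask, img):
    """Traverse 1-pixel-wide columns of the mask; score each column by its
    height weighted by the mean original-image intensity inside it (the
    E_x = W_x * height heuristic), and return the thickness at the best
    column as the two-point distance between its upper and lower edge."""
    h, w = len(mask), len(mask[0])
    best = None
    for x in range(w):
        ys = [y for y in range(h) if mask[y][x] == 255]
        if not ys:
            continue
        weight = sum(img[y][x] for y in ys) / len(ys)  # W_x
        score = weight * (max(ys) - min(ys))           # E_x
        if best is None or score > best[0]:
            best = (score, x, min(ys), max(ys))
    _, x, y_top, y_bot = best
    # two-point distance formula between the upper and lower edge points
    return math.dist((x, y_top), (x, y_bot))
```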
  • the second aspect of the present invention also provides another method for measuring the thickness of the transparent layer of the fetal neck, as shown in Fig. 9, including:
  • S510 Acquire a dynamic ultrasound video containing at least the transparent layer of the fetal neck
  • the first convolution measurement model includes at least a classification neural network, a detection neural network, and a segmentation neural network;
  • S530 Separate a second distribution area including the neck transparent layer from the first distribution area of the neck transparent layer according to a gradient method, and the recognition accuracy of the second distribution area is greater than the recognition accuracy of the first distribution area;
  • S540 Traverse the pixel units in the second distribution area to obtain a maximum width value of the second distribution area along the width direction.
  • the present invention can segment the first distribution area 600 of the neck transparent layer from the dynamic ultrasound video through the trained first convolution measurement model, then segment the second distribution area 700 of the best neck transparent layer from the first distribution area 600 by a gradient method, which improves the recognition accuracy of the neck transparent layer. Finally, the pixel units in the second distribution area are traversed to find the thickest position of the transparent layer and measure the thickness of the fetal neck transparent layer, realizing automatic measurement of the neck transparent layer thickness with high accuracy and greatly improving the doctor's work efficiency.
  • the first distribution area is the approximate distribution area of the neck transparent layer
  • the second distribution area is the precise distribution area of the neck transparent layer.
  • a classification neural network is used to identify the best single-frame ultrasound image for measuring the transparent layer of the fetal neck from the dynamic ultrasound video.
  • the input of the classification neural network is all the complete ultrasound images in the dynamic ultrasound video.
  • after passing through several convolutional layers and down-sampling layers, the classification neural network outputs the prediction result for the ultrasound image according to the features in the image.
  • the multiple convolutional layers of the classification neural network are used to automatically extract the features in the ultrasound image; the input and the convolutional layers, and the convolutional layers and the output, are connected by weight parameters. The input layer size is set to match the size of the ultrasound image input to the neural network.
  • the classification neural network includes at least a convolutional layer (conv), a maximum pooling layer (max-pooling), an average layer (avg), a logistic regression layer (softmax) and a filter.
  • conv: convolutional layer
  • max-pooling: maximum pooling layer
  • avg: average layer
  • softmax: logistic regression layer
  • an implementation of the classification neural network includes 5 convolutional layers, 4 max-pooling layers, 1 average layer (avg), and 1 logistic regression layer (softmax).
  • the input layer size of the classification neural network is set to 416*416*1. After several 3*3 convolution operations and maximum pooling operations, an averaging operation is performed on each group of features to obtain the probability that the input ultrasound image is or is not the best measurement image, and finally the softmax operation is performed.
  • the calculation method is as follows:
  • softmax_i = e^{z_i} / (e^{z_1} + e^{z_2}), where z_i is the first or the second value output by the 10th layer in Fig. 10; the denominator on the right side of the equal sign is the sum of the results of raising the base e to each of the two values output by the 10th layer; softmax_i represents the probability result output by the logistic regression layer. According to this probability result, the best ultrasound image for measuring the fetal neck transparent layer is identified in the input dynamic ultrasound video.
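The softmax step above is the standard formulation and can be written out directly:

```python
import math

def softmax(values):
    """softmax_i = e^{z_i} / sum_j e^{z_j}: turns the final-layer outputs
    of the classification network into probabilities that sum to 1."""
    exps = [math.exp(v) for v in values]
    total = sum(exps)
    return [e / total for e in exps]
```

With two outputs, the larger logit yields the larger probability, which is how the best single-frame image is selected.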
  • after the classification neural network identifies the best single-frame ultrasound image of the fetal neck transparent layer from the dynamic ultrasound video, the detection neural network detects, for each pixel in the best ultrasound image, the probability that it belongs to the transparent layer, and marks the location area of the neck transparent layer according to these pixel probabilities. The detection neural network determines pixels whose probability is greater than and/or equal to a preset probability as the transparent layer.
  • the detection neural network includes an input layer, a hidden layer, and an output layer.
  • the input layer and hidden layers of the detection neural network, the hidden layers themselves, and the hidden layers and output layer are connected by weight parameters.
  • the hidden layers include convolutional layers, maximum pooling layers, and a combination layer: first, several convolutional layers and several maximum pooling layers are alternately connected, then several convolutional layers are connected, and then a combination layer is connected. The combination layer combines the high-level feature layer with one or several hidden layers preceding it; the output images of the high-level feature layer and of the combined hidden layers have the same length and width. The high-level feature layer and the preceding hidden layer or layers are combined and input together to the last convolutional layer, which serves as the output layer.
  • after the detection neural network detects the pixel probabilities of the transparent layer in the best ultrasound image and marks the location area of the neck transparent layer according to them, the segmentation neural network segments the first distribution area 600 of the neck transparent layer from that location area.
  • the first distribution area 600 can be outlined with lines or curves, or its contour can be highlighted; that is, the segmentation neural network segments the first distribution area 600 of the neck transparent layer from the location area by highlighting it.
  • the segmentation neural network includes an input layer, a hidden layer, and an output layer; the hidden layer includes several convolutional layers, down-sampling layers, and up-sampling layers. The input ultrasound image first passes through several convolutional layers and down-sampling layers for convolution and down-sampling operations, and then through several convolutional layers and up-sampling layers for convolution and up-sampling operations. More preferably, when the segmentation neural network performs convolution or sampling on the ultrasound image, features are copied from a shallower convolutional layer to a deeper convolutional layer, and the copied features are added to the corresponding pixels of the deeper layer's features before entering the next convolutional layer. The input layer and hidden layers, the hidden layers themselves, and the hidden layers and output layer of the segmentation neural network are connected by weight parameters; the convolutional layers are used to automatically extract the features in the ultrasound image.
  • the optimal second distribution area 700 of the neck transparent layer is segmented from the first distribution area 600 by a gradient method; the pixel units in the second distribution area 700 are traversed by pixel-value weighting, and the maximum distance between the upper and lower edges of the second distribution area 700 is determined as the thickness of the fetal neck transparent layer; that is, the pixel units in the second distribution area are traversed to obtain the maximum width of the second distribution area along the width direction.
  • the training process of the first convolution measurement model is as follows:
  • Step S1 Collect dynamic ultrasound videos of the fetus and annotate them.
  • the preferred annotation method in this embodiment is to mark, in each video, the single frame best suited for measuring the thickness of the fetal neck transparent layer; the best images for measuring the thickness of the fetal neck transparent layer are collected, and the fetal neck transparent layer is annotated with a closed curve formed by continuous broken lines.
  • Step S2 Establish a classification neural network, a detection neural network, and a segmentation neural network based on the collected dynamic ultrasound video, ultrasound image, and annotation information, and train the first convolution measurement model.
  • the processing flow of the neural network includes:
  • Step S21 Divide the collected ultrasound images into a training set, a validation set, and a test set;
  • the collected ultrasound images used for training the detection neural network and the segmentation neural network are likewise divided into three sets.
  • the training set is used to train the neural networks;
  • the validation set is used to verify the effect of the neural networks and to help select the optimal neural network model parameters;
  • the test set is used to measure how well the neural networks work.
  • the training set, validation set, and test set are selected in the proportions 3/5, 1/5, and 1/5.
  • the single-frame ultrasound images split from dynamic ultrasound videos collected by ultrasound equipment of different brands, or by different models of the same brand, differ in size, so the ultrasound images need to be preprocessed.
  • the specific processing method is to subtract the mean of the image pixels from each pixel value in the ultrasound image and divide by the variance of the image pixels; after normalization the ultrasound image has mean 0 and variance 1. Because the size of the ultrasound image changes during preprocessing, the template of the ultrasound image needs to be changed in the corresponding proportion;
  • Step S22 establishing a neural network structure
  • the present invention first establishes a classification neural network to predict which frame of the collected dynamic ultrasound video is used to measure the thickness of the transparent layer of the fetal neck.
  • the best single-frame ultrasound image for measuring the transparent layer of the fetal neck is identified from the dynamic ultrasound video through the classification neural network.
  • the input of the classification neural network is every complete ultrasound image in the dynamic ultrasound video. After several convolutional and down-sampling layers, the classification neural network outputs a prediction for the ultrasound image based on the features extracted from it.
  • the multiple convolutional layers of the classification neural network are used to automatically extract the features in the ultrasound image; the convolutional layers, the input and the convolutional layers, and the convolutional layers and the output are connected by weight parameters; the input layer size is set to match the size of the ultrasound images fed into the neural network.
  • the present invention establishes a detection neural network to detect the portion of the neck transparent layer in the ultrasound image for the best neck transparent layer thickness measurement.
  • the input of the detection neural network is the collected ultrasound image of the best neck transparent layer thickness measurement.
  • the curve annotations of the ultrasound images are converted into label boxes and used to train the neural network; the detection neural network combines the high-level feature layer at the end of the convolutional neural network with the low-level fine-grained features of the previous layer or layers to improve the detection of small target objects.
  • the input of the detection neural network is set to 416*416*1, and the output is set to 13*13*35; the detection neural network outputs the coordinate information and probability information of the possible marker frames of the transparent layer of the neck in the ultrasound image;
  • the input layer size of the segmentation neural network is 256*256*1.
  • two 3*3 convolution operations each yield 256*256*16 features, after which a down-sampling operation yields 128*128*16 features; several further 3*3 convolutions and down-sampling operations yield 64*64*64 features; several up-sampling operations and 3*3 convolutions then yield 256*256*16 features; finally, a 1*1 convolution operation produces the 256*256*2 prediction result.
  • the values of the prediction result range from 0 to 1 and indicate the probability that the corresponding pixel in the ultrasound image lies within the fetal neck transparent layer; it should be understood that the prediction result is a probability.
  • the gray rectangles represent the features extracted after each convolution or sampling operation on the image, and the white rectangles represent the copied features; preferably, the convolutions of the segmentation neural network use dilated convolutions of a suitable scale to enlarge the network's receptive field and improve prediction accuracy.
  • the up-sampling and down-sampling layers in the network can be removed while still keeping the length and width of the network's input and output layers the same; optionally, the input ultrasound image of the segmentation neural network can be expanded somewhat beyond the box predicted by the detection network, for example by 20 pixels up, down, left, and right.
  • Step S23 initialize the neural network: set the weight parameter of the neural network to a random number between 0 and 1;
  • Step S24 calculating the loss function of the neural network
  • the loss function of the classification neural network designed above is cross entropy loss;
  • the loss function of the detection network involved above includes the loss of the detection frame position and the prediction probability of the detection frame;
  • the loss function of the segmentation neural network involved above selects the pixel-level cross entropy loss;
  • the loss function of the detection network consists of two parts: the error of the probability prediction and of the center-coordinate, height, and width predictions for detection boxes that contain the target object, and the error of the probability prediction for detection boxes that do not contain the target object. The calculation formula is:
  • Loss_function = λ1·Σ_{i=0}^{S²}Σ_{j=0}^{B} I_ij^obj (C_i − Ĉ_i)² + λ2·Σ_{i=0}^{S²}Σ_{j=0}^{B} I_ij^obj [(x_i − x̂_i)² + (y_i − ŷ_i)² + (h_i − ĥ_i)² + (w_i − ŵ_i)²] + λ3·Σ_{i=0}^{S²}Σ_{j=0}^{B} I_ij^noobj (C_i − Ĉ_i)²
  • λ1–λ3 represent the proportion of each error in the total loss function, and each error takes the form of a squared error.
  • the first term of the Loss_function represents the error of the probability prediction for detection boxes containing part of the target neck transparent layer. S² indicates that the ultrasound image is divided into S×S grid cells; B indicates how many detection boxes are set for each grid cell; I_ij^obj indicates whether the j-th detection box of the i-th grid cell contains the target object: if the intersection of the detection box and the label box is large, the detection box is considered to contain the target object (I_ij^obj = 1), otherwise it is considered not to contain it (I_ij^obj = 0); Ĉ_i represents the prediction probability of the detection network for the current j-th detection box of the grid cell. The second term represents the prediction error of the position, height, and width of the detection boxes containing the target object.
  • x_i, y_i, h_i, w_i denote the center position, height, and width of the label box of the i-th grid cell, and x̂_i, ŷ_i, ĥ_i, ŵ_i denote the corresponding information of the predicted bounding box; the third term is the error of the probability prediction for detection boxes that do not contain the target object. Because bounding boxes without the target object are the majority, λ3 is usually set smaller than λ1; otherwise a network with good recognition performance cannot be trained.
  • the pixel-level cross-entropy loss function of the segmentation neural network is:
  • L = −Σ_{i=1}^{x} Σ_{j=1}^{y} [ t_ij·log(p_ij) + (1 − t_ij)·log(1 − p_ij) ]
  • the pixel-level cross-entropy loss function is the sum of the prediction errors over every pixel of the ultrasound image, where x and y are the length and width of the input image of the segmentation neural network, p_ij is the probability predicted by the segmentation neural network that the pixel in row i, column j of the ultrasound image belongs to the predicted region, and t_ij is the value of that pixel in the ultrasound image template: if the pixel belongs to the predicted region the value is 1, otherwise 0. The closer the predicted probabilities output by the segmentation neural network are to the ultrasound image template, the smaller the cross-entropy loss function;
  • Step S25 training a neural network to obtain a first convolution measurement model
  • the transformation operations include rotation, scaling, cropping, elastic deformation, and so on; preferably, only random rotation is used in the present invention; optionally, an adaptive moment estimation (Adam) optimization method is used to update the network parameters according to the loss function of the neural network;
  • Step S26 selecting the optimal neural network model parameters.
  • intersection over union = (prediction result ∩ graphic template)/(prediction result ∪ graphic template)
  • the parameters giving the largest intersection over union are selected as the optimal parameters of the detection network; the intersection over union between the segmentation network's predictions and the validation image templates obtained from the annotation conversion is computed, and the parameters giving the largest value are selected as the optimal parameters of the segmentation neural network. The intersection over union of two objects is their intersection divided by their union;
  • Segmenting the first distribution area 600 of the transparent layer of the neck from the dynamic ultrasound video by the first convolution measurement model obtained after training includes:
  • Step S31 Fix all single-frame ultrasound images in the ultrasound image video to the same size that is adapted to the input layer of the classification neural network, and normalize the ultrasound images;
  • Step S32 Input the normalized single-frame ultrasound images into the trained classification neural network model with its optimal parameters to obtain the ultrasound image predicted by the classification network to be the best measurement frame; input the best measurement frame into
  • the detection neural network to detect the location area of the fetal neck transparent layer in the ultrasound image, and then obtain the first distribution area of the fetal neck transparent layer through the segmentation neural network.
  • a third aspect of the present invention provides an ultrasound device, including: a memory for storing a computer program; a processor for executing the computer program, so that the processor executes the above-mentioned fetal neck transparent layer thickness measurement method.
  • the ultrasound equipment of the present invention can automatically identify the first distribution area 600 of the fetal neck transparent layer, then segment the optimal second distribution area 700 of the neck transparent layer from the first distribution area 600 by a gradient method to improve the recognition accuracy of the neck transparent layer, and finally identify the thickest position of the transparent layer by pixel-value weighting to measure the thickness of the fetal neck transparent layer. This realizes automatic measurement of the neck transparent layer thickness with high accuracy and greatly improves doctors' work efficiency.
  • the fourth aspect of the present invention provides a computer-readable storage medium in which a computer program is stored; when the computer program is executed by a processor, the steps of the method for measuring the thickness of the fetal neck transparent layer are implemented.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Geometry (AREA)
  • Pathology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Ultra Sonic Daignosis Equipment (AREA)

Abstract

A method for measuring the thickness of the fetal neck transparent layer (nuchal translucency), comprising: acquiring an ultrasound image containing at least the fetal neck transparent layer (S100); identifying a first distribution area of the neck transparent layer in the ultrasound image by a convolutional neural network model (S200); segmenting, by a gradient method, a second distribution area (700) containing the neck transparent layer from the first distribution area (600) of the neck transparent layer, the recognition accuracy of the second distribution area (700) being greater than that of the first distribution area (600); and traversing the pixel units in the second distribution area (700) to obtain the maximum width of the second distribution area along the width direction (S400). Also disclosed are an ultrasound device and a storage medium, which realize automatic measurement of the neck transparent layer thickness with high accuracy and greatly improve doctors' work efficiency.

Description

Method, device and storage medium for measuring fetal neck transparent layer thickness — Technical field
The present invention relates to the technical field of ultrasound image processing, and in particular to a method, device, and storage medium for measuring the thickness of the fetal neck transparent layer (nuchal translucency).
Background
With the continuous development of prenatal ultrasound diagnosis, many fetal structural abnormalities are now discovered before birth. As a non-invasive, non-teratogenic, convenient, fast, and safe examination method, ultrasound plays an important role in the prenatal diagnosis and screening of fetal malformations. As one of the important means of monitoring fetal intrauterine growth, ultrasound is now widely used in clinical practice; from two-dimensional ultrasound to the four-dimensional ultrasound used today, both the operating skills of physicians and the functionality and resolution of the instruments have improved greatly. Measuring the thickness of the fetal neck transparent layer by ultrasound during pregnancy helps doctors diagnose diseases such as fetal chromosomal abnormalities and fetal heart malformations, and is a key examination item during pregnancy.
In the prior art, a dynamic ultrasound image is mainly acquired with ultrasound equipment, and the doctor observes it by eye and experience, selects an optimal section, and then manually marks and measures the thickness of the fetal neck transparent layer. On the one hand, the selection of the optimal section is difficult to standardize: it differs between doctors, and even the same doctor differs between time periods. On the other hand, the accuracy of manual marking and measurement is low. A more standardized and more accurate measurement method is therefore urgently needed.
Summary of the invention
The present invention aims to solve at least one of the technical problems in the prior art by providing a method, device, and storage medium for measuring the thickness of the fetal neck transparent layer, so as to realize automatic selection of the optimal section of the fetal neck transparent layer, automatic recognition of the neck transparent layer, and automatic thickness measurement, improving the standardization, accuracy, and efficiency of the measurement.
A first aspect of the present invention provides a method for measuring the thickness of the fetal neck transparent layer, comprising:
acquiring an ultrasound image containing at least the fetal neck transparent layer;
identifying a first distribution area of the neck transparent layer in the ultrasound image by a convolutional neural network model;
segmenting, by a gradient method, a second distribution area containing the neck transparent layer from the first distribution area of the neck transparent layer, the recognition accuracy of the second distribution area being greater than that of the first distribution area;
traversing the pixel units in the second distribution area to obtain the maximum width of the second distribution area along the width direction.
Further, traversing the pixel units in the second distribution area comprises:
highlighting the contour of the second distribution area;
traversing the pixel units within the contour of the second distribution area by pixel value.
Further, segmenting, by the gradient method, the second distribution area containing the neck transparent layer from the first distribution area comprises:
calculating the rotation angle required to rotate the first distribution area to the horizontal position;
if the rotation angle is not zero, performing an affine transformation on the first distribution area according to the rotation angle, so as to rotate the first distribution area to the horizontal position.
Further, calculating the rotation angle required to rotate the first distribution area to the horizontal position comprises:
obtaining the pixel coordinates P_n of the first distribution area;
marking the circumscribed rectangle R_n of the first distribution area according to the pixel coordinates P_n;
dividing the circumscribed rectangle R_n evenly along the length direction into a first circumscribed rectangle R_l and a second circumscribed rectangle R_r;
calculating the first centroid coordinate C_l of the first circumscribed rectangle R_l and the second centroid coordinate C_r of the second circumscribed rectangle R_r;
calculating the required rotation angle of the first distribution area from the first centroid coordinate C_l and the second centroid coordinate C_r.
Further, segmenting, by the gradient method, the second distribution area containing the neck transparent layer from the first distribution area comprises:
obtaining the pixel coordinates B_N of the upper and lower edges of the first distribution area;
calculating, with a first loss function, the first loss value corresponding to each pixel coordinate in the first distribution area, and selecting the minimum first loss value;
calculating, with a second loss function, the second loss value corresponding to each pixel coordinate in the first distribution area, and selecting the minimum second loss value;
calculating the coordinates of the upper and lower edges of the second distribution area by a dynamic programming algorithm from the pixel coordinates corresponding to the minimum first loss value and the minimum second loss value.
Further, the dynamic programming algorithm back-derives, iteratively, the optimal pixel coordinates of the upper and lower edges of the entire second distribution area.
Further, before traversing the pixel units in the second distribution area, the method comprises:
filling the color pixel values inside the second distribution area with 255 and the color pixel values outside the second distribution area with 0.
Further, traversing the pixel units within the contour of the second distribution area by pixel value comprises:
dividing the second distribution area along its length direction into a number of segmentation pixel units of equal width;
traversing the several equally spaced segmentation pixel units and selecting the segmentation pixel unit with the largest area;
obtaining, from the selected segmentation pixel unit with the largest area, the thickest-pixel coordinates of the corresponding upper edge and lower edge;
calculating the fetal neck transparent layer thickness from the thickest-pixel coordinates of the upper and lower edges.
Further, calculating the fetal neck transparent layer thickness from the thickest-pixel coordinates of the upper and lower edges comprises:
restoring the thickest-pixel coordinates of the upper and lower edges to the original ultrasound image to obtain the actual coordinates of the thickest pixels of the upper and lower edges;
calculating the fetal neck transparent layer thickness from the actual coordinates of the thickest pixels of the upper and lower edges.
A second aspect of the present invention further provides another method for measuring the thickness of the fetal neck transparent layer, comprising:
acquiring a dynamic ultrasound video containing at least the fetal neck transparent layer;
segmenting a first distribution area of the neck transparent layer from the dynamic ultrasound video by a trained first convolution measurement model, the first convolution measurement model comprising at least a classification neural network, a detection neural network, and a segmentation neural network;
segmenting, by a gradient method, a second distribution area containing the neck transparent layer from the first distribution area of the neck transparent layer, the recognition accuracy of the second distribution area being greater than that of the first distribution area;
traversing the pixel units in the second distribution area to obtain the maximum width of the second distribution area along the width direction.
Further, segmenting the first distribution area of the neck transparent layer from the dynamic ultrasound video by the trained first convolution measurement model comprises:
identifying, by the classification neural network, the best single-frame ultrasound image for measuring the fetal neck transparent layer from the dynamic ultrasound video;
detecting, by the detection neural network, for each pixel of the best ultrasound image, the probability that the pixel belongs to the transparent layer, and marking the location area of the neck transparent layer according to the pixel probabilities;
segmenting, by the segmentation neural network, the first distribution area of the neck transparent layer from the location area of the neck transparent layer.
Further, the classification neural network is provided with a cross-entropy loss function, so as to identify from the dynamic ultrasound video the best single-frame ultrasound image for measuring the fetal neck transparent layer.
Further, the segmentation neural network is provided with a pixel-level cross-entropy loss function.
A third aspect of the present invention provides an ultrasound device, comprising:
a memory for storing a computer program;
a processor for executing the computer program, so that the processor performs the above method for measuring the thickness of the fetal neck transparent layer.
A fourth aspect of the present invention provides a computer-readable storage medium in which a computer program is stored; when the computer program is executed by a processor, it implements the steps of the above method for measuring the thickness of the fetal neck transparent layer.
The method of the present invention can automatically identify the first distribution area of the fetal neck transparent layer from an ultrasound image by a convolutional neural network model, then segment the optimal second distribution area of the neck transparent layer from the first distribution area by a gradient method, which improves the recognition accuracy of the neck transparent layer, and finally traverse the pixel units in the second distribution area to obtain the maximum width of the second distribution area along the width direction. This realizes automatic measurement of the neck transparent layer thickness with high accuracy and greatly improves doctors' work efficiency.
Further, the method of the present invention can apply a rotation when the identified first distribution area of the neck transparent layer is not in the horizontal position, so doctors do not need to acquire an ultrasound image in the horizontal position when collecting images; this lowers the requirements on the ultrasound image containing the fetal neck transparent layer and improves the speed and accuracy of the measurement.
The ultrasound device of the present invention can automatically identify the first distribution area of the fetal neck transparent layer, then segment the optimal second distribution area from the first distribution area by a gradient method, which improves the recognition accuracy of the neck transparent layer, and finally traverse the pixel units in the second distribution area to obtain the maximum width along the width direction, realizing automatic measurement of the neck transparent layer thickness with high accuracy and greatly improving doctors' work efficiency.
Brief description of the drawings
Fig. 1 is a schematic flowchart of the method for measuring fetal neck transparent layer thickness of the present invention.
Fig. 2 is a schematic flowchart of rotating to obtain the horizontal first distribution area according to the present invention.
Fig. 3 is a schematic flowchart of calculating the rotation angle of the first distribution area according to the present invention.
Fig. 4 is a schematic flowchart of calculating the coordinates of the upper and lower edges of the second distribution area according to the present invention.
Fig. 5 is a schematic flowchart of the method for measuring fetal neck transparent layer thickness according to one embodiment of the present invention.
Fig. 6a is an ultrasound image acquired by the present invention.
Fig. 6b shows the acquired ultrasound image with the first distribution area of the neck transparent layer identified.
Fig. 6c shows the acquired ultrasound image with the first centroid and the second centroid marked.
Fig. 6d shows the first distribution area of the neck transparent layer after rotation to the horizontal position.
Fig. 7 shows the acquired ultrasound image with the second distribution area of the neck transparent layer identified.
Fig. 8 shows the mask area of the acquired ultrasound image.
Fig. 9 is a schematic flowchart of another method for measuring fetal neck transparent layer thickness of the present invention.
Fig. 10 is a schematic diagram of a convolutional neural network structure according to one embodiment of the present invention.
Fig. 11 is a schematic diagram of the segmentation neural network in the convolutional neural network model of the present invention.
Detailed description of the embodiments
The present invention is further described in detail below through specific embodiments in conjunction with the drawings. Similar elements in different embodiments use associated similar element numbers. In the following embodiments, many details are described so that this application can be better understood. However, those skilled in the art will readily recognize that some of the features may be omitted in different cases, or may be replaced by other elements, materials, or methods. In some cases, some operations related to this application are not shown or described in the specification, in order to avoid the core of the application being overwhelmed by excessive description; for those skilled in the art, a detailed description of these related operations is not necessary, and they can fully understand them from the description in the specification and general technical knowledge in the field. In addition, the features, operations, or characteristics described in the specification may be combined in any suitable manner to form various embodiments. Meanwhile, the steps or actions in the method descriptions may also be reordered or adjusted in a manner obvious to those skilled in the art. Therefore, the various orders in the specification and drawings are only for clearly describing a certain embodiment and are not meant to be a required order, unless it is otherwise stated that a certain order must be followed. The ordinal numbers assigned to components herein, such as "first" and "second", are only used to distinguish the described objects and have no ordinal or technical meaning.
At present, the thickness of the fetal neck transparent layer is mainly measured by acquiring an ultrasound image with ultrasound equipment, after which the doctor observes it by eye and experience and measures the thickness after manual marking; the accuracy is low, and working by eye and experience is inefficient.
A first aspect of the present invention provides a method for measuring the thickness of the fetal neck transparent layer, as shown in Fig. 1, comprising the following steps:
S100, acquiring an ultrasound image containing at least the fetal neck transparent layer;
S200, identifying a first distribution area 600 of the neck transparent layer in the ultrasound image by a convolutional neural network model;
S300, segmenting, by a gradient method, a second distribution area 700 containing the neck transparent layer from the first distribution area 600 of the neck transparent layer, the recognition accuracy of the second distribution area being greater than that of the first distribution area;
S400, traversing the pixel units in the second distribution area 700 to obtain the maximum width of the second distribution area along the width direction.
The method of the present invention can automatically identify the first distribution area 600 of the fetal neck transparent layer from the ultrasound image by a convolutional neural network model, then segment the optimal second distribution area 700 of the neck transparent layer from the first distribution area 600 by a gradient method, which improves the recognition accuracy of the neck transparent layer, and finally traverse the pixel units in the second distribution area 700 to obtain the maximum width of the second distribution area along the width direction, realizing automatic measurement of the neck transparent layer thickness with high accuracy and greatly improving doctors' work efficiency. It should be understood that the first distribution area is an image of the approximate distribution area of the neck transparent layer, and the second distribution area is an image of its precise distribution area.
In step S100, the ultrasound image containing at least the fetal neck transparent layer is mainly acquired by the transducer of ultrasound imaging equipment. The ultrasound imaging equipment comprises at least a transducer, an ultrasound host, an input unit, a control unit, and a memory. The ultrasound imaging equipment may include a display screen, which may serve as the display of the recognition system. The transducer is used to transmit and receive ultrasonic waves: excited by transmission pulses, it transmits ultrasonic waves to the target tissue (for example, organs, tissues, blood vessels, etc. in a human or animal body), receives, after a certain delay, the ultrasonic echoes reflected from the target region that carry information about the target tissue, and reconverts these echoes into electrical signals to obtain an ultrasound image or video. The transducer may be connected to the ultrasound host in a wired or wireless manner.
The input unit is used to input the operator's control instructions. The input unit may be at least one of a keyboard, trackball, mouse, touch panel, handle, dial, joystick, and foot switch. The input unit may also accept non-contact signals, such as voice, gesture, gaze, or brain-wave signals.
The control unit can control at least scanning information such as focus information, drive frequency information, drive voltage information, and imaging mode. The control unit processes the signals differently according to the imaging mode required by the user to obtain ultrasound image data of different modes, and then forms ultrasound images of different modes through processing such as logarithmic compression, dynamic range adjustment, and digital scan conversion, for example B-mode images, C-mode images, D-mode images, Doppler blood-flow images, elasticity images containing tissue elastic properties, and so on, or other types of two-dimensional or three-dimensional ultrasound images.
It should be understood that, in one embodiment, the acquired ultrasound image containing at least the fetal neck transparent layer may also be an ultrasound image stored in a storage medium, for example a cloud server, a USB drive, or a hard disk.
Doctors usually observe the acquired ultrasound image by eye and working experience to mark and measure the fetal neck transparent layer; both the accuracy and the efficiency are low.
In step S200 of the present invention, the first distribution area 600 of the neck transparent layer in the ultrasound image is identified by a convolutional neural network model. It should be understood that the convolutional neural network model is determined by training a convolutional neural network on a number of ultrasound images in which the fetal neck transparent layer has been labeled. The trained convolutional neural network model can automatically identify the first distribution area of the neck transparent layer in the ultrasound image; the first distribution area 600 is the approximate distribution area identified by the model.
The trained convolutional neural network model of the present invention comprises an input layer, hidden layers, and an output layer; the hidden layers include several convolutional layers, down-sampling layers, and up-sampling layers. The input ultrasound image first passes through several convolutional and down-sampling layers, which perform convolution and down-sampling operations, and then through several convolutional and up-sampling layers, which perform convolution and up-sampling operations. The input layer, the hidden layers, and the output layer of the neural network are connected by weight parameters; the convolutional layers in the convolutional neural network model are used to automatically extract features from the ultrasound image. More preferably, every time the network of the convolutional neural network model convolves or samples the ultrasound image, features are copied from a shallower convolutional layer to a deeper one and added pixel-wise to the features of the deeper layer before entering the next convolutional layer.
Since the first distribution areas 600 identified in step S200 are not all horizontal, in order to improve the calculation accuracy and simplify the computation, the rotation angle by which the first distribution area 600 needs to be rotated is first calculated, and the first distribution area 600 is then rotated to the horizontal position. As shown in Fig. 2 and referring to Figs. 6a-6d, this specifically comprises the following steps:
S210, calculating the rotation angle required to rotate the first distribution area to the horizontal position;
S220, if the rotation angle is not zero, performing an affine transformation on the first distribution area according to the rotation angle to rotate the first distribution area to the horizontal position.
In another embodiment, it may first be determined whether the first distribution area 600 of the neck transparent layer is in the horizontal position; if not, the rotation angle required to rotate the first distribution area 600 to the horizontal position is calculated. Whether the first distribution area of the neck transparent layer is horizontal can be determined by a trained neural network, or manually by medical staff.
As shown in Fig. 3, the rotation angle is calculated as follows:
S2201, obtain the pixel coordinates P_n of the first distribution area 600;
S2202, mark the circumscribed rectangle R_n of the first distribution area 600 according to the pixel coordinates P_n;
S2203, divide the circumscribed rectangle R_n evenly along the length direction into a first circumscribed rectangle R_l and a second circumscribed rectangle R_r;
S2204, calculate the first centroid coordinate C_l of the first circumscribed rectangle R_l and the second centroid coordinate C_r of the second circumscribed rectangle R_r;
S2205, calculate the required rotation angle of the first distribution area 600 from the first centroid coordinate C_l and the second centroid coordinate C_r, as follows:

θ = arctan( (C_r^y − C_l^y) / (C_r^x − C_l^x) )

where θ is the rotation angle, C_r^y and C_r^x are the y-axis and x-axis coordinate values of the second centroid C_r, and C_l^y and C_l^x are the y-axis and x-axis coordinate values of the first centroid C_l.
S230, perform an affine transformation on the first distribution area 600 according to the rotation angle to rotate the first distribution area to the horizontal position. It should be understood that an affine transformation, also called an affine map, is, in geometry, a linear transformation of a vector space followed by a translation into another vector space. The horizontal first distribution area 600 is obtained by an affine transformation according to the calculated rotation angle.
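As a hedged illustration of steps S2201–S230 above, the centroid-based rotation angle and the affine rotation to the horizontal position can be sketched in plain Python. The list-of-coordinates region representation, the function names, and the rotation about the origin are simplifying assumptions for illustration, not part of the patent:

```python
import math

def rotation_angle(region):
    """Rotation angle (radians) from the centroids of the left and right halves
    of the region's bounding box; `region` is a list of (x, y) pixel coordinates."""
    xs = [x for x, _ in region]
    x_mid = (min(xs) + max(xs)) / 2.0            # split bounding box R_n into R_l and R_r
    left = [(x, y) for x, y in region if x <= x_mid]
    right = [(x, y) for x, y in region if x > x_mid]
    cl = (sum(x for x, _ in left) / len(left), sum(y for _, y in left) / len(left))
    cr = (sum(x for x, _ in right) / len(right), sum(y for _, y in right) / len(right))
    return math.atan2(cr[1] - cl[1], cr[0] - cl[0])  # theta = arctan(dy / dx)

def rotate_points(points, theta):
    """Affine rotation of coordinates by -theta, bringing the region horizontal."""
    c, s = math.cos(-theta), math.sin(-theta)
    return [(x * c - y * s, x * s + y * c) for x, y in points]
```

A region lying along a 45-degree line, for example, yields an angle of π/4, and rotating its points by that angle flattens it onto the horizontal axis.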
To further improve the accuracy with which the distribution area of the neck transparent layer is recognized, the present invention segments, by a gradient method, the optimal second distribution area 700 of the neck transparent layer from the first distribution area 600. It should be understood that the accuracy of the second distribution area 700 is greater than that of the first distribution area 600. As shown in Fig. 4, this specifically comprises:
S310, obtain the pixel coordinates B_N of the upper and lower edges of the first distribution area 600:

B_N = {p_1, p_2, p_3, …, p_{N−1}, p_N}

where N is the width of the ultrasound image, and p_1, p_2, p_3, …, p_{N−1}, p_N are the coordinates of the first through N-th pixels of the upper and lower edges of the ultrasound image. It should be understood that a pixel coordinate includes an x-axis and a y-axis value.
S320, calculate, with the first loss function, the first loss value corresponding to each pixel coordinate in the first distribution area 600, and select the minimum first loss value. The first loss function combines an edge term and a continuity term:

L_low(p_j) = Z_l(p_j) + f_adj(p_j, p_{j−1})

where p_j is the coordinate of the j-th pixel, and Z_l(p_j) uses the bilateral effect of the image Laplacian so that the pixel coordinates being sought approach the lower edge of the first distribution area 600. The function f_adj(p_j, p_{j−1}) requires the distance between two adjacent pixels to be small, which guarantees the continuity of the edge; q is the coordinate of the pixel before p, t is the coordinate of the pixel after p, and θ here is 90 degrees, meaning that only the second derivative in the y direction is computed.
S330, calculate, with the second loss function, the second loss value corresponding to each pixel coordinate in the first distribution area 600, and select the minimum second loss value. The second loss function has the analogous form:

L_up(p_j) = Z_u(p_j) + sig(p_j, p_{j−1})

where Z_u(p_j) uses the bilateral effect of the image Laplacian so that the pixel coordinates being sought approach the upper edge of the first distribution area 600; q is the coordinate of the pixel before p, θ here is 90 degrees, meaning that the second derivative in the y direction is computed, and sig(p_j, p_{j−1}) is a sigmoid function of the distance between the two pixel coordinates.
S340, calculate the coordinates of the upper and lower edges of the second distribution area 700 by a dynamic programming algorithm from the pixel coordinates corresponding to the minimum first loss value and the minimum second loss value. The dynamic programming algorithm back-derives, iteratively, the optimal pixel coordinates of the upper and lower edges of the entire second distribution area 700.
For the lower edge, the following iteration is used: the accumulated loss of each pixel of a column is the pixel's own first-loss value plus the minimum accumulated loss of its admissible predecessors in the previous column; the upper edge is treated in the same way with the second loss function.
The loss functions of the first pixel of the upper edge and of the lower edge serve as the starting values; by computing the loss function of the N points of each row, the loss functions of all pixels in the last column of the ultrasound image are obtained, from which the optimal pixel coordinates of the upper and lower edges of the entire second distribution area 700 can be back-derived. As shown in Fig. 7, the contour of the second distribution area 700 can be highlighted by means of these back-derived optimal edge pixel coordinates; the highlighting can outline the contour of the second area with lines or curves, or the contour of the second distribution area 700 can be highlighted.
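The iterative back-derivation of optimal edge coordinates can be sketched as a generic column-wise dynamic program. The concrete per-pixel losses of the patent are not reproduced here; the one-row step constraint between adjacent columns (standing in for the adjacency/continuity terms) and the cost-matrix representation are illustrative assumptions:

```python
def trace_edge(cost, max_step=1):
    """Dynamic-programming edge tracing: choose one row per column minimizing the
    summed per-pixel cost, with adjacent columns differing by at most `max_step`
    rows (edge continuity). `cost[y][x]` is the loss of placing the edge at (x, y).
    Returns the row indices, back-derived from the last column."""
    h, w = len(cost), len(cost[0])
    acc = [[cost[y][0] for y in range(h)]]          # accumulated loss, column 0
    back = []
    for x in range(1, w):
        col, ptr = [], []
        for y in range(h):
            lo, hi = max(0, y - max_step), min(h - 1, y + max_step)
            best = min(range(lo, hi + 1), key=lambda q: acc[-1][q])
            col.append(cost[y][x] + acc[-1][best])
            ptr.append(best)
        acc.append(col)
        back.append(ptr)
    y = min(range(h), key=lambda q: acc[-1][q])     # best end point in last column
    path = [y]
    for ptr in reversed(back):                      # back-derive optimal coordinates
        y = ptr[y]
        path.append(y)
    return path[::-1]
```

With a cost matrix whose cheap pixels lie on a diagonal, the traced path follows that diagonal while never jumping more than one row per column.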
It should be understood that the thickness of the fetal neck transparent layer is the thickness at the position where the transparent layer is thickest. The present invention traverses the pixel units in the second distribution area to obtain the maximum width of the second distribution area along the width direction. In one implementation, the contour of the second distribution area is first highlighted, and the pixel units within the contour are then traversed by pixel value to obtain the maximum width of the second distribution area along the width direction; that is, the maximum distance between the upper and lower edges of the second distribution area 700 is determined as the fetal neck transparent layer thickness. As shown in Fig. 8, the color pixel values inside the second distribution area 700 are first filled with 255 and those outside the second distribution area 700 with 0, so as to obtain the mask area of the second distribution area 700.
As shown in Fig. 5, traversing the pixel units within the contour of the second distribution area by pixel value comprises:
S410, dividing the second distribution area 700 along its length direction into a number of segmentation pixel units of equal width;
S420, traversing the several equally spaced segmentation pixel units and selecting the segmentation pixel unit with the largest area; the equal width is 1 pixel;
S430, obtaining, from the selected largest segmentation pixel unit, the thickest-pixel coordinates of the corresponding upper edge and lower edge, computed column by column with the formula:

E_x = S_x × W_x

with the height of the unit at position x given by h_x = y_max(x) − y_min(x), where y_min(x) is the smallest ordinate whose pixel value in the mask at the current x position is 255, and y_max(x) is the largest such ordinate. W_x represents the weight, namely the mean of the pixels of the original image over the current segmentation pixel unit S_x; p_{x,y} is the pixel value at the original coordinates; x ranges over the width of the second distribution area 700; and E_x is the computed numerical result. This approach avoids cases in which an error in the edge-contour computation would place the final result outside the actual neck transparent layer; the region with the largest area is not necessarily the position with the largest actual thickness. In theory the method also takes the rule of human visual observation into account, namely that the thickest region of the nuchal translucency must be a hypoechoic region.
S440, calculating the fetal neck transparent layer thickness from the thickest-pixel coordinates of the upper and lower edges.
The x position corresponding to the largest E_x computed above, together with its upper and lower ordinates p_x, gives the coordinates of the position of the thickest region of the nuchal translucency. Since these coordinates were obtained after rotation, their position is inverse-transformed back into the original space to obtain the final actual coordinates, and the distance between the upper and lower points is calculated with the two-point distance formula to obtain the final thickness.
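The column-wise weighting E_x = S_x × W_x and the selection of the thickest column can be sketched as follows; the weighting is reproduced as stated (S_x the column height between the top-most and bottom-most mask pixels, W_x the mean original-image value over that span), and the lists-of-rows representation for the mask and image is an illustrative assumption:

```python
def thickest_column(mask, original):
    """For each 1-pixel-wide column x, take the height S_x between the top-most
    and bottom-most mask pixels (value 255) and weight it by W_x, the mean of
    the original-image pixels over that span; the column with the largest
    E_x = S_x * W_x is selected. Returns (x, y_top, y_bottom) of that column."""
    h, w = len(mask), len(mask[0])
    best = None
    for x in range(w):
        rows = [y for y in range(h) if mask[y][x] == 255]
        if not rows:
            continue                                  # column outside the mask area
        y_top, y_bot = min(rows), max(rows)
        s = y_bot - y_top + 1                         # column height S_x
        wgt = sum(original[y][x] for y in range(y_top, y_bot + 1)) / s
        e = s * wgt                                   # E_x = S_x * W_x
        if best is None or e > best[0]:
            best = (e, x, y_top, y_bot)
    return best[1:]
```

The returned top and bottom ordinates would then be inverse-rotated back into the original image space before applying the two-point distance formula.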
A second aspect of the present invention further provides another method for measuring the thickness of the fetal neck transparent layer, as shown in Fig. 9, comprising:
S510, acquiring a dynamic ultrasound video containing at least the fetal neck transparent layer;
S520, segmenting a first distribution area 600 of the neck transparent layer from the dynamic ultrasound video by a trained first convolution measurement model, the first convolution measurement model comprising at least a classification neural network, a detection neural network, and a segmentation neural network;
S530, segmenting, by a gradient method, a second distribution area containing the neck transparent layer from the first distribution area of the neck transparent layer, the recognition accuracy of the second distribution area being greater than that of the first distribution area;
S540, traversing the pixel units in the second distribution area to obtain the maximum width of the second distribution area along the width direction.
The present invention can segment the first distribution area 600 of the neck transparent layer from a dynamic ultrasound video by the trained first convolution measurement model, then segment the optimal second distribution area 700 of the neck transparent layer from the first distribution area 600 by a gradient method, which improves the recognition accuracy of the neck transparent layer, and finally traverse the pixel units in the second distribution area to identify the thickest position of the transparent layer and measure the fetal neck transparent layer thickness, realizing automatic measurement of the neck transparent layer thickness with high accuracy and greatly improving doctors' work efficiency. It should be understood that the first distribution area is the approximate distribution area of the neck transparent layer and the second distribution area is its precise distribution area.
Before the neck transparent layer is measured, the best single-frame ultrasound image for measuring the fetal neck transparent layer must be identified in the dynamic ultrasound video. In one embodiment of the present invention, a classification neural network is used to identify, from the dynamic ultrasound video, the best single-frame ultrasound image for measuring the fetal neck transparent layer. The input of the classification neural network is every complete ultrasound image in the dynamic ultrasound video; after several convolutional and down-sampling layers, the classification neural network outputs a prediction for the ultrasound image based on the features extracted from it. The multiple convolutional layers of the classification neural network automatically extract features from the ultrasound image; the convolutional layers, the input and the convolutional layers, and the convolutional layers and the output are connected by weight parameters; the input layer size is set to match the size of the ultrasound images fed into the network.
The classification neural network comprises at least convolutional layers (conv), max-pooling layers, an averaging layer (avg), a logistic regression layer (softmax), and filters. As shown in Fig. 10, in one embodiment the classification neural network includes 5 convolutional layers, 4 max-pooling layers, 1 averaging layer (avg), and 1 logistic regression layer (softmax). Optionally, the input layer size of the classification neural network is set to 416*416*1; after several 3*3 convolution operations and max-pooling operations, and then an averaging operation over each group of features, the probability of whether the input ultrasound image is the best measurement image is obtained, and finally a softmax operation is performed, computed as:

softmax_i = e^{z_i} / Σ_j e^{z_j}

where i denotes the first or second value z_i output by layer 10 in Fig. 10, the denominator on the right of the equals sign is the sum of the exponentials, with base e, of the two values output by layer 10, and softmax_i is the probability result output by the logistic regression operation. The best ultrasound image for measuring the fetal neck transparent layer in the input dynamic ultrasound video is identified from these probability results.
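The softmax operation over the two output values can be sketched in a few lines; the subtraction of the maximum is a standard numerical-stability trick, an addition for illustration rather than part of the patent:

```python
import math

def softmax(values):
    """softmax_i = e^{z_i} / sum_j e^{z_j}; shifting by the max leaves the
    result unchanged but avoids overflow for large inputs."""
    m = max(values)
    exps = [math.exp(v - m) for v in values]
    total = sum(exps)
    return [e / total for e in exps]
```

For the two-value output of the classification network, the first component of `softmax([z1, z2])` would be read as the probability that the frame is the best measurement frame.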
After the best single-frame ultrasound image of the fetal neck transparent layer has been identified from the dynamic ultrasound video by the classification neural network, the detection neural network predicts, for each pixel of the best ultrasound image, the probability that the pixel belongs to the transparent layer, and the location area of the neck transparent layer is marked according to these pixel probabilities. The detection neural network determines pixels whose probability is greater than and/or equal to a preset probability to be the transparent layer. The detection neural network comprises an input layer, hidden layers, and an output layer; the input layer, the hidden layers, and the output layer are connected by weight parameters. The hidden layers include convolutional layers, max-pooling layers, and a combination layer: first several convolutional layers and several max-pooling layers are connected alternately, then several further convolutional layers are connected, and then a combination layer, which combines the high-level feature layer connected before it with one or several hidden layers preceding that high-level feature layer; the output images of the high-level feature layer and of the combined hidden layers have the same length and width. After the high-level feature layer has been combined with the previous layer or layers, the result is input together to the last convolutional layer, which serves as the output layer.
After the detection neural network has predicted the pixel probabilities of the transparent layer in the best ultrasound image and the location area of the neck transparent layer has been marked accordingly, the segmentation neural network segments the first distribution area 600 of the neck transparent layer out of that location area. The highlighting can outline the contour of the first area with lines or curves, or the contour of the distribution area can be highlighted; the segmentation neural network segments the first distribution area 600 from the location area of the neck transparent layer by highlighting it. In one embodiment, the segmentation neural network comprises an input layer, hidden layers, and an output layer; the hidden layers include several convolutional layers, down-sampling layers, and up-sampling layers. The input ultrasound image first passes through several convolutional and down-sampling layers, which perform convolution and down-sampling operations, and then through several convolutional and up-sampling layers, which perform convolution and up-sampling operations. More preferably, when the segmentation neural network convolves or samples the ultrasound image, features are copied from a shallower convolutional layer to a deeper one and added pixel-wise to the features of the deeper layer before entering the next convolutional layer. The input layer, the hidden layers, and the output layer of the segmentation neural network are connected by weight parameters; the convolutional layers automatically extract features from the ultrasound image. After the first distribution area has been segmented, the optimal second distribution area 700 of the neck transparent layer is segmented from the first distribution area 600 by a gradient method; the pixel units in the second distribution area 700 are traversed by pixel-value weighting, and the maximum distance between the upper and lower edges of the second distribution area 700 is determined as the fetal neck transparent layer thickness; that is, the pixel units in the second distribution area are traversed to obtain the maximum width of the second distribution area along the width direction.
The training process of the first convolution measurement model is as follows:
Step S1: collect dynamic ultrasound videos of the fetus and annotate them. The preferred annotation method in this embodiment is to mark, in each video, the single frame best suited for measuring the fetal neck transparent layer thickness; the best images for measuring the fetal neck transparent layer thickness are collected, and the fetal neck transparent layer is annotated with a closed curve formed by continuous broken lines.
Step S2: establish the classification neural network, detection neural network, and segmentation neural network based on the collected dynamic ultrasound videos, ultrasound images, and annotation information, and train them to obtain the first convolution measurement model.
The processing flow of the neural networks comprises:
Step S21: divide the collected ultrasound images into a training set, a validation set, and a test set.
From the ultrasound images in the collected ultrasound videos used for the classification neural network, 3/5 of the images are randomly selected as the training set and 1/5 as the validation set; the remaining 1/5 of the ultrasound images are used as the test set. The collected ultrasound images used for training the detection and segmentation neural networks are likewise divided into three sets. The training set is used to train the neural networks; the validation set is used to verify the effect of the neural networks and to help select the optimal neural network model parameters; the test set is used to measure how well the neural networks work. Preferably, the proportions of the training set, validation set, and test set are 3/5, 1/5, and 1/5.
The single-frame ultrasound images split from dynamic ultrasound videos acquired by ultrasound equipment of different brands, or by different models of the same brand, differ in size, so the ultrasound images need to be preprocessed. The ultrasound images of the training and validation sets are fixed to a certain size, and ultrasound images of the same size are normalized; for example, a preprocessed ultrasound image is 256*256*3, where 256*256 is the length and width of the preprocessed image, i.e. 256 pixels long and 256 pixels wide. Optionally, when fixing the ultrasound image to a certain size, the aspect ratio of the original image is preserved, or the aspect ratio is changed. The specific normalization method is to subtract the mean of the image pixels from each pixel value of the ultrasound image and divide by the variance of the image pixels; the normalized ultrasound image has mean 0 and variance 1. Since the size of the ultrasound image changes during preprocessing, the template of the ultrasound image must also be changed in the corresponding proportion.
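The normalization step described above can be sketched in plain Python (the resize is omitted). The text says to divide by the variance, but dividing by the standard deviation, the square root of the variance, is what actually yields unit variance, so that is assumed here; the function name and list-of-rows image representation are illustrative:

```python
import math

def normalize(image):
    """Normalize an image (list of rows of pixel values) to mean 0 and
    variance 1: subtract the pixel mean, divide by the standard deviation."""
    pixels = [p for row in image for p in row]
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    std = math.sqrt(var)
    return [[(p - mean) / std for p in row] for row in image]
```

After this step the image statistics are brand-independent, which is the point of the preprocessing.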
Step S22: establish the neural network structures.
The present invention first establishes the classification neural network to predict which frame of the collected dynamic ultrasound video is best for measuring the fetal neck transparent layer thickness. The best single-frame ultrasound image for measuring the fetal neck transparent layer is identified from the dynamic ultrasound video by the classification neural network. The input of the classification neural network is every complete ultrasound image in the dynamic ultrasound video; after several convolutional and down-sampling layers, the network outputs a prediction for the ultrasound image based on the features extracted from it. The multiple convolutional layers of the classification neural network automatically extract features from the ultrasound image; the convolutional layers, the input and the convolutional layers, and the convolutional layers and the output are connected by weight parameters; the input layer size is set to match the size of the ultrasound images fed into the network.
After the above classification neural network, the present invention establishes a detection neural network to detect the part of the neck transparent layer in the ultrasound image best suited for thickness measurement. The input of the detection neural network is the collected ultrasound image best suited for neck transparent layer thickness measurement; the curve annotations of the ultrasound images are converted into label boxes and used to train this neural network. The detection neural network combines the high-level feature layer at the end of the convolutional neural network with the low-level fine-grained features of the previous layer or layers to improve the detection of small target objects. Optionally, the input of the detection neural network is set to 416*416*1 and the output to 13*13*35; the detection neural network outputs the coordinate information and probability information of the candidate marker boxes of the neck transparent layer in the ultrasound image.
In one implementation, the input layer size of the segmentation neural network is 256*256*1. Two 3*3 convolution operations each yield 256*256*16 features, after which a down-sampling operation yields 128*128*16 features; several further 3*3 convolutions and down-sampling operations yield 64*64*64 features; several up-sampling operations and 3*3 convolutions then yield 256*256*16 features; finally, a 1*1 convolution operation produces the 256*256*2 prediction result. The values of the prediction result range from 0 to 1 and indicate the probability that the corresponding pixel of the ultrasound image lies within the fetal neck transparent layer; it should be understood that the prediction result is a probability. As shown in Fig. 11, the gray rectangles represent the features extracted after each convolution or sampling operation on the image, and the white rectangles represent the copied features. Preferably, the convolutions of the segmentation neural network use dilated convolutions of a suitable scale to enlarge the network's receptive field and improve prediction accuracy. Optionally, the up-sampling and down-sampling layers of the network can be removed while still keeping the length and width of the network's input and output layers the same; optionally, the input ultrasound image of the segmentation neural network can be expanded somewhat beyond the box predicted by the detection network, for example by 20 pixels up, down, left, and right.
Step S23: initialize the neural networks: set the weight parameters of the neural networks to random numbers between 0 and 1.
Step S24: calculate the loss functions of the neural networks.
The loss function of the classification neural network designed above is the cross-entropy loss; the loss function of the detection network involved above includes the loss of the detection-box positions and of the predicted probabilities of the detection boxes; the loss function of the segmentation neural network involved above is the pixel-level cross-entropy loss.
The cross-entropy of the classification neural network is computed as:

CE = −(1/N) Σ_{i=1}^{N} [ p_i·log(t_i) + (1 − p_i)·log(1 − t_i) ]

where N is the total number of ultrasound images, p_i indicates whether the i-th ultrasound image is the best measurement image, and t_i is the probability predicted by the classification neural network that the i-th ultrasound image is the best measurement image; the closer the values of p_i and t_i, the smaller the cross-entropy loss.
The loss function of the detection network consists of two main parts: the error of the probability prediction and of the center-coordinate, height, and width predictions for detection boxes that contain the target object, and the error of the probability prediction for detection boxes that do not contain the target object. It is computed as:

Loss_function = λ1·Σ_{i=0}^{S²}Σ_{j=0}^{B} I_ij^obj (C_i − Ĉ_i)² + λ2·Σ_{i=0}^{S²}Σ_{j=0}^{B} I_ij^obj [(x_i − x̂_i)² + (y_i − ŷ_i)² + (h_i − ĥ_i)² + (w_i − ŵ_i)²] + λ3·Σ_{i=0}^{S²}Σ_{j=0}^{B} I_ij^noobj (C_i − Ĉ_i)²

where λ1–λ3 represent the proportion of each error in the total loss function, and each error takes the form of a squared error.
The first term of the Loss_function represents the error of the probability prediction for detection boxes containing part of the target neck transparent layer. Here, S² means that the ultrasound image is divided into S×S grid cells, and B is the number of detection boxes set for each grid cell. I_ij^obj indicates whether the j-th detection box of the i-th grid cell contains the target object: if the intersection of the detection box and the label box is large, the detection box is considered to contain the target object (I_ij^obj = 1); otherwise it is considered not to contain it (I_ij^obj = 0). Ĉ_i denotes the detection network's predicted probability for the current j-th detection box of the grid cell. The second term represents the prediction error of the position, height, and width of the detection boxes containing the target object, where x_i, y_i, h_i, w_i are the center position, height, and width of the label box of the i-th grid cell, and x̂_i, ŷ_i, ĥ_i, ŵ_i are the corresponding information of the predicted bounding box. The third term is the error of the probability prediction for detection boxes that do not contain the target object; because bounding boxes without the target object are the majority, λ3 is usually set smaller than λ1, otherwise a network with good recognition performance cannot be trained. Optionally, λ1 = 5 and λ2 = λ3 = 1.
The pixel-level cross-entropy loss function of the segmentation neural network is:

L = −Σ_{i=1}^{x} Σ_{j=1}^{y} [ t_ij·log(p_ij) + (1 − t_ij)·log(1 − p_ij) ]

The pixel-level cross-entropy loss function is the sum of the prediction errors over every pixel of the ultrasound image, where x and y are the length and width of the input image of the segmentation neural network, p_ij is the probability predicted by the segmentation neural network that the pixel in row i, column j of the ultrasound image belongs to the predicted region, and t_ij is the value of that pixel in the ultrasound image template: if the pixel belongs to the predicted region the value is 1, otherwise 0. The closer the predicted probabilities output by the segmentation neural network are to the ultrasound image template, the smaller the cross-entropy loss function.
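A minimal sketch of the pixel-level cross-entropy above, assuming the binary form implied by the 0/1 template values; the clamping epsilon is an added numerical guard, not part of the patent:

```python
import math

def pixel_cross_entropy(pred, template, eps=1e-12):
    """Sum of per-pixel binary cross-entropy between predicted probabilities
    `pred` and the 0/1 ultrasound-image template `template` (lists of rows)."""
    loss = 0.0
    for prow, trow in zip(pred, template):
        for p, t in zip(prow, trow):
            p = min(max(p, eps), 1 - eps)   # clamp to avoid log(0)
            loss -= t * math.log(p) + (1 - t) * math.log(1 - p)
    return loss
```

As stated in the text, the loss shrinks toward zero as the predicted probabilities approach the template.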
Step S25: train the neural networks to obtain the first convolution measurement model.
In this step, ultrasound images from the training set are selected at random and randomly transformed, then input to the neural network, and suitable numbers of training iterations and batch sizes are chosen for training. Optionally, the transformation operations include rotation, scaling, cropping, elastic deformation, and so on; preferably, only random rotation is used in the present invention. Optionally, an adaptive moment estimation (Adam) optimization method is used as the mechanism for updating the network parameters according to the loss function of the neural network.
Step S26: select the optimal neural network model parameters.
The prediction results of the three neural networks on the validation set are computed for different parameters; the classification accuracy is computed, and the parameters with the highest accuracy are taken as the parameters of the classification neural network. The intersection over union between the prediction boxes of the detection network and the label boxes obtained by converting the validation-set annotations is computed as:

intersection over union = (prediction result ∩ graphic template) / (prediction result ∪ graphic template)

The possible value range of the intersection over union is [0, 1].
The parameters giving the largest intersection over union are selected as the optimal parameters of the detection network; the intersection over union between the segmentation network's predictions and the validation image templates obtained from the annotation conversion is computed, and the parameters giving the largest value are selected as the optimal parameters of the segmentation neural network. The intersection over union of two objects is their intersection divided by their union.
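The intersection-over-union rule above can be sketched for masks represented as sets of pixel coordinates; this set representation, and the treatment of two empty masks, are illustrative assumptions:

```python
def iou(pred_pixels, template_pixels):
    """Intersection over union of two pixel sets: |A ∩ B| / |A ∪ B|, in [0, 1]."""
    a, b = set(pred_pixels), set(template_pixels)
    union = a | b
    if not union:
        return 1.0   # both masks empty: treated as perfect agreement (assumption)
    return len(a & b) / len(union)
```

Model selection then reduces to keeping the parameter set whose validation predictions maximize this value.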
Segmenting the first distribution area 600 of the neck transparent layer from the dynamic ultrasound video by the trained first convolution measurement model comprises:
Step S31: fix all single-frame ultrasound images of the ultrasound video to the same size, adapted to the input layer of the classification neural network, and normalize the ultrasound images;
Step S32: input the normalized single-frame ultrasound images into the trained classification neural network model with the optimal parameters to obtain the ultrasound image predicted by the classification network to be the best measurement frame; input the best measurement frame into the detection neural network to detect the location area of the fetal neck transparent layer in the ultrasound image, and then obtain the first distribution area of the fetal neck transparent layer through the segmentation neural network.
A third aspect of the present invention provides an ultrasound device, comprising: a memory for storing a computer program; and a processor for executing the computer program, so that the processor performs the above method for measuring the thickness of the fetal neck transparent layer.
The ultrasound device of the present invention can automatically identify the first distribution area 600 of the fetal neck transparent layer, then segment the optimal second distribution area 700 of the neck transparent layer from the first distribution area 600 by a gradient method, which improves the recognition accuracy of the neck transparent layer, and finally identify the thickest position of the transparent layer by pixel-value weighting and measure the fetal neck transparent layer thickness, realizing automatic measurement of the neck transparent layer thickness with high accuracy and greatly improving doctors' work efficiency.
A fourth aspect of the present invention provides a computer-readable storage medium in which a computer program is stored; when executed by a processor, the computer program implements the steps of the above method for measuring the thickness of the fetal neck transparent layer.
Finally, it should be noted that the above specific embodiments are only used to illustrate the technical solution of the present invention and not to limit it. Although the present invention has been described in detail with reference to examples, those of ordinary skill in the art should understand that the technical solution of the present invention may be modified or equivalently replaced without departing from the spirit and scope of the technical solution of the present invention, and all such modifications shall be covered by the scope of the claims of the present invention.

Claims (15)

  1. A method for measuring the thickness of the fetal neck transparent layer, characterized by comprising:
    acquiring an ultrasound image containing at least the fetal neck transparent layer;
    identifying a first distribution area of the neck transparent layer in the ultrasound image by a convolutional neural network model;
    segmenting, by a gradient method, a second distribution area containing the neck transparent layer from the first distribution area of the neck transparent layer, the recognition accuracy of the second distribution area being greater than that of the first distribution area;
    traversing the pixel units in the second distribution area to obtain the maximum width of the second distribution area along the width direction.
  2. The method for measuring the thickness of the fetal neck transparent layer according to claim 1, characterized in that traversing the pixel units in the second distribution area comprises:
    highlighting the contour of the second distribution area;
    traversing the pixel units within the contour of the second distribution area by pixel value.
  3. The method for measuring the thickness of the fetal neck transparent layer according to claim 1, characterized in that segmenting, by the gradient method, the second distribution area containing the neck transparent layer from the first distribution area comprises:
    calculating the rotation angle required to rotate the first distribution area to the horizontal position;
    if the rotation angle is not zero, performing an affine transformation on the first distribution area according to the rotation angle to rotate the first distribution area to the horizontal position.
  4. The method for measuring the thickness of the fetal neck transparent layer according to claim 3, characterized in that calculating the rotation angle required to rotate the first distribution area to the horizontal position comprises:
    obtaining the pixel coordinates P_n of the first distribution area;
    marking the circumscribed rectangle R_n of the first distribution area according to the pixel coordinates P_n;
    dividing the circumscribed rectangle R_n evenly along the length direction into a first circumscribed rectangle R_l and a second circumscribed rectangle R_r;
    calculating the first centroid coordinate C_l of the first circumscribed rectangle R_l and the second centroid coordinate C_r of the second circumscribed rectangle R_r;
    calculating the required rotation angle of the first distribution area from the first centroid coordinate C_l and the second centroid coordinate C_r.
  5. The method for measuring the thickness of the fetal neck transparent layer according to any one of claims 1-4, characterized in that segmenting, by the gradient method, the second distribution area containing the neck transparent layer from the first distribution area comprises:
    obtaining the pixel coordinates B_N of the upper and lower edges of the first distribution area;
    calculating, with a first loss function, the first loss value corresponding to each pixel coordinate in the first distribution area, and selecting the minimum first loss value;
    calculating, with a second loss function, the second loss value corresponding to each pixel coordinate in the first distribution area, and selecting the minimum second loss value;
    calculating the coordinates of the upper and lower edges of the second distribution area by a dynamic programming algorithm from the pixel coordinates corresponding to the minimum first loss value and the minimum second loss value.
  6. The method for measuring the thickness of the fetal neck transparent layer according to claim 5, characterized in that the dynamic programming algorithm back-derives, iteratively, the optimal pixel coordinates of the upper and lower edges of the entire second distribution area.
  7. The method for measuring the thickness of the fetal neck transparent layer according to any one of claims 1-4, characterized in that, before traversing the pixel units in the second distribution area, the method comprises:
    filling the color pixel values inside the second distribution area with 255 and the color pixel values outside the second distribution area with 0.
  8. The method for measuring the thickness of the fetal neck transparent layer according to claim 2, characterized in that traversing the pixel units within the contour of the second distribution area by pixel value comprises:
    dividing the second distribution area along its length direction into a number of segmentation pixel units of equal width;
    traversing the several equally spaced segmentation pixel units and selecting the segmentation pixel unit with the largest area;
    obtaining, from the selected segmentation pixel unit with the largest area, the thickest-pixel coordinates of the corresponding upper edge and lower edge;
    calculating the fetal neck transparent layer thickness from the thickest-pixel coordinates of the upper and lower edges.
  9. The method for measuring the thickness of the fetal neck transparent layer according to claim 8, characterized in that calculating the fetal neck transparent layer thickness from the thickest-pixel coordinates of the upper and lower edges comprises:
    restoring the thickest-pixel coordinates of the upper and lower edges to the original ultrasound image to obtain the actual coordinates of the thickest pixels of the upper and lower edges;
    calculating the fetal neck transparent layer thickness from the actual coordinates of the thickest pixels of the upper and lower edges.
  10. A method for measuring the thickness of the fetal neck transparent layer, characterized by comprising:
    acquiring a dynamic ultrasound video containing at least the fetal neck transparent layer;
    segmenting a first distribution area of the neck transparent layer from the dynamic ultrasound video by a trained first convolution measurement model, the first convolution measurement model comprising at least a classification neural network, a detection neural network, and a segmentation neural network;
    segmenting, by a gradient method, a second distribution area containing the neck transparent layer from the first distribution area of the neck transparent layer, the recognition accuracy of the second distribution area being greater than that of the first distribution area;
    traversing the pixel units in the second distribution area to obtain the maximum width of the second distribution area along the width direction.
  11. The method for measuring the thickness of the fetal neck transparent layer according to claim 10, characterized in that segmenting the first distribution area of the neck transparent layer from the dynamic ultrasound video by the trained first convolution measurement model comprises:
    identifying, by the classification neural network, the best single-frame ultrasound image for measuring the fetal neck transparent layer from the dynamic ultrasound video;
    detecting, by the detection neural network, for each pixel of the best ultrasound image, the probability that the pixel belongs to the transparent layer, and marking the location area of the neck transparent layer according to the pixel probabilities;
    segmenting, by the segmentation neural network, the first distribution area of the neck transparent layer from the location area of the neck transparent layer.
  12. The method for measuring the thickness of the fetal neck transparent layer according to claim 11, characterized in that the classification neural network is provided with a cross-entropy loss function, so as to identify from the dynamic ultrasound video the best single-frame ultrasound image for measuring the fetal neck transparent layer.
  13. The method for measuring the thickness of the fetal neck transparent layer according to claim 10 or 11, characterized in that the segmentation neural network is provided with a pixel-level cross-entropy loss function.
  14. An ultrasound device, characterized by comprising:
    a memory for storing a computer program;
    a processor for executing the computer program, so that the processor performs the method for measuring the thickness of the fetal neck transparent layer according to any one of claims 1 to 13.
  15. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, it implements the steps of the method for measuring the thickness of the fetal neck transparent layer according to any one of claims 1 to 13.
PCT/CN2019/093710 2019-04-20 2019-06-28 胎儿颈部透明层厚度测量方法、设备及存储介质 WO2020215484A1 (zh)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201910320623.X 2019-04-20
CN201910320623.XA CN111820948B (zh) 2019-04-20 2019-04-20 Fetal growth parameter measurement method, system and ultrasound equipment
CN201910451627.1 2019-05-28
CN201910451627.1A CN110163907B (zh) 2019-05-28 2019-05-28 Method, device and storage medium for measuring fetal nuchal translucency thickness

Publications (1)

Publication Number Publication Date
WO2020215484A1 true WO2020215484A1 (zh) 2020-10-29

Family

ID=72941298

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/093710 WO2020215484A1 (zh) 2019-04-20 2019-06-28 Method, device and storage medium for measuring fetal nuchal translucency thickness

Country Status (1)

Country Link
WO (1) WO2020215484A1 (zh)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011058186A1 (en) * 2009-11-16 2011-05-19 Advanced Medical Diagnostics Holding S.A. Method of re-sampling ultrasound data
CN102113897A (zh) * 2009-12-31 2011-07-06 深圳迈瑞生物医疗电子股份有限公司 Method and device for extracting and measuring a target of interest in an image
US20130190600A1 (en) * 2012-01-25 2013-07-25 General Electric Company System and Method for Identifying an Optimal Image Frame for Ultrasound Imaging
CN103239249A (zh) * 2013-04-19 2013-08-14 深圳大学 Measurement method for fetal ultrasound images
CN104156967A (zh) * 2014-08-18 2014-11-19 深圳市开立科技有限公司 Fetal nuchal translucency image segmentation method, device and system
CN107582097A (zh) * 2017-07-18 2018-01-16 中山大学附属第医院 Intelligent auxiliary decision-making system based on multimodal ultrasomics
CN108186051A (zh) * 2017-12-26 2018-06-22 珠海艾博罗生物技术股份有限公司 Image processing method and system for automatically measuring fetal biparietal diameter length from ultrasound images
CN108378869A (zh) * 2017-12-26 2018-08-10 珠海艾博罗生物技术股份有限公司 Image processing method and system for automatically measuring fetal head circumference length from ultrasound images
CN108629770A (zh) * 2018-05-03 2018-10-09 河北省计量监督检测研究院廊坊分院 Ultrasound image segmentation method based on support vector machine
CN109273084A (zh) * 2018-11-06 2019-01-25 中山大学附属第医院 Method and system based on multimodal ultrasomics feature modelling
CN109544517A (zh) * 2018-11-06 2019-03-29 中山大学附属第医院 Deep-learning-based multimodal ultrasomics analysis method and system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19925577; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 19925577; Country of ref document: EP; Kind code of ref document: A1)