WO2020215485A1 - Fetal growth parameter measurement method, system, and ultrasound device - Google Patents

Fetal growth parameter measurement method, system, and ultrasound device

Info

Publication number
WO2020215485A1
Authority
WO
WIPO (PCT)
Prior art keywords
layer
measured
ultrasound image
convolutional
layers
Prior art date
Application number
PCT/CN2019/093711
Other languages
English (en)
French (fr)
Inventor
李璐
赵明昌
Original Assignee
无锡祥生医疗科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 无锡祥生医疗科技股份有限公司 filed Critical 无锡祥生医疗科技股份有限公司
Publication of WO2020215485A1 publication Critical patent/WO2020215485A1/zh

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/08 Detecting organic movements or changes, e.g. tumours, cysts, swellings
    • A61B8/0866 Detecting organic movements or changes, e.g. tumours, cysts, swellings involving foetal diagnosis; pre-natal or peri-natal diagnosis of the baby
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/08 Detecting organic movements or changes, e.g. tumours, cysts, swellings
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/52 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/5215 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks

Definitions

  • the invention relates to the technical field of ultrasonic image processing, in particular to a method, system and ultrasonic equipment for measuring fetal growth parameters.
  • ultrasound examination plays an important role in the prenatal diagnosis and screening of fetal malformations.
  • B-mode ultrasound has been widely used in clinical practice, evolving from two-dimensional ultrasound to the four-dimensional ultrasound used today; both the operating skill of doctors and the function and resolution of the instruments have improved greatly.
  • measuring the main growth parameters of the fetus by ultrasound during pregnancy can assist doctors in diagnosing fetal structural malformations and is a key examination item during pregnancy.
  • the main growth parameters in fetal ultrasound images, such as the biparietal diameter, head circumference, abdominal circumference, and femur length, are mainly measured manually, which depends on the doctor's experience and operating technique, so accuracy and efficiency are low.
  • the present invention aims to solve at least one of the technical problems existing in the prior art, and provides a method, system and ultrasound equipment for measuring fetal growth parameters, so as to realize automatic measurement of fetal growth parameters and improve the accuracy and efficiency of the measurement.
  • the present invention provides a method for measuring fetal growth parameters, including:
  • the convolutional neural measurement model is determined by training a convolutional neural network on annotated ultrasound images of different fetal parts;
  • the ultrasound image is a single frame of ultrasound image or at least two frames of ultrasound image.
  • determining the distribution area of the part to be measured in the ultrasound image according to the convolutional neural measurement model includes:
  • pixels in the ultrasound image whose probability exceeds the preset probability are determined to be the part to be measured.
  • determining the distribution area of the part to be measured in the ultrasound image according to the convolutional neural measurement model includes:
  • pixels in the location area whose probability exceeds the preset probability are determined to be the part to be measured.
  • the contour of the distribution area of the part to be measured is fitted by the least-squares method.
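The least-squares fitting step can be illustrated with a simplified sketch. The patent fits ellipse-like contours (head, abdomen) but does not spell out the algebra; the hypothetical `fit_circle_least_squares` below uses the simpler Kasa circle fit to show the normal-equation approach — an ellipse fit proceeds analogously with more parameters.

```python
import numpy as np

def fit_circle_least_squares(points):
    """Kasa least-squares circle fit: minimise the algebraic error of
    x^2 + y^2 + D*x + E*y + F = 0 over the contour points."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x ** 2 + y ** 2)
    D, E, F = np.linalg.lstsq(A, b, rcond=None)[0]
    cx, cy = -D / 2.0, -E / 2.0          # circle centre
    r = np.sqrt(cx ** 2 + cy ** 2 - F)   # circle radius
    return cx, cy, r

# Four points lying on the circle with centre (3, 4) and radius 5:
cx, cy, r = fit_circle_least_squares([(8, 4), (3, 9), (-2, 4), (3, -1)])
```

Minimising the algebraic error keeps the problem linear, which is why a single `lstsq` call suffices; a geometric (orthogonal-distance) fit would need iteration.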
  • calculating the probability that each pixel in the ultrasound image is the part to be measured according to the convolutional neural measurement model includes:
  • the output layer performs convolution according to the second feature to calculate the probability that each pixel in the ultrasound image is the part to be measured.
  • each time the neural network of the convolutional neural measurement model performs a convolution or sampling operation on the ultrasound image, features are copied from a shallower convolutional layer to a deeper convolutional layer, and the copied features are added pixel-wise to the corresponding features of the deeper layer before entering the next convolutional layer.
  • the location area of the part to be measured detected by the detection network is marked with a detection frame.
  • the detection network includes an input layer, a hidden layer, and an output layer.
  • the input layer and the hidden layer of the detection network, between each hidden layer, and between the hidden layer and the output layer are connected by weight parameters,
  • the hidden layer includes convolutional layers, max-pooling layers, and a combination layer: first, several convolutional layers and several max-pooling layers are connected alternately, then several more convolutional layers are connected, and then a combination layer is connected that combines the preceding high-level feature layer with one or more hidden layers before it;
  • the output images of the high-level feature layer and the hidden layers combined with it have the same length and width;
  • the high-level feature layer, combined with the previous one or more hidden layers, is input to the last convolutional layer, and the last convolutional layer is used as the output layer.
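The combination layer above merges two feature maps of equal length and width. The patent does not specify the merge operation; the sketch below assumes channel-wise concatenation (as in the passthrough/route layers common in detection networks), which is one plausible reading.

```python
import numpy as np

def combine_layers(high_level, earlier):
    """Combination-layer sketch: merge an earlier hidden layer with the
    high-level feature layer. Channel-wise concatenation is assumed here;
    both inputs must already share the same output length and width,
    as the text requires."""
    assert high_level.shape[:2] == earlier.shape[:2], "H and W must match"
    return np.concatenate([high_level, earlier], axis=-1)

# e.g. a 13x13x256 high-level map combined with a 13x13x64 earlier map:
combined = combine_layers(np.zeros((13, 13, 256)), np.zeros((13, 13, 64)))
```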
  • calculating the probability that each pixel in the location area is the part to be measured according to the segmentation network of the convolutional neural measurement model specifically includes:
  • the ultrasound image of the location area of the part to be measured, marked by the detection network, undergoes convolution and down-sampling operations through several convolutional layers and down-sampling layers to obtain the first feature; the first feature then undergoes convolution and up-sampling operations through several convolutional layers and up-sampling layers to obtain the second feature;
  • the output layer performs convolution on the second feature to calculate the probability that each pixel in the ultrasound image is the part to be measured.
  • when the segmentation network performs convolution or sampling processing on the ultrasound image, features are copied from a shallower convolutional layer to a deeper convolutional layer, and the copied features are added pixel-wise to the corresponding features of the deeper layer before entering the next convolutional layer.
  • the detection network of the convolutional neural measurement model is trained through the loss function to reduce the detection error.
  • the present invention also provides a fetal growth parameter measurement system, including:
  • an acquisition unit, which is used to acquire an ultrasound image of at least one part of the fetus;
  • a first processing unit, which determines the distribution area of the part to be measured in the ultrasound image according to the convolutional neural measurement model, the model being determined by training a convolutional neural network on annotated ultrasound images of different fetal parts;
  • a second processing unit, which highlights the contour of the distribution area of the part to be measured and performs fitting;
  • a measuring unit, which measures the fitted contours corresponding to the parts to be measured to obtain the growth parameters of the different parts to be measured.
  • the present invention also provides an ultrasonic device, including:
  • a memory, used to store a computer program;
  • a processor, used to execute the computer program to implement the above-mentioned method for measuring fetal growth parameters.
  • the present invention also provides a computer-readable storage medium in which a computer program is stored; when the computer program is executed by a processor, the steps of the method for measuring fetal growth parameters are implemented.
  • the fetal growth parameter measurement method of the present invention can automatically identify the fetal part to be measured in the ultrasound image through the trained convolutional neural measurement model and automatically measure the main growth parameters of the fetus; the results are more accurate and obtained more efficiently than with manual measurement.
  • the present invention normalizes the ultrasound image before processing the fetal ultrasound image, fixing it to a size suitable for the input layer of the convolutional neural measurement model, so that ultrasound images of different sizes can be processed, which improves applicability.
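The normalization step might look like the following sketch. The patent only requires fixing the image to the input-layer size; the nearest-neighbour resizing and min-max intensity scaling used here are illustrative assumptions.

```python
import numpy as np

def normalize_for_model(img, size=256):
    """Resize a grayscale ultrasound image to size x size (nearest
    neighbour) and min-max scale intensities to [0, 1], producing an
    (H, W, 1) array matching a 256*256*1 input layer."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    rows = np.arange(size) * h // size   # source row for each target row
    cols = np.arange(size) * w // size   # source column for each target column
    resized = img[np.ix_(rows, cols)]
    lo, hi = resized.min(), resized.max()
    if hi > lo:
        resized = (resized - lo) / (hi - lo)
    return resized[..., None]            # add the single channel axis

out = normalize_for_model(np.arange(300 * 400, dtype=float).reshape(300, 400))
```

In practice bilinear interpolation would usually replace the nearest-neighbour lookup, but the shape contract is the same.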
  • the present invention improves the accuracy and precision of measurement by identifying the probability that each pixel in the ultrasound image is the part to be measured.
  • the present invention first detects the location area of the part to be measured through the detection network of the convolutional neural measurement model, and then calculates the probability that each pixel in the location area is the part to be measured according to the segmentation network, which improves the precision and accuracy of the measurement.
  • the fetal growth parameter measurement system of the present invention can automatically identify the fetal part to be measured in the ultrasound image through the trained convolutional neural measurement model and automatically measure the main growth parameters of the fetus; the measurement results are more accurate and obtained more efficiently than with manual measurement.
  • the ultrasound equipment of the present invention can automatically measure the growth parameters of different parts of the fetus through the trained convolutional neural measurement model; the measurement accuracy is high, and the doctor's work efficiency is improved.
  • Figure 1 is a flow chart of the measurement method of the present invention.
  • Fig. 2a is an annotated schematic diagram of an ultrasound image of the fetal abdomen of the present invention.
  • Fig. 2b is an annotated schematic diagram of an ultrasound image of the fetal thigh of the present invention.
  • Fig. 2c is a schematic diagram of transforming an ultrasound image with an annotated curve into a template according to the present invention.
  • Fig. 3 is a schematic diagram of the first structure of the convolutional neural measurement model of the present invention.
  • FIG. 4 is a schematic diagram of the detection network in the second structure of the convolutional neural measurement model of the present invention.
  • Fig. 5 is a schematic diagram of the marking frame at the fetal thigh of the present invention.
  • FIG. 6 is a schematic diagram of the segmentation network in the second structure of the convolutional neural measurement model of the present invention.
  • Figures 7a and 7b are schematic diagrams of post-processing of ultrasound images of the abdomen and head of the present invention, respectively.
  • Fig. 8 is a schematic diagram of post-processing ultrasound images of the thigh in the present invention.
  • FIG. 9 is a schematic diagram of processing for determining the distribution area of a part to be measured in an ultrasound image according to an embodiment of the present invention.
  • FIG. 10 is a schematic diagram of processing for determining the distribution area of a part to be measured in an ultrasound image in another embodiment of the present invention.
  • FIG. 11 is a schematic diagram of the process of drawing and fitting the distribution area contour of the part to be measured in the present invention.
  • Figure 12 is a schematic diagram of the structure of the fetal growth parameter measurement system of the present invention.
  • the main growth parameters in fetal ultrasound images, such as the biparietal diameter, head circumference, abdominal circumference, and femur length, are mainly measured manually, which depends on the doctor's experience and operating technique, so accuracy and efficiency are low.
  • the first aspect of the present invention provides a fetal growth parameter measurement system, as shown in FIG. 12, including: an acquisition unit, a first processing unit, a second processing unit, and a measurement unit.
  • the acquiring unit is used to acquire an ultrasound image of at least one part of the fetus.
  • the first processing unit determines the distribution area of the part to be measured in the ultrasound image according to the convolutional neural measurement model, and the convolutional neural measurement model is determined by training a convolutional neural network on annotated ultrasound images of different fetal parts.
  • the second processing unit highlights the distribution area contour of the part to be measured and performs fitting.
  • the measuring unit measures the fitted contours corresponding to the parts to be measured to obtain the growth parameters of different parts to be measured.
  • the fetal growth parameter measurement system of the present invention can analyze and process the ultrasound image of the fetus and automatically measure the parameters of each part to be measured; the parts to be measured include the abdomen, head, and thighs of the fetus. This improves the accuracy and precision of measurement as well as the work efficiency of doctors.
  • the acquisition unit of the present invention is an ultrasonic imaging device, that is, an ultrasonic image is acquired through the ultrasonic imaging device.
  • the ultrasound imaging device at least includes a transducer, an ultrasound host, an input unit, a control unit, and a memory.
  • the ultrasonic imaging device may include a display screen, and the display screen of the ultrasonic imaging device may be a display of the identification system.
  • the transducer is used to transmit and receive ultrasonic waves.
  • the transducer is excited by transmit pulses to emit ultrasonic waves toward the target tissue (for example, organs, tissues, or blood vessels in a human or animal body) and, after a certain delay, receives the ultrasonic echoes reflected back from the target area carrying information about the target tissue; the echoes are converted back into electrical signals to obtain an ultrasound image or video.
  • the transducer can be connected to the ultrasound host in a wired or wireless manner.
  • the input unit is used to input control instructions of the operator.
  • the input unit may be at least one of a keyboard, trackball, mouse, touch panel, handle, dial, joystick, and foot switch.
  • the input unit can also input non-contact signals, such as voice, gesture, line of sight, or brain wave signals.
  • the control unit can control at least scan information such as focus information, driving frequency information, driving voltage information, and imaging mode. According to the different imaging modes required by the user, the control unit processes the signals differently to obtain ultrasound image data of different modes, and then forms ultrasound images of different modes through logarithmic compression, dynamic range adjustment, digital scan conversion, and the like, such as B images, C images, D images, Doppler blood flow images, elasticity images containing tissue elastic properties, or other types of two-dimensional or three-dimensional ultrasound images.
  • the acquisition unit is used to acquire the ultrasound image of at least one part of the fetus, which may be an ultrasound image stored in a storage medium such as a cloud server, a USB drive, or a hard disk.
  • the trained convolutional neural measurement model of the present invention includes an input layer, a hidden layer, and an output layer; the hidden layer includes several convolutional layers, down-sampling layers, and up-sampling layers. The input ultrasound image first passes through several convolutional layers and down-sampling layers, performing convolution and down-sampling operations respectively, and then passes through several convolutional layers and up-sampling layers, performing convolution and up-sampling operations respectively. The input layer and hidden layer of the neural network, the hidden layers, and the hidden layer and output layer are connected by weight parameters; the convolutional layers in the convolutional neural measurement model are used to automatically extract the features in the ultrasound image.
  • each time the ultrasound image is convolved or sampled, the neural network of the convolutional neural measurement model copies features from a shallower convolutional layer to a deeper convolutional layer, and the copied features are added pixel-wise to the corresponding features of the deeper layer before entering the next convolutional layer.
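A minimal sketch of the copy-and-add skip connection described above. The nearest-neighbour 2x up-sampling is an assumption used to make the two feature maps the same size before the pixel-wise addition; the patent only states that corresponding pixels are added.

```python
import numpy as np

def upsample_2x(x):
    """Nearest-neighbour 2x up-sampling of an (H, W, C) feature map."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def skip_add(shallow, deep):
    """Copy features from the shallower layer and add them pixel-wise to
    the up-sampled features of the deeper layer before the next layer."""
    up = upsample_2x(deep)
    assert up.shape == shallow.shape, "maps must align pixel-for-pixel"
    return shallow + up

out = skip_add(np.zeros((8, 8, 4)), np.ones((4, 4, 4)))
```

Adding (rather than concatenating) keeps the channel count unchanged, which matches the "corresponding pixels are added" wording.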
  • the convolutional neural measurement model includes a detection network and a segmentation network.
  • the detection network includes an input layer, a hidden layer, and an output layer.
  • the input layer and hidden layer of the detection network, between each hidden layer, and between the hidden layer and the output layer are connected by weight parameters;
  • the hidden layer includes convolutional layers, max-pooling layers, and a combination layer: first, several convolutional layers and several max-pooling layers are connected alternately, then several more convolutional layers are connected, and then a combination layer is connected;
  • the combination layer combines the preceding high-level feature layer with one or more hidden layers before it; the output images of the high-level feature layer and the hidden layers combined with it have the same length and width;
  • the combined layers are input together to the last convolutional layer, and the last convolutional layer is used as the output layer.
  • the segmentation network includes an input layer, a hidden layer, and an output layer; the hidden layer includes several convolutional layers, down-sampling layers, and up-sampling layers. The input ultrasound image first passes through several convolutional layers and down-sampling layers, performing convolution and down-sampling operations, and then through several convolutional layers and up-sampling layers, performing convolution and up-sampling operations. More preferably, when the segmentation network performs convolution or sampling processing on the ultrasound image, features are copied from a shallower convolutional layer to a deeper convolutional layer, and the copied features are added pixel-wise to the corresponding features of the deeper layer before entering the next convolutional layer. The input layer and hidden layer of the neural network, the hidden layers, and the hidden layer and output layer are connected by weight parameters; the convolutional layers are used to automatically extract the features in the ultrasound image.
  • the "parts" of different fetal positions obtained in the present invention refer to the abdomen, head and thighs of the fetus. Obtaining the growth parameters of different parts to be measured are mainly double parietal diameter parameters, head circumference parameters, abdominal circumference parameters and femur length parameters.
  • "Ultrasound image” is a single frame of ultrasound image or at least two frames of ultrasound image, that is, a single frame of ultrasound image or ultrasound video.
  • stating herein that the convolutional neural measurement model or a unit "includes" (or "contains" or "has") certain elements means that it may also include elements other than those listed.
  • "module" means, but is not limited to, a software or hardware component that performs a specific task, such as a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), or a processor such as a CPU or GPU.
  • the unit may advantageously be configured to reside in an addressable storage medium and to execute on one or more processors. Thus, as an example, a unit may include components (such as software components, object-oriented software components, class components, and task components), processes, functions, attributes, procedures, subroutines, program code segments, drivers, firmware, microcode, circuits, data, databases, data structures, tables, arrays, and variables.
  • the first processing unit of the present invention includes: a normalization unit, a probability calculation unit, and a determination unit.
  • the normalization unit is used to normalize the ultrasound image and fix it to a size suitable for the input layer of the convolutional neural measurement model.
  • the probability calculation unit is used to calculate the probability that each pixel in the ultrasound image is the part to be measured according to the convolutional neural measurement model.
  • the determining unit determines the pixels in the ultrasound image whose probability exceeds the preset probability as the part to be measured.
  • the probability range is 0-1.
  • the preset probability is 0.5.
  • the present invention normalizes the ultrasound image before processing the fetal ultrasound image and fixes it to a size compatible with the input layer of the convolutional neural measurement model, so that ultrasound images of different sizes can be processed, improving applicability.
  • the invention improves the accuracy and precision of measurement by identifying the probability that each pixel in the ultrasound image is the part to be measured.
  • the first processing unit of the present invention includes: a normalization unit, a third processing unit, a fourth processing unit, and a fifth processing unit.
  • the normalization unit is used to normalize the ultrasound image and fix the ultrasound image to a size suitable for the input layer of the measurement model.
  • the third processing unit is used to detect the location area of the part to be measured through the detection network of the convolutional neural measurement model; the fourth processing unit calculates the probability that each pixel in the location area is the part to be measured according to the segmentation network of the convolutional neural measurement model;
  • the fifth processing unit determines the pixels in the location area whose probability exceeds the preset probability as the part to be measured.
  • the present invention detects the location area of the part to be measured through the detection network of the convolutional neural measurement model, that is, it first detects the approximate location area of the part to be measured, and then calculates, through the fourth processing unit, the probability that each pixel in that area is the part to be measured, increasing the accuracy and precision of the measurement.
  • the second processing unit highlights the contour of the distribution area of the part to be measured and performs fitting.
  • the second processing unit includes: a restoration unit, an outline unit, and a fitting unit.
  • the restoration unit restores the ultrasound image of the distribution area of the part to be measured, determined by the convolutional neural measurement model, to the size of the original ultrasound image.
  • the outline unit uses a curve to outline and enclose the determined distribution area of the part to be measured.
  • the fitting unit uses the least-squares method to fit the outline of the distribution area of the part to be measured.
  • the measurement unit measures the growth parameters of different measurement positions according to the fitted contour.
  • the present invention uses lines to enclose the distribution area of the part to be measured determined by the convolutional neural measurement model, selects the candidate with the largest area among the enclosed regions, and fits the enclosed contour, which improves the accuracy and precision of the present invention.
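The post-processing described above can be sketched as follows: threshold the per-pixel probability map, then keep only the largest connected region among the candidates. The 4-connected flood fill used here is an illustrative choice; the patent does not prescribe a particular connectivity or algorithm.

```python
import numpy as np
from collections import deque

def largest_region(prob_map, threshold=0.5):
    """Threshold the probability map at the preset probability, then keep
    only the largest 4-connected candidate region as the part to be measured."""
    mask = np.asarray(prob_map) > threshold
    h, w = mask.shape
    seen = np.zeros((h, w), dtype=bool)
    best = []
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                comp, q = [], deque([(i, j)])   # BFS flood fill of one region
                seen[i, j] = True
                while q:
                    r, c = q.popleft()
                    comp.append((r, c))
                    for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                        if 0 <= nr < h and 0 <= nc < w and mask[nr, nc] and not seen[nr, nc]:
                            seen[nr, nc] = True
                            q.append((nr, nc))
                if len(comp) > len(best):
                    best = comp
    out = np.zeros((h, w), dtype=bool)
    for r, c in best:
        out[r, c] = True
    return out

# A 3x3 blob plus an isolated noisy pixel: only the blob survives.
p = np.zeros((6, 6))
p[0, 0] = 0.9
p[2:5, 2:5] = 0.9
region = largest_region(p)
```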
  • the second aspect of the present invention provides a method for measuring fetal growth parameters, as shown in Figure 1, including the steps:
  • S100 Acquire an ultrasound image of at least one part of the fetus. The ultrasound image may be read from a storage medium: a magnetic storage medium, such as a magnetic disk (for example, a floppy disk) or tape; an optical storage medium, such as an optical disc, optical tape, or machine-readable bar code; a solid-state electronic storage device, such as random-access memory (RAM) or read-only memory (ROM); or an ultrasound image stored on a cloud server.
  • S200 Determine the distribution area of the part to be measured in the ultrasound image according to the convolutional neural measurement model, the model being determined by training a convolutional neural network on annotated ultrasound images of different fetal parts;
  • S300 Highlight the contour of the distribution area of the part to be measured and perform fitting. The highlighting may outline the contour of the part to be measured with lines or curves, or display the contour in a highlighted form.
  • S400 Obtain growth parameters of different parts to be measured by measuring the fitted contours corresponding to the parts to be measured.
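As an illustrative sketch of step S400: once a head or abdomen contour is fitted as an ellipse, its circumference can be computed from the semi-axes. Ramanujan's perimeter approximation is an assumption here (the patent does not name a formula), as is reading the biparietal diameter off the minor axis.

```python
import math

def ellipse_circumference(a, b):
    """Ramanujan's approximation to the perimeter of an ellipse with
    semi-axes a and b (in the same units as the image calibration)."""
    h = ((a - b) ** 2) / ((a + b) ** 2)
    return math.pi * (a + b) * (1 + 3 * h / (10 + math.sqrt(4 - 3 * h)))

hc = ellipse_circumference(5.0, 4.0)   # e.g. head circumference from a fitted ellipse
bpd = 2 * 4.0                          # biparietal diameter as the minor axis (assumption)
```

For a circle (a == b) the formula reduces exactly to 2*pi*r, which makes a quick sanity check.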
  • the fetal growth parameter measurement method of the present invention can automatically identify the fetal part to be measured in the ultrasound image through the trained convolutional neural measurement model and automatically measure the main growth parameters of the fetus; the measurement results are more accurate and obtained more efficiently than with manual measurement.
  • step S200 determines whether each pixel in the ultrasound image belongs to the part to be measured. As shown in FIG. 9, determining the distribution area of the part to be measured in the ultrasound image according to the convolutional neural measurement model specifically includes:
  • S210 Normalize the ultrasound image, and fix the ultrasound image to a size suitable for the input layer of the convolutional neural measurement model
  • S220 Calculate the probability that each pixel in the ultrasound image is a part to be measured according to the convolutional neural measurement model.
  • the ultrasound image input to the input layer of the convolutional neural measurement model is passed through several convolutional layers and down-sampling layers, performing convolution and down-sampling operations to obtain the first feature; the first feature is then passed through several convolutional layers and up-sampling layers, performing convolution and up-sampling operations respectively to obtain the second feature; the output layer performs convolution on the second feature to calculate the probability that each pixel in the ultrasound image is the part to be measured.
  • the input layer size of the convolutional neural measurement model is set to 256*256*1.
  • after 3*3 convolution and down-sampling operations, 128*128*16 features are obtained; further 3*3 convolutions and down-sampling yield 64*64*64 features; subsequent up-sampling and 3*3 convolutions then yield 256*256*16 features;
  • the final output layer performs a 1*1 convolution operation to obtain a 256*256*4 probability result, with probabilities ranging from 0 to 1.
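The dimensions quoted above can be sanity-checked with a small sketch: a 2*2 max-pool halves the spatial size, a 1*1 convolution maps channel counts (16 to 4 here), and a sigmoid keeps each output in the 0-to-1 probability range. The layer sizes are taken from the text; the specific operations and random weights below are illustrative only.

```python
import numpy as np

def max_pool_2x2(x):
    """2x2 max-pooling over an (H, W, C) feature map: halves H and W."""
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).max(axis=(1, 3))

def conv_1x1(x, weights):
    """1x1 convolution as a per-pixel linear map over channels."""
    return x @ weights  # (H, W, Cin) @ (Cin, Cout) -> (H, W, Cout)

rng = np.random.default_rng(0)
x = rng.random((256, 256, 1))            # input layer: 256*256*1
pooled = max_pool_2x2(x)                 # spatial size halves: 128*128*1
decoded = rng.random((256, 256, 16))     # stand-in for the 256*256*16 features
logits = conv_1x1(decoded, rng.random((16, 4)))  # output: 256*256*4
probs = 1.0 / (1.0 + np.exp(-logits))    # probabilities in [0, 1]
```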
  • when the neural network of the convolutional neural measurement model of the present invention performs convolution or sampling processing on an ultrasound image, it copies features from a shallower convolutional layer to a deeper convolutional layer, and the copied features are added pixel-wise to the corresponding features of the deeper layer before entering the next convolutional layer.
  • the gray rectangles in Figure 3 represent the features extracted after each convolution or sampling operation of the image, and the white rectangles represent the features copied from the shallower convolutional layer of the neural network to the deeper convolutional layer.
  • S230 Determine a pixel point in the ultrasound image whose pixel point probability exceeds a preset probability as a part to be measured.
  • a pixel with a pixel probability exceeding 0.5 is defined as a pixel of the part to be measured.
  • step S200 determines the distribution area of the part to be measured in the ultrasound image according to the convolutional neural measurement model, specifically:
  • S240 Normalize the ultrasound image, and fix the ultrasound image to a size suitable for the input layer of the measurement model;
  • S250 Detect the location area of the part to be measured through the detection network of the convolutional neural measurement model; the detected location area is marked with a detection frame.
  • the detection network includes an input layer, a hidden layer, and an output layer.
  • the input layer and hidden layer of the detection network, between each hidden layer, and between the hidden layer and the output layer are connected by weight parameters.
  • the hidden layer includes convolution.
  • Layer, maximum pooling layer and combination layer firstly connect several convolutional layers and several maximum pooling layers alternately, then connect several convolutional layers, and then connect a combination layer to connect the high-level
  • the feature layer is combined with one or several hidden layers before the high-level feature layer; the output image of the high-level feature layer and the combined hidden layer has the same length and width; the high-level feature layer is the same as the previous one or several hidden layers
  • the combined layers are input to the last convolutional layer, and the last convolutional layer is used as the output layer.
  • S260 Calculate, according to the segmentation network of the convolutional neural measurement model, the probability that each pixel in the location area belongs to the part to be measured;
  • when the segmentation network processes the ultrasound image of the location area of the part to be measured marked by the detection network, the image first passes through several convolutional layers and down-sampling layers, which respectively perform convolution and down-sampling operations, to obtain the first feature; the first feature then passes through several convolutional layers and up-sampling layers, which respectively perform convolution and up-sampling operations, to obtain the second feature; the output layer convolves the second feature to calculate the probability that each pixel in the ultrasound image belongs to the part to be measured. More preferably, when the segmentation network performs convolution or sampling on the ultrasound image, features are copied from a shallower convolutional layer to a deeper convolutional layer; the copied features are added pixel-wise to the features of the deeper convolutional layer before entering the next convolutional layer.
  • the gray rectangles in Figure 6 represent the features extracted after each convolution or sampling operation on the image, and the white rectangles represent features copied from a shallower convolutional layer of the neural network to a deeper one.
  • the copied features and the corresponding pixels of the deeper features are added directly before entering the next layer.
  • S270 Determine the pixels in the location area whose probability exceeds a preset probability as the part to be measured.
  • the fetal growth parameter measurement method of the present invention detects the location area of the part to be measured through the detection network of the convolutional neural measurement model, i.e., it first detects the approximate location area of the part to be measured and then calculates the probability that each pixel in the location area belongs to the part to be measured, which improves the precision and accuracy of the measurement.
  • the present invention uses lines to outline the contour of the distribution area of the part to be measured and performs fitting, as shown in Figure 11; specifically:
  • S310 first restore the ultrasound image in which the distribution area of the part to be measured has been determined by the convolutional measurement module to the size of the original ultrasound image;
  • S320 Outline and enclose the determined distribution area of the part to be measured with a curve;
  • S330 Fit the outlined contour of the distribution area of the part to be measured by the least squares method.
  • S400 Obtain growth parameters of different parts to be measured by measuring the fitted contours corresponding to the parts to be measured.
  • the head circumference and abdominal circumference of the fetus are generally elliptical.
  • after finding N points P_i(x_i, y_i) on the edge of the enclosed part, ellipse fitting is performed.
  • the principle of the ellipse fitting method is the least squares method.
  • the center coordinates of the ellipse (x_0, y_0), the major and minor axis lengths 2a and 2b, and the angle θ can be calculated.
  • the fetal abdominal circumference, head circumference, and biparietal diameter can be conveniently calculated from the center point of the ellipse and the lengths and angle of the major and minor axes; the calculation formula for the elliptical circumference of the abdominal circumference and head circumference is L = 2πb + 4(a − b).
  • the post-processing of the thigh ultrasound image is as follows: after the ultrasound image in which the distribution area of the part to be measured has been determined by the convolutional neural measurement module is restored to the size of the original ultrasound image, the enclosed part with the largest area is selected,
  • and the fetal femur length can be conveniently measured after a closing operation, mean filtering, and skeleton extraction are performed in turn; the femur length is, colloquially, the length of the thigh bone; the calculation formula for mean filtering is I_after(x, y) = (1/(m·n)) · Σ_{(s,t)∈S_xy} I_before(s, t), where
  • I_after represents the ultrasound image after mean filtering,
  • I_after(x, y) represents the pixel in the x-th row and y-th column of the image,
  • m and n represent the length and width of the filter window, respectively,
  • S_xy represents the part of the ultrasound image within the filter window,
  • and I_before(x, y) represents the pixel in the x-th row and y-th column of the ultrasound image to be processed.
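The mean filtering step above can be illustrated as follows (a minimal numpy sketch; the edge handling, which clips the window at the image border, is an assumption not specified in the text):

```python
import numpy as np

def mean_filter(image, m=3, n=3):
    # Replace each pixel with the average over the m*n window centered
    # on it; at the borders the window is clipped to the image (an
    # assumption, since the text does not specify edge handling).
    img = np.asarray(image, dtype=np.float64)
    rows, cols = img.shape
    out = np.empty_like(img)
    for x in range(rows):
        for y in range(cols):
            x0, x1 = max(0, x - m // 2), min(rows, x + m // 2 + 1)
            y0, y1 = max(0, y - n // 2), min(cols, y + n // 2 + 1)
            out[x, y] = img[x0:x1, y0:y1].mean()
    return out

flat = np.full((5, 5), 7.0)
assert np.allclose(mean_filter(flat), flat)  # a constant image is unchanged
```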
  • the present invention is based on the convolutional neural measurement model to determine the distribution area of the part to be measured in the ultrasound image. Training the convolutional neural measurement model is an important component to ensure measurement accuracy and precision.
  • the present invention trains a convolutional neural network on several labeled ultrasound images of different fetal parts. It includes the following steps:
  • Step S1 collecting ultrasound images of each measurement part of the fetus and annotating the measurement parts;
  • the preferred annotation method in this embodiment is to mark each measurement part of the fetus, such as the abdomen, head, and thigh, with a closed curve formed by a continuous polyline.
  • as shown in Figures 2a and 2b, which are the annotation results for ultrasound images of the fetal abdomen and thigh, respectively; after the ultrasound images are annotated, each ultrasound image with an annotation curve is converted into its corresponding template; the template has the same size as the ultrasound image, but the values are different.
  • the conversion method of the template is to fill the annotation curve and the area inside it with a non-zero value and to fill the area outside the annotation curve with 0; for example, a value of 1 for the annotation curve and the area inside it indicates the fetal abdomen, a value of 2 indicates the fetal head, a value of 3 indicates the fetal thigh, and the value of the area outside the curve is 0.
  • an example is shown in Figure 2c.
  • Step S2 based on the collected ultrasound images and their annotations, establish a variety of neural networks, train to obtain the convolutional neural measurement model, and select the optimal convolutional neural measurement model parameters; specifically including:
  • Step S21 Divide the collected ultrasound images into a training set, a validation set and a test set; preprocess the ultrasound image: fix the ultrasound image to a certain size, and normalize the ultrasound image of the same size;
  • the ratio can be 3/5, 1/5, 1/5, or other ratios;
  • Ultrasound image preprocessing: fix the ultrasound image to a certain size and normalize the ultrasound images of the same size; for example, the fixed-size ultrasound image is 256*256*3, where 256*256 represents the length and width of the preprocessed ultrasound image, i.e., 256 pixels long and 256 pixels wide, and 3 represents the three RGB channels; optionally, when the ultrasound image is fixed to a certain size, the aspect ratio of the original image is maintained or changed;
  • the specific processing of the normalization operation is to subtract the mean of the image pixels from each pixel value in the ultrasound image and divide by the standard deviation of the image pixels, specifically through the following formula:
  • Image_norm = (Image − μ)/σ
  • Image is a 256*256*3 ultrasound image,
  • μ is the average of the pixel values in Image,
  • σ is the standard deviation of the pixel values in Image,
  • and Image_norm is the normalized ultrasound image.
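The normalization formula Image_norm = (Image − μ)/σ can be sketched as follows (a minimal numpy illustration; the function name is an assumption):

```python
import numpy as np

def normalize_image(image):
    # Image_norm = (Image - mu) / sigma, where mu and sigma are the
    # mean and standard deviation of the pixel values of the image.
    image = np.asarray(image, dtype=np.float64)
    return (image - image.mean()) / image.std()

img = np.array([[0.0, 2.0],
                [4.0, 6.0]])
norm = normalize_image(img)  # zero mean, unit standard deviation
```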
  • Step S22 establishing a neural network structure
  • the neural network structure includes two types: a first type of convolutional neural network and a second type of convolutional neural network;
  • the first type of convolutional neural network is used to input the training-set ultrasound images of each measurement part of the fetus into the same neural network to obtain prediction results for the measurement parts; during training, the prediction result output by the neural network gets closer and closer to the template corresponding to the input image; the loss function of the neural network is the difference between the prediction result and the template, and the loss function keeps decreasing during training;
  • the input of the first type of convolutional neural network is an ultrasound image with the same size as the input layer of the neural network;
  • the neural network includes an input layer, hidden layers, and an output layer;
  • the hidden layers include several convolutional layers, down-sampling layers, and up-sampling layers;
  • the input ultrasound image first passes through several convolutional layers and down-sampling layers, which respectively perform convolution and down-sampling operations, and then passes through several convolutional layers and up-sampling layers, which respectively perform convolution and up-sampling operations;
  • the input layer and the hidden layers, the hidden layers themselves, and the hidden layers and the output layer of the neural network are connected by weight parameters;
  • the convolutional layers of the first type of convolutional neural network are used to automatically extract features from the ultrasound image;
  • the convolutions of the first type of convolutional neural network use dilated convolutions with a suitable dilation rate to enlarge the receptive field of the network and improve prediction accuracy; for example, a dilated convolution with a dilation rate of 2 inserts zeros into the rows and columns of an ordinary 3*3 convolution kernel to obtain a 5*5 dilated convolution kernel.
  • this dilated convolution enlarges the receptive field of the network while keeping the number of network parameters unchanged.
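The zero-insertion construction of a dilated kernel can be illustrated as follows (a minimal numpy sketch; the function name and the use of an all-ones kernel are assumptions for illustration):

```python
import numpy as np

def dilate_kernel(kernel, rate=2):
    # Insert (rate - 1) zeros between adjacent rows and columns of the
    # kernel; a 3*3 kernel with rate 2 becomes an effective 5*5 kernel
    # with the same number of nonzero weights (parameters unchanged).
    k = np.asarray(kernel)
    size = (k.shape[0] - 1) * rate + 1
    out = np.zeros((size, size), dtype=k.dtype)
    out[::rate, ::rate] = k
    return out

k3 = np.ones((3, 3), dtype=int)
k5 = dilate_kernel(k3, rate=2)  # 5*5 kernel, still only 9 nonzero entries
```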
  • the second type of convolutional neural network includes a detection network and a segmentation network;
  • Figure 4 shows that the detection network includes an input layer, hidden layers, and an output layer.
  • the input layer and the hidden layers, the hidden layers themselves, and the hidden layers and the output layer of the detection network are connected by weight parameters;
  • there are four columns in Figure 4, which respectively indicate the name of each hidden layer, the number of filters in each layer, and the input and output image sizes of each layer;
  • the hidden layers include convolutional layers, max-pooling layers, and a combination layer: first several convolutional layers and several max-pooling layers are connected alternately, then several convolutional layers are connected, and then a combination layer is connected, combining the high-level feature layer connected before the combination layer with one or several hidden layers preceding the high-level feature layer; the output images of the high-level feature layer and the combined hidden layers must have the same length and width; after the high-level feature layer is combined with the preceding one or several hidden layers, the result is input to the last convolutional layer (the last convolutional layer is the output layer);
  • for the detection network in Figure 4, first 5 convolutional layers and 5 max-pooling layers are connected alternately; then several convolutional layers are connected, in this example two convolutional layers; then a combination layer (Route layer) is connected, which combines the high-level feature layer connected before the combination layer (the 11th layer in Figure 4) with one or several hidden layers preceding the high-level feature layer, so that high-level features are combined with low-level fine-grained features; the output images of the high-level feature layer and the combined hidden layers must have the same length and width; in Figure 4, the 11th layer is combined with the 9th layer (a max-pooling layer), or the 11th layer may be combined with the 9th and 10th layers; after the high-level feature layer is combined with the preceding one or several hidden layers, the result is input to the last convolutional layer; this improves the detection performance of the neural network for smaller target objects;
  • the annotation curve of the ultrasound image at the thigh is converted into an annotation box.
  • Figure 5 shows the annotation box 501 at the fetal thigh; the detection network takes the ultrasound image as input to detect possible measurement parts in the ultrasound image;
  • the detection result is framed by the detection frame.
  • the loss function of the detection network is based on the error between the annotation frame and the detection frame.
  • the detection network of the convolutional neural measurement model is trained through the loss function to reduce the detection error.
  • the loss function keeps decreasing during the training of the detection network.
  • the detection frame gets closer and closer to the annotation frame;
  • λ1 to λ3 represent the proportion of each error in the total loss function, and each error takes the form of a squared error.
  • the first term of the loss function represents the error of the probability prediction that a detection frame contains the target object.
  • S² indicates that the ultrasound image is divided into S×S grid cells, such as the 13*13 cells mentioned above;
  • B indicates how many detection frames are set for each grid cell;
  • the second term represents the prediction error of the position, length, and width of the detection frame containing the target object.
  • x_i, y_i, h_i, w_i denote the center position, length,
  • and width information of the annotation frame of the i-th grid cell, and the corresponding predicted quantities denote the same information for the predicted bounding box;
  • the input of the segmentation network is the image in the detection frame output by the detection network;
  • the segmentation network includes an input layer, hidden layers, and an output layer;
  • the hidden layers include several convolutional layers, down-sampling layers, and up-sampling layers;
  • the input ultrasound image first passes through several convolutional layers and down-sampling layers, which respectively perform convolution and down-sampling operations, and then passes through several convolutional layers and up-sampling layers, which respectively perform convolution and up-sampling operations;
  • the input layer and the hidden layers, the hidden layers themselves, and the hidden layers and the output layer of the neural network are connected by weight parameters;
  • the convolutional layers are used to automatically extract features from the ultrasound image; optionally, the input image range of the segmentation network can be appropriately enlarged on the basis of the detection frame, for example, expanded by 20 pixels up, down, left, and right;
  • the segmentation network of the second type of convolutional neural network takes ultrasound images of the abdomen and head as input and includes a segmentation network corresponding to each input ultrasound image.
  • the segmentation network includes an input layer, hidden layers, and an output layer;
  • the hidden layers include several convolutional layers, down-sampling layers, and up-sampling layers;
  • the input ultrasound image first passes through several convolutional layers and down-sampling layers, which respectively perform convolution and down-sampling operations, and then passes through several convolutional layers and up-sampling layers, which respectively perform convolution and up-sampling operations;
  • the input layer and the hidden layers, the hidden layers themselves, and the hidden layers and the output layer of the neural network are connected by weight parameters;
  • the convolutional layers are used to automatically extract features from the ultrasound image, as shown in Figure 6; the difference from Figure 3 is that, because the segmentation network predicts 2 categories, i.e., the predicted part and the non-predicted part, the output of the segmentation network is 256*256*2; the white rectangles indicate the copied features.
  • Step S23 initialize the neural network: set the weight parameter of the neural network to a random number between 0 and 1;
  • Step S24 calculating the loss function of the neural network
  • the loss function of the detection network described above includes the loss of the detection frame position and the predicted probability of the detection frame; the loss function of the segmentation network described above is the pixel-level cross-entropy loss; the calculation formula is:
  • Loss = −Σᵢ Σⱼ [t_ij · ln(p_ij) + (1 − t_ij) · ln(1 − p_ij)]
  • the cross-entropy loss is the sum of the prediction errors over each pixel in the ultrasound image, where x and y are the length and width of the input image of the segmentation network, and p_ij is the probability that the pixel in the i-th row and j-th column of the ultrasound image is the predicted part;
  • t_ij is the value corresponding to the pixel in the i-th row and j-th column of the ultrasound image in the ultrasound image template: if the pixel is the predicted part, the value is 1; otherwise, the value is 0; the closer the predicted probability output by the segmentation network is to the ultrasound image template, the smaller the cross-entropy loss function;
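The pixel-level cross-entropy described above can be sketched as follows (a minimal numpy illustration; the clipping constant eps is an assumption added for numerical stability):

```python
import numpy as np

def pixel_cross_entropy(p, t, eps=1e-12):
    # Cross-entropy summed over every pixel:
    # loss = -sum(t*ln(p) + (1 - t)*ln(1 - p));
    # eps clips probabilities away from 0 and 1 for numerical stability.
    p = np.clip(np.asarray(p, dtype=np.float64), eps, 1.0 - eps)
    t = np.asarray(t, dtype=np.float64)
    return float(-np.sum(t * np.log(p) + (1.0 - t) * np.log(1.0 - p)))

template = np.array([[1, 0], [0, 1]])
close = np.array([[0.9, 0.1], [0.1, 0.9]])
far = np.array([[0.5, 0.5], [0.5, 0.5]])
# a prediction closer to the template gives a smaller loss
assert pixel_cross_entropy(close, template) < pixel_cross_entropy(far, template)
```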
  • Step S25 training a neural network to obtain a convolutional neural measurement model
  • Update the weight parameters of the neural network according to the loss function of the neural network, and the mechanism of updating the weight parameters of the neural network uses the adaptive moment estimation optimization method
  • Step S26 selecting the weight parameters of the optimal convolutional neural measurement model
  • the possible value range of IOU is [0,1]
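The IOU (intersection over union) between an annotation box and a detection box can be sketched as follows (a minimal illustration assuming axis-aligned (x1, y1, x2, y2) boxes; the function name is an assumption, not the patent's exact formulation):

```python
def iou(box_a, box_b):
    # Intersection over union of two axis-aligned boxes given as
    # (x1, y1, x2, y2); the result always lies in [0, 1].
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

assert iou((0, 0, 2, 2), (0, 0, 2, 2)) == 1.0  # identical boxes
assert iou((0, 0, 1, 1), (2, 2, 3, 3)) == 0.0  # disjoint boxes
```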
  • the fetal growth parameter measurement method of the present invention can automatically identify the part of the fetus to be measured in the ultrasound image through the trained convolutional neural measurement model and automatically measure the main growth parameters of the fetus.
  • the measurement results are more accurate than manual measurement, and the efficiency is higher.
  • the third aspect of the present invention provides an ultrasound device including a memory and a processor.
  • the memory is used to store computer programs.
  • the processor is used to execute a computer program to implement the above-mentioned method for measuring the growth parameters of the fetal ultrasound image.
  • the ultrasound device of the present invention can automatically measure the growth parameters of different fetal parts through the trained convolutional neural measurement model; the measurement accuracy is high, and the work efficiency of doctors is improved.
  • the fourth aspect of the present invention provides a computer-readable storage medium in which a computer program is stored; when the computer program is executed by a processor, the steps of the fetal growth parameter measurement method are implemented.


Abstract

A fetal growth parameter measurement method. The measurement method comprises: acquiring an ultrasound image of at least one part of a fetus (S100); determining the distribution area of the part to be measured in the ultrasound image according to a convolutional neural measurement model, the convolutional neural measurement model being determined by training a convolutional neural network on several labeled ultrasound images of different fetal parts (S200); highlighting and fitting the contour of the distribution area of the part to be measured (S300); and obtaining growth parameters of different parts to be measured by measuring the fitted contours corresponding to the parts to be measured (S400). Also provided are a fetal growth parameter measurement system, an ultrasound device, and a storage medium. The fetal growth parameter measurement method, measurement system, ultrasound device, and storage medium can automatically measure the growth parameters of different fetal parts using a trained convolutional neural measurement model, improving measurement accuracy and the work efficiency of doctors.

Description

Fetal growth parameter measurement method, system, and ultrasound device. Technical Field
The present invention relates to the technical field of ultrasound image processing, and in particular to a fetal growth parameter measurement method, system, and ultrasound device.
Background Art
With the continuous development of prenatal ultrasound diagnosis technology, many fetal structural abnormalities are discovered before birth. As a non-invasive, non-teratogenic, convenient, fast, and safe examination method, ultrasound plays an important role in the prenatal diagnosis and screening of fetal malformations. B-mode ultrasound, an important means of monitoring fetal intrauterine growth, is now widely used in clinical practice; from two-dimensional ultrasound to the four-dimensional ultrasound used today, both the operating skills of physicians and the capability and resolution of the instruments have improved considerably. Ultrasound measurement of the main growth parameters of the fetus during pregnancy, including biparietal diameter, head circumference, abdominal circumference, and femur length, can assist doctors in diagnosing fetal structural malformations and other diseases during pregnancy, and is a key examination item during pregnancy.
At present, the main growth parameters in fetal ultrasound images, such as biparietal diameter, head circumference, abdominal circumference, and femur length, are mainly measured manually, relying on the doctor's experience and technique; both accuracy and efficiency are low.
Summary of the Invention
The present invention aims to solve at least one of the technical problems in the prior art by providing a fetal growth parameter measurement method, system, and ultrasound device, so as to realize automatic measurement of fetal growth parameters and improve measurement accuracy and efficiency.
In particular, the present invention provides a fetal growth parameter measurement method, comprising:
acquiring an ultrasound image of at least one part of a fetus;
determining the distribution area of the part to be measured in the ultrasound image according to a convolutional neural measurement model, the convolutional neural measurement model being determined by training a convolutional neural network on several labeled ultrasound images of different fetal parts;
highlighting and fitting the contour of the distribution area of the part to be measured;
obtaining growth parameters of different parts to be measured by measuring the fitted contours corresponding to the parts to be measured.
Further, the ultrasound image is a single-frame ultrasound image or at least two frames of ultrasound images.
Further, determining the distribution area of the part to be measured in the ultrasound image according to the convolutional neural measurement model comprises:
normalizing the ultrasound image and fixing it to a size matching the input layer of the convolutional neural measurement model;
calculating, according to the convolutional neural measurement model, the probability that each pixel in the ultrasound image belongs to the part to be measured;
determining the pixels in the ultrasound image whose probability exceeds a preset probability as the part to be measured.
Further, determining the distribution area of the part to be measured in the ultrasound image according to the convolutional neural measurement model comprises:
normalizing the ultrasound image and fixing it to a size matching the input layer of the measurement model;
detecting the location area of the part to be measured through the detection network of the convolutional neural measurement model;
calculating, according to the segmentation network of the convolutional neural measurement model, the probability that each pixel in the location area belongs to the part to be measured;
determining the pixels in the location area whose probability exceeds a preset probability as the part to be measured.
Further, highlighting and fitting the contour of the distribution area of the part to be measured comprises:
restoring the ultrasound image in which the distribution area of the part to be measured has been determined by the convolutional neural measurement module to the size of the original ultrasound image;
outlining and enclosing the contour of the determined distribution area of the part to be measured with a curve;
fitting the outlined contour of the distribution area of the part to be measured by the least squares method.
Further, calculating, according to the convolutional neural measurement model, the probability that each pixel in the ultrasound image belongs to the part to be measured comprises:
passing the ultrasound image fed into the input layer of the convolutional neural measurement model through several convolutional layers and down-sampling layers, which respectively perform convolution and down-sampling operations, to obtain a first feature;
passing the first feature through several convolutional layers and up-sampling layers, which respectively perform convolution and up-sampling operations, to obtain a second feature;
the output layer convolving the second feature to calculate the probability that each pixel in the ultrasound image belongs to the part to be measured.
Further, when the neural network of the convolutional neural measurement model performs convolution or sampling on the ultrasound image, features are copied from a shallower convolutional layer to a deeper convolutional layer; the copied features are added pixel-wise to the features of the deeper convolutional layer before entering the next convolutional layer.
Further, the location area of the part to be measured detected by the detection network is marked by a detection frame.
Further, the detection network comprises an input layer, hidden layers, and an output layer; the input layer and the hidden layers, the hidden layers themselves, and the hidden layers and the output layer of the detection network are connected by weight parameters;
the hidden layers comprise convolutional layers, max-pooling layers, and a combination layer: first several convolutional layers and several max-pooling layers are connected alternately, then several convolutional layers are connected, and then a combination layer is connected, combining the high-level feature layer connected before the combination layer with one or several hidden layers preceding the high-level feature layer;
the output images of the high-level feature layer and the combined hidden layers have the same length and width;
after the high-level feature layer is combined with the preceding one or several hidden layers, the result is input to the last convolutional layer, which serves as the output layer.
Further, calculating, according to the segmentation network of the convolutional neural measurement model, the probability that each pixel in the location area belongs to the part to be measured specifically comprises:
passing the ultrasound image of the location area of the part to be measured marked by the detection network through several convolutional layers and down-sampling layers, which respectively perform convolution and down-sampling operations, to obtain a first feature;
passing the first feature through several convolutional layers and up-sampling layers, which respectively perform convolution and up-sampling operations, to obtain a second feature;
the output layer convolving the second feature to calculate the probability that each pixel in the ultrasound image belongs to the part to be measured.
Further, when the segmentation network performs convolution or sampling on the ultrasound image, features are copied from a shallower convolutional layer to a deeper convolutional layer; the copied features are added pixel-wise to the features of the deeper convolutional layer before entering the next convolutional layer.
Further, the detection network of the convolutional neural measurement model is trained through a loss function to reduce the detection error.
In particular, the present invention further provides a fetal growth parameter measurement system, comprising:
an acquisition unit for acquiring an ultrasound image of at least one part of a fetus;
a first processing unit that determines the distribution area of the part to be measured in the ultrasound image according to a convolutional neural measurement model, the measurement model being determined by training a convolutional neural network on several labeled ultrasound images of different fetal parts;
a second processing unit that highlights and fits the contour of the distribution area of the part to be measured;
a measurement unit that obtains growth parameters of different parts to be measured by measuring the fitted contours corresponding to the parts to be measured.
In particular, the present invention further provides an ultrasound device, comprising:
a memory for storing a computer program;
a processor for executing the computer program to implement the above method for measuring growth parameters in fetal ultrasound images.
In particular, the present invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above fetal growth parameter measurement method.
The advantages of the present invention are: the fetal growth parameter measurement method of the present invention can automatically identify the part of the fetus to be measured in an ultrasound image through the trained convolutional neural measurement model and automatically measure the main growth parameters of the fetus; the measurement results are more accurate and the efficiency is higher than manual measurement.
Further, by normalizing the ultrasound image before processing and fixing it to a size matching the input layer of the convolutional neural measurement model, the present invention enables ultrasound images of different sizes to be processed, improving applicability.
Further, by identifying the probability that each pixel in the ultrasound image belongs to the part to be measured, the present invention improves the precision and accuracy of the measurement.
Further, the present invention first detects the location area of the part to be measured through the detection network of the convolutional neural measurement model and then calculates, according to the segmentation network, the probability that each pixel in the location area belongs to the part to be measured, improving the precision and accuracy of the measurement.
Further, the fetal growth parameter measurement system of the present invention can automatically identify the part of the fetus to be measured in an ultrasound image through the trained convolutional neural measurement model and automatically measure the main growth parameters of the fetus; the measurement results are more accurate and the efficiency is higher than manual measurement.
The ultrasound device of the present invention can automatically measure the growth parameters of different fetal parts through the trained convolutional neural measurement model; the measurement accuracy is high, and the work efficiency of doctors is improved.
Brief Description of the Drawings
Figure 1 is a flowchart of the measurement method of the present invention.
Figure 2a is a schematic diagram of the annotation of an ultrasound image of the fetal abdomen according to the present invention.
Figure 2b is a schematic diagram of the annotation of an ultrasound image of the fetal thigh according to the present invention.
Figure 2c is a schematic diagram of an ultrasound image with an annotation curve converted into a template according to the present invention.
Figure 3 is a schematic diagram of the first structure of the convolutional neural measurement model of the present invention.
Figure 4 is a schematic diagram of the detection network in the second structure of the convolutional neural measurement model of the present invention.
Figure 5 is a schematic diagram of the annotation box at the fetal thigh according to the present invention.
Figure 6 is a schematic diagram of the segmentation network in the second structure of the convolutional neural measurement model of the present invention.
Figures 7a and 7b are schematic diagrams of the post-processing of abdominal and head ultrasound images, respectively, in the present invention.
Figure 8 is a schematic diagram of the post-processing of a thigh ultrasound image in the present invention.
Figure 9 is a schematic diagram of determining the distribution area of the part to be measured in an ultrasound image according to one embodiment of the present invention.
Figure 10 is a schematic diagram of determining the distribution area of the part to be measured in an ultrasound image according to another embodiment of the present invention.
Figure 11 is a schematic diagram of outlining and fitting the contour of the distribution area of the part to be measured in the present invention.
Figure 12 is a schematic structural diagram of the fetal growth parameter measurement system of the present invention.
Detailed Description
The present invention is further described in detail below through specific embodiments in conjunction with the drawings. Similar elements in different embodiments use associated similar reference numerals. In the following embodiments, many details are described so that the present application can be better understood. However, those skilled in the art will readily recognize that some of the features may be omitted in different cases or may be replaced by other elements, materials, or methods. In some cases, some operations related to the present application are not shown or described in the specification, in order to prevent the core of the present application from being overwhelmed by excessive description; for those skilled in the art, a detailed description of these related operations is not necessary, as they can fully understand them from the description in the specification and the general technical knowledge of the field. In addition, the features, operations, or characteristics described in the specification may be combined in any suitable manner to form various embodiments. Meanwhile, the steps or actions in the method description may also be reordered or adjusted in a manner obvious to those skilled in the art. Therefore, the various orders in the specification and drawings are only for clearly describing a certain embodiment and are not meant to be a required order, unless it is otherwise stated that a certain order must be followed. The ordinal numbers assigned to components herein, such as "first" and "second", are only used to distinguish the described objects and do not have any sequential or technical meaning.
At present, the main growth parameters in fetal ultrasound images, such as biparietal diameter, head circumference, abdominal circumference, and femur length, are mainly measured manually, relying on the doctor's experience and technique; both accuracy and efficiency are low.
The first aspect of the present invention provides a fetal growth parameter measurement system, as shown in Figure 12, comprising: an acquisition unit, a first processing unit, a second processing unit, and a measurement unit. The acquisition unit is used to acquire an ultrasound image of at least one part of a fetus. The first processing unit determines the distribution area of the part to be measured in the ultrasound image according to a convolutional neural measurement model, the convolutional neural measurement model being determined by training a convolutional neural network on several labeled ultrasound images of different fetal parts. The second processing unit highlights and fits the contour of the distribution area of the part to be measured. The measurement unit obtains growth parameters of different parts to be measured by measuring the fitted contours corresponding to the parts to be measured.
The fetal growth parameter measurement system of the present invention can analyze and process ultrasound images of a fetus and automatically measure the parameters of each part to be measured; the parts to be measured include the abdomen, head, and thigh of the fetus. This improves measurement accuracy and precision and improves the work efficiency of doctors.
The acquisition unit of the present invention is an ultrasound imaging device, i.e., ultrasound images are acquired by an ultrasound imaging device. The ultrasound imaging device comprises at least a transducer, an ultrasound host, an input unit, a control unit, and a memory. The ultrasound imaging device may include a display screen, which may serve as the display of the identification system. The transducer is used to transmit and receive ultrasonic waves; excited by transmission pulses, the transducer transmits ultrasonic waves to a target tissue (for example, organs, tissues, blood vessels, etc. in a human or animal body), receives, after a certain delay, ultrasonic echoes carrying information about the target tissue reflected from the target region, and reconverts the ultrasonic echoes into electrical signals to obtain an ultrasound image or video. The transducer may be connected to the ultrasound host in a wired or wireless manner.
The input unit is used to input the operator's control instructions. The input unit may be at least one of a keyboard, a trackball, a mouse, a touch panel, a handle, a dial, a joystick, and a foot switch. The input unit may also input non-contact signals, such as voice, gestures, line of sight, or brain-wave signals.
The control unit can control at least scanning information such as focus information, drive frequency information, drive voltage information, and imaging mode. The control unit processes signals differently according to the imaging mode required by the user, obtains ultrasound image data of different modes, and then forms ultrasound images of different modes after logarithmic compression, dynamic range adjustment, digital scan conversion, and other processing, such as B images, C images, D images, Doppler blood-flow images, elasticity images containing tissue elastic properties, and so on, or other types of two-dimensional or three-dimensional ultrasound images.
It should be understood that in one embodiment the ultrasound image of at least one part of the fetus acquired by the acquisition unit may be an ultrasound image stored in a storage medium, for example, a cloud server, a USB flash drive, or a hard disk.
In one embodiment, the trained convolutional neural measurement model of the present invention comprises an input layer, hidden layers, and an output layer, where the hidden layers comprise several convolutional layers, down-sampling layers, and up-sampling layers. The input ultrasound image first passes through several convolutional layers and down-sampling layers, which respectively perform convolution and down-sampling operations, and then passes through several convolutional layers and up-sampling layers, which respectively perform convolution and up-sampling operations. The input layer and the hidden layers, the hidden layers themselves, and the hidden layers and the output layer of the neural network are connected by weight parameters. The convolutional layers in the convolutional neural measurement model are used to automatically extract features from the ultrasound image. More preferably, each time the neural network of the convolutional neural measurement model performs convolution or sampling on the ultrasound image, features are copied from a shallower convolutional layer to a deeper convolutional layer; the copied features are added pixel-wise to the features of the deeper convolutional layer before entering the next convolutional layer.
In another embodiment, the convolutional neural measurement model comprises a detection network and a segmentation network. The detection network comprises an input layer, hidden layers, and an output layer; the input layer and the hidden layers, the hidden layers themselves, and the hidden layers and the output layer of the detection network are connected by weight parameters. The hidden layers comprise convolutional layers, max-pooling layers, and a combination layer: first several convolutional layers and several max-pooling layers are connected alternately, then several convolutional layers are connected, and then a combination layer is connected, combining the high-level feature layer connected before the combination layer with one or several hidden layers preceding the high-level feature layer; the output images of the high-level feature layer and the combined hidden layers have the same length and width; after the high-level feature layer is combined with the preceding one or several hidden layers, the result is input to the last convolutional layer, which serves as the output layer. The segmentation network comprises an input layer, hidden layers, and an output layer, where the hidden layers comprise several convolutional layers, down-sampling layers, and up-sampling layers; the input ultrasound image first passes through several convolutional layers and down-sampling layers, which respectively perform convolution and down-sampling operations, and then passes through several convolutional layers and up-sampling layers, which respectively perform convolution and up-sampling operations. More preferably, when the segmentation network performs convolution or sampling on the ultrasound image, features are copied from a shallower convolutional layer to a deeper convolutional layer; the copied features are added pixel-wise to the features of the deeper convolutional layer before entering the next convolutional layer. The input layer and the hidden layers, the hidden layers themselves, and the hidden layers and the output layer of the neural network are connected by weight parameters; the convolutional layers are used to automatically extract features from the ultrasound image.
The different fetal "parts" acquired by the present invention refer to the abdomen, head, and thigh of the fetus. The growth parameters of the different parts to be measured are mainly the biparietal diameter, head circumference, abdominal circumference, and femur length. The "ultrasound image" is a single-frame ultrasound image or at least two frames of ultrasound images, i.e., a single-frame ultrasound image or an ultrasound video.
The convolutional neural measurement model or unit herein includes (or contains or has) other elements as well as those elements. The term "module" as used herein means, but is not limited to, a software or hardware component that performs a specific task, such as a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), or a processor such as a CPU or GPU. A unit may advantageously be configured to reside in an addressable storage medium and to execute on one or more processors. Thus, by way of example, a unit may include components (such as software components, object-oriented software components, class components, and task components), processes, functions, attributes, procedures, subroutines, program code segments, drivers, firmware, microcode, circuits, data, databases, data structures, tables, arrays, and variables.
The following improves the accuracy and precision of identification and measurement by the convolutional neural measurement model. In the present invention, "first", "second", and the like are only used to distinguish the described objects and have no sequential or technical meaning. In one embodiment, the first processing unit of the present invention comprises a normalization unit, a probability calculation unit, and a determination unit. The normalization unit is used to normalize the ultrasound image and fix it to a size matching the input layer of the convolutional neural measurement model. The probability calculation unit is used to calculate, according to the convolutional neural measurement model, the probability that each pixel in the ultrasound image belongs to the part to be measured. The determination unit determines the pixels in the ultrasound image whose probability exceeds a preset probability as the part to be measured. The probability ranges from 0 to 1. Preferably, the preset probability is 0.5.
By normalizing the ultrasound image before processing and fixing it to a size matching the input layer of the convolutional neural measurement model, the present invention enables ultrasound images of different sizes to be processed, improving applicability. By identifying the probability that each pixel in the ultrasound image belongs to the part to be measured, the present invention improves the precision and accuracy of the measurement.
In another embodiment, the first processing unit of the present invention comprises a normalization unit, a third processing unit, a fourth processing unit, and a fifth processing unit. The normalization unit is used to normalize the ultrasound image and fix it to a size matching the input layer of the measurement model. The third processing unit is used to detect the location area of the part to be measured through the detection network of the convolutional neural measurement model; the fourth processing unit calculates, according to the segmentation network of the convolutional neural measurement model, the probability that each pixel in the location area belongs to the part to be measured; the fifth processing unit determines the pixels in the location area whose probability exceeds a preset probability as the part to be measured.
The present invention detects the location area of the part to be measured through the detection network of the convolutional neural measurement model, i.e., first detects the approximate location area of the part to be measured, and then uses the fourth processing unit to calculate the probability that each pixel in the location area belongs to the part to be measured, improving the precision and accuracy of the measurement.
In one embodiment, the second processing unit highlights and fits the contour of the distribution area of the part to be measured. The second processing unit comprises a restoration unit, an outlining unit, and a fitting unit. The restoration unit restores the ultrasound image in which the distribution area of the part to be measured has been determined by the convolutional measurement module to the size of the original ultrasound image. The outlining unit outlines and encloses the contour of the determined distribution area of the part to be measured with a curve. The fitting unit fits the outlined contour of the distribution area of the part to be measured by the least squares method. Finally, the measurement unit measures the growth parameters of the different measurement parts according to the fitted contours.
After the convolutional measurement module determines the distribution area of the part to be measured, the present invention encloses it with lines, selects the enclosed candidate part with the largest area, and fits the enclosed contour, improving the accuracy and precision of the present invention.
The second aspect of the present invention provides a fetal growth parameter measurement method, as shown in Figure 1, comprising the steps:
S100, acquiring an ultrasound image of at least one part of a fetus. The ultrasound images of different fetal parts may be acquired in real time by an ultrasound imaging device, or automatic identification and measurement may be performed by loading ultrasound images stored in a storage medium. The storage medium may be a magnetic storage medium, such as a magnetic disk (e.g., a floppy disk) or magnetic tape; an optical storage medium, such as an optical disc, optical tape, or machine-readable barcode; or a solid-state electronic storage device, such as random-access memory (RAM) or read-only memory (ROM); it may also be an ultrasound image stored in a cloud server.
S200, determining the distribution area of the part to be measured in the ultrasound image according to a convolutional neural measurement model, the convolutional neural measurement model being determined by training a convolutional neural network on several labeled ultrasound images of different fetal parts;
S300, highlighting and fitting the contour of the distribution area of the part to be measured; highlighting may use lines or curves to outline the contour of the part to be measured, or the contour of the measurement part to be displayed may be shown highlighted.
S400, obtaining growth parameters of different parts to be measured by measuring the fitted contours corresponding to the parts to be measured.
The fetal growth parameter measurement method of the present invention can automatically identify the part of the fetus to be measured in an ultrasound image through the trained convolutional neural measurement model and automatically measure the main growth parameters of the fetus; the measurement results are more accurate and the efficiency is higher than manual measurement.
In order to improve measurement accuracy, the present invention determines whether each pixel in the ultrasound image belongs to the measurement part. As shown in Figure 9, specifically, when step S200 determines the distribution area of the part to be measured in the ultrasound image according to the convolutional neural measurement model, it specifically comprises:
S210, normalizing the ultrasound image and fixing it to a size matching the input layer of the convolutional neural measurement model;
S220, calculating, according to the convolutional neural measurement model, the probability that each pixel in the ultrasound image belongs to the part to be measured. Specifically: the ultrasound image fed into the input layer of the convolutional neural measurement model passes through several convolutional layers and down-sampling layers, which respectively perform convolution and down-sampling operations, to obtain a first feature; the first feature then passes through several convolutional layers and up-sampling layers, which respectively perform convolution and up-sampling operations, to obtain a second feature; the output layer convolves the second feature to calculate the probability that each pixel in the ultrasound image belongs to the part to be measured.
For example, in Figure 3, the input layer size of the convolutional neural measurement model is set to 256*256*1; two 3*3 convolution operations yield two sets of 256*256*16 features; a down-sampling operation then yields 128*128*16 features; several further 3*3 convolutions and down-sampling operations yield 64*64*64 features; several up-sampling operations and 3*3 convolutions then yield 256*256*16 features; finally, the output layer performs a 1*1 convolution operation to obtain a 256*256*4 probability result, with probabilities ranging from 0 to 1.
Each time the neural network of the convolutional neural measurement model of the present invention performs convolution or sampling on the ultrasound image, features are copied from a shallower convolutional layer to a deeper convolutional layer; the copied features are added pixel-wise to the features of the deeper convolutional layer before entering the next convolutional layer. The gray rectangles in Figure 3 represent the features extracted after each convolution or sampling operation on the image, and the white rectangles represent features copied from a shallower convolutional layer of the neural network to a deeper one; the copied features and the corresponding pixels of the deeper features are added directly before entering the next layer. Combining the coarse features of shallower layers with the fine features of deeper convolutional layers in this way helps segment the part to be measured from the ultrasound image as the basis for measurement, improving measurement accuracy and precision.
S230, determining the pixels in the ultrasound image whose probability exceeds a preset probability as the part to be measured.
In one embodiment, pixels whose probability exceeds 0.5 are defined as pixels of the part to be measured.
In another embodiment, as shown in Figure 10, step S200 determines the distribution area of the part to be measured in the ultrasound image according to the convolutional neural measurement model specifically as follows:
S240, normalizing the ultrasound image and fixing it to a size matching the input layer of the measurement model;
S250, detecting the location area of the part to be measured through the detection network of the convolutional neural measurement model;
The location area of the part to be measured detected by the detection network is marked by a detection frame. The detection network comprises an input layer, hidden layers, and an output layer; the input layer and the hidden layers, the hidden layers themselves, and the hidden layers and the output layer of the detection network are connected by weight parameters. The hidden layers comprise convolutional layers, max-pooling layers, and a combination layer: first several convolutional layers and several max-pooling layers are connected alternately, then several convolutional layers are connected, and then a combination layer is connected, combining the high-level feature layer connected before the combination layer with one or several hidden layers preceding the high-level feature layer; the output images of the high-level feature layer and the combined hidden layers have the same length and width; after the high-level feature layer is combined with the preceding one or several hidden layers, the result is input to the last convolutional layer, which serves as the output layer.
S260, calculating, according to the segmentation network of the convolutional neural measurement model, the probability that each pixel in the location area belongs to the part to be measured;
When the segmentation network processes the ultrasound image of the location area of the part to be measured marked by the detection network, the image first passes through several convolutional layers and down-sampling layers, which respectively perform convolution and down-sampling operations, to obtain a first feature; the first feature then passes through several convolutional layers and up-sampling layers, which respectively perform convolution and up-sampling operations, to obtain a second feature; the output layer convolves the second feature to calculate the probability that each pixel in the ultrasound image belongs to the part to be measured. More preferably, when the segmentation network performs convolution or sampling on the ultrasound image, features are copied from a shallower convolutional layer to a deeper convolutional layer; the copied features are added pixel-wise to the features of the deeper convolutional layer before entering the next convolutional layer. The gray rectangles in Figure 6 represent the features extracted after each convolution or sampling operation on the image, and the white rectangles represent features copied from a shallower convolutional layer of the neural network to a deeper one; the copied features and the corresponding pixels of the deeper features are added directly before entering the next layer.
S270, determining the pixels in the location area whose probability exceeds a preset probability as the part to be measured.
The fetal growth parameter measurement method of the present invention detects the location area of the part to be measured through the detection network of the convolutional neural measurement model, i.e., first detects the approximate location area of the part to be measured, and then calculates the probability that each pixel in the location area belongs to the part to be measured, improving the precision and accuracy of the measurement.
After the location of the part to be measured is determined, the part to be measured needs to be segmented from the ultrasound image for calculation. The present invention uses lines to outline the contour of the distribution area of the part to be measured and performs fitting. As shown in Figure 11, specifically:
S310, first restoring the ultrasound image in which the distribution area of the part to be measured has been determined by the convolutional measurement module to the size of the original ultrasound image;
S320, outlining and enclosing the contour of the determined distribution area of the part to be measured with a curve;
S330, fitting the outlined contour of the distribution area of the part to be measured by the least squares method.
S400, obtaining growth parameters of different parts to be measured by measuring the fitted contours corresponding to the parts to be measured.
For example, as shown in Figures 7a and 7b, the head circumference and abdominal circumference of the fetus are generally elliptical. After finding N points P_i(x_i, y_i) on the edge of the enclosed part, ellipse fitting is performed; the principle of the ellipse fitting method is the least squares method:
Let the general equation of the ellipse be: x² + Axy + By² + Cx + Dy + E = 0
According to the least squares principle, the objective function to be minimized is:
F(A, B, C, D, E) = Σᵢ₌₁ᴺ (xᵢ² + A·xᵢ·yᵢ + B·yᵢ² + C·xᵢ + D·yᵢ + E)²
That is, it is necessary that
∂F/∂A = ∂F/∂B = ∂F/∂C = ∂F/∂D = ∂F/∂E = 0
Solving these equations gives the values of A, B, C, D, and E.
From the geometry of the ellipse, the center coordinates (x₀, y₀), the major and minor axis lengths 2a and 2b, and the angle θ can be calculated:
x₀ = (AD − 2BC) / (4B − A²)
y₀ = (AC − 2D) / (4B − A²)
a = √(2(x₀² + A·x₀·y₀ + B·y₀² − E) / (1 + B − √((1 − B)² + A²)))
b = √(2(x₀² + A·x₀·y₀ + B·y₀² − E) / (1 + B + √((1 − B)² + A²)))
θ = ½·arctan(A / (1 − B))
After the ellipse is fitted, the fetal abdominal circumference, head circumference, and biparietal diameter can be conveniently calculated from the center point of the ellipse and the lengths and angle of the major and minor axes; the calculation formula for the elliptical circumference of the abdominal circumference and head circumference is:
L = 2πb + 4(a − b)
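The least-squares ellipse fit and circumference formula described above can be sketched as follows (a minimal numpy illustration; the closed-form center and axis expressions follow from the stated conic equation, and the function names are assumptions):

```python
import numpy as np

def fit_ellipse(points):
    # Least-squares fit of x^2 + A*x*y + B*y^2 + C*x + D*y + E = 0
    # to N edge points; returns the coefficients (A, B, C, D, E).
    pts = np.asarray(points, dtype=np.float64)
    x, y = pts[:, 0], pts[:, 1]
    M = np.column_stack([x * y, y * y, x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(M, -x * x, rcond=None)
    return coeffs

def ellipse_geometry(A, B, C, D, E):
    # Center (x0, y0), semi-axes a >= b, and rotation angle theta of the
    # fitted conic, using the closed forms implied by the general equation.
    den = 4.0 * B - A * A
    x0 = (A * D - 2.0 * B * C) / den
    y0 = (A * C - 2.0 * D) / den
    f0 = x0 * x0 + A * x0 * y0 + B * y0 * y0 - E  # equals -F(x0, y0)
    root = np.sqrt((1.0 - B) ** 2 + A * A)
    a = np.sqrt(2.0 * f0 / (1.0 + B - root))
    b = np.sqrt(2.0 * f0 / (1.0 + B + root))
    theta = 0.5 * np.arctan2(A, 1.0 - B)
    return x0, y0, a, b, theta

def circumference(a, b):
    # Approximation used in the text: L = 2*pi*b + 4*(a - b).
    return 2.0 * np.pi * b + 4.0 * (a - b)

# sanity check on a circle of radius 2 centered at (2, 3)
t = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
pts = np.column_stack([2.0 + 2.0 * np.cos(t), 3.0 + 2.0 * np.sin(t)])
A, B, C, D, E = fit_ellipse(pts)
x0, y0, a, b, theta = ellipse_geometry(A, B, C, D, E)
```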
Preferably, as shown in Figure 8, the post-processing of the thigh ultrasound image is as follows: after the ultrasound image in which the distribution area of the part to be measured has been determined by the convolutional neural measurement module is restored to the size of the original ultrasound image, the enclosed part with the largest area is selected, and the fetal femur length can be conveniently measured after a closing operation, mean filtering, and skeleton extraction are performed in turn. The femur length is, colloquially, the length of the thigh bone. The calculation formula for mean filtering is:
I_after(x, y) = (1/(m·n)) · Σ_{(s,t)∈S_xy} I_before(s, t)
where I_after denotes the ultrasound image after mean filtering, and I_after(x, y) denotes the pixel in the x-th row and y-th column of the image; m and n denote the length and width of the filter window, respectively; S_xy denotes the part of the ultrasound image within the filter window; and I_before(x, y) denotes the pixel in the x-th row and y-th column of the ultrasound image to be processed.
可以理解是,本发明是基于卷积神经测量模型确定超声图像中待测量部位的分布区域。训练卷积神经测量模型是确保测量准确度和精度的重要组成。本发明采用卷积神经网络对已标记的不同胎儿部位的若干超声图像进行训练确定。包括以下步骤:
步骤S1,收集胎儿各测量部位的超声图像并对测量部位进行标注;
本实施例中优选的标注方法是用连续折线形成的闭合曲线标注出胎儿的各测量部位,如腹部,头部,大腿部。如图2a、图2b所示,分别是胎儿腹部和大腿处的超声图像的标注效果;对超声图像标注后,将每张带有标注曲线的超声图像转化为其对应的模板;模板跟超声图像尺寸相同,但数值不同,模板的转化方法为用非0数值填充超声图像中标注曲线及其以内的区域,用0填充超声图像中标注曲线以外的区域;例如,标注曲线及其以内区域的数值1表示是胎儿腹部,数值为2表示是胎儿头部,数值为3表示是胎儿大腿处,标注曲线以外区域的数值为0,一个例子如图2c所示。
Step S2: based on the collected ultrasound images and their annotations, build several neural networks, train them to obtain the convolutional neural measurement model, and select the parameters of the optimal model. Specifically:
Step S21: divide the collected ultrasound images into a training set, a validation set and a test set, and pre-process the ultrasound images by fixing them to a given size and normalising the resized images.
Of all the collected ultrasound images, 3/5 are randomly selected as the training set and trained on together with their templates; 1/5 are randomly selected as the validation set; the remaining 1/5 serve as the test set. The training-set images are used to train the neural network model; the validation-set images are used to evaluate the network and help select the optimal model parameters; the test-set images are used to test the trained model. The randomly selected ratio may be 3/5, 1/5, 1/5 or any other ratio.
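The 3/5, 1/5, 1/5 split can be sketched as follows; the filenames and the fixed seed are illustrative assumptions:

```python
import random

def split_dataset(paths, seed=0):
    """Randomly split images into train/val/test with the 3/5, 1/5, 1/5
    ratio suggested above (any other ratio works the same way).
    """
    paths = list(paths)
    rng = random.Random(seed)  # fixed seed for reproducibility (assumed)
    rng.shuffle(paths)
    n = len(paths)
    n_train, n_val = (3 * n) // 5, n // 5
    return (paths[:n_train],
            paths[n_train:n_train + n_val],
            paths[n_train + n_val:])

images = [f"us_{i:03d}.png" for i in range(100)]  # hypothetical filenames
train, val, test = split_dataset(images)
```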
Ultrasound image pre-processing: fix the ultrasound images to a given size and normalise the resized images. For example, the fixed size may be 256*256*3, where 256*256 is the length and width of the pre-processed image (256 pixels each) and 3 denotes the three RGB channels. Optionally, the aspect ratio of the original image may be preserved or changed when fixing the size. Normalisation subtracts the mean of the image's pixel values from every pixel and divides by the standard deviation of the pixel values, i.e.:
Image_norm = (Image − μ)/σ
where Image is the 256*256*3 ultrasound image, μ is the mean of the pixel values of Image, σ is their standard deviation, and Image_norm is the normalised ultrasound image.
Because pre-processing changes the size of the ultrasound images, all their templates must also be rescaled by the same ratio; the rescaled templates are used when training the neural networks.
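The resize-and-normalise pre-processing above can be sketched as below. Nearest-neighbour resizing that does not preserve the aspect ratio is one of the two options the text allows; the random test image is an assumption:

```python
import numpy as np

def preprocess(image, size=256):
    """Fix the image to size x size (nearest-neighbour, aspect ratio not
    preserved) and normalise: Image_norm = (Image - mu) / sigma.
    """
    h, w = image.shape[:2]
    rows = np.arange(size) * h // size  # nearest source row per output row
    cols = np.arange(size) * w // size
    resized = image[rows][:, cols].astype(float)
    return (resized - resized.mean()) / resized.std()

# Hypothetical 300x400 RGB ultrasound image with random pixel values.
img = np.random.default_rng(0).integers(0, 256, size=(300, 400, 3))
norm = preprocess(img)
```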
Step S22: build the neural network structures.
Two classes of network structure are used: a first-class convolutional neural network and a second-class convolutional neural network.
The first-class convolutional neural network feeds the training-set ultrasound images of all fetal measured parts into one and the same network to obtain predictions of the measured parts. During training the network's predictions come ever closer to the templates of the input images; the network's loss function is the difference between the prediction and the template, and it keeps decreasing during training.
The input of the first-class convolutional neural network is an ultrasound image whose size matches the network's input layer. The network comprises an input layer, hidden layers and an output layer; the hidden layers comprise several convolutional layers, down-sampling layers and up-sampling layers. The input ultrasound image first passes through several convolutional layers and down-sampling layers, which perform convolution and down-sampling operations respectively, and then through several convolutional layers and up-sampling layers, which perform convolution and up-sampling operations respectively. The input layer and the hidden layers, the hidden layers themselves, and the hidden layers and the output layer are connected by weight parameters. The convolutional layers of the first-class network automatically extract features from the ultrasound image.
As shown in Fig. 3, the input layer is set to 256*256*1, where 1 denotes a greyscale image. Two 3*3 convolutions produce two 256*256*16 feature maps, where 16 is the number of features; a down-sampling operation then yields 128*128*16 features; several further 3*3 convolutions and down-samplings yield 64*64*64 features; several up-samplings and 3*3 convolutions then yield 256*256*16 features. The grey rectangles represent the features extracted after each convolution or sampling operation, and the white rectangles represent features copied from shallower layers of the network to deeper ones; the copied features and the deeper layer's features are added pixel-wise before entering the next layer. Combining the coarse features of shallow layers with the fine features of deep layers in this way helps produce better segmentation results.
Preferably, the convolutions of the first-class convolutional neural network are dilated convolutions with a suitable dilation rate, which enlarges the network's receptive field and improves its prediction accuracy. For example, a dilated convolution with rate 2 inserts zeros between the rows and columns of an ordinary 3*3 kernel to obtain a 5*5 dilated kernel; this enlarges the receptive field while keeping the number of network parameters unchanged.
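Constructing a rate-2 dilated kernel from a 3*3 kernel, as described above: zeros are inserted between the rows and columns, so the kernel covers a 5*5 field while keeping its nine parameters. A small numpy sketch:

```python
import numpy as np

def dilate_kernel(k, rate):
    """Insert (rate - 1) zeros between the rows and columns of a square
    convolution kernel: a 3x3 kernel with rate 2 becomes 5x5, enlarging
    the receptive field without adding parameters.
    """
    n = k.shape[0]
    size = rate * (n - 1) + 1
    out = np.zeros((size, size), dtype=k.dtype)
    out[::rate, ::rate] = k  # original weights land on a strided grid
    return out

k = np.ones((3, 3), dtype=int)
dk = dilate_kernel(k, 2)  # 5x5 kernel, still only nine non-zero weights
```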
The second-class convolutional neural network comprises a detection network and a segmentation network.
Fig. 4 shows that the detection network comprises an input layer, hidden layers and an output layer; the input layer and the hidden layers, the hidden layers themselves, and the hidden layers and the output layer are connected by weight parameters. Fig. 4 has four columns, giving the name of each hidden layer, its number of filters, and its input and output image sizes. The hidden layers comprise convolutional layers, max-pooling layers and a combining layer: first several convolutional layers and several max-pooling layers are alternately connected; then several further convolutional layers are connected; then a combining layer is connected, which combines the preceding high-level feature layer with one or more hidden layers before that high-level feature layer. The output images of the high-level feature layer and of the hidden layers combined with it must have matching length and width. After the high-level feature layer is combined with the one or more preceding hidden layers, the result is fed into the last convolutional layer (the last convolutional layer being the output layer).
In the detection network of Fig. 4, five convolutional layers first alternate with five max-pooling layers; then several convolutional layers are connected (two in this example); then a combining layer (the Route layer) is connected, which combines the preceding high-level feature layer (layer 11 in Fig. 4) with one or more hidden layers before it, so that the high-level features are combined with low-level fine-grained features. The output images of the layers being combined must have matching length and width. In Fig. 4 layer 11 is combined with layer 9 (a max-pooling layer); layer 11 could also be combined with layers 9 and 10. After combination the result is fed into the last convolutional layer. This improves the network's detection of relatively small target objects.
For the detection network, the annotation curves of the thigh ultrasound images are converted into annotation boxes before training; Fig. 5 shows the annotation box 501 of the fetal thigh. The detection network takes an ultrasound image as input and detects the possible measured parts, framing each detection with a detection box. Its loss function is based on the error between the annotation boxes and the detection boxes; the detection network of the convolutional neural measurement model is trained with this loss function to reduce the detection error, so that the loss decreases during training and the detection boxes come ever closer to the annotation boxes.
The loss function is computed as:
L = λ₁ Σᵢ Σⱼ 1ᵢⱼ^obj (Cᵢ − Ĉᵢ)² + λ₂ Σᵢ Σⱼ 1ᵢⱼ^obj [(xᵢ − x̂ᵢ)² + (yᵢ − ŷᵢ)² + (wᵢ − ŵᵢ)² + (hᵢ − ĥᵢ)²] + λ₃ Σᵢ Σⱼ 1ᵢⱼ^noobj (Cᵢ − Ĉᵢ)², where i runs over 1…S² and j over 1…B
where λ₁ to λ₃ are the weights of each error term in the total loss, and every error term takes the squared-error form.
The first term is the error of the predicted confidence of detection boxes that contain a target object. Here S² means the ultrasound image is divided into S×S grid cells, such as the 13*13 cells mentioned above, and B is the number of detection boxes per grid cell;
1ᵢⱼ^obj
indicates whether the j-th detection box of the i-th grid cell contains a target detection object: if the intersection of the detection box with the annotation box is large, the detection box is considered to contain a target object and
1ᵢⱼ^obj = 1;
otherwise the detection box is considered not to contain a target object and
1ᵢⱼ^obj = 0;
Ĉᵢ is the confidence the detection network predicts for the current j-th detection box of that grid cell.
The second term is the prediction error of the position and size of detection boxes containing a target object, where xᵢ, yᵢ, hᵢ, wᵢ are the centre position, height and width of the annotation box of the i-th grid cell, and
x̂ᵢ, ŷᵢ, ĥᵢ, ŵᵢ
are the corresponding values of the predicted bounding box.
The third term is the confidence-prediction error of detection boxes that do not contain a target object. Because bounding boxes without a target object are in the majority, λ₃ is usually set smaller than λ₁; otherwise a network with good recognition performance cannot be trained. Optionally, λ₁ = 5 and λ₂ = λ₃ = 1.
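A numpy sketch of the three-term squared-error loss described above. The tensor shapes, the tiny 2x2 grid and all numeric values are illustrative assumptions; a real implementation would operate on the network's predicted tensors:

```python
import numpy as np

def detection_loss(pred_conf, pred_box, true_conf, true_box, obj_mask,
                   lam1=5.0, lam2=1.0, lam3=1.0):
    """Three-term squared-error detection loss.

    pred_conf/true_conf: (S*S, B) box confidences.
    pred_box/true_box:   (S*S, B, 4) box (x, y, w, h) values.
    obj_mask:            (S*S, B), 1 where the box contains a target
                         object, 0 elsewhere.
    """
    noobj_mask = 1.0 - obj_mask
    term1 = lam1 * np.sum(obj_mask * (true_conf - pred_conf) ** 2)
    term2 = lam2 * np.sum(obj_mask[..., None] * (true_box - pred_box) ** 2)
    term3 = lam3 * np.sum(noobj_mask * (true_conf - pred_conf) ** 2)
    return term1 + term2 + term3

# Tiny 2x2 grid (S=2) with one box per cell; one cell contains the object.
obj = np.array([[1.0], [0.0], [0.0], [0.0]])
pc = np.array([[0.8], [0.1], [0.0], [0.2]])   # predicted confidences
tc = np.array([[1.0], [0.0], [0.0], [0.0]])   # target confidences
pb = np.zeros((4, 1, 4))                      # predicted boxes
tb = np.zeros((4, 1, 4))
tb[0, 0] = [0.5, 0.5, 0.3, 0.2]               # annotated box, object cell
loss = detection_loss(pc, pb, tc, tb, obj)
```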
The input of the segmentation network is the image inside the detection box output by the detection network. The segmentation network comprises an input layer, hidden layers and an output layer; the hidden layers comprise several convolutional layers, down-sampling layers and up-sampling layers. The input ultrasound image first passes through several convolutional layers and down-sampling layers, which perform convolution and down-sampling operations respectively, and then through several convolutional layers and up-sampling layers, which perform convolution and up-sampling operations respectively. The input layer and the hidden layers, the hidden layers themselves, and the hidden layers and the output layer are connected by weight parameters, and the convolutional layers automatically extract features from the ultrasound image. Optionally, the input image region of the segmentation network may be somewhat enlarged relative to the detection box, for example by 20 pixels on each of the four sides.
The second kind of network in the second class takes the abdominal and head ultrasound images as input and comprises one segmentation network per input image type, two in this example. Each segmentation network has the structure just described, as shown in Fig. 6. The difference from Fig. 3 is that the segmentation network predicts 2 classes, the predicted part and everything else, so the segmentation network's output is 256*256*2. The white rectangles represent features copied from shallower layers of the network to deeper ones; the copied features and the deeper layer's features are added pixel-wise before entering the next layer.
Step S23: initialise the neural networks by setting their weight parameters to random numbers between 0 and 1.
Step S24: compute the loss functions of the neural networks.
The loss function of the detection network above covers the positions of the detection boxes and the predicted box confidences; the loss function of the segmentation network is the pixel-wise cross-entropy, computed as:
L = −Σᵢ₌₁ˣ Σⱼ₌₁ʸ [tᵢⱼ ln(pᵢⱼ) + (1 − tᵢⱼ) ln(1 − pᵢⱼ)]
The cross-entropy loss is the sum of the prediction errors over every pixel of the ultrasound image, where x and y are the length and width of the segmentation network's input image; pᵢⱼ is the probability predicted by the segmentation network that the pixel in row i, column j of the ultrasound image is the predicted part; and tᵢⱼ is the value of that pixel in the ultrasound image template, 1 if the pixel is the predicted part and 0 otherwise. The closer the predicted probabilities output by the segmentation network are to the ultrasound image template, the smaller the cross-entropy loss.
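The pixel-wise cross-entropy can be sketched directly from the formula; the clipping constant eps is an implementation assumption to avoid log(0):

```python
import numpy as np

def pixel_cross_entropy(p, t, eps=1e-7):
    """Sum over all pixels of the binary cross-entropy between the
    predicted foreground probability p_ij and the template value t_ij.
    """
    p = np.clip(p, eps, 1 - eps)  # guard against log(0)
    return -np.sum(t * np.log(p) + (1 - t) * np.log(1 - p))

t = np.array([[1.0, 0.0], [0.0, 1.0]])           # 2x2 template
perfect = pixel_cross_entropy(t, t)              # near-zero loss
uniform = pixel_cross_entropy(np.full((2, 2), 0.5), t)  # 4*ln(2)
```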
Step S25: train the neural networks to obtain the convolutional neural measurement model.
Ultrasound images are randomly selected from the training set, randomly transformed, and then fed into the neural network, which is trained with a suitable number of training iterations and batch size. Optional transformations include rotation, scaling, cropping and elastic deformation; preferably, only random rotation is performed in the present invention.
The weight parameters of the neural network are updated according to its loss function, using the adaptive moment estimation (Adam) optimisation method as the update mechanism.
Step S26: select the weight parameters of the optimal convolutional neural measurement model.
For convolutional neural measurement models with different weight parameters, compute the predictions on the validation set and the intersection-over-union between each prediction and the validation-set image template obtained from the annotations; the weight parameters giving the largest intersection-over-union are selected as the optimal parameters. The intersection-over-union of a prediction and an image template, denoted IOU, is their intersection divided by their union, computed as:
IOU = (prediction ∩ image template)/(prediction ∪ image template)
The possible range of IOU is [0, 1].
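The IOU used for model selection is a one-liner on binary masks; the two small masks below are illustrative:

```python
import numpy as np

def iou(pred, template):
    """Intersection over union of two binary masks."""
    pred, template = pred.astype(bool), template.astype(bool)
    inter = np.logical_and(pred, template).sum()
    union = np.logical_or(pred, template).sum()
    return inter / union

a = np.array([[1, 1, 0], [0, 1, 0]])  # hypothetical prediction mask
b = np.array([[1, 0, 0], [0, 1, 1]])  # hypothetical template mask
print(iou(a, b))  # 2 shared pixels, 4 covered in total -> 0.5
```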
Finally, the distribution region of the part to be measured in an ultrasound image is determined with the trained convolutional neural measurement model. The fetal growth parameter measurement method of the present invention automatically recognises the fetal parts to be measured in ultrasound images with the trained convolutional neural measurement model and automatically measures each of the main fetal growth parameters; compared with manual measurement, the results are more accurate and the process more efficient.
A third aspect of the present invention provides an ultrasound device comprising a memory and a processor. The memory is used to store a computer program; the processor is used to execute the computer program to implement the fetal ultrasound image growth parameter measurement method described above. The ultrasound device of the present invention can automatically measure the growth parameters of different fetal parts with the trained convolutional neural measurement model; the measurement accuracy is high, and physicians' working efficiency is improved.
A fourth aspect of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the fetal growth parameter measurement method described above.
Finally, it should be noted that the above embodiments merely illustrate, rather than limit, the technical solution of the present invention. Although the present invention has been described in detail with reference to examples, those of ordinary skill in the art should understand that the technical solution of the present invention may be modified or equivalently substituted without departing from its spirit and scope, and all such modifications shall fall within the scope of the claims of the present invention.

Claims (13)

  1. A fetal growth parameter measurement method, comprising:
    acquiring an ultrasound image of at least one part of a fetus;
    determining a distribution region of a part to be measured in the ultrasound image according to a convolutional neural measurement model, the convolutional neural measurement model being determined by training a convolutional neural network on a number of labelled ultrasound images of fetal parts;
    highlighting and fitting a contour of the distribution region of the part to be measured; and
    obtaining a growth parameter of the part to be measured by measuring the fitted contour corresponding to the part to be measured.
  2. The fetal growth parameter measurement method according to claim 1, wherein determining the distribution region of the part to be measured in the ultrasound image according to the convolutional neural measurement model comprises:
    normalising the ultrasound image and fixing the ultrasound image to a size adapted to an input layer of the convolutional neural measurement model;
    computing, according to the convolutional neural measurement model, a probability that each pixel in the ultrasound image is the part to be measured; and
    determining as the part to be measured the pixels in the ultrasound image whose probability exceeds a preset probability.
  3. The fetal growth parameter measurement method according to claim 1, wherein determining the distribution region of the part to be measured in the ultrasound image according to the convolutional neural measurement model comprises:
    normalising the ultrasound image and fixing the ultrasound image to a size adapted to an input layer of the measurement model;
    detecting a location region of the part to be measured through a detection network of the convolutional neural measurement model;
    computing, according to a segmentation network of the convolutional neural measurement model, a probability that each pixel in the location region is the part to be measured; and
    determining as the part to be measured the pixels in the location region whose probability exceeds a preset probability.
  4. The fetal growth parameter measurement method according to claim 2 or 3, wherein highlighting and fitting the contour of the distribution region of the part to be measured comprises:
    restoring the ultrasound image, in which the distribution region of the part to be measured has been determined by the convolutional neural measurement model, to the size of the initial ultrasound image;
    enclosing the contour of the determined distribution region of the part to be measured with a curve; and
    fitting the outlined contour of the distribution region of the part to be measured by a least-squares method.
  5. The fetal growth parameter measurement method according to claim 2, wherein computing, according to the convolutional neural measurement model, the probability that each pixel in the ultrasound image is the part to be measured comprises:
    passing the ultrasound image fed to the input layer of the convolutional neural measurement model through several convolutional layers and down-sampling layers, which perform convolution and down-sampling operations respectively, to obtain first features;
    passing the first features through several convolutional layers and up-sampling layers, which perform convolution and up-sampling operations respectively, to obtain second features; and
    convolving the second features in an output layer to compute the probability that each pixel in the ultrasound image is the part to be measured.
  6. The fetal growth parameter measurement method according to claim 2 or 5, wherein, when the neural network of the convolutional neural measurement model convolves or samples the ultrasound image, features are copied from shallower convolutional layers to deeper convolutional layers, and the copied features and the features of the deeper convolutional layer are added pixel-wise before entering the next convolutional layer.
  7. The fetal growth parameter measurement method according to claim 3, wherein the location region of the part to be measured detected by the detection network is marked by a detection box.
  8. The fetal growth parameter measurement method according to claim 3 or 7, wherein the detection network comprises an input layer, hidden layers and an output layer, the input layer and the hidden layers, the hidden layers themselves, and the hidden layers and the output layer being connected by weight parameters,
    the hidden layers comprise convolutional layers, max-pooling layers and a combining layer, in which first several convolutional layers and several max-pooling layers are alternately connected, then several further convolutional layers are connected, and then a combining layer is connected which combines the preceding high-level feature layer with one or more hidden layers before the high-level feature layer;
    the output images of the high-level feature layer and of the hidden layers combined with it have the same length and width; and
    after the high-level feature layer is combined with the one or more preceding hidden layers, the result is fed into a last convolutional layer, the last convolutional layer serving as the output layer.
  9. The fetal growth parameter measurement method according to claim 3, wherein
    computing, according to the segmentation network of the convolutional neural measurement model, the probability that each pixel in the location region is the part to be measured specifically comprises:
    passing the ultrasound image, in which the location region of the part to be measured has been marked by the detection network, through several convolutional layers and down-sampling layers, which perform convolution and down-sampling operations respectively, to obtain first features;
    passing the first features through several convolutional layers and up-sampling layers, which perform convolution and up-sampling operations respectively, to obtain second features; and
    convolving the second features in an output layer to compute the probability that each pixel in the ultrasound image is the part to be measured.
  10. The fetal growth parameter measurement method according to claim 3 or 7, wherein the detection network of the convolutional neural measurement model is trained with a loss function to reduce the detection error.
  11. A fetal growth parameter measurement system, comprising:
    an acquisition unit for acquiring an ultrasound image of at least one part of a fetus;
    a first processing unit which determines a distribution region of a part to be measured in the ultrasound image according to a convolutional neural measurement model, the measurement model being determined by training a convolutional neural network on a number of labelled ultrasound images of different fetal parts;
    a second processing unit which highlights and fits a contour of the distribution region of the part to be measured; and
    a measurement unit which obtains growth parameters of the different parts to be measured by measuring the fitted contours corresponding to the parts to be measured.
  12. An ultrasound device, comprising:
    a memory for storing a computer program; and
    a processor for executing the computer program to implement the fetal ultrasound image growth parameter measurement method according to any one of claims 1 to 10.
  13. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the fetal growth parameter measurement method according to any one of claims 1 to 10.
PCT/CN2019/093711 2019-04-20 2019-06-28 Fetal growth parameter measurement method and system, and ultrasound device WO2020215485A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910320623.X 2019-04-20
CN201910320623.XA CN111820948B (zh) 2019-04-20 2019-04-20 Fetal growth parameter measurement method and system, and ultrasound device

Publications (1)

Publication Number Publication Date
WO2020215485A1 true WO2020215485A1 (zh) 2020-10-29

Family

ID=72911788

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/093711 WO2020215485A1 (zh) 2019-04-20 2019-06-28 Fetal growth parameter measurement method and system, and ultrasound device

Country Status (2)

Country Link
CN (1) CN111820948B (zh)
WO (1) WO2020215485A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113033422A (zh) * 2021-03-29 2021-06-25 中科万勋智能科技(苏州)有限公司 基于边缘计算的人脸检测方法、系统、设备和存储介质
CN113487581A (zh) * 2021-07-16 2021-10-08 武汉中旗生物医疗电子有限公司 胎儿头臀径自动测量方法、系统、设备及存储介质

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
CN112220497A (zh) * 2020-11-11 2021-01-15 深圳开立生物医疗科技股份有限公司 一种超声成像显示方法及相关装置

Citations (5)

Publication number Priority date Publication date Assignee Title
WO1999004680A2 (en) * 1997-07-25 1999-02-04 Arch Development Corporation Method and system for the automated analysis of lesions in ultrasound images
EP1315125A2 (en) * 2001-11-20 2003-05-28 General Electric Company Method and system for lung disease detection
CN101081168A (zh) * 2007-07-06 2007-12-05 深圳市迈科龙电子有限公司 胎儿图像性别部位识别屏蔽方法
CN103239249A (zh) * 2013-04-19 2013-08-14 深圳大学 一种胎儿超声图像的测量方法
CN106951928A (zh) * 2017-04-05 2017-07-14 广东工业大学 一种甲状腺乳头状癌的超声图像识别方法及装置

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
CN102151149B (zh) * 2010-12-24 2013-01-23 深圳市理邦精密仪器股份有限公司 一种胎儿超声图像自动测量方法及系统
CN105662474B (zh) * 2016-01-22 2018-08-17 飞依诺科技(苏州)有限公司 胎儿头围超声图像的自动检测方法及检测系统
CN106408566B (zh) * 2016-11-10 2019-09-10 深圳大学 一种胎儿超声图像质量控制方法及系统
CN108186051B (zh) * 2017-12-26 2021-11-30 珠海艾博罗生物技术股份有限公司 一种从超声图像中自动测量胎儿双顶径长度的图像处理方法及处理系统
CN108378869B (zh) * 2017-12-26 2021-04-20 珠海艾博罗生物技术股份有限公司 一种从超声图像中自动测量胎儿头围长度的图像处理方法及处理系统



Also Published As

Publication number Publication date
CN111820948A (zh) 2020-10-27
CN111820948B (zh) 2022-03-18

Similar Documents

Publication Publication Date Title
WO2018120942A1 (zh) 一种多模型融合自动检测医学图像中病变的系统及方法
US11229419B2 (en) Method for processing 3D image data and 3D ultrasonic imaging method and system
JP6467041B2 (ja) 超音波診断装置、及び画像処理方法
WO2020215485A1 (zh) 胎儿生长参数测量方法、系统及超声设备
CN110853111A (zh) 医学影像处理系统、模型训练方法及训练装置
Nie et al. Automatic detection of standard sagittal plane in the first trimester of pregnancy using 3-D ultrasound data
WO2022110525A1 (zh) 一种癌变区域综合检测装置及方法
WO2022048171A1 (zh) 眼底图像的血管管径的测量方法及测量装置
CN111861989A (zh) 一种脑中线检测方法、系统、终端及存储介质
CN110163907B (zh) 胎儿颈部透明层厚度测量方法、设备及存储介质
Huang et al. Bone feature segmentation in ultrasound spine image with robustness to speckle and regular occlusion noise
Zeng et al. TUSPM-NET: A multi-task model for thyroid ultrasound standard plane recognition and detection of key anatomical structures of the thyroid
Aji et al. Automatic measurement of fetal head circumference from 2-dimensional ultrasound
Chaudhari et al. Ultrasound image based fully-automated nuchal translucency segmentation and thickness measurement
CN111062956B (zh) 钼靶x线乳腺影像肿块目标分割方法及装置
Rahmatullah et al. Anatomical object detection in fetal ultrasound: computer-expert agreements
US11941806B2 (en) Methods and systems for automatic assessment of fractional limb volume and fat lean mass from fetal ultrasound scans
JP2022179433A (ja) 画像処理装置及び画像処理方法
CN115813433A (zh) 基于二维超声成像的卵泡测量方法和超声成像系统
CN113229850A (zh) 超声盆底成像方法和超声成像系统
WO2023133929A1 (zh) 一种基于超声的人体组织对称性检测分析方法
CN110916724A (zh) 一种基于闭环最短路径的b超图像胎儿头围检测方法
CN112426170A (zh) 一种胎盘厚度确定方法、装置、设备及存储介质
Liu et al. Automated fetal lateral ventricular width estimation from prenatal ultrasound based on deep learning algorithms
WO2020215484A1 (zh) 胎儿颈部透明层厚度测量方法、设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19926523

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19926523

Country of ref document: EP

Kind code of ref document: A1