WO2021004174A1 - Method and device for automatic measurement of the intrapartum cephalopelvic relationship based on ultrasound images - Google Patents

Method and device for automatic measurement of the intrapartum cephalopelvic relationship based on ultrasound images

Info

Publication number
WO2021004174A1
WO2021004174A1 (PCT/CN2020/091898; CN2020091898W)
Authority
WO
WIPO (PCT)
Prior art keywords
head
pubic symphysis
neural network
training
interest
Prior art date
Application number
PCT/CN2020/091898
Other languages
English (en)
French (fr)
Inventor
陆尧胜
周铭鸿
袁超
齐建国
Original Assignee
暨南大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 暨南大学 filed Critical 暨南大学
Publication of WO2021004174A1 publication Critical patent/WO2021004174A1/zh

Links

Images

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/08Detecting organic movements or changes, e.g. tumours, cysts, swellings
    • A61B8/0866Detecting organic movements or changes, e.g. tumours, cysts, swellings involving foetal diagnosis; pre-natal or peri-natal diagnosis of the baby
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/52Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/5215Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data

Definitions

  • the invention belongs to the field of computer-vision-assisted diagnosis, specifically relates to the segmentation of intrapartum transperineal ultrasound images, and in particular relates to a method and device for automatically measuring the intrapartum cephalopelvic relationship based on ultrasound images.
  • the factors that affect delivery include four elements: the force of labor, the birth canal, the fetus, and mental and psychological factors.
  • the normal cephalic delivery mechanism can be described as follows: as the fetal presenting part passes through the birth canal, it passively performs a series of adaptive rotations to fit the different shapes of the pelvic planes, passing through the birth canal along its smallest diameter. It can be seen that the core of the delivery mechanism is the descent of the fetal head, and the intuitive way to describe cephalic delivery is the trajectory of the center of the fetal head.
  • the current assessment of labor progress mainly relies on medical staff performing a "digital vaginal examination".
  • In a digital vaginal examination, the doctor inserts a gloved index finger into the parturient's vagina to assess the position of the fetal head and the dilation of the cervix.
  • However, this kind of method is not objective enough.
  • Obstetricians or midwives need to estimate the progress of labor through digital vaginal examinations at regular intervals, up to 6 to 8 times.
  • This method of labor assessment relies on the experience of the obstetrician or midwife and is therefore subjective; at the same time, the examined mother has to bear the pain of an invasive procedure, which also increases the risk of intrauterine infection.
  • 2D ultrasound examination can accurately provide effective objective delivery parameters such as the fetal position, the size of the cervical opening, and the angle of labor progression, and can dynamically display the cephalopelvic relationship and the progress of labor, providing clinicians with a powerful basis for decision-making.
  • ultrasound examination can reduce the number of vaginal examinations, lower the rate of intrauterine infection, reduce the mother's fear of a trial of vaginal labor, increase the rate of vaginal delivery, and help reduce the rate of cesarean sections without medical indication. Therefore, ultrasound examination can be used to observe the labor process in place of the digital vaginal examination to evaluate labor progress. 2D ultrasound has already been popularized in primary hospitals; ultrasound monitoring uses few disposable consumables, is economical, and is simple and easy to operate, making it easy to popularize and promote in primary hospitals.
  • transperineal ultrasound imaging can objectively quantify the level of fetal head decline in the birth canal.
  • 2D ultrasound can be used to check the position and descent of the fetus, and can assist in the assessment of the delivery process and the measurement of other fetal biological parameters.
  • Some ultrasound parameters, including the angle of progression (AOD) and the pubic symphysis-fetal head distance (SFD) measured in intrapartum transperineal ultrasound, have been suggested for assessing the fetal head position during childbirth.
  • AOD: angle of progression
  • SFD: pubic symphysis-fetal head distance
  • Regarding AOD, it has been shown that the greater the AOD in the second stage of labor, the greater the probability of a successful assisted or spontaneous delivery.
  • Obstetric clinics need a practical and feasible method for the automatic measurement of intrapartum fetal ultrasound parameters, so as to measure the position of the fetal head objectively, quickly, and accurately and to assist doctors in making scientific decisions about the delivery process, thereby effectively reducing maternal-fetal injury and unnecessary cesarean sections and improving the quality of childbirth. It is therefore necessary to provide a method for automatically measuring the intrapartum fetal cephalopelvic relationship, so as to realize assisted diagnosis of labor.
  • the purpose of the present invention is to overcome the shortcomings of existing intrapartum fetal parameter measurement and to provide a method and device for the automatic measurement of the intrapartum cephalopelvic relationship based on ultrasound images, which is accurate, objective, and fast.
  • a method for automatic measurement of the intrapartum cephalopelvic relationship based on ultrasound images, including the steps: S1, acquire maternal transperineal ultrasound image data; S2, extract the regions of interest ROI1 and ROI2 of the pubic symphysis and the fetal head; S3, calculate the cephalopelvic relationship parameters from the ROI1 and ROI2 regions.
  • compared with existing methods, the present invention can measure intrapartum fetal parameters accurately and objectively in real time, requires no prior knowledge or human intervention, meets real-time application requirements, and has clinical application prospects.
  • step S2 is as follows: S201, construct a neural network model for ROI segmentation; S202, collect transperineal ultrasound images to be used for training the network model, as a training set; S203, train the constructed neural network model with the training set, and use the trained model to segment intrapartum transperineal ultrasound images into the pubic symphysis and fetal head ROI regions.
  • in step S202, in order to use the labeled data more effectively, data augmentation is performed on the collected transperineal ultrasound images that will be used to train the network model, so as to increase the training data, and the augmented data set is used as the training set.
  • Data augmentation methods include: flipping, translation, rotation, noise addition, and elastic deformation.
  • the neural network model adopts a fully convolutional neural network based on Uag-net
  • the Uag-net-based fully convolutional neural network integrates the attention gate (AG) module into the U-Net model.
  • the U-Net model includes a contraction path and an expansion path.
  • the contraction path is mainly used to capture the context information in the picture, while the symmetric expansion path precisely localizes the parts of the picture that need to be segmented.
  • the Uag-net model sends the feature map of the expansion path and the feature map of the contraction path into the attention gating module.
  • the attention gating module is used to automatically focus on the pubic symphysis and fetal head structures, suppressing regions of the input image that are irrelevant to the specific target task while highlighting salient features useful for detecting the specific targets.
  • the contraction path is constructed based on the convolutional network architecture, and the contraction path contains multiple convolution modules with the same structure.
  • Each convolution module is composed of two 3*3 convolutional layers and a 2*2 pooling layer.
  • the convolutional layer uses nonlinear ReLU as the activation function.
  • the convolutional layer is a padding-free convolution
  • the pooling layer adopts maximum pooling, and the number of feature channels is doubled after each pooling.
  • the expansion path includes several up-convolution layers, each of which is a deconvolution layer. Each step first uses up-convolution; each deconvolution halves the number of feature channels and doubles the size of the feature map. The result of the deconvolution is then concatenated with the feature map of the corresponding stage in the contraction path, and two 3*3 convolutions are applied to the concatenated feature map.
  • the feature map upsampled along the expansion path is taken as g
  • the feature map of the contraction path is taken as x_l.
  • Both are sent to the attention gating module, where the following operations are performed: pass g through a 1*1 convolution Wg and x_l through a 1*1 convolution Wx; add the outputs of Wg and Wx point by point and feed the sum into a ReLU function; apply a 1*1 convolution ψ and feed the result into a Sigmoid function; then resample to compute the attention coefficient α; finally multiply α and x_l element-wise to produce the output x̂_l = α · x_l.
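The attention-gate operations above can be sketched in NumPy. This is a minimal illustration, not the patent's implementation: the weight matrices Wg, Wx, ψ are hypothetical, 1*1 convolutions are written as per-pixel channel mixing, and g is assumed to be already resampled to the spatial size of x_l.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conv1x1(x, w):
    """1*1 convolution as per-pixel channel mixing: x is (C_in, H, W), w is (C_out, C_in)."""
    return np.einsum('oc,chw->ohw', w, x)

def attention_gate(g, x_l, Wg, Wx, psi):
    """Additive attention gate sketch: ReLU(Wg*g + Wx*x_l) -> psi -> Sigmoid -> alpha,
    then gate the skip-connection features x_l with alpha (element-wise)."""
    q = np.maximum(conv1x1(g, Wg) + conv1x1(x_l, Wx), 0.0)  # point-wise add, then ReLU
    alpha = sigmoid(conv1x1(q, psi))                         # (1, H, W) attention coefficients
    return alpha * x_l                                       # broadcast gating of x_l
```

Because the Sigmoid keeps every coefficient strictly between 0 and 1, the gated output never exceeds the magnitude of the original skip features.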
  • in step S3, the cephalopelvic relationship parameters are calculated as follows: (3-1) enhance the segmented pubic symphysis region of interest and fit the pubic symphysis structure with an ellipse; the two endpoints of the major axis of the ellipse are the upper and lower edges of the pubic symphysis; (3-2) compute the angle between the tangent from the lower edge of the pubic symphysis to the fetal head ROI region and the line connecting the upper and lower edges of the pubic symphysis, which gives the fetal head descent angle AOD; (3-3) through the endpoint of the lower edge of the pubic symphysis, draw the line perpendicular to the line connecting the upper and lower edges to obtain the infrapubic line; the distance measured from the lower-edge endpoint along the infrapubic line to the outer edge of the fetal head contour is the pubic symphysis-fetal head distance SFD.
  • a device for automatically measuring the cephalo-pelvic relationship based on ultrasound images includes:
  • the training set building module is used to obtain the maternal transperineal ultrasound image data set for training as a training set
  • the neural network model training module is used to train an end-to-end segmentation model from the data in the training set, segmenting the region of interest of the pubic symphysis and the region of interest of the fetal head, and to train repeatedly to obtain a trained neural network model;
  • the parameter calculation module is used to input ultrasound images acquired in real time into the trained neural network model to obtain the regions of interest of the pubic symphysis and the fetal head, to enhance the regions of interest and fit their boundaries, and to calculate the cephalopelvic relationship parameters.
  • the present invention has the following advantages and beneficial effects:
  • the present invention proposes a U-net-based, end-to-end trainable method for recognizing transperineal fetal ultrasound images, unlike traditional methods, which require prior knowledge and are therefore limited.
  • the method of the present invention does not require any prior knowledge and human intervention, can realize fully automatic end-to-end parameter measurement, and the algorithm can meet real-time application requirements, and has a clinical application prospect.
  • the program can accurately and objectively measure fetal parameters during delivery in real time, assist doctors in analyzing and making decisions about the delivery process, reduce the cesarean section rate, and protect the health of mothers and babies.
  • the present invention proposes a method for improving the segmentation of fetal ultrasound images with the attention gate model, using the attention gating module to automatically focus on target structures of different shapes and sizes.
  • a model trained with AG can suppress irrelevant areas in the input image while highlighting salient features useful for specific tasks. Integrating the attention gating module into the U-Net model reduces computational overhead while improving model sensitivity and prediction accuracy.
  • Fig. 1 is a flow chart of the method of the present invention for automatic measurement of the intrapartum cephalopelvic relationship based on ultrasound images.
  • Fig. 2 is a reference explanatory diagram of the Uag-net architecture of this embodiment.
  • Fig. 3 is a schematic diagram of the internal structure of the attention gating module of this embodiment.
  • Figure 4 is a schematic diagram of the measurement of the fetal head descending angle AOD and the pubic symphysis-fetal head distance SFD in this embodiment.
  • an automatic measurement method of intrapartum head-pelvic relationship based on ultrasound images mainly includes the following steps:
  • the transperineal ultrasound images of the parturient in step S1 are collected as follows: the parturient lies on the bed in a semi-recumbent position, with her legs at a 45-degree angle to her hips and her knees bent at 90 degrees.
  • A curved ultrasound probe is placed below the pubic symphysis in the sagittal plane and moved slightly until the anatomical structure of the pubic symphysis and the outline of the fetal head can be clearly observed in the ultrasound image.
  • a total of 150 ultrasound images were collected, and the size of each image was 1024*768.
  • the data are divided in step S1 as follows: the acquired intrapartum transperineal ultrasound images are divided into a training set of 120 images and a test set of 30 images.
  • a data enhancement method is used for the training set to increase the training data.
  • the specific methods of data enhancement include: flip, translation, rotation, noise addition and elastic deformation.
  • A computer is used to effectively simulate common deformations of tissue and human structures in real situations, making the trained model more robust.
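The listed augmentations can be sketched as follows. This is a minimal illustration, not the patent's pipeline: translation is a fixed 5-pixel shift, rotation is restricted to 90 degrees to avoid interpolation, and elastic deformation (which would need an interpolation library such as scipy.ndimage) is omitted.

```python
import numpy as np

def augment(img, rng):
    """Return simple augmented variants of a 2D image: flip, translation,
    rotation, and additive Gaussian noise (a sketch of the methods listed)."""
    out = []
    out.append(np.fliplr(img))                          # horizontal flip
    shifted = np.zeros_like(img)                        # translate down/right by 5 px,
    shifted[5:, 5:] = img[:-5, :-5]                     # zero-filling the vacated border
    out.append(shifted)
    out.append(np.rot90(img))                           # rotation (90 degrees for simplicity)
    out.append(img + rng.normal(0.0, 0.05, img.shape))  # additive Gaussian noise
    return out
```

Each variant keeps the image shape, so segmentation labels can be transformed with the same operations.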
  • in step S2, the augmented data set is input into Uag-net.
  • the architecture of the Uag-net model is shown in Figure 2. It is composed of multiple layers and essentially integrates the attention gate (AG) module into the U-Net model; it includes a contraction path, an expansion path, and an attention gating module.
  • the U-Net model structure includes a contraction path and an expansion path: the contraction path performs downsampling and includes five convolution stages, and the expansion path performs upsampling and includes five upsampling layers.
  • the model downsamples the input image with five rounds of convolution and pooling to extract deep features, which are used for pixel-level classification of the different categories; the feature map saved before each pooling is then successively interpolated with the corresponding upsampled feature map, and the upsampled feature maps are gradually restored to the original resolution while supplementing shallow detail information.
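As a rough illustration of this five-stage encoder-decoder layout, the feature-map shapes can be tabulated. This is a sketch under the assumption of clean halving; the extra shrinkage caused by padding-free 3*3 convolutions is ignored for clarity, and the starting channel count of 64 follows the standard U-Net.

```python
def unet_shapes(h, w, c0=64, depth=5):
    """List (channels, height, width) along the contraction path, where each 2*2
    max pooling halves H and W and the channel count doubles, then mirror the
    list for the expansion path, where each deconvolution reverses one stage."""
    enc, ch = [], c0
    for _ in range(depth):
        enc.append((ch, h, w))
        h, w, ch = h // 2, w // 2, ch * 2  # pooling halves spatial size; channels double
    dec = list(reversed(enc))              # deconvolutions retrace the encoder stages
    return enc, dec
```

For a 256*256 input, this walks 64x256x256 down to 1024x16x16 and back, matching the "channels double after each pooling, halve after each deconvolution" rule in the text.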
  • the convolution kernel slides on the original image to obtain the feature map of the original image.
  • the depth of the feature map corresponds to the number of convolution kernels, representing the features of different categories of the original image.
  • the size of the convolution kernel is positively correlated with the size of the receptive field (a well-known term in the art, not elaborated here in this embodiment) and with the number of parameters.
  • A large convolution kernel can extract more comprehensive features, but it also contains more parameters, which reduces the training speed and computational efficiency of the model.
  • the contraction path includes five convolution modules; each convolution module consists of two 3*3 convolution layers (conv1_1, conv1_2, conv4_1, conv4_2, etc., for example 3×3×64 or 3×3×128) and one 2*2 max-pooling layer (pool1, pool2, pool5, etc., for example 112×112×128), and each convolution layer (conv) uses the nonlinear ReLU as its activation function.
  • the convolution layer in the above convolution module is a padding-free convolution
  • the number of feature channels is doubled after each pooling.
  • the essence of pooling is sampling.
  • max pooling, i.e., maximum pooling, is one of the commonly used pooling algorithms, as shown in the following formula: y = Max(a, b, c, d)
  • where a, b, c, d are the four positions of the local region covered by the 2*2 pooling matrix.
  • Pooling can increase the convolution receptive field of the subsequent convolutional layer while reducing the number of parameters, thereby reducing the complexity of the model.
  • when the pixels of the input image shift slightly within a neighborhood, the output of the pooling layer can remain unchanged, so pooling also has a certain anti-disturbance effect and can improve the robustness of the network.
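The y = Max(a, b, c, d) formula above can be written as a vectorized 2*2 max pooling. This is a minimal sketch assuming even height and width:

```python
import numpy as np

def max_pool_2x2(x):
    """2*2 max pooling with stride 2: each output value is max(a, b, c, d)
    over one non-overlapping 2x2 window. Assumes H and W are even."""
    h, w = x.shape
    # group pixels into (H/2, 2, W/2, 2) blocks and reduce each 2x2 block
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))
```

On a 4*4 input this produces a 2*2 output, halving each spatial dimension as described for the contraction path.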
  • the expansion path includes five upsampling layers, all of which are deconvolution layers; each step first uses up-convolution, and each deconvolution halves the number of feature channels and doubles the feature map size.
  • the result of the deconvolution is spliced with the feature map of the corresponding stage in the contraction path.
  • the feature map in the contraction path is slightly larger; to match the sizes, it is cropped before concatenation.
  • the convolution kernel in the last layer of the Uag-net model has size 1*1 and converts the 64-channel feature map into the desired number of categories. To obtain a better segmentation effect, the number of categories is set to 3, representing the background, the pubic symphysis region, and the fetal head region.
  • the last layer of the network uses the cross entropy function and softmax.
  • the cross entropy function is L = -Σ_i y_i · log(ŷ_i), where y is the true value and ŷ is the value computed by softmax.
  • the softmax function is P(y = j | x; θ) = exp(θ_j^T x) / Σ_k exp(θ_k^T x), where θ_1, θ_2, ..., θ_k are the parameters of the model and x is the input of the softmax layer; the denominator Σ_k exp(θ_k^T x) normalizes the probability distribution so that the sum of all probabilities is 1.
  • in order to enlarge the receptive field and capture semantic context information, convolutional neural network structures often downsample step by step. In this way, the deep features model the positions of, and relationships between, tissues in the global picture. However, for small objects with large variability in the picture, it is still difficult to reduce false-positive predictions; to improve accuracy, current segmentation frameworks often first perform an independent localization step before the subsequent segmentation step.
  • the attention gating model can automatically focus on target structures of different shapes and sizes.
  • a model trained with attention gating can suppress irrelevant regions in the input image while highlighting salient features useful for the specific task, which removes the need for cascaded convolutional neural networks and cascaded tissue localization modules.
  • the attention gating of this embodiment can be easily integrated into the U-Net model with a small computational overhead, while improving model sensitivity and prediction accuracy. Furthermore, grid-based gating is used to make the attention coefficients more specific to local regions, which improves performance compared with gating based on a global feature vector.
  • the specific steps of attention gating are: pass g through a 1*1 convolution Wg and x_l through a 1*1 convolution Wx; add the two outputs point by point and feed the sum into a ReLU function; apply a 1*1 convolution ψ to the result and feed it into a Sigmoid; after resampling (Resampler), the attention coefficient α is computed, and α is multiplied element-wise with x_l to obtain the output x̂_l = α · x_l.
  • the feature map of the expansion path is upsampled and convolved, then sent into the AG together with the feature map of the contraction path.
  • the upsampled feature map on the decoding side is taken as g and the feature map on the encoding side as x_l; both pass through the AG to compute x̂_l.
  • in step S3, after the boundary of the pubic symphysis structure is obtained, the points on the boundary are extracted and an ellipse is fitted by the least-squares method.
  • the two endpoints of the major axis of the fitted ellipse are the upper and lower edges of the pubic symphysis.
  • to obtain more precise parameter values, the outer edge of the fetal head contour is not replaced by the fitted ellipse when calculating the SFD; instead, the actual outer edge of the fetal skull is used.
  • the parameter measurement method is shown in Figure 4.
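The least-squares ellipse fit described above can be sketched as an algebraic conic fit. This is a simplified stand-in for the patent's fitting step: the helper names are ours, the conic is normalized so its constant term is 1 (which assumes the ellipse does not pass through the coordinate origin), and the parameter-recovery step assumes an axis-aligned ellipse; the general rotated case needs an eigen-decomposition and is omitted.

```python
import numpy as np

def fit_ellipse_ls(pts):
    """Least-squares fit of the conic t1*x^2 + t2*x*y + t3*y^2 + t4*x + t5*y = 1
    to an (N, 2) array of boundary points."""
    x, y = pts[:, 0], pts[:, 1]
    D = np.column_stack([x * x, x * y, y * y, x, y])
    theta, *_ = np.linalg.lstsq(D, np.ones(len(pts)), rcond=None)
    return theta

def axis_aligned_params(theta):
    """Recover center and semi-axes, assuming the fitted ellipse is axis-aligned
    (t2 close to 0). Completing the square gives t1*(x-cx)^2 + t3*(y-cy)^2 = r."""
    t1, t2, t3, t4, t5 = theta
    cx, cy = -t4 / (2 * t1), -t5 / (2 * t3)
    r = 1 + t4 * t4 / (4 * t1) + t5 * t5 / (4 * t3)
    return (cx, cy), (np.sqrt(r / t1), np.sqrt(r / t3))
```

The endpoints of the major axis (the upper and lower edges of the pubic symphysis in the text) then follow from the center and the larger semi-axis.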
  • a device for automatically measuring the cephalo-pelvic relationship based on ultrasound images includes:
  • the training set building module is used to obtain the maternal transperineal ultrasound image data set for training as a training set
  • the neural network model training module is used to train an end-to-end segmentation model from the data in the training set, segmenting the region of interest of the pubic symphysis and the region of interest of the fetal head, and to train repeatedly to obtain a trained neural network model;
  • the parameter calculation module is used to input ultrasound images acquired in real time into the trained neural network model to obtain the regions of interest of the pubic symphysis and the fetal head, to enhance the regions of interest and fit their boundaries, and to calculate the cephalopelvic relationship parameters, which include but are not limited to the fetal head descent angle AOD and the pubic symphysis-fetal head distance SFD.
  • the disclosed equipment, device, and method may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the modules is only a logical function division.
  • there may be other division methods, or modules with the same function may be integrated into one unit; for example, multiple units or components can be combined or integrated into another system, or some features can be omitted or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may also be electrical, mechanical or other forms of connection.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biomedical Technology (AREA)
  • Veterinary Medicine (AREA)
  • Medical Informatics (AREA)
  • Physics & Mathematics (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Gynecology & Obstetrics (AREA)
  • Pregnancy & Childbirth (AREA)
  • Ultra Sonic Daignosis Equipment (AREA)

Abstract

A method and device for automatic measurement of the intrapartum cephalopelvic relationship based on ultrasound images. The method includes the steps of: acquiring a data set of maternal transperineal ultrasound images for training, as a training set; inputting the data in the training set into a constructed neural network model and training an end-to-end segmentation model that segments the regions of interest of the pubic symphysis and the fetal head; training repeatedly to obtain a trained neural network model; and, in actual application, inputting ultrasound images acquired in real time into the trained neural network model to obtain the regions of interest of the pubic symphysis and the fetal head, enhancing the regions of interest and fitting their boundaries, and calculating the fetal head descent angle AOD and the pubic symphysis-fetal head distance SFD. Compared with existing methods for measuring intrapartum fetal parameters, the method can measure intrapartum fetal parameters accurately and objectively in real time, requires no prior knowledge or human intervention, meets real-time application requirements, and has clinical application prospects.

Description

Method and device for automatic measurement of the intrapartum cephalopelvic relationship based on ultrasound images
Technical field
The invention belongs to the field of computer-vision-assisted diagnosis, specifically relates to the study of segmentation methods for intrapartum transperineal ultrasound images, and in particular relates to a method and device for automatic measurement of the intrapartum cephalopelvic relationship based on ultrasound images.
Background
The factors that affect delivery include four elements: the force of labor, the birth canal, the fetus, and mental and psychological factors. The normal cephalic delivery mechanism can be described as follows: as the fetal presenting part passes through the birth canal, it passively performs a series of adaptive rotations to fit the different shapes of the pelvic planes, passing through the birth canal along its smallest diameter. It can be seen that the core of the delivery mechanism is the descent of the fetal head, and the intuitive way to describe cephalic delivery is the trajectory of the center of the fetal head. Therefore, closely monitoring the position of the fetal head during labor helps medical staff understand the real-time condition of the fetus and respond in time, thereby effectively reducing delivery risks, avoiding unnecessary cesarean sections, and reducing maternal-fetal injury. Timely and accurate assessment of delivery parameters by medical staff is very important in clinical practice.
In practice, the current assessment of labor progress mainly relies on medical staff performing a "digital vaginal examination", in which the doctor inserts a gloved index finger into the parturient's vagina to assess the position of the fetal head and the dilation of the cervix. However, this kind of method is not objective enough: the obstetrician or midwife needs to estimate the progress of labor through digital vaginal examinations at regular intervals, up to 6 to 8 times. This assessment method depends on the experience of the obstetrician or midwife and is highly subjective; at the same time, the examined mother has to bear the pain of an invasive procedure, which also increases the risk of intrauterine infection.
Studies comparing the intra-observer and inter-observer correlation and the inter-method consistency of 2D and 3D perineal ultrasound have found no significant difference between 2D and 3D ultrasound; however, 2D ultrasound equipment is relatively cheap and simple to operate and can be used for analysis at the bedside in the delivery room, so 2D ultrasound equipment can be preferred for clinical measurement. Compared with the traditional digital vaginal examination, 2D ultrasound examination can accurately provide effective objective delivery parameters such as the fetal position, the size of the cervical opening, and the angle of labor progression, and can dynamically display the cephalopelvic relationship and the progress of labor, providing clinicians with a powerful basis for decision-making. At the same time, ultrasound examination can reduce the number of vaginal examinations, lower the rate of intrauterine infection, reduce the mother's fear of a trial of vaginal labor, increase the rate of vaginal delivery, and help reduce the rate of cesarean sections without medical indication. Ultrasound examination can therefore be used to observe the labor process in place of the digital vaginal examination; 2D ultrasound has already been popularized in primary hospitals, and ultrasound monitoring uses few disposable consumables, is economical, and is simple and easy to operate, making it easy to popularize and promote in primary hospitals.
Recent studies have shown that transperineal ultrasound imaging can objectively quantify the level of fetal head descent in the birth canal. 2D ultrasound can be used to examine the position and descent of the fetus and can assist in assessing the progress of delivery and in measuring other fetal biological parameters. Some ultrasound parameters, including the angle of progression (AOD) and the pubic symphysis-fetal head distance (SFD) in intrapartum transperineal ultrasound, have been suggested for assessing the fetal head position during delivery. Multiple parameters, including the fetal head-perineum distance and measurements of AOD and SFD, have been proposed for assessing the progress of delivery.
Regarding AOD, it has been shown that the greater the AOD in the second stage of labor, the greater the probability of a successful assisted or spontaneous delivery. A large number of fetus-related studies have shown that AOD is strongly associated with the decision between spontaneous vaginal delivery, instrumental delivery, and cesarean section.
Automatic measurement of intrapartum fetal ultrasound parameters would make the assessment of the fetal head position more objective. However, in ultrasound images the features of the pubic symphysis are indistinct, its morphology varies greatly, and its edges, as well as the edges of the fetal head contour, are often severely missing, all of which make automatic measurement of intrapartum fetal ultrasound parameters very difficult. As a result, there is little related research on such automatic measurement, most of it remains at the theoretical stage with poor real-time performance, and it is difficult to apply clinically. Moreover, the existing measurement methods all require assistance from manual labeling or the use of prior knowledge, which is limiting; no automatic measurement method free of human intervention exists.
Obstetric clinics need a practical and feasible method for the automatic measurement of intrapartum fetal ultrasound parameters that measures the fetal head position objectively, quickly, and accurately and assists doctors in making scientific decisions about the delivery process, thereby effectively reducing maternal-fetal injury and unnecessary cesarean sections and improving the quality of childbirth. It is therefore necessary to provide a method for the automatic measurement of the intrapartum fetal cephalopelvic relationship, so as to realize assisted diagnosis of labor.
Summary of the invention
The purpose of the present invention is to overcome the shortcomings of existing intrapartum fetal parameter measurement and to provide a method and device for the automatic measurement of the intrapartum cephalopelvic relationship based on ultrasound images, which is accurate, objective, and fast.
The purpose of the present invention is achieved through the following technical solution: a method for automatic measurement of the intrapartum cephalopelvic relationship based on ultrasound images, including the steps:
S1: acquire maternal transperineal ultrasound image data;
S2: extract the regions of interest ROI1 and ROI2 of the pubic symphysis and the fetal head;
S3: calculate the cephalopelvic relationship parameters from the ROI1 and ROI2 regions.
Compared with existing methods for measuring intrapartum fetal parameters, the present invention can measure intrapartum fetal parameters accurately and objectively in real time, requires no prior knowledge or human intervention, meets real-time application requirements, and has clinical application prospects.
Preferably, step S2 is as follows:
S201: construct a neural network model for ROI segmentation;
S202: collect transperineal ultrasound images to be used for training the network model, as a training set;
S203: train the constructed neural network model with the training set, and use the trained model to segment intrapartum transperineal ultrasound images into the pubic symphysis and fetal head ROI regions.
Preferably, in step S202, in order to use the labeled data more effectively, data augmentation is performed on the collected transperineal ultrasound images that will be used to train the network model, so as to increase the training data, and the augmented data set is used as the training set; the data augmentation methods include flipping, translation, rotation, noise addition, and elastic deformation.
Preferably, in step S201, the neural network model adopts a fully convolutional neural network based on Uag-net, which integrates the attention gate (AG) module into the U-Net model. The U-Net model includes a contraction path and an expansion path: the contraction path is mainly used to capture the context information in the picture, while the symmetric expansion path precisely localizes the parts of the picture that need to be segmented. The Uag-net model sends the feature map of the expansion path together with the feature map of the contraction path into the attention gating module, which automatically focuses on the pubic symphysis and fetal head structures, suppressing regions of the input image that are irrelevant to the specific target task while highlighting salient features useful for detecting the specific targets.
The contraction path is built on a convolutional network architecture and contains multiple convolution modules with the same structure; each convolution module consists of two 3*3 convolution layers and one 2*2 pooling layer, and the convolution layers use the nonlinear ReLU as the activation function.
Further, the convolution layers are padding-free convolutions, the pooling layers adopt max pooling, and the number of feature channels is doubled after each pooling.
The expansion path includes several up-convolution layers, each of which is a deconvolution layer. Each step first uses deconvolution (up-convolution); each deconvolution halves the number of feature channels and doubles the feature map size. The result of the deconvolution is then concatenated with the feature map of the corresponding stage in the contraction path, and two 3*3 convolutions are applied to the concatenated feature map.
The feature map upsampled along the expansion path is taken as g and the feature map of the contraction path as x_l; both are sent into the attention gating module, where the following operations are performed: pass g through a 1*1 convolution Wg and x_l through a 1*1 convolution Wx; add the outputs of Wg and Wx point by point and feed the sum into a ReLU function; apply a 1*1 convolution ψ and feed the result into a Sigmoid function; then resample to compute the attention coefficient α; finally multiply α and x_l element-wise to produce the output x̂_l = α · x_l.
Preferably, in step S3, the cephalopelvic relationship parameters are calculated as follows:
(3-1) Enhance the segmented pubic symphysis region of interest and fit the pubic symphysis structure with an ellipse; the two endpoints of the major axis of the ellipse are the upper and lower edges of the pubic symphysis.
(3-2) Compute the angle between the tangent from the lower edge of the pubic symphysis to the fetal head ROI region and the line connecting the upper and lower edges of the pubic symphysis, which gives the fetal head descent angle AOD.
(3-3) Through the endpoint of the lower edge of the pubic symphysis, draw the line perpendicular to the line connecting the upper and lower edges of the pubic symphysis to obtain the infrapubic line; the distance measured from the lower-edge endpoint along the infrapubic line to the outer edge of the fetal head contour is the pubic symphysis-fetal head distance SFD.
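The AOD geometry in steps (3-1) to (3-2) can be sketched as follows. This is a hedged simplification, not the patent's measurement code: the segmented fetal head boundary is approximated here by a circle (center and radius are hypothetical inputs), and of the two tangents from the lower symphysis edge to that circle, the one on the far side of the head center is used.

```python
import numpy as np

def aod_degrees(sym_upper, sym_lower, head_center, head_radius):
    """Angle at the lower edge of the pubic symphysis between the symphysis axis
    (upper edge -> lower edge) and the tangent from the lower edge to a circular
    model of the fetal head contour."""
    u = np.asarray(sym_upper, float)
    l = np.asarray(sym_lower, float)
    c = np.asarray(head_center, float)
    axis = l - u                       # direction of the symphysis long axis
    to_c = c - l
    d = np.linalg.norm(to_c)
    beta = np.arcsin(head_radius / d)  # half-angle between the two tangents
    cosg = np.dot(axis, to_c) / (np.linalg.norm(axis) * d)
    gamma = np.arccos(np.clip(cosg, -1.0, 1.0))  # axis-to-center angle
    return np.degrees(gamma + beta)    # tangent on the far side of the center
```

For a head center lying on the extended symphysis axis at distance 10 from the lower edge with radius 5, the tangent makes a 30-degree angle with the axis, which the function reproduces.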
A device for automatic measurement of the intrapartum cephalopelvic relationship based on ultrasound images, including:
a training set construction module, used to acquire a data set of maternal transperineal ultrasound images for training, as a training set;
a neural network model training module, used to train an end-to-end segmentation model from the data in the training set, segmenting the regions of interest of the pubic symphysis and the fetal head, and to train repeatedly to obtain a trained neural network model;
a parameter calculation module, used to input ultrasound images acquired in real time into the trained neural network model to obtain the regions of interest of the pubic symphysis and the fetal head, to enhance the regions of interest and fit their boundaries, and to calculate the cephalopelvic relationship parameters.
Compared with the prior art, the present invention has the following advantages and beneficial effects:
1. The present invention proposes a U-net-based, end-to-end trainable method for recognizing transperineal fetal ultrasound images. Unlike traditional methods, which require prior knowledge and are therefore limited, the method of the present invention requires no prior knowledge or human intervention, can realize fully automatic end-to-end parameter measurement, and the algorithm meets real-time application requirements and has clinical application prospects. The scheme can measure intrapartum fetal parameters accurately and objectively in real time, assist doctors in analyzing the delivery process and making decisions, reduce the cesarean section rate, and protect the health of mothers and babies.
2. The present invention proposes a method for improving the segmentation of fetal ultrasound images with the attention gate model, using the attention gating module to automatically focus on target structures of different shapes and sizes. A model trained with AG can suppress irrelevant regions in the input image while highlighting salient features useful for the specific task. Integrating the attention gating module into the U-Net model reduces computational overhead while improving model sensitivity and prediction accuracy.
Brief description of the drawings
Fig. 1 is a flow chart of the method of the present invention for automatic measurement of the intrapartum cephalopelvic relationship based on ultrasound images.
Fig. 2 is a reference diagram of the Uag-net architecture of this embodiment.
Fig. 3 is a schematic diagram of the internal structure of the attention gating module of this embodiment.
Fig. 4 is a schematic diagram of the measurement of the fetal head descent angle AOD and the pubic symphysis-fetal head distance SFD in this embodiment.
Detailed description of the embodiments
To make the purpose, technical solution, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings; the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work fall within the scope of protection of the present invention.
Embodiment 1
As shown in Fig. 1, a method for automatic measurement of the intrapartum cephalopelvic relationship based on ultrasound images mainly includes the following steps:
S1: acquire a data set of maternal transperineal ultrasound images for training, perform data augmentation on the intrapartum transperineal ultrasound image data set, and use the augmented data set as the training set;
S2: input the data in the training set into the constructed neural network model Uag-net, train an end-to-end segmentation model that segments the regions of interest of the pubic symphysis and the fetal head, and train repeatedly to obtain a trained neural network model;
S3: in actual application, input ultrasound images acquired in real time into the trained neural network model to obtain the regions of interest of the pubic symphysis and the fetal head, enhance the regions of interest and fit their boundaries, and calculate the fetal head descent angle AOD and the pubic symphysis-fetal head distance SFD.
The transperineal ultrasound images of the parturient in step S1 are collected as follows: the parturient lies on the bed in a semi-recumbent position, with her legs at a 45-degree angle to her hips and her knees bent at 90 degrees; a curved ultrasound probe is placed below the pubic symphysis in the sagittal plane and moved slightly until the anatomical structure of the pubic symphysis and the outline of the fetal head can be clearly observed in the ultrasound image. A total of 150 ultrasound images were collected, each of size 1024*768.
The data augmentation in step S1 is performed as follows: the collected intrapartum transperineal ultrasound images are divided into a training set of 120 images and a test set of 30 images.
In order to use the labeled data more effectively, data augmentation is applied to the training set to increase the training data. The specific augmentation methods include flipping, translation, rotation, noise addition, and elastic deformation. Elastic deformation is used to increase the data so that the model learns deformation invariance; a computer is used to effectively simulate common deformations of tissue and human structures in real situations, making the trained model more robust.
In step S2, the augmented data set is input into Uag-net. The architecture of the Uag-net model is shown in Fig. 2; it is composed of multiple layers and essentially integrates the attention gate (AG) module into the U-Net model, and it includes a contraction path, an expansion path, and an attention gating module.
In this embodiment, the U-Net model structure includes a contraction path and an expansion path: the contraction path performs downsampling and includes five convolution stages, and the expansion path performs upsampling and includes five upsampling layers. The model downsamples the input picture with five rounds of convolution and pooling to extract deep features, which are used for pixel-level classification of the different categories; the feature map saved before each pooling is then successively interpolated with the corresponding upsampled feature map, and the upsampled feature maps are gradually restored to the original resolution while supplementing shallow detail information.
The convolution kernel slides over the original image to obtain its feature maps. The depth of the feature map corresponds to the number of convolution kernels and represents different categories of features of the original image. The size of the convolution kernel is positively correlated with the size of the receptive field (a well-known term in the art, not elaborated here) and with the number of parameters; a large kernel can extract more comprehensive features but also contains more parameters, which reduces the training speed and computational efficiency of the model.
The contraction path includes five convolution modules; each consists of two 3*3 convolution layers (conv1_1, conv1_2, conv4_1, conv4_2, etc., for example 3×3×64 or 3×3×128) and one 2*2 max-pooling layer (pool1, pool2, pool5, etc., for example 112×112×128), and each convolution layer (conv) uses the nonlinear ReLU as its activation function.
In this embodiment, the convolution layers in the above convolution modules are padding-free convolutions.
In this embodiment, the number of feature channels is doubled after each pooling. The essence of pooling is sampling: the input feature map is compressed by a chosen algorithm, and max pooling, i.e., maximum pooling, is one of the commonly used pooling algorithms, as shown in the following formula:
y = Max(a, b, c, d)
where a, b, c, d are the four positions of the local region covered by the 2*2 pooling matrix.
Pooling can enlarge the receptive field of subsequent convolution layers while reducing the number of parameters, thereby reducing the complexity of the model. When the pixels of the input image shift slightly within a neighborhood, the output of the pooling layer can remain unchanged, so pooling also has a certain anti-disturbance effect and can improve the robustness of the network.
The expansion path includes five upsampling layers, all of which are deconvolution layers; each step first uses deconvolution (up-convolution), and each deconvolution halves the number of feature channels and doubles the feature map size.
After deconvolution, the result is concatenated with the feature map of the corresponding stage in the contraction path. The feature map in the contraction path is slightly larger; to match the sizes, it is cropped before concatenation.
In the Uag-net model, two 3*3 convolutions are then applied to the concatenated feature map.
The convolution kernel in the last layer of the Uag-net model has size 1*1 and converts the 64-channel feature map into the desired number of categories. To obtain a better segmentation effect, the number of categories is set to 3, representing the background, the pubic symphysis region, and the fetal head region. The last layer of the network uses the cross entropy function and softmax. The cross entropy function is:
L = -Σ_i y_i · log(ŷ_i)
where y represents the true value and ŷ represents the value computed by softmax. The softmax function is:
P(y = j | x; θ) = exp(θ_j^T x) / Σ_k exp(θ_k^T x)
where θ_1, θ_2, ..., θ_k are the parameters of the model and x is the input of the softmax layer; the denominator Σ_k exp(θ_k^T x) normalizes the probability distribution so that all probabilities sum to 1.
Feeding the input into the softmax analyzer gives the probabilities of the different classes; here there are three categories, and the probabilities of y = 0, y = 1, and y = 2 are finally obtained, representing the background, the pubic symphysis region, and the fetal head region respectively.
In order to enlarge the receptive field and capture semantic context information, convolutional neural network structures mostly downsample step by step. In this way, the deep features model the positions of, and relationships between, tissues in the global picture. However, for small objects with large variability in the picture, it is still difficult to reduce false-positive predictions; to improve accuracy, current segmentation frameworks often first perform an independent localization step before the subsequent segmentation step.
Integrating attention gating into the convolutional neural network structure can achieve the same goal without training multiple models or adding a large number of extra parameters: attention gating progressively suppresses feature responses in irrelevant background regions, and there is no need to crop an ROI between networks.
The attention gating model can automatically focus on target structures of different shapes and sizes. A model trained with attention gating can suppress irrelevant regions in the input image while highlighting salient features useful for the specific task, which removes the need for cascaded convolutional neural networks and cascaded tissue localization modules.
The attention gating of this embodiment can be easily integrated into the U-Net model with a small computational overhead, while improving model sensitivity and prediction accuracy. Furthermore, grid-based gating is used to make the attention coefficients more specific to local regions, which improves performance compared with gating based on a global feature vector.
Referring to Fig. 3, the specific steps inside the attention gate in this embodiment are: pass g through a 1*1 convolution Wg and x_l through a 1*1 convolution Wx; add the two outputs point by point and feed the sum into a ReLU function; apply a 1*1 convolution ψ to the result and feed it into a Sigmoid; after resampling (Resampler), the attention coefficient α is computed, and α is multiplied element-wise with x_l to obtain the output x̂_l = α · x_l.
In Uag-net, the feature map of the expansion path is upsampled and convolved, then sent into the AG together with the feature map of the contraction path; the upsampled feature map on the decoding side is taken as g and the feature map on the encoding side as x_l, and both pass through the AG to compute x̂_l.
In step S3, after the boundary of the pubic symphysis structure is obtained, the points on the boundary are extracted and an ellipse is fitted by the least-squares method; the two endpoints of the major axis of the fitted ellipse are the upper and lower edges of the pubic symphysis. The tangent passing through the lower edge of the pubic symphysis and tangent to the fetal head contour is computed, and the fetal head descent angle AOD is obtained from the angle between this tangent and the line connecting the upper and lower edges of the pubic symphysis.
Through the endpoint of the lower edge of the pubic symphysis, the line perpendicular to the line connecting the upper and lower edges is drawn to obtain the infrapubic line; the distance measured from the lower-edge endpoint along the infrapubic line to the outer edge of the fetal head contour is the SFD.
To obtain more precise parameter values, the outer edge of the fetal head contour is not replaced by the fitted ellipse when calculating the SFD; instead, the actual outer edge of the fetal skull is used. The parameter measurement method is shown in Fig. 4.
Embodiment 2
Except for the following features, this embodiment has the same structure as Embodiment 1:
A device for automatic measurement of the intrapartum cephalopelvic relationship based on ultrasound images, including:
a training set construction module, used to acquire a data set of maternal transperineal ultrasound images for training, as a training set;
a neural network model training module, used to train an end-to-end segmentation model from the data in the training set, segmenting the regions of interest of the pubic symphysis and the fetal head, and to train repeatedly to obtain a trained neural network model;
a parameter calculation module, used to input ultrasound images acquired in real time into the trained neural network model to obtain the regions of interest of the pubic symphysis and the fetal head, to enhance the regions of interest and fit their boundaries, and to calculate the cephalopelvic relationship parameters, which include but are not limited to the fetal head descent angle AOD and the pubic symphysis-fetal head distance SFD.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the device and modules described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here. Those of ordinary skill in the art will appreciate that the modules and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two; to clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of function. Whether these functions are performed in hardware or in software depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present invention.
In the several embodiments provided by the present invention, it should be understood that the disclosed equipment, devices, and methods may be implemented in other ways. For example, the device embodiments described above are only illustrative; the division of the modules is only a logical functional division, and in actual implementation there may be other division methods, or modules with the same function may be integrated into one unit; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the mutual coupling, direct coupling, or communication connections shown or discussed may be indirect coupling or communication connections through some interfaces, devices, or units, and may also be electrical, mechanical, or other forms of connection.
The above are only specific embodiments of the present invention, but the scope of protection of the present invention is not limited thereto; any person skilled in the art can easily conceive of various equivalent modifications or replacements within the technical scope disclosed by the present invention, and these modifications or replacements should all fall within the scope of protection of the present invention. Therefore, the scope of protection of the present invention shall be subject to the scope of protection of the claims.

Claims (10)

  1. 基于超声图像的产时头盆关系自动测量方法,其特征在于,包括以下步骤:
    S1:获取产妇经会阴超声图像数据;
    S2:提取出耻骨联合和胎儿头部的感兴趣区域ROI1和ROI2;
    S3:根据ROI1和ROI2区域,计算头盆关系参数。
  2. 根据权利要求1所述的基于超声图像的产时头盆关系自动测量方法,其特征在于,所述的步骤S2如下:
    S201、构建用于ROI区域分割的神经网络模型;
    S202、采集将用于训练网络模型的经会阴超声图像,作为训练集;
    S203、使用训练集训练所述构建的神经网络模型,使用训练好的模型对产时经会阴超声图像进行分割,分割出耻骨联合和胎儿头部的感兴趣区域。
  3. 根据权利要求2所述的基于超声图像的产时头盆关系自动测量方法,其特征在于,步骤S202中,对采集到的将用于训练网络模型的经会阴超声图像进行数据增强,用于增加训练数据,将数据增强后的数据集作为训练集,数据增强方法包括:翻转、平移、旋转、加噪和弹性形变。
  4. 根据权利要求2所述的基于超声图像的产时头盆关系自动测量方法,其特征在于,步骤S201中,所述神经网络模型采用基于Uag-net的全卷积神经网络,所述基于Uag-net的全卷积神经网络是指把注意力门控模块集成到U-Net模型中;U-Net模型包括收缩路径、扩张路径,收缩路径用来捕捉图片中的上下文信息,而与之相对称的扩张路径则是为了对图片中所需要分割出来的部分进行精准定位;Uag-net模型将扩张路径的特征图与收缩路径的特征图一起送入注意力门控模块,注意力门控模块用于自动聚焦耻骨联合和胎儿头部结构,抑制输入图像中与特定目标任务无关的区域,同时突出对检测特定目标有用的显著特征。
  5. The method for automatic measurement of the intrapartum head-pelvis relationship based on ultrasound images according to claim 4, characterized in that the contracting path is built on a convolutional network architecture and contains a plurality of structurally identical convolution blocks, each consisting of two 3*3 convolutional layers and one 2*2 pooling layer, the convolutional layers using the non-linear ReLU as activation function.
  6. The method for automatic measurement of the intrapartum head-pelvis relationship based on ultrasound images according to claim 5, characterized in that the convolutional layers are unpadded convolutions, the pooling layers use max pooling, and the number of feature channels is doubled after each pooling.
  7. The method for automatic measurement of the intrapartum head-pelvis relationship based on ultrasound images according to claim 5, characterized in that the expanding path comprises a number of up-convolution layers, each upsampling layer being a transposed-convolution layer; each step first applies a transposed convolution, each application halving the number of feature channels and doubling the feature map size, then concatenates the transposed-convolution result with the feature map of the corresponding stage in the contracting path, and applies two further 3*3 convolutions to the concatenated feature map.
  8. The method for automatic measurement of the intrapartum head-pelvis relationship based on ultrasound images according to claim 5, characterized in that the upsampled feature map of the expanding path serves as g and the feature map of the contracting path serves as x^l, the two being fed together into the attention gate module, in which the following operations are performed: g is passed through a 1*1 convolution Wg and x^l through a 1*1 convolution Wx; the outputs of Wg and Wx are added element-wise and passed through a ReLU function, then through a 1*1 convolution ψ and a Sigmoid function; resampling then yields the attention coefficient α, and α is multiplied element-wise with x^l to give the output x̂^l = α·x^l.
  9. The method for automatic measurement of the intrapartum head-pelvis relationship based on ultrasound images according to claim 1, characterized in that, in step S3, the head-pelvis relationship parameters are computed as follows:
    (3-1) enhancing the segmented pubic symphysis region of interest and fitting an ellipse to the pubic symphysis structure, the two endpoints of the ellipse's major axis being the upper and lower edges of the pubic symphysis;
    (3-2) computing the angle between the tangent from the lower edge of the pubic symphysis to the fetal head ROI region and the line joining the upper and lower edges of the pubic symphysis, to obtain the fetal head angle of descent AOD;
    (3-3) drawing, through the lower-edge endpoint of the pubic symphysis, the tangent perpendicular to the line joining the upper and lower symphysis edges, thereby obtaining the infrapubic line, and measuring the distance from the lower-edge endpoint along the infrapubic line to the outer edge of the fetal head contour, which is the symphysis-fetal head distance SFD.
  10. A device for automatic measurement of the intrapartum head-pelvis relationship based on ultrasound images, characterized by comprising:
    a training-set construction module for acquiring a dataset of maternal transperineal ultrasound images for training, to serve as the training set;
    a neural network model training module for training an end-to-end segmentation model on the data in the training set, segmenting out the region of interest of the pubic symphysis and the region of interest of the fetal head, and repeating the training to obtain a trained neural network model;
    a parameter computation module for feeding ultrasound images acquired in real time into the trained neural network model to obtain the regions of interest of the pubic symphysis and the fetal head, enhancing these regions of interest and fitting their boundaries, and computing the head-pelvis relationship parameters.
PCT/CN2020/091898 2019-07-11 2020-05-22 Method and device for automatic measurement of intrapartum head-pelvis relationship based on ultrasound images WO2021004174A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910624587.6A 2019-07-11 2019-07-11 Method and device for automatic measurement of intrapartum head-pelvis relationship based on ultrasound images
CN201910624587.6 2019-07-11

Publications (1)

Publication Number Publication Date
WO2021004174A1 true WO2021004174A1 (zh) 2021-01-14

Family

ID=68430184

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/091898 WO2021004174A1 (zh) Method and device for automatic measurement of intrapartum head-pelvis relationship based on ultrasound images 2019-07-11 2020-05-22

Country Status (2)

Country Link
CN (1) CN110432929A (zh)
WO (1) WO2021004174A1 (zh)


Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110432929A (zh) 2019-07-11 2019-11-12 暨南大学 Method and device for automatic measurement of intrapartum head-pelvis relationship based on ultrasound images
CN111062948B (zh) * 2019-11-18 2022-09-13 北京航空航天大学合肥创新研究院 Multi-tissue segmentation method based on fetal four-chamber cardiac view images
CN113034535A (zh) * 2019-12-24 2021-06-25 无锡祥生医疗科技股份有限公司 Fetal head segmentation method, device and storage medium
CN111326256B (zh) * 2020-02-28 2023-12-29 李胜利 Intelligent-recognition self-training learning system and examination method for fetal ultrasound standard plane images
JP7501935B2 (ja) 2020-07-28 2024-06-18 国立大学法人 東京大学 Labor progress evaluation device, labor progress evaluation method, and program
CN112155594B (zh) * 2020-10-10 2023-04-07 无锡声亚医疗科技有限公司 Registration method for ultrasound images, ultrasound device and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160045152A1 (en) * 2014-08-12 2016-02-18 General Electric Company System and method for automated monitoring of fetal head descent during labor
CN106999157A (zh) * 2014-09-12 2017-08-01 通用电气公司 Method and system for fetal imaging by computing and displaying ultrasound measurements and graphical models
CN108836394A (zh) * 2018-06-15 2018-11-20 暨南大学 Automatic measurement method for fetal head descent angle
CN109671086A (zh) * 2018-12-19 2019-04-23 深圳大学 Fully automatic fetal head segmentation method based on three-dimensional ultrasound
CN110432929A (zh) 2019-07-11 2019-11-12 暨南大学 Method and device for automatic measurement of intrapartum head-pelvis relationship based on ultrasound images

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
IT1391829B1 (it) * 2008-11-21 2012-01-27 C N R Consiglio Naz Delle Ricerche Ultrasound-based apparatus for measuring parameters indicating the progress of labor
WO2016176863A1 (zh) * 2015-05-07 2016-11-10 深圳迈瑞生物医疗电子股份有限公司 Three-dimensional ultrasound imaging method and device
CN108309354B (zh) * 2017-01-16 2021-04-02 深圳迈瑞生物医疗电子股份有限公司 Ultrasound pelvic floor examination guidance method and ultrasound imaging system
CN107766874B (zh) * 2017-09-07 2021-06-04 深圳度影医疗科技有限公司 Measurement method and system for ultrasound volumetric biological parameters


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LI, RUIRUI ET AL.: "Connection Sensitive Attention U-NET for Accurate Retinal Vessel Segmentation", arXiv:1903.05558, HTTPS://ARXIV.ORG/PDF/1903.05558.PDF, 23 April 2019 (2019-04-23), XP081128960 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113520321A (zh) * 2021-07-14 2021-10-22 张太斌 Parameter-triggered midwifery condition judgment platform
CN114088817A (zh) * 2021-10-28 2022-02-25 扬州大学 Ultrasonic defect detection method for flat ceramic membranes based on deep learning with deep features
CN114088817B (zh) * 2021-10-28 2023-10-24 扬州大学 Ultrasonic defect detection method for flat ceramic membranes based on deep learning with deep features
CN114255250A (zh) * 2021-12-28 2022-03-29 黄河勘测规划设计研究院有限公司 Deep-learning-based slope collapse detection method for river defense engineering
CN114677325A (zh) * 2022-01-25 2022-06-28 安徽农业大学 Method for constructing a rice stem cross-section segmentation model and detection method based on the model
CN114711824A (zh) * 2022-03-21 2022-07-08 广州三瑞医疗器械有限公司 Method for identifying asynclitism of the fetal head in occiput transverse position at the mid-pelvic plane
CN115886766A (zh) * 2022-11-29 2023-04-04 重庆理工大学 Non-invasive fetal and neonatal hypoxia diagnosis system based on attention mechanism and CTG images
CN117058146A (zh) * 2023-10-12 2023-11-14 广州索诺星信息科技有限公司 Artificial-intelligence-based ultrasound data security supervision system and method
CN117058146B (zh) * 2023-10-12 2024-03-29 广州索诺星信息科技有限公司 Artificial-intelligence-based ultrasound data security supervision system and method

Also Published As

Publication number Publication date
CN110432929A (zh) 2019-11-12

Similar Documents

Publication Publication Date Title
WO2021004174A1 (zh) Method and device for automatic measurement of intrapartum head-pelvis relationship based on ultrasound images
CN112529894B (zh) 一种基于深度学习网络的甲状腺结节的诊断方法
Zhang et al. Detection of ovarian tumors in obstetric ultrasound imaging using logistic regression classifier with an advanced machine learning approach
CN109636805B (zh) 一种基于分类先验的宫颈图像病变区域分割装置及方法
TWI667996B (zh) 乳房腫瘤輔助檢測模型及乳房腫瘤輔助檢測系統
CN112086197B (zh) 基于超声医学的乳腺结节检测方法及系统
CN110164550B (zh) 一种基于多视角协同关系的先天性心脏病辅助诊断方法
Bai et al. Detection of cervical lesion region from colposcopic images based on feature reselection
CN110279433A (zh) 一种基于卷积神经网络的胎儿头围自动精确测量方法
CN111820948B (zh) 胎儿生长参数测量方法、系统及超声设备
CN108229584A (zh) 一种基于深度学习的多模态医学影像识别方法及装置
CN112071418B (zh) 基于增强ct影像组学的胃癌腹膜转移的预测系统及方法
Goudarzi et al. Segmentation of arm ultrasound images in breast cancer-related lymphedema: A database and deep learning algorithm
CN108836394B (zh) 一种胎头下降角度自动测量方法
CN111481233B (zh) 胎儿颈项透明层厚度测量方法
Thomas et al. Deep learning measurement model to segment the nuchal translucency region for the early identification of down syndrome
Jan et al. Machine learning approaches in medical image analysis of PCOS
CN114332910A (zh) 一种面向远红外图像的相似特征计算的人体部位分割方法
Xia et al. Automatic plane of minimal hiatal dimensions extraction from 3D female pelvic floor ultrasound
Chen et al. Fetal Head and Pubic Symphysis Segmentation in Intrapartum Ultrasound Image Using a Dual-Path Boundary-Guided Residual Network
WO2020103098A1 (zh) 超声成像方法、设备、存储介质,处理器及计算机设备
CN116188424A (zh) 一种人工智能辅助超声诊断脾肝脏创伤模型建立的方法
Shiney et al. A Review on techniques for computer aided diagnosis of soft markers for detection of down syndrome in ultrasound fetal images
CN113936775A (zh) 基于人在回路智能辅助导航的胎心超声标准切面提取方法
Mi et al. Detecting carotid intima-media from small-sample ultrasound images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20837359

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 20837359

Country of ref document: EP

Kind code of ref document: A1