WO2022027910A1 - Method, apparatus, and electronic device for segmenting white matter regions of cranial ultrasound images - Google Patents
- Publication number
- WO2022027910A1 (PCT/CN2020/140244)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- ultrasound image
- white matter
- region
- target
- Prior art date
Classifications
- G06T7/11—Region-based segmentation
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06N3/045—Combinations of networks
- G06T5/40—Image enhancement or restoration by the use of histogram techniques
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
- G06T2207/10132—Ultrasound image
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/20112—Image segmentation details
- G06T2207/20132—Image cropping
- G06T2207/30016—Brain
- G06V2201/03—Recognition of patterns in medical or anatomical images
- G06V2201/07—Target detection
Definitions
- the invention belongs to the technical field of image processing, and in particular relates to a method, a device, and an electronic device for segmenting white matter regions of cranial ultrasound images.
- traditional analysis of white matter ultrasound images relies on manual segmentation. When the damaged white matter lies near the anterior or posterior horn of the lateral ventricle, lateral or dorsal to the lateral ventricle, or in the subcortical white matter, the accuracy of manual segmentation depends heavily on the operator's level of experience, introducing human error. In addition, traditional methods perform feature extraction and classification only after manual segmentation is complete, which consumes considerable manpower and material resources.
- the difficulty in segmenting white matter regions of ultrasound images is that image contrast is low, especially where the acoustic impedance of adjacent tissues differs little; at the same time, the resolution is low, speckle noise is heavy, artifacts of various kinds are present, and acoustic shadowing further degrades segmentation accuracy.
- the region of interest has no clear boundary with its surroundings and a texture very similar to the rest of the image, and the white matter region occupies only a small proportion of the image; high-brightness structures also exert a strong negative influence on segmentation, so direct segmentation is difficult.
- the purpose of the present invention is to provide a white matter region segmentation method for cranial ultrasound images, aiming to solve the technical problems that the white matter region occupies too small a proportion of the image and that the choroid plexus and other high-brightness regions strongly degrade the segmentation result.
- the present invention provides a method for segmenting white matter regions of a cranial ultrasound image, the method comprising the following steps:
- the present invention also provides a device for segmenting white matter regions of a cranial ultrasound image, comprising:
- a preprocessing unit, used to filter and equalize the original ultrasound image;
- a coarse segmentation unit, which uses the target detection network Faster-Rcnn to perform target detection on the preprocessed ultrasound image and generate a detection frame on the image, then crops out the ultrasound image within the detection frame to generate a target image containing the white matter region and non-interest regions;
- a fine segmentation unit, which uses the semantic segmentation network SegNet to eliminate the non-interest regions from the target image, completing precise segmentation of the white matter region of the target image.
- the present invention also provides an electronic device, comprising:
- a memory storing instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to execute the method for segmenting white matter regions of cranial ultrasound images according to any one of items 1 to 5 above.
- the original input ultrasound image is preprocessed by diffusion and enhancement, and the processed ultrasound image is then coarsely segmented to obtain a target image containing the white matter region together with remaining non-interest regions;
- fine segmentation then removes the non-interest regions within the detection frame, yielding the precisely segmented white matter region. This effectively mitigates the facts that the region of interest in the ultrasound image lacks a clear boundary with its surroundings, that the white matter region occupies only a small proportion of the image, and that high-brightness regions such as the choroid plexus strongly degrade the segmentation result.
- FIG. 1 is a flowchart of the white matter region segmentation method for cranial ultrasound images provided in Embodiment 1 of the present invention;
- FIG. 2 is a structural block diagram of the device for segmenting white matter regions of cranial ultrasound images provided in Embodiment 2 of the present invention;
- FIG. 3 is a schematic flowchart of the white matter region segmentation method provided in Embodiment 1 of the present invention;
- FIG. 4 is a schematic diagram of the coarse segmentation process in the method provided in Embodiment 1 of the present invention;
- FIG. 5 is a schematic diagram of the fine segmentation process in the method provided in Embodiment 1 of the present invention.
- FIG. 1 and FIG. 3 show a method for segmenting a brain white matter region of a craniocerebral ultrasound image provided by Embodiment 1 of the present invention, and the method includes the following steps:
- step S1 includes the following steps:
- step S2 includes the following steps:
- the preprocessed ultrasound image is scaled and fed into convolutional layers to extract features, yielding a feature map;
- the network used in the first, coarse segmentation step is the Faster-RCNN network.
- the Faster-RCNN network (Faster Region-based Convolutional Neural Network) generates candidate boxes via the Anchor mechanism by adding an RPN (region proposal network), and integrates feature extraction, proposal generation, bounding-box regression, and classification into a single network.
- the specific process is to scale the input image and feed it into convolutional layers to extract features, obtaining a feature map; the feature map is then sent to the RPN to generate a series of possible candidate boxes.
- the feature map, together with all candidate boxes output by the RPN, is input to the ROI pooling (region-of-interest pooling) layer, which extracts and collects the proposals and computes fixed-size 7*7 proposal feature maps; these are sent to the fully connected layers and a Softmax layer for target classification and bounding-box regression.
- the ventricular ultrasound image is input, and the Faster-RCNN network detects the target image containing the white matter region and non-interest regions, preparing for the next step of segmenting the damaged area within the white matter region.
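The ROI pooling step described above can be illustrated with a minimal NumPy sketch. This is an illustration only: the feature-map size, values, and box coordinates are invented, and a real Faster-RCNN pools per-channel feature tensors rather than a single 2-D map.

```python
import numpy as np

def roi_pool(feature_map, box, output_size=7):
    """Max-pool the region `box` = (x1, y1, x2, y2) of a 2-D feature
    map into a fixed output_size x output_size grid, as Faster R-CNN's
    ROI pooling layer does for each proposal."""
    x1, y1, x2, y2 = box
    region = feature_map[y1:y2, x1:x2]
    h, w = region.shape
    # Bin boundaries: split the region into output_size bins per axis.
    ys = np.linspace(0, h, output_size + 1).astype(int)
    xs = np.linspace(0, w, output_size + 1).astype(int)
    out = np.empty((output_size, output_size), dtype=region.dtype)
    for i in range(output_size):
        for j in range(output_size):
            # Guarantee each bin covers at least one element.
            cell = region[ys[i]:max(ys[i + 1], ys[i] + 1),
                          xs[j]:max(xs[j + 1], xs[j] + 1)]
            out[i, j] = cell.max()
    return out

feature_map = np.random.rand(32, 48)            # toy feature map
pooled = roi_pool(feature_map, (5, 3, 33, 27))  # toy candidate box
print(pooled.shape)  # (7, 7) regardless of the box size
```

Whatever the proposal's size, the output is always 7*7, which is what lets the subsequent fully connected layers accept proposals of arbitrary shape.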
- the semantic segmentation network SegNet includes an encoder and a decoder.
- step S4 includes the following steps:
- the encoder extracts the features of each pixel of the target image and classifies that pixel, then enlarges the receptive field while shrinking the image size through pooling layers;
- the decoder performs deconvolution on the encoder-processed target image so that the post-classification image features are reproduced;
- the decoder restores the image to its original size through an upsampling operation and outputs the maximum value across the different classes;
- the decoder maps the parsed information back onto the original ultrasound image to form the final precise white matter segmentation map.
- FIG. 5 shows the SegNet network, a network model for semantic segmentation.
- SegNet realizes image segmentation by classifying each pixel in the image and identifying the category of each pixel.
- the network mainly consists of two parts: encoder and decoder.
- in the encoder part, features are extracted and the receptive field is enlarged through pooling layers while the image shrinks; in the decoder part, the main operations are deconvolution and upsampling: deconvolution reproduces the post-classification features, upsampling restores the image to its original size, and finally a Softmax layer outputs the maximum value across the different classes to obtain the final segmentation map.
- the decoder maps the parsed information into the final image form.
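SegNet's published design restores resolution in the decoder by unpooling with the max locations saved by the encoder's pooling layers. That index mechanism is not spelled out in the text above, so the following NumPy sketch (toy 4*4 input, 2*2 windows, all values invented) illustrates the standard design rather than the patent's exact implementation:

```python
import numpy as np

def max_pool_with_indices(x):
    """2x2 max pooling that also records where each maximum came from,
    as SegNet's encoder does (even height and width assumed)."""
    h, w = x.shape
    pooled = np.zeros((h // 2, w // 2), dtype=x.dtype)
    indices = np.zeros((h // 2, w // 2), dtype=int)  # flat index into x
    for i in range(h // 2):
        for j in range(w // 2):
            window = x[2 * i:2 * i + 2, 2 * j:2 * j + 2]
            k = window.argmax()                  # 0..3 inside the window
            pooled[i, j] = window.flat[k]
            indices[i, j] = (2 * i + k // 2) * w + (2 * j + k % 2)
    return pooled, indices

def max_unpool(pooled, indices, shape):
    """SegNet-style upsampling: place each pooled value back at the
    position its maximum originally occupied; everywhere else stays 0."""
    out = np.zeros(shape, dtype=pooled.dtype)
    out.flat[indices.ravel()] = pooled.ravel()
    return out

x = np.array([[1., 2., 0., 4.],
              [3., 0., 1., 0.],
              [0., 5., 2., 1.],
              [6., 0., 0., 3.]])
p, idx = max_pool_with_indices(x)
restored = max_unpool(p, idx, x.shape)
print(p)         # maxima of each 2x2 window
print(restored)  # maxima back in their original positions, zeros elsewhere
```

The pooled map halves the image size (enlarging the receptive field), and the saved indices let the decoder restore the original resolution without learning an interpolation.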
- the gradient descent algorithm is used to optimize the model; the learning rate is set to 1, the momentum coefficient to 0.9, and a total of 15 epochs are trained.
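The update rule behind those hyperparameters can be sketched as gradient descent with momentum, keeping the stated learning rate 1, coefficient 0.9, and 15 epochs. The one-dimensional quadratic loss and the initial parameter value are invented stand-ins for the network's real loss:

```python
def train(lr=1.0, momentum=0.9, epochs=15):
    """Gradient descent with momentum on a toy 1-D quadratic loss
    0.05 * w**2 (an invented stand-in for the real training loss)."""
    w = 5.0  # toy parameter, initial value invented
    v = 0.0  # velocity accumulator
    for _ in range(epochs):
        grad = 0.1 * w              # d/dw of 0.05 * w**2
        v = momentum * v - lr * grad
        w += v
    return w

print(train())  # the parameter moves toward the minimum at w = 0
```

With momentum 0.9 the velocity term accumulates past gradients, which is what lets the comparatively large learning rate still make steady progress on a smooth loss.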
- variants of the above neural network models can also be applied to white matter segmentation of cranial ultrasound images; such variants include simple modification of the number of network layers, changes to the convolution kernel size, and the choice of optimization and activation functions.
- the Faster-Rcnn network locates the region containing cranial white matter with a rectangular frame, which is then cropped out; the semantic segmentation SegNet network then segments the white matter region more precisely. This provides doctors with more accurate segmentation images and helps them assess brain damage.
- Embodiment 2:
- FIG. 2 shows a device for segmenting brain white matter regions of a craniocerebral ultrasound image provided in Embodiment 2 of the present invention, including:
- a preprocessing unit, used to filter and equalize the original ultrasound image;
- a coarse segmentation unit, which uses the target detection network Faster-Rcnn to perform target detection on the preprocessed ultrasound image and generate a detection frame on the image, then crops out the ultrasound image within the detection frame to generate a target image containing the white matter region and non-interest regions;
- a fine segmentation unit, which uses the semantic segmentation network SegNet to eliminate the non-interest regions from the target image, completing precise segmentation of the white matter region of the target image.
- the preprocessing unit includes:
- a filtering module, which uses an anisotropic filter to perform diffusion processing on the original ultrasound image;
- an image enhancement module, used to enhance the diffusion-processed original ultrasound image through histogram equalization.
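The two preprocessing modules can be sketched in NumPy as Perona-Malik anisotropic diffusion followed by histogram equalization. The iteration count, kappa, gamma, and the random test image are invented for illustration; the patent does not state these values:

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=10, kappa=30.0, gamma=0.1):
    """Perona-Malik diffusion: smooths speckle while the conduction
    coefficient exp(-(grad/kappa)^2) slows diffusion across strong
    edges. Periodic borders (np.roll) are used for brevity."""
    img = img.astype(float)
    for _ in range(n_iter):
        # Differences toward the four neighbours.
        diffs = [np.roll(img, 1, axis=0) - img,
                 np.roll(img, -1, axis=0) - img,
                 np.roll(img, 1, axis=1) - img,
                 np.roll(img, -1, axis=1) - img]
        img += gamma * sum(np.exp(-(d / kappa) ** 2) * d for d in diffs)
    return img

def hist_equalize(img):
    """Histogram equalization for an 8-bit image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[hist.nonzero()[0][0]]  # cdf at darkest occupied bin
    lut = np.round(255.0 * (cdf - cdf_min) / (cdf[-1] - cdf_min))
    return np.clip(lut, 0, 255).astype(np.uint8)[img]

noisy = (np.random.rand(64, 64) * 255).astype(np.uint8)
smoothed = anisotropic_diffusion(noisy)
enhanced = hist_equalize(np.clip(smoothed, 0, 255).astype(np.uint8))
print(enhanced.shape, enhanced.dtype)
```

Diffusion suppresses speckle noise without blurring tissue boundaries, and equalization then stretches the low-contrast ultrasound intensities across the full 8-bit range before the image enters the segmentation networks.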
- the coarse segmentation unit includes:
- a convolution layer, used to extract features of the preprocessed ultrasound image to obtain a feature map;
- a region proposal network, used to generate a series of possible candidate boxes from the feature map;
- a region-of-interest pooling layer, used to extract candidate feature maps of fixed size 7*7 from the feature map and all the candidate boxes;
- a fully connected layer, used to perform target classification and regression on the candidate feature maps, obtaining the ultrasound image with the detection frame.
- the semantic segmentation network SegNet includes an encoder and a decoder;
- the encoder extracts the features of each pixel of the target image and classifies that pixel, then enlarges the receptive field while reducing the image size through pooling layers;
- the decoder performs deconvolution on the encoder-processed target image so that the post-classification image features are reproduced, then restores the image to its original size through an upsampling operation and outputs the maximum value across the different classes;
- the decoder is further configured to map the parsed information back onto the original ultrasound image to form the final precise white matter segmentation map.
- the sequential operation of the coarse segmentation unit and the fine segmentation unit avoids the influence of subjectivity and provides effective help for doctors in subsequent diagnosis.
- An electronic device provided in Embodiment 3 of the present invention includes:
- a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to execute the method for segmenting white matter regions of a cranial ultrasound image according to any one of the above.
Abstract
Description
Claims (10)
- A method for segmenting white matter regions of a cranial ultrasound image, characterized in that the method comprises the following steps: S1. preprocessing the original ultrasound image by filtering and equalization; S2. using the target detection network Faster-Rcnn to perform target detection on the preprocessed ultrasound image and generate a detection frame on the image; S3. cropping out the ultrasound image within the detection frame to generate a target image containing the white matter region and non-interest regions; S4. using the semantic segmentation network SegNet to eliminate the non-interest regions from the target image, completing precise segmentation of the white matter region of the target image.
- The method according to claim 1, characterized in that step S1 comprises the following steps: S11. performing diffusion processing on the original ultrasound image using an anisotropic filter; S12. enhancing the diffusion-processed original ultrasound image through histogram equalization.
- The method according to claim 1, characterized in that step S2 comprises the following steps: S21. scaling the preprocessed ultrasound image and feeding it into a convolution layer to extract features, obtaining a feature map; S22. feeding the feature map into a region proposal network to generate a series of possible candidate boxes; S23. inputting the feature map and all of the candidate boxes into a region-of-interest pooling layer to extract candidate feature maps of fixed size 7*7; S24. feeding the candidate feature maps into a fully connected layer for target classification and regression, obtaining the ultrasound image with the detection frame.
- The method according to claim 1, characterized in that the semantic segmentation network SegNet comprises an encoder and a decoder.
- The method according to claim 4, characterized in that step S4 comprises the following steps: S41. the encoder extracts the features of each pixel of the target image and classifies that pixel, then enlarges the receptive field while shrinking the image size through a pooling layer; S42. the decoder performs deconvolution on the encoder-processed target image so that the post-classification image features are reproduced; S43. the decoder restores the image to its original size through an upsampling operation and outputs the maximum value across the different classes; S44. the decoder maps the parsed information back onto the original ultrasound image to form the final precise white matter segmentation map.
- A device for segmenting white matter regions of a cranial ultrasound image, characterized by comprising: a preprocessing unit, used to filter and equalize the original ultrasound image; a coarse segmentation unit, which uses the target detection network Faster-Rcnn to perform target detection on the preprocessed ultrasound image and generate a detection frame on the image, then crops out the ultrasound image within the detection frame to generate a target image containing the white matter region and non-interest regions; and a fine segmentation unit, which uses the semantic segmentation network SegNet to eliminate the non-interest regions from the target image, completing precise segmentation of the white matter region of the target image.
- The device according to claim 6, characterized in that the preprocessing unit comprises: a filtering module, which uses an anisotropic filter to perform diffusion processing on the original ultrasound image; and an image enhancement module, used to enhance the diffusion-processed original ultrasound image through histogram equalization.
- The device according to claim 7, characterized in that the coarse segmentation unit comprises: a convolution layer, used to extract features of the preprocessed ultrasound image to obtain a feature map; a region proposal network, used to generate a series of possible candidate boxes from the feature map; a region-of-interest pooling layer, used to extract candidate feature maps of fixed size 7*7 from the feature map and all of the candidate boxes; and a fully connected layer, used to perform target classification and regression on the candidate feature maps, obtaining the ultrasound image with the detection frame.
- The device according to claim 5, characterized in that the semantic segmentation network SegNet comprises an encoder and a decoder; the encoder extracts the features of each pixel of the target image and classifies that pixel, then enlarges the receptive field while shrinking the image size through a pooling layer; the decoder performs deconvolution on the encoder-processed target image so that the post-classification image features are reproduced, then restores the image to its original size through an upsampling operation and outputs the maximum value across the different classes; the decoder is further used to map the parsed information back onto the original ultrasound image to form the final precise white matter segmentation map.
- An electronic device, comprising: at least one processor; and a memory communicatively connected to the at least one processor; characterized in that the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to execute the method for segmenting white matter regions of a cranial ultrasound image according to any one of claims 1 to 5 above.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010775228.3 | 2020-08-05 | ||
CN202010775228.3A CN111951279B (zh) | 2020-08-05 | 2020-08-05 | Method, apparatus, and electronic device for segmenting white matter regions of cranial ultrasound images |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022027910A1 (zh) | 2022-02-10 |
Family
ID=73337966
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/140244 WO2022027910A1 (zh) | 2020-12-28 | Method, apparatus, and electronic device for segmenting white matter regions of cranial ultrasound images |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111951279B (zh) |
WO (1) | WO2022027910A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115880287A (zh) * | 2023-02-20 | 2023-03-31 | 广东工业大学 | Method for segmenting and grading white matter hyperintensity lesion regions |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111951279B (zh) | 2020-08-05 | 2024-04-23 | 中国科学院深圳先进技术研究院 | Method, apparatus, and electronic device for segmenting white matter regions of cranial ultrasound images |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150023556A1 (en) * | 2013-07-17 | 2015-01-22 | Samsung Electronics Co., Ltd. | Method and apparatus for selecting seed area for tracking nerve fibers in brain |
CN110910396A (zh) * | 2019-10-18 | 2020-03-24 | 北京量健智能科技有限公司 | Method and apparatus for optimizing image segmentation results |
CN111105421A (zh) * | 2019-11-29 | 2020-05-05 | 上海联影智能医疗科技有限公司 | White matter hyperintensity segmentation method, apparatus, device, and storage medium |
CN111951279A (zh) * | 2020-08-05 | 2020-11-17 | 中国科学院深圳先进技术研究院 | Method, apparatus, and electronic device for segmenting white matter regions of cranial ultrasound images |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107862695A (zh) * | 2017-12-06 | 2018-03-30 | 电子科技大学 | Improved image segmentation training method based on a fully convolutional neural network |
CN109389585B (zh) * | 2018-09-20 | 2021-11-02 | 东南大学 | Brain tissue extraction method based on a fully convolutional neural network |
CN109859215B (zh) * | 2019-01-30 | 2021-11-02 | 北京慧脑云计算有限公司 | Automatic white matter hyperintensity segmentation system and method based on the Unet model |
CN110533664B (zh) * | 2019-07-26 | 2022-05-03 | 浙江工业大学 | Automatic cranial nerve segmentation method driven by large-sample data |
CN110991408B (zh) * | 2019-12-19 | 2022-09-06 | 北京航空航天大学 | Method and apparatus for segmenting white matter hyperintensities based on deep learning |
- 2020-08-05: CN application CN202010775228.3A filed (patent CN111951279B, active)
- 2020-12-28: PCT application PCT/CN2020/140244 filed (publication WO2022027910A1)
Non-Patent Citations (1)
Title |
---|
MENG WANG; KAI YU; WEIFANG ZHU; FEI SHI; XINJIAN CHEN: "Multi-Strategy Deep Learning Method for Glaucoma Screening on Fundus Image", INVESTIGATIVE OPHTHALMOLOGY & VISUAL SCIENCE: IOVS, vol. 60, no. 9, 28 April 2019, US, page 6148, XP009533830, ISSN: 0146-0404 *
Also Published As
Publication number | Publication date |
---|---|
CN111951279B (zh) | 2024-04-23 |
CN111951279A (zh) | 2020-11-17 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20948283 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20948283 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 14.08.2023) |