WO2020263002A1 - Blood vessel segmentation method (Procédé de segmentation de vaisseau sanguin) - Google Patents

Blood vessel segmentation method (Procédé de segmentation de vaisseau sanguin)

Info

Publication number
WO2020263002A1
Authority
WO
WIPO (PCT)
Prior art keywords
blood vessel
image
learning
net
algorithm
Prior art date
Application number
PCT/KR2020/008319
Other languages
English (en)
Korean (ko)
Inventor
조한용
권순성
Original Assignee
에이아이메딕 주식회사
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 에이아이메딕 주식회사
Publication of WO2020263002A1

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/0033 Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room
    • A61B5/004 Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room adapted for image acquisition of a particular organ or body part
    • A61B5/0042 Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room adapted for image acquisition of a particular organ or body part for the brain
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/0033 Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room
    • A61B5/0035 Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room adapted for acquisition of images from more than one imaging mode, e.g. combining MRI and optical tomography
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/05 Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves
    • A61B5/055 Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7271 Specific aspects of physiological measurement analysis
    • A61B5/7275 Determining trends in physiological measurement data; Predicting development of a medical condition based on physiological measurements, e.g. determining a risk factor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing

Definitions

  • The present invention relates to a blood vessel segmentation method. More specifically, the present disclosure relates to a method of segmenting a blood vessel region by processing a plurality of 2D blood vessel tomography images with a deep learning method.
  • a medical image processing device is a device that acquires an image capable of non-invasively showing the internal structure of a human body.
  • the medical image output from the medical image processing device may be analyzed and used to diagnose a patient's disease.
  • Devices for capturing and processing medical images include Magnetic Resonance Imaging (MRI), Computed Tomography (CT), Single Photon Emission Computed Tomography (SPECT), Positron Emission Tomography (PET), and ultrasound.
  • CT and MRI are widely used for the diagnosis of cerebrovascular diseases. Because the causes of cerebrovascular disease are diverse and the treatment method and prognosis can vary from patient to patient, various imaging techniques are being developed for accurate cause analysis, selection of an appropriate treatment method, and prediction of prognosis.
  • The CT-based method has the disadvantages of not showing the extent of a cerebral infarction accurately and of involving radiation exposure and a contrast medium. MRI can determine the extent of a cerebral infarction more accurately, but it takes a relatively long time to acquire an image, which can be limiting in emergencies such as acute cerebral infarction; it is also very sensitive to patient movement and, although relatively safer than CT, still has the disadvantage of requiring a contrast medium.
  • Segmentation that processes a plurality of two-dimensional tomographic images to generate a three-dimensional shape model of a blood vessel is therefore required.
  • a method of accurately and quickly segmenting a blood vessel region is required.
  • Medical image processing technologies using deep learning or machine learning techniques have been developed.
  • Systems that diagnose diseases by applying deep learning techniques to medical images acquired from devices such as X-ray, ultrasound, CT (Computed Tomography), MRI (Magnetic Resonance Imaging), and PET (Positron Emission Tomography) are under development.
  • Auxiliary diagnostic systems using deep learning techniques have been developed to classify whether the tissue shown in medical images is normal or abnormal and, in the case of tumors, whether it is positive or negative, and such systems are known to have reached a level at which they can read images comparably to radiologists.
  • Algorithms such as Naive Bayes, SVM (Support Vector Machine), ANN (Artificial Neural Network), and HMM (Hidden Markov Model) are known as algorithms for automatically classifying the presence or absence of such lesions.
  • Machine learning algorithms can be used for this classification; they are broadly divided into supervised learning and unsupervised learning algorithms.
  • An object of the present invention is to provide a method for segmenting a blood vessel region by processing a plurality of tomography medical images.
  • the present invention provides a method for segmenting a blood vessel region.
  • The segmentation method according to the present invention includes the steps of receiving a plurality of 2D tomography images, preprocessing the received 2D tomography images to mark the region where a blood vessel is located and thereby generate training image data, training on the generated training image data to build a blood vessel feature image prediction model, and inputting a plurality of 2D tomographic images into the generated blood vessel feature image prediction model to output a plurality of 2D tomographic images in which blood vessel features are marked.
  • It is preferable to use the GAN algorithm in training the blood vessel feature image prediction model and to use the U-net algorithm as the generator module within the GAN algorithm.
  • When the GAN algorithm is used with the U-net algorithm as its generator module, it is preferable to first perform initial training of the U-net and then, after the initial training, to regard the MR segmentation image output by the U-net as a fake image and train it simultaneously with the discriminator module.
  • The present invention provides a method of segmenting a blood vessel region by processing a plurality of two-dimensional tomographic images, enabling accurate and rapid diagnosis and analysis of vascular lesions such as cardiovascular or cerebrovascular disease.
  • FIG. 1 is a schematic diagram of a conventional blood vessel modeling method
  • FIG. 2 is an MRI image showing (a) an original image, (b) an image for learning, (c) an original image preprocessing result, and (d) a training image preprocessing result.
  • FIG. 3 is a schematic diagram of a U-net learning algorithm according to the present invention.
  • FIG. 5 is a schematic diagram of the cerebrovascular segmentation approach using the windowing technique of the present invention.
  • FIG. 6 is a schematic diagram of a U-net architecture according to the present invention.
  • FIG. 7 is a schematic diagram of a GAN algorithm according to the present invention.
  • FIG. 9 is a schematic diagram of an embodiment of a GAN algorithm according to the present invention.
  • image may mean multi-dimensional data composed of discrete image elements (eg, pixels in a 2D image and voxels in a 3D image).
  • the image may include a medical image of an object acquired by an MRI or CT imaging apparatus.
  • the "object” may include a human or an animal, or a part of a human or animal.
  • The object may include organs such as the liver, heart, uterus, brain, breast, and abdomen, or blood vessels.
  • the "object” may include a phantom.
  • the phantom refers to a material having a volume very close to the density and effective atomic number of an organism, and may include a sphere-shaped phantom having properties similar to the body.
  • the "user" may be a medical expert, such as a doctor, a nurse, a clinical pathologist, a medical imaging expert, and the like, and may be a technician who repairs a medical device, but is not limited thereto.
  • the CT image may be a cardiovascular or cerebrovascular image, but is not limited thereto, and any tomography image including a blood vessel may be used.
  • Although a brain MRA image is used as an example in the detailed description of the present invention, it should be understood as merely exemplary.
  • A commercial medical image viewer receives MRA data (a DICOM file) and outputs two hundred to three hundred two-dimensional images in an axial view.
  • In the conventional workflow, the cerebral blood vessels and tissues with intensities similar to those of the cerebral vessels are first segmented together.
  • Then, within the segmented cerebrovascular shape, stenoses are reproduced, disconnected vessels are reconnected, and tissues other than blood vessels are removed by manual work; this requires the operator to have segmentation know-how and anatomical knowledge.
  • the mesh is created to complete the grid for computer simulation.
  • A method of segmenting a blood vessel region according to the present invention segments a plurality of 2D tomographic images using a deep learning technique.
  • Fig. 2 shows an image for learning according to the segmentation method according to the present invention.
  • Fig. 2(a) is an original image
  • Fig. 2(b) is an image showing a blood vessel area for learning (Ground Truth)
  • Fig. 2(c) is the preprocessing result of the original image
  • Fig. 2(d) is the preprocessed MRI training image.
  • FIG. 4 shows a result of automatically extracting a cerebrovascular region from an MR image predicted by an artificial intelligence system according to an embodiment of the present invention, and the green part represents the cerebrovascular region.
  • The segmentation method includes the steps of receiving a plurality of 2D tomography images, preprocessing the received 2D tomography images to mark the region where a blood vessel is located and thereby generate training image data, training on the generated training image data to build a blood vessel feature image prediction model, and inputting a plurality of two-dimensional tomographic images into the generated blood vessel feature image prediction model to output a plurality of two-dimensional tomographic images in which blood vessel features are marked.
  • A model capable of producing a 2D tomography image in which the blood vessel region is marked is obtained by performing machine learning on 2D tomography images in which the blood vessel region has been marked.
  • An FCN (Fully Convolutional Network) can be used as the machine learning algorithm.
  • The FCN model upsamples the values of the lower convolutional and pooling layers and mixes them appropriately so that the output is not a single class value but a pixel-wise heat map.
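  • The following minimal PyTorch sketch illustrates the FCN idea described above: a 1x1 convolution produces a per-location score map that is upsampled back to pixel resolution to form a heat map. The backbone, channel sizes, and image size are illustrative assumptions, not the patent's actual network.
    import torch
    import torch.nn as nn

    # Stand-in for the convolutional/pooling backbone; channel sizes are illustrative.
    backbone = nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.MaxPool2d(4),                      # coarse feature map at 1/4 resolution
    )
    score = nn.Conv2d(16, 1, kernel_size=1)   # 1x1 convolution -> per-location score

    x = torch.rand(1, 1, 256, 256)            # a single-channel tomography slice
    coarse = score(backbone(x))               # (1, 1, 64, 64) coarse score map
    heatmap = nn.functional.interpolate(      # upsample back to pixel resolution
        coarse, scale_factor=4, mode="bilinear", align_corners=False)
    print(heatmap.shape)                      # torch.Size([1, 1, 256, 256])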
  • An algorithm approaching the globally optimal function is obtained by learning from pairs of data consisting of an input (raw CT image) and a result (clinically verified cerebrovascular segmentation image).
  • U-net algorithm can also be used as a machine learning algorithm.
  • the U-net algorithm is an algorithm developed based on FCN, and has the characteristic of obtaining more accurate segmentation results even with little data.
  • U-net is so named because of its U-shaped architecture; relative to the center of the network, the left half is called the contracting path and the right half the expansive path.
  • a blue box indicates a multi-channel feature map, and each arrow indicates a different operation for each color.
  • The red arrow denotes max pooling, the yellow arrow up-convolution, and the green arrow copy-and-crop, which corresponds to a skip connection.
  • Because the cerebral blood vessels form a three-dimensional structure running from the top to the bottom of the head, rather than locating the vessels in a single MR image and segmenting them there, information on the vessel distribution is obtained by also considering the images immediately before and after the image being segmented.
  • The 150 MR images included in one cerebrovascular case used for training are a collection of cross-sectional images moving from the top to the bottom of the head. Therefore, instead of a single MR image, a windowing technique was used in which multiple MR images are bundled and their context information is input to the network so that data without a repetitive structure can be learned.
  • A stack was constructed by taking a single target MR image together with the k images in the +Z-axis direction and the k images in the -Z-axis direction of that image.
  • This stack forms an (x, y, 2k+1)-dimensional tensor. The window stack, rather than a single MR image, is input to the U-net algorithm, and the channel size of the network input layer is therefore 2k+1.
  • The cerebral vessels form a structure connected from top to bottom without disconnected regions, so when the consecutive images constituting a window are compared, overlapping sections necessarily occur. When the distributions of these sections are superimposed, only the cerebrovascular region, which is the target of segmentation, stands out prominently, as shown in the stacked distribution in FIG. 5.
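  • The following NumPy sketch shows one way to build the (x, y, 2k+1) window stack described above. The slice ordering, the 150x256x256 volume size, and the replication of slices at the ends of the volume are assumptions for illustration; the patent does not specify how the volume boundaries are handled.
    import numpy as np

    def build_window_stack(slices: np.ndarray, index: int, k: int) -> np.ndarray:
        """Stack the target slice with its k neighbours in the +Z and -Z directions.

        slices: (N, H, W) array of axial MR images ordered top-to-bottom.
        Returns an (H, W, 2k+1) tensor matching the input format in the text.
        """
        n = slices.shape[0]
        idxs = np.clip(np.arange(index - k, index + k + 1), 0, n - 1)  # replicate at edges
        stack = slices[idxs]                       # (2k+1, H, W)
        return np.transpose(stack, (1, 2, 0))      # (H, W, 2k+1)

    # Example: k = 2 gives a 5-channel input for the network.
    volume = np.random.rand(150, 256, 256).astype(np.float32)
    window = build_window_stack(volume, index=75, k=2)
    print(window.shape)                            # (256, 256, 5)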
  • The contracting path of the U-net follows a typical convolutional network: two repeated 3x3 convolution operations, each followed by a ReLU function, and a 2x2 max pooling operation with stride 2 for down-sampling.
  • The computation proceeds in the order 3x3 convolution - ReLU - 2x2 max pooling - 3x3 convolution - ReLU - 2x2 max pooling, and the number of feature map channels doubles during the down-sampling process. In the expansive path, the feature map created there is concatenated with the corresponding feature map created in the contracting path.
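  • A minimal two-level U-net sketch in PyTorch follows, matching the contracting/expansive structure, 2x2 max pooling, channel doubling, up-convolution, and skip-connection concatenation described above. It uses padded 3x3 convolutions so that cropping is unnecessary; the channel widths, depth, and the 5-channel input (for a k = 2 window) are illustrative assumptions rather than the patent's exact architecture.
    import torch
    import torch.nn as nn

    def double_conv(in_ch, out_ch):
        # two repeated 3x3 convolutions, each followed by a ReLU
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    class MiniUNet(nn.Module):
        # Contracting path with 2x2 max pooling; expansive path with
        # up-convolutions and skip-connection concatenation.
        def __init__(self, in_channels=5, out_channels=1):
            super().__init__()
            self.down1 = double_conv(in_channels, 64)
            self.down2 = double_conv(64, 128)          # channels double per level
            self.pool = nn.MaxPool2d(2)
            self.bottom = double_conv(128, 256)
            self.up2 = nn.ConvTranspose2d(256, 128, 2, stride=2)
            self.dec2 = double_conv(256, 128)          # 128 (skip) + 128 (up-conv)
            self.up1 = nn.ConvTranspose2d(128, 64, 2, stride=2)
            self.dec1 = double_conv(128, 64)
            self.out = nn.Conv2d(64, out_channels, 1)  # per-pixel vessel logit

        def forward(self, x):
            c1 = self.down1(x)
            c2 = self.down2(self.pool(c1))
            b = self.bottom(self.pool(c2))
            d2 = self.dec2(torch.cat([self.up2(b), c2], dim=1))
            d1 = self.dec1(torch.cat([self.up1(d2), c1], dim=1))
            return self.out(d1)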
  • The U-net model according to the present invention uses ADAM (Adaptive Moment Estimation), a gradient descent algorithm, to learn the variables that minimize the cross entropy.
  • Cross entropy is a value that compares two different probability distributions describing the same event; in the training process it measures how close the model's probability distribution is to the probability distribution of the actual labels.
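  • A single supervised training step consistent with the description above might look as follows, assuming the MiniUNet sketch earlier is in scope. The learning rate, batch size, and random tensors are illustrative placeholders, not the patent's settings.
    import torch
    import torch.nn as nn

    model = MiniUNet(in_channels=5, out_channels=1)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)   # ADAM optimizer
    criterion = nn.BCEWithLogitsLoss()     # pixel-wise cross entropy vs. the label mask

    windows = torch.rand(4, 5, 256, 256)                        # windowed MR inputs
    masks = (torch.rand(4, 1, 256, 256) > 0.5).float()          # ground-truth vessel masks

    optimizer.zero_grad()
    logits = model(windows)
    loss = criterion(logits, masks)        # cross entropy between prediction and label
    loss.backward()                        # gradients consumed by the ADAM update
    optimizer.step()
    print(float(loss))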
  • The system developed according to the present invention processed data from a total of 70 cases to train the artificial intelligence system.
  • MR images from 50 cases were augmented into 360 cases and used for training, and 15 cases were used as validation data.
  • A total of 100,000 training iterations were performed, and various variables were tested to obtain parameter values optimized for cerebrovascular segmentation.
  • The weight and bias values were learned automatically to construct an optimal neural network, and the following variables (hyperparameters) were tuned; an illustrative configuration sketch follows this list.
  • Learning rate: a variable that determines how strongly the error of the result is reflected in learning. If the learning rate is too high, the result is likely to oscillate without converging; if it is too low, learning is slow and may converge to a local minimum.
  • Cost function: a function that calculates the difference between the expected value and the actual value for a given input.
  • The developer decides which of the various cost functions to use so that the problem to be solved by the artificial intelligence can be learned efficiently. Typical examples include the mean squared error and the cross entropy error.
  • Mini-batch size: calculating the cost function over all the data takes a long time, so a subset of the data is used to update the weights. A larger mini-batch increases the learning speed, while a smaller mini-batch updates the weights more frequently, so the accuracy of the neural network may vary.
  • Number of training iterations: if training runs for too many iterations, overfitting may cause the actual accuracy on test data to decrease even though the accuracy keeps increasing during training.
  • Dropout: omits part of the network by randomly dropping neurons, reducing the likelihood of overfitting. Neural network performance differs depending on the proportion of neurons that are omitted.
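  • An illustrative configuration for the hyperparameters listed above is sketched below. Only the 100,000-iteration count comes from the text; every other value is a placeholder assumption, not the patent's actual setting.
    # Illustrative hyperparameter configuration (values are assumptions).
    hyperparams = {
        "learning_rate": 1e-4,            # too high -> oscillation; too low -> slow / local minimum
        "cost_function": "cross_entropy", # alternative: mean squared error
        "mini_batch_size": 8,             # larger -> faster epochs; smaller -> more frequent updates
        "training_iterations": 100_000,   # stated in the text
        "dropout_rate": 0.5,              # fraction of neurons randomly omitted
    }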
  • In the training of the blood vessel feature image prediction model, it is preferable to use the GAN algorithm and to use the U-net algorithm as the generator module within the GAN algorithm.
  • When the GAN algorithm is used with the U-net algorithm as its generator module, the U-net is first initially trained; after the initial training, the MR segmentation image output by the U-net is regarded as a fake image and the U-net is trained simultaneously with the discriminator module.
  • the GAN algorithm was used to further improve the performance of U-net.
  • GAN is a pair of mathematical models composed of a generator and a discriminator.
  • The generator and the discriminator compete with each other adversarially, gradually improving each other's performance.
  • The generator tries to deceive the discriminator by forging data, like a banknote counterfeiter, while the discriminator improves its performance through its efforts to distinguish the forged data from real data.
  • The generator module of the GAN model according to the present invention is the U-net, and the discriminator module is configured to estimate the probability that the received data is actual data rather than an output of the U-net.
  • The relationship between the two modules constituting the GAN algorithm is expressed by the equation below.
  • D(x) is trained so that it approaches 1 for real data x, while random noise z drawn from the noise distribution is passed through the generator to produce G(z), which is then fed into the discriminator function D, and the model is trained so that the value D(G(z)) approaches 0.
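  • The equation itself is not reproduced in this text. The standard GAN minimax objective, which matches the description of driving D(x) toward 1 and D(G(z)) toward 0, is given here as a reconstruction rather than the patent's verbatim formula:
    \min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{data}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]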
  • The generator module attempts to increase the probability that the discriminator module makes a mistake, while the discriminator module attempts to suppress the generator by identifying the counterfeits produced by the generator as fake.
  • In this way the generator's ability to forge data and the discriminator's ability to distinguish data improve together, raising the generator's ability a step further.
  • The blue dotted line represents the probability distribution of the discriminator
  • the black dotted line represents the actual data distribution
  • the green solid line represents the fake data distribution generated by the generator.
  • A typical GAN model trains the two network models, the generator module and the discriminator module, at the same time, and modifies the weight and bias variables to minimize the cross entropy by applying the discriminator's gradient to the generator.
  • In contrast, the GAN algorithm according to the present invention is trained in two separate steps.
  • FIG. 9 is a schematic diagram of an embodiment of the GAN algorithm according to the present invention.
  • First, the U-net is initially trained, and the MR segmentation image it outputs is regarded as a fake image; that is, the U-net is applied as the GAN generator module for the subsequent training.
  • In the first step, the generator learns to find the locations of the cerebral vessels independently, without receiving the discriminator's gradient.
  • In the second step, training proceeds together with the discriminator: the generator receives the discriminator's gradient and modifies its weight and bias variables again.
  • This provides additional performance gains for the generator module, which had already converged and could not otherwise be expected to improve further.
  • The discriminator module receives the original MR image and the segmented data as a pair and determines whether the received segmentation image is real data or a counterfeit output from the generator.
  • The MRA image and the segmented image are each compressed by passing through multiple convolution and pooling layers. A zero-padding technique is applied to each convolution layer so that the original image size is preserved regardless of the kernel size; therefore each Convolution-ReLU-Pooling step compresses the received image to exactly 1/2 of its size horizontally and vertically before passing it to the next layer.
  • The MRA image and the segmented image that have passed through the discriminator's convolutional stages are each converted into a feature map compressed to 1/32 of the original size, and the compressed MRA feature map and the segmentation feature map are combined and delivered to fully connected layers.
  • A total of four fully connected layers are used, all with ReLU activation functions. Through this network it is determined whether the input is a real image or a fake image output from the generator.
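  • A PyTorch sketch of such a discriminator follows: the MRA slice and the segmentation map each pass through five zero-padded Convolution-ReLU-Pooling stages (compression to 1/32), the two feature maps are concatenated, and four ReLU fully connected layers produce a real/fake score. The channel widths, the 256x256 input size, and the fully connected layer sizes are assumptions for illustration.
    import torch
    import torch.nn as nn

    def conv_pool(in_ch, out_ch):
        # zero-padded 3x3 convolution keeps the spatial size; 2x2 pooling halves it
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )

    class Discriminator(nn.Module):
        def __init__(self, size=256):
            super().__init__()
            chans = [1, 16, 32, 64, 128, 256]
            self.mra_tower = nn.Sequential(*[conv_pool(chans[i], chans[i + 1]) for i in range(5)])
            self.seg_tower = nn.Sequential(*[conv_pool(chans[i], chans[i + 1]) for i in range(5)])
            feat = (size // 32) ** 2 * 256 * 2          # two concatenated 1/32-size feature maps
            self.fc = nn.Sequential(
                nn.Linear(feat, 512), nn.ReLU(inplace=True),
                nn.Linear(512, 128), nn.ReLU(inplace=True),
                nn.Linear(128, 32), nn.ReLU(inplace=True),
                nn.Linear(32, 1),                       # fourth layer: real/fake logit
            )

        def forward(self, mra, seg):
            f = torch.cat([self.mra_tower(mra), self.seg_tower(seg)], dim=1)
            return self.fc(f.flatten(1))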
  • The GAN model according to the present invention is implemented using an equation of the form described above.
  • The discriminator module is trained in the direction of widening the gap between D(Xct, Ygt) and D(Xct, S(Xct)), while the generator module is trained in the direction of reducing the gap between Ygt and S(Xct) so that D(Xct, Ygt) and D(Xct, S(Xct)) ultimately move in the same direction as each other.
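  • In a form consistent with this description and with the notation Xct (input image), Ygt (ground-truth segmentation), and S(Xct) (U-net output), the objective can be written as the conditional reconstruction below; this is an interpretation, not the patent's verbatim equation:
    \min_S \max_D \; \mathbb{E}\big[\log D(X_{ct}, Y_{gt})\big] + \mathbb{E}\big[\log\big(1 - D(X_{ct}, S(X_{ct}))\big)\big]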
  • The GAN training process checks the performance change after every short training interval and, while staying near the minimum of the cross entropy, fine-tunes the parameters according to the gradient of the above equation. Tests showed that, compared to the initial training, only a very small number of variables need to be updated for convergence, and the parameter changes are not large in this transfer-learning stage, which is performed for a relatively short time together with the discriminator module.
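  • The second-stage joint training described above could be sketched as follows, assuming the MiniUNet and Discriminator sketches earlier are in scope. The small learning rates, the pairing of the window's centre slice with the masks, and the added supervised term are illustrative assumptions.
    import torch
    import torch.nn as nn

    generator = MiniUNet(in_channels=5, out_channels=1)     # already pre-trained in stage 1
    discriminator = Discriminator(size=256)
    g_opt = torch.optim.Adam(generator.parameters(), lr=1e-6)   # small steps near the minimum
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-5)
    bce = nn.BCEWithLogitsLoss()

    mra = torch.rand(2, 5, 256, 256)                        # windowed MRA input
    gt = (torch.rand(2, 1, 256, 256) > 0.5).float()         # verified segmentation masks
    center = mra[:, 2:3]                                    # centre slice paired with the masks

    # Discriminator step: widen the gap between real pairs and generated pairs.
    with torch.no_grad():
        fake = torch.sigmoid(generator(mra))
    d_loss = bce(discriminator(center, gt), torch.ones(2, 1)) + \
             bce(discriminator(center, fake), torch.zeros(2, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: receive the discriminator's gradient (plus a supervised term).
    logits = generator(mra)
    fake = torch.sigmoid(logits)
    g_loss = bce(logits, gt) + bce(discriminator(center, fake), torch.ones(2, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()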
  • Finally, the weights and biases of the performance-improved neural network are stored, completing the development of a network that segments the cerebrovascular region when MRA data are input.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Pathology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Veterinary Medicine (AREA)
  • Surgery (AREA)
  • Biophysics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Primary Health Care (AREA)
  • Epidemiology (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Neurology (AREA)
  • Physiology (AREA)
  • Psychiatry (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to a blood vessel segmentation method. More specifically, the present invention relates to a method of segmenting a blood vessel region by processing a plurality of two-dimensional blood vessel tomography images with a deep learning method. The invention provides a blood vessel region segmentation method. The segmentation method according to the present invention comprises the steps of: receiving, as input, a plurality of two-dimensional tomography images; preprocessing the plurality of two-dimensional tomography images received as input so as to mark a region in which a blood vessel is located, thereby generating training image data; performing training with the generated training image data to generate a blood vessel feature image prediction model; and inputting the plurality of two-dimensional tomography images into the generated blood vessel feature image prediction model to receive, as output, a plurality of two-dimensional tomography images displaying blood vessel features.
PCT/KR2020/008319 2019-06-27 2020-06-26 Procédé de segmentation de vaisseau sanguin WO2020263002A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020190077024A KR102267219B1 (ko) 2019-06-27 2019-06-27 혈관 세그멘테이션 방법
KR10-2019-0077024 2019-06-27

Publications (1)

Publication Number Publication Date
WO2020263002A1 true WO2020263002A1 (fr) 2020-12-30

Family

ID=74059792

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/008319 WO2020263002A1 (fr) 2019-06-27 2020-06-26 Procédé de segmentation de vaisseau sanguin

Country Status (2)

Country Link
KR (1) KR102267219B1 (fr)
WO (1) WO2020263002A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102526434B1 (ko) * 2021-07-13 2023-04-26 경희대학교 산학협력단 병변 진단 장치 및 그 방법

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010016293A1 (fr) * 2008-08-08 2010-02-11 コニカミノルタエムジー株式会社 Dispositif d'affichage d'image médicale, et procédé d'affichage d'image médicale et programme
KR20160047921A (ko) * 2014-10-23 2016-05-03 삼성전자주식회사 초음파 영상 장치 및 그 제어 방법
KR20180099119A (ko) * 2017-02-28 2018-09-05 연세대학교 산학협력단 Ct 영상 데이터베이스 기반 심장 영상의 영역화 방법 및 그 장치
KR20190056880A (ko) * 2017-11-17 2019-05-27 안영샘 진단 영상을 변환하기 위한 장치, 이를 위한 방법 및 이 방법을 수행하는 프로그램이 기록된 컴퓨터 판독 가능한 기록매체

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LARGENT, A. ET AL.: "Pseudo-CT Generation for MRI-only Radiotherapy: Comparative Study Between a Generative Adversarial Network, a U-Net Network, a Patch-Based, and an Atlas-Based Methods", 2019 IEEE 16TH INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING (ISBI 2019), 2019, pages 1109 - 1113, XP033576402, DOI: 10.1109/ISBI.2019.8759278 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113344940A (zh) * 2021-05-07 2021-09-03 西安智诊智能科技有限公司 一种基于深度学习的肝脏血管图像分割方法

Also Published As

Publication number Publication date
KR20210001233A (ko) 2021-01-07
KR102267219B1 (ko) 2021-06-21

Similar Documents

Publication Publication Date Title
WO2021030629A1 (fr) Segmentation d'objet tridimensionnelle d'images médicales localisées avec détection d'objet
Omonigho et al. Breast cancer: tumor detection in mammogram images using modified alexnet deep convolution neural network
ES2914387T3 (es) Estudio inmediato
CN107563434B (zh) 一种基于三维卷积神经网络的脑部mri图像分类方法、装置
WO2021021329A1 (fr) Système et procédé d'interprétation d'images médicales multiples à l'aide d'un apprentissage profond
KR20200080626A (ko) 병변 진단에 대한 정보 제공 방법 및 이를 이용한 병변 진단에 대한 정보 제공용 디바이스
JP2020010805A (ja) 特定装置、プログラム、特定方法、情報処理装置及び特定器
US20230005140A1 (en) Automated detection of tumors based on image processing
Shahangian et al. Automatic brain hemorrhage segmentation and classification in CT scan images
Lu et al. Breast cancer detection based on merging four modes MRI using convolutional neural networks
WO2020263002A1 (fr) Procédé de segmentation de vaisseau sanguin
Das et al. Cross-population train/test deep learning model: abnormality screening in chest x-rays
Asadi et al. Efficient breast cancer detection via cascade deep learning network
Nayan et al. A deep learning approach for brain tumor detection using magnetic resonance imaging
Hasan et al. Performance of grey level statistic features versus Gabor wavelet for screening MRI brain tumors: A comparative study
Krishna et al. Automated classification of common maternal fetal ultrasound planes using multi-layer perceptron with deep feature integration
Zeng et al. A 2.5 D deep learning-based method for drowning diagnosis using post-mortem computed tomography
Xu et al. Improved cascade R-CNN for medical images of pulmonary nodules detection combining dilated HRNet
Sengun et al. Automatic liver segmentation from CT images using deep learning algorithms: a comparative study
Kathalkar et al. Artificial neural network based brain cancer analysis and classification
Song et al. A survey of deep learning based methods in medical image processing
CN115004225A (zh) 弱监督病灶分割
Kalyani et al. Medical Image Processing from Large Datasets Using Deep Learning
Farzana et al. Semantic Segmentation of Brain Tumor from 3D Structural MRI Using U-Net Autoencoder
Srivastava et al. Design of novel hybrid model for detection of liver cancer

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20832194

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20832194

Country of ref document: EP

Kind code of ref document: A1