WO2023163287A1 - Method and apparatus for medical image analysis - Google Patents
Method and apparatus for medical image analysis
- Publication number
- WO2023163287A1 WO2023163287A1 PCT/KR2022/008379 KR2022008379W WO2023163287A1 WO 2023163287 A1 WO2023163287 A1 WO 2023163287A1 KR 2022008379 W KR2022008379 W KR 2022008379W WO 2023163287 A1 WO2023163287 A1 WO 2023163287A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- information
- medical image
- analysis model
- generated
- Prior art date
Links
- 238000000034 method Methods 0.000 title claims abstract description 37
- 238000004458 analytical method Methods 0.000 claims abstract description 74
- 239000000203 mixture Substances 0.000 claims description 23
- 238000004590 computer program Methods 0.000 claims description 11
- 238000003703 image analysis method Methods 0.000 claims description 9
- 210000000577 adipose tissue Anatomy 0.000 claims description 8
- 210000003205 muscle Anatomy 0.000 claims description 7
- 238000010191 image analysis Methods 0.000 claims description 6
- 210000001596 intra-abdominal fat Anatomy 0.000 claims description 6
- 206010033675 panniculitis Diseases 0.000 claims description 6
- 210000004003 subcutaneous fat Anatomy 0.000 claims description 6
- 238000012549 training Methods 0.000 claims description 6
- 238000002591 computed tomography Methods 0.000 claims description 3
- 238000007918 intramuscular administration Methods 0.000 claims description 3
- 210000000056 organ Anatomy 0.000 claims description 3
- 238000004891 communication Methods 0.000 description 14
- 230000006870 function Effects 0.000 description 11
- 238000013528 artificial neural network Methods 0.000 description 9
- 238000010586 diagram Methods 0.000 description 6
- 210000001015 abdomen Anatomy 0.000 description 4
- 238000012545 processing Methods 0.000 description 4
- 230000003187 abdominal effect Effects 0.000 description 3
- 238000007796 conventional method Methods 0.000 description 3
- 230000008569 process Effects 0.000 description 3
- 238000004422 calculation algorithm Methods 0.000 description 2
- 230000008859 change Effects 0.000 description 2
- 201000010099 disease Diseases 0.000 description 2
- 208000037265 diseases, disorders, signs and symptoms Diseases 0.000 description 2
- 230000000694 effects Effects 0.000 description 2
- 238000005516 engineering process Methods 0.000 description 2
- 239000000284 extract Substances 0.000 description 2
- 238000007781 pre-processing Methods 0.000 description 2
- 206010020649 Hyperkeratosis Diseases 0.000 description 1
- 206010030113 Oedema Diseases 0.000 description 1
- 238000013473 artificial intelligence Methods 0.000 description 1
- 210000000746 body region Anatomy 0.000 description 1
- 238000004364 calculation method Methods 0.000 description 1
- 230000010267 cellular communication Effects 0.000 description 1
- 230000001419 dependent effect Effects 0.000 description 1
- 238000003745 diagnosis Methods 0.000 description 1
- 238000004141 dimensional analysis Methods 0.000 description 1
- 238000010295 mobile communication Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 210000002569 neuron Anatomy 0.000 description 1
- 238000010606 normalization Methods 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 230000001151 other effect Effects 0.000 description 1
- 230000011218 segmentation Effects 0.000 description 1
- 230000000007 visual effect Effects 0.000 description 1
Images
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/05—Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves
- A61B5/055—Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/02—Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
- A61B6/03—Computed tomography [CT]
Definitions
- because the conventional method provides only relatively simple information (e.g., muscle mass versus fat mass), it cannot analyze the quantitative data needed to predict or diagnose a disease caused by an imbalance in body composition for each body part of a patient, which is a further limitation.
- a medical image analysis method includes: obtaining a 3D medical image generated by photographing the body of an examinee; generating at least one 2D cross-sectional image corresponding to each of at least one pre-trained analysis model by segmenting the medical image along the anatomical direction corresponding to that model; generating image information in which a plurality of body component regions are segmented by inputting each 2D cross-sectional image into the corresponding analysis model; and calculating information on at least one of the ratios and volumes of the plurality of body components from the image information.
- the medical image may include at least one of a 3D computed tomography (3D-CT) image and a magnetic resonance image (MRI).
- the medical image may be a T1-weighted magnetic resonance image.
- the 2D cross-sectional image may include at least one of an axial image, a sagittal image, and a coronal image.
- the calculating of information on at least one of a ratio and a volume of the body components may include: calculating the number of pixels in each body component region from the image information generated for each 2D cross-sectional image; obtaining information on the actually measured size of a pixel from the photographing information of the medical image; and calculating the volume of each body component based on the number of pixels in the body component region and the information on the measured size.
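The three calculating steps above can be sketched in Python. All values below (pixel counts, spacings) are illustrative, and the function name is invented; the patent does not prescribe an implementation:

```python
# Sketch of the claimed volume calculation: count pixels per body-component
# region in each 2D cross-section, convert counts to areas using the measured
# pixel size (e.g., as read from DICOM-style photographing information), then
# integrate over the slice spacing.

def component_volume_ml(pixel_counts, pixel_spacing_mm, slice_thickness_mm):
    """pixel_counts: pixels labeled as one component in each consecutive slice."""
    row_mm, col_mm = pixel_spacing_mm          # actually measured size of one pixel
    area_per_pixel_mm2 = row_mm * col_mm
    volume_mm3 = sum(n * area_per_pixel_mm2 * slice_thickness_mm
                     for n in pixel_counts)
    return volume_mm3 / 1000.0                 # 1 mL = 1000 mm^3

# Example: three consecutive slices, 1 mm x 1 mm pixels, 5 mm slice spacing.
vol = component_volume_ml([200, 220, 210], (1.0, 1.0), 5.0)
print(vol)  # 3.15 (mL)
```

Ratios between components then follow directly from the same per-component counts or volumes.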
- a medical image analysis apparatus includes at least one processor and a memory storing a program executable by the processor. By executing the program, the processor obtains a 3D medical image generated by photographing the examinee's body, generates at least one 2D cross-sectional image corresponding to each of at least one pre-trained analysis model by segmenting the medical image along the anatomical direction corresponding to that model, generates image information in which a plurality of body component regions are segmented by inputting each 2D cross-sectional image into the corresponding analysis model, and may calculate information on at least one of the ratios and volumes of the plurality of body components from the image information.
- FIG. 1 is a flowchart illustrating a method for analyzing a medical image according to an embodiment of the present disclosure.
- 3 and 4 are diagrams exemplarily illustrating body fat analysis information calculated by a medical image analysis method according to an embodiment of the present disclosure.
- FIG. 8 is a block diagram briefly illustrating the configuration of a medical image analysis apparatus according to an embodiment of the present disclosure.
- when one component is referred to as being "connected" or "coupled" to another component, the one component may be directly connected or coupled to the other component, but unless specifically stated otherwise, it should be understood that they may also be connected or coupled via an intervening component.
- a "~ unit" refers to a unit that processes at least one function or operation, and may be implemented in hardware, software, or a combination of hardware and software, for example as a processor, microprocessor, microcontroller, central processing unit (CPU), graphics processing unit (GPU), accelerated processing unit (APU), digital signal processor (DSP), application-specific integrated circuit (ASIC), or field-programmable gate array (FPGA).
- the classification of components in the present disclosure is merely a classification by the main function each component is responsible for. That is, two or more of the components described below may be combined into one component, or one component may be divided into two or more components by more subdivided functions.
- each component described below may additionally perform some or all of the functions of other components in addition to its own main function, and some of the main functions of a component may, of course, be delegated to and performed exclusively by another component.
- the method may be performed on one or more computing devices.
- at least one or more steps of the method 100 according to an embodiment of the present disclosure may be performed in a client device and other steps may be performed in a server device.
- the client device and the server device may be connected through a network to transmit and receive calculation results.
- method 100 may be performed by distributed computing technology.
- FIG. 1 is a flowchart for explaining a method for analyzing a medical image according to an embodiment of the present disclosure
- FIG. 2 is a flowchart for explaining an embodiment of step S140 of FIG. 1 .
- the device may obtain a 3D medical image generated by photographing the examinee's body.
- the medical image may be received from an external database server or may be captured by a photographing device connected to the device through wired or wireless communication.
- the medical image may be an image generated by photographing the examinee's whole body, but is not limited thereto; according to an embodiment, an image generated by photographing only a part of the examinee's body may also be used.
- the medical image may include at least one of a 3D computed tomography (3D-CT) image and a magnetic resonance image (MRI).
- a T1-weighted magnetic resonance image may be used.
- in a T1-weighted magnetic resonance image, the contrast difference between fat and muscle is high, so the segmentation accuracy of body component regions can be further improved by the analysis model described in detail below.
- the method 100 may further include pre-processing the medical image after step S110.
- the device may first filter the medical image through a preprocessing algorithm to determine whether it is a normal image that can be analyzed, and may convert its data format to suit the input format of the analysis model.
- the device may correct contrast deviations caused by errors or noise of the photographing device through an image normalization process.
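One plausible form of this normalization step is a min-max rescaling; the patent does not specify the exact method, so the sketch below is only an assumption of what such preprocessing could look like:

```python
# Illustrative intensity normalization: rescale an image's contrast values to
# [0, 1] so that scanner-dependent offsets and gain differences are removed
# before the image is fed to the analysis model.
def normalize(image_rows):
    flat = [v for row in image_rows for v in row]
    lo, hi = min(flat), max(flat)
    if hi == lo:                       # constant image: nothing to rescale
        return [[0.0 for _ in row] for row in image_rows]
    return [[(v - lo) / (hi - lo) for v in row] for row in image_rows]

# After normalization the darkest pixel maps to 0.0 and the brightest to 1.0.
result = normalize([[10, 20], [30, 40]])
```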
- the device may generate at least one 2D cross-sectional image (e.g., an axial image) corresponding to the first analysis model by cutting at least a part of the examinee's body included in the 3D medical image perpendicularly to a first direction (e.g., the direction from the head to the legs) at regular intervals while moving along that direction.
- the device may generate at least one 2D cross-sectional image (e.g., a sagittal image) corresponding to the second analysis model by cutting at least a part of the examinee's body included in the 3D medical image perpendicularly to a second direction (e.g., the direction from the left arm to the right arm) at regular intervals while moving along that direction.
- likewise, the device may generate at least one 2D cross-sectional image (e.g., a coronal image) corresponding to the third analysis model by cutting at least a part of the examinee's body included in the 3D medical image perpendicularly to a third direction (e.g., the direction from the front to the back) at regular intervals while moving along that direction.
- the first direction, the second direction, and the third direction may be directions perpendicular to each other.
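A minimal sketch of this three-direction slicing, assuming a NumPy array with a (z, y, x) axis convention (z = head-to-legs, y = front-to-back, x = left-to-right); the axis assignment and the dummy volume shape are illustrative choices, not mandated by the patent:

```python
import numpy as np

# Dummy 3D medical image; a real volume would come from CT or MRI.
volume = np.zeros((40, 64, 64))            # (z, y, x)

step = 8                                    # regular slicing interval
# Cutting perpendicular to each of the three mutually perpendicular directions
# yields axial, sagittal, and coronal cross-sections respectively.
axial    = [volume[z, :, :] for z in range(0, volume.shape[0], step)]
sagittal = [volume[:, :, x] for x in range(0, volume.shape[2], step)]
coronal  = [volume[:, y, :] for y in range(0, volume.shape[1], step)]

print(len(axial), axial[0].shape)          # 5 (64, 64)
```

Each list of slices would then be routed to the analysis model trained on that slice orientation.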
- a plurality of analysis models may be provided, each pre-trained to segment body component regions based on a different type of 2D cross-sectional image.
- the analysis models include a first analysis model and a second analysis model; the first analysis model is pre-trained to segment body component regions using a plurality of cross-sectional images generated from a plurality of 3D training medical images.
- the second analysis model may be pre-trained to segment the body component region through at least one of a plurality of sagittal plane images and coronal plane images generated from the training medical images.
- a network function may be used in the same sense as an artificial neural network and/or a neural network.
- a neural network may be composed of a set of interconnected computational units, which may be generally referred to as nodes, and these nodes may be referred to as neurons.
- a neural network generally includes a plurality of nodes, and the nodes constituting the neural network may be interconnected by one or more links. In this case, some of the nodes constituting the neural network may form one layer based on their distances from the first input node. For example, the set of nodes at a distance of n from the first input node may constitute the n-th layer.
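The layer-by-distance description above can be illustrated with a toy graph; the node names and topology below are invented purely for illustration:

```python
# Group the nodes of a small feed-forward graph into layers by their link
# distance from the input node, as described above: nodes at distance n from
# the first input node form the n-th layer.
from collections import deque

links = {"in": ["h1", "h2"], "h1": ["out"], "h2": ["out"], "out": []}

def layers(graph, start):
    dist, queue = {start: 0}, deque([start])
    while queue:                            # breadth-first traversal
        node = queue.popleft()
        for nxt in graph[node]:
            if nxt not in dist:
                dist[nxt] = dist[node] + 1
                queue.append(nxt)
    by_layer = {}
    for node, d in dist.items():
        by_layer.setdefault(d, []).append(node)
    return by_layer

print(layers(links, "in"))  # {0: ['in'], 1: ['h1', 'h2'], 2: ['out']}
```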
- the neural network may include a deep neural network (DNN) including a plurality of hidden layers in addition to an input layer and an output layer.
- the device may calculate information on at least one of ratios and volumes of a plurality of body components from the image information.
- step S140 may include steps S141 to S143.
- the device may obtain information on the actually measured size of a pixel (horizontal and vertical lengths) from the photographing information (e.g., DICOM tag information) of the medical image.
- the device may calculate the area of each body component included in each 2D cross-sectional image by multiplying the number of pixels in each body component region by the measured pixel size.
- the device may calculate the volume occupied by each body component in all or part of the examinee's body based on the arrangement of, and slice spacing between, a plurality of consecutive 2D cross-sections.
- one or more sagittal and/or coronal plane images of the entire body are generated from the 3D medical image, and based on these, information on the ratio of each body component in the examinee's entire body is calculated.
- the information on the ratio of body components may include the proportion that a body component occupies in the whole and/or a relative ratio between two or more body components.
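A small sketch of these two kinds of ratio, using invented per-component pixel counts (component names and values are illustrative, not from the patent):

```python
# Given per-component pixel counts from the segmentation output, compute each
# component's share of the whole region and a relative ratio between
# components (here: muscle versus total fat).
counts = {"muscle": 5000, "visceral_fat": 1500, "subcutaneous_fat": 2500}
total = sum(counts.values())

share = {name: n / total for name, n in counts.items()}       # share of whole
muscle_to_fat = counts["muscle"] / (counts["visceral_fat"]
                                    + counts["subcutaneous_fat"])
print(muscle_to_fat)  # 1.25
```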
- the calculated body composition ratio may be provided to the user (or user terminal) in the form of a predetermined table.
- one or more cross-sectional images of the entire body may be generated from a 3D medical image, and information on the volume and/or ratio of each body component in the entire body of the examinee may be calculated based on the generated cross-sectional images. Similarly, the volume and/or ratio of the calculated body composition may be provided to the user (or user terminal) in a table format.
- each body component region segmented by the analysis model may be displayed on a sectional image (i.e., a cross-sectional image) in a predetermined manner so as to be separately identifiable.
- the user may change the cross-sectional image in which the body composition region is displayed through a predetermined input. For example, a shape corresponding to the entire body of the examinee may be displayed, and when a user selects a specific location thereof, a body component region may be displayed on a cross-sectional image corresponding to the location.
- step S520 may be performed based on user input. That is, when the user sets a specific body part as an analysis target through a predetermined interface, the device extracts the body part set by the user from the medical image and separates it from the other body parts. For example, the user may set one or more body parts, such as the legs, arms, or abdomen, as analysis targets.
- step S520 may be performed by a pretrained network function.
- the device may generate one or more 2D cross-sectional images of the corresponding body part by segmenting the body part set as the analysis target in an anatomical direction corresponding to each analysis model.
- in steps S540 and S550, the device generates image information in which a plurality of body component regions are segmented by inputting the 2D cross-sectional images into the corresponding analysis models, and calculates, from the image information, information on at least one of the ratio and volume of the plurality of body components included in each body part.
- step S550 may be performed in the same manner as step S140 described above with reference to FIG. 2 .
- the device may generate body composition comparison information for each body part. That is, when a plurality of body parts are set as analysis targets by the user, the device may generate comparison information by comparing information such as the body component volume and/or ratio of each body part with each other.
- the generated comparison information may be provided to a user or the like in a predetermined manner.
- 6 and 7 are diagrams exemplarily illustrating body fat analysis information calculated by a medical image analysis method according to an embodiment of the present disclosure.
- an abdominal region may be segmented from a medical image. Subsequently, information on the ratio of each body component included in the abdomen of the examinee may be calculated based on the plurality of cross sections generated from the abdominal region. The calculated body composition ratio may be provided to the user (or user terminal) in the form of a predetermined table.
- each body component region segmented by the analysis model may be displayed in a predetermined manner, so as to be separately identifiable, on a cross-sectional image or on a 3D image of each arm separated from the medical image.
- FIG. 8 is a block diagram briefly illustrating the configuration of a medical image analysis apparatus according to an embodiment of the present disclosure.
- the communication unit 810 may receive input data (medical images, etc.) for performing medical image analysis.
- the communication unit 810 may include a wired/wireless communication unit.
- the communication unit 810 may include one or more components that enable communication through a local area network (LAN), a wide area network (WAN), a value-added network (VAN), a mobile radio communication network, a satellite communication network, or a combination thereof.
- when the communication unit 810 includes a wireless communication unit, it may transmit and receive data or signals wirelessly using cellular communication, a wireless LAN (e.g., Wi-Fi), and the like.
- the communication unit may transmit/receive data or signals with an external device or an external server under the control of the processor 840 .
- the input unit 820 may receive various user commands through external manipulation.
- the input unit 820 may include or connect one or more input devices.
- the input unit 820 may be connected to various input interfaces such as a keypad and a mouse to receive user commands.
- the input unit 820 may include an interface such as Thunderbolt as well as a USB port.
- the input unit 820 may receive an external user command by including or combining various input devices such as a touch screen and buttons.
- the memory 830 may store programs and/or program commands for operation of the processor 840 and may temporarily or permanently store input/output data.
- the memory 830 may include at least one type of storage medium among a flash memory type, a hard disk type, a multimedia card micro type, a card type memory (e.g., SD or XD memory), RAM, SRAM, ROM, EEPROM, PROM, magnetic memory, and a magnetic disk.
- the memory 830 may store various network functions and algorithms, and may store various data, programs (one or more instructions), applications, software, commands, and codes for driving and controlling the device 800.
- the processor 840 may control the overall operation of the device 800 .
- Processor 840 may execute one or more programs stored in memory 830 .
- the processor 840 may mean a central processing unit (CPU), a graphics processing unit (GPU), or a dedicated processor on which methods according to the technical idea of the present disclosure are performed.
- the processor 840 obtains a 3D medical image generated by photographing the examinee's body, generates at least one 2D cross-sectional image corresponding to each of at least one pre-trained analysis model by segmenting the medical image along the anatomical direction corresponding to that model, generates image information in which a plurality of body component regions are segmented by inputting each 2D cross-sectional image into the corresponding analysis model, and may calculate information on at least one of the ratios and volumes of the plurality of body components from the image information.
- the processor 840 calculates the number of pixels in each body component region from the image information generated for each 2D cross-sectional image, obtains information on the actually measured size of a pixel from the photographing information of the medical image, and may calculate the volume of each body component based on the number of pixels in the body component region and the information on the measured size.
- the processor 840 may divide at least one body part to be analyzed from the medical image, and generate the 2D cross-sectional image of the divided body part from the medical image.
- the processor 840 may generate body component comparison information for each body part by comparing information on at least one of a ratio and a volume of the body components calculated from the different body parts.
- the method according to the disclosed embodiments may be provided by being included in a computer program product.
- Computer program products may be traded between sellers and buyers as commodities.
- a computer program product may include a storage medium of a server or a storage medium of a client device in a system composed of a server and a client device.
- the computer program product may include a storage medium of the third device.
- the computer program product may include the S/W program itself, transmitted from the server to the client device or the third device, or from the third device to the client device.
- one of the server, the client device and the third device may execute the computer program product to perform the method according to the disclosed embodiments.
- two or more of the server, the client device, and the third device may execute the computer program product to implement the method according to the disclosed embodiments in a distributed manner.
- a server may execute a computer program product stored in the server to control a client device communicatively connected to the server to perform a method according to the disclosed embodiments.
Abstract
The present disclosure relates to a method and apparatus for analyzing a medical image. A method according to an embodiment of the present disclosure may comprise the steps of: acquiring a three-dimensional medical image generated by photographing the body of a subject; generating at least one two-dimensional cross-sectional image corresponding to each of at least one pre-trained analysis model by segmenting the medical image based on an anatomical direction corresponding to that model; generating image information in which a plurality of body component regions are segmented by inputting the two-dimensional cross-sectional image into the corresponding analysis model; and calculating, from the image information, information regarding the proportion and/or volume of the plurality of body components.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020220024899A KR102525396B1 (ko) | 2022-02-25 | 2022-02-25 | Medical image analysis method and apparatus |
KR10-2022-0024899 | 2022-02-25 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023163287A1 true WO2023163287A1 (fr) | 2023-08-31 |
Family
ID=86100576
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2022/008379 WO2023163287A1 (fr) | 2022-06-14 | Method and apparatus for medical image analysis |
Country Status (2)
Country | Link |
---|---|
KR (1) | KR102525396B1 (fr) |
WO (1) | WO2023163287A1 (fr) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101063594B1 (ko) * | 2009-07-09 | 2011-09-07 | 연세대학교 산학협력단 | Apparatus for measuring abdominal visceral fat |
KR102144672B1 (ko) * | 2020-01-17 | 2020-08-14 | 성균관대학교산학협력단 | Artificial-intelligence ultrasound medical diagnosis apparatus using semantic segmentation and remote medical diagnosis method using the same |
KR20200133593A (ko) * | 2019-05-20 | 2020-11-30 | 주식회사 힐세리온 | Artificial-intelligence ultrasound apparatus for automatic diagnosis of fatty liver and remote medical diagnosis method using the same |
KR102321427B1 (ko) * | 2021-01-20 | 2021-11-04 | 메디컬아이피 주식회사 | Method and apparatus for analyzing body composition using medical images |
KR20220011488A (ko) * | 2020-07-21 | 2022-01-28 | 서울대학교산학협력단 | Apparatus and method for body composition analysis based on CT images |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20180038251A (ko) * | 2016-10-06 | 2018-04-16 | 주식회사 제니스헬스케어 | Body composition measurement system using machine learning |
KR101981202B1 (ko) * | 2018-12-11 | 2019-05-22 | 메디컬아이피 주식회사 | Medical image reconstruction method and apparatus |
-
2022
- 2022-02-25 KR KR1020220024899A patent/KR102525396B1/ko active IP Right Grant
- 2022-06-14 WO PCT/KR2022/008379 patent/WO2023163287A1/fr unknown
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101063594B1 (ko) * | 2009-07-09 | 2011-09-07 | 연세대학교 산학협력단 | Apparatus for measuring abdominal visceral fat |
KR20200133593A (ko) * | 2019-05-20 | 2020-11-30 | 주식회사 힐세리온 | Artificial-intelligence ultrasound apparatus for automatic diagnosis of fatty liver and remote medical diagnosis method using the same |
KR102144672B1 (ko) * | 2020-01-17 | 2020-08-14 | 성균관대학교산학협력단 | Artificial-intelligence ultrasound medical diagnosis apparatus using semantic segmentation and remote medical diagnosis method using the same |
KR20220011488A (ko) * | 2020-07-21 | 2022-01-28 | 서울대학교산학협력단 | Apparatus and method for body composition analysis based on CT images |
KR102321427B1 (ko) * | 2021-01-20 | 2021-11-04 | 메디컬아이피 주식회사 | Method and apparatus for analyzing body composition using medical images |
Also Published As
Publication number | Publication date |
---|---|
KR102525396B1 (ko) | 2023-04-27 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22929038 Country of ref document: EP Kind code of ref document: A1 |