WO2023113230A1 - Method and analysis device for detecting a landmark of a cephalometric radiographic image by means of deep reinforcement learning - Google Patents

Method and analysis device for detecting a landmark of a cephalometric radiographic image by means of deep reinforcement learning

Info

Publication number
WO2023113230A1
Authority
WO
WIPO (PCT)
Prior art keywords
measurement
radiographic image
point
action
head
Prior art date
Application number
PCT/KR2022/017166
Other languages
English (en)
Korean (ko)
Inventor
팽준영
김형건
홍우재
김성민
Original Assignee
사회복지법인 삼성생명공익재단
성균관대학교산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 사회복지법인 삼성생명공익재단 and 성균관대학교산학협력단
Publication of WO2023113230A1


Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/52 Devices using data or image processing specially adapted for radiation diagnosis
    • A61B6/5211 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
    • A61B6/5217 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data extracting a diagnostic or physiological parameter from medical diagnostic data
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/50 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment specially adapted for specific body parts; specially adapted for specific clinical applications
    • A61B6/501 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment specially adapted for specific body parts; specially adapted for specific clinical applications for diagnosis of the head, e.g. neuroimaging or craniography
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10116 X-ray image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30016 Brain

Definitions

  • The technology described below relates to a method for detecting measurement points in a head measurement radiographic image based on deep reinforcement learning.
  • Orthodontic diagnosis is an important process for evaluating orthodontic treatment and orthodontic results.
  • Cephalometric evaluation is a representative analysis method used for orthodontic diagnosis. In cephalometric radiographic analysis, it is important to identify landmarks in images.
  • Conventionally, measurement point identification has relied mainly on the subjective judgment of medical staff. Measurement point detection techniques using neural network models have recently emerged, but they are limited by the difficulty of collecting the large amounts of training data needed for learning. To overcome this, prior techniques increased the amount of training data (data augmentation). However, this increases the complexity of preparing the training data, and because the augmented samples are synthetically generated, model accuracy tends to decrease.
  • The technology described below aims to provide a technique for detecting measurement points in a head measurement radiographic image based on deep reinforcement learning, which is one type of learning model.
  • A measurement point detection method of a head measurement radiographic image using deep reinforcement learning includes the steps of: receiving, by an analysis device, a head measurement radiographic image of a subject; inputting, by the analysis device, the head measurement radiographic image to a pre-trained feature extraction layer; and inputting, by the analysis device, the feature map output from the feature extraction layer to a Q-learning network to determine an action at the current point in time.
  • The action is information on a moving direction for detecting a measurement point from the current position in the input image, and the analysis device detects the measurement point by repeating the process of determining an action while updating the reward to reflect the state resulting from each determined action.
  • An analysis device that detects a measurement point in a head measurement radiographic image includes an input device that receives a head measurement radiographic image of a subject, a feature extraction layer that generates a feature map from frames of the head measurement radiographic image, and a Q-learning network that receives the feature map and determines an action with respect to the current position.
  • the technology to be described below can detect a measurement point in a head measurement radiographic image with high performance without performing tasks such as augmenting learning data.
  • FIG. 1 is an example of a system for detecting measurement points in a head measurement radiographic image.
  • FIG. 2 is an example of a DNN structure of DQN for detecting a measurement point in a head measurement radiographic image.
  • FIG. 3 is an example of an analysis device for detecting a measurement point in a head measurement radiographic image.
  • Terms such as first, second, A, and B may be used to describe various elements, but the elements are not limited by these terms, which are used only to distinguish one element from another. For example, without departing from the scope of the technology described below, a first element may be referred to as a second element, and similarly, the second element may be referred to as a first element.
  • The term and/or includes any combination of a plurality of related recited items, or any one of a plurality of related recited items.
  • Each component described below may be combined into a single component, or a single component may be divided into two or more components with more subdivided functions.
  • Each component described below may additionally perform some or all of the functions of other components in addition to its main function, and some of the main functions of each component may instead be performed exclusively by another component.
  • each process constituting the method may occur in a different order from the specified order unless a specific order is clearly described in context. That is, each process may occur in the same order as specified, may be performed substantially simultaneously, or may be performed in the reverse order.
  • the technique described below detects a measurement point in a head measurement radiographic image.
  • the head measurement radiographic image is an image obtained by taking a subject's head at a specific point in time using a method such as computed tomography (CT) or X-ray.
  • The head measurement radiographic image is a head image for orthodontic treatment, and the imaging method is not limited.
  • head measurement radiographic images will be described based on 2D images, but the techniques described below can be applied to 3D images as well.
  • the analysis device identifies measurement points in the head measurement radiographic image using a reinforcement learning model.
  • the analysis device identifies measurement points in the input data using a reinforcement learning model prepared in advance.
  • the analysis device may be implemented with various devices capable of processing data.
  • the analysis device may be implemented as a PC, a server on a network, a smart device, a chipset in which a dedicated program is embedded, and the like.
  • FIG. 1 is an example of a system 100 for detecting a measurement point in a head measurement radiographic image. FIG. 1 illustrates an example in which the analysis device is a computer terminal 130 and a server 140.
  • the head radiography apparatus 110 acquires a head measurement radiographic image of a subject.
  • the subject's cephalometric radiographic image may be stored in an Electronic Medical Record (EMR) 120.
  • the user A may use the computer terminal 130 to detect measurement points in the head measurement radiographic image.
  • the computer terminal 130 receives a head measurement radiographic image of the subject.
  • the computer terminal 130 may receive a head measurement radiation image from the head radiography apparatus 110 or the EMR 120 through a wired or wireless network.
  • the computer terminal 130 may be a device physically connected to the head radiation imaging apparatus 110 .
  • the computer terminal 130 analyzes the head measurement radiographic image of the subject and detects a measurement point.
  • the computer terminal 130 may detect measurement points in an input image using a reinforcement learning model.
  • the computer terminal 130 may output measurement points detected from the head measurement radiographic image. User A can check the analysis result.
  • the server 140 may receive a head measurement radiation image from the head radiography apparatus 110 or the EMR 120 .
  • the server 140 analyzes the head measurement radiographic image of the subject and detects a measurement point.
  • the server 140 may detect measurement points in an input image using a reinforcement learning model.
  • the server 140 may transmit the analysis result to the terminal of user A. User A can check the analysis result.
  • the computer terminal 130 and/or the server 140 may forward the analysis results to the EMR 120 .
  • the analysis device detects measurement points in the head measurement radiographic image using a reinforcement learning model.
  • reinforcement learning is briefly described.
  • the problem of reinforcement learning is expressed as a Markov Decision Process (MDP).
  • Reinforcement learning selects a specific action a_t at each reinforcement learning step based on the observed environment state s_t.
  • t refers to a specific point in time.
  • The purpose of reinforcement learning is to find a policy that maximizes the accumulated reward R_t, expressed as in Equation 1 below, where γ ∈ [0,1] is the discount factor for the reward.
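  • In the standard MDP formulation, which Equation 1 presumably follows, the accumulated reward is R_t = r_t + γ·r_{t+1} + γ²·r_{t+2} + … = Σ_{k≥0} γ^k·r_{t+k}, where r_{t+k} is the reward received k steps after time t.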
  • The goal of reinforcement learning is to find the policy π*(s) that maximizes the accumulated reward, as shown in Equation 2 below, by selecting in state s the action a that maximizes the expected sum of rewards.
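  • In its standard form, which Equation 2 presumably follows, the optimal policy is π*(s) = argmax_a E[R_t | s_t = s, a_t = a].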
  • Q-learning is one of the reinforcement learning techniques.
  • In Q-learning, when action a is taken in state s, the value of that action is learned through the function Q(s, a). That is, Q-learning can be described as a method for finding the accumulated Q value, as shown in Equation 3 below.
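  • The standard tabular Q-learning update, which Equation 3 presumably follows, is Q(s, a) ← Q(s, a) + α·(r + γ·max_{a'} Q(s', a') − Q(s, a)), where α is the learning rate.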
  • MDP-based Q-learning is difficult to apply directly to high-dimensional data such as head measurement radiographic images. For this reason, a Deep Q-Network (DQN), which approximates the Q function with a Deep Neural Network (DNN), is used.
  • FIG. 2 is an example of the structure of the DNN 200 of a DQN for detecting a measurement point in a head measurement radiographic image. FIG. 2 also shows the learning process of the DNN 200.
  • the DNN 200 receives a head measurement radiographic image of a specific subject as state s and outputs a Q value.
  • the DNN 200 may include a feature extraction layer 210 and a Q-learning network 220 .
  • the feature extraction layer 210 receives a head measurement radiographic image of the subject and outputs a feature map.
  • FIG. 2 shows a feature extraction layer 210 composed of four convolutional layers and a fully connected layer.
  • the feature extraction layer 210 receives an image corresponding to four consecutive frames and outputs a feature map.
  • the Q-learning network 220 receives the feature map and outputs an action to be selected next.
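  • As a concrete illustration, the following is a minimal PyTorch sketch of a network with this overall shape: four convolutional layers and a fully connected layer form the feature extractor, and a linear Q head scores the four movement actions. The layer widths, kernel sizes, and the 128x128 ROI below are illustrative assumptions, not the patent's exact configuration.

        import torch
        import torch.nn as nn

        class CephalometricDQN(nn.Module):
            # Feature extraction layer (cf. 210) followed by a Q-learning head (cf. 220).
            def __init__(self, num_actions: int = 4):
                super().__init__()
                self.features = nn.Sequential(          # four convolutional layers
                    nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
                    nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
                    nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
                    nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
                    nn.Flatten(),
                )
                self.fc = nn.Sequential(nn.LazyLinear(512), nn.ReLU())  # fully connected layer
                self.q_head = nn.Linear(512, num_actions)  # one Q value per direction

            def forward(self, x: torch.Tensor) -> torch.Tensor:
                return self.q_head(self.fc(self.features(x)))

        q_net = CephalometricDQN()
        state = torch.randn(1, 4, 128, 128)         # 4 stacked ROI frames as the state
        action = q_net(state).argmax(dim=1).item()  # greedy action at the current step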
  • the loss function of the Q-learning network 220 is defined as Equation 4 below.
  • s is a state
  • a is an action
  • r is a reward
  • s' is a state changed by a
  • a' means an action in s'.
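  • With these definitions, the standard DQN loss, which Equation 4 presumably follows, is L(θ) = E[(r + γ·max_{a'} Q(s', a'; θ) − Q(s, a; θ))²], often computed with a separate target network for the first Q term.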
  • Actions for interaction within the agent's environment were defined as 4 actions in 2D coordinates.
  • Each action means a positional change in a movable direction in the 2D coordinate system.
  • For example, Q1 may mean up, Q2 down, Q3 left, and Q4 right.
  • the action in each step may be movement by a fixed value.
  • the reward should include a measure of whether the agent has achieved the final goal when taking a specific action in the current state.
  • an important criterion for achieving the goal is the distance between the current state and the correct measuring point (Euclidean distance).
  • To this end, the researchers used a reward function that includes the distance at the previous state. The reward is expressed as in Equation 5 below.
  • D() is a function representing the distance between two positions.
  • P_i is the coordinate of the measurement point predicted at time i, and P_T is the location of the ground-truth (correct-answer) measurement point.
  • If the distance to the ground-truth measurement point at the previous time step minus the distance in the current state is a positive value, the agent may judge that it is approaching the target and assign a high reward. That is, the reward value is higher as the distance between the predicted and ground-truth measurement points at time t becomes smaller than the distance at time t−1.
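  • Consistent with this description, Equation 5 presumably has the form r_t = D(P_{t−1}, P_T) − D(P_t, P_T), which is positive when the chosen action moves the prediction closer to the ground-truth measurement point and negative when it moves away.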
  • The reward is updated based on the new action, and the position in the ROI is likewise updated based on the action.
  • The analysis device determines the next action by again inputting a plurality of frames at the updated position to the DNN 200. The analysis device detects the measurement point while repeating this process of updating the location and determining an action.
  • The measurement point detection termination condition may be the point in time at which the value of the reward becomes 0.
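  • A minimal sketch of this detection loop is shown below. Here q_net is the trained network sketched above, extract_roi(pos) is a hypothetical helper that returns the ROI tensor centered on pos, and, since no ground truth is available at inference time, the zero-reward termination is approximated by stopping when the agent revisits a position (its net progress toward the target has stalled); all of these are assumptions for illustration.

        import torch

        def detect_landmark(extract_roi, q_net, start_pos, step=1, max_iters=500):
            # Action index -> positional change in (row, col): up, down, left, right.
            moves = {0: (-step, 0), 1: (step, 0), 2: (0, -step), 3: (0, step)}
            pos = start_pos
            frames = [extract_roi(pos)] * 4                 # state = 4 stacked frames
            visited = {pos}
            for _ in range(max_iters):
                state = torch.stack(frames).unsqueeze(0)    # (1, 4, H, W)
                a = q_net(state).argmax(dim=1).item()       # action with the highest Q value
                dr, dc = moves[a]
                pos = (pos[0] + dr, pos[1] + dc)            # update the position
                if pos in visited:                          # oscillation: reward has reached 0
                    break
                visited.add(pos)
                frames = frames[1:] + [extract_roi(pos)]    # slide the 4-frame window
            return pos                                      # detected measurement point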
  • the environment is a 2D image.
  • A state is defined on the 2D space constituting the environment and can be represented as a region of interest (ROI) in which the agent estimates the current measurement point.
  • If the state is defined as the ROI of a single frame, it may be difficult for the agent to determine a search path, and in the worst case the path search may repeat like an infinite loop. Therefore, the researchers defined the state to include 4 frames derived from the 4 previously performed actions.
  • Dilated convolution places regular intervals between the pixels input to the neural network. It can be implemented by inserting gaps (zeros) between the elements of the filter.
  • When extracting the ROI that defines the state, the researchers sample pixels at a dilation scale S. As the agent reached the target measurement point, S was gradually reduced and training continued until the original image space was reached. By gradually reducing the dilation scale of the image input to the DNN 200, the researchers adjusted the amount of input information so that the agent could observe information from the global state down to the local state.
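  • A sketch of this dilated ROI sampling is shown below: pixels are sampled at intervals of a spacing S around the current position, so a fixed-size state covers a wide field of view when S is large and converges to the original image space as S is reduced to 1. The helper name and default values are assumptions.

        import numpy as np

        def dilated_roi(image, center, size=64, spacing=3):
            # Offsets of the sampled pixels from the center, at dilation scale `spacing`.
            offsets = spacing * (np.arange(size) - size // 2)
            rows = np.clip(center[0] + offsets, 0, image.shape[0] - 1)
            cols = np.clip(center[1] + offsets, 0, image.shape[1] - 1)
            return image[np.ix_(rows, cols)]    # (size, size) ROI sampled every `spacing` pixels

        # Gradually reducing `spacing` (e.g., 3 -> 2 -> 1) moves the agent's view
        # from global context to the local, original-resolution neighborhood.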
  • The researchers set conditions for signaling termination when the agent reaches the target measurement point, distinguishing between the learning process and the verification process.
  • In the learning process, the episode terminated when the distance to the target measurement point was less than a predetermined threshold value.
  • In the verification process, the episode terminated when the value of the reward became 0.
  • The researchers built the model using the ISBI (International Symposium on Biomedical Imaging) 2015 dataset and a dataset of 500 patients from the Department of Oral and Maxillofacial Surgery at Samsung Medical Center.
  • The researchers implemented the model for a total of 44 measurement points, covering clinically used points in addition to the 19 measurement points selected by ISBI, and then verified the learned model.
  • Table 1 and Table 2 below show the verification results for the 19 measurement points of the ISBI dataset.
  • Table 1 shows the results of an experiment with input image size 128, dilation degree 3, and multi-scale 3, and Table 2 shows the results with input image size 64, dilation degree 3, and multi-scale 3. Each experiment was performed twice.
  • The experimental results for each measurement point indicate the distance difference (in mm) between the location predicted by the model trained on the ISBI dataset and the location of the actual measurement point. Accuracy was higher when the image size was 128.
  • the above result corresponds to performance similar to that of CNN-based measurement point detection techniques published in 2019, 2020, and 2021.
  • Conventional CNN-based models lack large amounts of training data, so most are built with training data augmentation, whereas the DQN described above was built using only the collected data, without training data augmentation.
  • The experimental results for each measurement point indicate the distance difference (in mm) between the position predicted by the model trained on the Samsung Medical Center dataset and the location of the actual measurement point. There was little difference in performance between the DQN- and DDQN-based networks. The image size was set to 128, which had shown the better performance on the ISBI dataset. The results showed an error of less than 2 mm, confirming that the clinical standard was met.
  • FIG. 3 is an example of an analysis device 300 that detects a measurement point in a head measurement radiographic image.
  • the analysis device 300 corresponds to the above-described analysis devices (130 and 140 in FIG. 1).
  • the analysis device 300 may be physically implemented in various forms.
  • the analysis device 300 may have a form of a computer device such as a PC, a network server, and a chipset dedicated to data processing.
  • the analysis device 300 may include a storage device 310, a memory 320, an arithmetic device 330, an interface device 340, a communication device 350, and an output device 360.
  • the storage device 310 may store a reinforcement learning model for detecting a measurement point in a head measurement radiographic image.
  • the storage device 310 may store a head measurement radiographic image of the subject.
  • the storage device 310 may store other programs or codes for image processing.
  • the storage device 310 may store instructions or program codes for a process of detecting a measurement point in a head measurement radiographic image through the process described above.
  • the storage device 310 may store analysis results (video, text, etc.).
  • the memory 320 may store data and information generated in the process of the analysis device 300 detecting measurement points in the head measurement radiographic image.
  • the interface device 340 is a device that receives certain commands and data from the outside.
  • the interface device 340 may receive a head measurement radiographic image of an analysis target from a physically connected input device or an external storage device.
  • the interface device 340 may transmit analysis results (video, text, etc.) to an external object.
  • the communication device 350 refers to a component that receives and transmits certain information through a wired or wireless network.
  • the communication device 350 may receive a head measurement radiographic image of an analysis target from an external object.
  • the communication device 350 may transmit analysis results (video, text, etc.) to an external object such as a user terminal.
  • Since the interface device 340 and the communication device 350 send and receive certain data to and from a user or another physical object, they may also be collectively referred to as input/output devices.
  • the interface device 340 and the communication device 350 may be referred to as input devices when limited to a function of receiving a head measurement radiation image.
  • the output device 360 is a device that outputs certain information.
  • the output device 360 may output interfaces and analysis results necessary for data processing.
  • the arithmetic device 330 may detect measurement points in the head measurement radiographic image using instructions or program codes stored in the storage device 310 .
  • the arithmetic device 330 may detect measurement points in the head measurement radiographic image using the reinforcement learning model stored in the storage device 310 .
  • the reinforcement learning model may be the DQN described in FIG. 2 .
  • The arithmetic device 330 may set an ROI in an input image and input it to the DQN.
  • The arithmetic device 330 may use four consecutive frames derived from actions previously output by the DQN as inputs to the DQN.
  • The arithmetic device 330 may repeatedly perform the process of finding a measurement point according to the results output from the DQN. For example, the arithmetic device 330 may determine the current position to be the measurement point when the reward value becomes 0, as in the condition used in the verification process.
  • The arithmetic device 330 may adjust the size of the filter using dilated convolution in the detection process. The arithmetic device 330 may also vary the dilation scale in the iterative process of finding a measurement point. As described above, the arithmetic device 330 may proceed from a global search to a local search while gradually reducing the dilation scale.
  • the arithmetic device 330 may be a device such as a processor, an AP, or a chip in which a program is embedded that processes data and performs certain arithmetic operations.
  • the above-described measurement point detection method in a head measurement radiographic image may be implemented as a program (or application) including an executable algorithm that may be executed on a computer.
  • the program may be stored and provided in a temporary or non-transitory computer readable medium.
  • a non-transitory readable medium is not a medium that stores data for a short moment, such as a register, cache, or memory, but a medium that stores data semi-permanently and can be read by a device.
  • The various applications or programs described above may be stored and provided in a non-transitory readable medium such as a CD, DVD, hard disk, Blu-ray disc, USB drive, memory card, ROM (read-only memory), PROM (programmable read-only memory), EPROM (erasable PROM), EEPROM (electrically erasable PROM), or flash memory.
  • Examples of temporary readable media include static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synclink DRAM (SLDRAM), and Direct Rambus RAM (DRRAM).

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Physics & Mathematics (AREA)
  • Pathology (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Veterinary Medicine (AREA)
  • Animal Behavior & Ethology (AREA)
  • Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Optics & Photonics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Neurosurgery (AREA)
  • Quality & Reliability (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Dentistry (AREA)
  • Neurology (AREA)
  • Physiology (AREA)
  • Image Analysis (AREA)
  • Coating With Molten Metal (AREA)

Abstract

A method for detecting a landmark in a cephalometric radiographic image using deep reinforcement learning comprises the steps in which an analysis device: receives a cephalometric radiographic image of a subject; inputs the cephalometric radiographic image to a pre-trained feature extraction layer; and inputs a feature map output from the feature extraction layer to a Q-learning network so as to determine an action at the current time. The action is information on the direction of movement for landmark detection based on the current position in the input image, and the analysis device reflects the state resulting from the determined action so as to detect the landmark by repeating the process of determining an action while updating a reward.
PCT/KR2022/017166 2021-12-14 2022-11-03 Method and analysis device for detecting a landmark of a cephalometric radiographic image by means of deep reinforcement learning WO2023113230A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020210178223A KR102706418B1 (ko) 2021-12-14 2021-12-14 Method and analysis device for detecting measurement points of a cephalometric radiographic image using deep reinforcement learning
KR10-2021-0178223 2021-12-14

Publications (1)

Publication Number Publication Date
WO2023113230A1 (fr)

Family

ID=86772929

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2022/017166 WO2023113230A1 (fr) Method and analysis device for detecting a landmark of a cephalometric radiographic image by means of deep reinforcement learning

Country Status (2)

Country Link
KR (1) KR102706418B1 (fr)
WO (1) WO2023113230A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100119137A1 (en) * 2008-11-12 2010-05-13 Siemens Corporation Method and System for Anatomic Landmark Detection Using Constrained Marginal Space Learning and Geometric Inference
JP2012075806A * 2010-10-06 2012-04-19 Toshiba Corp Medical image processing apparatus and medical image processing program
KR102044237B1 * 2018-10-23 2019-11-13 연세대학교 산학협력단 Method and apparatus for automatic three-dimensional landmark detection using machine learning based on two-dimensional shadow images
KR20200058316A * 2018-11-19 2020-05-27 티쓰리큐 주식회사 Method for automatically tracking dental cephalometric measurement points using artificial intelligence technology, and service system using the same
KR20210066074A * 2019-11-27 2021-06-07 기초과학연구원 Method and apparatus for detecting three-dimensional anatomical reference points

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20200066952A * 2018-12-03 2020-06-11 삼성전자주식회사 Apparatus and method for performing dilated convolution operations
KR20200083822A 2018-12-28 2020-07-09 디디에이치 주식회사 Computing device supporting dental image analysis for orthodontic diagnosis, and dental image analysis method


Also Published As

Publication number Publication date
KR102706418B1 (ko) 2024-09-11
KR20230089658A (ko) 2023-06-21

Similar Documents

Publication Publication Date Title
Pan et al. Tackling the Radiological Society of North America pneumonia detection challenge
WO2017022908A1 Method and program for calculating bone age using deep neural networks
CN111160367B Image classification method and apparatus, computer device, and readable storage medium
CN110969245B Method and apparatus for training a target detection model for medical images
US20220301714A1 Method for predicting lung cancer development based on artificial intelligence model, and analysis device therefor
WO2019143179A1 Method for automatically detecting the same regions of interest between images of the same object taken at a time interval, and apparatus using the same
KR20200120311A Method for determining the cancer stage using medical images, and medical image analysis apparatus
WO2019124836A1 Method for mapping a region of interest of a first medical image onto a second medical image, and device using the same
WO2019143021A1 Method for supporting image visualization, and apparatus using the same
CN111126268A Keypoint detection model training method and apparatus, electronic device, and storage medium
CN110570425B Pulmonary nodule analysis method and apparatus based on a deep reinforcement learning algorithm
CN110533120B Image classification method, apparatus, terminal, and storage medium for organ nodules
WO2024146507A1 Method and apparatus for predicting crowd density at a scenic site, electronic device, and medium
JP2022128414A Deep-learning-based tracheal intubation positioning method, device, and storage medium
EP3843038B1 Method and system for image processing
WO2023113230A1 Method and analysis device for detecting a landmark of a cephalometric radiographic image by means of deep reinforcement learning
WO2023093407A1 Calibration method and apparatus, electronic device, and computer-readable storage medium
US12026879B2 Method for detecting the presence of pneumonia area in medical images of patients, detecting system, and electronic device employing method
US10390798B2 Computer-aided tracking and motion analysis with ultrasound for measuring joint kinematics
CN114841913A Real-time biological image recognition method and apparatus
WO2023101203A1 Method for estimating lesion volume using a radiographic image, and analysis device
Engelson et al. LNQ Challenge 2023: Learning Mediastinal Lymph Node Segmentation with a Probabilistic Lymph Node Atlas
WO2024172328A1 Apparatus for measuring the Cobb angle of scoliosis, and operating method therefor
WO2024167062A1 Image processing method and device for enriching magnetic resonance imaging (MRI) using a video frame interpolation model
Belgaum et al. Development of IoT-healthcare model for gastric cancer from pathological images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22907701

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE