WO2021100906A1 - Method for displaying virtual x-ray image by using deep neural network - Google Patents

Method for displaying virtual x-ray image by using deep neural network

Info

Publication number
WO2021100906A1
WO2021100906A1 (PCT/KR2019/015980)
Authority
WO
WIPO (PCT)
Prior art keywords
image
ray
ray image
camera
neural network
Prior art date
Application number
PCT/KR2019/015980
Other languages
French (fr)
Korean (ko)
Inventor
오주영
Original Assignee
오주영
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 오주영
Priority to PCT/KR2019/015980
Publication of WO2021100906A1

Links

Images

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems

Definitions

  • The present invention relates to a method of displaying a virtual X-ray image using a deep neural network.
  • Radiography is an essential medical technique for quickly and reliably assessing a patient's condition, but it also inflicts a certain amount of harm on the human body each time it is used. Accurate and safe radiography with accurate results is therefore the radiographer's principal task.
  • Because skill in radiographic technique can ultimately be acquired only through repeated failures and retakes ("fail and repeat") on many patients, repeated retakes cause harm to the public. Accordingly, national institutions, led by the Ministry of Food and Drug Safety, are working to reduce medical radiation exposure.
  • The present invention provides a method for displaying a virtual X-ray image using a deep neural network.
  • The method for displaying a virtual X-ray image using a deep neural network comprises: acquiring a training image database having camera images and X-ray images matched for each location of the human body; generating an X-ray image estimation model by repeatedly deep-learning the camera images and X-ray images in the training image database; estimating, from a camera image of an actual patient or trainee, the X-ray image with the highest similarity through the estimation model; and synthesizing the estimated X-ray image onto the camera image of the actual patient or trainee and displaying it as the predicted X-ray image for that camera image.
  • Acquiring the training image database may include: correcting, with reference to the X-ray image of the X-ray imaging apparatus, a camera image captured by a main camera located on at least one side of the X-ray tube of the X-ray imaging apparatus or of a camera device other than the X-ray imaging apparatus; matching the X-ray image and the camera image; and storing the X-ray image and the camera image as a pair in the training image database.
  • Acquiring the training image database may also include modeling a 3D image of the human body and the shape of its bones, and positioning the 3D image and the bone shape in a radiographic posture and direction to obtain the X-ray image and the camera image.
  • Alternatively, the 3D image of the human body and the bone shape may be modeled and then input to a generative adversarial network (GAN) algorithm so that their style is changed to one close to real human-body images and X-ray images, and the style-transferred, realistic human-body images and X-ray images may then be deep-learned again in one-to-one pairs.
  • Synthesizing the X-ray image matched to the selected camera image onto the camera image of the actual patient or trainee as the predicted X-ray image may include correcting the selected camera image with reference to the camera image of the actual patient or trainee, and correcting and displaying the matched X-ray image so as to follow the correction of the selected camera image.
  • Synthesizing the X-ray image matched to the selected camera image onto the camera image of the actual patient or trainee as the predicted X-ray image may also include matching the thermal image of an auxiliary camera to the camera image of the actual patient or trainee to perform segmentation of the human body region.
  • When the color changes by more than a reference value, the actual patient or trainee is judged to be clothed; when the change is below the reference value, the actual patient or trainee is judged to be unclothed; and an X-ray image for the corresponding situation is extracted from the training image database and displayed.
  • The virtual X-ray generation and display method using a deep neural network according to the present invention can estimate and generate an X-ray image according to an estimation algorithm learned in advance from a training image database and display the predicted X-ray image, so there is no exposure to radiation.
  • The patient can thus check the posture and position for radiography before imaging, and a trainee can repeatedly practice X-ray imaging without radiation exposure.
  • The method also uses a generative adversarial network to transfer the style of body images toward real body images and X-ray images and learns from the style-transferred images, so that it can display images close to the real thing.
  • FIG. 1 is a flowchart illustrating a method of displaying a virtual X-ray image using a deep neural network according to an embodiment of the present invention.
  • FIG. 2 is a flowchart showing detailed steps of acquiring a training image database in a method for displaying a virtual X-ray image using a deep neural network according to an embodiment of the present invention.
  • FIG. 3A is a schematic diagram illustrating imaging equipment used in a method for displaying a virtual X-ray image using a deep neural network according to an embodiment of the present invention.
  • FIG. 3B is a conceptual diagram illustrating a process of correcting a camera image in a method of displaying a virtual X-ray image using a deep neural network according to an embodiment of the present invention.
  • FIGS. 3C and 3D are conceptual diagrams illustrating a step of matching a camera image and an X-ray image to acquire a training image in a method of displaying a virtual X-ray image using a deep neural network according to an embodiment of the present invention.
  • FIG. 3E is a conceptual diagram illustrating a process of building an estimation model through learning by applying a generative adversarial network to a 3D model in a method of displaying a virtual X-ray image using a deep neural network according to an embodiment of the present invention.
  • FIGS. 4A to 4D are conceptual diagrams illustrating a step of generating and displaying a predicted X-ray image for a camera image of an actual patient in a method of displaying a virtual X-ray image using a deep neural network according to an embodiment of the present invention.
  • FIG. 5 is a conceptual diagram illustrating a step of displaying a guide line with a camera image of an actual patient in a method of displaying a virtual X-ray image using a deep neural network according to an embodiment of the present invention.
  • FIG. 1 is a flowchart of a method for displaying a virtual X-ray image using a deep neural network according to an embodiment of the present invention.
  • FIG. 2 is a flowchart illustrating the detailed steps of acquiring a training image database in a method for displaying a virtual X-ray image using a deep neural network according to an embodiment of the present invention.
  • A method for displaying a virtual X-ray image using a deep neural network may include a training image database acquisition step (S1), an X-ray image estimation step using a deep learning model (S2), and a predicted X-ray image display step (S3).
  • The training image database acquisition step (S1) may include a camera image generation and pre-processing step (S11), a camera image and X-ray image matching step (S12), and a camera/X-ray image pair storage step (S13).
  • FIG. 3A is a schematic diagram illustrating imaging equipment used in a method for displaying a virtual X-ray image using a deep neural network according to an embodiment of the present invention.
  • FIG. 3B is a conceptual diagram illustrating a process of correcting distortion errors in a camera image in a method for displaying a virtual X-ray image using a deep neural network according to an embodiment of the present invention.
  • FIGS. 3C and 3D are conceptual diagrams illustrating a step of matching a camera image and an X-ray image to obtain a training image in a method for displaying a virtual X-ray image using a deep neural network according to an embodiment of the present invention.
  • FIG. 3E is a conceptual diagram illustrating a process of changing style and performing learning by applying a generative adversarial network to a 3D model in a method of displaying a virtual X-ray image using a deep neural network according to an embodiment of the present invention.
  • The camera image correction step (S11) of the training image database acquisition step (S1) may begin by acquiring a camera image and an X-ray image together through the imaging equipment of FIG. 3A.
  • The camera image may be captured through a main camera or an auxiliary camera (or sensor) located at the side of the X-ray imaging apparatus.
  • The main camera or the auxiliary camera may use a visible-light or thermal-imaging camera, or in some cases a time-of-flight (TOF) camera or an ultrasonic sensor.
  • The X-ray image may be captured through an X-ray imaging apparatus located at the center of the equipment.
  • Because the main camera and the auxiliary camera view the subject from angles different from that of the X-ray imaging apparatus, distortion may occur in the captured camera image.
  • In that case, the camera image is compared with the X-ray image and its shape is corrected with the X-ray image as reference, so that the distortion error is reduced.
  • The training image is cropped to the size of the cassette (a square region), because learning is limited to the cassette-sized area.
  • i) When a cassette is used, its size and shape can be recognized by edge detection or shape recognition and the image cropped and saved accordingly; ii) when no cassette is used, or the cassette is completely hidden by the body, the cassette position and area can be obtained from the light field of the X-ray irradiation field (a square light projected onto the imaging area during exposure), from values preset for the existing imaging protocol, from an area back-calculated from the coverage of the X-ray image, or from a manual setting.
  • The resulting post-processed human-body image may then be matched with the captured X-ray image, and the final pair stored.
  • In the training image database acquisition step (S1), images obtained by modeling and rendering a 3D image of the human body and a bone model may be used in addition to images obtained through actual radiation exposure. By posing the 3D body and bone model for radiography, adjusting their position, and matching the corresponding X-ray image, a far larger amount of training image data can be obtained than by repeating actual exposures. This can also be repeated and stored per body type (age, race, sex, height, weight) based on existing real-patient data, and deep learning can be performed while varying the position, angle, and size of the images stored in the database.
  • Before the 3D human-body images and 3D bone images are learned, the training image database acquisition step (S1) may use generative adversarial networks (GANs) to change their style toward real human-body images and real X-ray images.
  • A generative adversarial network is a deep learning algorithm that generates and learns data through competition between a generator and a discriminator.
  • When the 3D human-body and bone images are input to such an algorithm, real-looking human-body images and X-ray images with similar shape but changed style can be generated.
  • The training image database acquisition step (S1) therefore generates style-transferred human-body images and X-ray images through a generative adversarial network and matches them one-to-one.
  • Deep learning is then performed again on these pairs, through a generative adversarial network algorithm or a convolutional neural network, to generate images close to the real thing.
  • FIGS. 4A to 4D are conceptual diagrams illustrating a step of matching predicted X-ray images to a camera image of an actual patient in a method for displaying a virtual X-ray image using a deep neural network according to an embodiment of the present invention.
  • FIG. 5 is a conceptual diagram illustrating a step of displaying a guide line with a camera image of an actual patient in a method for displaying a virtual X-ray image using a deep neural network according to an embodiment of the present invention.
  • The X-ray image estimation step using the deep learning model (S2) infers an X-ray image, through the deep learning model, from a camera image captured during radiography of an actual patient or during a trainee's radiography practice with the equipment.
  • Because the camera image is a visible-light image only and no X-rays are used, the patient or trainee is not exposed to radiation.
  • From the camera image of the patient or trainee, an image is acquired and processed to fit the learning method and estimation algorithm, for example via edge detection or the thermal camera view, and a similar X-ray image is estimated from the database.
  • The predicted X-ray image display step (S3) displays on screen the X-ray image matched to the retrieved camera image.
  • The body position of the patient or trainee may differ from that of the body (or 3D model) at the time the database camera image was captured.
  • In that case, the database camera image is compared with the camera image of the actual patient or trainee, and before the X-ray image is estimated through the deep learning model, the camera image of the patient or trainee can be transformed to fit the training camera image.
  • The X-ray image matched to the database camera image can be deformed in the same way, so that a predicted X-ray image corresponding to the camera image of the actual patient or trainee is produced.
  • The camera image of the main camera and the thermal image of the auxiliary camera can help determine the body position of the actual patient or trainee. Specifically, the skin temperature measured from the auxiliary camera's thermal image can be visualized as color and painted onto the visible-light image of the main camera, allowing segmentation of the human body to be performed more accurately.
  • If the visible-light color shows no abrupt change above the reference value, the actual patient or trainee is judged to be unclothed, and the existing database model can be used as it is.
  • If there is an abrupt color change above the reference value, the proportion of the image covered by clothing is calculated from the captured image; if that proportion is below the reference value, the clothed part is removed and the prediction model is applied only to the remaining body.
  • If the proportion of clothing is at or above the reference value, a prediction model that assumes the presence of clothing can be retrieved from a predefined database and applied.
  • In addition to the predicted X-ray image, a line guiding the position suitable for radiography may be displayed over the camera image of the actual patient or trainee.
  • The guide line is generated from the camera image stored in the database and is drawn as a line distinct from the live image, so that the patient or trainee can adjust their body position accordingly.
  • The similarity to the region to be imaged is also measured; if it is low, a low similarity score is shown so that, before exposure, the radiographer, patient, or trainee can reconfirm that the imaged region is not the target region or that the posture is incorrect.
  • If the region is not the target region, the system can determine from the existing image database which region is currently being imaged and suggest on the display where the target region is.
  • The present invention provides a method for displaying a virtual X-ray image using a deep neural network.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Public Health (AREA)
  • Molecular Biology (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Pathology (AREA)
  • Software Systems (AREA)
  • Animal Behavior & Ethology (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Computational Linguistics (AREA)
  • Surgery (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Veterinary Medicine (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Optics & Photonics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Databases & Information Systems (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

A method for displaying a virtual X-ray image by using a deep neural network is disclosed. In one embodiment, the method comprises the steps of: acquiring a training image database having camera images and X-ray images matched by location on the human body; generating an X-ray image estimation model by repeatedly performing deep learning on the camera images and X-ray images in the training image database; estimating, on the basis of a camera image of a real patient or practice subject, the X-ray image with the highest similarity through the estimation model; and synthesizing the estimated X-ray image with the camera image of the real patient or practice subject as the predicted X-ray image for that camera image, thereby displaying the synthesized image.

Description

Method for displaying a virtual X-ray image using a deep neural network
The present invention relates to a method of displaying a virtual X-ray image using a deep neural network.
Radiography is an essential medical technique for quickly and reliably assessing a patient's condition, but it also inflicts a certain amount of harm on the human body each time it is used. Accurate and safe radiography with accurate results is therefore the radiographer's principal task. However, because skill in radiographic technique can ultimately be acquired only through repeated failures and retakes ("fail and repeat") on many patients, repeated retakes cause harm to the public. Accordingly, national institutions, led by the Ministry of Food and Drug Safety, are working to reduce medical radiation exposure.
The present invention provides a method for displaying a virtual X-ray image using a deep neural network.
The method for displaying a virtual X-ray image using a deep neural network according to the present invention comprises: acquiring a training image database having camera images and X-ray images matched for each location of the human body; generating an X-ray image estimation model by repeatedly deep-learning the camera images and X-ray images in the training image database; estimating, from a camera image of an actual patient or trainee, the X-ray image with the highest similarity through the estimation model; and synthesizing the estimated X-ray image onto the camera image of the actual patient or trainee and displaying it as the predicted X-ray image for that camera image.
Acquiring the training image database may include: correcting, with reference to the X-ray image of the X-ray imaging apparatus, a camera image captured by a main camera located on at least one side of the X-ray tube of the X-ray imaging apparatus or of a camera device other than the X-ray imaging apparatus; matching the X-ray image and the camera image; and storing the X-ray image and the camera image as a pair in the training image database.
Acquiring the training image database may also include modeling a 3D image of the human body and the shape of its bones, and positioning the 3D image and the bone shape in a radiographic posture and direction to obtain the X-ray image and the camera image.
Alternatively, acquiring the training image database may include modeling the 3D image of the human body and the bone shape, inputting them into a generative adversarial network (GAN) algorithm to change their style to one close to real human-body images and X-ray images, and deep-learning the style-transferred, realistic human-body images and X-ray images again in one-to-one pairs.
Synthesizing the X-ray image matched to the selected camera image onto the camera image of the actual patient or trainee as the predicted X-ray image may include correcting the selected camera image with reference to the camera image of the actual patient or trainee, and correcting and displaying the matched X-ray image so as to follow the correction of the selected camera image.
Synthesizing the X-ray image matched to the selected camera image onto the camera image of the actual patient or trainee as the predicted X-ray image may also include matching the thermal image of an auxiliary camera to the camera image of the actual patient or trainee to perform segmentation of the human body region.
In addition, when the color changes by more than a reference value, the actual patient or trainee is judged to be clothed; when the change is below the reference value, the actual patient or trainee is judged to be unclothed; and an X-ray image for the corresponding situation is extracted from the training image database and displayed.
The virtual X-ray generation and display method using a deep neural network according to the present invention can estimate and generate an X-ray image according to an estimation algorithm learned in advance from a training image database and display the predicted X-ray image. Without any radiation exposure, the patient can check the posture and position for radiography before imaging, and a trainee can repeatedly practice X-ray imaging.
In addition, the method uses a generative adversarial network to transfer the style of body images toward real body images and X-ray images and learns from the style-transferred images, so that it can display images close to the real thing.
FIG. 1 is a flowchart illustrating a method of displaying a virtual X-ray image using a deep neural network according to an embodiment of the present invention.
FIG. 2 is a flowchart showing the detailed steps of the training image database acquisition step in a method for displaying a virtual X-ray image using a deep neural network according to an embodiment of the present invention.
FIG. 3A is a schematic diagram illustrating imaging equipment used in a method for displaying a virtual X-ray image using a deep neural network according to an embodiment of the present invention.
FIG. 3B is a conceptual diagram illustrating a process of correcting a camera image in a method of displaying a virtual X-ray image using a deep neural network according to an embodiment of the present invention.
FIGS. 3C and 3D are conceptual diagrams illustrating a step of matching a camera image and an X-ray image to acquire a training image in a method of displaying a virtual X-ray image using a deep neural network according to an embodiment of the present invention.
FIG. 3E is a conceptual diagram illustrating a process of building an estimation model through learning by applying a generative adversarial network to a 3D model in a method of displaying a virtual X-ray image using a deep neural network according to an embodiment of the present invention.
FIGS. 4A to 4D are conceptual diagrams illustrating a step of generating and displaying a predicted X-ray image for a camera image of an actual patient in a method of displaying a virtual X-ray image using a deep neural network according to an embodiment of the present invention.
FIG. 5 is a conceptual diagram illustrating a step of displaying a guide line with a camera image of an actual patient in a method of displaying a virtual X-ray image using a deep neural network according to an embodiment of the present invention.
Preferred embodiments of the present invention are described below in detail, with reference to the drawings, so that a person of ordinary skill in the art can readily practice the invention.
FIG. 1 is a flowchart of a method for displaying a virtual X-ray image using a deep neural network according to an embodiment of the present invention. FIG. 2 is a flowchart showing the detailed steps of the training image database acquisition step in the method.
Referring first to FIG. 1, a method for displaying a virtual X-ray image using a deep neural network according to an embodiment of the present invention may include a training image database acquisition step (S1), an X-ray image estimation step using a deep learning model (S2), and a predicted X-ray image display step (S3).
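As a rough illustration of how these three steps fit together, the following Python sketch stands in for the trained network with a nearest-neighbour lookup over stored pairs; the function names, the L2 similarity, and the blending weight are illustrative assumptions, not part of the disclosed method.

```python
import numpy as np

def build_training_database(pairs):
    """S1: collect (camera_image, xray_image) pairs matched per body location/pose."""
    return list(pairs)  # in practice this would be a persistent, indexed database

def make_estimator(database):
    """S2: stand-in for the trained deep model: return the stored X-ray whose
    camera image is closest (mean squared difference) to the live camera image."""
    def estimate(camera_image):
        errors = [np.mean((camera_image - cam) ** 2) for cam, _ in database]
        return database[int(np.argmin(errors))][1]
    return estimate

def display_predicted_xray(camera_image, estimate):
    """S3: blend the estimated X-ray onto the live camera image for display."""
    xray = estimate(camera_image)
    return 0.5 * camera_image + 0.5 * xray  # simple alpha blend
```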
Referring to FIGS. 1 and 2 together, the training image database acquisition step (S1) may include a camera image generation and pre-processing step (S11), a camera image and X-ray image matching step (S12), and a camera/X-ray image pair storage step (S13).
FIG. 3A is a schematic diagram illustrating the imaging equipment used in the method for displaying a virtual X-ray image using a deep neural network according to an embodiment of the present invention. FIG. 3B is a conceptual diagram illustrating the process of correcting distortion errors in a camera image in the method. FIGS. 3C and 3D are conceptual diagrams illustrating the step of matching a camera image and an X-ray image to obtain a training image in the method. FIG. 3E is a conceptual diagram illustrating the process of changing style and performing learning by applying a generative adversarial network to a 3D model in the method.
First, the camera image correction step (S11) of the training image database acquisition step (S1) may begin by acquiring a camera image and an X-ray image together through the imaging equipment of FIG. 3A. As shown in FIG. 3A, the camera image may be captured by a main camera or an auxiliary camera (or sensor) located at the side of the X-ray imaging apparatus. The main camera or the auxiliary camera may be a visible-light or thermal-imaging camera or, in some cases, a time-of-flight (TOF) camera or an ultrasonic sensor. The training images may also be extracted from a separate, previously prepared database of camera images and X-ray images.
Meanwhile, the X-ray image may be captured through the X-ray imaging apparatus located at the center of the equipment. However, because the main camera and the auxiliary camera view the subject from angles different from that of the X-ray imaging apparatus, distortion may occur in the captured camera image. In this case, as shown in FIG. 3B, the camera image is compared with the X-ray image and its shape is corrected with the X-ray image as reference, so that the distortion error is reduced.
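One way such a viewpoint correction could be realized is a homography estimated from a few corresponding landmarks in the camera image and the X-ray image; the sketch below uses OpenCV and assumes at least four landmark pairs are already available (the landmark source is not specified in the description).

```python
import cv2
import numpy as np

def correct_camera_to_xray(camera_img, cam_pts, xray_pts, xray_shape):
    """Warp the camera image so its geometry matches the X-ray reference frame.
    cam_pts / xray_pts: corresponding (x, y) landmarks, at least 4 pairs."""
    H, _ = cv2.findHomography(np.float32(cam_pts), np.float32(xray_pts), cv2.RANSAC)
    h, w = xray_shape[:2]
    return cv2.warpPerspective(camera_img, H, (w, h))
```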
In addition, as shown in FIG. 3C, the training image is cropped to the size of the cassette (a square region), because learning is limited to the cassette-sized area. i) When a cassette is used, its size and shape can be recognized by edge detection or shape recognition and the image cropped and saved accordingly; ii) when no cassette is used, or the cassette is completely hidden by the body, the cassette position and area can be obtained from the light field of the X-ray irradiation field (a square light projected onto the imaging area during exposure), from values preset for the existing imaging protocol, from an area back-calculated from the coverage of the X-ray image, or from a manual setting.
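For case i), an edge-and-shape-based cassette crop could look like the following sketch; the Canny thresholds and the minimum contour area are illustrative assumptions.

```python
import cv2

def crop_to_cassette(image):
    """Find the largest roughly rectangular contour (the cassette) and crop to it."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    best = None
    for c in contours:
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) == 4 and cv2.contourArea(approx) > 10000:  # quadrilateral, large
            if best is None or cv2.contourArea(approx) > cv2.contourArea(best):
                best = approx
    if best is None:
        return image  # fall back to irradiation-field / preset / manual setting (case ii)
    x, y, w, h = cv2.boundingRect(best)
    return image[y:y + h, x:x + w]
```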
The resulting post-processed human-body image may then be matched with the captured X-ray image, and the final image stored.
As shown in FIG. 3D, the camera image and X-ray image are matched for each body position and posture and filtered down to images with strong feature points, and learning proceeds on these post-processed images or, in some cases, on the original visible-light images, the original auxiliary-camera (thermal) images, or post-processed images derived from the auxiliary camera. In this way the final training image database can be obtained.
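A minimal sketch of how the stored camera/X-ray pairs might be exposed to a training loop follows; the file-naming convention (<id>_cam.png and <id>_xray.png) and the PyTorch packaging are assumptions for illustration only.

```python
import glob
import cv2
import torch
from torch.utils.data import Dataset

class CameraXrayPairs(Dataset):
    """Yields matched (camera, xray) tensor pairs from a directory of paired files."""
    def __init__(self, root):
        self.cam_paths = sorted(glob.glob(f"{root}/*_cam.png"))

    def __len__(self):
        return len(self.cam_paths)

    def __getitem__(self, i):
        cam = cv2.imread(self.cam_paths[i], cv2.IMREAD_GRAYSCALE)
        xray = cv2.imread(self.cam_paths[i].replace("_cam", "_xray"), cv2.IMREAD_GRAYSCALE)
        to_tensor = lambda a: torch.from_numpy(a).float().unsqueeze(0) / 255.0
        return to_tensor(cam), to_tensor(xray)
```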
Meanwhile, in the training image database acquisition step (S1), images obtained by modeling and rendering a 3D image of the human body and a bone model may be used in addition to images obtained through actual radiation exposure. By posing the 3D body and bone model for radiography, adjusting their position, and matching the corresponding X-ray image, a far larger amount of training image data can be obtained than by repeating actual exposures. Based on existing real-patient data, this process can also be repeated and stored per body type (age, race, sex, height, weight). Deep learning can then be performed while varying the position, angle, and size of the images stored in the database.
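Varying position, angle, and size during learning could be done with a shared random affine transform applied identically to each camera/X-ray pair, as in this sketch; the parameter ranges are illustrative assumptions.

```python
import cv2
import numpy as np

def augment_pair(cam, xray, rng=np.random.default_rng()):
    """Apply one random rotation/scale/translation to both images of a pair."""
    h, w = cam.shape[:2]
    angle = rng.uniform(-15, 15)                      # degrees
    scale = rng.uniform(0.9, 1.1)
    tx, ty = rng.uniform(-0.05, 0.05, 2) * (w, h)     # translation in pixels
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, scale)
    M[:, 2] += (tx, ty)
    warp = lambda img: cv2.warpAffine(img, M, (w, h), borderValue=0)
    return warp(cam), warp(xray)
```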
As shown in FIG. 3E, before the 3D human-body images and 3D bone images are learned, the training image database acquisition step (S1) may change their style toward real human-body images and real X-ray images through generative adversarial networks (GANs). A generative adversarial network is known as a deep learning algorithm that generates and learns data through competition between a generator and a discriminator. When the aforementioned 3D human-body and bone images are input to such an algorithm, real-looking human-body images and X-ray images with similar shape but changed style can be generated. The training image database acquisition step (S1) therefore generates style-transferred human-body images and X-ray images through a generative adversarial network and matches them one-to-one. Deep learning is then performed again on these pairs, through a generative adversarial network algorithm or a convolutional neural network, to produce images close to the real thing.
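A compact conditional-GAN training step (pix2pix-style) illustrates the generator/discriminator competition described here. It assumes paired rendered/real images normalized to [-1, 1]; the unpaired setting implied by the description would more typically use a CycleGAN-type objective. The architecture, loss weights, and optimizers are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Tiny image-to-image generator: rendered image in, style-transferred image out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Scores (rendered, candidate) pairs: real style vs. generated style."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )
    def forward(self, rendered, candidate):
        return self.net(torch.cat([rendered, candidate], dim=1))

def train_step(G, D, opt_g, opt_d, rendered, real, bce=nn.BCEWithLogitsLoss()):
    fake = G(rendered)
    # Discriminator: tell real style-transferred images from generated ones.
    opt_d.zero_grad()
    d_real, d_fake = D(rendered, real), D(rendered, fake.detach())
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    loss_d.backward()
    opt_d.step()
    # Generator: fool the discriminator while staying close to the target style.
    opt_g.zero_grad()
    d_fake = D(rendered, fake)
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100 * nn.functional.l1_loss(fake, real)
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```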
FIGS. 4A to 4D are conceptual diagrams illustrating the step of matching predicted X-ray images to a camera image of an actual patient in the method for displaying a virtual X-ray image using a deep neural network according to an embodiment of the present invention. FIG. 5 is a conceptual diagram illustrating the step of displaying a guide line with the camera image of an actual patient in the method.
Referring to FIGS. 4A to 4D together with FIG. 1, the X-ray image estimation step using the deep learning model (S2) infers an X-ray image, through the deep learning model, from a camera image captured during radiography of an actual patient or during a trainee's radiography practice with the equipment. Because the camera image is a visible-light image only and no X-rays are used, the patient or trainee is not exposed to radiation. As shown in FIG. 4A, an image is acquired from the camera image of the patient or trainee and processed to suit the learning method and estimation algorithm, for example via edge detection or the thermal camera view, and a similar X-ray image is estimated from the database.
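At inference time this step might look like the following, assuming a trained PyTorch model and edge-based preprocessing consistent with training; `model` and the Canny thresholds are assumptions.

```python
import cv2
import torch

def estimate_xray(model, camera_frame):
    """Infer a predicted X-ray from a live camera frame (no X-ray exposure involved)."""
    gray = cv2.cvtColor(camera_frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                  # same edge features as in training
    x = torch.from_numpy(edges).float().unsqueeze(0).unsqueeze(0) / 255.0
    with torch.no_grad():
        predicted_xray = model(x)
    return predicted_xray.squeeze().numpy()
```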
The predicted X-ray image display step (S3) displays on screen the X-ray image matched to the retrieved camera image. Here, the body position of the patient or trainee may differ from that of the body (or 3D model) at the time the database camera image was captured. In this case, as shown in FIGS. 4B and 4C, the database camera image is compared with the camera image of the actual patient or trainee, and before the X-ray image is estimated through the deep learning model, the camera image of the patient or trainee can be transformed to fit the training camera image. This is based on an image registration algorithm that computes the pixel-value error distance between the two images and deforms one image in the direction that reduces that error. The X-ray image matched to the database camera image is deformed in the same way, so that a predicted X-ray image corresponding to the camera image of the actual patient or trainee is produced.
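ECC registration is one concrete pixel-error-minimizing algorithm of the kind described; the sketch below aligns the database camera image to the live image and applies the same affine warp to its matched X-ray. Grayscale inputs and an affine motion model are assumptions.

```python
import cv2
import numpy as np

def register_database_pair(db_cam, live_cam, db_xray):
    """Align db_cam to live_cam via ECC, then warp the matched X-ray identically."""
    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
    _, warp = cv2.findTransformECC(live_cam.astype(np.float32), db_cam.astype(np.float32),
                                   warp, cv2.MOTION_AFFINE, criteria)
    h, w = live_cam.shape[:2]
    flags = cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP
    aligned_cam = cv2.warpAffine(db_cam, warp, (w, h), flags=flags)
    expected_xray = cv2.warpAffine(db_xray, warp, (w, h), flags=flags)
    return aligned_cam, expected_xray
```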
Meanwhile, the camera image of the main camera and the thermal image of the auxiliary camera can help determine the body position of the actual patient or trainee. Specifically, the skin temperature measured from the auxiliary camera's thermal image can be visualized as color and painted onto the visible-light image of the main camera, allowing segmentation of the human body to be performed more accurately.
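A simple realization of this thermal-assisted segmentation follows, assuming the thermal image is already registered to the visible-light frame and calibrated in degrees Celsius; the skin-temperature range is an illustrative assumption.

```python
import cv2
import numpy as np

def segment_body(visible_bgr, thermal_celsius):
    """Mask skin pixels by temperature and tint them on the visible-light image."""
    skin_mask = ((thermal_celsius > 30.0) & (thermal_celsius < 38.0)).astype(np.uint8) * 255
    skin_mask = cv2.morphologyEx(skin_mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    overlay = visible_bgr.copy()
    overlay[skin_mask > 0] = (0.5 * overlay[skin_mask > 0]
                              + 0.5 * np.array([0, 0, 255])).astype(np.uint8)
    return skin_mask, overlay
```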
In addition, if the color seen in visible light shows no abrupt change above a reference value, the actual patient or trainee is judged to be unclothed over that part of the body, and the existing database model can be used as it is. If there is an abrupt color change above the reference value, the proportion of the captured image covered by clothing is calculated. If that proportion is below a reference value, the clothed part is removed and the prediction model is applied only to the remaining body; if it is at or above the reference value, a prediction model that assumes the presence of clothing can be retrieved from a predefined database and applied.
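This clothing decision reduces to a small amount of branching logic; in the sketch below the change and ratio thresholds and the returned labels are illustrative assumptions.

```python
import numpy as np

def choose_prediction_model(color_change, clothes_mask,
                            change_threshold=0.2, ratio_threshold=0.3):
    """color_change: per-pixel color deviation map; clothes_mask: boolean clothed-pixel mask."""
    if float(np.max(color_change)) < change_threshold:
        return "use_unclothed_database_model"          # no abrupt color change: unclothed
    clothes_ratio = float(np.mean(clothes_mask))
    if clothes_ratio < ratio_threshold:
        return "mask_out_clothes_and_predict_body_only"
    return "use_clothed_prediction_model_from_database"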
Referring to FIG. 5, in addition to the predicted X-ray image, a line guiding the position suitable for radiography may be displayed over the camera image of the actual patient or trainee. The guide line is generated from the camera image stored in the database and is drawn as a line distinct from the live image, so that the patient or trainee can adjust their body position accordingly. The similarity to the region to be imaged is also measured; if it is low, a low similarity score is displayed so that, before exposure, the radiographer, the patient, or the trainee can reconfirm that the imaged region is not the target region or that the posture is incorrect.
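The guide line and similarity check could be sketched as follows, using edges from the database image as the guide and SSIM as one possible similarity score; the 0.5 threshold and the color choices are illustrative assumptions.

```python
import cv2
from skimage.metrics import structural_similarity

def show_guide_and_score(live_gray, db_gray, threshold=0.5):
    """Overlay a guide line from the database image and flag low similarity."""
    score = structural_similarity(live_gray, db_gray)
    edges = cv2.Canny(db_gray, 50, 150)                  # guide line from database image
    display = cv2.cvtColor(live_gray, cv2.COLOR_GRAY2BGR)
    display[edges > 0] = (0, 255, 0)                     # draw guide in a distinct color
    if score < threshold:
        cv2.putText(display, f"low similarity: {score:.2f} - check target region/posture",
                    (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
    return display, score
```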
In this case, if the region is not the target region, it is also possible to determine (recognize) from the existing image database which region is currently being imaged and to suggest on the display where the target region is.
What has been described above is only one embodiment for carrying out the method of displaying a virtual X-ray image using a deep neural network according to the present invention. The present invention is not limited to this embodiment; as claimed below, its technical spirit extends to any modification that a person of ordinary skill in the art can make without departing from the gist of the invention.
The present invention provides a method for displaying a virtual X-ray image using a deep neural network.

Claims (7)

  1. A method for displaying a virtual X-ray image using a deep neural network, the method comprising:
    acquiring a training image database having camera images and X-ray images matched for each location of the human body;
    generating an X-ray image estimation model by repeatedly deep-learning the camera images and X-ray images in the training image database;
    estimating, from a camera image of an actual patient or trainee, the X-ray image with the highest similarity through the estimation model; and
    synthesizing the estimated X-ray image onto the camera image of the actual patient or trainee and displaying it as the predicted X-ray image for that camera image.
  2. The method of claim 1, wherein acquiring the training image database comprises:
    correcting, with reference to an X-ray image of an X-ray imaging apparatus, a camera image captured by a main camera located on at least one side of the X-ray tube of the X-ray imaging apparatus or of a camera device other than the X-ray imaging apparatus;
    matching the X-ray image and the camera image; and
    storing the X-ray image and the camera image as a pair in the training image database.
  3. The method of claim 1, wherein acquiring the training image database comprises modeling a 3D image of the human body and the shape of its bones, and positioning the 3D image and the bone shape in a radiographic posture and direction to obtain the X-ray image and the camera image.
  4. The method of claim 1, wherein acquiring the training image database comprises:
    modeling a 3D image of the human body and the shape of its bones, and inputting the 3D image and the bone shape into a generative adversarial network (GAN) algorithm to change their style to one close to real human-body images and X-ray images; and
    deep-learning the style-transferred, realistic human-body images and X-ray images again in one-to-one pairs.
  5. The method of claim 1, wherein synthesizing and displaying the X-ray image matched to the selected camera image as the predicted X-ray image on the camera image of the actual patient or trainee comprises correcting the selected camera image with reference to the camera image of the actual patient or trainee, and correcting and displaying the matched X-ray image so as to follow the correction of the selected camera image.
  6. The method of claim 1, wherein synthesizing and displaying the X-ray image matched to the selected camera image as the predicted X-ray image on the camera image of the actual patient or trainee comprises matching a thermal image of an auxiliary camera to the camera image of the actual patient or trainee to perform segmentation of the human body region.
  7. The method of claim 4, wherein, when the color changes by more than a reference value, the actual patient or trainee is judged to be clothed, when the change is below the reference value, the actual patient or trainee is judged to be unclothed, and an X-ray image for the corresponding situation is extracted from the training image database and displayed.
PCT/KR2019/015980 2019-11-20 2019-11-20 Method for displaying virtual x-ray image by using deep neural network WO2021100906A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/KR2019/015980 WO2021100906A1 (en) 2019-11-20 2019-11-20 Method for displaying virtual x-ray image by using deep neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/KR2019/015980 WO2021100906A1 (en) 2019-11-20 2019-11-20 Method for displaying virtual x-ray image by using deep neural network

Publications (1)

Publication Number Publication Date
WO2021100906A1 true WO2021100906A1 (en) 2021-05-27

Family

ID=75980592

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2019/015980 WO2021100906A1 (en) 2019-11-20 2019-11-20 Method for displaying virtual x-ray image by using deep neural network

Country Status (1)

Country Link
WO (1) WO2021100906A1 (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140084675A (en) * 2012-12-27 2014-07-07 전자부품연구원 Multiple Human Face Detection Method And Electronic Device supporting the same
KR20170132028A (en) * 2016-05-23 2017-12-01 건양대학교산학협력단 Virtual x-ray radiography system for practical training in medical imaging
US20180018757A1 (en) * 2016-07-13 2018-01-18 Kenji Suzuki Transforming projection data in tomography by means of machine learning
WO2018048507A1 (en) * 2016-09-06 2018-03-15 Han Xiao Neural network for generating synthetic medical images
KR20180057203A (en) * 2016-11-22 2018-05-30 오주영 Radiography guide system and method using camera image
KR101938361B1 (en) * 2017-09-29 2019-01-14 재단법인 아산사회복지재단 Method and program for predicting skeleton state by the body ouline in x-ray image

Similar Documents

Publication Publication Date Title
CN102793551B (en) Chest diagnostic support information generation system
JP5797352B1 (en) Method for tracking a three-dimensional object
US20200090408A1 (en) Systems and methods for augmented reality body movement guidance and measurement
US20030016853A1 (en) Image position matching method and apparatus therefor
CN109925053B (en) Method, device and system for determining surgical path and readable storage medium
KR102204309B1 (en) X-ray Image Display Method Based On Augmented Reality
WO2022035110A1 (en) User terminal for providing augmented reality medical image and method for providing augmented reality medical image
WO2021015507A2 (en) Method and device for converting contrast enhanced image and non-enhanced image by using artificial intelligence
CN114259197B (en) Capsule endoscope quality control method and system
WO2018097596A1 (en) Radiography guide system and method
CN113870331B (en) Chest CT and X-ray real-time registration algorithm based on deep learning
CN112261399B (en) Capsule endoscope image three-dimensional reconstruction method, electronic device and readable storage medium
KR20200081629A (en) Dance evaluation device using joint angle comparison and the method thereof
WO2021100906A1 (en) Method for displaying virtual x-ray image by using deep neural network
WO2024113275A1 (en) Gaze point acquisition method and apparatus, electronic device, and storage medium
CN111920434A (en) Automatic exposure control method and system in digital X-ray photography system
WO2014204126A2 (en) Apparatus for capturing 3d ultrasound images and method for operating same
KR102447480B1 (en) Low-resolution insertion hole image processing method
CN115844436A (en) CT scanning scheme self-adaptive formulation method based on computer vision
CN113361333B (en) Non-contact type riding motion state monitoring method and system
WO2017065591A1 (en) System and method for automatically calculating effective radiation exposure dose
KR20220069389A (en) Lock screw insertion hole detection method
CN114298986A (en) Thoracic skeleton three-dimensional construction method and system based on multi-viewpoint disordered X-ray film
CN106485650A (en) Determine method and the image acquiring method of matching double points
CN111860275A (en) Gesture recognition data acquisition system and method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19953569

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19953569

Country of ref document: EP

Kind code of ref document: A1