WO2020095343A1 - X-ray imaging apparatus - Google Patents
X-ray imaging apparatus
- Publication number
- WO2020095343A1 (PCT/JP2018/040975)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- subject
- bone region
- image
- machine learning
- extracted
- Prior art date
Classifications
- A61B6/00—Apparatus or devices for radiation diagnosis; apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/461—Arrangements for interfacing with the operator or the patient; displaying means of special interest
- A61B6/505—Specially adapted for specific body parts or clinical applications; for diagnosis of bone
- A61B6/5205—Devices using data or image processing specially adapted for radiation diagnosis; processing of raw data to produce diagnostic data
- A61B6/5217—Extracting a diagnostic or physiological parameter from medical diagnostic data
- G06N20/00—Machine learning
- G16H30/40—ICT specially adapted for processing medical images, e.g. editing
- G16H50/20—ICT specially adapted for computer-aided diagnosis, e.g. based on medical expert systems
Definitions
- the present invention relates to an X-ray imaging apparatus, and more particularly, to an X-ray imaging apparatus including an image processing unit that extracts a bone region of a subject from a captured image of the subject based on machine learning.
- A spinal column alignment estimating device that estimates the spinal column alignment (spinal shape) from an image of a subject based on machine learning is disclosed in, for example, WO 2017/141958.
- In that device, a moire image containing moire fringes that represent the three-dimensional shape of the back of a human body and an X-ray image of the bone region of the same person's back are captured, and a large number of such paired data sets are prepared.
- The data sets (correct answer data) used for machine learning are labeled; for example, the centers of gravity of the thoracic and lumbar vertebrae appearing in the X-ray image are approximated by curves. The moire image and the X-ray image are registered so that their coordinate systems coincide, and the coordinates of the vertebral centers of gravity on the moire image are used as the correct answer data for learning.
- Learning (for example, deep learning) is performed so that alignment information of the spinal column elements (the coordinates of the centers of gravity of the thoracic and lumbar vertebrae) is output.
- Based on the learned result (discriminator), the spinal column alignment estimating device of WO 2017/141958 estimates the alignment information of the spinal column elements from an unknown moire image (one whose spinal column alignment information is unknown) captured by the imaging device, and displays the estimated alignment information superimposed on the moire image on the display unit.
- In bone density measurement, images (correct answer data) of regions where bone density is measured, such as the lumbar spine and the femur, are learned, and based on the learned result (discriminator) a bone region is extracted from an unknown image. If the extracted bone region is misaligned with the actual bone region, the user corrects it.
- The present invention has been made to solve the above problem, and one object of the present invention is to provide an X-ray imaging apparatus capable of suppressing an increase in the user's burden of correcting the extracted bone region, even when an unknown image different from the images (correct answer data) used for machine learning is input.
- An X-ray imaging apparatus according to one aspect of the present invention includes: an X-ray irradiation unit that irradiates a subject with X-rays; an X-ray detection unit that detects the X-rays emitted from the X-ray irradiation unit to the subject; an image processing unit that extracts the bone region of the subject, based on machine learning, in the image acquired from the detected X-rays and that, in a predetermined case, extracts the bone region of the subject in the acquired image based on a predetermined rule; a display unit that displays the image processed by the image processing unit; and a control unit that determines whether the bone region of the subject extracted based on machine learning is appropriate.
- When the control unit determines that the bone region of the subject extracted based on machine learning is appropriate, it performs control to display on the display unit the image in which the bone region was extracted based on machine learning; when it determines that the bone region extracted based on machine learning is not appropriate, it performs control to display the image in which the bone region was extracted based on the predetermined rule.
- Although the accuracy of extracting a bone region based on the predetermined rule is lower than that of extraction based on machine learning, the rule-based extraction uses a relatively simple rule, so the bone region can be extracted with a certain degree of accuracy even in an image in which machine learning cannot extract it properly. As a result, the amount of correction required for a bone region extracted based on the predetermined rule is smaller than that required for a bone region improperly extracted based on machine learning. Consequently, even when an unknown image different from the images (correct answer data) used for machine learning is input, the user's burden of correcting the extracted bone region can be kept from increasing.
- Preferably, the control unit is configured to determine whether the bone region of the subject extracted based on machine learning is appropriate based on at least one of the area of the extracted bone region and the center of gravity of the extracted bone region in the acquired image. With this configuration, by comparing at least one of the area and the center of gravity of the extracted bone region with that of a typical subject's bone region, it can easily be determined whether the bone region extracted based on machine learning is appropriate.
- Preferably, when the acquired image contains a predetermined extraction-inappropriate image that prevents proper extraction of the bone region of the subject based on machine learning, the control unit performs control so that the bone region is extracted based on the predetermined rule without attempting extraction based on machine learning.
- Preferably, a switching operation unit is provided for switching between the image, displayed on the display unit, in which the bone region of the subject was extracted based on machine learning and the image in which it was extracted based on the predetermined rule, so that the two can be switched and compared.
- Preferably, the switching operation unit includes a button on the display image shown on the display unit.
- the bone region of the subject includes the bone region of the femur.
- the machine learning includes deep learning.
- Since the bone region extraction accuracy of deep learning is relatively high, the bone region can be appropriately extracted for most subjects; for images in which even deep learning cannot properly extract the bone region, an image in which the bone region of the subject was extracted based on the predetermined rule can be displayed as a backup.
- Preferably, the predetermined rule includes at least one of extracting the bone region of the subject based on pixel values in the acquired image and extracting the bone region of the subject based on the gradient of pixel values of adjacent pixels. With this configuration, the bone region of the subject can be easily extracted based on pixel values.
- the X-ray imaging apparatus 100 includes an X-ray irradiation unit 1, an X-ray detection unit 2, an image processing unit 3, and a control unit 4.
- the X-ray imaging apparatus 100 also includes a display unit 5 that displays the image processed by the image processing unit 3.
- the X-ray irradiation unit 1 irradiates the subject T with X-rays.
- the X-ray detection unit 2 detects the X-rays emitted from the X-ray irradiation unit 1 to the subject T.
- The X-ray imaging apparatus 100 is used, for example, to measure the bone density of the subject T. For bone density measurement, the DEXA (Dual-Energy X-ray Absorptiometry) method is used, in which the measurement site of the subject T is irradiated with X-rays of two different energies from the X-ray irradiation unit 1 to distinguish the bone component from other tissues.
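The dual-energy principle above can be illustrated with a small sketch: the attenuation measured at the two energies is decomposed into bone and soft-tissue areal densities by solving a 2x2 linear system per pixel. The attenuation coefficients below are hypothetical placeholders for illustration, not values from the patent.

```python
import numpy as np

# Hypothetical effective attenuation coefficients (cm^2/g) for bone and
# soft tissue at the low and high X-ray energies. Real values depend on
# the tube spectra and the calibration of the apparatus.
MU = np.array([[0.60, 0.25],   # low energy:  [bone, soft tissue]
               [0.30, 0.18]])  # high energy: [bone, soft tissue]

def dexa_decompose(att_low, att_high):
    """Solve the 2x2 basis-decomposition system
         att_e = mu_bone_e * t_bone + mu_soft_e * t_soft   (e = low, high)
    for the areal densities t_bone and t_soft (g/cm^2) at one pixel."""
    t_bone, t_soft = np.linalg.solve(MU, np.array([att_low, att_high]))
    return t_bone, t_soft

# Attenuation produced by 1.0 g/cm^2 of bone over 20.0 g/cm^2 of soft tissue:
att_low = 0.60 * 1.0 + 0.25 * 20.0
att_high = 0.30 * 1.0 + 0.18 * 20.0
t_bone, t_soft = dexa_decompose(att_low, att_high)
```

The 2x2 system is solvable because bone and soft tissue attenuate the two energies in different ratios; that difference is what the dual-energy measurement exploits.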
- the X-ray irradiation unit 1 includes an X-ray source 1a.
- the X-ray source 1a is an X-ray tube that is connected to a high voltage generator (not shown) and generates X-rays when a high voltage is applied.
- the X-ray source 1a is arranged with the X-ray emission direction facing the detection surface of the X-ray detection unit 2.
- the X-ray detection unit 2 detects the X-rays emitted from the X-ray irradiation unit 1 and transmitted through the subject T, and outputs a detection signal according to the detected X-ray intensity.
- the X-ray detection unit 2 is composed of, for example, an FPD (Flat Panel Detector).
- the image processing unit 3 includes an image acquisition unit 31, a machine learning base region extraction unit 32, a rule base region extraction unit 33, and a bone density measurement unit 34.
- Each of the image acquisition unit 31, the machine learning base region extraction unit 32, the rule base region extraction unit 33, and the bone density measurement unit 34 is a functional block as software in the image processing unit 3. That is, each of the image acquisition unit 31, the machine learning base region extraction unit 32, the rule base region extraction unit 33, and the bone density measurement unit 34 is configured to function based on the command signal of the control unit 4.
- the image acquisition unit 31 acquires an image I (see FIG. 3) of the subject T based on the X-rays detected by the X-ray detection unit 2. Specifically, the image acquisition unit 31 acquires the image I (X-ray image) based on the X-ray detection signal of a predetermined resolution output from the X-ray detection unit 2.
- the image I is an example of the “acquired image” in the claims.
- The machine learning base region extraction unit 32 is configured to extract the bone region A (see FIG. 3) of the subject T, based on machine learning, in the image I acquired from the X-rays detected by the X-ray detection unit 2. Specifically, in this embodiment, deep learning is used as the machine learning.
- the bone region A includes the bone region A of the femur.
- Deep learning technology, typified by deep neural networks, is used to understand an image at the pixel level and assign an object class to each pixel of the image.
- U-net, a U-shaped convolutional network, is used to perform region extraction in an image, determining what is where and in what form.
- The output of each convolutional layer (Conv) of the encoder arranged on the left side of the U-net is directly coupled to the corresponding convolutional layer (Deconv) of the decoder on the right side, concatenating the data in the channel direction.
- Lower-dimensional feature maps are thus skip-connected, making it possible to retain positional information while extracting features as in a conventional network; as a result, degradation of the output image can be suppressed.
- Multi-resolution local contrast normalization (LCN) is performed on the input image before it is fed to the first convolutional layer; the data are then input to the convolutional layers in sequence.
- As the activation function, the widely used ReLU function is applied to all layers except the output layer.
- Batch normalization is performed after the activation of each convolutional layer to speed up and stabilize the convergence of learning.
- The cross-entropy error is used as the loss function.
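Two of the building blocks described above, the ReLU activation and the channel-direction concatenation of encoder and decoder feature maps, can be sketched as follows. This is a minimal NumPy illustration of the mechanism only, not the network used in the patent.

```python
import numpy as np

def relu(x):
    # ReLU activation, applied to all layers except the output layer.
    return np.maximum(x, 0.0)

def skip_concat(encoder_feat, decoder_feat):
    """U-net skip connection: the encoder's feature map is concatenated
    with the decoder's feature map along the channel axis, so that
    low-level positional detail is carried across to the decoder."""
    return np.concatenate([encoder_feat, decoder_feat], axis=-1)

enc = relu(np.random.randn(4, 4, 8))   # toy encoder output, 8 channels
dec = relu(np.random.randn(4, 4, 8))   # toy decoder output, 8 channels
merged = skip_concat(enc, dec)         # shape (4, 4, 16)
```

In a real U-net the concatenated tensor is then fed to the next decoder convolution, which learns to combine the upsampled features with the skipped positional detail.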
- The rule-based region extraction unit 33 extracts the bone region A of the subject T (see FIG. 4) based on a predetermined rule in the image I (see FIG. 3) acquired from the X-rays detected by the X-ray detection unit 2.
- Specifically, the rule-based region extraction unit 33 extracts the bone region A of the subject T based on the pixel values in the image I and on whether the gradient of the pixel values of adjacent pixels is equal to or greater than a threshold; that is, the boundary of the bone region A in the image I is obtained based on the predetermined rule.
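One way such a rule could be realized is sketched below: pixels at or above a value threshold are taken as bone, and boundary pixels are those where the gradient of the pixel values reaches a gradient threshold. The specific thresholds and the use of `np.gradient` are assumptions for illustration, not the patent's actual implementation.

```python
import numpy as np

def rule_based_bone_mask(image, value_thresh, grad_thresh):
    """Toy rule: a pixel belongs to the bone region if its value is at
    least value_thresh; a pixel lies on a boundary if the gradient
    magnitude of the pixel values around it is at least grad_thresh."""
    mask = image >= value_thresh
    gy, gx = np.gradient(image.astype(float))  # differences between adjacent pixels
    boundary = np.hypot(gx, gy) >= grad_thresh
    return mask, boundary

img = np.zeros((5, 5))
img[1:4, 1:4] = 100.0   # bright "bone" square on a dark background
mask, boundary = rule_based_bone_mask(img, value_thresh=50.0, grad_thresh=25.0)
```

On this toy image the mask covers the 3x3 bright square, while the boundary fires only where the values jump between background and square.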
- The control unit 4 is configured to determine whether the bone region A of the subject T extracted based on machine learning is appropriate.
- When the control unit 4 determines that the bone region A extracted based on machine learning is appropriate, it controls the display unit 5 to display the image in which the bone region A was extracted based on machine learning (image Im, see FIG. 3).
- When the control unit 4 determines that the extracted bone region A is not appropriate, the image in which the bone region A was extracted based on the predetermined rule (image Ir, see FIG. 4) is displayed.
- A case where the bone region A extracted based on machine learning is not appropriate arises, for example, when the posture of the subject T at the time the unknown image I was captured differs significantly from the posture of the subject T in the images I (correct answer data) used for machine learning.
- In this embodiment, the control unit 4 is configured to determine whether the bone region A of the subject T extracted based on machine learning is appropriate based on at least one of the area S of the extracted bone region A and the center of gravity G of the extracted bone region A in the image I.
- Specifically, the area S of the extracted bone region A is compared with the area S of a typical bone region A; if the difference between them is larger than a predetermined area threshold, the bone region A extracted based on machine learning is determined to be inappropriate.
- Likewise, the center of gravity G of the extracted bone region A is compared with the center of gravity G of a typical bone region A; if the difference between their coordinates is larger than a predetermined centroid threshold, the bone region A extracted based on machine learning is determined to be inappropriate. The determination may be based on only one of the area S and the center of gravity G, or on both.
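The area and centroid check can be sketched as follows. The typical values and thresholds are hypothetical placeholders, since the patent does not specify them.

```python
import numpy as np

def region_is_appropriate(mask, typical_area, typical_centroid,
                          area_thresh, centroid_thresh):
    """Compare the extracted mask's area S and center of gravity G with
    typical values, as the control unit 4 does. All typical values and
    thresholds here are hypothetical placeholders."""
    area = int(mask.sum())
    ys, xs = np.nonzero(mask)
    centroid = np.array([ys.mean(), xs.mean()])
    area_ok = abs(area - typical_area) <= area_thresh
    centroid_ok = np.linalg.norm(centroid - np.asarray(typical_centroid)) <= centroid_thresh
    return bool(area_ok and centroid_ok)

mask = np.zeros((10, 10), dtype=bool)
mask[2:6, 2:6] = True   # 16-pixel square with its center of gravity at (3.5, 3.5)
ok = region_is_appropriate(mask, typical_area=16, typical_centroid=(3.5, 3.5),
                           area_thresh=4, centroid_thresh=1.0)
```

A mask whose area or centroid deviates beyond the thresholds would be rejected, triggering the rule-based fallback described below in the patent's flow.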
- In this embodiment, when the image I contains a predetermined image P that prevents proper extraction of the bone region A of the subject T based on machine learning, the control unit 4 performs control so that the bone region A is extracted based on the predetermined rule without attempting extraction based on machine learning.
- The image I contains such an image P when, for example, metal (a bolt) appears in it.
- In that case, since the bone region A is extracted based on a relatively simple rule, it can still be extracted with a certain degree of accuracy even though machine learning cannot extract it properly.
- The image P is an example of the "unsuitable extraction image" in the claims.
- A button 5b on the display image 5a shown on the display unit 5 is provided to switch between the image (image Im) in which the bone region A of the subject T was extracted based on machine learning and the image (image Ir) in which it was extracted based on the predetermined rule.
- The button 5b is composed of, for example, a pull-down menu. When the user clicks the button 5b with the mouse, the machine-learning-based image Im and the rule-based image Ir included in the pull-down menu can be selected (switched). The pull-down menu also includes images used in previous bone density measurements.
- The button 5b is an example of the "switching operation unit" in the claims.
- If necessary, the user corrects the bone region A, for example by operating the mouse to fill in (or erase) parts of the region.
- The region shown by the dotted line in FIG. 7 is the bone region A of the subject T extracted based on the predetermined rule, as shown in FIG. 4.
- The bone density measuring unit 34 measures the bone density in the extracted bone region A of the subject T (or in the corrected bone region A when the region has been corrected); in FIG. 7, the bone density is measured in the area shown by the dotted line.
- step S1 the image I (image data) composed of an unknown X-ray image is input to the image processing unit 3.
- In step S2, it is determined whether the unknown image I contains a predetermined image P (see FIG. 5) that prevents proper extraction of the bone region A of the subject T based on machine learning. This determination is performed by the control unit 4 using, for example, a general image recognition technique. If it is determined in step S2 that no image P is included, the process proceeds to step S3; if an image P is included, the process proceeds to step S6.
- step S3 the bone region A of the subject T is extracted based on machine learning in the image I acquired based on the X-rays detected by the X-ray detection unit 2.
- Specifically, the image I is input to a classifier (model, see FIG. 3) generated in advance by machine learning, and the classifier extracts the bone region A of the subject T.
- In step S4, the control unit 4 determines whether the bone region A of the subject T extracted based on machine learning is appropriate. If it is determined to be appropriate, the process proceeds to step S5, and the image Im in which the bone region A was extracted based on machine learning is displayed on the display unit 5; the displayed bone region A is corrected by the user if necessary.
- If it is determined in step S4 that the extracted bone region A is not appropriate, the process proceeds to step S6.
- In step S6, the bone region A of the subject T is extracted based on the predetermined rule. The process then proceeds to step S5, and the image Ir in which the bone region A was extracted based on the predetermined rule is displayed on the display unit 5.
- step S7 the bone density measuring unit 34 measures the bone density.
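The branching of steps S2 through S6 can be sketched as a small decision function. The callables below are hypothetical stand-ins for the extraction units and the appropriateness check; they are not part of the patent.

```python
def choose_extraction(image_contains_p, ml_extract, rule_extract, is_appropriate):
    """Decision flow of steps S2-S6: skip machine learning when the image
    contains an extraction-inappropriate image P, and fall back to the
    rule-based result when the ML result is judged inappropriate."""
    if image_contains_p:                  # S2 -> S6: image P present
        return "rule", rule_extract()
    region = ml_extract()                 # S3: machine-learning extraction
    if is_appropriate(region):            # S4 -> S5: ML result accepted
        return "ml", region
    return "rule", rule_extract()         # S4 -> S6: rule-based fallback

# Toy stand-ins: the ML result is judged inappropriate, so the rule wins.
src, region = choose_extraction(
    image_contains_p=False,
    ml_extract=lambda: {"area": 3},
    rule_extract=lambda: {"area": 5},
    is_appropriate=lambda r: r["area"] > 4,
)
```

Either branch ends at step S5 (display) followed by step S7 (bone density measurement), so the measurement always runs on whichever region was selected.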
- In this embodiment, as described above, the control unit 4 displays on the display unit 5 the image Im in which the bone region A of the subject T was extracted based on machine learning, and, when it determines that the bone region A extracted based on machine learning is not appropriate, performs control to display the image Ir in which the bone region A was extracted based on the predetermined rule.
- Thus, when the bone region A extracted based on machine learning is not appropriate, the image Ir in which the bone region A was extracted based on the predetermined rule is displayed.
- Although the accuracy of extracting the bone region A based on the predetermined rule is lower than that of extraction based on machine learning, the extraction is performed based on a relatively simple rule, so the bone region A can be extracted with a certain degree of accuracy even in an image I in which machine learning cannot extract it properly.
- As a result, the amount of correction required for the bone region A extracted based on the predetermined rule is smaller than that required for a bone region A improperly extracted based on machine learning.
- In this embodiment, the control unit 4 determines whether the bone region A of the subject T extracted based on machine learning is appropriate based on at least one of the area S of the extracted bone region A and the center of gravity G of the extracted bone region A in the image Im. Accordingly, by comparing at least one of the area S and the center of gravity G of the extracted bone region A with that of a typical subject's bone region A, it can easily be determined whether the bone region A extracted based on machine learning is appropriate.
- In this embodiment, when the image I contains the predetermined image P that prevents proper extraction of the bone region A of the subject T based on machine learning, the bone region A is extracted based on the predetermined rule without attempting extraction based on machine learning. Since the machine-learning-based extraction is not performed in this case, the burden on the image processing unit 3 can be reduced.
- In this embodiment, the button 5b is provided for switching between the image Im in which the bone region A was extracted based on machine learning and the image Ir in which the bone region A was extracted based on the predetermined rule. Since the button 5b is located on the display image shown on the display unit 5, the displayed image can be switched easily.
- the bone area A of the subject T includes the bone area A of the femur.
- the machine learning is deep learning.
- Since the bone region extraction accuracy of deep learning is relatively high, the bone region A can be appropriately extracted for most subjects T; for images in which even deep learning cannot properly extract the bone region A, the image Ir in which the bone region A was extracted based on the predetermined rule can be displayed.
- In this embodiment, the predetermined rule includes at least one of extracting the bone region of the subject based on the pixel values in the image I and extracting the bone region A of the subject T based on the gradient of the pixel values of adjacent pixels. Accordingly, the bone region A of the subject T can be easily extracted based on pixel values.
- In the above embodiment, an example was shown in which whether the bone region of the subject extracted based on machine learning is appropriate is determined based on the area and the center of gravity of the bone region; however, the present invention is not limited to this. In the present invention, the determination may be based on criteria other than the area and the center of gravity of the bone region.
- In the above embodiment, an example was shown in which the image in which the bone region of the subject was extracted based on machine learning and the image in which it was extracted based on the predetermined rule are switchable; however, the present invention is not limited to this. The two images may instead be displayed side by side on the display unit.
- In the above embodiment, an example was shown in which a button on the display image switches between the image in which the bone region of the subject was extracted based on machine learning and the image in which it was extracted based on the predetermined rule; however, the invention is not so limited. The images may be switched by a method other than a button on the display image, such as a physical switch.
- a bone region other than the subject's femur (such as the lumbar spine) may be extracted.
- In the above embodiment, an example was shown in which, when the control unit determines that the bone region of the subject extracted based on machine learning is not appropriate, the image in which the bone region was extracted based on the predetermined rule is displayed; however, the present invention is not limited to this. Even when the control unit determines that the machine-learning-extracted bone region is appropriate and displays it, if the user judges that the region is not appropriate, the bone region of the subject may be extracted based on the predetermined rule and the extracted region displayed.
Abstract
Description
As shown in FIG. 1, the X-ray imaging apparatus 100 includes an X-ray irradiation unit 1, an X-ray detection unit 2, an image processing unit 3, and a control unit 4. The X-ray imaging apparatus 100 also includes a display unit 5 that displays images processed by the image processing unit 3.
Next, the operation of the X-ray imaging apparatus 100 of the present embodiment will be described with reference to FIG. 8. It is assumed that learning by machine learning has already been performed in the image processing unit 3.
The present embodiment provides the following effects.
The embodiments disclosed herein should be considered in all respects as illustrative and not restrictive. The scope of the present invention is indicated by the claims rather than by the above description of the embodiments, and embraces all changes (modifications) within the meaning and range of equivalency of the claims.
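The operation just outlined (machine-learning extraction first, then a control-unit appropriateness check, then a rule-based fallback) can be sketched as follows. This is a hypothetical Python illustration only: the function names, the area bounds, and the centroid box are assumptions made for the example, not values from the embodiment.

```python
def region_area(mask):
    """Area S of an extracted bone region: count of foreground pixels."""
    return sum(v for row in mask for v in row)

def region_centroid(mask):
    """Center of gravity G of an extracted bone region, as (row, col)."""
    area = region_area(mask)
    if area == 0:
        return None
    r = sum(i * v for i, row in enumerate(mask) for v in row) / area
    c = sum(j * v for row in mask for j, v in enumerate(row)) / area
    return (r, c)

def is_appropriate(mask, min_area, max_area, centroid_box):
    """Control-unit check: accept the machine-learning result only if its
    area S and center of gravity G fall inside expected ranges."""
    area = region_area(mask)
    if not (min_area <= area <= max_area):
        return False
    g = region_centroid(mask)
    (rmin, rmax), (cmin, cmax) = centroid_box
    return rmin <= g[0] <= rmax and cmin <= g[1] <= cmax

def extract_bone_region(image, ml_extract, rule_extract,
                        min_area, max_area, centroid_box):
    """Overall flow: try machine learning first; if the control unit judges
    the result inappropriate, fall back to the predetermined rule."""
    mask = ml_extract(image)
    if is_appropriate(mask, min_area, max_area, centroid_box):
        return mask, "machine_learning"
    return rule_extract(image), "rule"
```

The appropriateness test here uses both the area and the center of gravity; either criterion alone would also match the at-least-one-of wording of the claims.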
2 X-ray detection unit
3 Image processing unit
4 Control unit
5 Display unit
5b Button (switching operation unit)
100 X-ray imaging apparatus
A Bone region
G Center of gravity
I Image (acquired image)
P Image (extraction-inappropriate image)
S Area
T Subject
Claims (8)
- An X-ray imaging apparatus comprising:
an X-ray irradiation unit that irradiates a subject with X-rays;
an X-ray detection unit that detects the X-rays emitted from the X-ray irradiation unit onto the subject;
an image processing unit that extracts a bone region of the subject, based on machine learning, from an acquired image obtained from the X-rays detected by the X-ray detection unit, and that, in a predetermined case, extracts the bone region of the subject from the acquired image based on a predetermined rule;
a display unit that displays the image processed by the image processing unit; and
a control unit that determines whether or not the bone region of the subject extracted based on the machine learning is appropriate,
wherein the control unit is configured to perform control such that, when it determines that the bone region of the subject extracted based on the machine learning is appropriate, the display unit displays an image in which the bone region of the subject has been extracted based on the machine learning, and, when it determines that the bone region of the subject extracted based on the machine learning is not appropriate, the display unit displays an image in which the bone region of the subject has been extracted based on the predetermined rule. - The X-ray imaging apparatus according to claim 1, wherein the control unit is configured to determine whether or not the bone region of the subject extracted based on the machine learning is appropriate, based on at least one of an area of the extracted bone region of the subject and a center of gravity of the extracted bone region of the subject in the acquired image.
- The X-ray imaging apparatus according to claim 1 or 2, wherein the control unit is configured to perform control such that, when the acquired image contains a predetermined extraction-inappropriate image that prevents the bone region of the subject from being properly extracted based on the machine learning, the bone region of the subject is extracted based on the predetermined rule without performing the machine-learning-based extraction.
- The X-ray imaging apparatus according to claim 1 or 2, further comprising a switching operation unit that switches between an image displayed on the display unit in which the bone region of the subject has been extracted based on the machine learning and an image in which the bone region of the subject has been extracted based on the predetermined rule.
- The X-ray imaging apparatus according to claim 4, wherein the switching operation unit includes a button on a display image displayed on the display unit.
- The X-ray imaging apparatus according to claim 1 or 2, wherein the bone region of the subject includes a bone region of a femur.
- The X-ray imaging apparatus according to claim 1 or 2, wherein the machine learning includes deep learning.
- The X-ray imaging apparatus according to claim 1 or 2, wherein the predetermined rule includes at least one of extracting the bone region of the subject based on pixel values in the acquired image and extracting the bone region of the subject based on gradients of pixel values between adjacent pixels.
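As a rough illustration of the predetermined rule of claim 8 (extraction from pixel values, and extraction from pixel-value gradients between adjacent pixels), the following Python sketch applies a simple threshold for each variant. The function names and threshold values are assumptions made for the example; an actual implementation would be considerably more involved.

```python
def extract_by_pixel_value(image, threshold):
    """Rule variant 1: mark a pixel as bone when its pixel value exceeds
    a threshold (bone attenuates X-rays more strongly than soft tissue,
    so bone pixels tend to stand out in the acquired image)."""
    return [[1 if v > threshold else 0 for v in row] for row in image]

def extract_by_gradient(image, grad_threshold):
    """Rule variant 2: mark a pixel as a bone-boundary candidate when the
    pixel-value gradient to its right-hand neighbour exceeds a threshold."""
    mask = [[0] * len(row) for row in image]
    for i, row in enumerate(image):
        for j in range(len(row) - 1):
            if abs(row[j + 1] - row[j]) > grad_threshold:
                mask[i][j] = 1
    return mask
```

In practice the two variants could also be combined, for example a pixel-value threshold refined by gradient-based boundary detection, which matches the at-least-one-of wording of the claim.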
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2018/040975 WO2020095343A1 (ja) | 2018-11-05 | 2018-11-05 | X-ray imaging apparatus |
JP2020556372A JP7188450B2 (ja) | 2018-11-05 | 2018-11-05 | X-ray imaging apparatus |
CN201880099224.9A CN112996440A (zh) | 2018-11-05 | 2018-11-05 | X-ray imaging apparatus |
KR1020217012342A KR20210068490A (ko) | 2018-11-05 | 2018-11-05 | X-ray imaging apparatus |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2018/040975 WO2020095343A1 (ja) | 2018-11-05 | 2018-11-05 | X-ray imaging apparatus |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020095343A1 (ja) | 2020-05-14 |
Family
ID=70610864
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2018/040975 WO2020095343A1 (ja) | 2018-11-05 | 2018-11-05 | X線撮像装置 |
Country Status (4)
Country | Link |
---|---|
JP (1) | JP7188450B2 (ja) |
KR (1) | KR20210068490A (ja) |
CN (1) | CN112996440A (ja) |
WO (1) | WO2020095343A1 (ja) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- JPH06233761A (ja) * | 1993-02-09 | 1994-08-23 | Hitachi Medical Corp | Medical image diagnostic apparatus |
- JP2003265462A (ja) * | 2002-03-19 | 2003-09-24 | Hitachi Ltd | Region-of-interest extraction method and image processing server |
- US20130336553A1 (en) * | 2010-08-13 | 2013-12-19 | Smith & Nephew, Inc. | Detection of anatomical landmarks |
- JP2015530193A (ja) * | 2012-09-27 | 2015-10-15 | Siemens Product Lifecycle Management Software Inc. | Segmentation of multiple bones for 3D computed tomography |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- JP4201939B2 (ja) * | 1999-10-22 | 2008-12-24 | Mitsubishi Electric Corp | Image processing apparatus and radiation therapy planning system |
- JP2007135858A (ja) * | 2005-11-18 | 2007-06-07 | Hitachi Medical Corp | Image processing apparatus |
- WO2008044441A1 (en) * | 2006-10-10 | 2008-04-17 | Hitachi Medical Corporation | Medical image diagnostic apparatus, medical image measuring method, and medical image measuring program |
- JP5300569B2 (ja) * | 2009-04-14 | 2013-09-25 | Hitachi Medical Corp | Image processing apparatus |
- US8437521B2 (en) * | 2009-09-10 | 2013-05-07 | Siemens Medical Solutions Usa, Inc. | Systems and methods for automatic vertebra edge detection, segmentation and identification in 3D imaging |
- EP2803037A1 (en) * | 2012-01-10 | 2014-11-19 | Koninklijke Philips N.V. | Image processing apparatus |
- US9646229B2 (en) * | 2012-09-28 | 2017-05-09 | Siemens Medical Solutions Usa, Inc. | Method and system for bone segmentation and landmark detection for joint replacement surgery |
- US10039513B2 (en) * | 2014-07-21 | 2018-08-07 | Zebra Medical Vision Ltd. | Systems and methods for emulating DEXA scores based on CT images |
- JP2018517207A (ja) * | 2015-05-18 | 2018-06-28 | Koninklijke Philips N.V. | Self-aware image segmentation method and system |
- WO2017141958A1 (ja) | 2016-02-15 | 2017-08-24 | Keio University | Spinal alignment estimation device, spinal alignment estimation method, and spinal alignment estimation program |
- CN106228561B (zh) * | 2016-07-29 | 2019-04-23 | Shanghai United Imaging Healthcare Co., Ltd. | Blood vessel extraction method |
2018
- 2018-11-05 JP JP2020556372A patent/JP7188450B2/ja active Active
- 2018-11-05 WO PCT/JP2018/040975 patent/WO2020095343A1/ja active Application Filing
- 2018-11-05 KR KR1020217012342A patent/KR20210068490A/ko not_active Application Discontinuation
- 2018-11-05 CN CN201880099224.9A patent/CN112996440A/zh active Pending
Also Published As
Publication number | Publication date |
---|---|
KR20210068490A (ko) | 2021-06-09 |
JPWO2020095343A1 (ja) | 2021-09-24 |
JP7188450B2 (ja) | 2022-12-13 |
CN112996440A (zh) | 2021-06-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5393245B2 (ja) | Image processing apparatus, method for controlling the image processing apparatus, X-ray imaging apparatus, and method for controlling the X-ray imaging apparatus | |
US6862364B1 (en) | Stereo image processing for radiography | |
JP4104054B2 (ja) | Image alignment apparatus and image processing apparatus | |
US9734574B2 (en) | Image processor, treatment system, and image processing method | |
WO2016051603A1 (ja) | X-ray imaging apparatus | |
EP1530162A2 (en) | Radiation image processing apparatus, radiation image processing method, program, and computer-readable medium | |
US20130051527A1 (en) | Image processing apparatus and method, and x-ray diagnostic apparatus | |
JP6684909B2 (ja) | Method for generating a contrast agent concentration map | |
US20160206266A1 (en) | X-ray imaging apparatus and method for controlling the same | |
JP2017131427A (ja) | X-ray image diagnostic apparatus and bone density measurement method | |
US10182783B2 (en) | Visualization of exposure index values in digital radiography | |
CN110876627B (zh) | X-ray imaging apparatus and X-ray image processing method | |
JP7345653B2 (ja) | Radiological imaging method | |
JP4416823B2 (ja) | Image processing apparatus, image processing method, and computer program | |
US10299752B2 (en) | Medical image processing apparatus, X-ray CT apparatus, and image processing method | |
CN108074219A (zh) | Image correction method and apparatus, and medical device | |
EP2823465A1 (en) | Stereo x-ray tube based suppression of outside body high contrast objects | |
WO2020095343A1 (ja) | X-ray imaging apparatus | |
US20160278727A1 (en) | Determination of an x-ray image data record of a moving target location | |
JP2016131805A (ja) | X-ray image diagnostic apparatus and method for creating X-ray images | |
JP2022530298A (ja) | Method for metal artifact reduction in X-ray dental volume tomography | |
WO2016096833A1 (en) | Motion correction method in dual energy radiography | |
JP2021108758A (ja) | X-ray diagnostic apparatus and medical image processing apparatus | |
JP7310239B2 (ja) | Image processing apparatus, radiography system, and program | |
KR101676304B1 (ko) | Image correction method using the scatter-to-primary ratio, and computer-readable recording medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18939241 Country of ref document: EP Kind code of ref document: A1 |
ENP | Entry into the national phase |
Ref document number: 2020556372 Country of ref document: JP Kind code of ref document: A |
ENP | Entry into the national phase |
Ref document number: 20217012342 Country of ref document: KR Kind code of ref document: A |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 18939241 Country of ref document: EP Kind code of ref document: A1 |