WO2019208037A1 - Image analysis method, segmentation method, bone density measurement method, learning model creation method, and image creation device - Google Patents

Image analysis method, segmentation method, bone density measurement method, learning model creation method, and image creation device

Info

Publication number
WO2019208037A1
WO2019208037A1 (PCT/JP2019/011773)
Authority
WO
WIPO (PCT)
Prior art keywords
image
organ
ray
drr
subject
Prior art date
Application number
PCT/JP2019/011773
Other languages
French (fr)
Japanese (ja)
Inventor
高橋 渉 (Wataru Takahashi)
押川 翔太 (Shota Oshikawa)
Original Assignee
株式会社島津製作所 (Shimadzu Corporation)
Priority date
Filing date
Publication date
Application filed by 株式会社島津製作所 (Shimadzu Corporation)
Priority to KR1020207032563A (KR102527440B1)
Priority to CN201980035078.8A (CN112165900A)
Priority to JP2020516112A (JP7092190B2)
Publication of WO2019208037A1

Classifications

    • A61B 6/5217 — Devices using data or image processing specially adapted for radiation diagnosis, involving processing of medical diagnostic data, extracting a diagnostic or physiological parameter from medical diagnostic data
    • A61B 6/00 — Apparatus or devices for radiation diagnosis; apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B 6/03 — Computed tomography [CT]
    • A61B 6/032 — Transmission computed tomography [CT]
    • A61B 6/505 — Apparatus or devices for radiation diagnosis specially adapted for specific body parts or clinical applications, for diagnosis of bone
    • A61B 6/5205 — Devices using data or image processing specially adapted for radiation diagnosis, involving processing of raw data to produce diagnostic data
    • G06N 20/00 — Machine learning
    • G06T 7/00 — Image analysis
    • G06T 7/0012 — Inspection of images, e.g. flaw detection; biomedical image inspection
    • G06T 7/11 — Segmentation; edge detection; region-based segmentation
    • G16H 30/40 — ICT specially adapted for the handling or processing of medical images, e.g. editing
    • G16H 50/20 — ICT specially adapted for medical diagnosis, medical simulation or medical data mining, for computer-aided diagnosis, e.g. based on medical expert systems
    • G06T 2207/10081 — Image acquisition modality: computed X-ray tomography [CT]
    • G06T 2207/10116 — Image acquisition modality: X-ray image
    • G06T 2207/20081 — Special algorithmic details: training; learning
    • G06T 2207/20084 — Special algorithmic details: artificial neural networks [ANN]
    • G06T 2207/30008 — Subject of image: bone

Definitions

  • The present invention relates to an image analysis method, a segmentation method, a bone density measurement method, a learning model creation method, and an image creation device.
  • Patent Document 1 discloses an apparatus for quantitative bone mineral analysis comprising: means for generating radiation; a single crystal lattice irradiated with the radiation; means for collimating, out of the radiation reflected by the crystal lattice, only radiation having two predetermined reflection angles, so that radiation of two different energies is irradiated onto the subject simultaneously; radiation detection means on which the radiation of these two energies is incident after passing through the subject; pulse height analyzing means for separating transmission data for each energy by analyzing the pulse height of the output of the radiation detection means; and means for calculating bone density by processing the separated data.
  • Such bone density measurement targets the bone density of the lumbar vertebrae and the femur, which require clinical attention. Because the femur varies greatly in shape between individuals, identifying the bone region of the subject is important for stable follow-up observation. Conventionally, the operator specifies this region manually, which is not only laborious but also causes the specified region to vary from operator to operator.
  • The present invention has been made to solve the above problems, and an object of the present invention is to provide an image analysis method, a segmentation method, a bone density measurement method, a learning model creation method, and an image creation device capable of creating an image in which the organ region is accurately extracted from an X-ray image of a region including an organ of the subject.
  • The invention according to claim 1 is an image analysis method for performing segmentation to identify the region of an organ by analyzing an image of a region including the organ of a subject. Machine learning is used as the segmentation technique, and the method includes: a corrected image creation step of creating a corrected image in which the density of the organ region in an image including the organ of the subject is changed; and a learning model creation step of creating a machine learning model by a learning process using the image including the organ of the subject and the corrected image created in the corrected image creation step.
  • The invention according to claim 2 is the invention according to claim 1, wherein an image representing the organ is created by applying the learning model created in the learning model creation step to an X-ray image of a region including the organ of the subject obtained by X-ray imaging of the subject.
  • The invention according to claim 3 is the invention according to claim 1, wherein the image of the region including the organ of the subject is a DRR image created from CT image data of the subject, and in the corrected image creation step, a region in which the CT value of the CT image data takes a predetermined value is treated as the organ region and its density is changed.
  • The invention according to claim 4 is the invention according to claim 3, wherein, when DRR images are created, a parameter including at least one of the projection coordinates and the angle of the geometric conditions is varied, or image processing including at least one of rotation, deformation, and enlargement/reduction is applied, so that a plurality of DRR images are created.
  • The invention according to claim 5 is the invention according to claim 3, wherein at least one of contrast change, noise addition, and edge enhancement is applied to the created DRR images.
  • The invention according to claim 6 is the invention according to claim 1, wherein the image of the region including the organ of the subject is an X-ray image created by X-ray imaging of the subject, and in the corrected image creation step, the density of the organ region is changed using the X-ray image and an image of the organ obtained by dual-energy subtraction.
  • The invention according to claim 7 is the invention according to claim 2, wherein an X-ray image of a region including the organ of the subject obtained by X-ray imaging of the subject and the image representing the organ obtained by conversion using the learning model created in the learning model creation step are used by the learning unit for learning of the learning model.
  • The invention according to claim 8 is the invention according to claim 1, wherein the organ has a shape that is symmetrical with respect to the body axis of the subject, and in the learning model creation step, a machine learning model is created collectively for the images of the left and right organs by horizontally flipping either the image of the right organ or the image of the left organ.
  • In the invention according to claim 9, the organ is a bone part of the subject, and the bone region is segmented using the image analysis method according to claim 1.
  • In the invention according to claim 10, bone density is measured for the bone region segmented by the segmentation method according to claim 9.
  • The invention according to claim 11 is a learning model creation method for creating a learning model used when performing segmentation to identify the region of an organ by analyzing, using machine learning, an image of a region including the organ of a subject, wherein the learning model is created by executing machine learning using an image including the organ of the subject and a corrected image created by changing the density of the organ region in the image including the organ of the subject.
  • The invention according to claim 12 is an image creation device for creating an image in which the organ region is extracted from an X-ray image of a region including an organ of a subject, the device comprising: an X-ray image storage unit that stores a plurality of X-ray images obtained by X-ray imaging of the region including the organ and a plurality of X-ray image teacher images for machine learning; a DRR image creation unit that creates DRR images of the region including the bone part; a DRR image storage unit that stores a plurality of DRR images created by the DRR image creation unit and a plurality of DRR image teacher images for machine learning created based on the DRR images created by the DRR image creation unit; and an image creation unit that creates an image representing the organ by converting an X-ray image of a region including the organ of the subject using a learning model for recognizing the organ, the learning model having been created in advance by executing machine learning using the plurality of X-ray images and X-ray image teacher images stored in the X-ray image storage unit and machine learning using the plurality of DRR images and DRR image teacher images stored in the DRR image storage unit.
  • The invention according to claim 13 is the invention according to claim 11, wherein the DRR image creation unit creates some of the plurality of DRR images as DRR images in which the density of the organ region within the region including the bone part is changed.
  • The invention according to claim 14 is the invention according to claim 11, wherein some of the plurality of X-ray images stored in the X-ray image storage unit are X-ray images in which the density of the organ region within the region including the organ has been changed by using dual-energy subtraction.
  • According to the inventions of claims 1 to 8, since a corrected image in which the density of the organ region of the subject is changed is used for machine learning, a learning model that can also handle subjects with low organ density can be created. This makes it possible to improve the detection accuracy of the organ.
  • According to the invention of claim 4, since parameters including the projection coordinates and angle of the geometric imaging conditions are varied, or image processing including rotation, deformation, and enlargement/reduction is applied, the position of the bone part can be detected accurately even when the position or posture of the subject, or the position of the imaging system during X-ray imaging, deviates slightly. Moreover, since a large number of DRR images can be created, a discriminator tailored to each patient can be learned, and the position of the bone part can be detected accurately even when low frame rate DRR images are used.
  • According to the invention of claim 5, since contrast change, noise addition, and edge enhancement are applied to the created DRR images, the position of the bone part can be detected accurately even when there is a difference in image quality between the DRR images and the X-ray images.
  • According to the invention of claim 7, by reusing, for learning of the learning model, a plurality of X-ray images together with the images representing the organ obtained by conversion using the trained model, the set of learning images can be expanded and a learning model with higher accuracy can be created.
  • According to the invention of claim 11, since a corrected image in which the density of the organ region of the subject is changed is used for machine learning, a learning model that can also handle subjects with low organ density can be created.
  • FIG. 1 is a schematic front view of a bone image creating apparatus according to an embodiment of the present invention that also functions as an X-ray imaging apparatus.
  • FIG. 2 is a schematic side view of the bone image creating apparatus according to the embodiment of the present invention that also functions as an X-ray imaging apparatus.
  • FIG. 3 is a block diagram showing the control system of the bone image creating apparatus according to the embodiment of the present invention.
  • FIG. 4 is a schematic diagram for explaining the process of creating a bone image of a subject using machine learning with the bone image creating apparatus according to the embodiment of the present invention.
  • FIG. 5 is a flowchart showing the operation of creating a bone image, in which the bone region is extracted from an X-ray image of a region including a bone part of the subject, by the bone image creating apparatus according to the embodiment of the present invention.
  • FIG. 6 is a schematic diagram of an X-ray image 101 created by the X-ray image creation unit 81.
  • FIG. 7 is a schematic diagram of an X-ray image teacher bone image 102 created by the X-ray image creation unit 81.
  • FIG. 8 is an explanatory diagram schematically showing how a DRR image is created by virtual projection simulating the geometric conditions of the X-ray irradiation unit 11 and the X-ray detection unit 12 shown in FIG. 1.
  • FIG. 9 is a schematic diagram of a DRR image 103 created by the DRR image creation unit 83.
  • FIG. 10 is a schematic diagram of a DRR image 104 in which the density of the bone region has been reduced, created by the DRR image creation unit 83.
  • FIG. 11 is a schematic diagram of a DRR image teacher bone image 105 created by the DRR image creation unit 83.
  • FIG. 12 is a schematic diagram of an X-ray image 106 created by the X-ray image creation unit 81.
  • FIG. 13 is a schematic diagram of a DRR image 107 created by the DRR image creation unit 83.
  • Hereinafter, embodiments of the present invention will be described with reference to the drawings. FIG. 1 is a schematic front view of a bone image creating apparatus according to an embodiment of the present invention that also functions as an X-ray imaging apparatus, and FIG. 2 is a schematic side view thereof.
  • In this embodiment, a case is described in which the present invention is applied to a bone image creating apparatus that creates an image of a bone part of the subject, the bone part being one of the organs such as bones and internal organs.
  • This bone image creating apparatus, which also functions as an X-ray imaging apparatus, is also called an X-ray fluoroscopy table. It includes a top plate 13, an X-ray tube holding member 15, an X-ray irradiation unit 11 disposed at the tip of the X-ray tube holding member 15, and an X-ray detection unit 12 having an X-ray detector, such as a flat panel detector or an image intensifier (I.I.), disposed on the opposite side of the top plate 13 from the X-ray irradiation unit 11.
  • The top plate 13, the X-ray tube holding member 15, the X-ray irradiation unit 11, and the X-ray detection unit 12 can rotate, by the action of a rotation mechanism 16 incorporating a motor (not shown), between the recumbent position shown in FIGS. 1 and 2, in which the surface of the top plate 13 faces horizontally, and a standing position, in which the surface of the top plate 13 faces vertically. The rotation mechanism 16 itself can be raised and lowered with respect to a main column 17 erected on a base plate 18.
  • When the top plate 13 is in the recumbent position, X-ray imaging is performed on a subject lying down, with the subject placed on the top plate 13. When the top plate 13 is in the standing position, X-ray imaging is performed on a standing subject, who stands in front of the top plate 13.
  • FIG. 3 is a block diagram showing a control system of the bone image creating apparatus according to the embodiment of the present invention.
  • This bone image creating apparatus creates a bone image in which the bone region is extracted from an X-ray image of a region including a bone part of the subject. It includes a control unit 80 that controls the entire apparatus and comprises a CPU serving as a processor that executes logical operations, a ROM storing the operation programs required to control the apparatus, and a RAM in which data and the like are temporarily stored during control.
  • As its functional configuration, the control unit 80 includes: an X-ray image creation unit 81 for creating X-ray images; an X-ray image storage unit 82 that stores a plurality of X-ray images obtained by X-ray imaging of regions including bone parts of subjects and the like, together with a plurality of X-ray image teacher bone images for machine learning; a DRR image creation unit 83 that creates DRR (Digitally Reconstructed Radiograph) images of the region including the bone part by performing, on CT image data of the region including the bone part, a virtual projection simulating the geometric conditions of the X-ray irradiation unit 11 and the X-ray detection unit 12 used when X-ray imaging the subject; a DRR image storage unit 84 that stores the plurality of DRR images created by the DRR image creation unit 83 and a plurality of DRR image teacher bone images for machine learning created based on those DRR images; a learning unit 85 that creates a learning model for recognizing bone parts by executing machine learning using the plurality of X-ray images and X-ray image teacher bone images stored in the X-ray image storage unit 82 and machine learning using the plurality of DRR images and DRR image teacher bone images stored in the DRR image storage unit 84; and a bone image creation unit 86 that creates an image representing the bone part by converting an X-ray image of a region including the bone part of the subject using the trained model learned by the learning unit 85. The control unit 80 is constituted by a computer in which software is installed, and the functions of each unit included in the control unit 80 are realized by executing that software.
  • The learning unit 85 may execute machine learning before shipment of the apparatus and store the result in advance, or may additionally execute machine learning after the apparatus has been delivered to a medical institution or the like. The learning unit 85 creates a discriminator by learning with an arbitrary method such as FCN (Fully Convolutional Networks), a neural network, a support vector machine (SVM), or boosting.
  • The control unit 80 is connected to the X-ray irradiation unit 11 and the X-ray detection unit 12 described above.
  • The control unit 80 is also connected to a display unit 21, configured by a liquid crystal display panel or the like, that displays various images including X-ray images, and to an operation unit 22 having various input means such as a keyboard and a mouse.
  • Further, the control unit 80 is connected, online or offline, to a CT image storage unit 70 that stores CT images obtained by CT imaging of the subject. The CT image storage unit 70 may be included in a CT imaging apparatus, or may be included in a treatment planning apparatus that creates a treatment plan for the subject.
  • FIG. 4 is a schematic diagram for explaining a process of creating a bone image of a subject using machine learning by the bone image creating apparatus according to the embodiment of the present invention.
  • First, a learning model is created. In creating the learning model, X-ray images and DRR images of regions including a bone part are used as the input layer, and X-ray image teacher bone images and DRR image teacher bone images indicating the bone part are used as the output layer; the convolutional layers used as the learning model are trained by machine learning. A bone image is then created: an X-ray image of a region including the bone part of the subject is converted using the trained learning model, producing an image representing the bone part.
  • FIG. 5 is a flowchart showing the operation of creating a bone image, in which the bone region is extracted from an X-ray image of a region including a bone part of the subject, by the bone image creating apparatus according to the embodiment of the present invention.
  • First, an X-ray image creation step is executed (step S1). In this step, the X-ray image creation unit 81 shown in FIG. 3 creates a plurality of X-ray images of the subject on the top plate 13 using the X-ray irradiation unit 11 and the X-ray detection unit 12 shown in FIG. 1. Alternatively, the X-ray images may be obtained by importing images taken with another X-ray imaging apparatus, or may be created by X-ray imaging a phantom instead of the subject. The created X-ray images are stored in the X-ray image storage unit 82 shown in FIG. 3 (step S2).
  • Next, X-ray image teacher bone images used for machine learning are created (step S3). Each X-ray image teacher bone image is created by the X-ray image creation unit 81 by trimming the bone region of the subject from a previously created X-ray image. When the X-ray image teacher bone images are created, images obtained by slightly translating, rotating, deforming, and enlarging/reducing the trimmed X-ray images are also created. These translated, rotated, deformed, and enlarged/reduced images are also used for learning in order to cope with cases where the subject moves during the X-ray imaging described later, or where the X-ray irradiation unit 11 and the X-ray detection unit 12 move. The created X-ray image teacher bone images are stored in the X-ray image storage unit 82 shown in FIG. 3 (step S4).
  • FIG. 6 is a schematic diagram of the X-ray image 101 created by the X-ray image creation unit 81, and FIG. 7 is a schematic diagram of the X-ray image teacher bone image 102 created by the X-ray image creation unit 81. In the X-ray image 101, a femur 51, a pelvis 52, and a soft tissue region 53 are displayed, while in the X-ray image teacher bone image 102, the femur 51 and the pelvis 52 are displayed.
  • Next, a plurality of DRR images showing the region including the bone part are created (step S5) and stored in the DRR image storage unit 84 (step S6). A plurality of DRR image teacher bone images showing the region including the bone part are also created (step S7) and stored in the DRR image storage unit 84 (step S8). When a DRR image teacher bone image is created, a region whose CT value is equal to or greater than a certain value is treated as the bone region. In this embodiment, a DRR image teacher bone image is created by identifying regions having a CT value of 200 HU (Hounsfield units) or more as bone regions.
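  • As an illustrative sketch of the thresholding step above (assuming the CT volume is available as a NumPy array of Hounsfield units; the array and function names are hypothetical, not taken from the patent):

```python
import numpy as np

def bone_mask_from_ct(ct_volume_hu: np.ndarray, threshold_hu: float = 200.0) -> np.ndarray:
    """Label voxels whose CT value is at or above the threshold as bone."""
    return ct_volume_hu >= threshold_hu

# Example: a small stand-in CT volume and the resulting teacher label volume
# (bone voxels are 1, everything else is 0).
ct = np.random.normal(loc=0.0, scale=150.0, size=(8, 64, 64)).astype(np.float32)
teacher_labels = bone_mask_from_ct(ct).astype(np.uint8)
```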
  • FIG. 8 is an explanatory diagram schematically showing a state in which a DRR image is created by virtual projection simulating the geometric conditions of the X-ray irradiation unit 11 and the X-ray detection unit 12 shown in FIG.
  • In FIG. 8, reference numeral 300 denotes CT image data. The CT image data 300 is three-dimensional voxel data consisting of a set of two-dimensional CT image slices; for example, it has a structure in which about 200 two-dimensional images of 512 × 512 pixels are stacked in a direction crossing the subject (the direction along the line segments L1 and L2 shown in FIG. 8).
  • When the DRR image creation unit 83 creates a DRR image, it virtually projects the CT image data 300. The three-dimensional CT image data 300 is arranged on the computer, and the geometry, that is, the geometric arrangement of the X-ray imaging system, is reproduced on the computer. In this embodiment, the X-ray irradiation unit 11 and the X-ray detection unit 12 are placed on either side of the CT image data 300. The arrangement of the CT image data 300, the X-ray irradiation unit 11, and the X-ray detection unit 12 has the same geometry as the arrangement of the subject, the X-ray irradiation unit 11, and the X-ray detection unit 12 during X-ray imaging. Here, "geometry" means the geometric arrangement relationship between the imaging target, the X-ray irradiation unit 11, and the X-ray detection unit 12.
  • When the DRR image is created, a large number of line segments L connecting the X-ray irradiation unit 11 and each pixel of the X-ray detection unit 12 through the CT image data 300 are set. In FIG. 8, only two line segments L1 and L2 are shown for convenience of explanation. A plurality of calculation points are set on each line segment L, and the CT value at each calculation point is computed by interpolation using the CT values of the CT data voxels surrounding that point. The CT values of the calculation points on each line segment L are then accumulated; this accumulated value is converted into a line integral of the linear attenuation coefficient, and the DRR image is created by calculating the attenuation of the X-rays.
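  • The following is a simplified sketch of this projection, using a parallel-beam approximation instead of the cone-beam geometry of FIG. 8 (the HU-to-attenuation conversion, the constant values, and the parallel-ray simplification are assumptions for illustration, not details taken from the patent):

```python
import numpy as np

MU_WATER = 0.02  # assumed linear attenuation coefficient of water [1/mm]

def hu_to_mu(ct_hu: np.ndarray) -> np.ndarray:
    """Convert CT values (HU) to linear attenuation coefficients."""
    return np.clip(MU_WATER * (1.0 + ct_hu / 1000.0), 0.0, None)

def simple_drr(ct_hu: np.ndarray, voxel_size_mm: float = 1.0) -> np.ndarray:
    """Create a DRR by integrating attenuation along one axis (parallel rays).

    ct_hu: CT volume of shape (depth, height, width).
    Returns a 2-D image in which strongly attenuating structures (bone) appear bright.
    """
    mu = hu_to_mu(ct_hu)
    line_integral = mu.sum(axis=0) * voxel_size_mm  # line integral of mu along each ray
    transmission = np.exp(-line_integral)           # Beer-Lambert attenuation
    return 1.0 - transmission

ct = np.zeros((64, 64, 64), dtype=np.float32)
ct[:, 20:44, 20:44] = 400.0  # a block of bone-like CT values
drr = simple_drr(ct)
```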
  • When the DRR images are created, the parameters for DRR creation, including at least one of the projection coordinates and the angle with respect to the CT image data 300, are varied. In addition, image processing including at least one of slight translation, rotation, deformation, and enlargement/reduction is applied. The translation, rotation, deformation, and enlargement/reduction are performed to cope with cases where the subject moves during the X-ray imaging described later, or where the X-ray irradiation unit 11 and the X-ray detection unit 12 move.
  • Further, contrast change, noise addition, and edge enhancement are applied to the created DRR images. The contrast change, noise addition, and edge enhancement are performed in order to absorb differences in image quality between the DRR images and the X-ray images and to recognize the bone region more reliably.
  • When a DRR image and the corresponding DRR image teacher bone image are created, the parameters including the projection coordinates and angle of the geometric imaging conditions are varied under the same conditions, and the image processing including rotation, deformation, and enlargement/reduction is applied under the same conditions.
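  • One way the same random transform could be applied to a DRR image and its teacher bone image, as described above, is sketched here with SciPy (the parameter ranges are illustrative assumptions, not values from the patent):

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

def augment_pair(image: np.ndarray, teacher: np.ndarray):
    """Apply one random rotation/shift/zoom to an image and the identical
    geometric transform to its teacher label, plus intensity changes to the image."""
    angle = rng.uniform(-3.0, 3.0)          # degrees
    shift = rng.uniform(-5.0, 5.0, size=2)  # pixels
    zoom = rng.uniform(0.95, 1.05)

    def geom(img, order):
        out = ndimage.rotate(img, angle, reshape=False, order=order)
        out = ndimage.shift(out, shift, order=order)
        return ndimage.zoom(out, zoom, order=order)

    aug_image = geom(image, order=1)
    aug_teacher = geom(teacher, order=0)    # nearest neighbour keeps labels crisp

    aug_image = aug_image * rng.uniform(0.9, 1.1)                   # contrast change
    aug_image = aug_image + rng.normal(0.0, 0.01, aug_image.shape)  # noise addition
    return aug_image, aug_teacher

drr = np.random.rand(128, 128)
teacher = (drr > 0.7).astype(np.float32)
aug_drr, aug_teacher = augment_pair(drr, teacher)
```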
  • Note that the DRR image creation unit 83 creates some of the plurality of DRR images as DRR images in which the density of the bone region within the region including the bone part is changed. More specifically, the CT values of the bone region, where the CT value is equal to or greater than a certain value, are set to values smaller than the actual CT values. This makes it possible to obtain DRR images simulating bone parts with reduced bone density.
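  • A minimal sketch of this density reduction, assuming the same HU threshold as above (the scaling factor is an arbitrary illustrative value):

```python
import numpy as np

def reduce_bone_density(ct_hu: np.ndarray, threshold_hu: float = 200.0,
                        scale: float = 0.5) -> np.ndarray:
    """Return a copy of the CT volume in which voxels at or above the bone
    threshold are scaled down, simulating a bone part with reduced bone density."""
    modified = ct_hu.copy()
    bone = modified >= threshold_hu
    modified[bone] *= scale
    return modified
```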
  • FIG. 9 is a schematic diagram of the DRR image 103 created by the DRR image creation unit 83, FIG. 10 is a schematic diagram of the DRR image 104 in which the density of the bone region has been reduced, and FIG. 11 is a schematic diagram of the DRR image teacher bone image 105 created by the DRR image creation unit 83. In the DRR images 103 and 104, the femur 51, the pelvis 52, and the soft tissue region 53 are displayed, while in the DRR image teacher bone image 105, the femur 51 and the pelvis 52 are displayed.
  • Next, the learning unit 85 executes machine learning using the X-ray image 101 shown in FIG. 6 as the input layer and the X-ray image teacher bone image 102 shown in FIG. 7 as the output layer, thereby creating a learning model for recognizing the bone part (the femur 51 and the pelvis 52) (step S9). FCN is used for this machine learning. The convolutional neural network used in the FCN is configured as shown in FIG. 4: when the learning model is created, the input layer consists of the X-ray image 101 and the DRR images 103 and 104, and the output layer consists of the X-ray image teacher bone image 102 and the DRR image teacher bone image 105.
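  • A minimal fully convolutional network of the kind that could serve as such a learning model is sketched below in PyTorch (the layer sizes, loss function, and training data here are illustrative assumptions; this passage specifies only that an FCN is used):

```python
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    """Input: a 1-channel X-ray or DRR image. Output: a per-pixel bone score map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=1),  # 1x1 convolution produces the segmentation map
        )

    def forward(self, x):
        return self.net(x)

model = TinyFCN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# One training step: images (X-ray or DRR) form the input layer,
# teacher bone images form the output layer.
images = torch.rand(4, 1, 128, 128)                   # stand-in for X-ray / DRR images
teacher = (torch.rand(4, 1, 128, 128) > 0.5).float()  # stand-in for teacher bone images
loss = loss_fn(model(images), teacher)
loss.backward()
optimizer.step()
```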
  • Thereafter, X-ray imaging is performed on the subject (step S10). The bone image creation unit 86 then executes segmentation by converting the captured X-ray image using the previously created learning model (the convolutional layers), thereby creating an image of the bone part (the femur 51 and the pelvis 52) (step S11). That is, the previously created learning model is applied to the X-ray image obtained by X-ray imaging, and an image representing the bone part is produced as the output layer. Bone density is then measured by various methods using the bone region identified by the segmentation.
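  • At inference time the trained model is applied to a new X-ray image to obtain the bone region, as sketched below (the small stand-in model and the mean-intensity statistic are placeholders; this passage leaves the concrete bone density calculation to "various methods"):

```python
import torch
import torch.nn as nn

# Stand-in for the trained learning model (see the training sketch above).
model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 1, 1))
model.eval()

with torch.no_grad():
    xray = torch.rand(1, 1, 128, 128)       # captured X-ray image (stand-in)
    bone_prob = torch.sigmoid(model(xray))  # per-pixel bone probability
    bone_mask = bone_prob > 0.5             # segmented bone region

# Placeholder summary statistic computed over the segmented region only.
if bone_mask.any():
    mean_intensity_in_bone = xray[bone_mask].mean()
```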
  • Note that segmentation here is a concept that, in addition to the process of identifying a region such as a bone part as in this embodiment, also includes processes such as identifying the contour of a bone part or the like.
  • The operator corrects the created bone image as necessary. The corrected bone image and the original X-ray image are then used by the learning unit 85 for creating the learning model or for re-learning. As a result, the set of learning images, including failure examples, can be expanded, and a learning model with higher accuracy can be created.
  • As described above, according to this bone image creating apparatus, extraction accuracy can be improved by extracting the bone region with machine learning. Since machine learning is performed using both X-ray images and DRR images, the set of learning images can be expanded and clinical data for learning can be collected easily. Furthermore, by using DRR images in which the density of the bone region has been changed, machine learning can be performed with DRR images simulating bone parts with reduced bone density, so that bone extraction accuracy can be improved even for patients with reduced bone density, including osteoporosis patients.
  • Note that the X-ray image may be input to the learning model after being blurred with a Gaussian filter or the like. Because a DRR image is created from a comparatively low-resolution CT image, it has a lower resolution than an X-ray image. Blurring the X-ray image reduces its noise and brings its resolution close to that of the DRR images used at the time of learning, which makes it possible to identify bone parts more reliably.
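  • A sketch of such blurring (the sigma value is an illustrative assumption):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

xray = np.random.rand(512, 512).astype(np.float32)  # stand-in for an X-ray image
# Blur the X-ray image so its effective resolution is closer to that of the DRR images.
xray_blurred = gaussian_filter(xray, sigma=1.5)
```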
  • The DRR images and X-ray images input to the learning model may also be contrast-normalized in advance before being input. A local contrast normalization layer or a local response normalization layer may also be added to the intermediate layers.
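  • Per-image contrast normalization before input might look like the following (a simple zero-mean, unit-variance normalization; no specific formula is prescribed here):

```python
import numpy as np

def contrast_normalize(image: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Normalize an image to zero mean and unit standard deviation."""
    return (image - image.mean()) / (image.std() + eps)
```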
  • In the embodiment described above, some of the plurality of DRR images are created as DRR images in which the density of the bone region within the region including the bone part is changed, so that DRR images simulating a bone part with reduced bone density are created and used for machine learning. Instead of or in addition to this, some of the plurality of X-ray images may be X-ray images in which the density of the bone region within the region including the bone part has been changed by using dual-energy subtraction, which performs subtraction processing between an X-ray image taken with a high voltage applied to the X-ray tube (high-voltage image) and an X-ray image taken with a low voltage applied to the X-ray tube (low-voltage image).
  • When bone density is measured, a configuration is adopted in which bone density is determined by dual-energy subtraction, that is, by performing subtraction processing between an X-ray image taken with a high voltage applied to the X-ray tube and an X-ray image taken with a low voltage applied to the X-ray tube. When the bone image is identified as well, this dual-energy subtraction is used: the X-ray image taken with a high voltage applied to the X-ray tube and the X-ray image taken with a low voltage applied to the X-ray tube are weighted, and a dual-energy subtraction image representing the bone part is created by taking the difference between them.
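  • A sketch of the weighted subtraction described above (the weight is an illustrative value; in practice it would depend on the two tube voltages and is chosen so that soft tissue cancels and the bone part remains):

```python
import numpy as np

def dual_energy_subtraction(high_kv: np.ndarray, low_kv: np.ndarray,
                            weight: float = 0.6) -> np.ndarray:
    """Weight one image and take the difference between the high-voltage and
    low-voltage images (given as log-attenuation images) to produce an image
    in which the bone part is emphasized."""
    return high_kv - weight * low_kv

high = np.random.rand(256, 256)
low = np.random.rand(256, 256)
bone_image = dual_energy_subtraction(high, low)
```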
  • As the X-ray images used for machine learning, a high-voltage image, a low-voltage image, or a dual-energy subtraction image may be used, or an image obtained by concatenating these images in the channel direction may be used. Furthermore, by adjusting parameters on the dual-energy subtraction image, an image simulating a bone part with reduced bone density may be obtained.
  • FIG. 12 is a schematic diagram of the X-ray image 106 created by the X-ray image creation unit 81, and FIG. 13 is a schematic diagram of the DRR image 107 created by the DRR image creation unit 83. FIG. 6 described above is a schematic diagram of the X-ray image 101 near the subject's right leg and FIG. 9 is a schematic diagram of the DRR image 103 near the subject's right leg, whereas FIG. 12 is a schematic diagram of the X-ray image 106 near the subject's left leg and FIG. 13 is a schematic diagram of the DRR image 107 near the subject's left leg.
  • For an organ that has a shape symmetrical with respect to the body axis of the subject, the learning unit 85 executes machine learning on the left and right bone images collectively by horizontally flipping either the image of the right bone part or the image of the left bone part. That is, the X-ray image 106 near the subject's left leg shown in FIG. 12 is flipped left-to-right and used for machine learning together with the X-ray image 101 near the subject's right leg shown in FIG. 6. Similarly, the DRR image 107 near the subject's left leg shown in FIG. 13 is flipped left-to-right and used for machine learning together with the DRR image 103 near the subject's right leg shown in FIG. 9.
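  • A sketch of the horizontal flip used to pool the left and right sides for a single model (the array names are illustrative):

```python
import numpy as np

def flip_left_to_right(image: np.ndarray) -> np.ndarray:
    """Mirror an image of the left-side organ so it can be learned together
    with right-side images in one machine learning model."""
    return np.fliplr(image)

left_xray = np.random.rand(512, 512).astype(np.float32)  # e.g. an image like X-ray image 106
pooled_training_image = flip_left_to_right(left_xray)    # used alongside X-ray image 101
```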
  • In the embodiment described above, machine learning is performed using both X-ray images and DRR images; however, machine learning may instead be performed using only one of them.
  • In the embodiment described above, a bone part is targeted as the organ, but an internal organ may be targeted instead. In that case, the density of the organ region may be low during X-ray imaging; according to the present invention, a learning model that handles subjects with low organ density can be created even in such a case, which makes it possible to improve the detection accuracy of the organ.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Physics & Mathematics (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Theoretical Computer Science (AREA)
  • Pathology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Animal Behavior & Ethology (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Heart & Thoracic Surgery (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Biophysics (AREA)
  • Veterinary Medicine (AREA)
  • Optics & Photonics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Software Systems (AREA)
  • Quality & Reliability (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Physiology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Databases & Information Systems (AREA)
  • Orthopedic Medicine & Surgery (AREA)
  • Dentistry (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Pulmonology (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

A control unit (80) comprises: an X-ray image creation unit (81); an X-ray image memory unit (82) for storing an X-ray image and an X-ray image-use training bone part image; a DRR image creation unit (83) for creating a DRR image from a region containing a bone part; a DRR image memory unit (84) for storing the DRR image and a DRR image-use training bone part image for machine learning use; a learning unit (85) which uses the X-ray image and the X-ray image-use training bone part image to perform machine learning and uses the DRR image and the DRR image-use training bone part image to perform machine learning, thereby creating a learning model for recognizing the bone part; and a bone part image creation unit (86) which uses the learned model created in the learning unit (85) to convert an X-ray image of a region containing the bone part of a subject and create an image representing the bone part.

Description

Image analysis method, segmentation method, bone density measurement method, learning model creation method, and image creation device
The present invention relates to an image analysis method, a segmentation method, a bone density measurement method, a learning model creation method, and an image creation device.
In recent years, bone density measurement apparatuses that measure the bone density of a subject have been used for the diagnosis of osteoporosis. Patent Document 1 discloses an apparatus for quantitative bone mineral analysis comprising: means for generating radiation; a single crystal lattice irradiated with the radiation; means for collimating, out of the radiation reflected by the crystal lattice, only radiation having two predetermined reflection angles, so that radiation of two different energies is irradiated onto the subject simultaneously; radiation detection means on which the radiation of these two energies is incident after passing through the subject; pulse height analyzing means for separating transmission data for each energy by analyzing the pulse height of the output of the radiation detection means; and means for calculating bone density by processing the separated data.
Japanese Patent No. 2638875
Such bone density measurement targets the bone density of the lumbar vertebrae and the femur, which require clinical attention. Because the femur varies greatly in shape between individuals, identifying the bone region of the subject is important for stable follow-up observation. Conventionally, the operator specifies this region manually, which is not only laborious but also causes the specified region to vary from operator to operator.
In order to automatically execute the segmentation that extracts the bone region from an image including the bone part of the subject, an algorithm based on histogram thresholding could be used; however, particularly for segmentation of the femur, whose shape varies greatly between individuals, it has been difficult to identify the bone region accurately. As a result, the accuracy of the final bone density measurement deteriorates.
The present invention has been made to solve the above problems, and an object of the present invention is to provide an image analysis method, a segmentation method, a bone density measurement method, a learning model creation method, and an image creation device capable of creating an image in which the organ region is accurately extracted from an X-ray image of a region including an organ of the subject.
The invention according to claim 1 is an image analysis method for performing segmentation to identify the region of an organ by analyzing an image of a region including the organ of a subject. Machine learning is used as the segmentation technique, and the method includes: a corrected image creation step of creating a corrected image in which the density of the organ region in an image including the organ of the subject is changed; and a learning model creation step of creating a machine learning model by a learning process using the image including the organ of the subject and the corrected image created in the corrected image creation step.
The invention according to claim 2 is the invention according to claim 1, wherein an image representing the organ is created by applying the learning model created in the learning model creation step to an X-ray image of a region including the organ of the subject obtained by X-ray imaging of the subject.
The invention according to claim 3 is the invention according to claim 1, wherein the image of the region including the organ of the subject is a DRR image created from CT image data of the subject, and in the corrected image creation step, a region in which the CT value of the CT image data takes a predetermined value is treated as the organ region and its density is changed.
The invention according to claim 4 is the invention according to claim 3, wherein, when DRR images are created, a parameter including at least one of the projection coordinates and the angle of the geometric conditions is varied, or image processing including at least one of rotation, deformation, and enlargement/reduction is applied, so that a plurality of DRR images are created.
The invention according to claim 5 is the invention according to claim 3, wherein at least one of contrast change, noise addition, and edge enhancement is applied to the created DRR images.
The invention according to claim 6 is the invention according to claim 1, wherein the image of the region including the organ of the subject is an X-ray image created by X-ray imaging of the subject, and in the corrected image creation step, the density of the organ region is changed using the X-ray image and an image of the organ obtained by dual-energy subtraction.
The invention according to claim 7 is the invention according to claim 2, wherein an X-ray image of a region including the organ of the subject obtained by X-ray imaging of the subject and the image representing the organ obtained by conversion using the learning model created in the learning model creation step are used by the learning unit for learning of the learning model.
The invention according to claim 8 is the invention according to claim 1, wherein the organ has a shape that is symmetrical with respect to the body axis of the subject, and in the learning model creation step, a machine learning model is created collectively for the images of the left and right organs by horizontally flipping either the image of the right organ or the image of the left organ.
In the invention according to claim 9, the organ is a bone part of the subject, and the bone region is segmented using the image analysis method according to claim 1.
In the invention according to claim 10, bone density is measured for the bone region segmented by the segmentation method according to claim 9.
The invention according to claim 11 is a learning model creation method for creating a learning model used when performing segmentation to identify the region of an organ by analyzing, using machine learning, an image of a region including the organ of a subject, wherein the learning model is created by executing machine learning using an image including the organ of the subject and a corrected image created by changing the density of the organ region in the image including the organ of the subject.
The invention according to claim 12 is an image creation device for creating an image in which the organ region is extracted from an X-ray image of a region including an organ of a subject, the device comprising: an X-ray image storage unit that stores a plurality of X-ray images obtained by X-ray imaging of the region including the organ and a plurality of X-ray image teacher images for machine learning; a DRR image creation unit that creates DRR images of the region including the bone part; a DRR image storage unit that stores a plurality of DRR images created by the DRR image creation unit and a plurality of DRR image teacher images for machine learning created based on the DRR images created by the DRR image creation unit; and an image creation unit that creates an image representing the organ by converting an X-ray image of a region including the organ of the subject using a learning model for recognizing the organ, the learning model having been created in advance by executing machine learning using the plurality of X-ray images and X-ray image teacher images stored in the X-ray image storage unit and machine learning using the plurality of DRR images and DRR image teacher images stored in the DRR image storage unit.
The invention according to claim 13 is the invention according to claim 11, wherein the DRR image creation unit creates some of the plurality of DRR images as DRR images in which the density of the organ region within the region including the bone part is changed.
The invention according to claim 14 is the invention according to claim 11, wherein some of the plurality of X-ray images stored in the X-ray image storage unit are X-ray images in which the density of the organ region within the region including the organ has been changed by using dual-energy subtraction.
According to the inventions of claims 1 to 8, since a corrected image in which the density of the organ region of the subject is changed is used for machine learning, a learning model that can also handle subjects with low organ density can be created. This makes it possible to improve the detection accuracy of the organ.
According to the invention of claim 4, since parameters including the projection coordinates and angle of the geometric imaging conditions are varied, or image processing including rotation, deformation, and enlargement/reduction is applied, the position of the bone part can be detected accurately even when the position or posture of the subject, or the position of the imaging system during X-ray imaging, deviates slightly. Moreover, since a large number of DRR images can be created, a discriminator tailored to each patient can be learned, and the position of the bone part can be detected accurately even when low frame rate DRR images are used.
According to the invention of claim 5, since contrast change, noise addition, and edge enhancement are applied to the created DRR images, the position of the bone part can be detected accurately even when there is a difference in image quality between the DRR images and the X-ray images.
According to the invention of claim 7, by reusing, for learning of the learning model, a plurality of X-ray images together with the images representing the organ obtained by conversion using the trained model, the set of learning images can be expanded and a learning model with higher accuracy can be created.
According to the invention of claim 8, the detection accuracy can be made uniform for an organ having a shape that is symmetrical with respect to the body axis of the subject. Since machine learning is executed collectively on the images of the left and right organs, the set of learning images can be expanded and a learning model with higher accuracy can be created.
According to the invention of claim 9, segmentation of the bone region can be executed with high accuracy.
According to the invention of claim 10, bone density can be measured for the segmented bone region.
According to the invention of claim 11, since a corrected image in which the density of the organ region of the subject is changed is used for machine learning, a learning model that can also handle subjects with low organ density can be created.
According to the inventions of claims 12 to 14, extraction accuracy can be improved by extracting the organ region by machine learning. Since machine learning is performed using both X-ray images and DRR images, the set of learning images can be expanded and clinical data for learning can be collected easily.
FIG. 1 is a schematic front view of a bone image creating apparatus according to an embodiment of the present invention that also functions as an X-ray imaging apparatus.
FIG. 2 is a schematic side view of the bone image creating apparatus according to the embodiment of the present invention that also functions as an X-ray imaging apparatus.
FIG. 3 is a block diagram showing the control system of the bone image creating apparatus according to the embodiment of the present invention.
FIG. 4 is a schematic diagram for explaining the process of creating a bone image of a subject using machine learning with the bone image creating apparatus according to the embodiment of the present invention.
FIG. 5 is a flowchart showing the operation of creating a bone image, in which the bone region is extracted from an X-ray image of a region including a bone part of the subject, by the bone image creating apparatus according to the embodiment of the present invention.
FIG. 6 is a schematic diagram of an X-ray image 101 created by the X-ray image creation unit 81.
FIG. 7 is a schematic diagram of an X-ray image teacher bone image 102 created by the X-ray image creation unit 81.
FIG. 8 is an explanatory diagram schematically showing how a DRR image is created by virtual projection simulating the geometric conditions of the X-ray irradiation unit 11 and the X-ray detection unit 12 shown in FIG. 1.
FIG. 9 is a schematic diagram of a DRR image 103 created by the DRR image creation unit 83.
FIG. 10 is a schematic diagram of a DRR image 104 in which the density of the bone region has been reduced, created by the DRR image creation unit 83.
FIG. 11 is a schematic diagram of a DRR image teacher bone image 105 created by the DRR image creation unit 83.
FIG. 12 is a schematic diagram of an X-ray image 106 created by the X-ray image creation unit 81.
FIG. 13 is a schematic diagram of a DRR image 107 created by the DRR image creation unit 83.
Embodiments of the present invention are described below with reference to the drawings. FIG. 1 is a schematic front view of a bone image creation apparatus according to an embodiment of the present invention that also functions as an X-ray imaging apparatus, and FIG. 2 is a schematic side view thereof. In this embodiment, the case is described in which the present invention is applied to a bone image creation apparatus that creates an image of the subject's bones, among organs such as bones and internal organs.
This bone image creation apparatus, which also functions as an X-ray imaging apparatus, is also referred to as an X-ray fluoroscopy table. It comprises a table top 13, an X-ray tube holding member 15, an X-ray irradiation unit 11 disposed at the tip of the X-ray tube holding member 15, and an X-ray detection unit 12 disposed on the opposite side of the table top 13 from the X-ray irradiation unit 11 and having an X-ray detector such as a flat panel detector or an image intensifier (I.I.).
The table top 13, the X-ray tube holding member 15, the X-ray irradiation unit 11, and the X-ray detection unit 12 can pivot, by the action of a rotation mechanism 16 incorporating a motor (not shown), between the recumbent position shown in FIGS. 1 and 2, in which the surface of the table top 13 faces horizontally, and a standing position in which the surface of the table top 13 faces vertically. The rotation mechanism 16 itself can be raised and lowered along a main column 17 erected on a base plate 18.
When the table top 13 is in the recumbent position, X-ray imaging is performed on the subject lying down; in this case the subject is placed on the table top 13. When the table top 13 is in the standing position, X-ray imaging is performed on the subject standing up; in this case the subject stands in front of the table top 13.
Next, the configuration of the bone image creation apparatus according to the embodiment of the present invention is described. FIG. 3 is a block diagram showing the control system of the bone image creation apparatus according to the embodiment of the present invention.
This bone image creation apparatus is for creating a bone image in which the bone region is extracted from an X-ray image of a region including the subject's bones. It includes a control unit 80 that controls the entire apparatus and comprises a CPU as a processor that executes logical operations, a ROM storing the operation programs required to control the apparatus, and a RAM that temporarily stores data and the like during control.
As its functional configuration, the control unit 80 comprises: an X-ray image creation unit 81 for creating X-ray images; an X-ray image storage unit 82 that stores a plurality of X-ray images obtained by X-ray imaging of a region including the bones of a subject or the like, together with a plurality of teacher bone images for X-ray images used for machine learning; a DRR image creation unit 83 that creates DRR (Digitally Reconstructed Radiograph) images of the region including the bones by performing, on CT image data of that region, a virtual projection simulating the geometric conditions of the X-ray irradiation unit 11 and the X-ray detection unit 12 used when the subject is X-rayed; a DRR image storage unit 84 that stores the plurality of DRR images created by the DRR image creation unit 83 and a plurality of teacher bone images for DRR images created for machine learning on the basis of the DRR images created by the DRR image creation unit 83; a learning unit 85 that creates a learning model for recognizing bones by executing machine learning using the plurality of X-ray images and the plurality of teacher bone images for X-ray images stored in the X-ray image storage unit 82, and by executing machine learning using the plurality of DRR images and the plurality of teacher bone images for DRR images stored in the DRR image storage unit 84; and a bone image creation unit 86 that creates an image representing the bones by converting an X-ray image of a region including the subject's bones using the trained model learned by the learning unit 85. The control unit 80 is constituted by a computer in which software is installed, and the functions of the units included in the control unit 80 are realized by executing that software.
In the configuration described above, when the teacher bone image for DRR images and the teacher bone image for X-ray images are label images, the two may be identical images.
Also, in the configuration described above, the learning unit 85 may execute machine learning before the apparatus is delivered and store the result in advance, or may execute additional machine learning after the apparatus has been delivered to a medical institution or the like. The learning unit 85 creates a classifier by training with any of various methods, such as FCN (Fully Convolutional Networks), neural networks, support vector machines (SVM), and boosting.
The control unit 80 is connected to the X-ray irradiation unit 11 and the X-ray detection unit 12 described above. The control unit 80 is also connected to a display unit 21, constituted by a liquid crystal display panel or the like, that displays various images including X-ray images, and to an operation unit 22 provided with various input means such as a keyboard and a mouse. Furthermore, the control unit 80 is connected, online or offline, to a CT image storage unit 70 that stores CT images obtained by CT imaging of the subject. The CT image storage unit 70 may be included in a CT imaging apparatus, or in a treatment planning apparatus that creates a treatment plan for the subject.
Next, the operation of creating a bone image, in which the bone region is extracted from an X-ray image of a region including the subject's bones, using the bone image creation apparatus configured as described above is explained.
First, the basic concept for creating a bone image is explained. FIG. 4 is a schematic diagram for explaining the process of creating a bone image of a subject using machine learning with the bone image creation apparatus according to the embodiment of the present invention.
To identify the position of the bones using machine learning, a learning model is created first. In this learning model creation step, X-ray images and DRR images of regions including bones are used as the input layer, teacher bone images for X-ray images and teacher bone images for DRR images indicating the bones are used as the output layer, and the convolutional layers used as the learning model are trained by machine learning. A bone image is then created. In this bone image creation step, a captured fluoroscopic X-ray image is used as the input layer and is converted using the previously trained learning model, producing, as the output layer, an image in which the bone region has been extracted.
Next, the bone image creation operation through these steps is described in detail. FIG. 5 is a flowchart showing the operation of creating a bone image, in which the bone region is extracted from an X-ray image of a region including the subject's bones, with the bone image creation apparatus according to the embodiment of the present invention.
When a bone image is created by the bone image creation apparatus according to the embodiment of the present invention, an X-ray image creation step is executed first (step S1). In this image creation step, the X-ray image creation unit 81 shown in FIG. 3 creates a plurality of X-ray images by X-ray imaging the subject on the table top 13 using the X-ray irradiation unit 11 and the X-ray detection unit 12 shown in FIG. 1. The X-ray images may instead be acquired from another X-ray imaging apparatus, or may be created by X-ray imaging a phantom in place of the subject. The created X-ray images are stored in the X-ray image storage unit 82 shown in FIG. 3 (step S2).
Next, teacher bone images for X-ray images used for machine learning are created (step S3). Each teacher bone image for X-ray images is created by the X-ray image creation unit 81 by trimming the region of the subject's bones out of a previously created X-ray image. When the teacher bone images for X-ray images are created, images obtained by slightly translating, rotating, deforming, and scaling the trimmed X-ray images are also created. These translated, rotated, deformed, and scaled images are used for training as well, in order to cope with cases in which the subject moves, or the X-ray irradiation unit 11 and the X-ray detection unit 12 move, during the X-ray imaging described later. The created teacher bone images for X-ray images are stored in the X-ray image storage unit 82 shown in FIG. 3 (step S4).
At this time, the translation, rotation, deformation, and scaling are applied under the same conditions to both the X-ray image and the teacher bone image for X-ray images.
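By way of illustration only, the following minimal Python sketch shows one way to apply an identical small translation, rotation, and scaling to an X-ray image and its teacher bone image. The SciPy-based helpers, the parameter ranges, and the crop/pad behavior are assumptions of this sketch and are not part of the original disclosure.

```python
import numpy as np
from scipy import ndimage

def augment_pair(xray, teacher, rng):
    """Apply one random small transform identically to an X-ray image and
    its teacher bone image (both 2-D NumPy arrays)."""
    angle = rng.uniform(-3.0, 3.0)        # small rotation in degrees (assumed range)
    shift = rng.uniform(-5.0, 5.0, 2)     # small translation in pixels (assumed range)
    zoom  = rng.uniform(0.97, 1.03)       # small scaling factor (assumed range)

    def transform(img, order):
        out = ndimage.rotate(img, angle, reshape=False, order=order, mode="nearest")
        out = ndimage.shift(out, shift, order=order, mode="nearest")
        out = ndimage.zoom(out, zoom, order=order, mode="nearest")
        return _fit_to(out, img.shape)    # zoom changes the size slightly; restore it

    # linear interpolation for the grey-level image, nearest neighbour for the label
    return transform(xray, order=1), transform(teacher, order=0)

def _fit_to(img, shape):
    """Center-crop or zero-pad a 2-D array to the requested shape."""
    fitted = np.zeros(shape, dtype=img.dtype)
    h, w = min(shape[0], img.shape[0]), min(shape[1], img.shape[1])
    oy, ox = (img.shape[0] - h) // 2, (img.shape[1] - w) // 2
    ty, tx = (shape[0] - h) // 2, (shape[1] - w) // 2
    fitted[ty:ty + h, tx:tx + w] = img[oy:oy + h, ox:ox + w]
    return fitted
```

A typical call would be `aug_img, aug_teacher = augment_pair(xray, teacher, np.random.default_rng(0))`, producing one additional training pair per call.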
FIG. 6 is a schematic diagram of an X-ray image 101 created by the X-ray image creation unit 81, and FIG. 7 is a schematic diagram of a teacher bone image for X-ray images 102 created by the X-ray image creation unit 81.
The X-ray image 101 shows a femur 51, a pelvis 52, and a soft-tissue region 53. The teacher bone image for X-ray images 102 shows the femur 51 and the pelvis 52.
Next, the DRR image creation unit 83 shown in FIG. 3 performs, on the CT image data acquired from the CT image storage unit 70, a virtual projection simulating the geometric conditions of the X-ray irradiation unit 11 and the X-ray detection unit 12 shown in FIG. 1, thereby creating a plurality of DRR images showing the region including the bones (step S5) and storing them in the DRR image storage unit 84 (step S6). It also creates a plurality of teacher bone images for DRR images showing the region including the bones (step S7) and stores them in the DRR image storage unit 84 (step S8). When a teacher bone image for DRR images indicating the bones is created, the region in which the CT value is at or above a certain value is treated as the bone region. For example, a region with a CT value of 200 HU (Hounsfield units) or more is identified as the bone region, and the teacher bone image for DRR images is created from it.
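As a rough illustration of treating voxels at or above 200 HU as bone, the sketch below thresholds a CT volume and projects the resulting mask. It uses a parallel projection purely to stay short, whereas the text describes projection along the cone-beam geometry of FIG. 8, so the geometry here is an assumption of the sketch.

```python
import numpy as np

def bone_mask(ct_volume_hu, threshold_hu=200.0):
    """Label voxels at or above the threshold (200 HU, as in the text) as bone."""
    return ct_volume_hu >= threshold_hu

def teacher_bone_projection(ct_volume_hu, axis=1, threshold_hu=200.0):
    """Very simplified teacher image: a parallel projection of the bone mask.
    A detector pixel belongs to the bone region if any voxel on its ray is bone."""
    mask = bone_mask(ct_volume_hu, threshold_hu)
    return mask.any(axis=axis).astype(np.uint8)

# toy example: a 200-slice, 512x512 volume of soft tissue plus a bone-like block
ct = np.full((200, 512, 512), 40.0)      # roughly soft-tissue HU
ct[80:120, 200:300, 200:300] = 700.0     # roughly cortical-bone HU
teacher = teacher_bone_projection(ct)    # binary teacher bone image
```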
FIG. 8 is an explanatory diagram schematically showing how a DRR image is created by virtual projection simulating the geometric conditions of the X-ray irradiation unit 11 and the X-ray detection unit 12 shown in FIG. 1.
In this figure, reference numeral 300 denotes the CT image data. The CT image data 300 is three-dimensional voxel data, that is, a set of a plurality of two-dimensional CT image slices. For example, the CT image data 300 has a structure in which approximately 200 two-dimensional images of 512 × 512 pixels are stacked in the direction crossing the subject (the direction along line segment L1 or L2 shown in FIG. 8).
When the DRR image creation unit 83 creates a DRR image, a virtual projection is performed on the CT image data 300. The three-dimensional CT image data 300 is placed in the computer, and the geometry, that is, the geometric arrangement of the X-ray imaging system, is reproduced in the computer. In this embodiment, the X-ray irradiation unit 11 and the X-ray detection unit 12 are placed on either side of the CT image data 300. The arrangement of the CT image data 300, the X-ray irradiation unit 11, and the X-ray detection unit 12 has the same geometry as the arrangement of the subject, the X-ray irradiation unit 11, and the X-ray detection unit 12 when X-ray imaging is performed. Here, "geometry" means the geometric arrangement of the imaging target, the X-ray irradiation unit 11, and the X-ray detection unit 12.
In this state, a large number of line segments L are set, each connecting the X-ray irradiation unit 11 to a pixel of the X-ray detection unit 12 through the voxels of the CT image data 300. In FIG. 8, two line segments L1 and L2 are shown for convenience of explanation. A plurality of calculation points are set on each line segment L, and the CT value at each calculation point is computed; this computation interpolates the CT values of the CT data voxels surrounding the calculation point. The CT values of the calculation points on the line segment L are then accumulated, the accumulated value is converted into a line integral of the linear attenuation coefficient, and the X-ray attenuation is calculated, yielding the DRR image.
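A minimal ray-casting sketch of this DRR computation is given below, assuming a point source, a flat detector spanned by two vectors, trilinear interpolation of the CT values, and a crude linear mapping from HU to a linear attenuation coefficient. The numeric constants and the HU-to-attenuation conversion are assumptions of the sketch, not the conversion actually used by the apparatus.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def make_drr(ct_hu, source, det_center, det_u, det_v, det_px=(256, 256),
             n_samples=256, mu_water=0.02):
    """Cast one ray per detector pixel from `source` through the CT volume and
    integrate attenuation along it. `source`, `det_center`, `det_u`, `det_v`
    are (3,) arrays in voxel coordinates; det_u/det_v span the detector plane."""
    iu, iv = np.meshgrid(np.linspace(-0.5, 0.5, det_px[0]),
                         np.linspace(-0.5, 0.5, det_px[1]), indexing="ij")
    pix = det_center + iu[..., None] * det_u + iv[..., None] * det_v  # (H, W, 3)
    t = np.linspace(0.0, 1.0, n_samples)                      # sampling points per ray
    pts = source + t[:, None, None, None] * (pix - source)    # (n_samples, H, W, 3)
    coords = pts.reshape(-1, 3).T                              # (3, n_samples*H*W)
    hu = map_coordinates(ct_hu, coords, order=1, mode="constant", cval=-1000.0)
    hu = hu.reshape(n_samples, *det_px)
    mu = np.clip(mu_water * (1.0 + hu / 1000.0), 0.0, None)    # crude HU -> attenuation
    step = np.linalg.norm(pix - source, axis=-1) / n_samples   # path length per sample
    line_integral = (mu * step).sum(axis=0)
    return np.exp(-line_integral)                              # transmitted intensity image
```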
When the DRR images are created, the parameters for DRR image creation, including at least one of the projection coordinates and the angle, are varied with respect to the CT image data 300, or image processing including at least one of slight translation, rotation, deformation, and scaling is applied. The translation, rotation, deformation, and scaling are applied in order to cope with cases in which the subject moves, or the X-ray irradiation unit 11 and the X-ray detection unit 12 move, during the X-ray imaging described later.
In addition, at least one of contrast change, noise addition, and edge enhancement is applied to the created DRR images. The contrast change, noise addition, and edge enhancement are applied in order to absorb the difference in image quality between DRR images and X-ray images, so that the bone region can be recognized more reliably.
The above-described variations of the parameters for DRR image creation, such as the projection coordinates and angle, and the contrast change, noise addition, and edge enhancement, are applied either randomly within a predetermined range or in steps at regular intervals. In this way, a large number of DRR images can be created from the CT image data 300 of a single subject, and these DRR images can be used to create a custom-made learning model for each patient. It is also possible to create a learning model using the DRR images of a large number of patients.
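The sketch below illustrates sampling the projection parameters and the contrast, noise, and edge-enhancement perturbations at random within predetermined ranges. Every range shown is an assumed placeholder, and the DRR image is assumed to be normalized to [0, 1]; the geometric settings would be fed to the projection routine, which is not repeated here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sample_drr_settings(rng, n_images=1000):
    """Randomly sample, within assumed ranges, the projection parameters and
    image-quality perturbations used to mass-produce DRR training images."""
    settings = []
    for _ in range(n_images):
        settings.append({
            "shift_mm":  rng.uniform(-10.0, 10.0, size=3),  # projection-centre offset
            "angle_deg": rng.uniform(-5.0, 5.0, size=2),    # small projection angles
            "zoom":      rng.uniform(0.95, 1.05),           # magnification change
            "contrast":  rng.uniform(0.8, 1.2),             # gamma-like contrast factor
            "noise_std": rng.uniform(0.0, 0.02),            # additive Gaussian noise level
            "edge_gain": rng.uniform(0.0, 0.5),             # unsharp-mask strength
        })
    return settings

def perturb_image(drr, s, rng):
    """Apply the sampled contrast change, noise addition, and edge enhancement
    to a DRR image normalized to [0, 1]."""
    out = np.clip(drr, 0.0, 1.0) ** s["contrast"]                          # contrast change
    out = out + rng.normal(0.0, s["noise_std"], drr.shape)                 # noise addition
    out = out + s["edge_gain"] * (out - gaussian_filter(out, sigma=1.0))   # edge enhancement
    return np.clip(out, 0.0, 1.0)
```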
When the DRR images and the teacher bone images for DRR images are created, the parameters including the projection coordinates and angle of the geometric fluoroscopy conditions are varied under the same conditions for both, and any image processing including rotation, deformation, and scaling is likewise applied to both under the same conditions.
Of the DRR images and the teacher bone images for DRR images, when the DRR images are created, the DRR image creation unit 83 creates some of the plurality of DRR images as DRR images in which the density of the bone region within the region including the bones has been changed. More specifically, the CT values of the bone region, where the CT value is at or above a certain value, are set to values smaller than the actual CT values. This yields DRR images simulating bones whose bone density has decreased. Machine learning can therefore be performed with DRR images simulating bones of reduced density, which improves the bone extraction accuracy for patients including those whose bone density has decreased to the point of osteoporosis.
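A minimal sketch of this density reduction, assuming the 200 HU threshold mentioned above and an arbitrary scale factor, might look as follows; pulling bone voxels toward the threshold (rather than scaling the whole volume) is a design choice of the sketch, not something specified in the text.

```python
import numpy as np

def reduce_bone_density(ct_hu, threshold_hu=200.0, scale=0.5):
    """Return a copy of the CT volume in which voxels at or above the bone
    threshold have their HU values reduced, simulating lower bone density.
    The scale factor (and its range) is an illustrative assumption."""
    modified = ct_hu.astype(np.float32).copy()
    bone = modified >= threshold_hu
    # pull bone voxels toward the threshold so soft tissue is left untouched
    modified[bone] = threshold_hu + scale * (modified[bone] - threshold_hu)
    return modified
```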
FIG. 9 is a schematic diagram of a DRR image 103 created by the DRR image creation unit 83, FIG. 10 is a schematic diagram of a DRR image 104, created by the DRR image creation unit 83, in which the density of the bone region has been changed to a smaller value, and FIG. 11 is a schematic diagram of a teacher bone image for DRR images 105 created by the DRR image creation unit 83.
The DRR images 103 and 104 show the femur 51, the pelvis 52, and the soft-tissue region 53. The teacher bone image for DRR images 105 shows the femur 51 and the pelvis 52.
Once the above steps are complete, the learning unit 85 executes machine learning with the X-ray image 101 shown in FIG. 6 as the input layer and the teacher bone image for X-ray images 102 shown in FIG. 7 as the output layer, and also executes machine learning with the DRR image 103 shown in FIG. 9 and the DRR image 104 of FIG. 10, whose bone-region density has been changed to a smaller value, as the input layer and the teacher bone image for DRR images 105 shown in FIG. 11 as the output layer, thereby creating a learning model for recognizing the bones (the femur 51 and the pelvis 52) (step S9). For this machine learning, FCN, for example, is used. The convolutional neural network used in the FCN is configured as shown in FIG. 4 described above. That is, when the learning model is created, the input layer consists of the X-ray image 101 and the DRR images 103 and 104, and the output layer consists of the teacher bone image for X-ray images 102 and the teacher bone image for DRR images 105.
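Since the publication names FCN but does not specify the architecture, the following PyTorch sketch uses a deliberately tiny fully convolutional network and a binary cross-entropy loss as stand-ins; the layer sizes, loss, and optimizer are all assumptions of this sketch.

```python
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    """A deliberately small fully convolutional network standing in for the
    FCN mentioned in the text; the real architecture is not specified."""
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decode = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),              # per-pixel bone / not-bone logit
        )

    def forward(self, x):
        return self.decode(self.encode(x))

def train_step(model, optimizer, images, teachers):
    """images: X-ray and DRR images mixed in one batch, float tensors of shape
    (B, 1, H, W) in [0, 1]; teachers: matching teacher bone images with values
    in {0, 1} and the same shape."""
    model.train()
    optimizer.zero_grad()
    logits = model(images)
    loss = nn.functional.binary_cross_entropy_with_logits(logits, teachers)
    loss.backward()
    optimizer.step()
    return loss.item()

model = TinyFCN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```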
Once the learning model has been created through the above steps, X-ray imaging is performed on the subject (step S10). The bone image creation unit 86 then converts the captured X-ray image using the previously created learning model (convolutional layers), thereby executing segmentation and creating an image of the bones (the femur 51 and the pelvis 52) (step S11). That is, the previously created learning model is applied to the X-ray image obtained by X-ray imaging, and an image representing the bones is created as the output layer. Bone density is then measured by various methods using the bone region identified by the segmentation.
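A sketch of this inference step is shown below; the 0.5 probability threshold is an assumption, and the mean-intensity function is only a placeholder for the "various methods" of bone density measurement, which the publication does not detail.

```python
import numpy as np
import torch

def segment_bone(model, xray, threshold=0.5):
    """Run the trained model on a single X-ray image (2-D array in [0, 1])
    and return a binary bone mask."""
    model.eval()
    with torch.no_grad():
        x = torch.from_numpy(xray.astype(np.float32))[None, None]  # (1, 1, H, W)
        prob = torch.sigmoid(model(x))[0, 0].numpy()
    return prob >= threshold

def mean_intensity_in_bone(xray, mask):
    """Placeholder for the downstream measurement: the segmented region would
    be handed to an actual bone density measurement method."""
    return float(xray[mask].mean()) if mask.any() else float("nan")
```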
In this specification and elsewhere, "segmentation" is a concept that includes not only the process of identifying a region such as the bones in this embodiment, but also the process of identifying the contour or outer shape of the bones or the like.
Once the creation of the bone image has been completed through the above steps, the operator corrects the created bone image as necessary. The corrected bone image and the X-ray image from which it was derived are then used by the learning unit 85 to create the learning model or for re-learning. This expands the set of training images, including failure cases, making it possible to create a more accurate learning model.
As described above, with the bone image creation apparatus according to the embodiment of the present invention, extracting the bone region by machine learning improves the extraction accuracy. Because machine learning is performed using both X-ray images and DRR images, the set of training images can be expanded and clinical data for training can be collected easily. Furthermore, by using DRR images in which the density of the bone region has been changed, machine learning can be performed with DRR images simulating bones of reduced density, improving the bone extraction accuracy for patients including those whose bone density has decreased to the point of osteoporosis.
In the embodiment described above, the X-ray image may be blurred with a Gaussian filter or the like before being input to the learning model. In general, because DRR images are created from low-resolution CT images, their resolution is lower than that of X-ray images. Blurring the X-ray image therefore reduces its noise and brings its resolution down to that of the DRR images used during training, making it possible to identify the bones more reliably. Also in the embodiment described above, the DRR images and X-ray images input to the learning model may be contrast-normalized in advance, and a local contrast normalization layer or local response normalization layer may be added to the intermediate layers.
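A minimal preprocessing sketch along these lines, assuming a Gaussian sigma of 1.5 pixels and zero-mean/unit-variance global contrast normalization (the local normalization layers mentioned above are not shown), might be:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess_for_model(xray, sigma=1.5):
    """Blur the X-ray image toward DRR-like resolution, then contrast-normalize.
    The sigma value and the normalization scheme are assumptions of this sketch."""
    blurred = gaussian_filter(xray.astype(np.float32), sigma=sigma)
    return (blurred - blurred.mean()) / (blurred.std() + 1e-8)
```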
Next, another embodiment of the present invention is described.
In the embodiment described above, some of the plurality of DRR images are created as DRR images in which the density of the bone region within the region including the bones has been changed; DRR images simulating bones of reduced density are thereby created and used for machine learning. In this embodiment, by contrast, some of the plurality of X-ray images are turned into X-ray images in which the density of the bone region within the region including the bones has been changed by using dual-energy subtraction, which performs subtraction processing on an X-ray image captured with a high voltage applied to the X-ray tube (high-voltage image) and an X-ray image captured with a low voltage applied to the X-ray tube (low-voltage image).
That is, when the bone density of a subject is measured for the diagnosis of osteoporosis, a configuration is used in which bone density is measured by dual-energy subtraction, which performs subtraction processing on an X-ray image captured with a high voltage applied to the X-ray tube and an X-ray image captured with a low voltage applied to the X-ray tube. Dual-energy subtraction is also used when the bone image is identified: the X-ray image captured with a high voltage applied to the X-ray tube and the X-ray image captured with a low voltage applied to the X-ray tube are weighted, and their difference is taken, creating a dual-energy subtraction image representing the bones. By subtracting the dual-energy subtraction image from an X-ray image (the high-voltage image or the low-voltage image), an image is obtained in which the density of the bone region within the region including the bones is lowered (an image corresponding to an X-ray image of bones in a state of reduced bone density). Using such X-ray images with a low bone-region density for machine learning makes it possible to perform machine learning with X-ray images simulating bones of reduced density, improving the bone extraction accuracy for patients including osteoporosis patients.
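The sketch below illustrates the weighted subtraction and the subsequent density reduction, assuming the inputs are log-attenuation images (larger values mean more attenuation) and that the soft-tissue-cancelling weight and the reduction fraction are tunable placeholders; neither value comes from the publication.

```python
import numpy as np

def dual_energy_bone_image(high_kv, low_kv, weight=0.6):
    """Weighted subtraction of log-attenuation images; the weight that cancels
    soft tissue depends on the tube voltages and is assumed here."""
    return np.clip(low_kv - weight * high_kv, 0.0, None)

def simulate_low_bone_density(high_kv, low_kv, weight=0.6, fraction=0.5):
    """Subtract a fraction of the bone-only image from the high-voltage image,
    approximating an X-ray image of a subject with reduced bone density."""
    bone_only = dual_energy_bone_image(high_kv, low_kv, weight)
    return high_kv - fraction * bone_only
```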
In this case, any of the high-voltage image, the low-voltage image, or the dual-energy subtraction image may be used as the X-ray images for machine learning, or an image in which these images are concatenated in the channel direction may be used. Also, instead of subtracting the dual-energy subtraction image from the X-ray image (high-voltage or low-voltage image), an X-ray image simulating bones of reduced density may be obtained by adjusting the parameters of the dual-energy subtraction image.
Next, still another embodiment of the present invention is described. FIG. 12 is a schematic diagram of an X-ray image 106 created by the X-ray image creation unit 81, and FIG. 13 is a schematic diagram of a DRR image 107 created by the DRR image creation unit 83.
This embodiment is used when a bone image is created for bones, such as the femur, whose shape is bilaterally symmetric about the subject's body axis. FIG. 6 described above is a schematic diagram of the X-ray image 101 near the subject's right leg, and FIG. 9 is a schematic diagram of the DRR image 103 near the subject's right leg. In contrast, FIG. 12 is a schematic diagram of an X-ray image 106 near the subject's left leg, and FIG. 13 is a schematic diagram of a DRR image 107 near the subject's left leg.
When bones (the femur 51 and the pelvis 52) whose shape is bilaterally symmetric about the subject's body axis are the target in this way, the learning unit 85 flips either the images of the right-side bones or the images of the left-side bones horizontally, so that machine learning is executed on the images of the left and right bones together. For example, the X-ray image 106 near the subject's left leg shown in FIG. 12 is flipped horizontally and used for machine learning together with the X-ray image 101 near the subject's right leg shown in FIG. 6. Similarly, the DRR image 107 near the subject's left leg shown in FIG. 13 is flipped horizontally and used for machine learning together with the DRR image 103 near the subject's right leg shown in FIG. 9.
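A minimal sketch of pooling left- and right-side training pairs by horizontal flipping, assuming each pair is an (image, teacher) tuple of 2-D arrays:

```python
import numpy as np

def pool_left_and_right(right_pairs, left_pairs):
    """Flip left-side image/teacher pairs horizontally so that they can be
    trained together with the right-side pairs."""
    flipped = [(np.fliplr(img), np.fliplr(teacher)) for img, teacher in left_pairs]
    return list(right_pairs) + flipped
```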
Adopting this configuration makes the detection accuracy uniform for bones whose shape is bilaterally symmetric about the subject's body axis. In addition, because machine learning is executed on the images of the left and right bones together, the set of training images is expanded and a more accurate learning model can be created.
In the embodiments described above, machine learning is performed using both X-ray images and DRR images. However, machine learning may instead be performed using either X-ray images or DRR images alone.
In the embodiments described above, bones are the target organ, but other organs such as internal organs may also be targeted. For example, when the subject has a large amount of visceral fat, the density of the organ region is low in X-ray imaging. According to the present invention, even in such a case, a learning model can be created that also handles subjects whose organ density is low, making it possible to improve the organ detection accuracy.
DESCRIPTION OF SYMBOLS
 11   X-ray irradiation unit
 12   X-ray detection unit
 13   Table top
 14   Support column
 15   X-ray tube holding member
 16   Rotation mechanism
 17   Main column
 18   Base plate
 21   Display unit
 22   Operation unit
 70   CT image storage unit
 80   Control unit
 81   X-ray image creation unit
 82   X-ray image storage unit
 83   DRR image creation unit
 84   DRR image storage unit
 85   Learning unit
 86   Bone image creation unit
 300  CT image data

Claims (14)

  1.  An image analysis method for performing segmentation to identify the region of an organ by analyzing an image of a region including the organ of a subject, the method using machine learning as the segmentation technique and comprising:
      a corrected image creation step of creating a corrected image in which the density of the organ region in an image including the organ of the subject is changed; and
      a learning model creation step of creating a machine-learning model by a learning process using the image including the organ of the subject and the corrected image created in the corrected image creation step.
  2.  The image analysis method according to claim 1, wherein an image representing the organ is created by converting an X-ray image of a region including the organ of the subject, obtained by X-ray imaging of the subject, using the learning model created in the learning model creation step.
  3.  The image analysis method according to claim 1, wherein the image of the region including the organ of the subject is a DRR image created from CT image data of the subject, and in the corrected image creation step, a region in which the CT value of the CT image data takes a predetermined value is treated as the organ region and its density is changed.
  4.  The image analysis method according to claim 3, wherein, when the DRR image is created, a plurality of DRR images are created by varying a parameter including at least one of the projection coordinates and the angle of the geometric conditions, or by applying image processing including at least one of rotation, deformation, and scaling of the image.
  5.  The image analysis method according to claim 3, wherein at least one of contrast change, noise addition, and edge enhancement is applied to the created DRR image.
  6.  The image analysis method according to claim 1, wherein the image of the region including the organ of the subject is an X-ray image created by X-ray imaging of the subject, and in the corrected image creation step, the density of the organ region is changed using the X-ray image and an image of the organ obtained by dual-energy subtraction.
  7.  The image analysis method according to claim 2, wherein an X-ray image of a region including the organ of the subject, obtained by X-ray imaging of the subject, and the image representing the organ, obtained by conversion using the learning model created in the learning model creation step, are used for training of the learning model by the learning unit.
  8.  The image analysis method according to claim 1, wherein the organ has a shape that is bilaterally symmetric about the body axis of the subject, and in the learning model creation step, a machine-learning model is created for the images of the left and right organs together by horizontally flipping either the image of the right-side organ or the image of the left-side organ.
  9.  A segmentation method for segmenting the region of a bone using the image analysis method according to claim 1, wherein the organ is a bone of the subject.
  10.  A bone density measurement method for measuring bone density of the bone region segmented by the segmentation method according to claim 9.
  11.  A learning model creation method for creating a learning model used when performing segmentation to identify the region of an organ by analyzing, using machine learning, an image of a region including the organ of a subject, wherein a learning model is created by executing machine learning using an image including the organ of the subject and a corrected image created by changing the density of the organ region in the image including the organ of the subject.
  12.  An image creation device for creating an image in which the region of an organ is extracted from an X-ray image of a region including the organ of a subject, comprising:
      an X-ray image storage unit that stores a plurality of X-ray images obtained by X-ray imaging of the region including the organ and a plurality of teacher images for X-ray images for machine learning;
      a DRR image creation unit that creates DRR images of the region including the bone;
      a DRR image storage unit that stores the plurality of DRR images created by the DRR image creation unit and a plurality of teacher images for DRR images for machine learning created on the basis of the DRR images created by the DRR image creation unit; and
      an image creation unit that creates an image representing the organ by converting an X-ray image of the region including the organ of the subject using a learning model for recognizing the organ, the learning model having been created in advance by executing machine learning using the plurality of X-ray images and the plurality of teacher images for X-ray images stored in the X-ray image storage unit and by executing machine learning using the plurality of DRR images and the plurality of teacher images for DRR images stored in the DRR image storage unit.
  13.  The image creation device according to claim 11, wherein the DRR image creation unit creates some of the plurality of DRR images as DRR images in which the density of the organ region within the region including the bone has been changed.
  14.  The image creation device according to claim 11, wherein some of the plurality of X-ray images stored in the X-ray image storage unit are X-ray images in which the density of the organ region within the region including the organ has been changed by using dual-energy subtraction.
PCT/JP2019/011773 2018-04-24 2019-03-20 Image analysis method, segmentation method, bone density measurement method, learning model creation method, and image creation device WO2019208037A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
KR1020207032563A KR102527440B1 (en) 2018-04-24 2019-03-20 Image analysis method, segmentation method, bone density measurement method, learning model creation method, and image creation device
CN201980035078.8A CN112165900A (en) 2018-04-24 2019-03-20 Image analysis method, segmentation method, bone density measurement method, learning model generation method, and image generation device
JP2020516112A JP7092190B2 (en) 2018-04-24 2019-03-20 Image analysis method, segmentation method, bone density measurement method, learning model creation method and image creation device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018-083340 2018-04-24
JP2018083340 2018-04-24

Publications (1)

Publication Number Publication Date
WO2019208037A1 true WO2019208037A1 (en) 2019-10-31

Family

ID=68293538

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/011773 WO2019208037A1 (en) 2018-04-24 2019-03-20 Image analysis method, segmentation method, bone density measurement method, learning model creation method, and image creation device

Country Status (4)

Country Link
JP (1) JP7092190B2 (en)
KR (1) KR102527440B1 (en)
CN (1) CN112165900A (en)
WO (1) WO2019208037A1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4056120A1 (en) * 2021-03-12 2022-09-14 FUJI-FILM Corporation Estimation device, estimation method, and estimation program
WO2022224558A1 (en) * 2021-04-22 2022-10-27 日本装置開発株式会社 X-ray inspection device
WO2022244495A1 (en) * 2021-05-17 2022-11-24 キヤノン株式会社 Radiation imaging device and radiation imaging system
JP2023065028A (en) * 2021-10-27 2023-05-12 堺化学工業株式会社 Teacher data production method, image analysis model production method, image analysis method, teacher data production program, image analysis program, and teacher data production device
WO2023224022A1 (en) * 2022-05-20 2023-11-23 国立大学法人大阪大学 Program, information processing method, and information processing device
US11963810B2 (en) 2021-03-12 2024-04-23 Fujifilm Corporation Estimation device, estimation method, and estimation program

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05184562A (en) * 1992-01-17 1993-07-27 Fuji Photo Film Co Ltd Radiographing direction recognizing method for radiation picture
JP2002236910A (en) * 2001-02-09 2002-08-23 Hitachi Medical Corp Three-dimensional image creating method
JP2007044485A (en) * 2005-08-05 2007-02-22 Ge Medical Systems Global Technology Co Llc Method and device for segmentation of part with intracerebral hemorrhage
JP2008167949A (en) * 2007-01-12 2008-07-24 Fujifilm Corp Radiographic image processing method and apparatus, and program
JP2014158628A (en) * 2013-02-20 2014-09-04 Univ Of Tokushima Image processor, image processing method, control program, and recording medium
US20150094564A1 (en) * 2012-05-03 2015-04-02 University Of Pittsburgh - Of The Commonwealth System Of Higher Education Intelligent algorithms for tracking three-dimensional skeletal movement from radiographic image sequences
JP2017185007A (en) * 2016-04-05 2017-10-12 株式会社島津製作所 Radiographic apparatus, radiation image object detection program, and object detection method in radiation image
US20170323444A1 (en) * 2016-05-09 2017-11-09 Siemens Healthcare Gmbh Method and apparatus for atlas/model-based segmentation of magnetic resonance images with weakly supervised examination-dependent learning

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2638875B2 (en) 1988-01-31 1997-08-06 株式会社島津製作所 Bone mineral quantitative analyzer
JP2008011901A (en) * 2006-07-03 2008-01-24 Fujifilm Corp Image type discrimination device, method and program
JP2010246883A (en) * 2009-03-27 2010-11-04 Mitsubishi Electric Corp Patient positioning system
JP6430238B2 (en) * 2014-12-24 2018-11-28 好民 村山 Radiography equipment
JP6815586B2 (en) * 2015-06-02 2021-01-20 東芝エネルギーシステムズ株式会社 Medical image processing equipment and treatment system
KR101928984B1 (en) * 2016-09-12 2018-12-13 주식회사 뷰노 Method and apparatus of bone mineral density measurement

Also Published As

Publication number Publication date
JP7092190B2 (en) 2022-06-28
JPWO2019208037A1 (en) 2021-04-01
KR20200142057A (en) 2020-12-21
KR102527440B1 (en) 2023-05-02
CN112165900A (en) 2021-01-01

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19793450; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2020516112; Country of ref document: JP; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
ENP Entry into the national phase (Ref document number: 20207032563; Country of ref document: KR; Kind code of ref document: A)
122 Ep: pct application non-entry in european phase (Ref document number: 19793450; Country of ref document: EP; Kind code of ref document: A1)