CN112165900A - Image analysis method, segmentation method, bone density measurement method, learning model generation method, and image generation device - Google Patents


Info

Publication number
CN112165900A
Authority
CN
China
Prior art keywords
image
organ
ray
drr
bone
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201980035078.8A
Other languages
Chinese (zh)
Inventor
高桥涉
押川翔太
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shimadzu Corp
Original Assignee
Shimadzu Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shimadzu Corp
Publication of CN112165900A

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00: Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
    • A61B6/02: Devices for diagnosis sequentially in different planes; stereoscopic radiation diagnosis
    • A61B6/03: Computerised tomographs
    • A61B6/032: Transmission computed tomography [CT]
    • A61B6/50: Clinical applications
    • A61B6/505: Clinical applications involving diagnosis of bone
    • A61B6/52: Devices using data or image processing specially adapted for radiation diagnosis
    • A61B6/5205: Involving processing of raw data to produce diagnostic data
    • A61B6/5211: Involving processing of medical diagnostic data
    • A61B6/5217: Extracting a diagnostic or physiological parameter from medical diagnostic data
    • G: PHYSICS
    • G06N20/00: Machine learning
    • G06T7/00: Image analysis
    • G06T7/0002: Inspection of images, e.g. flaw detection
    • G06T7/0012: Biomedical image inspection
    • G06T7/10: Segmentation; edge detection
    • G06T7/11: Region-based segmentation
    • G16H30/40: ICT specially adapted for processing medical images, e.g. editing
    • G16H50/20: ICT specially adapted for computer-aided diagnosis, e.g. based on medical expert systems
    • G06T2207/10081: Computed x-ray tomography [CT]
    • G06T2207/10116: X-ray image
    • G06T2207/20081: Training; learning
    • G06T2207/20084: Artificial neural networks [ANN]
    • G06T2207/30008: Bone

Abstract

A control unit (80) according to the present invention includes: an X-ray image generation unit (81); an X-ray image storage unit (82) that stores X-ray images and training bone images for the X-ray images; a DRR image generation unit (83) that generates DRR images of a region including a bone; a DRR image storage unit (84) that stores the DRR images and training bone images for the DRR images used in machine learning; a learning unit (85) that generates a learning model for identifying a bone by performing machine learning using the X-ray images together with their training bone images and using the DRR images together with their training bone images; and a bone image generation unit (86) that generates an image representing the bone by converting an X-ray image of a region including the bone of the subject using the learning model generated by the learning unit (85).

Description

Image analysis method, segmentation method, bone density measurement method, learning model generation method, and image generation device
Technical Field
The invention relates to an image analysis method, a segmentation method, a bone density measurement method, a learning model generation method, and an image generation device.
Background
In recent years, bone density measuring apparatuses that measure the bone density of a subject have been used for the diagnosis of osteoporosis. Patent document 1 discloses a bone mineral quantification device including: a means for generating radiation; a grating irradiated with the radiation; a means for simultaneously irradiating the subject with two kinds of radiation of different energies by passing only the radiation at two defined reflection angles out of the radiation reflected by the grating; a radiation detection means on which the radiation of these two energies is incident after passing through the subject; a pulse-height analyzing means that separates the transmission data for each energy by performing pulse-height analysis on the output of the radiation detection means; and a means for calculating the bone density by performing arithmetic processing on the separated data.
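The dual-energy principle behind such devices can be sketched as follows: two log-attenuation measurements at different energies are solved, pixel by pixel, as a 2x2 linear system for the bone and soft-tissue thicknesses. This is an illustrative sketch, not the device of patent document 1; the attenuation coefficients are assumed values.

```python
import numpy as np

# Assumed linear attenuation coefficients (1/cm), purely illustrative.
MU = np.array([[0.50, 0.25],   # low energy:  [bone, soft tissue]
               [0.30, 0.20]])  # high energy: [bone, soft tissue]

def separate_bone_soft(log_low, log_high):
    """Solve MU @ [t_bone, t_soft] = [log_low, log_high] for the thicknesses."""
    rhs = np.stack([np.asarray(log_low), np.asarray(log_high)], axis=-1)[..., None]
    thickness = np.linalg.solve(MU, rhs)[..., 0]
    return thickness[..., 0], thickness[..., 1]  # bone (cm), soft tissue (cm)

# Forward-simulate one pixel with 1.2 cm of bone over 8.0 cm of soft tissue,
# then recover the thicknesses from the two simulated measurements.
t_true = np.array([1.2, 8.0])
log_low, log_high = MU @ t_true
bone_cm, soft_cm = separate_bone_soft(log_low, log_high)
```

Because `np.linalg.solve` broadcasts, the same function works unchanged when `log_low` and `log_high` are full two-dimensional images rather than single pixels.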
Documents of the prior art
Patent document
Patent document 1: Japanese Patent No. 2638875
Disclosure of Invention
Technical problem to be solved by the invention
Such bone density measurement is performed on the lumbar vertebrae and the femur, which require clinical attention. Because the shape of the femur differs greatly between individuals, specifying the region of the subject's bone is important for stable follow-up observation. Conventionally, the operator specifies the bone region manually in advance, which not only makes the work cumbersome but also causes the problem that the regions specified by different operators are not consistent.
In order to automatically segment the region of the bone from an image including the bone of the subject, an algorithm using histogram-based threshold processing is conceivable. However, it is difficult to accurately determine the bone region when segmenting the femur, whose shape varies greatly between individuals, and as a result the accuracy of the final bone density measurement deteriorates.
The present invention has been made to solve the above-described problems, and an object thereof is to provide an image analysis method, a segmentation method, a bone density measurement method, a learning model generation method, and an image generation device that can generate an image in which a region of an organ of a subject is accurately extracted from an X-ray image of the region including the organ.
Solution for solving the above technical problem
The invention described in claim 1 is an image analysis method that performs segmentation for specifying the region of an organ of a subject by analyzing, using machine learning, an image of a region including the organ, the image analysis method including: a correction image generation step of generating a correction image in which the density of the organ region in an image including the organ of the subject is changed; and a learning model generation step of generating a machine-learned learning model by a learning process using the image including the organ of the subject and the correction image generated in the correction image generation step.
The invention described in claim 2 is the invention described in claim 1, wherein an image representing the organ is generated by converting an X-ray image of a region including the organ of the subject, obtained by X-ray imaging of the subject, using the learning model generated in the learning model generation step.
The invention described in claim 3 is the invention described in claim 1, wherein the image of the region including the organ of the subject is a DRR image generated from CT image data of the subject, and in the correction image generation step, the region in which the CT value of the CT image data is equal to or greater than a predetermined value is treated as the organ region and the density of that region is changed.
The invention described in claim 4 is the invention described in claim 3, wherein, in generating the DRR image, a plurality of DRR images are generated by changing at least one of the parameters of the geometric perspective condition, including the projection coordinates and angle, or by performing image processing including at least one of rotation, deformation, and enlargement or reduction of the image.
The invention described in claim 5 is the invention described in claim 3, wherein at least one of contrast change, noise addition, and edge enhancement is performed on the generated DRR image.
The invention described in claim 6 is the invention described in claim 1, wherein the image of the region including the organ of the subject is an X-ray image generated by X-ray imaging of the subject, and in the correction image generation step, the density of the organ region is changed using the X-ray image and an image of the organ obtained by dual-energy subtraction.
The invention described in claim 7 is the invention described in claim 2, wherein an X-ray image of a region including the organ of the subject obtained by X-ray imaging, together with the image representing the organ obtained by converting it with the learning model generated in the learning model generation step, is used by the learning unit for further learning of the learning model.
The invention described in claim 8 is the invention described in claim 1, wherein the organ has a bilaterally symmetric shape with respect to the body axis of the subject, and in the learning model generation step, a machine-learning model is generated for the images of the left and right organs at once by inverting either the image of the right organ or the image of the left organ.
The invention described in claim 9 is a segmentation method in which the organ is a bone of the subject and the region of the bone is segmented using the image analysis method described in claim 1.
The invention described in claim 10 is a bone density measurement method for measuring the bone density of the bone region segmented by the segmentation method described in claim 9.
The invention described in claim 11 is a learning model generation method for generating a learning model used for segmentation for specifying a region of an organ of a subject by analyzing an image of the region including the organ by machine learning, wherein the learning model is generated by performing machine learning using an image including the organ of the subject and a correction image generated by changing a density of the region of the organ in the image including the organ of the subject.
The invention described in claim 12 is an image generating apparatus that generates an image in which the region of an organ of a subject is extracted from an X-ray image of a region including the organ, the image generating apparatus including: an X-ray image storage unit that stores a plurality of X-ray images obtained by X-ray imaging of a region including the organ and a plurality of training images for X-ray images for machine learning; a DRR image generation unit that generates DRR images of a region including the organ; a DRR image storage unit that stores the plurality of DRR images generated by the DRR image generation unit and training images for DRR images for machine learning generated based on those DRR images; and an image generation unit that generates an image representing the organ by converting an X-ray image of a region including the organ of the subject using a learning model for identifying the organ, the learning model being generated in advance by machine learning using the plurality of X-ray images and training images for X-ray images stored in the X-ray image storage unit, and by machine learning using the plurality of DRR images and training images for DRR images stored in the DRR image storage unit.
The invention described in claim 13 is the invention described in claim 12, wherein the DRR image generation unit generates some of the plurality of DRR images as DRR images in which the density of the organ region within the region including the organ is changed.
The invention described in claim 14 is the invention described in claim 12, wherein some of the plurality of X-ray images stored in the X-ray image storage unit are X-ray images in which the density of the organ region within the region including the organ has been changed using dual-energy subtraction.
Effects of the invention
According to the inventions described in claims 1 to 8, since a correction image in which the density of the organ region of the subject has been changed is used for machine learning, a learning model that also covers subjects with low organ density can be generated. Therefore, the detection accuracy of the organ can be improved.
According to the invention described in claim 4, since the parameters of the geometric perspective condition, including the projection coordinates and angle, are changed, or image processing including rotation, deformation, enlargement, and reduction of the image is performed, the position of the bone can be accurately detected even when the position or posture of the subject, or the position of the imaging system, deviates slightly at the time of X-ray imaging. Furthermore, since a large number of DRR images can be generated, an identifier customized to each patient can be learned, and the position of the bone can be accurately detected even from X-ray images captured at a low frame rate.
According to the invention described in claim 5, since contrast change, noise addition, and edge enhancement are performed on the generated DRR images, the position of the bone can be accurately detected even when the image quality of the DRR images differs from that of the X-ray images.
According to the invention described in claim 7, since the images representing the organ, obtained by converting a plurality of X-ray images with the learned model, are reused for learning of the learning model, the set of learning images can be expanded to generate a more accurate learning model.
According to the invention described in claim 8, the detection accuracy of an organ having a shape symmetrical with respect to the body axis of the subject can be made uniform. Further, since machine learning is collectively executed on the images of the left and right organs, the learning images can be expanded to generate a learning model with higher accuracy.
According to the invention described in claim 9, the region of the bone portion can be divided with high accuracy.
According to the invention of claim 10, the measurement of the bone density can be performed on the region of the bone portion obtained by the division.
According to the invention described in claim 11, since the corrected image obtained by changing the density of the region of the organ of the subject is used for machine learning, it is possible to generate a learning model corresponding to a subject whose organ density is low.
According to the inventions described in claims 12 to 14, the region of the organ can be extracted by machine learning, and the extraction accuracy can be improved. At this time, since machine learning is performed using both the X-ray image and the DRR image, the learning image can be expanded, and collection of clinical data for learning can be easily performed.
Drawings
Fig. 1 is a schematic front view of a bone image generating apparatus according to an embodiment of the present invention, which also functions as an X-ray imaging apparatus.
Fig. 2 is a schematic side view of the bone image generating apparatus according to the embodiment of the present invention, which also functions as an X-ray imaging apparatus.
Fig. 3 is a block diagram showing a control system of the bone image generating apparatus according to the embodiment of the present invention.
Fig. 4 is a schematic diagram for explaining a process of generating a bone image of a subject by machine learning by the bone image generating apparatus according to the embodiment of the present invention.
Fig. 5 is a flowchart showing an operation of generating a bone image in which a bone region is extracted from an X-ray image of a region including a bone of a subject by a bone image generation device according to an embodiment of the present invention.
Fig. 6 is a schematic diagram of the X-ray image 101 generated by the X-ray image generation unit 81.
Fig. 7 is a schematic diagram of the X-ray image training bone portion image 102 generated by the X-ray image generation unit 81.
Fig. 8 is an explanatory diagram schematically showing a state in which a DRR image is generated by a virtual projection simulating the geometrical conditions of the X-ray irradiation unit 11 and the X-ray detection unit 12 shown in fig. 1.
Fig. 9 is a schematic diagram of the DRR image 103 generated by the DRR image generating section 83.
Fig. 10 is a schematic diagram of the DRR image 104 in which the density of the region of the bone portion generated by the DRR image generation unit 83 is changed to a small value.
Fig. 11 is a schematic diagram of the training bone portion image 105 for the DRR image generated by the DRR image generation unit 83.
Fig. 12 is a schematic diagram of the X-ray image 106 generated by the X-ray image generation unit 81.
Fig. 13 is a schematic diagram of the DRR image 107 generated by the DRR image generating section 83.
Detailed Description
Hereinafter, embodiments of the present invention will be described with reference to the drawings. Fig. 1 is a schematic front view, and fig. 2 a schematic side view, of a bone image generating device according to an embodiment of the present invention, which also functions as an X-ray imaging device. In this embodiment, a case will be described in which the present invention is applied to a bone image generating apparatus that generates an image of a bone of the subject, the bone being one example of an organ such as a bone or an internal organ.
The bone image generating device, which also functions as an X-ray imaging device and is also referred to as an X-ray fluoroscopy table, includes: a top plate 13; an X-ray tube holding member 15; an X-ray irradiation unit 11 disposed at the distal end of the X-ray tube holding member 15; and an X-ray detection unit 12, comprising an X-ray detector such as a flat panel detector or an image intensifier (I.I.), disposed on the opposite side of the top plate 13 from the X-ray irradiation unit 11.
The top plate 13, the X-ray tube holding member 15, the X-ray irradiation unit 11, and the X-ray detection unit 12 can be rotated, by the action of a rotation mechanism 16 with a built-in motor (not shown), between a lying position in which the surface of the top plate 13 faces the horizontal direction and a standing position in which the surface of the top plate 13 faces the vertical direction, as shown in figs. 1 and 2. In addition, the rotation mechanism 16 itself can be raised and lowered along a main column 17 erected on a base plate 18.
When the top plate 13 is in the lying position, the subject in the lying position is subjected to X-ray imaging. At this time, the subject is placed on the top plate 13. When the top plate 13 is in the standing position, X-ray imaging is performed on the subject in the standing state. At this time, the subject stands on the front surface of the top plate 13.
Next, the configuration of the bone image generating apparatus according to the embodiment of the present invention will be described. Fig. 3 is a block diagram showing the control system of the bone image generating apparatus according to the embodiment of the present invention.
The bone image generation device, which generates a bone image in which the region of a bone of the subject is extracted from an X-ray image of a region including the bone, includes a control unit 80 that controls the entire apparatus and comprises: a CPU as a processor that performs logical operations; a ROM that stores the operation programs required for controlling the device; and a RAM that temporarily stores data during control.
The control unit 80 is functionally configured to include: an X-ray image generation unit 81 that generates X-ray images; an X-ray image storage unit 82 that stores a plurality of X-ray images obtained by X-ray imaging of a region including a bone of the subject or the like, and a plurality of training bone images for X-ray images for machine learning; a DRR image generation unit 83 that generates DRR (Digitally Reconstructed Radiograph) images of the region including the bone by performing, on CT image data of that region, a virtual projection simulating the geometric conditions of the X-ray irradiation unit 11 and the X-ray detection unit 12 at the time of X-ray imaging of the subject; a DRR image storage unit 84 that stores the DRR images generated by the DRR image generation unit 83 and a plurality of training bone images for DRR images for machine learning generated based on those DRR images; a learning unit 85 that generates a learning model for identifying a bone by performing machine learning using the plurality of X-ray images and training bone images for X-ray images stored in the X-ray image storage unit 82, and machine learning using the plurality of DRR images and training bone images for DRR images stored in the DRR image storage unit 84; and a bone image generation unit 86 that generates an image representing the bone by converting an X-ray image of a region including the bone of the subject using the learning model generated by the learning unit 85. The control unit 80 is constituted by a computer in which software is installed, and the functions of the respective units included in the control unit 80 are realized by executing that software.
In the above configuration, the training bone image for the DRR image and the training bone image for the X-ray image are label images, and the two may be the same image.
In the above configuration, the learning unit 85 may execute the machine learning before delivery of the apparatus and store the result in advance, or may additionally execute machine learning after delivery to a site such as a medical institution. The learning unit 85 generates the identifier by learning with any method, such as FCN (Fully Convolutional Networks), neural networks, support vector machines (SVM), or boosting.
The control unit 80 is connected to the X-ray irradiation unit 11 and the X-ray detection unit 12. The control unit 80 is connected to a display unit 21 which is configured by a liquid crystal display panel or the like and displays various images including X-ray images, and an operation unit 22 which is provided with various input devices such as a keyboard and a mouse. The control unit 80 is connected to a CT image storage unit 70 that stores an image obtained by CT imaging of the subject, on-line or off-line. The CT image storage unit 70 may be included in a CT imaging apparatus or a treatment planning apparatus that generates a treatment plan of a subject.
Next, an operation of generating a bone image in which a bone region is extracted from an X-ray image of a region including a bone of a subject by using the bone image generating apparatus having the above-described configuration will be described.
First, a basic concept for generating a bone image will be described. Fig. 4 is a schematic diagram for explaining a process of generating a bone image of a subject by machine learning by the bone image generating apparatus according to the embodiment of the present invention.
To determine the position of the bone using machine learning, a learning model is first generated. In the learning model generation step, the convolutional layers used as the learning model are trained by machine learning with the X-ray images and DRR images of the region including the bone as the input layer, and the training bone images for the X-ray images and for the DRR images, which show the bone, as the output layer. A bone image is then generated. In the bone image generation step, a captured fluoroscopic X-ray image is used as the input layer, and an image showing the extracted bone region is generated as the output layer by converting the fluoroscopic X-ray image with the previously trained learning model.
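The two-step flow above, fitting a model from image/label pairs and then converting a new image into a bone label image, can be illustrated in miniature. The patent trains convolutional layers (an FCN-style segmenter); the sketch below substitutes a single learned intensity threshold on synthetic data, purely to show the input-to-label mapping, and is not the actual network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "DRR": a brighter square stands in for bone, the darker
# background for soft tissue. All data here is made up for illustration.
image = rng.normal(0.3, 0.05, (32, 32))
image[8:24, 8:24] += 0.5
label = np.zeros((32, 32), dtype=bool)
label[8:24, 8:24] = True                 # training bone image (label mask)

def fit_threshold(img, mask):
    """'Learning' step: midpoint between bone and background mean intensity."""
    return 0.5 * (img[mask].mean() + img[~mask].mean())

def segment(img, threshold):
    """Bone image generation step: convert an input image to a bone mask."""
    return img > threshold

threshold = fit_threshold(image, label)  # learning model generation step
pred = segment(image, threshold)         # bone image generation step
accuracy = np.mean(pred == label)
```

On this toy data the fitted threshold lands midway between the two intensity populations, so the predicted mask reproduces the training bone image almost exactly; the real method replaces the threshold with learned convolutional weights.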
Next, the operation of generating a bone image through such a process will be described in detail. Fig. 5 is a flowchart showing an operation of generating a bone image in which a bone region is extracted from an X-ray image of a region including a bone of a subject by using the bone image generation device according to the embodiment of the present invention.
When a bone image is generated by the bone image generation device according to the embodiment of the present invention, an X-ray image generation step is performed first (step S1). In this step, the X-ray image generation unit 81 shown in fig. 3 generates a plurality of X-ray images by X-ray imaging of the subject on the top plate 13 using the X-ray irradiation unit 11 and the X-ray detection unit 12 shown in fig. 1. The X-ray images may instead be images captured by another X-ray imaging device, or may be generated by X-ray imaging of a phantom in place of the subject. The generated X-ray images are stored in the X-ray image storage unit 82 shown in fig. 3 (step S2).
Next, training bone images for X-ray images used for machine learning are generated (step S3). The X-ray image generation unit 81 generates a training bone image for an X-ray image by cutting out the region of the subject's bone from a previously generated X-ray image. At this time, images obtained by slightly translating, rotating, deforming, enlarging, and reducing the cut-out X-ray image are also generated. These transformed images are used for learning as well, in order to cope with cases where the subject moves during X-ray imaging or where the X-ray irradiation unit 11 and the X-ray detection unit 12 move, as described below. The generated training bone images for X-ray images are stored in the X-ray image storage unit 82 shown in fig. 3 (step S4).
In this case, translation, rotation, deformation, enlargement, and reduction are performed on both the X-ray image and the training bone image for the X-ray image under the same conditions.
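The requirement stated above, applying each geometric transform to the X-ray image and its training bone image under identical conditions, can be sketched as follows. The shift amounts and toy arrays are illustrative, and `np.roll` is used as a wrap-around stand-in for a real translation.

```python
import numpy as np

def augment_pair(image, mask, shift=(2, -3), k_rot=1):
    """Apply the same translation and 90-degree rotation to both arrays."""
    out = []
    for a in (image, mask):
        a = np.roll(a, shift, axis=(0, 1))  # toy translation (wrap-around)
        a = np.rot90(a, k=k_rot)            # rotation
        out.append(a)
    return out

image = np.arange(64, dtype=float).reshape(8, 8)
mask = (image > 40).astype(np.uint8)        # toy "training bone image"
aug_image, aug_mask = augment_pair(image, mask)

# Because both arrays were permuted identically, the transformed mask
# still labels the transformed image consistently:
assert np.array_equal(aug_mask, (aug_image > 40).astype(np.uint8))
```

If the two transforms differed even slightly, the label pixels would no longer line up with the bone pixels, which is exactly the failure mode the text warns against.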
Fig. 6 is a schematic view of an X-ray image 101 generated by the X-ray image generation unit 81, and fig. 7 is a schematic view of a training bone portion image 102 for an X-ray image generated by the X-ray image generation unit 81.
The X-ray image 101 shows the femur 51, the pelvis 52, and the soft tissue region 53. The femur 51 and the pelvis 52 are displayed in the X-ray image training bone portion image 102.
Next, the DRR image generation unit 83 shown in fig. 3 performs a virtual projection, simulating the geometric conditions of the X-ray irradiation unit 11 and the X-ray detection unit 12 shown in fig. 1, on the CT image data acquired from the CT image storage unit 70 to generate a plurality of DRR images showing the region including the bone (step S5), and stores the DRR images in the DRR image storage unit 84 (step S6). It then generates a plurality of training bone images for the DRR images, showing the region including the bone (step S7), and stores them in the DRR image storage unit 84 (step S8). Here, when generating a training bone image for a DRR image, a region in which the CT value is equal to or greater than a predetermined value is treated as the bone region. For example, a region having a CT value of 200 HU (Hounsfield Units) or more is identified as a bone region, and the training bone image for the DRR image is generated from it.
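The HU thresholding described above can be sketched as follows (a minimal numpy illustration; the function name and the toy CT values are ours, and only the 200 HU threshold comes from the text):

```python
import numpy as np

def bone_mask(ct_volume, threshold_hu=200):
    """Mark voxels whose CT value is >= threshold_hu (here 200 HU) as bone."""
    return (ct_volume >= threshold_hu).astype(np.uint8)

# Toy CT slice: air (-1000 HU), soft tissue (~40 HU), bone (350-700 HU).
ct = np.array([[-1000,   40,  700],
               [   40,  350,   40]])
mask = bone_mask(ct)   # -> [[0, 0, 1], [0, 1, 0]]
```

Applying the mask to a DRR rendered from the same CT volume would yield the training bone image for that DRR image.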
Fig. 8 is an explanatory diagram schematically showing a state in which a DRR image is generated by a virtual projection obtained by simulating the geometrical conditions of the X-ray irradiation unit 11 and the X-ray detection unit 12 shown in fig. 1.
In the figure, reference numeral 300 denotes CT image data. The CT image data 300 is three-dimensional voxel data, that is, a collection of a plurality of two-dimensional CT images. The CT image data 300 has a structure in which, for example, around 200 two-dimensional images of 512 × 512 pixels are stacked in a direction crossing the subject (in the direction along the line segments L1 and L2 shown in fig. 8).
When the DRR image generation unit 83 generates a DRR image, a virtual projection is performed on the CT image data 300. At this time, the three-dimensional CT image data 300 is arranged on a computer, and the geometry, that is, the geometric configuration of the X-ray imaging system, is reproduced on the computer. In this embodiment, the X-ray irradiation unit 11 and the X-ray detection unit 12 are virtually disposed on both sides of the CT image data 300. The arrangement of the CT image data 300, the X-ray irradiation unit 11, and the X-ray detection unit 12 is the same as the arrangement of the subject, the X-ray irradiation unit 11, and the X-ray detection unit 12 during X-ray imaging. Here, the geometric configuration represents the geometric arrangement relationship among the imaging target, the X-ray irradiation unit 11, and the X-ray detection unit 12.
In this state, a plurality of line segments L connecting the X-ray irradiation unit 11 and the X-ray detection unit 12 through the respective pixels of the CT image data 300 are set. In fig. 8, two line segments L1 and L2 are shown for convenience of explanation. Then, a plurality of calculation points are set on each line segment L, and the CT value at each calculation point is computed by interpolation using the CT values of the voxels surrounding that point. The CT values of the calculation points on each line segment L are then accumulated, the accumulated value is converted into a line integral of the linear attenuation coefficient, and the attenuation of the X-rays is calculated, thereby generating the DRR image.
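A heavily simplified version of this ray accumulation can be sketched as follows. This is a parallel-beam sketch under assumptions of ours (a linear HU-to-attenuation mapping with a `mu_water` constant, no per-point interpolation); the device itself traces diverging line segments L and interpolates between voxels:

```python
import numpy as np

def simple_drr(ct_volume, mu_water=0.02, axis=0):
    """Parallel-beam simplification of the projection described above:
    convert CT values (HU) to linear attenuation coefficients, accumulate
    them along each ray, and apply Beer-Lambert attenuation."""
    mu = mu_water * (1.0 + ct_volume / 1000.0)  # HU -> attenuation coefficient
    mu = np.clip(mu, 0.0, None)                 # air must not amplify the beam
    line_integral = mu.sum(axis=axis)           # accumulate along each ray
    return np.exp(-line_integral)               # transmitted intensity I / I0
```

A ray passing through bone (high CT value) accumulates a larger line integral and therefore a darker (more attenuated) detector value than a ray through soft tissue.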
When generating the DRR images, a plurality of DRR images are generated while changing parameters including at least one of the projection coordinates and the projection angle with respect to the CT image data 300. Alternatively, image processing including at least one of slight translation, rotation, deformation, and enlargement or reduction is performed. The translation, rotation, deformation, enlargement, and reduction are performed to cope with cases where the subject moves during X-ray imaging or where the X-ray irradiation unit 11 and the X-ray detection unit 12 move, as described below.
Further, at least one of contrast change, noise addition, and edge enhancement is performed on the generated DRR image. The contrast change, noise addition, and edge enhancement are performed to absorb the difference in image quality between the DRR image and the X-ray image, and thereby to more reliably identify the bone region.
The changes of parameters such as the projection coordinates and angles used for generating the DRR images, as well as the contrast change, noise addition, and edge enhancement described above, are applied with values varied randomly or at equal intervals within a fixed range. A large number of DRR images can therefore be generated from the CT image data 300 of a single subject, and these images can be used to generate a customized learning model corresponding to each patient. A learning model can also be generated using the DRR images of a large number of patients.
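The random variation within a fixed range, with geometric changes applied identically to an image and its training label while contrast and noise changes affect the input only, can be sketched as follows (the shift, gain, and noise ranges are illustrative values of ours, not from the text):

```python
import numpy as np

def augment_pair(image, label, rng):
    """Sketch of the augmentation described above: a small random
    translation applied to the image and its label under the same
    conditions, then an image-only contrast change and noise addition."""
    shift = tuple(int(s) for s in rng.integers(-2, 3, size=2))
    image = np.roll(image, shift, axis=(0, 1))
    label = np.roll(label, shift, axis=(0, 1))   # same geometric condition
    gain = rng.uniform(0.8, 1.2)                 # contrast change
    noise = rng.normal(0.0, 0.01, size=image.shape)
    return gain * image + noise, label

rng = np.random.default_rng(0)
img = np.ones((8, 8))
lbl = np.zeros((8, 8), dtype=np.uint8)
lbl[2:5, 2:5] = 1                                # toy bone label
aug_img, aug_lbl = augment_pair(img, lbl, rng)
```

Repeating this with fresh random draws produces the large family of training images described above.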
In addition, when a DRR image and the corresponding training bone image for the DRR image are generated, the parameters of the geometric projection conditions, including the projection coordinates and angles, are changed under the same conditions for both, and any image processing including rotation, deformation, and enlargement or reduction is likewise performed under the same conditions.
Further, when generating the DRR images and the training bone images for the DRR images, the DRR image generation unit 83 generates some of the plurality of DRR images as DRR images in which the density of the bone region within the region including the bone is changed. More specifically, the CT value of the bone region, where the CT value is equal to or greater than the predetermined value, is set to a value smaller than the actual CT value. A DRR image simulating a bone with decreased bone density can thereby be obtained. Machine learning can then be performed using DRR images that simulate bone with decreased bone density, and the bone extraction accuracy can be improved for patients with decreased bone density or osteoporosis.
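One way to lower the bone CT values is sketched below. The linear scaling toward the threshold is our illustration; the text only states that the bone CT value is set below its actual value:

```python
import numpy as np

def simulate_low_bone_density(ct_volume, threshold_hu=200, scale=0.5):
    """Lower the CT values of the bone region (CT value >= threshold_hu)
    toward the threshold to mimic decreased bone density, leaving soft
    tissue untouched."""
    out = ct_volume.astype(float).copy()
    bone = out >= threshold_hu
    out[bone] = threshold_hu + scale * (out[bone] - threshold_hu)
    return out

ct = np.array([[700.0, 40.0, 350.0]])
low = simulate_low_bone_density(ct)   # -> [[450.0, 40.0, 275.0]]
```

Rendering a DRR from the modified volume then yields the reduced-density DRR image 104 style of training input.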
Fig. 9 is a schematic diagram of a DRR image 103 generated by the DRR image generation unit 83, fig. 10 is a schematic diagram of a DRR image 104 in which the density of the region of the bone portion generated by the DRR image generation unit 83 is changed to a smaller value, and fig. 11 is a schematic diagram of a training bone portion image 105 for the DRR image generated by the DRR image generation unit 83.
The DRR images 103 and 104 show the femur 51, the pelvis 52, and the soft tissue region 53. In addition, the femur 51 and the pelvis 52 are displayed in the training bone image 105 for the DRR image.
After the above processing is completed, the learning unit 85 generates a learning model for identifying the bones (the femur 51 and the pelvis 52) by performing machine learning with the X-ray image 101 shown in fig. 6 as the input layer and the training bone image 102 for the X-ray image shown in fig. 7 as the output layer, and with the DRR image 103 shown in fig. 9 and the DRR image 104 shown in fig. 10, in which the density of the bone region is reduced, as input layers and the training bone image 105 for the DRR image shown in fig. 11 as the output layer (step S9). For this machine learning, for example, an FCN (fully convolutional network) may be used. The convolutional neural network used in the FCN has the configuration shown in fig. 4. That is, when the learning model is generated, the input layers are the X-ray image 101 and the DRR images 103 and 104, and the output layers are the training bone image 102 for the X-ray image and the training bone image 105 for the DRR image.
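The defining property an FCN gives this step is a per-pixel prediction at the same resolution as the input. A deliberately minimal illustration of that property (a single 1x1 convolution with hand-picked toy weights standing in for the many-layer network of fig. 4) could look like:

```python
import numpy as np

def fcn_forward(image, w=10.0, b=-5.0):
    """Minimal fully-convolutional sketch: a 1x1 convolution (a per-pixel
    affine map on a single-channel image) followed by a sigmoid yields a
    bone-probability map the same size as the input; thresholding it at
    0.5 gives a per-pixel segmentation."""
    logits = w * image + b
    prob = 1.0 / (1.0 + np.exp(-logits))
    return (prob > 0.5).astype(np.uint8)

seg = fcn_forward(np.array([[0.9, 0.1],
                            [0.8, 0.2]]))   # -> [[1, 0], [1, 0]]
```

In training, the weights would instead be fitted so that the output maps match the training bone images 102 and 105.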
When the learning model is generated by the above steps, X-ray imaging is performed on the subject (step S10). Then, the bone image generation unit 86 converts the captured X-ray image using the previously generated learning model (convolutional layer) and performs segmentation to generate an image of the bone (the femur 51 and the pelvis 52) (step S11). That is, for an X-ray image obtained by X-ray imaging, an image representing a bone portion is generated as an output layer using a previously generated learning model. Then, using the region of the bone portion determined by the segmentation, measurement of bone density is performed by various methods.
In addition, in this specification and the like, "segmentation" is a concept that includes, besides the process of specifying a region of a bone or the like as in the embodiment, the process of specifying the contour or outer shape of the bone or the like.
When the bone image is generated through the above steps, the operator corrects the generated bone image as necessary. Then, the corrected bone portion image and the original X-ray image thereof are used for generation of a learning model or relearning by the learning unit 85. Thus, the learning image including the failure case can be expanded to generate a learning model with high accuracy.
As described above, according to the bone image generation device of the embodiment of the present invention, the extraction accuracy can be improved by extracting the bone region through machine learning. Since machine learning is performed using both X-ray images and DRR images, the set of learning images can be expanded, and clinical data for learning can be collected easily. Further, by using DRR images in which the density of the bone region has been changed, machine learning can be performed with DRR images simulating bone with decreased bone density, and the bone extraction accuracy can be improved for patients with decreased bone density or osteoporosis.
In the above-described embodiment, the X-ray image may be blurred with a Gaussian filter or the like before being input to the learning model. In general, because a DRR image is generated from a comparatively low-resolution CT image, its resolution is lower than that of an X-ray image. Therefore, by blurring the X-ray image, reducing noise in the X-ray image, or matching its resolution to that of the DRR image at the time of learning, the bone can be recognized more reliably. In the above embodiment, the DRR images and X-ray images input to the learning model may also be contrast-normalized in advance. In addition, a local contrast normalization layer or a local response normalization layer may be added to the intermediate layers.
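The Gaussian blurring mentioned above can be sketched with a separable kernel; `sigma` and `radius` here are illustrative values of ours, where in practice they would be chosen to match the CT resolution:

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """1-D normalized Gaussian kernel of half-width `radius`."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def blur_xray(image, sigma=1.0, radius=2):
    """Separable Gaussian blur lowering the effective resolution of an
    X-ray image toward that of a DRR image."""
    k = gaussian_kernel(sigma, radius)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, image)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)
    return out
```

A sharp point in the input is spread over its neighborhood while the total intensity is preserved away from the image edges.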
Next, another embodiment of the present invention will be explained.
In the above-described embodiment, some of the plurality of DRR images are generated as DRR images in which the density of the bone region within the region including the bone is changed, thereby producing DRR images simulating bone with reduced bone density for use in machine learning. In contrast, in this embodiment, some of the plurality of X-ray images are X-ray images in which the density of the bone region within the region including the bone is changed by dual-energy subtraction (that is, by subtraction processing between an X-ray image captured with a high voltage applied to the X-ray tube (a high-voltage image) and an X-ray image captured with a low voltage applied to the X-ray tube (a low-voltage image)).
That is, when measuring the bone density of a subject to diagnose osteoporosis, a configuration is adopted in which bone density is measured using dual-energy subtraction, in which subtraction processing is applied to an X-ray image captured with a high voltage applied to the X-ray tube and an X-ray image captured with a low voltage applied to the X-ray tube. When identifying bone images, the high-voltage and low-voltage X-ray images are first weighted, and a dual-energy subtraction image representing the bone is generated by taking the difference between them. The dual-energy subtraction image is then subtracted from an X-ray image (the high-voltage or low-voltage image) to obtain an image in which the density of the bone is reduced within the region including the bone, that is, an image corresponding to an X-ray image of bone with reduced bone density. By using such X-ray images with a reduced bone-region density for machine learning, the learning can simulate X-ray images of bone with decreased bone density, and the bone extraction accuracy can be improved for patients including those with osteoporosis.
In this case, any of the high-voltage image, the low-voltage image, and the dual-energy subtraction image may be used as the X-ray image for machine learning, or an image obtained by concatenating these images in the channel direction may be used. In addition, instead of subtracting the dual-energy subtraction image from the X-ray image (the high-voltage or low-voltage image), parameter adjustment may be performed on the dual-energy subtraction image to obtain an X-ray image simulating bone with decreased bone density.
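The two steps above can be sketched as follows. The weights `w` and `alpha` are illustrative values of ours; in practice the subtraction weight is calibrated so that soft tissue cancels:

```python
import numpy as np

def dual_energy_bone(high_kv, low_kv, w=0.5):
    """Weighted difference of the high-voltage and low-voltage images,
    giving a subtraction image that keeps bone and cancels soft tissue."""
    return high_kv - w * low_kv

def simulate_low_density_xray(high_kv, bone_img, alpha=0.5):
    """Subtract a fraction of the bone image from the high-voltage image,
    mimicking an X-ray of bone with reduced bone density."""
    return high_kv - alpha * bone_img

# Toy pixels [soft tissue, bone] chosen so soft tissue cancels at w = 0.5.
high = np.array([2.0, 4.0])
low = np.array([4.0, 6.0])
bone = dual_energy_bone(high, low)                 # -> [0.0, 1.0]
reduced = simulate_low_density_xray(high, bone)    # -> [2.0, 3.5]
```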
Next, another embodiment of the present invention will be explained. Fig. 12 is a schematic diagram of an X-ray image 106 generated by the X-ray image generation unit 81, and fig. 13 is a schematic diagram of a DRR image 107 generated by the DRR image generation unit 83.
This embodiment is used when generating a bone image of a bone, such as the femur, whose shape is bilaterally symmetric with respect to the body axis of the subject. Fig. 6, explained previously, is a schematic view of an X-ray image 101 near the right leg of the subject, and fig. 9 is a schematic view of a DRR image 103 near the right leg of the subject. In contrast, fig. 12 is a schematic view of an X-ray image 106 near the left leg of the subject, and fig. 13 is a schematic view of a DRR image 107 near the left leg of the subject.
In this way, when bones (the femur 51 and the pelvis 52) having shapes that are bilaterally symmetric with respect to the body axis of the subject are targeted, the learning unit 85 performs machine learning on the images of the left and right bones by inverting either one of the image of the right bone and the image of the left bone. For example, the X-ray image 106 near the left leg of the subject shown in fig. 12 is inverted left and right, and used for machine learning together with the X-ray image 101 near the right leg of the subject shown in fig. 6. Similarly, the DRR image 107 near the left leg of the subject shown in fig. 13 is inverted left and right, and is used for machine learning together with the DRR image 103 near the right leg of the subject shown in fig. 9.
With such a configuration, the bone portion having a shape symmetrical with respect to the body axis of the subject can be detected with uniform accuracy. Further, by performing machine learning on the images of the left and right bones together, the learning image can be expanded to generate a learning model with higher accuracy.
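The pooling of mirrored left and right images described above can be sketched as follows (the function name and toy images are ours; flipping the right-side images instead would work equally well):

```python
import numpy as np

def pool_left_right(right_images, left_images):
    """Flip the left-side images horizontally so that left and right
    femur images can be pooled into one training set."""
    return list(right_images) + [np.fliplr(img) for img in left_images]

right = [np.array([[1, 0],
                   [1, 0]])]          # toy right-leg image
left  = [np.array([[0, 1],
                   [0, 1]])]          # its mirrored left-leg counterpart
pooled = pool_left_right(right, left)
```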
In the above-described embodiment, machine learning is performed using both the X-ray image and the DRR image. However, machine learning may be performed using either one of the X-ray image and the DRR image.
In the above-described embodiments, a bone is targeted as the organ, but an internal organ may also be targeted. For example, when the subject has a large amount of visceral fat, the density of the organ region is low in X-ray imaging. According to the present invention, even in such a case, a learning model corresponding to a subject with a low organ-region density can be generated, so the accuracy of organ detection can be improved.
Description of the reference numerals
11X-ray irradiation unit
12X-ray detector
13 Top plate
14 support post
15X-ray tube holding member
16 rotating mechanism
17 main support
18 bottom plate
21 display part
22 operating part
70 CT image storage unit
80 control part
81X-ray image generating unit
82X-ray image storage unit
83 DRR image generating part
84 DRR image storage unit
85 learning part
86 bone part image generating part
300 CT image data.

Claims (14)

1. An image analysis method for performing segmentation for determining a region of an organ of a subject by analyzing an image of the region including the organ,
using machine learning as the method of segmentation, and comprising:
a correction image generation step of generating a correction image in which the density of a region of an organ in an image of the organ including the subject is changed;
a learning model generation step of generating a machine-learned learning model by a learning process using an image including the organ of the subject and the correction image generated by the correction image generation step.
2. The image parsing method according to claim 1,
an image representing an organ of the subject is generated by converting an X-ray image of a region including the organ, which is obtained by X-ray imaging of the subject, with a learning model generated by the learning model generation step.
3. The image parsing method according to claim 1,
an image of a region including an organ of the subject is a DRR image generated from CT image data of the subject;
in the correction image generation step, a region in which the CT value of the CT image data is equal to or greater than a predetermined value is set as the region of the organ, and the density of the region is changed.
4. The image parsing method according to claim 3,
when generating a DRR image, a plurality of DRR images are generated by changing parameters of the geometric conditions, including at least one of the projection coordinates and the angle, or by performing image processing including at least one of rotation, deformation, and enlargement or reduction of the image.
5. The image parsing method according to claim 3,
performing at least one of contrast change, noise addition, and edge enhancement on the generated DRR image.
6. The image parsing method according to claim 1,
an image of a region including an organ of the subject is an X-ray image generated by X-ray imaging of the subject,
in the correction image generation step, the density of the region of the organ is changed using the X-ray image and an image of the organ obtained by dual-energy subtraction.
7. The image parsing method according to claim 2,
an X-ray image of a region including an organ of the subject obtained by X-ray imaging of the subject and an image representing the organ converted by the learning model generated in the learning model generation step are used for learning of the learning model by a learning unit.
8. The image parsing method according to claim 1,
the organ has a shape that is bilaterally symmetric with respect to the body axis of the subject, and the learning model generation step is performed by inverting either one of the right organ image and the left organ image in the right and left directions, thereby generating a machine-learned learning model for the left and right organ images.
9. A method of segmentation is characterized in that,
the organ is a bone of the subject, and a region of the bone is segmented by the image analysis method according to claim 1.
10. A bone mineral density measuring method is characterized in that,
bone density is measured for a region of a bone portion divided by the dividing method according to claim 9.
11. A learning model generation method for generating a learning model used for segmentation for specifying a region of an organ of a subject by analyzing an image of the region including the organ by machine learning,
a learning model is generated by performing learning of machine learning using an image including an organ of the subject and a correction image generated by changing a density of a region of the organ in the image including the organ of the subject.
12. An image generation device that generates an image in which a region of an organ of a subject is extracted from an X-ray image of the region including the organ, the image generation device comprising:
an X-ray image storage unit that stores a plurality of X-ray images obtained by X-ray imaging of a region including the organ and a plurality of training images for X-ray images for machine learning;
a DRR image generation unit for generating a DRR image of a region including the bone;
a DRR image storage unit that stores the plurality of DRR images generated by the DRR image generation unit and training images for DRR images for machine learning generated based on the DRR images generated by the DRR image generation unit;
an image generation unit that converts an X-ray image of a region including an organ of the subject using a learning model for identifying the organ, thereby generating an image representing the organ, the learning model being generated in advance by performing machine learning using the plurality of X-ray images and the plurality of training images for X-ray images stored in the X-ray image storage unit, and by performing machine learning using the plurality of DRR images and the plurality of training images for DRR images stored in the DRR image storage unit.
13. The image generating apparatus according to claim 12,
the DRR image generation unit generates a part of the plurality of DRR images as a DRR image in which a density of an organ region in a region including the bone portion is changed.
14. The image generating apparatus according to claim 12,
a part of the plurality of X-ray images stored in the X-ray image storage unit is an X-ray image obtained by changing the density of an organ region in a region including the organ by means of dual-energy subtraction.
CN201980035078.8A 2018-04-24 2019-03-20 Image analysis method, segmentation method, bone density measurement method, learning model generation method, and image generation device Pending CN112165900A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2018083340 2018-04-24
JP2018-083340 2018-04-24
PCT/JP2019/011773 WO2019208037A1 (en) 2018-04-24 2019-03-20 Image analysis method, segmentation method, bone density measurement method, learning model creation method, and image creation device

Publications (1)

Publication Number Publication Date
CN112165900A true CN112165900A (en) 2021-01-01

Family

ID=68293538

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980035078.8A Pending CN112165900A (en) 2018-04-24 2019-03-20 Image analysis method, segmentation method, bone density measurement method, learning model generation method, and image generation device

Country Status (4)

Country Link
JP (1) JP7092190B2 (en)
KR (1) KR102527440B1 (en)
CN (1) CN112165900A (en)
WO (1) WO2019208037A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2022140051A (en) 2021-03-12 2022-09-26 富士フイルム株式会社 Estimation device, method, and program
JP2022140050A (en) * 2021-03-12 2022-09-26 富士フイルム株式会社 Estimation device, method, and program
JP2022167132A (en) * 2021-04-22 2022-11-04 日本装置開発株式会社 X-ray inspection device
WO2022244495A1 (en) * 2021-05-17 2022-11-24 キヤノン株式会社 Radiation imaging device and radiation imaging system
WO2023224022A1 (en) * 2022-05-20 2023-11-23 国立大学法人大阪大学 Program, information processing method, and information processing device

Citations (4)

Publication number Priority date Publication date Assignee Title
JP2008011901A (en) * 2006-07-03 2008-01-24 Fujifilm Corp Image type discrimination device, method and program
US20100246915A1 (en) * 2009-03-27 2010-09-30 Mitsubishi Electric Corporation Patient registration system
JP2016119954A (en) * 2014-12-24 2016-07-07 好民 村山 Radiographic apparatus
JP2016221100A (en) * 2015-06-02 2016-12-28 株式会社東芝 Medical image processing apparatus, and treatment system

Family Cites Families (10)

Publication number Priority date Publication date Assignee Title
JP2638875B2 (en) 1988-01-31 1997-08-06 株式会社島津製作所 Bone mineral quantitative analyzer
JP2764492B2 (en) * 1992-01-17 1998-06-11 富士写真フイルム株式会社 Radiation imaging direction recognition method
JP2002236910A (en) 2001-02-09 2002-08-23 Hitachi Medical Corp Three-dimensional image creating method
CN1907225B (en) 2005-08-05 2011-02-02 Ge医疗系统环球技术有限公司 Process and apparatus for dividing intracerebral hemorrhage injury
JP4919408B2 (en) 2007-01-12 2012-04-18 富士フイルム株式会社 Radiation image processing method, apparatus, and program
WO2013166299A1 (en) 2012-05-03 2013-11-07 University Of Pittsburgh - Of The Commonwealth System Of Higher Education Intelligent algorithms for tracking three-dimensional skeletal movement from radiographic image sequences
JP2014158628A (en) * 2013-02-20 2014-09-04 Univ Of Tokushima Image processor, image processing method, control program, and recording medium
JP2017185007A (en) * 2016-04-05 2017-10-12 株式会社島津製作所 Radiographic apparatus, radiation image object detection program, and object detection method in radiation image
US9799120B1 (en) 2016-05-09 2017-10-24 Siemens Healthcare Gmbh Method and apparatus for atlas/model-based segmentation of magnetic resonance images with weakly supervised examination-dependent learning
KR101928984B1 (en) * 2016-09-12 2018-12-13 주식회사 뷰노 Method and apparatus of bone mineral density measurement

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
JP2008011901A (en) * 2006-07-03 2008-01-24 Fujifilm Corp Image type discrimination device, method and program
US20100246915A1 (en) * 2009-03-27 2010-09-30 Mitsubishi Electric Corporation Patient registration system
JP2016119954A (en) * 2014-12-24 2016-07-07 好民 村山 Radiographic apparatus
JP2016221100A (en) * 2015-06-02 2016-12-28 株式会社東芝 Medical image processing apparatus, and treatment system

Non-Patent Citations (1)

Title
C. LINDNER: "Fully Automatic Segmentation of the Proximal Femur Using Random Forest Regression Voting", IEEE TRANSACTIONS ON MEDICAL IMAGING

Also Published As

Publication number Publication date
KR102527440B1 (en) 2023-05-02
KR20200142057A (en) 2020-12-21
JPWO2019208037A1 (en) 2021-04-01
JP7092190B2 (en) 2022-06-28
WO2019208037A1 (en) 2019-10-31

Similar Documents

Publication Publication Date Title
JP7092190B2 (en) Image analysis method, segmentation method, bone density measurement method, learning model creation method and image creation device
CN111789618B (en) Imaging system and method
US7639866B2 (en) Method of radiographic imaging for three-dimensional reconstruction, and a computer program and apparatus for implementing the method
EP1400203B1 (en) Computer assisted bone densitometer
US8660329B2 (en) Method for reconstruction of a three-dimensional model of a body structure
JP5345947B2 (en) Imaging system and imaging method for imaging an object
CN109472835B (en) Method for processing medical image data and image processing system for medical image data
JPWO2019003474A1 (en) Radiotherapy tracking device, position detection device, and moving body tracking method
CN109419526B (en) Method and system for motion estimation and correction in digital breast tomosynthesis
US9142020B2 (en) Osteo-articular structure
CN111601552A (en) Image forming apparatus
US9351695B2 (en) Hybrid dual energy imaging and bone suppression processing
CN112041890A (en) System and method for reducing artifacts in images
US20220092787A1 (en) Systems and methods for processing x-ray images
CN106780649A (en) The artifact minimizing technology and device of image
CN105326524B (en) The medical imaging procedure and device of the artifact in image can be reduced
KR20110115762A (en) A reconstruction method of patient-customized 3-d human bone model
JP4416823B2 (en) Image processing apparatus, image processing method, and computer program
JP5576631B2 (en) Radiographic apparatus, radiographic method, and program
CN117437144A (en) Method and system for image denoising
EP4354395A1 (en) Artificial intelligence-based dual energy x-ray image motion correction training method and system
JP2023154994A (en) Image processing device, method for operating image processing device, and program
Brehler Intra-operative visualization and assessment of articular surfaces in C-arm computed tomography images
WO2022241121A1 (en) Systems, devices, and methods for segmentation of anatomical image data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination