EP4616807A1 - Radiation image processing device, radiation image processing method, and radiation image processing program - Google Patents

Radiation image processing device, radiation image processing method, and radiation image processing program

Info

Publication number
EP4616807A1
Authority
EP
European Patent Office
Prior art keywords
radiation
image
thickness
component
subject
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP25162809.5A
Other languages
English (en)
French (fr)
Inventor
Tomoyuki Takahashi
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujifilm Corp
Original Assignee
Fujifilm Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujifilm Corp filed Critical Fujifilm Corp
Publication of EP4616807A1

Classifications

    • A61B 6/482: Diagnostic techniques involving multiple energy imaging
    • A61B 5/1075: Measuring physical dimensions by non-invasive methods, e.g. for determining thickness of tissue layer
    • A61B 6/505: Apparatus or devices for radiation diagnosis specially adapted for specific clinical applications, for diagnosis of bone
    • A61B 6/5205: Devices using data or image processing specially adapted for radiation diagnosis, involving processing of raw data to produce diagnostic data
    • A61B 6/5217: Devices using data or image processing specially adapted for radiation diagnosis, involving extracting a diagnostic or physiological parameter from medical diagnostic data
    • G01N 23/04: Investigating or analysing materials by transmitting wave or particle radiation through the material and forming images of the material
    • A61B 6/544: Control of apparatus or devices for radiation diagnosis, involving control of exposure dependent on patient size

Definitions

  • The present disclosure relates to a radiation image processing device, a radiation image processing method, and a radiation image processing program.
  • Energy subtraction processing is a method in which the respective pixels of the two radiation images obtained as described above are associated with each other, and subtraction is performed after multiplying by a weight coefficient based on an attenuation coefficient corresponding to the composition, to acquire an image in which specific components included in the radiation image, such as a bone part and a soft part, are separated.
  • A method of separating a soft tissue of the subject into fat and muscle by energy subtraction processing has also been proposed (see JP2023-047911A and JP2022-056084A).
  • To perform such processing, the attenuation coefficient of the soft part is required.
  • The soft part does not consist of a single composition; a plurality of components, such as fat and muscle, are intricately mixed, and the composition of the soft part varies greatly among individuals. Therefore, unless the attenuation coefficient of the soft part is obtained in accordance with the ratio between the fat and the muscle, processing that relies on the attenuation characteristics of the radiation, such as energy subtraction processing, cannot accurately separate a plurality of components, such as the bone part and the soft part.
  • The present disclosure has been made in view of the above-described circumstances, and an object of the present disclosure is to accurately separate n components of a subject included in a radiation image by using n - 1 radiation images obtained by n - 1 (n ≥ 3) types of radiation having different energy distributions.
  • The present disclosure relates to a radiation image processing device comprising: at least one processor, in which the processor is configured to: acquire first to (n - 1)th (n ≥ 3, n is a natural number) radiation images acquired by imaging a subject, which includes first to nth components each consisting of a single composition, with n - 1 types of radiation having different energy distributions; acquire a body thickness of the subject; derive thicknesses of the first to nth components by using the body thickness and the first to (n - 1)th radiation images; and derive first to nth component images in which the first to nth components are enhanced, respectively, based on the thicknesses of the first to nth components.
  • The processor may be configured to acquire the body thickness by deriving the body thickness of the subject based on at least one of the first to (n - 1)th radiation images.
  • The processor may be configured to: acquire a first radiation image and a second radiation image that are acquired by imaging the subject with two types of radiation having different energy components; derive the body thickness of the subject based on at least one of the first radiation image or the second radiation image; derive thicknesses of a first component, a second component, and a third component of the subject by using the body thickness, the first radiation image, and the second radiation image; and derive a first component image, a second component image, and a third component image in which the first component, the second component, and the third component are enhanced, respectively, based on the thicknesses of the first component, the second component, and the third component.
  • The first component, the second component, and the third component may be fat, muscle, and bone, respectively.
  • The present disclosure relates to a radiation image processing method executed by a computer, the radiation image processing method comprising: acquiring first to (n - 1)th (n ≥ 3) radiation images acquired by imaging a subject, which includes first to nth components each consisting of a single composition, with n - 1 types of radiation having different energy distributions; acquiring a body thickness of the subject; deriving thicknesses of the first to nth components by using the body thickness and the first to (n - 1)th radiation images; and deriving first to nth component images in which the first to nth components are enhanced, respectively, based on the thicknesses of the first to nth components.
  • The present disclosure relates to a radiation image processing program causing a computer to execute: a procedure of acquiring first to (n - 1)th (n ≥ 3) radiation images acquired by imaging a subject, which includes first to nth components each consisting of a single composition, with n - 1 types of radiation having different energy distributions; a procedure of acquiring a body thickness of the subject; a procedure of deriving thicknesses of the first to nth components by using the body thickness and the first to (n - 1)th radiation images; and a procedure of deriving first to nth component images in which the first to nth components are enhanced, respectively, based on the thicknesses of the first to nth components.
  • According to the present disclosure, it is possible to accurately separate the n components of the subject included in the radiation image by using the n - 1 radiation images obtained by the n - 1 (n ≥ 3) types of radiation having different energy distributions.
  • Fig. 1 is a schematic block diagram showing a configuration of a radiography system to which a radiation image processing device according to the embodiment of the present disclosure is applied.
  • The radiography system according to the present embodiment comprises an imaging apparatus 1 and a radiation image processing device 10 according to the present embodiment.
  • The imaging apparatus 1 is an imaging apparatus for performing energy subtraction imaging by a so-called one-shot method, in which the radiation, such as X-rays, emitted from a radiation source 3 and transmitted through a subject H has its energy changed, and a first radiation detector 5 and a second radiation detector 6 are irradiated with the converted radiation.
  • During imaging, the first radiation detector 5, a radiation energy conversion filter 7 made of a copper plate or the like, and the second radiation detector 6 are disposed in this order from the side closest to the radiation source 3, and the radiation source 3 is driven. It should be noted that the first and second radiation detectors 5 and 6 are closely attached to the radiation energy conversion filter 7.
  • In the first radiation detector 5, a first radiation image G1 of the subject H is acquired by low-energy radiation that also includes so-called soft rays.
  • In the second radiation detector 6, a second radiation image G2 of the subject H is acquired by high-energy radiation from which the soft rays have been removed.
  • The first and second radiation images G1 and G2 are input to the radiation image processing device 10.
  • The first and second radiation detectors 5 and 6 can repeatedly record and read out radiation images. A so-called direct-type radiation detector that directly receives the emitted radiation and generates an electric charge may be used, or a so-called indirect-type radiation detector that converts the radiation into visible light and then converts the visible light into an electric charge signal may be used.
  • As the readout method of the radiation image signal, a so-called thin film transistor (TFT) readout method, in which the radiation image signal is read out by turning a TFT switch on and off, or a so-called optical readout method, in which the radiation image signal is read out by emission of readout light, may be used.
  • The radiation image processing device 10 is a computer, such as a workstation, a server computer, or a personal computer, and comprises a central processing unit (CPU) 11, a nonvolatile storage 13, and a memory 16 as a transitory storage region.
  • The radiation image processing device 10 also comprises a display 14, such as a liquid crystal display, an input device 15, such as a keyboard and a mouse, and a network interface (I/F) 17 connected to a network (not shown).
  • The CPU 11, the storage 13, the display 14, the input device 15, the memory 16, and the network I/F 17 are connected to a bus 18. It should be noted that the CPU 11 is an example of a processor according to the present disclosure.
  • The storage 13 is implemented by a hard disk drive (HDD), a solid state drive (SSD), a flash memory, or the like.
  • A radiation image processing program 12 installed in the radiation image processing device 10 is stored in the storage 13 as a storage medium.
  • The CPU 11 reads out the radiation image processing program 12 from the storage 13, loads it into the memory 16, and executes the loaded radiation image processing program 12.
  • The radiation image processing program 12 may be stored in a storage device of a server computer connected to the network or in a network storage in a state of being accessible from the outside, and downloaded and installed in the computer that configures the radiation image processing device 10 in response to a request.
  • Alternatively, the radiation image processing program 12 may be distributed in a state of being recorded on a recording medium, such as a digital versatile disc (DVD) or a compact disc read-only memory (CD-ROM), and installed in the computer that configures the radiation image processing device 10 from the recording medium.
  • Fig. 3 is a diagram showing the functional configuration of the radiation image processing device according to the present embodiment.
  • The radiation image processing device 10 comprises an image acquisition unit 21, a body thickness acquisition unit 22, a component thickness derivation unit 23, an image derivation unit 24, and a display controller 25.
  • The CPU 11 executes the radiation image processing program 12 to function as the image acquisition unit 21, the body thickness acquisition unit 22, the component thickness derivation unit 23, the image derivation unit 24, and the display controller 25.
  • The image acquisition unit 21 acquires the first radiation image G1 and the second radiation image G2 of the subject H from the first and second radiation detectors 5 and 6 by causing the imaging apparatus 1 to perform the energy subtraction imaging of the subject H.
  • At that time, imaging conditions, such as an imaging dose, an energy distribution, a tube voltage, and a source-to-image distance (SID), are set.
  • The imaging conditions may be set by input from the input device 15 by a user.
  • The set imaging conditions are stored in the storage 13.
  • The first and second radiation images G1 and G2 may also be acquired by a program different from the radiation image processing program according to the present embodiment.
  • In that case, the image acquisition unit 21 reads out the first and second radiation images G1 and G2 stored in the storage 13 for processing.
  • Fig. 4 is a diagram showing the first and second radiation images.
  • A region of the subject H and a direct radiation region, obtained by directly irradiating the radiation detectors 5 and 6 with the radiation, are included in the first and second radiation images G1 and G2.
  • A soft region and a bone region are included in the region of the subject H.
  • A soft part component of a human body includes muscle, fat, blood, and water.
  • A non-fat tissue including blood and water is treated as the muscle.
  • The fat component, the muscle component, and the bone part component of the subject H are examples of a first component, a second component, and a third component according to the present disclosure, respectively.
  • The body thickness acquisition unit 22 acquires the body thickness by deriving the body thickness of the subject H for each pixel of the first and second radiation images G1 and G2 based on at least one of the first radiation image G1 or the second radiation image G2.
  • The body thickness is a thickness in the direction in which the radiation is transmitted through the subject H, that is, in a direction perpendicular to the first and second radiation images G1 and G2. Since the body thickness is derived for each pixel of the first and second radiation images G1 and G2, the body thickness acquisition unit 22 derives a body thickness distribution in at least one of the first radiation image G1 or the second radiation image G2.
  • In the present embodiment, the body thickness acquisition unit 22 uses the first radiation image G1 acquired by the radiation detector 5 on the side close to the subject H. It should be noted that the second radiation image G2 may be used. In addition, regardless of which image is used, a low-frequency image representing a low-frequency component of the image may be derived and used to derive the body thickness.
  • Specifically, the body thickness acquisition unit 22 derives the body thickness of the subject H by assuming that the brightness distribution of the first radiation image G1 coincides with the body thickness distribution of the subject H, and converting the pixel value of the first radiation image G1 into a thickness by using the attenuation coefficient of the soft part of the subject H.
  • Alternatively, the body thickness acquisition unit 22 may measure the thickness of the subject H by using a sensor or the like.
  • The body thickness acquisition unit 22 may also derive the body thickness by approximating the body thickness of the subject H with a model such as a cube or an elliptic cylinder.
  • The body thickness acquisition unit 22 may derive the body thickness of the subject H by any method, such as the method described in JP2015-043959A.
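The pixel-value-to-thickness conversion described above can be sketched as follows. This is a minimal illustration, assuming log-converted pixel values (so the attenuation amount is the difference from the direct radiation region) and a hypothetical effective soft-part attenuation coefficient `MU_SOFT`; neither assumption is stated numerically in the embodiment.

```python
import numpy as np

# Hypothetical effective attenuation coefficient of the soft part (per cm).
# A real system would calibrate this value per tube voltage and filtration.
MU_SOFT = 0.20

def body_thickness(g1, gd1, mu_soft=MU_SOFT):
    """Estimate the per-pixel body thickness from a log-converted radiation
    image: the attenuation amount (gd1 - g1) divided by the soft-part
    attenuation coefficient gives a thickness along the transmission
    direction."""
    g1 = np.asarray(g1, dtype=float)
    return np.clip((gd1 - g1) / mu_soft, 0.0, None)
```

A pixel one unit darker than the direct region then maps to 1 / MU_SOFT = 5 cm of soft tissue.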
  • The component thickness derivation unit 23 derives the thicknesses of the first component to the third component included in the subject H, that is, the thickness of the fat, the thickness of the muscle, and the thickness of the bone part, by using the body thickness derived by the body thickness acquisition unit 22, the first radiation image G1, and the second radiation image G2.
  • First, the component thickness derivation unit 23 derives a first attenuation image CL and a second attenuation image CH, each representing the attenuation amount of the radiation by the subject H, from the first radiation image G1 and the second radiation image G2, respectively.
  • A pixel value of the first attenuation image CL represents the attenuation amount of the low-energy radiation due to the subject H.
  • A pixel value of the second attenuation image CH represents the attenuation amount of the high-energy radiation due to the subject H.
  • The component thickness derivation unit 23 derives the first attenuation image CL and the second attenuation image CH from the first radiation image G1 and the second radiation image G2 by Expression (1) and Expression (2).
  • Gd1 is the pixel value of the direct radiation region in the first radiation image G1
  • Gd2 is the pixel value of the direct radiation region in the second radiation image G2.
  • CL(x,y) = Gd1 - G1(x,y) ... (1)
  • CH(x,y) = Gd2 - G2(x,y) ... (2)
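Expressions (1) and (2) can be sketched as follows, assuming the pixel values of G1, G2 and of the direct radiation regions Gd1, Gd2 are log-converted so that the attenuation amount is a simple difference:

```python
import numpy as np

def attenuation_images(g1, g2, gd1, gd2):
    """Derive the first and second attenuation images CL and CH from the
    radiation images G1 and G2 and the pixel values Gd1 and Gd2 of the
    direct radiation regions (log-converted pixel values assumed)."""
    cl = gd1 - np.asarray(g1, dtype=float)  # Expression (1)
    ch = gd2 - np.asarray(g2, dtype=float)  # Expression (2)
    return cl, ch
```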
  • The attenuation amount of the radiation by the subject H is determined by the thickness of the fat, the thickness of the muscle, the thickness of the bone part, and the radiation quality (high energy or low energy). Therefore, in a case in which the attenuation coefficient representing an attenuation rate per unit thickness is μ, the attenuation amounts CL0 and CH0 of the radiation at each pixel position in the low-energy image and the high-energy image can be represented by Expression (3) and Expression (4): CL0 = μLf × tf + μLm × tm + μLb × tb ... (3), and CH0 = μHf × tf + μHm × tm + μHb × tb ... (4), where:
  • tf is a thickness of the fat
  • tm is a thickness of the muscle
  • tb is a thickness of the bone part
  • μLf is a fat attenuation coefficient of the low-energy radiation
  • μLm is a muscle attenuation coefficient of the low-energy radiation
  • μLb is a bone part attenuation coefficient of the low-energy radiation
  • μHf is a fat attenuation coefficient of the high-energy radiation
  • μHm is a muscle attenuation coefficient of the high-energy radiation
  • μHb is a bone part attenuation coefficient of the high-energy radiation.
  • The attenuation coefficient represents an attenuation rate of radiation per unit thickness.
  • The attenuation amount CL0 of the low-energy image corresponds to the pixel value of the first attenuation image CL.
  • The attenuation amount CH0 of the high-energy image corresponds to the pixel value of the second attenuation image CH.
  • Expression (3) and Expression (4) represent a relationship between the first attenuation image CL and the second attenuation image CH in each pixel, but (x,y) representing the pixel position is omitted. In Expression (5) to Expression (11) shown later, (x,y) representing the pixel position is also omitted.
  • The variables are the thickness tf of the fat, the thickness tm of the muscle, and the thickness tb of the bone part. Since there are three variables, these thicknesses cannot be derived from the two Expressions (3) and (4) alone.
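Expressions (3) and (4) amount to the following forward model. The coefficient values are hypothetical placeholders; real coefficients depend on the beam spectrum and on beam hardening.

```python
# Expressions (3) and (4): with log-converted pixel values, the attenuation
# amount is a thickness-weighted sum of attenuation coefficients.
# All coefficient values below (per cm) are hypothetical placeholders.
MU_L = {"fat": 0.19, "muscle": 0.22, "bone": 0.55}  # muLf, muLm, muLb
MU_H = {"fat": 0.16, "muscle": 0.18, "bone": 0.35}  # muHf, muHm, muHb

def cl0(tf, tm, tb):
    """Expression (3): attenuation amount of the low-energy radiation."""
    return MU_L["fat"] * tf + MU_L["muscle"] * tm + MU_L["bone"] * tb

def ch0(tf, tm, tb):
    """Expression (4): attenuation amount of the high-energy radiation."""
    return MU_H["fat"] * tf + MU_H["muscle"] * tm + MU_H["bone"] * tb
```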
  • Therefore, the body thickness acquisition unit 22 acquires the body thickness T of the subject H.
  • The relationship between the body thickness T and the thickness tf of the fat, the thickness tm of the muscle, and the thickness tb of the bone part is represented by Expression (5): T = tf + tm + tb ... (5).
  • Expression (5) is rearranged for the thickness tb of the bone part to obtain Expression (6): tb = T - tf - tm ... (6).
  • Substituting Expression (6) into Expression (3) and Expression (4) yields Expression (7) and Expression (8): CL = μLf × tf + μLm × tm + μLb × (T - tf - tm) ... (7), and CH = μHf × tf + μHm × tm + μHb × (T - tf - tm) ... (8). The variables in Expression (7) and Expression (8) are therefore two: the thickness tf of the fat and the thickness tm of the muscle. Accordingly, the thickness tf of the fat and the thickness tm of the muscle can be derived by solving Expression (7) and Expression (8) with these two thicknesses as variables, and the thickness tb of the bone part can then be derived by Expression (6) from the body thickness T, the derived thickness tf of the fat, and the derived thickness tm of the muscle.
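If the attenuation coefficients are treated as constants, substituting Expression (6) into Expressions (3) and (4) leaves a 2 x 2 linear system in tf and tm, which can be sketched as follows. The coefficient values are hypothetical; the embodiment instead uses thickness-dependent coefficients and the iterative procedure described later.

```python
import numpy as np

# Hypothetical attenuation coefficients (per cm); real values depend on the
# beam spectrum and on beam hardening.
MU_L = {"fat": 0.19, "muscle": 0.22, "bone": 0.55}  # low energy
MU_H = {"fat": 0.16, "muscle": 0.18, "bone": 0.35}  # high energy

def solve_thicknesses(cl, ch, t):
    """Solve Expressions (7) and (8) for the fat thickness tf and the
    muscle thickness tm, then recover the bone part thickness tb from
    Expression (6): tb = T - tf - tm.  Moving the bone terms to the
    right-hand side leaves a 2 x 2 linear system."""
    a = np.array([
        [MU_L["fat"] - MU_L["bone"], MU_L["muscle"] - MU_L["bone"]],
        [MU_H["fat"] - MU_H["bone"], MU_H["muscle"] - MU_H["bone"]],
    ])
    b = np.array([cl - MU_L["bone"] * t, ch - MU_H["bone"] * t])
    tf, tm = np.linalg.solve(a, b)
    return tf, tm, t - tf - tm
```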
  • To solve Expression (7) and Expression (8), the fat attenuation coefficients μLf and μHf, the muscle attenuation coefficients μLm and μHm, and the bone part attenuation coefficients μLb and μHb for each of the low-energy radiation and the high-energy radiation are required.
  • In the present embodiment, the muscle attenuation coefficients μLm and μHm, the fat attenuation coefficients μLf and μHf, and the bone part attenuation coefficients μLb and μHb are derived in advance in consideration of beam hardening, and then stored in the storage 13.
  • In the human body, the fat and the muscle are mixed and the bone part is present, so that the radiation is transmitted through the fat, the muscle, and the bone part in a complicated order in accordance with how they overlap inside the human body.
  • Monochromatic radiation does not change its spectrum, so that the fat, the muscle, and the bone part each have a constant attenuation coefficient.
  • Therefore, the radiation dose after transmission through the subject is always the same regardless of the order in which the radiation is transmitted through the respective components in the subject.
  • Radiation actually emitted to the subject is continuous radiation having a wide spectrum, but it is a collection of monochromatic radiation. Therefore, even in the case of continuous radiation, as long as the total thicknesses of the fat, the muscle, and the bone part are the same, the radiation dose after the radiation is transmitted through the subject is the same regardless of the order in which the components are traversed.
  • Only the thickness tf of the fat, the thickness tm of the muscle, and the thickness tb of the bone part are required to derive the component images described below, and information on how the fat, the muscle, and the bone part overlap each other is not required. Therefore, in the present embodiment, the attenuation coefficients are derived by using three types of models: a model consisting of only the fat; a simple model in which the radiation is first transmitted through the fat and then through the muscle; and a simple model in which the radiation is transmitted through the fat, then the muscle, and then the bone part.
  • In each model, the fat, the muscle, and the bone part are objects having the attenuation coefficients corresponding to the fat, the muscle, and the bone part, respectively.
  • The order of the components in the model is not limited to the fat, the muscle, and the bone part; any order can be adopted.
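The order-independence argument above can be checked numerically: for monochromatic radiation, attenuation is a product of exponential factors, one per component, and the product does not depend on the order. The coefficients and thicknesses below are arbitrary illustrative values.

```python
import numpy as np

# Monochromatic attenuation is a product of exponential factors, one per
# component, so the transmitted dose cannot depend on the traversal order.
MU = {"fat": 0.19, "muscle": 0.22, "bone": 0.55}  # per cm (illustrative)
T = {"fat": 2.0, "muscle": 5.0, "bone": 1.0}      # cm (illustrative)

def transmitted(order, i0=1.0):
    """Dose remaining after the radiation passes through the listed
    components in the given order."""
    dose = i0
    for part in order:
        dose *= np.exp(-MU[part] * T[part])
    return dose
```

Evaluating `transmitted` for two different orders of the same three components gives the same dose, up to floating-point rounding.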
  • First, a model consisting of only the fat, that is, a model consisting of only an object having the attenuation coefficient corresponding to the fat, is used, and the low-energy image and the high-energy image are derived by irradiating the model with the low-energy radiation and the high-energy radiation while changing the thickness of the fat.
  • A ratio of the pixel value of each pixel of the low-energy image and the high-energy image to an image (hereinafter referred to as a direct image) acquired by the low-energy radiation and the high-energy radiation in a case in which the fat is not present is derived, and the ratio is further divided by the thickness of the fat to derive the fat attenuation coefficients μLf and μHf for various thicknesses of the fat.
  • Next, a model consisting of the fat and the muscle, that is, a model consisting of objects having the attenuation coefficients corresponding to the fat and the muscle, is used, and the low-energy radiation and the high-energy radiation are respectively emitted to the model while changing the thicknesses of the fat and the muscle, to derive the low-energy image and the high-energy image.
  • A ratio of the pixel value of each pixel of the low-energy image and the high-energy image to the pixel value of the direct image is derived, and the ratio is further divided by the thickness of the muscle to derive the muscle attenuation coefficients μLm and μHm for various thicknesses of the muscle.
  • Here, the thickness of the fat is also taken into consideration, so that the muscle attenuation coefficients μLm and μHm are obtained in consideration of the beam hardening of the fat.
  • Finally, a model consisting of the fat, the muscle, and the bone part, that is, a model consisting of objects having the attenuation coefficients corresponding to the fat, the muscle, and the bone part, is used, and the low-energy radiation and the high-energy radiation are respectively emitted to the model while changing the thicknesses of the fat, the muscle, and the bone part, to derive the low-energy image and the high-energy image.
  • A ratio of the pixel value of each pixel of the low-energy image and the high-energy image to the pixel value of the direct image is derived, and the ratio is further divided by the thickness of the bone part to derive the bone part attenuation coefficients μLb and μHb for various thicknesses of the bone part.
  • Here, the thicknesses of the fat and the muscle are also taken into consideration, so that the bone part attenuation coefficients μLb and μHb are obtained in consideration of the beam hardening of the fat and the muscle.
  • For example, the low-energy bone part attenuation coefficient μLb1 for the thicknesses tf1, tm1, and tb1, that is, μLb1(tf1, tm1, tb1), is derived from the ratio CL1/CLd of the pixel value CL1 of the model's low-energy image to the pixel value CLd of the direct image, divided by the thickness tb1 of the bone part.
  • The thicknesses of the three types of substances included in the models are discrete. Therefore, for the three types of substances, the attenuation coefficients derived for various thicknesses can be interpolated to derive a relationship between the thickness and the attenuation coefficient of each component.
  • The fat attenuation coefficients μLf and μHf derived as described above are represented by a one-dimensional look-up table that depends only on the thickness of the fat.
  • The muscle attenuation coefficients μLm and μHm are represented by a two-dimensional look-up table that depends only on the thicknesses of the fat and the muscle.
  • The bone part attenuation coefficients μLb and μHb are represented by a three-dimensional look-up table that depends on the thicknesses of the fat, the muscle, and the bone part.
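The look-up tables can be sketched as follows for the one-dimensional fat table. The grid and coefficient values are hypothetical; the muscle and bone part coefficients would use two- and three-dimensional tables with a multi-dimensional interpolator in the same way.

```python
import numpy as np

# Hypothetical 1-D look-up table for the low-energy fat attenuation
# coefficient: beam hardening makes the effective coefficient decrease as
# the fat gets thicker.  A real table would come from the models above.
TF_GRID = np.array([0.0, 5.0, 10.0, 20.0, 30.0])           # fat thickness, cm
MU_LF_TABLE = np.array([0.200, 0.195, 0.190, 0.183, 0.178])

def mu_lf(tf):
    """Interpolate the fat attenuation coefficient for an arbitrary fat
    thickness by piecewise-linear interpolation over the table."""
    return float(np.interp(tf, TF_GRID, MU_LF_TABLE))
```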
  • The component thickness derivation unit 23 derives the thickness tf of the fat and the thickness tm of the muscle by solving Expression (7) and Expression (8) with the thickness tf of the fat and the thickness tm of the muscle as variables. It should be noted that the thickness tf of the fat and the thickness tm of the muscle are derived for each pixel of the first attenuation image CL and the second attenuation image CH, but, in the following description, (x,y) representing the pixel position is omitted.
  • First, the component thickness derivation unit 23 calculates, by Expression (8), the thickness tm of the muscle in a case in which the thickness tf of the fat is set to zero.
  • Next, the component thickness derivation unit 23 calculates the thickness tf0 of the fat in a case in which the thickness tm of the muscle is set to zero, by using Expression (8).
  • CH = μHf(tf0, 0, T - 0 - tf0) × tf0 + μHm(tf0, 0, T - 0 - tf0) × 0 + μHb(tf0, 0, T - 0 - tf0) × (T - 0 - tf0)
  • tf0 is calculated by Expression (11).
  • Fig. 5 is a diagram showing a relationship between the attenuation amounts in accordance with the thickness of the muscle and the thickness of the fat.
  • An attenuation amount 33 indicates the attenuation amount CL, which is the pixel value of the low-energy image, and the attenuation amount CH, which is the pixel value of the high-energy image, derived by actually imaging the subject.
  • In this case, the pixel value, that is, the attenuation amount, of the first attenuation image derived by Expression (7) (here, a provisional first attenuation image CL') is less than the pixel value of the first attenuation image CL derived from the actual thickness of the muscle and the actual thickness of the fat, as shown by an attenuation amount 34 in Fig. 5 (that is, CL > CL').
  • On the other hand, the pixel value, that is, the attenuation amount, of the provisional first attenuation image CL' derived by Expression (7) by using the actual thickness of the muscle and the actual thickness of the fat is the same as the pixel value of the first attenuation image CL, as shown by an attenuation amount 36 in Fig. 5.
  • the component thickness derivation unit 23 derives the thickness tm of the muscle and the thickness tf of the fat as follows.
  • a provisional thickness tfk of the fat is set. It should be noted that zero is used as an initial value of the provisional thickness tmk of the muscle.
  • the provisional first attenuation image CL' is calculated by Expression (7) using the provisional thickness tfk of the fat, the provisional thickness tmk of the muscle, the fat attenuation coefficient μLf, the muscle attenuation coefficient μLm, and the bone part attenuation coefficient μLb for the low-energy radiation.
  • as the fat attenuation coefficient μLf, the muscle attenuation coefficient μLm, and the bone part attenuation coefficient μLb, the values in accordance with the provisional thickness tfk of the fat, the provisional thickness tmk of the muscle, and the provisional thickness of the bone part (that is, T − tfk − tmk) are used.
  • the difference value ΔCL between the provisional first attenuation image CL' and the first attenuation image CL is calculated.
  • the provisional thickness tmk of the muscle is updated on the assumption that the difference value ΔCL is the pixel value corresponding to the amount of the radiation attenuated by the bone.
  • the provisional second attenuation image CH' is calculated by Expression (8) using the updated thickness tmk of the muscle and thickness tfk of the fat.
  • the difference value ΔCH between the provisional second attenuation image CH' and the second attenuation image CH is calculated.
  • the provisional thickness tfk of the fat is updated on the assumption that the difference value ΔCH is the pixel value corresponding to the amount of the radiation attenuated by the fat.
  • the thickness tf of the fat and the thickness tm of the muscle are derived by repeating the processing of steps S1 to S5 until the absolute values of the difference values ΔCL and ΔCH are less than a predetermined threshold value. It should be noted that the thickness tf of the fat and the thickness tm of the muscle may be derived by repeating the processing of steps S1 to S5 a predetermined number of times.
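The loop in steps S1 to S5 above can be sketched as a per-pixel fixed-point iteration. The sketch below is ours, not the patent's: it assumes constant attenuation coefficients (in the patent, the μ values vary with the provisional thicknesses), the coefficient values are made up for illustration, and the single-variable Newton-style corrections stand in for the patent's heuristic updates.

```python
# Sketch of the iterative derivation of fat thickness tf and muscle thickness
# tm from the attenuation values CL and CH of one pixel (steps S1 to S5).
# Attenuation coefficients are treated as constants here; all values are
# illustrative, not measured data.

MU_LF, MU_LM, MU_LB = 0.20, 0.28, 0.56   # low-energy: fat, muscle, bone (1/cm)
MU_HF, MU_HM, MU_HB = 0.15, 0.21, 0.33   # high-energy: fat, muscle, bone (1/cm)

def derive_thicknesses(cl, ch, body_thickness, tol=1e-9, max_iter=1000):
    """Return (tf, tm, tb) for one pixel given attenuation values cl and ch."""
    tf = tm = 0.0                                        # S1: start from zero
    for _ in range(max_iter):
        tb = body_thickness - tf - tm
        cl_prov = MU_LF * tf + MU_LM * tm + MU_LB * tb   # S2: provisional CL'
        d_cl = cl - cl_prov                              # S3: difference ΔCL
        tm += d_cl / (MU_LM - MU_LB)                     # update muscle thickness
        tb = body_thickness - tf - tm
        ch_prov = MU_HF * tf + MU_HM * tm + MU_HB * tb   # S4: provisional CH'
        d_ch = ch - ch_prov                              # S5: difference ΔCH
        tf += d_ch / (MU_HF - MU_HB)                     # update fat thickness
        if abs(d_cl) < tol and abs(d_ch) < tol:
            break
    return tf, tm, body_thickness - tf - tm
```

For a pixel with true thicknesses tf = 5 cm, tm = 12 cm, tb = 3 cm (T = 20 cm), the forward model gives CL = 6.04 and CH = 4.26, and the iteration recovers all three thicknesses; with these coefficients the per-loop error contraction factor is about 0.86.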
  • the image derivation unit 24 derives the first to third component images, that is, the fat image Gf, the muscle image Gm, and the bone part image Gb, based on the thickness tf of the fat and the thickness tm of the muscle that are derived by the component thickness derivation unit 23 and the thickness tb of the bone part derived by Expression (6).
  • the fat image Gf has a pixel value whose magnitude corresponds to the thickness tf of the fat.
  • the muscle image Gm has a pixel value whose magnitude corresponds to the thickness tm of the muscle.
  • the bone part image Gb has a pixel value whose magnitude corresponds to the thickness tb (that is, T − tf − tm) of the bone part.
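As a sketch of how the image derivation unit 24 might turn a per-pixel thickness map into a component image whose pixel values scale with thickness, assuming a simple linear mapping to 8-bit gray levels (the scaling choice is ours; the patent does not specify one):

```python
import numpy as np

def thickness_to_component_image(thickness_map, max_thickness):
    """Map a per-pixel thickness map (cm) to an 8-bit image whose pixel
    values are proportional to the component thickness."""
    scaled = np.clip(thickness_map / max_thickness, 0.0, 1.0) * 255.0
    return scaled.astype(np.uint8)
```

The fat image Gf, muscle image Gm, and bone part image Gb would then each be produced by applying this mapping to the corresponding thickness map, with the body thickness T as a natural upper bound.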
  • the display controller 25 displays the fat image Gf, the muscle image Gm, and the bone part image Gb that are derived.
  • Fig. 6 is a diagram showing a display screen 40 of the fat image Gf, the muscle image Gm, and the bone part image Gb.
  • Fig. 7 is a flowchart showing the processing performed in the present embodiment.
  • the image acquisition unit 21 causes the imaging apparatus 1 to perform the energy subtraction imaging of the subject H to acquire the first and second radiation images G1 and G2 (radiation image acquisition: step ST1). Then, the body thickness acquisition unit 22 acquires the body thickness T of the subject H (step ST2).
  • the component thickness derivation unit 23 derives the thickness tf of the fat, the thickness tm of the muscle, and the thickness tb of the bone part by using the body thickness T, the first radiation image G1, and the second radiation image G2 (component thickness derivation: step ST3).
  • the image derivation unit 24 derives the fat image Gf, the muscle image Gm, and the bone part image Gb based on the thickness tf of the fat, the thickness tm of the muscle, and the thickness tb of the bone part (component image derivation: step ST4).
  • the display controller 25 displays the fat image Gf, the muscle image Gm, and the bone part image Gb (step ST5), and the processing ends.
  • the thickness tf of the fat, the thickness tm of the muscle, and the thickness tb of the bone part are derived by using the body thickness T, the first radiation image G1, and the second radiation image G2, and the fat image Gf, the muscle image Gm, and the bone part image Gb are derived based on these thicknesses. Therefore, the fat image Gf, the muscle image Gm, and the bone part image Gb of the subject H can be acquired merely by acquiring two images having different energy distributions, that is, the first and second radiation images G1 and G2. As a result, the fat component, the muscle component, and the bone part component in the subject H can be accurately separated by using only these two images.
  • the images of three components of the fat component, the muscle component, and the bone part component in the subject H are derived by using the first and second radiation images G1 and G2, but the present disclosure is not limited to this.
  • the first to (n - 1)th radiation images may be acquired by n - 1 types of radiation having different energy distributions, and the first to nth component images may be derived by using the body thickness and the first to (n - 1)th radiation images.
  • by using three radiation images having different energy distributions together with the body thickness of the subject H, the thickness of a component of an artificial object in the subject H may be derived in addition to the fat component, the muscle component, and the bone part component, to derive four component images: the fat image, the muscle image, the bone part image, and an artificial object image.
  • examples of the artificial object include a screw for fixing a fracture portion, a stent disposed in a blood vessel, silicone disposed in a breast, and a contrast agent.
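Under the simplifying assumption of constant attenuation coefficients, the generalization described above — n components from n − 1 radiation images plus the body thickness — reduces to an n × n linear system: n − 1 attenuation equations plus the constraint that the component thicknesses sum to the body thickness. The sketch below uses illustrative coefficients for four components (fat, muscle, bone, artificial object); the matrix values are made up, not from the patent.

```python
import numpy as np

# Rows: one per energy distribution (n - 1 = 3 radiation images).
# Columns: fat, muscle, bone, artificial object. Values are illustrative.
MU = np.array([
    [0.20, 0.28, 0.56, 0.90],
    [0.15, 0.21, 0.33, 0.60],
    [0.12, 0.16, 0.24, 0.45],
])

def derive_n_thicknesses(attenuations, body_thickness):
    """Solve for the n component thicknesses of one pixel from the n - 1
    attenuation values plus the body-thickness constraint."""
    n = MU.shape[1]
    a = np.vstack([MU, np.ones(n)])           # last row: thicknesses sum to T
    b = np.append(attenuations, body_thickness)
    return np.linalg.solve(a, b)
```

In the patent's setting the coefficients depend on the provisional thicknesses, so an iterative scheme like steps S1 to S5 is used instead of a single direct solve; the linear system above is the constant-coefficient special case.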
  • the first and second radiation images G1 and G2 are acquired by the one-shot method, but the present disclosure is not limited to this.
  • the first and second radiation images G1 and G2 may be acquired by a so-called two-shot method in which the imaging is performed twice by using only one radiation detector.
  • a position of the subject H may shift between the first radiation image G1 and the second radiation image G2 due to a body movement of the subject H. Therefore, it is preferable to perform the processing according to the present embodiment after registration of the subject in the first radiation image G1 and the second radiation image G2.
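As a minimal sketch of the registration step mentioned above, assuming the body movement between the two shots is a small integer translation (real registration would typically handle sub-pixel and non-rigid motion), a brute-force correlation search can estimate and compensate the shift:

```python
import numpy as np

def estimate_shift(reference, moving, max_shift=5):
    """Find the integer (dy, dx) that best aligns `moving` to `reference`
    by maximizing the correlation over a small search window."""
    best_score, best_shift = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # np.roll wraps around the image edges; acceptable for a sketch
            # when the subject is well inside the field of view.
            score = np.sum(reference * np.roll(moving, (dy, dx), axis=(0, 1)))
            if score > best_score:
                best_score, best_shift = score, (dy, dx)
    return best_shift

def register(reference, moving, max_shift=5):
    """Return `moving` shifted so that the subject aligns with `reference`."""
    dy, dx = estimate_shift(reference, moving, max_shift)
    return np.roll(moving, (dy, dx), axis=(0, 1))
```

With G1 as the reference, applying `register` to G2 before the energy-subtraction processing would compensate a rigid in-plane shift caused by body movement.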
  • the radiation image acquired in the system that images the subject H using the first and second radiation detectors 5 and 6 is used, but it goes without saying that the technology of the present disclosure can be applied even in a case in which the first and second radiation images G1 and G2 are acquired using an accumulative phosphor sheet instead of the radiation detector.
  • the first and second radiation images G1 and G2 need only be acquired by stacking two accumulative phosphor sheets, irradiating them with the radiation transmitted through the subject H, accumulating and recording radiation image information of the subject H in each of the accumulative phosphor sheets, and photoelectrically reading the radiation image information from each of the accumulative phosphor sheets.
  • the two-shot method may also be used in a case in which the first and second radiation images G1 and G2 are acquired by using the accumulative phosphor sheet.
  • the radiation in the embodiment described above is not particularly limited, and α-rays or γ-rays can be used in addition to X-rays.
  • various processors shown below can be used as the hardware structure of processing units that execute various types of processing, such as the image acquisition unit 21, the body thickness acquisition unit 22, the component thickness derivation unit 23, the image derivation unit 24, and the display controller 25.
  • the various processors include, in addition to the CPU that is a general-purpose processor which executes software (program) and functions as various processing units, a programmable logic device (PLD) that is a processor whose circuit configuration can be changed after manufacture, such as a field programmable gate array (FPGA), and a dedicated electrical circuit that is a processor having a circuit configuration which is designed for exclusive use in order to execute specific processing, such as an application specific integrated circuit (ASIC).
  • PLD programmable logic device
  • FPGA field programmable gate array
  • ASIC application specific integrated circuit
  • One processing unit may be configured by one of these various processors, or may be configured by a combination of two or more processors of the same type or different types (for example, a combination of a plurality of FPGAs or a combination of the CPU and the FPGA).
  • a plurality of the processing units may be configured by one processor.
  • the various processing units are configured by using one or more of the various processors described above.
  • circuitry in which circuit elements, such as semiconductor elements, are combined.
  • a radiation image processing device comprising: at least one processor, in which the processor is configured to: acquire first to (n - 1)th (n ≥ 3) radiation images acquired by imaging a subject, which includes first to nth components each consisting of a single composition, with n - 1 types of radiation having different energy distributions; acquire a body thickness of the subject; derive thicknesses of the first to nth components by using the body thickness and the first to (n - 1)th radiation images; and derive first to nth component images in which the first to nth components are enhanced, respectively, based on the thicknesses of the first to nth components.
  • the radiation image processing device in which the processor is configured to acquire the body thickness by deriving the body thickness of the subject based on at least one of the first to (n - 1)th radiation images.
  • the radiation image processing device in which the processor is configured to: acquire a first radiation image and a second radiation image that are acquired by imaging the subject with two types of radiation having different energy components; derive the body thickness of the subject based on at least one of the first radiation image or the second radiation image; derive thicknesses of a first component, a second component, and a third component of the subject by using the body thickness, the first radiation image, and the second radiation image; and derive a first component image, a second component image, and a third component image in which the first component, the second component, and the third component are enhanced, respectively, based on the thicknesses of the first component, the second component, and the third component.
  • the radiation image processing device in which the first component, the second component, and the third component are fat, muscle, and a bone, respectively.
  • a radiation image processing method executed by a computer comprising: acquiring first to (n - 1)th (n ≥ 3) radiation images acquired by imaging a subject, which includes first to nth components each consisting of a single composition, with n - 1 types of radiation having different energy distributions; acquiring a body thickness of the subject; deriving thicknesses of the first to nth components by using the body thickness and the first to (n - 1)th radiation images; and deriving first to nth component images in which the first to nth components are enhanced, respectively, based on the thicknesses of the first to nth components.
  • a radiation image processing program causing a computer to execute: a procedure of acquiring first to (n - 1)th (n ≥ 3) radiation images acquired by imaging a subject, which includes first to nth components each consisting of a single composition, with n - 1 types of radiation having different energy distributions; a procedure of acquiring a body thickness of the subject; a procedure of deriving thicknesses of the first to nth components by using the body thickness and the first to (n - 1)th radiation images; and a procedure of deriving first to nth component images in which the first to nth components are enhanced, respectively, based on the thicknesses of the first to nth components.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Pathology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Biophysics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Radiology & Medical Imaging (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Optics & Photonics (AREA)
  • Dentistry (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • General Physics & Mathematics (AREA)
  • Immunology (AREA)
  • Physiology (AREA)
  • Orthopedic Medicine & Surgery (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
EP25162809.5A 2024-03-12 2025-03-11 Strahlungsbildverarbeitungsvorrichtung, strahlungsbildverarbeitungsverfahren und strahlungsbildverarbeitungsprogramm Pending EP4616807A1 (de)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2024037899A JP2025139125A (ja) 2024-03-12 2024-03-12 放射線画像処理装置、方法およびプログラム

Publications (1)

Publication Number Publication Date
EP4616807A1 true EP4616807A1 (de) 2025-09-17

Family

ID=94970238

Family Applications (1)

Application Number Title Priority Date Filing Date
EP25162809.5A Pending EP4616807A1 (de) 2024-03-12 2025-03-11 Strahlungsbildverarbeitungsvorrichtung, strahlungsbildverarbeitungsverfahren und strahlungsbildverarbeitungsprogramm

Country Status (3)

Country Link
US (1) US20250288266A1 (de)
EP (1) EP4616807A1 (de)
JP (1) JP2025139125A (de)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010005252A (ja) * 2008-06-30 2010-01-14 Fujifilm Corp エネルギーサブトラクション処理装置、方法、およびプログラム
JP2015043959A (ja) 2013-07-31 2015-03-12 富士フイルム株式会社 放射線画像解析装置および方法並びにプログラム
JP2022056084A (ja) 2020-09-29 2022-04-08 キヤノン株式会社 画像処理装置、画像処理方法及びプログラム
US20220167935A1 (en) * 2019-09-02 2022-06-02 Canon Kabushiki Kaisha Image processing apparatus, radiation imaging system, image processing method, and non-transitory computer-readable storage medium
JP2023047911A (ja) 2021-09-27 2023-04-06 富士フイルム株式会社 画像処理装置、方法およびプログラム
US20230134187A1 (en) * 2021-11-02 2023-05-04 Fujifilm Corporation Radiation image processing device, radiation image processing method, and radiation image processing program
US20230404510A1 (en) * 2022-06-21 2023-12-21 Fujifilm Corporation Radiation image processing device, radiation image processing method, and radiation image processing program

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010005252A (ja) * 2008-06-30 2010-01-14 Fujifilm Corp エネルギーサブトラクション処理装置、方法、およびプログラム
JP2015043959A (ja) 2013-07-31 2015-03-12 富士フイルム株式会社 放射線画像解析装置および方法並びにプログラム
US20220167935A1 (en) * 2019-09-02 2022-06-02 Canon Kabushiki Kaisha Image processing apparatus, radiation imaging system, image processing method, and non-transitory computer-readable storage medium
JP2022056084A (ja) 2020-09-29 2022-04-08 キヤノン株式会社 画像処理装置、画像処理方法及びプログラム
US20230153970A1 (en) * 2020-09-29 2023-05-18 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and non-transitory computer-readable storage medium
JP2023047911A (ja) 2021-09-27 2023-04-06 富士フイルム株式会社 画像処理装置、方法およびプログラム
US20230134187A1 (en) * 2021-11-02 2023-05-04 Fujifilm Corporation Radiation image processing device, radiation image processing method, and radiation image processing program
US20230404510A1 (en) * 2022-06-21 2023-12-21 Fujifilm Corporation Radiation image processing device, radiation image processing method, and radiation image processing program

Also Published As

Publication number Publication date
US20250288266A1 (en) 2025-09-18
JP2025139125A (ja) 2025-09-26

Similar Documents

Publication Publication Date Title
Seibert Tradeoffs between image quality and dose
EP3804623A1 (de) Röntgengerät, röntgenverfahren und programm
US20210236078A1 (en) Information processing apparatus and method, and radiography system
JP7016293B2 (ja) 骨塩情報取得装置、方法およびプログラム
US20250127474A1 (en) Information processing apparatus, information processing method, and information processing program
JP7220643B2 (ja) 画像処理装置、方法およびプログラム
JP7812972B2 (ja) 放射線画像処理装置、方法およびプログラム
JP2022163614A (ja) 推定装置、方法およびプログラム
JP2017080342A (ja) 放射線撮像システム、放射線画像の情報処理装置、放射線画像の情報処理方法、及び、そのプログラム
US20220323032A1 (en) Learning device, learning method, and learning program, radiation image processing device, radiation image processing method, and radiation image processing program
EP4616807A1 (de) Strahlungsbildverarbeitungsvorrichtung, strahlungsbildverarbeitungsverfahren und strahlungsbildverarbeitungsprogramm
US20240104729A1 (en) Radiation image processing device, radiation image processing method, and radiation image processing program
EP4616810A1 (de) Strahlungsbildverarbeitungsvorrichtung, strahlungsbildverarbeitungsverfahren und strahlungsbildverarbeitungsprogramm
US20240016464A1 (en) Radiation image processing device, radiation image processing method, and radiation image processing program
US20240023919A1 (en) Radiation image processing device, radiation image processing method, and radiation image processing program
US20240016465A1 (en) Radiation image processing device, radiation image processing method, and radiation image processing program
EP4535280A1 (de) Strahlungsbildverarbeitungsvorrichtung, strahlungsbildverarbeitungsverfahren und strahlungsbildverarbeitungsprogramm
US11911204B2 (en) Scattered ray model derivation device, scattered ray model derivation method, scattered ray model derivation program, radiation image processing device, radiation image processing method, and radiation image processing program
WO2021095447A1 (ja) 画像処理装置、放射線撮影装置、画像処理方法及びプログラム
EP4609791A1 (de) Bildverarbeitungsvorrichtung, -verfahren und -programm sowie lernvorrichtung, -verfahren und -programm
Baader et al. Risk‐minimizing tube current and tube voltage modulation for CT: A simulation study
EP4530969A1 (de) Strahlungsbildverarbeitungsvorrichtung, strahlungsbildverarbeitungsverfahren und strahlungsbildverarbeitungsprogramm
Lang Bone mineral assessment of the axial skeleton: technical aspects
Kotre Suppression of the low spatial frequency effects of scattered radiation in digital radiography
Zhang et al. MTF compensation for digital radiography system with indirect conversion flat panel detector

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC ME MK MT NL NO PL PT RO RS SE SI SK SM TR

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE