WO2021210612A1 - Radiation image acquisition device, radiation image acquisition system, and radiation image acquisition method - Google Patents
Radiation image acquisition device, radiation image acquisition system, and radiation image acquisition method
- Publication number: WO2021210612A1 (PCT application PCT/JP2021/015464)
- Authority: WIPO (PCT)
- Prior art keywords: image, noise, ray, radiation, pixel
- Prior art date
Classifications
- G01T1/208 — Circuits specially adapted for scintillation detectors, e.g. for the photo-multiplier section
- A61B6/4233 — Arrangements for detecting radiation specially adapted for radiation diagnosis, characterised by using matrix detectors
- A61B6/5205 — Devices using data or image processing specially adapted for radiation diagnosis, involving processing of raw data to produce diagnostic data
- A61B6/5258 — Devices using data or image processing specially adapted for radiation diagnosis, involving detection or reduction of artifacts or noise
- G01N23/04 — Investigating or analysing materials by transmitting radiation (e.g. X-rays) through the material and forming images of the material
- G01N23/083 — Investigating or analysing materials by transmitting the radiation through the material and measuring the absorption, the radiation being X-rays
- G01T1/17 — Circuit arrangements not adapted to a particular type of detector
- G06N20/00 — Machine learning
- G06N3/045 — Combinations of networks
- G06N3/09 — Supervised learning
- G06T5/60 — Image enhancement or restoration using machine learning, e.g. neural networks
- G06T5/70 — Denoising; Smoothing
- G06T7/0004 — Industrial image inspection
- G01N2223/1016 — X-ray
- G01N2223/401 — Imaging image processing
- G01N2223/501 — Detectors array
- G01N2223/505 — Detectors scintillation
- G06N3/0464 — Convolutional networks [CNN, ConvNet]
- G06T2207/10116 — X-ray image
- G06T2207/20081 — Training; Learning
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/20182 — Noise reduction or smoothing in the temporal domain; Spatio-temporal filtering
- G06T2207/30108 — Industrial image inspection
- G06T2207/30128 — Food products
Definitions
- One aspect of the embodiment relates to a radiation image acquisition device, a radiation image acquisition system, and a radiation image acquisition method.
- Conventionally, an apparatus is used that acquires, as image data, the distribution of electromagnetic waves such as X-rays transmitted through an object by providing a plurality of rows of line sensors arranged orthogonally to the transport direction of the object and adding the detection data output from the line sensors of the plurality of rows. With such a device, an integrated-exposure effect is obtained in the image data in which the electromagnetic waves transmitted through the object are detected.
- However, the S/N ratio may not be sufficiently improved in the image data.
- One aspect of the embodiment has been made in view of such a problem, and its object is to provide a radiation image acquisition device, a radiation image acquisition system, and a radiation image acquisition method capable of effectively improving the S/N ratio in a radiation image.
- The radiation image acquisition device comprises an imaging device that scans the radiation transmitted through an object in one direction and captures it to acquire a radiation image, a scintillator provided on the imaging device that converts the radiation into light, and an image processing module that executes noise removal processing for removing noise from the radiation image by inputting the radiation image into a trained model constructed in advance by machine learning using image data.
- The imaging device includes a detection element in which pixel lines, each having M pixels (M is an integer of 2 or more) arranged along the one direction, are arranged in N columns (N is an integer of 2 or more) in a direction orthogonal to the one direction, and which outputs a detection signal related to the light for each pixel, and a readout circuit that, for each of the N pixel lines of the detection element, adds the detection signals output from at least two of the M pixels and outputs a radiation image by sequentially outputting the added N detection signals.
- The radiation image acquisition system comprises the above-described radiation image acquisition device, a source for irradiating the object with radiation, and a conveyor device that transports the object in the one direction with respect to the imaging device.
- The radiation image acquisition method includes a step of scanning scintillation light corresponding to the radiation transmitted through the object in one direction and capturing it to acquire a radiation image, and a step of inputting the radiation image into a trained model constructed in advance by machine learning using image data to perform noise removal processing that removes noise from the radiation image. In the acquiring step, a detection element in which pixel lines, each having M pixels (M is an integer of 2 or more) arranged along the one direction, are arranged in N columns (N is an integer of 2 or more) in a direction orthogonal to the one direction and which outputs a detection signal related to the scintillation light for each pixel is used; for each of the N pixel lines, the detection signals of at least two of the M pixels are added, and the radiation image is output by sequentially outputting the added N detection signals.
- According to the above aspect, the scintillation light corresponding to the radiation transmitted through the object is detected by a detection element in which pixel lines having M pixels arranged in the scanning direction of the object are arranged in N columns; the detection signals of at least two of the M pixels output for each pixel line are added, and a radiation image is output by sequentially outputting the added N detection signals. Furthermore, noise removal processing using the trained model is performed on the radiation image. As a result, the noise component can be removed while the signal component in the radiation image is increased, and the S/N ratio in the radiation image can be effectively improved.
- FIG. 5 is a graph showing an example of a simulation calculation result of the relationship between the thickness of an object and the transmittance of X-rays by the calculation unit 202A of FIG.
- FIG. 5 is a graph showing an example of a simulation calculation result of the relationship between the thickness of an object and the average energy of transmitted X-rays by the calculation unit 202A of FIG.
- FIG. 5 is a graph showing an example of the relationship between the pixel value and the standard deviation of the noise value when the material of the object changes, which is derived from the calculation unit 202A of FIG.
- It is a block diagram showing the functional configuration of the control device 20B according to another modification of the present disclosure.
- It is a flowchart showing the procedure of the observation processing by the image acquisition device 1 according to another modification of the present disclosure.
- FIG. 23 is a perspective view showing an example of the structure of a jig used for imaging in the image acquisition device 1 according to another modification of the present disclosure. It is a figure showing an example of a captured image of the jig of FIG. 23. It is a block diagram showing the functional configuration of the control device 20C according to the second embodiment. It is a figure showing an example of the image data serving as teacher data used for constructing the trained model 206C of FIG. It is a figure showing an example of an X-ray transmission image to be analyzed by the selection unit 204C of FIG. It is a figure showing an example of the thickness-luminance characteristic graph acquired by the selection unit 204C of FIG.
- FIG. 1 is a configuration diagram of the image acquisition device 1, which is a radiation image acquisition device and a radiation image acquisition system according to the present embodiment.
- The image acquisition device 1 is a device that irradiates the object F transported in the transport direction TD with X-rays (radiation) and acquires an X-ray transmission image (radiation image) of the object F captured based on the X-rays transmitted through the object F.
- The image acquisition device 1 uses the X-ray transmission image to perform foreign matter inspection, weight inspection, product inspection, and the like on the object F, and its applications include food inspection, baggage inspection, substrate inspection, battery inspection, material inspection, and the like.
- The image acquisition device 1 includes a belt conveyor (conveyor device) 60, an X-ray irradiator (radiation source) 50, an X-ray detection camera (imaging device) 10, a control device (image processing module) 20, a display device 30, and an input device 40 for performing various inputs.
- The radiation image in the embodiment of the present disclosure is not limited to an X-ray image, and also includes an image formed by radiation other than X-rays, such as γ-rays.
- The belt conveyor 60 has a belt portion on which the object F is placed, and transports the object F in the transport direction TD at a predetermined transport speed by moving the belt portion in the transport direction (one direction) TD.
- the transport speed of the object F is, for example, 48 m / min.
- the belt conveyor 60 can change the transport speed to, for example, 24 m / min, 96 m / min, or the like, if necessary. Further, the belt conveyor 60 can change the height position of the belt portion as appropriate to change the distance between the X-ray irradiator 50 and the object F.
- Examples of the object F transported by the belt conveyor 60 include foods such as meat, seafood, agricultural products, and confectionery; rubber products such as tires; resin products; metal products; resource materials such as minerals; waste; and various other articles such as electronic components and electronic boards.
- the X-ray irradiator 50 is a device that irradiates (outputs) X-rays to the object F as an X-ray source.
- the X-ray irradiator 50 is a point light source, and irradiates X-rays by diffusing them in a predetermined angle range in a fixed irradiation direction.
- The X-ray irradiator 50 is arranged above the belt conveyor 60 at a predetermined distance from it so that the X-ray irradiation direction is directed toward the belt conveyor 60 and the diffused X-rays cover the entire width direction (the direction intersecting the transport direction TD) of the object F. In the length direction of the object F (the direction parallel to the transport direction TD), a predetermined divided range in the length direction is set as the irradiation range, and the entire length direction of the object F is irradiated with X-rays as the object F is transported in the transport direction TD on the belt conveyor 60.
- In the X-ray irradiator 50, the tube voltage and the tube current are set by the control device 20, and the belt conveyor 60 is irradiated with X-rays having a predetermined energy and dose corresponding to the set tube voltage and tube current. Further, a filter 51 that transmits a predetermined wavelength range of the X-rays is provided in the vicinity of the X-ray irradiator 50 on the belt conveyor 60 side.
- the X-ray detection camera 10 detects the X-rays transmitted through the object F among the X-rays irradiated to the object F by the X-ray irradiator 50, and acquires and outputs a detection signal based on the X-rays.
- The image acquisition device 1 sequentially outputs detection signals based on the X-rays transmitted through the object F conveyed by the belt conveyor 60, thereby scanning and capturing an X-ray transmission image in the transport direction TD and outputting the X-ray transmission image.
- The X-ray detection camera 10 includes a filter 19, a scintillator 11, a scan camera 12 (detection element), a sensor control unit 13, an amplifier 14, an AD converter 15, a correction circuit 16, an output interface 17, and an amplifier control unit 18.
- the scintillator 11, the scan camera 12, the amplifier 14, the AD converter 15, the correction circuit 16, and the output interface 17 are each electrically connected.
- the scintillator 11 is fixed on the scan camera 12 by adhesion or the like, and converts X-rays transmitted through the object F into scintillation light.
- the scintillator 11 outputs the scintillation light to the scan camera 12.
- the filter 19 transmits a predetermined wavelength region of X-rays toward the scintillator 11.
- FIG. 2 is a plan view showing the configuration of the scan camera 12.
- The scan camera 12 includes a plurality of pixels 72, which are photodiodes (photoelectric conversion elements) arranged two-dimensionally on a substrate 71, a readout circuit 73 that outputs to the outside the detection signals obtained by the plurality of pixels 72 photoelectrically converting the scintillation light, and a wiring portion W that electrically connects the readout circuit 73 to each of the plurality of pixels 72.
- The scan camera 12 has a configuration in which pixel lines (pixel groups) 74, each composed of M pixels 72 (M is an integer of 2 or more) arranged along the transport direction TD on the substrate 71, are arranged in N columns (N is an integer of 2 or more) in a direction substantially orthogonal to the transport direction TD. In the present embodiment, the number of pixels M is 4, and the number of pixel lines N is an arbitrary integer of 200 or more and 30,000 or less.
- In response to control by the sensor control unit 13, the readout circuit 73 sequentially receives, at intervals of a predetermined detection period (details described later), the detection signals output from the M pixels 72 of each pixel line 74, adds the detection signals of at least two of the M pixels 72, and outputs the added detection signals, combined for each pixel line 74, to the outside as the detection signal of one line of the object F orthogonal to the transport direction TD. In the present embodiment, the readout circuit 73 performs the addition processing on all M detection signals. The readout circuit 73 then performs the addition processing on the detection signals sequentially output from the M pixels 72 with the predetermined detection period shifted, thereby outputting the detection signal of the next line of the object F orthogonal to the transport direction TD. In the same manner, the readout circuit 73 sequentially outputs the detection signals of a plurality of lines of the object F orthogonal to the transport direction TD.
- The sensor control unit 13 controls the scan camera 12 to repeatedly capture images at the predetermined detection period so that all the pixels 72 in each pixel line 74 of the scan camera 12 can image the X-rays transmitted through the same region of the object F.
- The predetermined detection period may be set based on, for example, the pixel width of the pixels 72 in the pixel line 74 of the scan camera 12. The predetermined detection period may also be determined from, for example, the spacing between the pixels 72 in the pixel line 74 of the scan camera 12, the speed of the belt conveyor 60, and the distance (FOD: focus-object distance) between the X-ray irradiator 50 and the object F on the belt conveyor 60. Alternatively, a deviation (delay time) in the detection timing of the pixels 72 may be specified, and the predetermined detection period may be set based on that deviation.
- The amplifier 14 amplifies the detection signal at a set amplification factor to generate an amplified signal, and outputs the amplified signal to the AD converter 15. The set amplification factor is an amplification factor set by the amplifier control unit 18.
- the amplifier control unit 18 sets the set amplification factor of the amplifier 14 based on a predetermined imaging condition.
- the AD converter 15 converts the amplified signal (voltage signal) output by the amplifier 14 into a digital signal and outputs it to the correction circuit 16.
- the correction circuit 16 performs a predetermined correction such as signal amplification on the digital signal, and outputs the corrected digital signal to the output interface 17.
- the output interface 17 outputs a digital signal to the outside of the X-ray detection camera 10.
- the control device 20 is, for example, a computer such as a PC (Personal Computer).
- the control device 20 generates an X-ray transmission image based on a digital signal (amplified signal) corresponding to a plurality of lines of detection signals sequentially output from the X-ray detection camera 10 (more specifically, an output interface 17).
- the control device 20 generates one X-ray transmission image based on the digital signals for 128 lines output from the output interface 17.
- the generated X-ray transmission image is output to the display device 30 after being subjected to noise removal processing described later, and is displayed by the display device 30.
- the control device 20 controls the X-ray irradiator 50, the amplifier control unit 18, and the sensor control unit 13.
- Although the control device 20 of the present embodiment is a device provided independently outside the X-ray detection camera 10, it may be integrated inside the X-ray detection camera 10.
- FIG. 3 shows the hardware configuration of the control device 20.
- Physically, the control device 20 includes a CPU (Central Processing Unit) 101 and a GPU (Graphics Processing Unit) 105 as processors, a RAM (Random Access Memory) 102 and a ROM (Read Only Memory) 103 as recording media, a communication module 104, an input/output module 106, and the like, which are electrically connected to one another.
- the control device 20 may include a display, a keyboard, a mouse, a touch panel display, and the like as the input device 40 and the display device 30, or may include a data recording device such as a hard disk drive and a semiconductor memory. Further, the control device 20 may be composed of a plurality of computers.
- FIG. 4 is a block diagram showing a functional configuration of the control device 20.
- the control device 20 includes an input unit 201, a calculation unit 202, an image acquisition unit 203, a noise map generation unit 204, a processing unit 205, and a construction unit 206.
- Each functional unit of the control device 20 shown in FIG. 4 is realized by loading a program (the radiation image processing program of the first embodiment) onto hardware such as the CPU 101, the GPU 105, and the RAM 102, thereby operating the communication module 104, the input/output module 106, and the like under the control of the CPU 101 and the GPU 105, and reading and writing data in the RAM 102.
- The CPU 101 and the GPU 105 of the control device 20 cause the control device 20 to function as each functional unit shown in FIG. 4.
- The CPU 101 and the GPU 105 may each be single-unit hardware, or only one of them may be used. The CPU 101 and the GPU 105 may also be implemented as a soft processor in programmable logic such as an FPGA. The RAM and the ROM may likewise be single-unit hardware, or may be built into programmable logic such as an FPGA.
- Various data necessary for executing this computer program and various data generated by executing this computer program are all stored in an internal memory such as ROM 103 and RAM 102, or a storage medium such as a hard disk drive.
- A program that causes the CPU 101 and the GPU 105 to execute the noise removal processing on the X-ray image (X-ray transmission image) is stored in advance in the built-in memory or storage medium of the control device 20 and is read and executed by the CPU 101 and the GPU 105 (described later).
- The input unit 201 receives input of condition information indicating either the conditions of the radiation source or the imaging conditions used when the object F is irradiated with radiation and imaged. Specifically, the input unit 201 accepts, from the user of the image acquisition device 1, input of condition information indicating the operating conditions of the X-ray irradiator (radiation source) 50 when an X-ray image of the object F is captured, the imaging conditions of the X-ray detection camera 10, and the like.
- the operating conditions include all or part of the tube voltage, target angle, target material, and the like.
- The condition information indicating the imaging conditions includes all or part of the following: the material and thickness of the filters 51 and 19 arranged between the X-ray irradiator 50 and the X-ray detection camera 10, the distance (FDD) between the X-ray irradiator 50 and the X-ray detection camera 10, the type of the window material of the X-ray detection camera 10, the material and thickness of the scintillator 11 of the X-ray detection camera 10, X-ray detection camera information (for example, the gain setting value, circuit noise value, saturated charge amount, conversion coefficient value (e-/count), and camera line rate (Hz) or line speed (m/min)), information on the object F, and the like.
- the input unit 201 may accept the input of the condition information as a direct input of information such as a numerical value, or may accept the input of the information such as a numerical value set in the internal memory in advance as a selective input.
- the input unit 201 accepts the input of the above-mentioned condition information from the user, but may acquire some condition information (tube voltage, etc.) according to the detection result of the control state by the control device 20.
- the calculation unit 202 calculates the average energy related to X-rays (radiation) transmitted through the object F based on the condition information.
- the condition information includes any one of the tube voltage of the source, the information about the object F, the filter information of the camera used for imaging the object F, the scintillator information of the camera, and the filter information of the X-ray source. At least one is included.
- Specifically, based on the condition information received by the input unit 201, the calculation unit 202 calculates the value of the average energy of the X-rays that are transmitted through the object F and detected by the X-ray detection camera 10 when the image acquisition device 1 is used.
- For example, based on information such as the tube voltage, the target angle, the target material, the materials and thicknesses of the filters 51 and 19 and their presence or absence, the type of the window material of the X-ray detection camera 10 and its presence or absence, and the material and thickness of the scintillator 11 of the X-ray detection camera 10, the calculation unit 202 calculates the spectrum of the X-rays detected by the X-ray detection camera 10 using, for example, a known approximation formula such as Tucker's. The calculation unit 202 then calculates the integrated spectral intensity and the integrated photon number from the X-ray spectrum, and calculates the value of the average energy of the X-rays by dividing the integrated spectral intensity by the integrated photon number.
- For example, in Tucker's approximation formula, Em can be determined from the information on the tube voltage, A, ρ, and μ(E) can be determined from the information on the target material, and θ can be determined from the information on the target angle.
- Further, the calculation unit 202 can calculate the X-ray energy spectrum that passes through the filters and the object F and is absorbed by the scintillator by using the X-ray attenuation formula of formula (2) (exponential attenuation of the form I(E) = I0(E)·exp(−μx)). Here, μ is the attenuation coefficient of the object F, the filter, the scintillator, and the like, and x is the thickness of the object F, the filter, the scintillator, and the like; μ can be determined from information on the material of the object F, the filter, and the scintillator, and x can be determined from information on their thicknesses.
- the X-ray photon number spectrum can be obtained by dividing this X-ray energy spectrum by the energy of each X-ray.
- the calculation unit 202 calculates the average energy of X-rays using the following equation (3) by dividing the integrated value of the energy intensity by the integrated value of the number of photons.
- Average energy E = (integrated spectral intensity) / (integrated photon number) ... (3)
- Through the above calculation process, the calculation unit 202 calculates the average energy of the X-rays. For the calculation of the X-ray spectrum, a known approximation formula by Kramers, Birch et al. may also be used.
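- The attenuation and averaging steps described above can be sketched as follows; this is a hedged illustration in which the source spectrum, attenuation coefficients, and scintillator absorption are placeholder inputs (obtained, for example, from a Tucker- or Kramers-type approximation and material tables), not values taken from the patent.

```python
import numpy as np

def average_detected_energy(energies_keV, source_intensity, mu_object, x_object_cm,
                            mu_filter, x_filter_cm, absorb_scint):
    """Apply formula (2)-style attenuation and the formula (3) average.

    energies_keV      : energy bins of the spectrum
    source_intensity  : energy-intensity spectrum of the source (e.g. Tucker/Kramers)
    mu_object/filter  : linear attenuation coefficients per bin [1/cm]
    x_*_cm            : thicknesses traversed [cm]
    absorb_scint      : fraction of each bin absorbed by the scintillator
    """
    # Formula (2): exponential attenuation through filter and object,
    # then absorption in the scintillator.
    detected = (source_intensity
                * np.exp(-mu_filter * x_filter_cm)
                * np.exp(-mu_object * x_object_cm)
                * absorb_scint)
    # Photon-number spectrum = energy spectrum divided by photon energy.
    photons = detected / energies_keV
    # Formula (3): integrated spectral intensity / integrated photon number.
    return np.trapz(detected, energies_keV) / np.trapz(photons, energies_keV)
```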
- the image acquisition unit 203 acquires a radiation image in which the object F is irradiated with radiation and the radiation transmitted through the object F is captured. Specifically, the image acquisition unit 203 generates an X-ray image based on the digital signal (amplified signal) output from the X-ray detection camera 10 (more specifically, the output interface 17). The image acquisition unit 203 generates one X-ray image based on the digital signals for a plurality of lines output from the output interface 17.
- FIG. 5 is a diagram showing an example of an X-ray image acquired by the image acquisition unit 203.
- The noise map generation unit 204 derives an evaluation value from the pixel value of each pixel of the radiation image, based on relational data representing the relationship between the pixel value and an evaluation value for evaluating the spread of the noise value, and generates a noise map, which is data in which the derived evaluation value is associated with each pixel of the radiation image.
- the noise map generation unit 204 derives an evaluation value from the average energy related to the radiation transmitted through the object F and the pixel value of each pixel of the radiation image.
- Specifically, using the relational expression (relational data) between the pixel value and the standard deviation of the noise value (the evaluation value for evaluating the spread of the noise value), the noise map generation unit 204 derives the standard deviation of the noise value from the average energy of the X-rays calculated by the calculation unit 202 and the pixel value of each pixel of the X-ray image (radiation image) acquired by the image acquisition unit 203.
- the noise map generation unit 204 generates a noise standard deviation map (noise map) by associating each pixel of the X-ray image with the standard deviation of the derived noise value.
- In the relational expression (4), the variable Noise is the standard deviation of the noise value, and the variable Signal is the signal value (pixel value) of the pixel.
- The constant F is the noise factor, the constant M is the multiplication factor of the scintillator, the constant C is the coupling efficiency between the scan camera 12 and the scintillator 11, and the constant Q is the quantum efficiency of the scan camera 12.
- The constant cf is the conversion coefficient for converting the pixel signal value in the scan camera 12 into electric charge, and the variable Em is the average energy of the X-rays.
- The constant D is the dark-current noise caused by thermal noise in the image sensor, and the constant R is the readout noise in the scan camera 12.
- The noise map generation unit 204 substitutes the pixel value of each pixel of the X-ray image acquired by the image acquisition unit 203 into the variable Signal, and substitutes the numerical value of the average energy calculated by the calculation unit 202 into the variable Em. The noise map generation unit 204 then obtains the variable Noise calculated using the above equation (4) as the numerical value of the standard deviation of the noise value. Other parameters, including the average energy, may be acquired by receiving input via the input unit 201 or may be set in advance.
- FIG. 6 is a diagram showing an example of generating a noise standard deviation map by the noise map generation unit 204.
- The noise map generation unit 204 uses the relational expression (4) between the pixel value and the standard deviation of the noise value, substitutes various pixel values into the variable Signal, and obtains the correspondence between the pixel value and the variable Noise, thereby deriving a relationship graph G3 showing the correspondence between the pixel value and the standard deviation of the noise value. The noise map generation unit 204 then derives relational data G2, representing the correspondence between each pixel position and its pixel value, from the X-ray image G1 acquired by the image acquisition unit 203.
- The noise map generation unit 204 derives the standard deviation of the noise value corresponding to the pixel at each pixel position in the X-ray image by applying the correspondence shown in the relationship graph G3 to each pixel value in the relational data G2. The noise map generation unit 204 thereby associates the derived noise standard deviation with each pixel position, derives relational data G4 showing the correspondence between each pixel position and the noise standard deviation, and generates the noise standard deviation map G5 based on the derived relational data G4.
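- A minimal sketch of the noise standard deviation map generation is shown below. The exact relational expression (4) is not reproduced in this text, so the sketch substitutes an assumed shot-noise-plus-readout model built from the constants named above; what it illustrates is only the overall mapping from pixel value and average energy to a per-pixel noise standard deviation.

```python
import numpy as np

def noise_stddev(signal, em, F=1.0, M=1.0, C=1.0, Q=1.0, cf=1.0, D=0.0, R=0.0):
    """Stand-in for relational expression (4): map pixel value -> noise std.

    This is an assumed form, not the patent's equation: shot-noise variance
    proportional to Signal and average energy Em, scaled by the camera and
    scintillator constants, plus dark-current and readout contributions.
    """
    shot_var = F * signal * em * cf / (C * Q * M)  # assumed shot-noise variance term
    return np.sqrt(shot_var + D + R ** 2)

def make_noise_map(xray_image, em):
    """Associate a noise standard deviation with every pixel of the image."""
    return noise_stddev(xray_image.astype(float), em)
```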
- The processing unit 205 inputs the radiation image and the noise map into the trained model 207 constructed in advance by machine learning, and executes image processing for removing noise from the radiation image. That is, as shown in FIG. 7, the processing unit 205 acquires the trained model 207 (described later) constructed by the construction unit 206 from the built-in memory or storage medium of the control device 20, inputs the X-ray image G1 acquired by the image acquisition unit 203 and the noise standard deviation map G5 generated by the noise map generation unit 204 into the trained model 207, generates an output image G6 by executing image processing that removes noise from the X-ray image G1 using the trained model 207, and outputs the generated output image G6 to the display device 30 or the like.
- The construction unit 206 constructs, by machine learning, a trained model 207 that outputs noise-removed image data based on a training image and a noise map, using as training data a training image that is a radiation image, a noise map generated from the training image based on the relational expression between the pixel value and the standard deviation of the noise value, and noise-removed image data obtained by removing noise from the training image. The construction unit 206 stores the constructed trained model 207 in the built-in memory or storage medium of the control device 20.
- Machine learning includes supervised learning, unsupervised learning, and reinforcement learning, and these include deep learning and learning using neural networks.
- In the present embodiment, the two-dimensional convolutional neural network described in the paper "Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising" by Kai Zhang et al. is adopted as the deep-learning architecture.
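- As a hedged sketch of such a denoiser (the layer count, channel width, and the exact way the noise map is injected are illustrative assumptions, not the patent's specification), a DnCNN-style residual network taking the radiation image and the noise standard deviation map as a two-channel input could look like this:

```python
import torch
import torch.nn as nn

class DenoiserCNN(nn.Module):
    """Sketch of a DnCNN-style residual denoiser.

    Takes the radiation image and the noise standard deviation map as a
    two-channel input and predicts the noise residual; depth and width
    here are illustrative, not the patent's values.
    """
    def __init__(self, depth=8, width=64):
        super().__init__()
        layers = [nn.Conv2d(2, width, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1),
                       nn.BatchNorm2d(width), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(width, 1, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, image, noise_map):
        x = torch.cat([image, noise_map], dim=1)   # (B, 2, H, W)
        return image - self.body(x)                # residual learning: subtract predicted noise
```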
- the trained model 207 may be generated by an external computer or the like and downloaded to the control device 20 in addition to being constructed by the construction unit 206.
- The radiation image used for machine learning includes a radiation image obtained by imaging a known structure, or an image reproducing such a radiation image.
- FIG. 8 is an example of a training image which is one of the training data used for constructing the trained model 207.
- As the training image, an X-ray image in which patterns of various thicknesses, various materials, and various resolutions are captured can be used.
- the example shown in FIG. 8 is a training image G7 generated for chicken.
- As the training image G7, an X-ray image actually generated by using the image acquisition device 1 for a plurality of types of known structures may be used, or an image generated by simulation calculation may be used.
- the X-ray image may be acquired by using an apparatus different from the image acquisition apparatus 1.
- The construction unit 206 derives an evaluation value from the pixel value of each pixel of the radiation image based on the relational data representing the relationship between the pixel value and the evaluation value for evaluating the spread of the noise value, and generates a noise map, which is data in which the derived evaluation value is associated with each pixel of the radiation image.
- the construction unit 206 acquires a training image generated by actual imaging, simulation calculation, or the like from the image acquisition unit 203 or the like. Then, the construction unit 206 sets, for example, the operating conditions of the X-ray irradiator 50 of the image acquisition device 1, the imaging conditions of the image acquisition device 1, and the like.
- the construction unit 206 sets the operating conditions or imaging conditions of the X-ray irradiator 50 at the time of simulation calculation.
- the construction unit 206 calculates the average energy of X-rays based on the above-mentioned operating conditions or imaging conditions by using the same method as the calculation unit 202. Further, the construction unit 206 generates a noise standard deviation map based on the average energy of X-rays and the training image by using the same method as the method by the noise map generation unit 204 as shown in FIG.
- That is, the construction of the trained model includes a noise map generation step of deriving an evaluation value from the pixel value of each pixel of the radiation image based on the relational data representing the relationship between the pixel value and the evaluation value for evaluating the spread of the noise value, and generating a noise map, which is data in which the evaluation value derived for each pixel of the radiation image is associated with that pixel.
- The construction unit 206 constructs the trained model 207 by machine learning using, as training data, the training image, the noise map generated from the training image, and the noise-removed image data, which is data obtained by removing noise from the training image in advance. Specifically, the construction unit 206 acquires noise-removed image data in which noise has been removed from the training image in advance.
- the construction unit 206 uses the image before noise is added in the training image generation process as the noise removal image data.
- When an actually captured X-ray image is used as the training image, the construction unit 206 may use, as the noise-removed image data, an image from which noise has been removed using an image processing filter such as an average-value filter or a median filter.
- the construction unit 206 constructs a trained model 207 that outputs noise removal image data based on the training image and the noise standard deviation map by executing training by machine learning.
- FIG. 9 is a flowchart showing a procedure for creating image data which is teacher data (training data) used for constructing the trained model 207 by the construction unit 206.
- Image data (also called teacher image data), which is teacher data, is created by a computer in the following procedure.
- First, an image of a structure having a predetermined structure (a structure image) is created (step S301). The structure image may be created by simulation calculation, or it may be created by acquiring an X-ray image of a structure such as a chart having a predetermined structure.
- Next, a sigma value, which is the standard deviation of the pixel value, is calculated for one pixel selected from the plurality of pixels constituting the structure image (step S302). A normal distribution (Poisson distribution) representing the noise distribution is then set based on the sigma value obtained in step S302 (step S303). In step S304, a randomly set noise value is calculated along the normal distribution set based on the sigma value in step S303. The noise value obtained in step S304 is added to the pixel value of the one pixel, thereby generating a pixel value constituting the image data serving as the teacher data (step S305). The processes of steps S302 to S305 are performed for each of the plurality of pixels constituting the structure image (step S306), and teacher image data to serve as the teacher data is generated (step S307). When further teacher image data is required, it is determined that the processes of steps S301 to S307 are to be performed on another structure image (step S308), and further teacher image data serving as the teacher data is generated.
- the other structure image may be an image of a structure having the same structure or an image of a structure having another structure.
- The structure image is preferably an image with little noise and, ideally, an image with no noise. Since many noise-free images can be generated when the structure image is produced by simulation calculation, generating the structure image by simulation calculation is effective.
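- A minimal sketch of this teacher-data creation procedure (steps S301 to S307) is given below, assuming for illustration that the per-pixel sigma value follows a simple square-root (shot-noise-like) model rather than the patent's relational expression:

```python
import numpy as np

def make_teacher_pair(structure_image: np.ndarray, rng=np.random.default_rng()):
    """Create one (noisy teacher image, noise-free target) pair.

    structure_image: a low-noise / noise-free image of a known structure
                     (e.g. generated by simulation calculation, step S301).
    """
    # Step S302: per-pixel sigma; here assumed proportional to sqrt(pixel value)
    # as a stand-in for the patent's relational expression.
    sigma = np.sqrt(np.clip(structure_image, 0, None))
    # Steps S303-S305: draw a noise value per pixel from a normal distribution
    # with that sigma and add it to the pixel value.
    noisy = structure_image + rng.normal(0.0, 1.0, structure_image.shape) * sigma
    # Steps S306-S307: the per-pixel loop is vectorized above; the pair
    # (noisy, structure_image) forms one teacher-data sample.
    return noisy, structure_image
```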
- FIG. 10 is a flowchart showing the procedure of the observation process by the image acquisition device 1.
- First, the construction unit 206 constructs, by machine learning, a trained model 207 that outputs noise-removed image data based on a training image and a noise standard deviation map, using as training data the training image, the noise standard deviation map generated from the training image based on the relational expression, and the noise-removed image data (step S100).
- Next, the input unit 201 receives, from the operator (user) of the image acquisition device 1, input of condition information indicating the operating conditions of the X-ray irradiator 50 or the imaging conditions of the X-ray detection camera 10 (step S101).
- the calculation unit 202 calculates the value of the average energy of the X-rays detected by the X-ray detection camera 10 based on the condition information (step S102).
- The object F is then set in the image acquisition device 1 and imaged, and the control device 20 acquires an X-ray image of the object F (step S103). Further, the control device 20 derives the standard deviation of the noise value from the average energy of the X-rays and the pixel value of each pixel of the X-ray image based on the relational expression between the pixel value and the standard deviation of the noise value, and generates a noise standard deviation map by associating the derived noise standard deviation with each pixel (step S104).
- Next, the processing unit 205 inputs the X-ray image of the object F and the noise standard deviation map into the trained model 207 constructed and stored in advance, and performs noise removal processing on the X-ray image (step S105). Finally, the processing unit 205 outputs the output image, which is the X-ray image subjected to the noise removal processing, to the display device 30 (step S106).
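- Tying the earlier illustrative sketches together, the observation flow of steps S104 to S106 could be expressed as follows (make_noise_map and DenoiserCNN refer to the hypothetical helpers sketched above, not to components of the actual device):

```python
import numpy as np
import torch

def observe(xray_image: np.ndarray, average_energy: float, model: "DenoiserCNN"):
    """Steps S104-S106 in sketch form: noise map -> trained model -> output image."""
    noise_map = make_noise_map(xray_image, average_energy)        # step S104
    img = torch.from_numpy(xray_image).float()[None, None]        # (1, 1, H, W)
    nmap = torch.from_numpy(noise_map).float()[None, None]
    with torch.no_grad():
        output = model(img, nmap)                                 # step S105
    return output.squeeze().numpy()                               # step S106: image for display
```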
- As described above, in the image acquisition device 1 of the present embodiment, the scintillation light corresponding to the X-rays transmitted through the object F is detected by the scan camera 12, in which pixel lines 74 having M pixels 72 arranged in the scanning direction TD of the object F are arranged in N columns; the detection signals of at least two of the M pixels 72 output for each pixel line 74 are added, and an X-ray image is output by sequentially outputting the added N detection signals. Furthermore, noise removal processing using the trained model 207 is performed on the X-ray image. As a result, the noise component can be removed while the signal component in the X-ray image is increased, and the S/N ratio in the X-ray image is effectively improved.
- When the CNR (Contrast to Noise Ratio) was used as an evaluation index, this improvement effect was larger than the improvement of about 1.9 times in CNR obtained by noise removal processing with a bilateral filter.
- Further, the trained model 207 is constructed by machine learning using, as teacher data, image data obtained by adding noise values along a normal distribution to an X-ray image of a predetermined structure. As a result, it becomes easy to prepare the image data serving as teacher data for constructing the trained model 207, and the trained model 207 can be constructed efficiently.
- In the present embodiment, the standard deviation of the noise value is derived from the pixel value of each pixel of the X-ray image using the relational expression between the pixel value and the standard deviation of the noise value, and a noise standard deviation map, which is data in which the derived standard deviation of the noise value is associated with each pixel of the X-ray image, is generated. The X-ray image and the noise standard deviation map are then input to the trained model 207 constructed in advance by machine learning, and image processing for removing noise from the X-ray image is executed. As a result, the noise in each pixel of the X-ray image is removed by machine learning in consideration of the standard deviation of the noise value derived from the pixel value of each pixel, so that the trained model 207 can realize noise removal corresponding to the relationship between the pixel value and the standard deviation of the noise value in the X-ray image, and noise in the X-ray image can be effectively removed.
- In an X-ray image, the mode of the noise changes depending on differences in the tube voltage, the filters, the scintillator, the conditions of the X-ray detection camera (gain setting value, circuit noise value, saturated charge amount, conversion coefficient value (e-/count), camera line rate), the object, and the like. Therefore, when trying to realize noise removal by machine learning, it is conceivable to prepare learning models trained under various conditions. That is, as a comparative example, a method of constructing a plurality of learning models according to the conditions at the time of measuring the X-ray image, selecting a learning model for each condition, and executing the noise removal processing could be adopted.
- In such a comparative example, however, a learning model must be constructed for each noise condition, such as the average energy of the X-rays, the gain of the X-ray detection camera, and the type of X-ray camera, so that a huge number of learning models must be generated, and their construction can take a very long time. For example, with 10 variations of X-ray average energy, 8 variations of X-ray detection camera gain, and 3 camera product types, 10 × 8 × 3 = 240 trained models are required; if one day is required per model, the machine learning takes 240 days.
- In contrast, in the present embodiment, by generating a noise map from the X-ray image and using the noise map as input data for the machine learning, the number of noise conditions that require generation of a trained model can be reduced, and the training time for constructing the trained model 207 is greatly reduced.
- FIG. 11 is a block diagram showing a functional configuration of the control device 20A in the modified example of the first embodiment.
- Compared with the first embodiment, the control device 20A differs in that the calculation unit 202A has a function of deriving the average energy of the X-rays from the pixel values of the X-ray image, and the noise map generation unit 204A has a function of deriving a noise standard deviation map based on the pixel values of the X-ray image and the average energy of the X-rays derived from the X-ray image.
- FIG. 12 is a flowchart showing the procedure of the observation processing by the image acquisition device 1 including the control device 20A of FIG. 11. As shown in FIG. 12, in the control device 20A, step S103 of the control device 20 according to the first embodiment shown in FIG. 10 is performed immediately after step S100, and the processes shown in steps S102A and S104A are executed in place of the processes of steps S102 and S104 of the control device 20.
- the calculation unit 202A calculates the average energy from the pixel values of each pixel of the radiation image (step S102A). Specifically, the calculation unit 202A derives the relationship between the pixel value and the average energy in advance for each condition information by simulation calculation of the X-ray spectrum or the like. The calculation unit 202A acquires condition information including at least the tube voltage acquired by the input unit 201 and the information of the scintillator included in the X-ray detection camera 10. Then, the calculation unit 202A selects the relationship corresponding to the condition information from the relationship between the pixel value and the average energy derived in advance based on the condition information. Further, the calculation unit 202A derives the average energy for each pixel from the pixel value of each pixel of the X-ray image acquired by the image acquisition unit 203 based on the selected relationship.
- Specifically, the calculation unit 202A derives, based on the condition information, the graph G18 showing the relationship between the thickness of the object F and the X-ray transmittance and the graph G19 showing the relationship between the thickness of the object F and the average energy of the X-rays.
- That is, the calculation unit 202A calculates, by simulation calculation, the energy spectra G14 to G17 of the X-rays transmitted when the thickness of the object F is changed variously, based on condition information including at least the tube voltage and information on the scintillator included in the X-ray detection camera 10.
- FIG. 13 is a graph showing an example of a simulation calculation result of an X-ray energy spectrum transmitted through an object F by the calculation unit 202A.
- Here, the energy spectra G14 to G17 of the transmitted X-rays obtained by simulation calculation while gradually increasing the thickness of an object F composed of water are illustrated.
- the calculation unit 202A calculates the average energy of X-rays transmitted when the thickness of the object F is variously changed based on the calculated energy spectra G14 to G17.
- Alternatively, the calculation unit 202A may obtain the relationship between the thickness of the object F and the average energy based on an X-ray image obtained by imaging a structure having a known thickness.
- the calculation unit 202A also derives the relationship between the thickness of the object F and the X-ray transmittance based on the above simulation result.
- FIG. 14 is a chart showing an example of the relationship between the thickness of the object F and the average energy and the transmittance, which is derived by the calculation unit 202A. As shown in FIG. 14, the average energy of transmitted X-rays and the transmittance of X-rays are derived corresponding to each of the energy spectra G14 to G17 calculated for each thickness of the object F.
- the calculation unit 202A derives a graph G18 showing the relationship between the thickness of the object F and the X-ray transmittance from the X-ray transmittances derived for the objects F having various thicknesses.
- FIG. 15 is a graph derived by the calculation unit 202A showing the relationship between the thickness of the object F and the transmittance of X-rays with respect to the object F.
- the calculation unit 202A derives a graph G19 showing the relationship between the thickness of the object F and the average energy of the X-rays from the average energy of the X-rays derived for the objects F having various thicknesses.
- FIG. 16 is a graph showing the relationship between the thickness of the object F and the average energy of X-rays transmitted through the object F, which is derived by the calculation unit 202A.
- Then, the calculation unit 202A derives, for each of various pieces of condition information, a graph G20 showing the relationship between the pixel value of the X-ray image and the average energy as shown in FIG. 17, based on the two graphs G18 and G19 derived for that condition information.
- FIG. 17 is a graph showing the relationship between the pixel value of the X-ray image derived by the calculation unit 202A and the average energy.
- Specifically, the calculation unit 202A first derives, based on the condition information, the pixel value I0 of the X-ray transmission image when the object F does not exist. Next, the calculation unit 202A sets the pixel value I of the X-ray image when the object F is present and calculates the X-ray transmittance I/I0. Further, the calculation unit 202A derives the thickness of the object F from the calculated transmittance I/I0 based on the graph G18 of the thickness of the object F versus the X-ray transmittance. Then, based on the derived thickness of the object F and the graph G19 of the thickness of the object F versus the average energy of the transmitted X-rays, the calculation unit 202A derives the average energy of the transmitted X-rays corresponding to that thickness. Subsequently, the calculation unit 202A performs the above derivation for each of various pieces of condition information while changing the pixel value I of the X-ray image in various ways, thereby deriving, for each piece of condition information, the graph G20 showing the relationship between the pixel value of the X-ray image and the average energy of the transmitted X-rays.
- As an example of this derivation, suppose the pixel value I of the X-ray image is set to 500 and the pixel value I0 in the absence of the object F is 5,000, giving an X-ray transmittance I/I0 of 0.1. In this case, the calculation unit 202A derives, based on the graph G18 showing the relationship between the thickness of the object F and the X-ray transmittance, that the thickness corresponding to the X-ray transmittance of 0.1 is 30 mm. Further, the calculation unit 202A derives, based on the graph G19 showing the relationship between the thickness of the object F and the average energy of the transmitted X-rays, that the average energy corresponding to the pixel value 500 is 27 keV. Finally, the calculation unit 202A repeats this derivation of the average energy for each pixel value and derives the graph G20 showing the relationship between the pixel value of the X-ray image and the average energy.
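- The following Python sketch illustrates how such a per-pixel average energy could be looked up from a pixel value by inverting the thickness-transmittance relationship (graph G18) and then reading the thickness-average-energy relationship (graph G19). The function name, the use of linear interpolation, and the sample numbers in the tables are illustrative assumptions, not part of the specification; only the flow (pixel value, then transmittance, then thickness, then average energy) follows the text above.

```python
import numpy as np

def average_energy_for_pixel(pixel_value, i0, thickness_vs_transmittance, thickness_vs_avg_energy):
    """Estimate the average energy of transmitted X-rays for one pixel value.

    pixel_value                : pixel value I with the object F present
    i0                         : pixel value I0 without the object (from the condition information)
    thickness_vs_transmittance : (thickness_mm, transmittance) samples corresponding to graph G18
    thickness_vs_avg_energy    : (thickness_mm, avg_energy_keV) samples corresponding to graph G19
    """
    transmittance = pixel_value / i0                        # I / I0
    t_mm, tr = thickness_vs_transmittance
    # Transmittance decreases with thickness, so graph G18 is inverted on a reversed axis.
    thickness = np.interp(transmittance, tr[::-1], t_mm[::-1])
    t2_mm, e_kev = thickness_vs_avg_energy
    # Graph G19: thickness -> average energy of the transmitted X-rays.
    return np.interp(thickness, t2_mm, e_kev)

# Values chosen to resemble the worked example: I = 500, I0 = 5,000 -> transmittance 0.1,
# thickness about 30 mm, average energy about 27 keV (the tables below are illustrative only).
g18 = (np.array([0.0, 10.0, 20.0, 30.0, 40.0]), np.array([1.0, 0.55, 0.25, 0.10, 0.04]))
g19 = (np.array([0.0, 10.0, 20.0, 30.0, 40.0]), np.array([22.0, 24.0, 25.5, 27.0, 28.0]))
print(average_energy_for_pixel(500, 5000, g18, g19))  # -> 27.0
```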
- the calculation unit 202A selects the graph G20 corresponding to the condition information acquired by the input unit 201 from the plurality of graphs G20 derived in advance by the above procedure. Based on the selected graph G20, the calculation unit 202A derives the average energy of transmitted X-rays corresponding to the pixel values of each pixel of the X-ray image acquired by the image acquisition unit 203.
- Note that the calculation unit 202A may derive the average energy of the X-rays with reference to the graphs G18 and G19, using the condition information acquired by the input unit 201 and the pixel value of each pixel of the X-ray image, instead of deriving the relationship between the pixel value and the average energy of the X-rays for each piece of condition information in advance.
- That is, the calculation unit 202A derives the pixel value I0 of the X-ray image when the object does not exist based on the condition information, and then calculates the transmittance for each pixel value I of each pixel of the X-ray image acquired by the image acquisition unit 203 by obtaining its ratio to the pixel value I0. The calculation unit 202A then derives the thickness from the calculated transmittance based on the graph G18 showing the relationship between thickness and X-ray transmittance, and derives the average energy for each pixel value of each pixel of the X-ray image based on the graph G19 showing the relationship between thickness and average energy and on the derived thickness.
- The noise map generation unit 204A generates a noise standard deviation map from the X-ray image acquired by the image acquisition unit 203 and the average energy of the X-rays corresponding to each pixel of the X-ray image derived by the calculation unit 202A (step S104A). Specifically, the noise map generation unit 204A derives, for each pixel, the standard deviation of the noise value in consideration of the thickness of the object by substituting the pixel value of each pixel of the X-ray image acquired by the image acquisition unit 203 and the average energy derived for that pixel by the calculation unit 202A into the relational expression (4). The noise map generation unit 204A then generates, as the noise standard deviation map, the standard deviations of the noise values corresponding to the pixels of the X-ray image.
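- As a rough illustration of step S104A, the sketch below builds a per-pixel noise standard deviation map from the X-ray image and the per-pixel average energy. The relational expression (4) itself is not reproduced in this section, so the noise model used here (signal-dependent shot noise scaled by an energy-dependent conversion factor plus readout noise) is only a stand-in assumption; the function and parameter names are likewise hypothetical.

```python
import numpy as np

def noise_std_map(image, avg_energy_map, conversion_factor, readout_noise):
    """Per-pixel noise standard deviation map (sketch only).

    The actual relational expression (4) of the specification is not reproduced here;
    this placeholder combines signal-dependent shot noise with readout noise.
    conversion_factor(avg_energy) is a hypothetical callable giving counts per detected photon.
    """
    cf = conversion_factor(avg_energy_map)           # counts per photon, depends on average energy
    shot_variance = cf * np.clip(image, 0, None)     # shot-noise variance grows with the pixel value
    return np.sqrt(shot_variance + readout_noise ** 2)

# Usage sketch: the resulting map has the same shape as the X-ray image and is fed,
# together with the image, to the trained model 207.
image = np.random.poisson(800.0, size=(4, 4)).astype(float)
avg_energy = np.full_like(image, 27.0)               # keV, e.g. from the per-pixel derivation above
sigma_map = noise_std_map(image, avg_energy, conversion_factor=lambda e: e / 10.0, readout_noise=5.0)
```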
- FIG. 18 is a graph showing an example of the relationship between the pixel value and the standard deviation of the noise value.
- This graph shows the relationship between the pixel value of the X-ray image and the standard deviation of the noise value derived from that pixel value by the calculation unit 202A and the noise map generation unit 204A according to this modification.
- In the present modification, since the standard deviation of the noise value is derived in consideration of the thickness of the object, a larger pixel value corresponds to a smaller object thickness and hence a lower average energy at that pixel. Therefore, as estimated from the relational expression (4), the change in the standard deviation of the noise value as the pixel value increases differs between the first embodiment and the present modification. Specifically, in the graph G22 of the present modification, the standard deviation of the noise value increases more gently with increasing pixel value than in the graph G21 of the first embodiment.
- In the control device 20A of the present modification, the average energy is calculated from the pixel value of each pixel of the X-ray image. If the average energy were not derived on a per-pixel basis, it would differ greatly from object to object, and noise could not be sufficiently removed from the X-ray image. In the present modification, since the average energy of the X-rays transmitted through the object F is calculated for each pixel value of each pixel of the X-ray image, noise removal corresponding to the relationship between the pixel value of each pixel and the noise, taking differences in thickness and material into consideration, can be realized. As a result, noise in the X-ray image can be effectively removed.
- the control device 20A derives the average energy from the pixel values of the X-ray image using the graph G20 derived for each of various condition information.
- the average energy may be derived from the pixel value while ignoring the difference in the material of the object F.
- FIG. 19 is a graph showing the relationship between the pixel value of the X-ray image derived by the calculation unit 202A and the standard deviation of the noise value.
- the relationship is derived by taking into consideration the change in the material of the object F as condition information.
- Here, the graph G23 shows an example of derivation when the material is PET (polyethylene terephthalate), the graph G24 shows an example when the material is aluminum, and the graph G25 shows an example when the material is copper.
- the control device 20A can derive the average energy from the pixel value of the X-ray image, ignoring the difference in the material of the object F as the condition information. Even in such a case, according to the control device 20A of the present modification, noise removal corresponding to the relationship between the pixel value and the standard deviation of noise can be realized. As a result, noise in the X-ray image can be removed more effectively.
- FIG. 20 is a block diagram showing a functional configuration of the control device 20B according to another modification of the first embodiment.
- Compared with the control device 20 of the first embodiment, the control device 20B differs in that the image acquisition unit 203B has a function of acquiring an X-ray image of a jig, and the noise map generation unit 204B has a function of deriving, from the X-ray image of the jig, a graph showing the relationship between the pixel value and the standard deviation of the noise value.
- FIG. 21 is a flowchart showing the procedure of observation processing by the image acquisition device 1 including the control device 20B of FIG. 20. As shown in FIG. 21, in the control device 20B according to the present modification, the processes shown in steps S201 and S202 are executed in place of the processes of steps S101, S102, and S104 of the control device 20 according to the first embodiment shown in FIG. 10.
- First, the image acquisition unit 203B acquires a radiation image of the jig obtained by irradiating the jig with radiation and imaging the radiation transmitted through the jig (step S201). Specifically, the image acquisition unit 203B acquires an X-ray image of the jig captured by irradiating the jig with X-rays using the image acquisition device 1. As the jig, for example, a flat plate-shaped member whose thickness and material are known is used. That is, the image acquisition unit 203B acquires the X-ray image of the jig captured by the image acquisition device 1 prior to the observation processing of the object F.
- Then, the image acquisition unit 203B acquires an X-ray image of the object F captured by using the image acquisition device 1. The acquisition timing of the X-ray images of the jig and the object F is not limited to the above, and the images may be acquired simultaneously or in the reverse order. Similarly to the image acquisition unit 203, the image acquisition unit 203B acquires an X-ray image obtained by irradiating the object F with X-rays and imaging the X-rays transmitted through the object F (step S103).
- Next, the jig is set in the image acquisition device 1 and imaged, and the noise map generation unit 204B derives, from the radiation image of the jig obtained as a result, relational data representing the relationship between the pixel value and an evaluation value that evaluates the spread of the noise value (step S202).
- the noise map generation unit 204B derives a noise standard deviation map representing the relationship between the pixel value and the standard deviation of the noise value from the X-ray image of the jig.
- FIG. 22 is a diagram showing an example of generating a noise standard deviation map by the noise map generation unit 204B.
- Specifically, the noise map generation unit 204B derives a relationship graph G27 showing the correspondence between the pixel value and the standard deviation of the noise value from the X-ray image G26 of the jig. Then, in the same manner as in the first embodiment, the noise map generation unit 204B derives the relational data G2 representing the correspondence between each pixel position and the pixel value from the X-ray image G1 acquired by the image acquisition unit 203B.
- Further, the noise map generation unit 204B derives the standard deviation of the noise value corresponding to the pixel at each pixel position in the X-ray image by applying the correspondence shown in the relationship graph G27 to each pixel in the relational data G2. As a result, the noise map generation unit 204B associates the derived noise standard deviation with each pixel position and derives the relational data G4 showing the correspondence between each pixel position and the noise standard deviation. Then, the noise map generation unit 204B generates the noise standard deviation map G5 based on the derived relational data G4.
- FIG. 23 shows an example of the structure of the jig used for imaging in this modified example.
- As the jig, for example, a member P1 whose thickness changes stepwise in one direction can be used.
- FIG. 24 shows an example of an X-ray image of the jig of FIG. 23.
- The noise map generation unit 204B derives, for each step of the jig in the X-ray image G26 of the jig, the pixel value that would be observed if there were no noise (hereinafter referred to as the true pixel value), and derives the standard deviation of the noise value based on that true pixel value.
- Specifically, the noise map generation unit 204B derives the average value of the pixel values within each step of the jig and sets the derived average value as the true pixel value in that step. The noise map generation unit 204B then derives the difference between each pixel value and the true pixel value as a noise value, and derives the standard deviation of the noise values for each true pixel value.
- the noise map generation unit 204B derives the relationship between the true pixel value and the standard deviation of the noise value as the relationship graph G27 between the pixel value and the standard deviation of the noise value. Specifically, the noise map generation unit 204B derives the true pixel value and the standard deviation of the noise value for each step of the jig.
- Then, the noise map generation unit 204B plots the relationship between the derived true pixel values and the standard deviations of the noise values on a graph and draws an approximate curve, thereby deriving the relationship graph G27 representing the relationship between the pixel value and the standard deviation of the noise value. For the approximate curve, exponential approximation, linear approximation, logarithmic approximation, polynomial approximation, power approximation, or the like is used.
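- The following sketch shows one possible implementation of this jig-based derivation: the mean pixel value within each step is taken as the true pixel value, the standard deviation of the deviations within the step is taken as the noise standard deviation, and an approximate curve is fitted. The polynomial fit is just one of the approximation options listed above, and the function and argument names are assumptions made for illustration.

```python
import numpy as np

def noise_relation_from_jig(jig_image, step_masks, poly_degree=2):
    """Derive the pixel-value vs. noise-standard-deviation relation (relationship graph G27).

    jig_image  : 2-D array, the X-ray image G26 of the step-shaped jig
    step_masks : list of boolean masks, one per thickness step of the jig
    Returns a polynomial (one possible approximate curve) mapping true pixel value -> noise std.
    """
    true_values, noise_stds = [], []
    for mask in step_masks:
        pixels = jig_image[mask]
        true_value = pixels.mean()        # average within the step = true pixel value
        noise = pixels - true_value       # deviation from the true pixel value = noise value
        true_values.append(true_value)
        noise_stds.append(noise.std())
    # Polynomial approximation; exponential, logarithmic or power fits could be used instead.
    coeffs = np.polyfit(true_values, noise_stds, deg=poly_degree)
    return np.poly1d(coeffs)

# The fitted curve is then applied to every pixel of the object image to build the
# noise standard deviation map G5, as described for the relationship graph G27 above.
```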
- In the control device 20B according to this modification, the relational data is generated based on a radiation image obtained by imaging an actual jig. As a result, relational data optimal for removing noise from the radiation image of the object F can be obtained, and noise in the radiographic image can be removed more effectively.
- Alternatively, the noise map generation unit 204B may derive the relationship between the pixel value and the standard deviation of the noise value, without using a jig, from images captured in the absence of an object while the tube current or the exposure time is changed. With such a configuration as well, the relational data is generated based on radiation images actually obtained by imaging and the noise map is generated from them, so noise removal corresponding to the relationship between the pixel value and the spread of the noise can be realized. As a result, noise in the radiographic image can be removed more effectively.
- That is, the image acquisition unit 203B may acquire a plurality of radiation images captured in the absence of an object (step S201), and the noise map generation unit 204B may derive the relationship between the pixel value and the standard deviation of the noise value from the radiation images acquired by the image acquisition unit 203B (step S202).
- Here, the plurality of radiographic images are a plurality of images captured while at least one of the conditions of the radiation source and the imaging conditions is varied.
- Specifically, prior to the observation processing of the object F, the image acquisition unit 203B acquires a plurality of X-ray images captured by the image acquisition device 1 in the absence of the object F while the tube current or the exposure time is changed.
- Then, the noise map generation unit 204B derives a true pixel value for each X-ray image and derives the standard deviation of the noise based on the true pixel value in the same manner as in this modification. Further, the noise map generation unit 204B plots the relationship between the true pixel values and the standard deviations of the noise on a graph and draws an approximate curve in the same manner as in this modification, thereby deriving a relationship graph showing the relationship between the pixel value and the standard deviation of the noise value. Finally, the noise map generation unit 204B generates a noise standard deviation map from the X-ray image acquired by the image acquisition unit 203B based on the derived relationship graph, in the same manner as in the first embodiment.
- FIG. 25 is a block diagram showing a functional configuration of the control device 20C according to the second embodiment.
- the control device 20C includes an input unit 201C, a calculation unit 202C, a narrowing unit 203C, a selection unit 204C, and a processing unit 205C.
- In the control device 20C, a plurality of trained models 206C for executing noise removal processing on the X-ray transmission image are stored in advance.
- Each of the plurality of trained models 206C is a learning model by machine learning constructed in advance using image data as teacher data.
- Machine learning includes supervised learning, deep learning, reinforcement learning, neural network learning, and the like.
- As the deep learning algorithm, the two-dimensional convolutional neural network described in the paper "Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising" by Kai Zhang et al. is adopted.
- the plurality of trained models 206C may be generated by an external computer or the like and downloaded to the control device 20C, or may be generated in the control device 20C.
- FIG. 26 shows an example of image data which is teacher data used for constructing the trained model 206C.
- As the teacher data, X-ray transmission images in which patterns of various thicknesses, various materials, and various resolutions are captured can be used.
- the example shown in FIG. 26 is an example of an X-ray transmission image generated for chicken.
- an X-ray transmission image actually generated by using the image acquisition device 1 for a plurality of types of objects may be used, or image data generated by simulation calculation may be used.
- the X-ray transmission image may be acquired by using an apparatus different from the image acquisition apparatus 1. Further, the X-ray transmission image and the image data generated by the simulation calculation may be used in combination.
- Each of the plurality of trained models 206C is constructed in advance using image data that is obtained for transmitted X-rays having mutually different average energies and that has a known noise distribution. The average energy of the X-rays in the image data is set in advance to a different value for each model by setting the operating conditions of the X-ray irradiator (radiation source) 50 of the image acquisition device 1, the imaging conditions of the image acquisition device 1, and the like, or by setting the operating conditions of the X-ray irradiator 50 or the imaging conditions at the time of simulation calculation (the method of setting the average energy according to the operating conditions or the imaging conditions will be described later).
- The plurality of trained models 206C are constructed by machine learning (construction step) using, as training data, training images that are X-ray images corresponding to the average energy of the X-rays transmitted through the object F, calculated based on condition information indicating the operating conditions of the X-ray irradiator (radiation source) 50 when capturing the X-ray transmission image of the object F, the imaging conditions of the X-ray detection camera 10, and the like. The plurality of trained models 206C are each constructed using a plurality of frames (for example, 20,000 frames) of such training images.
- the image data which is the teacher data used for constructing the trained model 206C, is generated by the same creation procedure as the creation procedure in the first embodiment described above.
- The input unit 201C accepts, from the user of the image acquisition device 1, input of condition information indicating the operating conditions of the X-ray irradiator (radiation source) 50 when capturing the X-ray transmission image of the object F, the imaging conditions of the X-ray detection camera 10, and the like.
- the operating conditions include all or part of the tube voltage, target angle, target material, and the like.
- The condition information indicating the imaging conditions includes information such as the material and thickness of the filters 51 and 19 (the filter included in the camera used for imaging the object and the filter provided in the source) arranged between the X-ray irradiator 50 and the X-ray detection camera 10.
- the input unit 201C may accept the input of the condition information as a direct input of information such as a numerical value, or may accept the input of the information such as a numerical value set in the internal memory in advance as a selective input.
- the input unit 201C accepts the input of the above-mentioned condition information from the user, but may acquire some condition information (tube voltage, etc.) according to the detection result of the control state by the control device 20C.
- Based on the condition information, the calculation unit 202C calculates the value of the average energy of the X-rays (radiation) that are transmitted through the object F using the image acquisition device 1 and detected by the X-ray detection camera 10.
- For example, the calculation unit 202C calculates the spectrum of the X-rays detected by the X-ray detection camera 10 based on information such as the tube voltage, the target angle, the target material, the material and thickness of the filter and its presence or absence, the type of the window material and its presence or absence, and the material and thickness of the scintillator 11 of the X-ray detection camera 10, using, for example, a known approximation formula such as Tucker's. The calculation unit 202C further calculates the integrated spectral intensity and the integrated photon number from the X-ray spectrum, and obtains the value of the average X-ray energy by dividing the integrated spectral intensity by the integrated photon number.
- For example, a calculation method using the known Tucker approximation formula will be described, using the following quantities:
- Em: kinetic energy of the electrons at the time of collision with the target
- T: kinetic energy of the electrons in the target
- A: proportionality constant determined by the atomic number of the target substance
- ρ: target density
- μ(E): linear attenuation coefficient of the target material
- B: slowly varying function of Z and T
- C: Thomson-Whiddington constant
- θ: target angle
- c: speed of light in vacuum
- With these quantities, equation (1) can be determined.
- the calculation unit 202C can calculate the irradiation X-ray spectrum by calculating the above equation (1) based on them.
- the calculation unit 202C can calculate the X-ray energy spectrum that passes through the filter and the object F and is absorbed by the scintillator by using the X-ray attenuation formula of the above formula (2).
- the X-ray photon number spectrum can be obtained by dividing this X-ray energy spectrum by the energy of each X-ray.
- the calculation unit 202C calculates the average energy of X-rays using the above equation (3) by dividing the integrated value of the energy intensity by the integrated value of the number of photons. Through the above calculation process, the calculation unit 202C calculates the average energy of X-rays.
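- The last step above (average energy = integrated energy intensity divided by integrated photon number) can be written compactly as in the sketch below. The spectrum used in the example is synthetic; in the procedure described above it would come from the Tucker (or Kramers/Birch) approximation followed by attenuation through the filter and the object F and absorption in the scintillator.

```python
import numpy as np

def average_energy_from_spectrum(energies_kev, energy_intensity):
    """Average X-ray energy from an energy-intensity spectrum (in the spirit of equation (3))."""
    photon_counts = energy_intensity / energies_kev             # photon-number spectrum
    intensity_integral = np.trapz(energy_intensity, energies_kev)
    photon_integral = np.trapz(photon_counts, energies_kev)
    return intensity_integral / photon_integral                  # average energy in keV

# Illustrative spectrum only (a smooth bump centred near 45 keV).
e = np.linspace(10.0, 80.0, 200)
intensity = np.exp(-((e - 45.0) / 15.0) ** 2)
print(average_energy_from_spectrum(e, intensity))
```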
- For the calculation of the X-ray spectrum, a known approximation formula by Kramers or Birch et al. may also be used.
- The narrowing unit 203C narrows down the candidates for the trained model from the plurality of trained models 206C constructed in advance, based on the value of the average energy calculated by the calculation unit 202C. That is, the narrowing unit 203C compares the calculated average energy value with the average X-ray energy values of the image data used for constructing the plurality of trained models 206C, and narrows down, as candidates, the trained models 206C constructed from image data whose average energy values are close to the calculated value. More specifically, when the average energy value calculated by the calculation unit 202C is 53 keV, the narrowing unit 203C takes, as candidates for the trained model, the trained models 206C constructed from image data with average energy values of 40 keV, 50 keV, and 60 keV, whose differences from the calculated value are less than a predetermined threshold (for example, 15 keV).
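- The narrowing step can be expressed as a simple filter over the construction-time average energies of the stored models, as in the sketch below; the 15 keV threshold and the resulting candidates follow the example in the text, while the function name is an assumption.

```python
def narrow_candidates(calculated_energy_kev, model_energies_kev, threshold_kev=15.0):
    """Keep the trained models whose construction-time average energy is within the threshold."""
    return [e for e in model_energies_kev if abs(e - calculated_energy_kev) < threshold_kev]

# With a calculated value of 53 keV, models built at 40, 50 and 60 keV remain as candidates.
print(narrow_candidates(53.0, [10, 20, 30, 40, 50, 60, 70, 80]))  # -> [40, 50, 60]
```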
- The selection unit 204C finally selects, from the candidates narrowed down by the narrowing unit 203C, the trained model 206C to be used for the noise removal processing of the X-ray transmission image of the object F. Specifically, the selection unit 204C acquires an X-ray transmission image captured by irradiating the jig with X-rays in the image acquisition device 1, and finally selects the trained model 206C to be used based on the image characteristics of that X-ray transmission image. At this time, the selection unit 204C analyzes the energy characteristic, the noise characteristic, the resolution characteristic, and the like as the image characteristics of the X-ray transmission image, and selects the trained model 206C based on the analysis result.
- For example, the selection unit 204C acquires an X-ray transmission image of a flat plate member used as a jig, whose thickness and material are known and whose relationship between the average energy of X-rays and the X-ray transmittance is known, and compares the brightness of the X-ray image transmitted through the jig with the brightness of the X-ray image transmitted through air to calculate the X-ray transmittance at one point (or the average of a plurality of points) in the jig. For example, when the brightness of the X-ray image transmitted through the jig is 5,550 and the brightness of the X-ray image transmitted through the air is 15,000, the transmittance is calculated to be 37%. In this case, the selection unit 204C specifies the average energy of the transmitted X-rays estimated from the transmittance of 37% (for example, 50 keV) as the energy characteristic of the X-ray transmission image of the jig.
- Then, the selection unit 204C selects the one trained model 206C constructed from the image data whose average energy is closest to the specified average energy value.
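- A minimal sketch of this final selection is shown below: the transmittance measured on the jig is converted to an estimated average energy, and the candidate whose construction energy is closest is chosen. The transmittance-to-energy conversion passed in here is a made-up placeholder; in practice it would come from the known relationship of the flat plate jig.

```python
def select_by_transmittance(jig_brightness, air_brightness, transmittance_to_energy, candidates_kev):
    """Pick the candidate model closest to the average energy estimated from the jig transmittance."""
    transmittance = jig_brightness / air_brightness              # e.g. 5550 / 15000 = 0.37
    estimated_energy = transmittance_to_energy(transmittance)    # e.g. about 50 keV for this jig
    return min(candidates_kev, key=lambda e: abs(e - estimated_energy))

# Illustrative conversion only; with it the 37% transmittance maps close to 50 keV.
print(select_by_transmittance(5550, 15000, lambda t: 20.0 + 80.0 * t, [40, 50, 60]))  # -> 50
```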
- the selection unit 204C may analyze the characteristics at a plurality of points of the jig whose thickness or material changes as the energy characteristics of the X-ray transmission image of the jig.
- FIG. 27 is a diagram showing an example of an X-ray transmission image to be analyzed by the selection unit 204C.
- FIG. 27 is an X-ray transmission image for a jig having a shape in which the thickness changes in steps.
- The selection unit 204C selects a plurality of measurement regions (ROI: Region Of Interest) having different thicknesses from such an X-ray transmission image, analyzes the average brightness value for each of the plurality of measurement regions, and acquires a thickness-luminance characteristic graph as the energy characteristic.
- FIG. 28 shows an example of the thickness-luminance characteristic graph acquired by the selection unit 204C.
- Then, the selection unit 204C similarly acquires a thickness-luminance characteristic graph for the image data used for constructing the trained models 206C narrowed down by the narrowing unit 203C, and selects, as the final trained model 206C, the trained model 206C constructed from the image data whose characteristics are closest to the characteristic graph acquired for the jig.
- the image characteristics of the image data used for constructing the trained model 206C may refer to those calculated in advance outside the control device 20C.
- Further, the selection unit 204C can analyze the brightness value and the noise for each of a plurality of measurement regions as the noise characteristic of the X-ray transmission image of the jig, and acquire a luminance-SNR characteristic graph as the noise characteristic. That is, the selection unit 204C selects a plurality of measurement region ROIs having different thicknesses or materials from the X-ray transmission image, analyzes the standard deviation of the brightness values and the average brightness value for each of the plurality of measurement region ROIs, and acquires a luminance-SNR (SN ratio) characteristic graph as the noise characteristic.
- FIG. 29 shows an example of the characteristic graph of luminance-SNR acquired by the selection unit 204C. Then, the selection unit 204C selects the trained model 206C constructed by the image data having the noise characteristic closest to the acquired characteristic graph as the final trained model 206C.
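- The luminance-SNR characteristic described above can be computed per measurement region as in the following sketch, where each ROI contributes one point (mean luminance, mean divided by standard deviation) of the characteristic graph; the representation of an ROI as row/column slices is an assumption made for illustration.

```python
import numpy as np

def luminance_snr_characteristic(image, rois):
    """Luminance-SNR characteristic of an X-ray transmission image.

    rois: list of (row_slice, col_slice) measurement regions with different thickness or material.
    Returns arrays of mean luminance and SNR (= mean / standard deviation) per ROI.
    """
    luminance, snr = [], []
    for rows, cols in rois:
        roi = image[rows, cols]
        mean, std = roi.mean(), roi.std()
        luminance.append(mean)
        snr.append(mean / std if std > 0 else float("inf"))
    return np.array(luminance), np.array(snr)
```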
- Note that the selection unit 204C may acquire, instead of the above-mentioned luminance-SNR characteristic graph, a characteristic graph whose vertical axis is the noise calculated from the standard deviation of the luminance values. In that case, the dominant noise factor (shot noise, readout noise, or the like) can be identified from the luminance-noise characteristic, and the trained model 206C can be selected based on the identification result.
- FIG. 30 is a diagram for explaining the selection function of the trained model based on the image characteristics by the selection unit 204C.
- In FIG. 30, part (a) shows the luminance-SNR characteristic graphs G1, G2, and G3 of the image data used for constructing the plurality of trained models 206C, and part (b) shows, in addition to these characteristic graphs G1, G2, and G3, the luminance-SNR characteristic graph GT of the X-ray transmission image of the captured jig. In this case, the selection unit 204C functions to select the trained model 206C constructed from the image data whose characteristic graph G2 is closest in characteristics to the characteristic graph GT. Specifically, the selection unit 204C calculates the SNR error between the characteristic graph GT and each of the characteristic graphs G1, G2, and G3 at luminance values taken at predetermined intervals, calculates the root mean square error (RMSE) of those errors, and selects the trained model 206C corresponding to the characteristic graph G1, G2, or G3 having the smallest root mean square error. Further, the selection unit 204C can also select the trained model 206C in the same manner when selecting using the energy characteristic.
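- The RMSE-based comparison can be sketched as below: the SNR of each candidate's characteristic graph is sampled on the same luminance grid as the jig graph GT, and the model with the smallest root mean square error is selected. The representation of the graphs as callables is an assumption made for illustration.

```python
import numpy as np

def select_by_rmse(gt_curve, model_curves, luminance_grid):
    """Select the trained model whose luminance-SNR curve is closest (smallest RMSE) to the jig curve GT.

    gt_curve       : callable SNR(luminance) measured from the jig image
    model_curves   : dict {model_id: callable SNR(luminance)} for the graphs G1, G2, G3, ...
    luminance_grid : luminance values sampled at predetermined intervals
    """
    gt = np.array([gt_curve(l) for l in luminance_grid])

    def rmse(curve):
        g = np.array([curve(l) for l in luminance_grid])
        return np.sqrt(np.mean((g - gt) ** 2))

    return min(model_curves, key=lambda model_id: rmse(model_curves[model_id]))
```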
- Further, the selection unit 204C can also select the trained model 206C based on the characteristics of the images obtained by applying the plurality of trained models to the X-ray transmission image of the jig and executing the noise removal processing.
- For example, the selection unit 204C applies the plurality of trained models 206C to X-ray transmission images obtained by imaging jigs having charts with various resolutions, and evaluates the resolution of the resulting noise-removed images. Then, the selection unit 204C selects the trained model 206C used for the image having the smallest change in resolution before and after the noise removal processing.
- FIG. 31 shows an example of an X-ray transmission image used for evaluating the resolution. In this X-ray transmission image, a chart whose resolution changes stepwise along one direction is imaged. The resolution of the X-ray transmission image can be measured by using the MTF (Modulation Transfer Function) or the CTF (Contrast Transfer Function).
- the selection unit 204C evaluates the characteristics of the brightness-noise ratio of the image after noise removal, and selects the trained model 206C used to generate the image having the highest characteristics.
- FIG. 32 shows an example of the structure of the jig used for evaluating the brightness-noise ratio.
- As the jig, for example, a jig in which foreign substances P2 of various materials and various sizes are scattered in a member P1 whose thickness changes stepwise in one direction can be used.
- FIG. 33 shows an X-ray transmission image of the jig of FIG. 32 after noise removal processing.
- The selection unit 204C selects, in the X-ray transmission image, an image region R1 containing the image of a foreign matter P2 and an image region R2 not containing the image of a foreign matter P2 in the vicinity of the region R1, and calculates the minimum brightness value L_MIN in the image region R1, the average brightness value L_AVE in the image region R2, and the standard deviation L_SD of the brightness in the image region R2. Then, the selection unit 204C calculates the brightness-noise ratio CNR = (L_AVE - L_MIN) / L_SD for each of the X-ray transmission images after application of the plurality of trained models 206C, and selects the trained model 206C used to generate the X-ray transmission image having the highest brightness-noise ratio CNR. Alternatively, the selection unit 204C may calculate the CNR based on the average brightness value L_AVE_R1 in the image region R1, the average brightness value L_AVE_R2 in the image region R2, and the standard deviation L_SD of the brightness in the image region R2, using the following formula:
- CNR = (L_AVE_R2 - L_AVE_R1) / L_SD
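- The CNR evaluation can be sketched as follows, using the first of the two formulas above (minimum luminance in R1, mean and standard deviation in R2); the trained model whose noise-removed image gives the largest CNR would then be selected. Region handling via NumPy index tuples is an illustrative assumption.

```python
import numpy as np

def contrast_noise_ratio(image, roi_r1, roi_r2):
    """CNR of a noise-removed jig image: (L_AVE in R2 - L_MIN in R1) / L_SD in R2.

    roi_r1: region containing the foreign matter P2; roi_r2: nearby region without it.
    Regions may be boolean masks or index tuples usable for NumPy indexing.
    """
    r1 = image[roi_r1]
    r2 = image[roi_r2]
    l_min_r1 = r1.min()          # minimum luminance in R1
    l_ave_r2 = r2.mean()         # average luminance in R2
    l_sd_r2 = r2.std()           # standard deviation of the luminance in R2
    return (l_ave_r2 - l_min_r1) / l_sd_r2

# The trained model 206C whose output image yields the highest CNR is selected.
```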
- the processing unit 205C applies the trained model 206C selected by the selection unit 204C to the X-ray transmission image acquired for the object F, and executes image processing for removing noise to output an output image. Generate. Then, the processing unit 205C outputs the generated output image to the display device 30 or the like.
- FIG. 34 is a flowchart showing the procedure of the observation process by the image acquisition device 1.
- First, the control device 20C receives input of condition information indicating the operating conditions of the X-ray irradiator 50, the imaging conditions of the X-ray detection camera 10, and the like from the operator (user) of the image acquisition device 1 (step S1). Next, the control device 20C calculates the value of the average energy of the X-rays detected by the X-ray detection camera 10 based on the condition information (step S2).
- Next, the control device 20C specifies the value of the average X-ray energy of the image data used for constructing each trained model 206C stored in the control device 20C (step S3). The specification of the average energy value is repeated for all the trained models 206C stored in the control device 20C (step S4).
- Then, the control device 20C narrows down a plurality of candidates for the trained model 206C by comparing the calculated average energy value of the X-rays with the specified values (step S5). Further, when the jig is set in the image acquisition device 1 and the jig is imaged, an X-ray transmission image of the jig is acquired (step S6).
- Next, the control device 20C acquires the image characteristics of the X-ray transmission image of the jig (the average X-ray energy value, the thickness-luminance characteristic, the brightness-noise-ratio characteristic, the brightness-noise characteristic, the resolution change characteristic, and the like) (step S7). Then, the control device 20C selects the final trained model 206C based on the acquired image characteristics (step S8).
- Next, when the object F is set in the image acquisition device 1 and the object F is imaged, an X-ray transmission image of the object F is acquired (step S9).
- Then, the control device 20C applies the finally selected trained model 206C to the X-ray transmission image of the object F, thereby executing the noise removal processing on the X-ray transmission image (step S10).
- the control device 20C outputs an output image, which is an X-ray transmission image that has been subjected to noise removal processing, to the display device 30 (step S11).
- According to the image acquisition device 1 described above, the noise component can be removed while the signal component in the X-ray transmission image is enhanced, and the S/N ratio of the X-ray transmission image can be effectively improved. Further, the average energy of the X-rays transmitted through the object F is calculated based on the operating conditions of the X-ray source or the imaging conditions used when acquiring the X-ray transmission image of the object F. Then, based on the average energy, candidates for the trained model 206C used for noise removal are narrowed down from the trained models 206C constructed in advance. Since the trained model 206C corresponding to the average energy of the X-rays to be imaged is used for noise removal, noise removal corresponding to the relationship between the brightness and the noise in the X-ray transmission image can be realized. As a result, noise in the X-ray transmission image can be effectively removed, and, for example, the foreign matter detection performance can be improved.
- In general, the mode of noise in an X-ray transmission image changes depending on differences in the tube voltage, the filter, the scintillator, the X-ray detection camera conditions (gain setting value, circuit noise value, saturated charge amount, conversion coefficient value (e-/count), camera line rate), the object, and the like. Therefore, when trying to realize noise removal by machine learning, it is necessary to prepare a plurality of learning models trained under various conditions.
- In addition, the X-ray transmission image contains noise derived from X-ray generation. Increasing the X-ray dose is conceivable in order to improve the signal-to-noise ratio of the X-ray transmission image, but in that case the exposure dose of the sensor increases, which shortens the life of the sensor and the life of the source, so it is difficult to achieve both an improvement in the SN ratio and a long life. In the present embodiment, since there is no need to increase the X-ray dose, both an improvement in the SN ratio and a long life can be achieved.
- The control device 20C of the present embodiment has a function of executing image processing for removing noise from the X-ray transmission image of the object F using the selected trained model 206C. With such a function, noise removal corresponding to the relationship between the brightness and the noise in the X-ray transmission image can be realized, and the noise in the X-ray transmission image can be effectively removed.
- Further, the control device 20C of the present embodiment has a function of narrowing down the candidates for the trained model by comparing the value of the average energy of the X-rays calculated from the condition information with the average energy values specified from the image data used for constructing the trained models 206C. With such a function, noise removal corresponding to the relationship between the brightness and the noise in the X-ray transmission image can be reliably realized.
- In addition, the control device 20C of the present embodiment has a function of selecting the trained model 206C from the candidates based on the image characteristics of the X-ray transmission image of the jig. With such a function, the trained model 206C most suitable for removing noise from the X-ray transmission image of the object F can be selected. As a result, noise removal corresponding to the relationship between the brightness and the noise in the X-ray transmission image can be realized more reliably.
- Although the control device 20C of the second embodiment selects the candidates for the trained model 206C based on the value of the average energy of the X-rays calculated from the condition information, it may also have a function that accounts for performance deterioration of the X-ray detection camera 10 and for output fluctuation or performance deterioration of the X-ray irradiator 50.
- FIG. 35 is a block diagram showing a functional configuration of the control device 20D according to the modified example of the second embodiment.
- The control device 20D differs from the control device 20C according to the second embodiment in that it includes a measurement unit 207C and in the functions of the calculation unit 202D and the narrowing unit 203D.
- In the second embodiment, the trained models 206C are narrowed down on the premise that the relationship between the brightness and the noise in the X-ray transmission image can be estimated from the average energy of the X-rays. In the present modification, on the other hand, the X-ray conversion coefficient is calculated in consideration of the performance deterioration of the X-ray detection camera 10 and the output fluctuation or performance deterioration of the X-ray irradiator 50, and is used for narrowing down the trained models.
- the X-ray conversion coefficient is a parameter indicating the efficiency of X-rays being converted into visible light by a scintillator and then converted into electrons (electrical signals) by a camera sensor.
- The X-ray conversion coefficient F_T can be calculated by the following formula, where E [keV] is the average energy of the X-rays, EM [photon/keV] is the emission amount of the scintillator, C is the coupling efficiency in the sensor, and QE is the quantum efficiency of the sensor:
- F_T = E × EM × C × QE
- The measurement unit 207C of the control device 20D has a function of measuring the decrease in the emission amount EM as performance deterioration of the scintillator 11, the decrease in the quantum efficiency QE of the sensor as performance deterioration of the scan camera 12, and the amount of change in the average energy E as output fluctuation and performance deterioration of the X-ray irradiator 50. For example, the measurement unit 207C measures the amount of decrease in the emission amount between the scintillator 11 in a state without performance deterioration (when new) and the current scintillator 11, and estimates the current emission amount EM from that amount of decrease. Similarly, the measurement unit 207C measures the amount of decrease in brightness between the scan camera 12 in a state without performance deterioration (when new) and the current scan camera 12, and estimates the current quantum efficiency QE from that amount of decrease. Further, the measurement unit 207C estimates the current average energy E from the amount of change in the average energy between the X-ray irradiator 50 in a state without performance deterioration (when new) and the current X-ray irradiator 50.
- The average energy E may be obtained from imaging data of a jig whose thickness and material are known, for example a flat plate member whose relationship between the average energy of X-rays and the X-ray transmittance is known, or from imaging data at a plurality of points of such a jig.
- The calculation unit 202D of the control device 20D calculates the X-ray conversion coefficient F_T using the calculated average energy E of the X-rays and the emission amount EM and quantum efficiency QE estimated by the measurement unit 207C. The narrowing unit 203D of the control device 20D has a function of narrowing down the candidates for the trained model 206C by comparing the calculated X-ray conversion coefficient F_T with the X-ray conversion coefficients F_T of the image data used for constructing the trained models 206C.
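- A minimal sketch of this modification is given below: the X-ray conversion coefficient is recomputed from the degraded quantities estimated by the measurement unit 207C and compared with the construction-time coefficients of the stored models. The closeness threshold is an assumption; the text only states that the coefficients are compared.

```python
def xray_conversion_coefficient(avg_energy_kev, scintillator_emission, coupling_efficiency, quantum_efficiency):
    """F_T = E * EM * C * QE, using the current (possibly degraded) E, EM and QE."""
    return avg_energy_kev * scintillator_emission * coupling_efficiency * quantum_efficiency

def narrow_by_conversion_coefficient(current_ft, model_fts, threshold):
    """Keep the trained models whose construction-time conversion coefficient is close to the current F_T."""
    return [model_id for model_id, ft in model_fts.items() if abs(ft - current_ft) < threshold]
```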
- FIG. 36 is a flowchart showing the procedure of observation processing by the image acquisition device 1 according to another modification. As shown in FIG. 36, the processing of steps S6 to S8 in FIG. 34 may be omitted, and the noise removal processing may be executed using the trained model narrowed down based on the average energy.
- FIG. 37 is a block diagram showing a functional configuration of the control device 20E according to the third embodiment.
- the control device 20E includes an acquisition unit 201E, a specific unit 202E, a selection unit 204E, and a processing unit 205E.
- In the control device 20E, a plurality of trained models 206E for executing noise removal processing on the X-ray transmission image are stored in advance.
- Each of the plurality of trained models 206E is a learning model by machine learning constructed in advance using image data as teacher data.
- Machine learning includes supervised learning, deep learning, reinforcement learning, neural network learning, and the like.
- As the deep learning algorithm, the two-dimensional convolutional neural network described in the paper "Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising" by Kai Zhang et al. is adopted.
- the plurality of trained models 206E may be generated by an external computer or the like and downloaded to the control device 20E, or may be generated in the control device 20E.
- FIG. 38 shows an example of image data which is teacher data used for constructing the trained model 206E.
- As the teacher data, X-ray transmission images in which patterns of various thicknesses, various materials, and various resolutions are captured can be used.
- the example shown in FIG. 38 is an example of an X-ray transmission image generated for chicken.
- an X-ray transmission image actually generated by using the image acquisition device 1 for a plurality of types of objects may be used, or image data generated by simulation calculation may be used.
- the X-ray transmission image may be acquired by using an apparatus different from the image acquisition apparatus 1. Further, the X-ray transmission image and the image data generated by the simulation calculation may be used in combination.
- Each of the plurality of trained models 206E is constructed in advance using image data that is obtained for transmitted X-rays having mutually different average energies and that has a known noise distribution. The average energy of the X-rays in the image data is set in advance to a different value for each model by setting the operating conditions of the X-ray irradiator (radiation source) 50 of the image acquisition device 1, the imaging conditions of the image acquisition device 1, and the like, or by setting the operating conditions of the X-ray irradiator 50 or the imaging conditions at the time of simulation calculation.
- The plurality of trained models 206E are constructed by machine learning (construction step) using, as training data, training images that are X-ray images corresponding to the average energy of the X-rays transmitted through the object F, calculated based on condition information indicating the operating conditions of the X-ray irradiator (radiation source) 50 when capturing the X-ray transmission image of the object F, the imaging conditions of the X-ray detection camera 10, and the like. The plurality of trained models 206E are each constructed using a plurality of frames (for example, 20,000 frames) of such training images.
- the image data which is the teacher data used for constructing the trained model 206E, is generated by the same creation procedure as the creation procedure in the first embodiment described above.
- the acquisition unit 201E acquires an X-ray transmission image captured by irradiating the jig and the object F with X-rays using the image acquisition device 1.
- As the jig, a flat plate member whose thickness and material are known and whose relationship between the average energy of X-rays and the X-ray transmittance is known, or a jig having charts with various resolutions, is used. That is, the acquisition unit 201E acquires an X-ray transmission image of the jig captured by the image acquisition device 1 prior to the observation processing of the object F. Then, the acquisition unit 201E acquires the X-ray transmission image of the object F captured by the image acquisition device 1 at a timing after the trained model 206E has been selected based on the X-ray transmission image of the jig.
- the acquisition timing of the X-ray transmission image of the jig and the object F is not limited to the above, and may be simultaneous or vice versa.
- The specific unit 202E specifies the image characteristics of the X-ray transmission image of the jig acquired by the acquisition unit 201E. Specifically, the specific unit 202E specifies the energy characteristic, the noise characteristic, the resolution characteristic, the frequency characteristic, and the like as the image characteristics of the X-ray transmission image.
- For example, the specific unit 202E calculates the X-ray transmittance at one point (or the average of a plurality of points) in the jig by comparing the brightness of the X-ray image transmitted through the jig with the brightness of the X-ray image transmitted through air. For example, when the brightness of the X-ray image transmitted through the jig is 5,550 and the brightness of the X-ray image transmitted through the air is 15,000, the transmittance is calculated to be 37%.
- In this case, the specific unit 202E specifies the average energy of the transmitted X-rays estimated from the transmittance of 37% (for example, 50 keV) as the energy characteristic of the X-ray transmission image of the jig.
- the specific unit 202E may analyze the characteristics at a plurality of points of the jig whose thickness or material changes as the energy characteristics of the X-ray transmission image of the jig.
- FIG. 39 is a diagram showing an example of an X-ray transmission image to be analyzed by the specific unit 202E.
- FIG. 39 is an X-ray transmission image for a jig having a shape in which the thickness changes in steps.
- The specific unit 202E selects a plurality of measurement regions (ROI: Region Of Interest) having different thicknesses from such an X-ray transmission image, analyzes the average brightness value for each of the plurality of measurement regions, and acquires a thickness-luminance characteristic graph as the energy characteristic.
- FIG. 40 shows an example of the thickness-luminance characteristic graph acquired by the specific unit 202E.
- Further, the specific unit 202E can analyze the brightness value and the noise for each of a plurality of measurement regions as the noise characteristic of the X-ray transmission image of the jig, and acquire a luminance-SNR characteristic graph as the noise characteristic. That is, the specific unit 202E selects a plurality of measurement region ROIs having different thicknesses or materials from the X-ray transmission image, analyzes the standard deviation of the brightness values and the average brightness value for each of the plurality of measurement region ROIs, and acquires a luminance-SNR (SN ratio) characteristic graph as the noise characteristic.
- FIG. 41 shows an example of the characteristic graph of luminance-SNR acquired by the specific unit 202E.
- the specific unit 202E may acquire a characteristic graph in which the vertical axis is noise calculated from the standard deviation of the luminance value instead of the above-mentioned characteristic graph of luminance-SNR.
- the specific unit 202E can also acquire the distribution of the resolution in the X-ray transmission image of the jig as a resolution characteristic. Further, the specific unit 202E has a function of acquiring the resolution characteristic of the image after the noise removal processing is performed by applying the plurality of trained models 206E to the X-ray transmission image of the jig.
- FIG. 42 shows an example of an X-ray transmission image used for evaluating the resolution. In this X-ray transmission image, a chart whose resolution changes stepwise along one direction is imaged. The resolution of the X-ray transmission image can be measured by using the MTF (Modulation Transfer Function) or the CTF (Contrast Transfer Function).
- The selection unit 204E finally selects, from among the plurality of trained models 206E stored in the control device 20E, the trained model 206E to be used for the noise removal processing of the X-ray transmission image of the object F, based on the image characteristics acquired by the specific unit 202E. That is, the selection unit 204E compares the image characteristics specified by the specific unit 202E with the image characteristics specified from the image data used for constructing the plurality of trained models 206E, and selects the trained model 206E for which the two are similar.
- the selection unit 204E selects one trained model 206E constructed by the image data of the average energy closest to the value of the average energy of the transmitted X-rays specified by the specific unit 202E.
- Alternatively, the selection unit 204E acquires a thickness-luminance characteristic graph for the image data used for constructing the plurality of trained models 206E in the same manner as the identification method used by the specific unit 202E, and selects, as the final trained model 206E, the trained model 206E constructed from the image data whose characteristics are closest to the thickness-luminance characteristic graph acquired for the jig.
- the image characteristics of the image data used for constructing the trained model 206E may refer to those calculated in advance outside the control device 20E.
- Further, the selection unit 204E may select, as the final trained model 206E, the trained model 206E constructed from the image data having the luminance-noise-ratio characteristic closest to the luminance-noise-ratio characteristic acquired by the specific unit 202E.
- The image characteristics of the image data used for constructing the trained models 206E may be acquired by the selection unit 204E from the image data, or those calculated in advance outside the control device 20E may be referred to.
- The selection unit 204E may also select the trained model 206E using a luminance-noise characteristic instead of the luminance-noise-ratio characteristic as the noise characteristic. In that case, the dominant noise factor (shot noise, readout noise, or the like) can be identified from the luminance-noise characteristic, and the trained model 206E can be selected based on the identification result.
- FIG. 43 is a diagram for explaining the selection function of the trained model based on the image characteristics by the selection unit 204E.
- In FIG. 43, part (a) shows the luminance-SNR characteristic graphs G1, G2, and G3 of the image data used for constructing the plurality of trained models 206E, and part (b) shows, in addition to these characteristic graphs, the luminance-SNR characteristic graph GT of the X-ray transmission image of the captured jig. The selection unit 204E calculates the SNR error between the characteristic graph GT and each of the characteristic graphs G1, G2, and G3 at luminance values taken at predetermined intervals, calculates the root mean square error (RMSE) of those errors, and selects the trained model 206E corresponding to the characteristic graph G1, G2, or G3 having the smallest root mean square error. Further, the selection unit 204E can also select the trained model 206E in the same manner when selecting using the energy characteristic.
- the selection unit 204E can also select the trained model 206E used to generate an image having relatively excellent characteristics, based on the characteristics of the images obtained by applying the plurality of trained models 206E to the X-ray transmission image of the jig and performing the noise removal processing.
- the selection unit 204E applies the plurality of trained models 206E to X-ray transmission images obtained by imaging jigs having charts of various resolutions, and evaluates the resolution characteristics of the resulting images after noise removal. Then, the selection unit 204E selects the trained model 206E used for the image in which the change in the resolution of each distribution before and after the noise removal processing is smallest.
- Alternatively, the selection unit 204E evaluates the brightness-noise ratio characteristics of the images after noise removal, and selects the trained model 206E used to generate the image having the best characteristics.
- FIG. 44 shows an example of the structure of the jig used for evaluating the brightness-noise ratio. For example, as a jig, a jig in which foreign substances P2 having various materials and various sizes are scattered in a member P1 whose thickness changes in a step-like manner in one direction can be used.
- FIG. 45 shows an X-ray transmission image obtained for the jig of FIG. 44 after noise removal processing.
- the selection unit 204E selects an image region R1 containing the image of a foreign matter P2 in the X-ray transmission image and an image region R2 in the vicinity of the region R1 that does not contain an image of a foreign matter P2, and calculates the minimum value L_MIN of the brightness in the image region R1, the average value L_AVE of the brightness in the image region R2, and the standard deviation L_SD of the brightness in the image region R2. The brightness-noise ratio CNR is then calculated as CNR = (L_AVE - L_MIN) / L_SD.
- Further, the selection unit 204E calculates the brightness-noise ratio CNR for each of the X-ray transmission images after the application of the plurality of trained models 206E, and selects the trained model 206E used to generate the X-ray transmission image having the highest brightness-noise ratio CNR.
- Alternatively, the selection unit 204E may calculate the brightness-noise ratio CNR from the average value L_AVE_R1 of the brightness in the image region R1, the average value L_AVE_R2 of the brightness in the image region R2, and the standard deviation L_SD of the brightness in the image region R2, using the following formula:
- CNR = (L_AVE_R1 - L_MIN_R2) / L_SD
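- The CNR evaluation described above could be sketched as follows; the region slices, the dictionary of denoised images, and the function names are assumptions made for illustration:

```python
import numpy as np

def cnr(image: np.ndarray, region_r1, region_r2) -> float:
    """Brightness-noise ratio CNR = (L_AVE - L_MIN) / L_SD, where L_MIN is the
    minimum brightness in region R1 (containing the foreign matter P2) and
    L_AVE, L_SD are the mean and standard deviation of the brightness in the
    background region R2."""
    r1 = image[region_r1].astype(np.float64)
    r2 = image[region_r2].astype(np.float64)
    return (r2.mean() - r1.min()) / r2.std()

def select_model_by_cnr(denoised_images: dict[str, np.ndarray], region_r1, region_r2) -> str:
    """Pick the trained model whose denoised jig image has the highest CNR."""
    return max(denoised_images, key=lambda name: cnr(denoised_images[name], region_r1, region_r2))
```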
- the processing unit 205E applies the trained model 206E selected by the selection unit 204E to the X-ray transmission image acquired for the object F, and executes image processing for removing noise, thereby generating an output image. Then, the processing unit 205E outputs the generated output image to the display device 30 or the like.
- FIG. 46 is a flowchart showing the procedure of the observation process by the image acquisition device 1.
- the operator (user) of the image acquisition device 1 sets the imaging conditions in the image acquisition device 1 such as the tube voltage of the X-ray irradiator 50 or the gain in the X-ray detection camera 10 (step S1E).
- a jig is set in the image acquisition device 1, and an X-ray transmission image is acquired by the control device 20E for the jig (step S2E).
- X-ray transmission images of a plurality of types of jigs may be sequentially acquired.
- the control device 20E specifies the image characteristics (energy characteristics, noise characteristics, and resolution characteristics) of the X-ray transmission image of the jig (step S3E). Further, the control device 20E applies the plurality of trained models 206E to the X-ray transmission image of the jig, and specifies the image characteristics (resolution characteristics, brightness-noise ratio value, etc.) of the respective X-ray transmission images after the application of the plurality of trained models 206E (step S4E).
- Next, the control device 20E selects the trained model 206E by comparing the energy characteristics of the X-ray transmission image of the jig with the energy characteristics of the image data used for constructing the trained models 206E, or based on the degree of change in the resolution characteristics of the X-ray transmission image of the jig before and after the application of the trained models (step S5E). Alternatively, the trained model 206E may be selected based on the state of change before and after the application of the trained models. Further, in step S5E, instead of the above processing, the trained model 206E giving the highest brightness-noise ratio CNR after application to the X-ray transmission image of the jig may be selected.
- Next, when the object F is set in the image acquisition device 1 and the object F is imaged, an X-ray transmission image of the object F is acquired (step S7E).
- the control device 20E applies the finally selected trained model 206E to the X-ray transmission image of the object F, so that noise removal processing is executed on the X-ray transmission image (step S8E).
- the control device 20E outputs an output image, which is an X-ray transmission image that has been subjected to noise removal processing, to the display device 30 (step S9E).
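- The procedure of FIG. 46 could be summarized in the following sketch; the object names and method names are placeholders for the corresponding parts of the system and are not APIs defined in this document:

```python
def observation_process(device, trained_models, specify_characteristics, select_model, display):
    """Sketch of the observation procedure of FIG. 46 (steps S1E to S9E).

    `device`, `trained_models`, `specify_characteristics`, `select_model`, and
    `display` are placeholders supplied by the caller.
    """
    device.set_imaging_conditions()                               # S1E: tube voltage, gain, etc.
    jig_image = device.acquire_transmission_image("jig")          # S2E: image the jig
    jig_chars = specify_characteristics(jig_image)                # S3E: energy/noise/resolution characteristics
    denoised_chars = {name: specify_characteristics(model.denoise(jig_image))
                      for name, model in trained_models.items()}  # S4E: apply every candidate model to the jig image
    selected = select_model(jig_chars, denoised_chars, trained_models)  # S5E: choose one trained model
    object_image = device.acquire_transmission_image("object F")  # S7E: image the object F
    output_image = selected.denoise(object_image)                 # S8E: noise removal with the selected model
    display.show(output_image)                                    # S9E: output the denoised image
```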
- the image acquisition device 1 described above can also remove the noise component while increasing the signal component in the X-ray transmission image, and can effectively improve the S / N ratio in the X-ray transmission image.
- the image characteristics of the X-ray transmission image of the jig are specified, and a trained model to be used for noise removal is selected from the trained models constructed in advance based on the image characteristics.
- This makes it possible to estimate the characteristics of the X-ray transmission image that change depending on the operating conditions of the X-ray irradiator 50 in the image acquisition device 1, and since the trained model 206E selected according to the estimation result is used for noise removal, noise removal corresponding to the relationship between brightness and noise in the X-ray transmission image can be realized. As a result, noise in the X-ray transmission image can be effectively removed.
- In general, the X-ray transmission image contains noise derived from X-ray generation. It is conceivable to increase the X-ray dose in order to improve the S/N ratio of the X-ray transmission image, but in that case the exposure dose of the sensor increases and the life of the sensor is shortened, and the life of the X-ray source is also shortened, so it is difficult to achieve both an improvement in the S/N ratio and a long life. In the present embodiment, since it is not necessary to increase the X-ray dose, it is possible to achieve both an improvement in the S/N ratio and a long life.
- the image characteristics of the X-ray transmission image of the jig and the image characteristics of the image data used for constructing the trained model are compared.
- the trained model 206E constructed with the image data corresponding to the image characteristics of the X-ray transmission image of the jig is selected, so that the noise in the X-ray transmission image of the object F can be effectively removed.
- the trained model is selected by using the image characteristics of the image obtained by applying the plurality of trained models 206E to the X-ray transmission image of the jig.
- Since the trained model 206E is selected according to the image characteristics of the X-ray transmission images of the jig to which the plurality of trained models 206E have actually been applied, noise in the X-ray transmission image of the object F can be effectively removed.
- energy characteristics or noise characteristics are used as image characteristics.
- the trained model 206E constructed by an image having characteristics similar to the energy characteristics or noise characteristics of the X-ray transmission image of the jig that changes depending on the imaging conditions of the image acquisition device 1 is selected. As a result, it is possible to remove noise in the X-ray transmission image of the object F corresponding to the change in the conditions of the image acquisition device 1.
- the resolution characteristic or the brightness-noise ratio is also used as the image characteristic. According to such a configuration, by applying the selected trained model 206E, it becomes possible to obtain an X-ray transmission image having a good resolution characteristic or a brightness-noise ratio. As a result, it is possible to remove noise in the X-ray transmission image of the object corresponding to the change in the conditions of the image acquisition device 1.
- It is also suitable that the trained model is constructed by machine learning using, as teacher data, image data obtained by adding noise values along a normal distribution to a radiation image of a predetermined structure. As a result, it becomes easy to prepare the image data serving as the teacher data used for constructing the trained model, and the trained model can be constructed efficiently.
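- A minimal sketch of generating such teacher data, assuming that noise-free (or low-noise) radiation images of the structure are available as arrays; the single standard-deviation parameter used here is a simplification for the example and not the specific noise model described elsewhere in this document:

```python
import numpy as np

def make_training_pair(clean_image: np.ndarray, sigma: float, rng=None):
    """Create a (noisy input, clean target) pair of teacher data by adding
    noise values drawn from a normal distribution to a radiation image of a
    predetermined structure."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = clean_image + rng.normal(loc=0.0, scale=sigma, size=clean_image.shape)
    return noisy, clean_image
```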
- It is also preferable that the image processing module has a noise map generation unit that derives an evaluation value from the pixel value of each pixel of the radiation image, based on relational data representing the relationship between the pixel value and an evaluation value for evaluating the spread of the noise value, and generates a noise map, which is data in which the derived evaluation values are associated with the respective pixels of the radiation image, and a processing unit that inputs the radiation image and the noise map to the trained model and executes the noise removal processing for removing noise from the radiation image.
- Likewise, it is also preferable that the evaluation value is derived from the pixel value of each pixel of the radiation image based on the relational data representing the relationship between the pixel value and the evaluation value for evaluating the spread of the noise value, a noise map, which is data in which the derived evaluation values are associated with the respective pixels of the radiation image, is generated, and the radiation image and the noise map are input to the trained model to execute the noise removal processing for removing noise from the radiation image.
- In these configurations, the evaluation value is derived from the pixel value of each pixel of the radiation image based on the relational data representing the relationship between the pixel value and the evaluation value for evaluating the spread of the noise value, and a noise map, which is data in which the derived evaluation values are associated with the respective pixels of the radiation image, is generated. Then, the radiation image and the noise map are input to the trained model constructed in advance by machine learning, and the noise removal process for removing noise from the radiation image is executed. Consequently, noise in each pixel of the radiation image is removed by machine learning that takes into consideration the spread of the noise value evaluated from the pixel value of each pixel, so that noise removal corresponding to the relationship between the pixel value and the noise spread in the radiation image can be realized using the trained model. As a result, noise in the radiographic image can be effectively removed.
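- As a sketch of this processing, assuming that the relational data is available as a function mapping a pixel value to the evaluation value of the noise spread, and that the trained model accepts a two-channel input (the model interface shown here is an assumption):

```python
import numpy as np

def build_noise_map(radiation_image: np.ndarray, relation) -> np.ndarray:
    """Derive the evaluation value for every pixel from its pixel value using
    the relational data `relation` (pixel value -> noise-spread evaluation
    value), producing a noise map with the same shape as the radiation image."""
    return np.vectorize(relation)(radiation_image.astype(np.float64))

def denoise_with_noise_map(radiation_image: np.ndarray, relation, trained_model):
    """Input the radiation image together with its noise map to the trained model."""
    noise_map = build_noise_map(radiation_image, relation)
    stacked = np.stack([radiation_image, noise_map], axis=0)  # two-channel input
    return trained_model(stacked)
```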
- It is also suitable that the image processing module has an input unit that receives input of condition information indicating either the condition of the radiation source or the imaging condition when irradiating the object with radiation to image the object, a calculation unit that calculates, based on the condition information, the average energy of the radiation transmitted through the object, and a narrowing unit that narrows down, based on the average energy, the trained models used for the noise removal processing from among a plurality of trained models constructed in advance by machine learning using image data. Likewise, it is also suitable that the input of such condition information is accepted, the average energy of the radiation transmitted through the object is calculated based on the condition information, and the trained models used for the noise removal processing are narrowed down based on the average energy.
- In these configurations, the average energy of the radiation transmitted through the object is calculated based on the condition of the radiation source or the imaging condition used when acquiring the radiation image of the object, and candidates for the trained model used for noise removal are narrowed down from the trained models constructed in advance.
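- A sketch of the narrowing step, assuming that each candidate trained model is tagged with the average energy of the image data used to construct it; the model names and the number of candidates kept are illustrative:

```python
def narrow_down_models(candidates: dict[str, float], average_energy: float, k: int = 3) -> list[str]:
    """Return the names of the k trained models whose construction-data average
    energy is closest to the average energy calculated from the condition info."""
    return sorted(candidates, key=lambda name: abs(candidates[name] - average_energy))[:k]

# e.g. narrow_down_models({"model_30keV": 30.0, "model_50keV": 50.0, "model_70keV": 70.0}, 53.2)
```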
- It is also preferable that the image processing module has a specific unit that specifies the image characteristics of a radiation image acquired by the imaging device for a jig, a selection unit that selects, based on the image characteristics, a trained model from among a plurality of trained models constructed in advance by machine learning using image data, and a processing unit that executes the noise removal processing using the selected trained model.
- Likewise, it is also preferable that the image characteristics of the radiation image acquired for the jig are specified, a trained model is selected, based on the image characteristics, from among a plurality of trained models constructed in advance by machine learning using image data, and the noise removal processing is performed using the selected trained model.
- the image characteristics of the radiation image of the jig are specified, and the trained model used for noise removal is selected from the trained models constructed in advance based on the image characteristics.
- This makes it possible to estimate the characteristics of the radiation image that change depending on the conditions of the radiation source in the system, and since the trained model selected according to this estimation result is used for noise removal, noise removal corresponding to the relationship between brightness and noise in the radiation image can be realized. As a result, noise in the radiographic image can be effectively removed.
- According to the embodiments, a radiographic image acquisition device, a radiographic image acquisition system, and a radiographic image acquisition method are provided with which the S/N ratio in a radiographic image can be effectively improved.
- 1 ... Image acquisition device (radiographic image acquisition device, radiographic image acquisition system), 10 ... X-ray detection camera (imaging device), 11 ... scintillator, 12 ... scan camera (detection element), 20, 20A to 20E ... control device (image processing module), 50 ... X-ray irradiator (radiation source), 60 ... belt conveyor (transport device), 72 ... pixel, 73 ... readout circuit, 74 ... pixel line (pixel group), 201, 201C ... input unit, 202, 202A, 202C, 202D ... calculation unit, 202E ... specific unit, 203C, 203D ... narrowing unit, 204, 204A, 204B ... noise map generation unit, 204C, 204E ... selection unit, 205, 205C, 205E ... processing unit, 206C, 206E, 207 ... trained model, F ... object, TD ... transport direction (one direction).
Abstract
Description
Here, μ is the attenuation coefficient of the object F, the filter, the scintillator, or the like, and x is the thickness of the object F, the filter, the scintillator, or the like. μ can be determined from information on the materials of the object F, the filter, and the scintillator, and x can be determined from information on the thicknesses of the object F, the filter, and the scintillator. The X-ray photon number spectrum is obtained by dividing this X-ray energy spectrum by the energy of each X-ray. The calculation unit 202 calculates the average energy of the X-rays by dividing the integral of the energy intensity by the integral of the photon number, using the following formula (3).

Average energy E = integral of spectral intensity / integral of photon number ... (3)

Through the above calculation process, the calculation unit 202 calculates the average energy of the X-rays. For the calculation of the X-ray spectrum, known approximate expressions by Kramers or by Birch et al. may be used.
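As a numerical sketch of formula (3), assuming a tabulated X-ray energy spectrum is available (the arrays a caller would pass are placeholders for the example):

```python
import numpy as np

def _integrate(y: np.ndarray, x: np.ndarray) -> float:
    # trapezoidal rule, written out to avoid version-specific numpy helpers
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2.0))

def average_energy(energies_keV: np.ndarray, intensity_spectrum: np.ndarray) -> float:
    """Average energy E = (integral of spectral intensity) / (integral of photon number).

    The photon number spectrum is obtained by dividing the energy spectrum by
    the energy of each X-ray, as described above (formula (3))."""
    photon_spectrum = intensity_spectrum / energies_keV
    return _integrate(intensity_spectrum, energies_keV) / _integrate(photon_spectrum, energies_keV)
```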
In the above formula (4), the variable Noise is the standard deviation of the noise value, the variable Signal is the signal value (pixel value) of a pixel, the constant F is the noise factor, the constant M is the multiplication factor of the scintillator, the constant C is the coupling efficiency between the scan camera 12 and the scintillator 11 in the X-ray detection camera 10, the constant Q is the quantum efficiency of the scan camera 12, the constant cf is the conversion coefficient for converting the signal value of a pixel into electric charge in the scan camera 12, the variable Em is the average energy of the X-rays, the constant D is the dark current noise generated by thermal noise in the image sensor, and the constant R is the readout noise in the scan camera 12. When the above formula (4) is used, the noise map generation unit 204 substitutes the pixel value of each pixel of the X-ray image acquired by the image acquisition unit 203 into the variable Signal, and substitutes the value of the average energy calculated by the calculation unit 202 into the variable Em. Then, the noise map generation unit 204 obtains the variable Noise calculated using the above formula (4) as the numerical value of the standard deviation of the noise value. The other parameters, including the average energy, may be acquired by receiving input through the input unit 201, or may be set in advance.
FIG. 25 is a block diagram showing the functional configuration of the control device 20C according to the second embodiment. The control device 20C includes an input unit 201C, a calculation unit 202C, a narrowing unit 203C, a selection unit 204C, and a processing unit 205C.
CNR = (L_AVE - L_MIN) / L_SD

Using the above formula, the brightness-noise ratio CNR is calculated. Further, the selection unit 204C calculates the brightness-noise ratio CNR for each of the X-ray transmission images after application of the plurality of trained models 206C, and selects the trained model 206C used to generate the X-ray transmission image having the highest brightness-noise ratio CNR.

CNR = (L_AVE_R1 - L_MIN_R2) / L_SD
The control device 20C of the second embodiment described above selects candidates for the trained model 206C based on the value of the average X-ray energy calculated from the condition information, but it may also have a function corresponding to performance deterioration of the X-ray detection camera 10 and to output fluctuation or performance deterioration of the X-ray irradiator 50.
F_T = E × EM × C × QE

The X-ray conversion factor F_T can be calculated by the above formula. Further, the S/N ratio (SNR) in the X-ray transmission image is obtained from the X-ray conversion factor F_T, the number of X-ray photons N_P, and the readout noise Nr of the camera by the following formula:

SNR = F_T × N_P / {(F_T × N_P + Nr^2)}^(1/2)

Therefore, based on the X-ray conversion factor F_T, the relationship between brightness and noise in the X-ray transmission image can be estimated while taking the performance deterioration of the camera into consideration.
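As a small numerical sketch of these two relations (the parameter values a caller would pass are illustrative only):

```python
def snr(average_energy_E: float, multiplication_EM: float, coupling_C: float,
        quantum_efficiency_QE: float, photon_count_NP: float, readout_noise_Nr: float) -> float:
    """SNR = F_T * N_P / sqrt(F_T * N_P + Nr^2), with F_T = E * EM * C * QE."""
    f_t = average_energy_E * multiplication_EM * coupling_C * quantum_efficiency_QE
    return f_t * photon_count_NP / (f_t * photon_count_NP + readout_noise_Nr ** 2) ** 0.5
```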
FIG. 37 is a block diagram showing the functional configuration of the control device 20E according to the third embodiment. The control device 20E includes an acquisition unit 201E, a specific unit 202E, a selection unit 204E, and a processing unit 205E.
CNR = (L_AVE - L_MIN) / L_SD

Using the above formula, the brightness-noise ratio CNR is calculated. Further, the selection unit 204E calculates the brightness-noise ratio CNR for each of the X-ray transmission images after application of the plurality of trained models 206E, and selects the trained model 206E used to generate the X-ray transmission image having the highest brightness-noise ratio CNR.

CNR = (L_AVE_R1 - L_MIN_R2) / L_SD
Claims (12)
- A radiographic image acquisition device comprising: an imaging device that acquires a radiation image by scanning, in one direction, radiation transmitted through an object and capturing an image thereof; a scintillator that is provided on the imaging device and converts the radiation into light; and an image processing module that inputs the radiation image to a trained model constructed in advance by machine learning using image data and executes noise removal processing for removing noise from the radiation image, wherein the imaging device includes: a detection element configured such that pixel lines each having M pixels (M is an integer of 2 or more) arranged along the one direction are arranged in N columns (N is an integer of 2 or more) in a direction orthogonal to the one direction, the detection element outputting a detection signal relating to the light for each pixel; and a readout circuit that, for each of the N columns of pixel lines of the detection element, adds the detection signals output from at least two of the M pixels and sequentially outputs the N added detection signals, thereby outputting the radiation image.
- The radiographic image acquisition device according to claim 1, wherein the trained model is constructed by machine learning using, as teacher data, image data obtained by adding noise values along a normal distribution to a radiation image of a predetermined structure.
- The radiographic image acquisition device according to claim 1 or 2, wherein the image processing module includes: a noise map generation unit that derives an evaluation value from the pixel value of each pixel of the radiation image based on relational data representing a relationship between a pixel value and an evaluation value for evaluating a spread of a noise value, and generates a noise map that is data in which the derived evaluation values are associated with the respective pixels of the radiation image; and a processing unit that inputs the radiation image and the noise map to the trained model and executes the noise removal processing for removing noise from the radiation image.
- The radiographic image acquisition device according to claim 1 or 2, wherein the image processing module includes: an input unit that receives input of condition information indicating either a condition of a generation source of the radiation or an imaging condition when irradiating the object with radiation to image the object; a calculation unit that calculates, based on the condition information, an average energy of the radiation transmitted through the object; and a narrowing unit that narrows down, based on the average energy, the trained model used for the noise removal processing from among a plurality of trained models each constructed in advance by machine learning using image data.
- The radiographic image acquisition device according to claim 1 or 2, wherein the image processing module includes: a specific unit that specifies image characteristics of a radiation image acquired by the imaging device for a jig; a selection unit that selects, based on the image characteristics, a trained model from among a plurality of trained models each constructed in advance by machine learning using image data; and a processing unit that executes the noise removal processing using the selected trained model.
- A radiographic image acquisition system comprising: the radiographic image acquisition device according to any one of claims 1 to 5; a generation source that irradiates the object with radiation; and a transport device that transports the object in the one direction with respect to the imaging device.
- A radiographic image acquisition method comprising: a step of acquiring a radiation image by scanning, in one direction, scintillation light corresponding to radiation transmitted through an object and capturing an image thereof; and a step of inputting the radiation image to a trained model constructed in advance by machine learning using image data and executing noise removal processing for removing noise from the radiation image, wherein in the acquiring step, using a detection element configured such that pixel lines each having M pixels (M is an integer of 2 or more) arranged along the one direction are arranged in N columns (N is an integer of 2 or more) in a direction orthogonal to the one direction, the detection element outputting a detection signal relating to the scintillation light for each pixel, the detection signals output from at least two of the M pixels are added for each of the N columns of pixel lines of the detection element, and the N added detection signals are sequentially output, thereby outputting the radiation image.
- The radiographic image acquisition method according to claim 7, wherein the trained model is constructed by machine learning using, as teacher data, image data obtained by adding noise values along a normal distribution to a radiation image of a predetermined structure.
- The radiographic image acquisition method according to claim 7 or 8, wherein, in the executing step, the evaluation value is derived from the pixel value of each pixel of the radiation image based on relational data representing a relationship between a pixel value and an evaluation value for evaluating a spread of a noise value, a noise map that is data in which the derived evaluation values are associated with the respective pixels of the radiation image is generated, and the radiation image and the noise map are input to the trained model to execute the noise removal processing for removing noise from the radiation image.
- The radiographic image acquisition method according to claim 7 or 8, wherein, in the executing step, input of condition information indicating either a condition of a generation source of the radiation or an imaging condition when irradiating the object with radiation to image the object is received, an average energy of the radiation transmitted through the object is calculated based on the condition information, and the trained model used for the noise removal processing is narrowed down, based on the average energy, from among a plurality of trained models each constructed in advance by machine learning using image data.
- The radiographic image acquisition method according to claim 7 or 8, wherein, in the executing step, image characteristics of a radiation image acquired for a jig are specified, a trained model is selected, based on the image characteristics, from among a plurality of trained models each constructed in advance by machine learning using image data, and the noise removal processing is executed using the selected trained model.
- The radiographic image acquisition method according to any one of claims 7 to 11, further comprising: a step of irradiating the object with radiation; and a step of transporting the object in the one direction with respect to the detection element.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202180028373.8A CN115427795A (zh) | 2020-04-16 | 2021-04-14 | Radiographic image acquisition device, radiographic image acquisition system, and radiographic image acquisition method
US17/918,380 US20230135988A1 (en) | 2020-04-16 | 2021-04-14 | Radiographic image acquiring device, radiographic image acquiring system, and radiographic image acquisition method |
JP2022515416A JPWO2021210612A1 (ja) | 2020-04-16 | 2021-04-14 | |
KR1020227038922A KR20230002581A (ko) | 2020-04-16 | 2021-04-14 | Radiation image acquisition device, radiation image acquisition system, and radiation image acquisition method
EP21789365.0A EP4130724A4 (en) | 2020-04-16 | 2021-04-14 | DEVICE, SYSTEM AND METHOD FOR ACQUIRING RADIOGRAPHIC IMAGE |
Applications Claiming Priority (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2020073578 | 2020-04-16 | ||
JP2020-073576 | 2020-04-16 | ||
JP2020-073578 | 2020-04-16 | ||
JP2020073576 | 2020-04-16 | ||
JP2021021673 | 2021-02-15 | ||
JP2021-021673 | 2021-02-15 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021210612A1 (ja) | 2021-10-21
Family
ID=78083882
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2021/015464 WO2021210612A1 (ja) | 2020-04-16 | 2021-04-14 | 放射線画像取得装置、放射線画像取得システム、及び放射線画像取得方法 |
Country Status (6)
Country | Link |
---|---|
US (1) | US20230135988A1 (ja) |
EP (1) | EP4130724A4 (ja) |
JP (1) | JPWO2021210612A1 (ja) |
KR (1) | KR20230002581A (ja) |
CN (1) | CN115427795A (ja) |
WO (1) | WO2021210612A1 (ja) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006318103A (ja) * | 2005-05-11 | 2006-11-24 | Fuji Photo Film Co Ltd | 画像処理装置および方法並びにプログラム |
JP2008229161A (ja) * | 2007-03-22 | 2008-10-02 | Fujifilm Corp | 画像成分分離装置、方法、およびプログラム、ならびに、正常画像生成装置、方法、およびプログラム |
JP2013512024A (ja) * | 2009-11-25 | 2013-04-11 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | 向上された画像データ/線量低減 |
JP2018206382A (ja) * | 2017-06-01 | 2018-12-27 | 株式会社東芝 | 画像処理システム及び医用情報処理システム |
CN109697476A (zh) * | 2019-02-01 | 2019-04-30 | 重庆大学 | 一种基于深度学习的x射线光子计数探测器一致性校准方法 |
WO2019082276A1 (ja) | 2017-10-24 | 2019-05-02 | 株式会社システムスクエア | 電磁波検出モジュール、電磁波検出モジュール列、及び非破壊検査装置 |
JP2019111322A (ja) * | 2017-12-20 | 2019-07-11 | キヤノンメディカルシステムズ株式会社 | 医用信号処理装置 |
JP2019158663A (ja) | 2018-03-14 | 2019-09-19 | 株式会社 システムスクエア | 検査装置 |
JP2019208990A (ja) * | 2018-06-07 | 2019-12-12 | キヤノンメディカルシステムズ株式会社 | 医用画像診断装置 |
WO2020031984A1 (ja) * | 2018-08-08 | 2020-02-13 | Blue Tag株式会社 | 部品の検査方法及び検査システム |
Non-Patent Citations (1)
Title |
---|
See also references of EP4130724A4 |
Also Published As
Publication number | Publication date |
---|---|
CN115427795A (zh) | 2022-12-02 |
US20230135988A1 (en) | 2023-05-04 |
EP4130724A1 (en) | 2023-02-08 |
EP4130724A4 (en) | 2024-06-12 |
JPWO2021210612A1 (ja) | 2021-10-21 |
KR20230002581A (ko) | 2023-01-05 |
Legal Events

Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21789365; Country of ref document: EP; Kind code of ref document: A1
| ENP | Entry into the national phase | Ref document number: 2022515416; Country of ref document: JP; Kind code of ref document: A
| ENP | Entry into the national phase | Ref document number: 2021789365; Country of ref document: EP; Effective date: 20221024
| ENP | Entry into the national phase | Ref document number: 20227038922; Country of ref document: KR; Kind code of ref document: A
| NENP | Non-entry into the national phase | Ref country code: DE