WO2014181889A1 - Substance identification device and substance identification method employing X-ray panoramic/CT imaging - Google Patents

Substance identification device and substance identification method employing X-ray panoramic/CT imaging Download PDF

Info

Publication number
WO2014181889A1
WO2014181889A1 (application PCT/JP2014/062631)
Authority
WO
WIPO (PCT)
Prior art keywords
image
detector
ray
data
substance
Application number
PCT/JP2014/062631
Other languages
French (fr)
Japanese (ja)
Inventor
勉 山河
浩一 尾川
政廣 辻田
明敏 勝又
Original Assignee
株式会社テレシステムズ
Application filed by 株式会社テレシステムズ
Priority to JP2015515917A (JPWO2014181889A1)
Publication of WO2014181889A1

Links

Images

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B 6/50 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment specially adapted for specific body parts; specially adapted for specific clinical applications
    • A61B 6/51 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment specially adapted for specific body parts; specially adapted for specific clinical applications for dentistry
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B 6/02 Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
    • A61B 6/025 Tomosynthesis
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B 6/02 Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
    • A61B 6/03 Computed tomography [CT]
    • A61B 6/032 Transmission computed tomography [CT]
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B 6/42 Arrangements for detecting radiation specially adapted for radiation diagnosis
    • A61B 6/4208 Arrangements for detecting radiation specially adapted for radiation diagnosis characterised by using a particular type of detector
    • A61B 6/4241 Arrangements for detecting radiation specially adapted for radiation diagnosis characterised by using a particular type of detector using energy resolving detectors, e.g. photon counting
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B 6/52 Devices using data or image processing specially adapted for radiation diagnosis
    • A61B 6/5205 Devices using data or image processing specially adapted for radiation diagnosis involving processing of raw data to produce diagnostic data
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B 6/52 Devices using data or image processing specially adapted for radiation diagnosis
    • A61B 6/5258 Devices using data or image processing specially adapted for radiation diagnosis involving detection or reduction of artifacts or noise
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/003 Reconstruction from projections, e.g. tomography
    • G06T 11/005 Specific pre-processing for tomographic reconstruction, e.g. calibration, source positioning, rebinning, scatter correction, retrospective gating
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B 6/02 Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
    • A61B 6/027 Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis characterised by the use of a particular data acquisition trajectory, e.g. helical or spiral
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2211/00 Image generation
    • G06T 2211/40 Computed tomography
    • G06T 2211/408 Dual energy

Definitions

  • The present invention relates to a substance identification apparatus and a substance identification method using X-ray panoramic/CT imaging, and in particular to a substance identification apparatus and method using a hybrid X-ray imaging apparatus that identifies (specifies or estimates) at least the type of a structure (substance) at a position of interest by combining panoramic image information and CT image information obtained by X-ray panoramic imaging and CT imaging.
  • One field of diagnostic apparatus using X-rays is a dental panoramic X-ray imaging apparatus.
  • tomography of a subject based on tomosynthesis has been actively performed.
  • the principle of this tomosynthesis method has been known for a long time (see, for example, Patent Document 1).
  • a tomography method has also been proposed which seeks to enjoy the simplicity of image reconstruction based on the tomosynthesis method (see, for example, Patent Document 2 and Patent Document 3).
  • Many examples are seen in dental imaging and mammography (see, for example, Patent Document 4, Patent Document 5, and Patent Document 6).
  • For example, an X-ray panoramic imaging apparatus has been proposed that uses a detector capable of collecting X-ray detection data at high speed (for example, 300 fps), as in Patent Document 7, loads all the detection data into a computer, and executes the tomosynthesis method.
  • The detection data are processed by the tomosynthesis method to generate a panoramic image of a tomographic plane; by changing the position of the tomographic plane in its front-rear direction, a panoramic image of the changed tomographic plane can also be generated.
  • This dental panoramic X-ray imaging apparatus can visualize the entire oral cavity with low exposure. However, since depth information in the tomographic direction cannot be obtained and image blur exists, it has not yet been used for the detailed diagnostic examination needed to grasp the internal structure of the oral cavity.
  • Such detailed diagnosis relies exclusively on intraoral radiography, with a film or X-ray sensor placed in the mouth, and on dental X-ray CT apparatuses.
  • With a dental X-ray CT apparatus, three-dimensional shape information of the oral cavity can be obtained. However, because the X-ray tube voltage is as low as about 80 kV and the oral cavity contains a large amount of hard tissue, a stable CT value of the imaging target cannot be obtained due to the effects of beam hardening and scattered radiation.
  • In addition, an integration-type detector, which integrates the X-ray transmission signal over a certain time, is commonly used in both panoramic and CT apparatuses.
  • Electric noise generated in the preamplifier and processing circuits is mixed into the detection signal.
  • If the detector is optimized to image hard tissue such as enamel and cancellous bone, images of soft tissue such as gums and muscle lack contrast and are difficult to render.
  • Conversely, if the detector is optimized for soft tissue, the hard tissue is buried in noise and adequate contrast cannot be obtained. Under these circumstances, it is difficult to capture both kinds of tissue in a single acquisition.
  • In Patent Document 8, a combined X-ray imaging apparatus capable of performing both panoramic imaging and CT imaging has also been proposed.
  • the present invention has been made in view of the above-described conventional situation.
  • the features of the panoramic imaging function and the CT imaging function are complementarily fused, and normal panoramic imaging and CT imaging can be performed by one imaging (scanning).
  • To this end, a substance identification device using X-ray panorama/CT imaging according to the present invention includes: an X-ray tube that irradiates X-rays; and a detector including a detection circuit in which cells, each outputting an electric pulse corresponding to the energy of a photon every time an X-ray photon is detected, are arranged two-dimensionally to form a two-dimensional pixel group, a measurement circuit that counts the number of photons detected by each cell of the detection circuit separately for each of two or more energy bands (number of energy bands M ≥ 2, M being a positive integer), and an output circuit that outputs a digital electric signal corresponding to the output of each cell measured by the measurement circuit as frame data for each energy band.
  • The device further includes: a scanning unit that places the X-ray tube and the detector opposite each other with the imaging target interposed between them, rotates the X-ray tube and the detector around the imaging target, and collects the frame data output by the detector for each energy band; a panoramic image generating unit that generates panoramic image data of the imaging target for each energy band based on the frame data; a CT image generating unit that generates a CT (computed tomography) image of the imaging target; panoramic image display means for displaying the panoramic image on a monitor; interest position designation means for designating a position of interest on the panoramic image displayed on the monitor in accordance with an operator's instruction; and substance identifying means for identifying at least the type of one or more substances having thickness in the imaging target that exist along the projection direction, in the imaging space between the X-ray tube and the detector, corresponding to the position of interest, based on the X-ray transmission data of the panoramic image and the morphological information of the substances obtained from the CT image.
  • According to another aspect of the present invention, a substance identification method is provided that is applied to a system including: an X-ray tube that irradiates X-rays; a detector including detection circuits in which cells, each outputting an electric pulse corresponding to the energy of a photon every time an X-ray photon is detected, are arranged two-dimensionally to form a two-dimensional pixel group, a measurement circuit that counts the number of photons detected by each cell separately for each of two or more energy bands (number of energy bands M ≥ 2, M being a positive integer), and an output circuit that outputs a digital electric signal corresponding to the output of each cell measured by the measurement circuit as frame data for each energy band; and a scanning unit that places the X-ray tube and the detector opposite each other with the imaging target interposed between them, rotates them around the imaging target, and collects the frame data output by the detector for each energy band.
  • The substance identification method includes: generating panoramic image data of the imaging target for each energy band based on the frame data; generating a CT (computed tomography) image of the imaging target; displaying the panoramic image on a monitor; designating a position of interest on the displayed panoramic image in accordance with an operator's instruction; and identifying at least the type of one or more substances having thickness in the imaging target that exist along the projection direction in the imaging space, based on the X-ray transmission data of the panoramic image and the morphological information of the substances obtained from the CT image.
  • In the present invention, the features of the panoramic imaging function and the CT imaging function are complementarily fused, and normal panoramic imaging and CT imaging can be performed selectively in one imaging operation (scan). Furthermore, according to the present invention, at least the types of structures having thickness in the region of interest to be imaged can be identified (specified or estimated), which makes the technique usable at the clinical level.
  • FIG. 1 is a perspective view showing an outline of the overall configuration of a hybrid X-ray imaging apparatus that functionally provides a substance identification apparatus of the present invention
  • FIG. 2 is a view for explaining the positional relationship between the imaging system of the apparatus and the jaw of the subject
  • FIG. 3 is a diagram for explaining the positional relationship between the rotation for scanning of the imaging system and the jaw of the subject
  • FIG. 4 is a plan view illustrating a detection surface (collection window) of the detector
  • FIG. 5 is a circuit diagram illustrating the electrical configuration of the detector.
  • FIG. 6 is a diagram for explaining an incident pulse of X-rays and a threshold for photon counting;
  • FIG. 7 is a diagram for explaining an energy region for X-ray energy distribution and photon counting;
  • FIG. 8 is a block diagram showing the main part of the overall electrical configuration of the X-ray imaging apparatus;
  • FIG. 9 is a functional block diagram showing processing executed by the controller and the data processor in order to identify the type of composition of the subject's jaw.
  • FIG. 10 is a display diagram illustrating an example of an optimally focused 3D autofocus image;
  • FIG. 11 is a diagram for explaining a scan range;
  • FIG. 12 is a diagram for explaining the positional relationship between the dentition, the reference tomographic plane of the dentition, and the scan of the X-ray tube / detector;
  • FIG. 13 is an explanatory diagram illustrating a tooth specified for substance identification and an irradiation direction of an X-ray beam that passes through the tooth;
  • FIG. 14 is a schematic flowchart illustrating the flow of removing metal artifacts,
  • FIG. 15 is a diagram for explaining a part of the process of removing metal artifacts.
  • FIG. 16 is a diagram illustrating another part of the process of removing metal artifacts
  • FIG. 17 is a diagram illustrating another part of the process of removing metal artifacts
  • FIG. 18 is a diagram illustrating another part of the process of removing metal artifacts,
  • FIG. 19 is a diagram illustrating another part of the process of removing metal artifacts.
  • FIG. 20 is a diagram for explaining another partial process of removing metal artifacts
  • FIG. 21 is a diagram illustrating another part of the process of removing metal artifacts
  • FIG. 22 is a diagram illustrating another part of the process of removing metal artifacts
  • FIG. 23 is a diagram illustrating another part of the process of removing metal artifacts.
  • FIG. 24 is a graph showing an example of a scatter diagram calculated for substance identification
  • FIG. 25 is a graph showing an example of a reference scatter diagram prepared in advance.
  • FIG. 26 is a graph showing another example of a reference scatter diagram prepared in advance.
  • FIG. 27 is a flowchart schematically showing a flow of scan and obstacle shadow removal executed in the embodiment;
  • FIG. 28 is a diagram showing a part of the process of removing the obstruction shadow
  • FIG. 29 is a graph showing an example of a gain curve for reconstructing a panoramic image of a dentition
  • FIG. 30 is a graph showing an example of a gain curve for reconstructing a panoramic image of the cervical spine
  • FIG. 31 is an image showing an example of a panoramic image of a dentition (before removal of an obstacle shadow)
  • FIG. 32 is an image showing an example of a panoramic image of the cervical spine
  • FIG. 33 is an image (an image of a patient different from the patient in FIG. 13) showing an example of the panoramic image of the dentition before removing the obstacle shadow
  • FIG. 34 is an image showing an example of the panoramic image of the dentition after removal (reduction) of obstacle shadows, to be compared with FIG. 33
  • the “identification” of the substance identification means that at least the kind of the substance is specified or estimated based on the X-ray information transmitted through the substance (the structure that forms the imaging target).
  • this X-ray imaging apparatus is implemented as a dental hybrid X-ray imaging apparatus.
  • The hybrid type indicates that the apparatus has functions for performing both X-ray panoramic imaging and X-ray CT imaging.
  • FIG. 1 shows an appearance of such a hybrid X-ray imaging apparatus 1.
  • The imaging apparatus 1 scans the subject's jaw with X-rays and provides both panoramic imaging and CT imaging of the jaw from the digital X-ray transmission data.
  • The X-ray imaging apparatus 1 includes a casing 11 that collects data from a subject (patient) P, for example while the subject P is standing or sitting in a wheelchair, and a control/arithmetic unit 12 (also called a console), constituted by a computer, which controls the data collection performed by the casing 11, captures the collected data to create a panoramic image, and performs post-processing of the panoramic image interactively with an operator (doctor, engineer, etc.) or automatically.
  • the housing 11 includes a stand unit 13 and a photographing unit 14 that can move up and down with respect to the stand unit 13.
  • the imaging unit 14 is attached to the support column of the stand unit 13 so as to be movable up and down within a predetermined range.
  • an XYZ orthogonal coordinate system having the longitudinal direction of the stand unit 13, that is, the vertical direction as the Z axis is set.
  • the photographing unit 14 includes a vertically moving unit 15 that is substantially U-shaped when viewed from the side, and a rotating unit 16 that is supported by the vertically moving unit 15 so as to be rotatable (rotatable).
  • the vertical movement unit 15 can move up and down relative to the stand unit 13 in accordance with control from the control / arithmetic apparatus 12.
  • the rotation unit 16 is configured by a substantially U-shaped arm as viewed from the side.
  • the arm is suspended downwardly by the vertical movement unit 15 with the arm facing downward.
  • the rotating unit 16 is also rotated in accordance with control from the control / arithmetic apparatus 12.
  • An X-ray tube 21 serving as the X-ray source and an X-ray detector 22 serving as the X-ray detection means (hereinafter simply referred to as the detector) are arranged on the two arms 16A and 16B, which extend in the vertical direction from the rotating unit 16, so as to face each other.
  • the rotation unit 16 rotates along the XY plane, the pair of the X-ray tube 21 and the detector 22 also rotate in a state of facing each other, and a space between them is defined as an imaging space S.
  • the jaw JW of the subject P is positioned using the chin rest 19 and the headrest 20.
  • With this positioning, the X-rays irradiated from the X-ray tube 21 pass through the jaw JW of the subject P and enter the detector 22 (see FIG. 2).
  • the X-ray tube 21 is composed of, for example, a rotating anode X-ray tube, and radiates X-rays radially from the target (anode).
  • the focal point of the electron beam colliding with the target is as small as about 0.1 to 0.5 mm in diameter, for example, and therefore the X-ray tube 21 functions as a point X-ray source.
  • a slit 23 is attached to a predetermined position on the front surface of the X-ray tube 21. With this slit 23, the X-rays incident on the detector 22 can be narrowed in a cone shape in accordance with the shape of a desired collection window on the detection surface.
  • the detector 22 has an array (sensor circuit) of a plurality of detection modules B1 to Bm in which X-ray imaging elements are two-dimensionally arranged.
  • the plurality of detection modules B1 to Bm are created as blocks independent of each other, and are mounted in a predetermined shape (for example, a rectangular shape) on a substrate (not shown) to form the entire detector 22.
  • The plurality of detection modules B1 to Bm are arranged two-dimensionally in the vertical and horizontal directions (for example, 15 in the vertical direction and 8 in the horizontal direction, with a further 5 placed at the upper and lower ends), with a certain gap between individual modules, and each module is inclined obliquely with respect to the scanning direction by a given angle, set to about 14°, for example.
  • The plurality of detection modules B1 to Bm thus form an X-ray detection surface 22A whose outline is either a rectangle with a small length-to-width ratio (in the case of CT imaging) or an elongated rectangle with a large length-to-width ratio (in the case of panoramic imaging). Since the detection modules B1 to Bm are arranged obliquely, the X-ray detection surface 22A is formed so as to follow (inscribe) the inside of the detection surfaces of the plurality of modules B1 to Bm.
  • The structure of the detector 22 with obliquely arranged detection modules and the sub-pixel processing of the detection signals are known, for example, from WO 2012/0866648 A1.
  • a plurality of modules arranged in a column in the left-hand column in FIG. 4 function as panoramic shooting modules.
  • the opening area for panoramic photography is indicated by reference numeral 22B.
  • A two-dimensional module group formed by the remaining modules, that is, all modules except the two modules at the upper and lower ends of the left-hand column, functions as the module group for CT imaging.
  • the aperture area of the small field for CT imaging is indicated by reference numeral 22A.
  • the detector 22 has a collection window of “a shape having a long portion that is rectangular and further extends vertically on one end side”, in which the opening areas 22A and 22B are combined.
  • Selection of the module group according to whether it is for panoramic imaging or CT imaging can be performed by controlling the opening area of the slit 23, but in this embodiment, panoramic imaging and CT imaging are performed simultaneously in one scan. It is also a feature to do. For this reason, the slit 23 narrows the fan-shaped X-ray beam so as to match the shape of the collection window in which the opening areas 22A and 22B are combined.
  • reference symbol AXd in FIG. 4 is a central axis when the detector 22 itself rotates (rotates).
  • Since the detector 22 is always controlled to take a posture facing the X-ray tube 21, it is not always necessary to control this rotation operation.
  • Individual detection modules B1 ( ⁇ Bm) are made of a semiconductor material that directly converts X-rays into electrical pulse signals. For this reason, the detector 22 is a photon counting X-ray detector of a direct conversion method using a semiconductor.
  • the detector 22 is formed as an array of a plurality of detection modules B1 to Bm.
  • Each detection module Bm includes a detection circuit Cp (see FIG. 5) for detecting X-rays and a data counting circuit 51 n (see FIG. 5) stacked together with the detection circuit Cp, as is well known.
  • the detection circuit Cp includes, for each detection module, a semiconductor layer that directly converts X-rays into an electrical signal, and a charging electrode and a collecting electrode that are respectively stacked on both sides (not shown). X-rays are incident on the charged electrode.
  • The charging electrode is a common electrode, and a high bias voltage is applied between it and the collecting electrodes.
  • the semiconductor layer and the collecting electrode are divided into a grid pattern, and by this division, a plurality of small regions are formed that are arranged in a two-dimensional array at a certain distance from each other. As a result, a plurality of stacked bodies of semiconductor cells C (see FIGS. 4 and 5) and collecting electrodes arranged in a two-dimensional manner on the charged electrode are formed.
  • The plurality of stacked bodies form a plurality of pixels Sn arranged in a two-dimensional grid pattern.
  • In this way, a plurality of pixels Sn (n = 1 to N) occupying the predetermined area required for the detector 22 (or, at the time of CT imaging, the opening area 22A: see FIG. 4) are formed by the plurality of detection modules B1 to Bm as a whole.
  • The plurality of pixels Sn constitute the detection circuit Cp (pixel group) (see FIG. 5).
  • Each pixel Sn measures, for example, 200 μm × 200 μm; this pixel size is set to a value at which incident X-rays can be detected as a collection of individual photons.
  • Each pixel Sn responds to the incidence of each X-ray photon and outputs an electric pulse whose amplitude corresponds to the energy of that photon. That is, each pixel Sn converts the X-rays incident on it directly into an electric signal.
  • The detector 22 counts the photons constituting the incident cone-beam X-rays for each pixel Sn constituting its detection surface (collection window), and outputs digital data reflecting the counted values at a relatively high frame rate of, for example, 75 fps. This data is also called frame data.
  • As the semiconductor material of the semiconductor layer, that is, of the semiconductor cells C, cadmium telluride (CdTe), cadmium zinc telluride (CdZnTe, CZT), silicon (Si), thallium bromide (TlBr), mercury iodide, or the like is used.
  • Alternatively, the semiconductor cell may be replaced by a cell that combines a scintillator material subdivided into columns, each column optically isolated from the others, with a photoelectric converter composed of a combination of fine avalanche photodiodes.
  • The above-mentioned size (200 μm × 200 μm) of each pixel Sn is sufficiently small that the number of incident photons can be counted with negligible counting loss when the X-rays incident on one pixel are detected as photons (particles).
  • Here, a size capable of detecting X-rays as particles means a size at which the superposition (pile-up) of the electric pulses responding to successive incidences, which occurs when a plurality of radiation (for example, X-ray) particles are incident at or near the same position in succession, can be substantially ignored or its amount can be predicted.
  • Pile-up causes count losses of X-ray particles (also called pile-up count loss).
  • The size of the pixels Sn forming the detector 22 is therefore set so that such counting loss can be regarded as not occurring, as not substantially occurring, or as occurring only to an extent whose amount can be estimated.
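  • As an illustration of how such a "predictable" counting loss could be estimated, the following minimal sketch applies the classical non-paralyzable dead-time model; the dead-time value and the model itself are assumptions for illustration and are not specified in this description.

```python
import numpy as np

def correct_counting_loss(measured_rate_cps, dead_time_s=100e-9):
    """Illustrative non-paralyzable dead-time correction.

    measured_rate_cps : observed count rate of one pixel (counts/s)
    dead_time_s       : assumed pulse-processing dead time (s); hypothetical value

    Returns an estimate of the true incident photon rate; this classical model
    is only one way the 'predictable' counting loss mentioned above could be
    estimated.
    """
    m = np.asarray(measured_rate_cps, dtype=float)
    return m / (1.0 - m * dead_time_s)

# Example: a pixel measuring 1e6 counts/s with 100 ns dead time
# print(correct_counting_loss(1e6))  # ~1.11e6 counts/s
```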
  • a latch circuit 58 and a serial converter 59 are provided.
  • Each charge amplifier 52 is connected to each collector electrode of each semiconductor cell C, charges up the charge collected in response to the incidence of X-ray particles, and outputs it as a pulse signal of electric quantity.
  • the output terminal of the charge amplifier 52 is connected to a waveform shaping circuit 53 whose gain and offset can be adjusted.
  • the waveform of the detected pulse signal is processed with the previously adjusted gain and offset to shape the waveform.
  • The gain and offset of the waveform shaping circuit 53 are calibrated in consideration of the non-uniformity of the charge characteristics of the semiconductor cells C and the variation of the circuit characteristics for each pixel Sn. As a result, the waveform-shaped signal is output with the non-uniformity removed, and the relative accuracy of threshold setting is increased.
  • For each pixel Sn, that is, for each collection channel CNn, the waveform-shaped pulse signal output from the waveform shaping circuit 53 thus has characteristics that substantially reflect the energy value of the incident X-ray particle. Therefore, the variation between collection channels CNn is greatly reduced.
  • the output terminal of the waveform shaping circuit 53 is connected to the comparison input terminals of the plurality of comparators 54 1 to 54 4 .
  • Analog threshold values (voltage values) th1 to th4 are applied to the reference input terminals of the comparators 541 to 544, respectively.
  • FIG. 6 schematically shows the magnitude relationship (th1 < th2 < th3 < th4) between the peak value (representing energy) of the pulse voltage generated in response to the input of one X-ray photon and the threshold values th1 to th4.
  • The purpose of this comparison is to examine which of the preset energy regions the energy value of the incident X-ray particle falls into (discrimination). It is determined which of the analog thresholds th1 to th4 are exceeded by the peak value of the pulse signal (that is, by the energy value of the incident X-ray photon); the energy region to which the photon belongs is thereby discriminated.
  • The lowest analog threshold th1 is usually set so as to prevent the detection of disturbances, of noise caused by circuits such as the semiconductor cell C and the charge amplifier 52, and of low-energy radiation that is not needed for imaging.
  • The number of thresholds, that is, the number of comparators, is not necessarily limited to four; it may be three (including the analog threshold th1) or five or more.
  • The analog thresholds th1 to th4 described above are given as digital values from the calibration calculator 38 of the console 17 through the interface 31 for each pixel Sn, that is, for each collection channel. Accordingly, the reference input terminals of the comparators 541 to 544 are connected to the output terminals of four D/A converters 571 to 574, respectively.
  • The D/A converters 571 to 574 are connected via the latch circuit 58 to threshold receiving terminals T1 (to TN), and the threshold receiving terminals T1 (to TN) are connected to the interface 31 of the console 17.
  • At the time of imaging, the latch circuit 58 latches the digital thresholds th1' to th4' given from the threshold assigner 41 via the interface 31 and the threshold receiving terminals T1 (to TN), and outputs them to the corresponding D/A converters 571 to 574. The D/A converters 571 to 574 can therefore supply the commanded analog thresholds th1 to th4 to the comparators 541 to 544 as voltages.
  • FIG. 7 schematically shows an X-ray spectrum when an appropriate material is used for the anode material of the X-ray tube 21.
  • the horizontal axis indicates X-ray energy, and the vertical axis indicates the incidence frequency of X-ray photons. This incidence frequency is a factor representative of the count value (count) or intensity of X-ray photons.
  • The analog threshold thi is the analog voltage applied to the comparator 54i in each discrimination circuit DSi, whereas the energy threshold THi is the corresponding value for discriminating the X-ray energy (keV) on the energy spectrum.
  • The waveform shown in FIG. 7 is the continuous energy spectrum of the X-rays emitted from a commonly used X-ray tube, for example one using tungsten as the anode material.
  • the count value (count) on the vertical axis is an amount proportional to the photon generation frequency corresponding to the energy value on the horizontal axis
  • the energy value on the horizontal axis is an amount depending on the tube voltage of the X-ray tube 21.
  • The first analog threshold th1 is set so as to correspond to the first energy threshold TH1, which separates the non-counting region for X-ray photons (a region containing no meaningful X-ray information and mixed with circuit noise) from the lowest energy region ER1 to be counted.
  • The second and third analog thresholds th2 and th3 are set so as to provide, in turn, the second and third energy thresholds TH2 and TH3, which are higher than the first energy threshold TH1.
  • These energy thresholds THi are determined by assuming one or more reference subjects so that the count value accumulated over a predetermined time is substantially equal in each energy region.
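  • A minimal sketch of one way such thresholds could be chosen is given below: the cumulative count distribution of a reference spectrum is split into bands of roughly equal counts. The spectrum arrays, the lower limit TH1 and the number of bands are assumed inputs, not values taken from this description.

```python
import numpy as np

def equal_count_thresholds(energies_keV, counts, n_bands=3, e_min_keV=20.0):
    """Sketch: choose energy thresholds TH2, TH3, ... so that a reference
    spectrum yields roughly equal counts in each of n_bands regions above a
    fixed lower limit TH1 = e_min_keV (all inputs are assumptions)."""
    energies_keV = np.asarray(energies_keV, dtype=float)
    counts = np.asarray(counts, dtype=float)
    mask = energies_keV >= e_min_keV          # discard the non-counting region
    e, c = energies_keV[mask], counts[mask]
    cum = np.cumsum(c) / np.sum(c)            # cumulative fraction of counts
    # interior thresholds at the 1/n, 2/n, ... quantiles of the spectrum
    quantiles = np.arange(1, n_bands) / n_bands
    inner = np.interp(quantiles, cum, e)
    return np.concatenate(([e_min_keV], inner))   # [TH1, TH2, TH3, ...]
```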
  • As shown in FIG. 5, the outputs of the comparators 541 to 544 are connected to the input terminals of the plurality of counters 561 to 564.
  • Each counter 561 (to 564) counts up every time the output (pulse) of the corresponding comparator 541 (to 544) turns on. As a result, the number of X-ray photons having an energy equal to or higher than the threshold of the energy region ER1 (to ER4) assigned to each counter 561 (to 564) is counted for each pixel Sn as a cumulative value W1' (to W4').
  • Noise components smaller than the energy threshold TH1, which defines the lower counting limit of the input energy, are not counted. Such noise components correspond to signals with energy values belonging to the non-countable region ERx in FIG. 7.
  • When a pulse whose peak value Vdec exceeds the minimum threshold th1 (Vdec ≥ th1) is input, the photon is counted: the comparators whose thresholds are at or below Vdec turn on and the corresponding counters count up.
  • If Vdec exceeds all of the thresholds, the outputs of all the comparators 541 to 544 turn on; that is, the count values W1' to W4' of all the counters 561 to 564 are counted up.
  • In particular, the output of the fourth-stage comparator 544 turns on and the count value W4' of the fourth-stage counter 564 is counted up.
  • In this case, the energy of the photon related to that input belongs to the region ER4 above the third, high-energy region ER3; such an event is typically a noise component, a disturbance, or the like, and is not suitable for imaging or counting.
  • the count value W 4 ′ can be used as information for estimating or excluding photons that have caused a superposition phenomenon or simultaneously incident photons.
  • The numbers W1 to W4 of X-ray photons belonging to the first to fourth energy regions ER1 to ER4 are then obtained from the actual count values W1' to W4' by calculation (subtraction).
  • With this scheme, a circuit for deciding, from the combination of on and off states of the outputs of the comparators 541 to 544, which of the energy regions ER1 to ER4 the current event (the incidence of an X-ray photon) belongs to becomes unnecessary. This simplifies the circuit configuration mounted in the data counting circuit 51n of the detector 22.
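  • The following minimal numpy sketch illustrates the cumulative counting and the subtraction described above; the pulse peak values and threshold voltages are arbitrary illustration data, and the specific subtraction pattern is one natural reading of the text.

```python
import numpy as np

def cumulative_counts(pulse_peaks, thresholds):
    """Count, for each threshold th_i, the pulses whose peak value Vdec
    satisfies Vdec >= th_i, mimicking the cumulative counters W1'..W4'."""
    pulse_peaks = np.asarray(pulse_peaks, dtype=float)
    return np.array([(pulse_peaks >= th).sum() for th in thresholds])

def band_counts(w_cum):
    """Per-band counts obtained by subtracting neighbouring cumulative counts
    (the highest band is kept as is); one natural reading of the subtraction
    described above."""
    w_cum = np.asarray(w_cum)
    return np.append(w_cum[:-1] - w_cum[1:], w_cum[-1])

# Example with arbitrary analog thresholds th1 < th2 < th3 < th4
th = [0.1, 0.3, 0.6, 0.9]
peaks = np.random.default_rng(0).uniform(0.0, 1.2, size=1000)
w_prime = cumulative_counts(peaks, th)   # W1' >= W2' >= W3' >= W4'
w = band_counts(w_prime)                 # counts falling in ER1..ER4
```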
  • Start and stop signals are supplied to the counters 561 to 564 described above from the controller of the console 17 via a start/stop terminal T2. Counting over a fixed time is thus managed from the outside, using a reset circuit included in each counter.
  • the interface 31 receives these count values and stores them in a storage unit to be described later.
  • The count values of the respective pixels Sn for panoramic imaging and CT imaging (which share some pixels) are formatted in a predetermined order and output serially, so that frame data for panoramic imaging and for CT imaging are collected at every sampling time. By specifying addresses within the frame data, frame data for panoramic imaging only and frame data for CT imaging only can therefore each be obtained.
  • The data counting circuits 51n corresponding to the N pixels Sn described above are constructed integrally with the semiconductor cells C as a CMOS ASIC (Application Specific Integrated Circuit).
  • the data counting circuit 51 n may be configured as a circuit or device separate from the group of semiconductor cells C.
  • Alternatively, each of the plurality of detection modules B1 to Bm may be composed of a scintillator array, in which a plurality of scintillators processed into columnar shapes are bundled, coupled to a silicon photomultiplier that receives the light from the scintillators.
  • In the silicon photomultiplier, a plurality of avalanche photodiodes are mounted on the light-receiving surface, and for each rectangular region of predetermined size corresponding to a cell on the light-receiving surface, the avalanche photodiodes belonging to that region are electrically connected through quenching elements.
  • The material of the scintillator may be LFS (lutetium silicate), GAGG:Ce (cerium-doped gadolinium aluminum gallium garnet), LuAG:Pr (praseodymium-doped lutetium aluminum garnet), or another material having a decay time, light output, and specific gravity comparable to LuAG:Pr.
  • the console 17 includes an interface (I / F) 31 that performs input and output of signals, a controller 33 that is communicably connected to the interface 31 via a bus 32, and a first storage unit 34, a data processor 35, a display unit 36, an input unit 37, a calibration calculator 38, a second storage unit 39, ROMs 40A to 40D, and a threshold value assigner 41.
  • the controller 33 controls driving of the X-ray imaging apparatus 1 in accordance with a program given in advance to the ROM 40A. This control includes sending a command value to the high voltage generator 42 that supplies a high voltage to the X-ray tube 21 and a drive command to the calibration calculator 38.
  • the first storage unit 34 stores frame data and image data that are count values sent from the detector 22 via the interface 31.
  • the data processor 35 operates based on a program given in advance to the ROM 40B under the control of the controller 33. By the operation, the data processor 35 processes the frame data stored in the first storage unit 34 by a desired CT reconstruction method to perform a CT image reconstruction process. Further, the data processor 35 performs a tomosynthesis method based on a known calculation method called “shift and add” on the frame data stored in the first storage unit 34 by the operation. Thereby, a CT image and a panoramic image of the jaw JW of the subject P are obtained.
  • the display unit 36 is responsible for displaying an image to be created, information indicating the operation status of the apparatus, and operator operation information given via the input unit 37.
  • the input device 37 is used for an operator to give information necessary for photographing to the apparatus.
  • Under the control of the controller 33 and operating under a program stored in advance in the ROM 40C, the calibration calculator 38 calibrates, for each pixel Sn and for each energy discrimination circuit in the data counting circuit, the digital threshold used for X-ray energy discrimination.
  • At the time of imaging, the threshold assigner 41 reads the digital thresholds stored in the second storage unit 39 for each pixel and for each discrimination circuit, and sends them as command values to the detector 22 via the interface 31. To execute this process, the threshold assigner 41 runs a program stored in advance in the ROM 40D.
  • the controller 33, the data processor 35, the calibration calculator 38, and the threshold value assigner 41 are all provided with a CPU (central processing unit) that operates according to a given program. Those programs are stored in advance in each of the ROMs 40A to 40D.
  • In response to an operator command from the input device 37, the data processor 35 reads the count values stored in the first storage unit 34 and uses them to execute the commanded processes, such as image processing, substance identification, and measurement.
  • The image processing includes, for example, generation of a panoramic image of a cross section of the dentition based on the tomosynthesis method for panoramic imaging, and generation of tomographic images based on a desired reconstruction method for CT imaging.
  • the concept of substance identification includes identification (specification) of the types of structures (substances) constituting the jaw and the state of the structures using beam hardening information. This substance identification process is one of the features of the present application.
  • the substance identification referred to in the present embodiment identifies at least the type of substance at the position of interest (part) designated by a user such as a dentist in the jaw of the subject P scanned by X-rays.
  • Here, a substance means a structure having a thickness in the X-ray irradiation direction (tongue, cancellous bone, enamel, cortical bone, metal such as fillings or crowns, etc.).
  • a scan for panoramic imaging can be performed by rotating both the X-ray tube 21 and the detector 22 along a circular orbit, and a scan for CT imaging may be a half scan.
  • The range over which the pair of the X-ray tube 21 and the detector 22 is actually rotated is, for example, a certain angular range from the initial position, as shown in FIG. 11; this range is determined by the angular range of a half scan for CT imaging and the angular range necessary for panoramic imaging (for example, 210°).
  • These two scans, panoramic and CT, are executed in an integrated manner while the X-ray tube 21 and the detector 22 rotate only once around the jaw JW of the subject P over this angular range. That is, because panoramic imaging and CT imaging are performed simultaneously, the scan time can be shortened, the X-ray exposure reduced, and the operator's labor decreased.
  • CT scanning may be full scan instead of half scan.
  • panoramic imaging and CT imaging may be performed as separate scans.
  • To scan, the controller 33 rotates the rotating unit 16, and with it the X-ray tube 21 and the detector 22, around the jaw JW.
  • During this rotation, continuous X-rays are emitted radially from the X-ray focal point FP of the X-ray tube 21.
  • This X-ray is narrowed by the slit 23 so as to follow the shape of the collection window “22A + 22B” (see FIG. 4) of the detector 22.
  • the narrowed X-rays pass through various substances present in the jaw JW and enter the collection window of the detector 22.
  • the detector 22 samples incident X-rays at a constant frame rate (for example, 75 fps).
  • digital frame data corresponding to the number of X-ray photons incident on each pixel is output from the rotating detector at regular intervals for each energy region ER 1 ( ⁇ ER 3 ). These frame data are sequentially or collectively transferred to the first storage unit 34 of the console 17.
  • The processing includes the following steps: creation of a panoramic image (step S31); creation of a panoramic 3D image (step S32); removal/reduction of obstacle shadows (step S33); setting of the X-ray directions (step S34); CT image reconstruction (step S35); 3D display of the CT image (step S36); measurement of substance thickness (step S37); calculation of the absorption coefficients of the substances (step S38); creation of a scatter diagram and identification of the substance type (step S39); and display of the identification result (step S40).
  • step group from one step S31 to S34 may be executed in this order, and the other step group from step S35 to S36 may be executed in this order. Either group may be processed first.
  • the data processor 35 executes the processes of steps S37 to S40 when the processing of both groups is completed.
  • The frame data read out here are the frames collected at regular time intervals (for example, 75 fps) from the modules of the left vertical column shown in FIG. 4, which form the collection window 22B.
  • In this embodiment, a panoramic image is created for each of the three energy regions ER1 to ER3 shown in FIG. 7, but a panoramic image may be reconstructed for any two or more energy regions ERn.
  • The tomosynthesis processing accompanying this reconstruction is executed in accordance with International Publication No. WO 2012/008492. Since a calibrated gain curve of the imaging space IS is held in advance, a pseudo three-dimensional panoramic image PIfocus reflecting the actual position of the dentition is generated automatically by shifting the frame data by the shift amounts along this gain curve and adding the pixel values to each other (see FIG. 10). This panoramic image PIfocus is called a 3D autofocus image.
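  • A minimal sketch of the "shift and add" operation is shown below, assuming that each frame is a small 2-D array and that the gain curve supplies one non-negative integer horizontal shift per frame; the actual shift amounts and frame layout of the apparatus are not reproduced here.

```python
import numpy as np

def shift_and_add(frames, shifts):
    """Minimal shift-and-add: each 2-D frame is shifted horizontally by the
    integer amount taken from the gain curve for the desired tomographic
    plane, then the frames are summed.

    frames : array (n_frames, height, width)
    shifts : non-negative integer shift per frame (from the gain curve)
    """
    frames = np.asarray(frames, dtype=float)
    n, h, w = frames.shape
    out_w = w + int(max(shifts)) + 1
    image = np.zeros((h, out_w))
    for frame, s in zip(frames, shifts):
        s = int(s)
        image[:, s:s + w] += frame       # accumulate at the shifted position
    return image

# Changing the shift amounts (i.e. using a different gain curve) brings a
# different tomographic plane into focus, which is the basis of the
# 3D autofocus image described above.
```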
  • Next, for each energy region ERn, the data processor 35 reads correction data, prepared in advance on the basis of the "tough bone" reference, from the first storage unit 34 and corrects the panoramic image PIfocus of that energy region using this correction data.
  • standardized panoramic 3D images (3D autofocus images) are generated for the three energy regions ER n (see FIGS. 10 and 11).
  • Step S33 The data processor 35 executes the removal / reduction processing of the obstacle shadow on the panorama 3D image of each of the three energy regions ER n generated in this way. As a result, the obtained panorama 3D image is displayed on the display 36.
  • the cervical vertebra CS is an object that obstructs imaging.
  • the shadow of the cervical spine is reflected in the panoramic image. This reflection is inevitable in a specific X-ray irradiation angle range. Therefore, when this dentition (gum) is to be imaged, it is desirable to remove / reduce the obstruction shadow caused by the cervical spine from the created panoramic 3D image. This removal / reduction method will be described later in detail as a separate item.
  • the data processor 35 designates the position of interest P int on the currently displayed panoramic 3D image through interactive information exchange with an operator (such as a dentist). For example, it is assumed that the operator designates the position P int while observing a panoramic 3D image and wants to know the type of the substance (structure of the jaw). This designation is also performed, for example, when it is desired to know the type of metal object used as a tooth filling.
  • The data processor 35 then designates L − 1 additional positions Padd in the vicinity of the position of interest Pint, so that a total of L positions (Pint and Padd) are set. These L points are required to solve the simultaneous equations, based on the LCN combinations described later, used when calculating the X-ray absorption coefficients of the substances having thickness.
  • Here, N is the number of substances having thickness that exist along the X-ray irradiation directions corresponding to the position of interest Pint and the L − 1 nearby positions Padd.
  • "In the vicinity" means within the extent of a substance considered to be the same as the substance indicated by the position of interest.
  • In block B22, the data processor 35 reads out the analysis data, stored in advance in the first storage unit 34, that describes the geometric structure of the imaging space IS, and based on this analysis data calculates the three-dimensional X-ray directions (virtual three-dimensional X-ray beam directions) projected onto the total of L positions Pint and Padd.
  • Each three-dimensional X-ray direction is calculated from the direction obtained by projecting, onto the XY plane, the line from the X-ray focal point FP of the X-ray tube 21 through each of the positions Pint and Padd on the panoramic 3D image, together with the inclination angle of that line with respect to the XY plane. Information on the direction and angle is obtained from the analysis data and used for the calculation.
  • the data indicating the three-dimensional direction of the X-rays projected on the L positions (points) thus obtained is temporarily stored in the first storage unit 34.
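  • As an illustration, the sketch below computes such a three-dimensional beam direction (unit vector, azimuth in the XY plane, and inclination with respect to the XY plane) from an assumed focal-spot position and an assumed position of interest; in the apparatus the real values come from the analysis data of block B22.

```python
import numpy as np

def xray_direction(focal_spot_xyz, point_xyz):
    """Sketch: direction of the (virtual) X-ray beam from the focal spot FP
    to a designated position, expressed as a unit vector, its azimuth in the
    XY plane and its inclination angle with respect to the XY plane.
    Coordinates follow the XYZ system of FIG. 1; the values are assumptions."""
    fp = np.asarray(focal_spot_xyz, dtype=float)
    p = np.asarray(point_xyz, dtype=float)
    d = p - fp
    u = d / np.linalg.norm(d)                 # unit direction vector
    azimuth = np.arctan2(u[1], u[0])          # direction projected onto the XY plane
    inclination = np.arcsin(u[2])             # tilt with respect to the XY plane
    return u, np.degrees(azimuth), np.degrees(inclination)

# Example (hypothetical geometry): focal spot at (0, -300, 0) mm,
# position of interest at (10, 0, 25) mm
# u, az, inc = xray_direction((0, -300, 0), (10, 0, 25))
```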
  • the data processor 35 reads the CT image frame data stored in the first storage unit 34.
  • These frame data are the frames collected at regular time intervals (for example, 75 fps) from the modules forming the collection window 22A shown in FIG. 4.
  • the data processor 35 adds the frame data of the three energy regions ERn collected for each X-ray irradiation angle, that is, each projection angle, to each other to generate projection data.
  • a plurality of sets of projection data generated for a plurality of projection angles for these half scans are subjected to CT reconstruction based on, for example, a successive approximation method. Thereby, a three-dimensional CT image of the jaw JW is generated.
  • Step S1 (creation of sinogram):
  • a sinogram is created based on the value of projection data P init from 0 to 360 degrees (see FIG. 15).
  • the horizontal axis of the sinogram is the detector position, and the vertical axis is the projection angle.
  • The sinogram values are obtained as the logarithm of the ratio between the number of X-rays actually emitted and the value measured by the detector.
  • Step S2 (detection of the metal part): Next, in step S2, the values of the projection data Pinit are scanned along the horizontal axis of the sinogram, and portions whose values change greatly from the values of nearby projection data are detected (see FIG. 16). At this time, the standard deviation with respect to the average value of the projection data may be used as a measure of whether the attenuation is large enough to indicate a metal part, or information on the sudden rise and fall of the projection data approaching the metal part, obtained by analysis of variance over a one-dimensional ROI set in the vicinity, may be used.
  • Assuming that the positions of the metal-part projection data can be completely specified in step S2, the processes from step S3 onward are executed. If the metal portion cannot be identified from the one-dimensional ROI, processing corresponding to step S2 may be performed using the (two-dimensional) sinogram. Even then, if the metal part cannot be specified in all projection data, the fact that the metal-part region in the projection data follows a sine curve can be used to estimate the parts for which extraction failed. Furthermore, even if not all metal parts can be extracted, estimation is possible in the processing from step S3 onward.
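  • A minimal sketch of one possible detection rule of the kind suggested above (flagging projection values that deviate from a local mean by more than a few local standard deviations) is given below; the window size and the factor k are assumptions, not values from this description.

```python
import numpy as np

def detect_metal(sinogram, window=15, k=3.0):
    """Flag metal candidates in a sinogram (rows: projection angles,
    columns: detector positions) by comparing each value with the mean of a
    1-D neighbourhood along the detector axis; a value exceeding the local
    mean by more than k local standard deviations is marked.  Metal attenuates
    strongly, so it produces unusually large log-ratio projection values."""
    sino = np.asarray(sinogram, dtype=float)
    mask = np.zeros_like(sino, dtype=bool)
    half = window // 2
    for r in range(sino.shape[0]):                 # scan each projection angle
        row = sino[r]
        for c in range(len(row)):
            lo, hi = max(0, c - half), min(len(row), c + half + 1)
            neigh = np.delete(row[lo:hi], c - lo)  # neighbourhood without the centre
            mu, sd = neigh.mean(), neigh.std()
            if sd > 0 and row[c] - mu > k * sd:    # abrupt extra attenuation
                mask[r, c] = True
    return mask
```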
  • Step S3 (image reconstruction):
  • The projection data of the metal part extracted in step S2 are set to 0, 1 is assigned to the other projection data regions, and this operation is performed for all projection angles.
  • This projection data set is reconstructed into an image by simple back projection calculation (see FIG. 17).
  • If there are projection angles at which the metal part could not be extracted, the back projection calculation is performed using only the projection data excluding those angles.
  • In the image created in this way, only the metal portion is 0, and the other regions have values other than 0.
  • Next, an appropriate value, for example 1 (1/cm), is substituted for the 0-valued pixels of this simple back projection image, and the other pixel values are set to 0.
  • The resulting data functions as the basic image Imetal of the metal part. Using this data, it is clear which positions of which projection data correspond to the metal part.
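  • The sketch below illustrates the simple back projection of the 0/1-filled projection data and the construction of the basic image Imetal; the parallel-beam geometry and the thresholding used to locate the low-valued (metal) region are simplifying assumptions.

```python
import numpy as np
from scipy.ndimage import rotate

def simple_backprojection(sinogram, angles_deg):
    """Unfiltered back projection of a sinogram (n_angles, n_det),
    assuming parallel-beam geometry for simplicity."""
    n_det = sinogram.shape[1]
    image = np.zeros((n_det, n_det))
    for proj, ang in zip(sinogram, angles_deg):
        smear = np.tile(proj, (n_det, 1))                     # smear the profile
        image += rotate(smear, ang, reshape=False, order=1)   # along its angle
    return image / len(angles_deg)

def metal_basic_image(metal_mask, angles_deg, threshold=0.5):
    """Step-S3 style basic image: back-project a sinogram that is 0 on the
    metal part and 1 elsewhere, then set the low-valued (metal) region to an
    assumed 1 (1/cm) and everything else to 0. threshold is an assumption."""
    sino01 = np.where(metal_mask, 0.0, 1.0)
    bp = simple_backprojection(sino01, angles_deg)
    return np.where(bp < threshold * bp.max(), 1.0, 0.0)      # I_metal
```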
  • Step S4 (calculation of projection data): Next, in step S4, projection data Pmetal of only the metal portion is calculated from the basic image Imetal created in step S3 (see FIG. 18). Further, the projection data of Pinit other than the metal portion (where Pmetal = 0) is called Porg (see FIG. 19).
  • Step S5 (image reconstruction): Next, in step S5, an image is reconstructed from the projection data Porg, and the image in the region where Imetal = 0 is referred to as Irecon.
  • Step S7 (projection processing): Further, in step S7, the added image Iall is subjected to projection processing to create projection data Pall (see FIG. 22).
  • Step S9 (image addition and projection calculation):
  • Step S10 (difference calculation and correction): Next, in step S10, as in step S8, the projection data Pall(1) and Porg are compared (differenced) pixel by pixel and a correction is computed; that is, the image Irecon(1) is corrected to create Irecon(n+1).
  • In this way, the metal part is recognized in the projection data, and the recognized metal-part projection data is filled with a fixed value and reconstructed so that metal artifacts originating from the metal part are removed or suppressed.
  • A CT image is thereby obtained in which the three-dimensional shape of the metal part can be measured more accurately.
  • Metal artifacts are reduced in the image created when the convergence criterion is satisfied (or, in some cases, when the maximum number of iterations is exceeded), and a preset linear attenuation coefficient is assigned to the metal part.
  • Alternatively, the metal portion may be given a value estimated from correctly measured projection data.
  • The metal artifact reduction described above may also be performed within an iterative (successive approximation) image reconstruction, simultaneously with beam-hardening correction and scattered radiation removal of the CT image.
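  • A hedged skeleton of the iterative loop of steps S7 to S10 is sketched below; the parallel-beam forward projection, the back projection and the damped additive update are simplifying assumptions, since the exact update formula is not given in this description.

```python
import numpy as np
from scipy.ndimage import rotate

def _project(image, angles_deg):
    # parallel-beam forward projection (radon-like), simplifying assumption
    return np.stack([rotate(image, -a, reshape=False, order=1).sum(axis=0)
                     for a in angles_deg])

def _backproject(sino, angles_deg):
    n = sino.shape[1]
    img = np.zeros((n, n))
    for p, a in zip(sino, angles_deg):
        img += rotate(np.tile(p, (n, 1)), a, reshape=False, order=1)
    return img / len(angles_deg)

def iterative_metal_correction(i_recon, i_metal, p_org, angles_deg,
                               mu_metal=1.0, n_iter=10, tol=1e-3, step=0.1):
    """Skeleton of steps S7-S10: fill the metal part with a preset linear
    attenuation coefficient, re-project the summed image, compare it pixel by
    pixel with the measured non-metal projections P_org, and correct the
    non-metal image until the difference converges."""
    recon = i_recon.astype(float).copy()
    outside = (i_metal == 0)
    for _ in range(n_iter):
        i_all = recon + mu_metal * i_metal           # step S9: image addition
        p_all = _project(i_all, angles_deg)          # step S9: projection
        diff = p_org - p_all                         # step S10: difference
        if np.abs(diff).mean() < tol:                # convergence criterion
            break
        recon[outside] += step * _backproject(diff, angles_deg)[outside]
    return recon
```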
  • step S36 the data processor 35 three-dimensionally displays the generated three-dimensional CT image of the jaw JW on the display 36.
  • From this display, an operator such as a dentist can grasp the form of the jaw JW three-dimensionally, and can estimate how many substances having thickness exist in each X-ray irradiation direction and which of those substances are of the same type.
  • The data processor 35 then measures, automatically or interactively with the operator, the thickness of each substance existing along each of the L X-ray directions that pass through the L points Pint and Padd.
  • This thickness is measured, for example, by the operator specifying, with point ROIs on the 3D CT image, the positions spanned by each substance along each X-ray direction, the specified positions being converted using the analysis data (block B22).
  • In this way, the thicknesses of N (L ≥ N) substances in each of the plurality of X-ray directions of interest are measured.
  • For example, suppose that the total number of substances observed along one X-ray direction is 3, that their thicknesses are t1, t2, t3, and that their X-ray absorption coefficients in energy region ERi are μi1, μi2, μi3. Then:
  • Logarithm of the pixel value (measured value) at the position on the 3D panoramic image in energy region ER1 = μ11*t1 + μ12*t2 + μ13*t3 + k1 (k1: constant)
  • Logarithm of the pixel value (measured value) at the position on the 3D panoramic image in energy region ER2 = μ21*t1 + μ22*t2 + μ23*t3 + k2 (k2: constant)
  • Logarithm of the pixel value (measured value) at the position on the 3D panoramic image in energy region ER3 = μ31*t1 + μ32*t2 + μ33*t3 + k3 (k3: constant)
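  • Under the model above, the absorption coefficients μmj and the constants km can be obtained, for example, by a least-squares solve over the L positions for each energy band; the sketch below assumes the log pixel values and the thicknesses from step S37 are already arranged as arrays.

```python
import numpy as np

def solve_absorption_coefficients(log_pixel_values, thicknesses):
    """Sketch of step S38 under simplifying assumptions.

    log_pixel_values : array (M, L)  log of the measured pixel value at each
                       of the L positions, for each of the M energy bands
    thicknesses      : array (L, N)  thickness t_j of each of the N substances
                       along the X-ray direction through each position
                       (from the CT image, step S37)

    For each energy band m the model above reads
        log_value(position) = sum_j mu_mj * t_j(position) + k_m,
    so mu_mj and k_m follow from a least-squares solve (L >= N + 1)."""
    y = np.asarray(log_pixel_values, dtype=float)
    t = np.asarray(thicknesses, dtype=float)
    L, N = t.shape
    A = np.hstack([t, np.ones((L, 1))])          # design matrix [t_1..t_N, 1]
    mu = np.zeros((y.shape[0], N))
    k = np.zeros(y.shape[0])
    for m, ym in enumerate(y):
        sol, *_ = np.linalg.lstsq(A, ym, rcond=None)
        mu[m], k[m] = sol[:N], sol[N]
        # mu[m, j]: estimated absorption coefficient of substance j in band m
        # k[m]:     constant term of that band
    return mu, k
```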
  • The data processor 35 creates a two-dimensional scatter diagram for each substance from the solutions for the X-ray absorption coefficients μMj obtained in step S38.
  • This two-dimensional scatter diagram includes a relative attenuation index RAI (Relative Attenuation Index) shown on the vertical axis forming one dimension and a quality change index SDI (Spectrum Deformation Index) shown on the horizontal axis forming another dimension.
  • RAI Relative Attenuation Index
  • SDI Spectrum Deformation Index
  • The data processor 35 reads a reference scatter diagram (see FIG. 25) prepared in advance from the database of the first storage unit 34 (block B23) and compares it with the scatter diagram generated in step S39.
  • the map position on the scatter diagram differs depending on the types of the metals, and thus different types of metals can be distinguished and identified. It should be noted that in the discrimination between metals, there may be a difference in discrimination accuracy depending on the X-ray transmission characteristics of the metals.
  • FIG. 26 shows another scatter diagram for reference. According to this, it is also possible to provide information for diagnosing the presence or absence of the influence of a medical condition such as whether a substance such as bone has osteoporosis or periodontal disease.
  • In this way, site-specific information can be obtained, such as the type of metal, indications of osteoporosis, or cancellous bone that has been resorbed due to periodontal disease.
  • Although there are only C(L, N) combinations of data for drawing and identifying the substances of each part, estimation is possible even from a single point.
  • In medical CT, a substance is identified only by the RAI on the vertical axis referenced to water (the so-called CT value). A photon counting detector can add an SDI axis to form a two-dimensional scatter plot, which has the great advantage that differences between substances can be emphasized and expressed; identification accuracy is thereby improved. A simple nearest-reference sketch of such a comparison is given below.
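As an illustration only, the comparison with a reference scatter diagram could be reduced to nearest-neighbour matching in the (SDI, RAI) plane, as sketched below. The reference coordinates are invented placeholders; the patent's actual reference values and the exact definitions of RAI and SDI are not reproduced here.

```python
# Sketch: identifying each measured substance by comparing its (SDI, RAI) point
# with a reference scatter diagram, simplified here to nearest-neighbour matching.
# The reference values below are made-up placeholders, not data from the patent.
import numpy as np

REFERENCE = {                     # hypothetical (SDI, RAI) reference points
    "enamel":        (0.20, 1.60),
    "dentine":       (0.25, 1.20),
    "cortical bone": (0.30, 1.00),
    "gold alloy":    (0.80, 3.50),
}

def identify(sdi, rai):
    """Return the reference substance whose (SDI, RAI) point is closest."""
    names = list(REFERENCE)
    pts = np.array([REFERENCE[n] for n in names])
    d = np.hypot(pts[:, 0] - sdi, pts[:, 1] - rai)
    return names[int(np.argmin(d))]

print(identify(0.27, 1.05))       # -> "cortical bone" for this made-up point
```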
  • g(i) = f_1(i)*h_1(i) + f_2(i)*h_2(i) + … + f_j(i)*h_j(i) + … + f_n(i)*h_n(i)
  • a specific layer g_j(i) is imaged by making h_j(i) of that specific layer a delta function and making the others as uniform functions as possible.
  • g_2(i) = f_1(i)*h_1(i) + f_2(i)   … (3)
  • g_1(i) = f_1(i) + f_2(i)*h_2(i)   … (4)
  • g_1(i) is an image focused on the dentition
  • g_2(i) is an image focused on the cervical vertebra
  • f_1(i) = (g_1(i) - g_2(i)*h_2(i)) / (1 - h_1(i)*h_2(i))   … (5)
  • Equation (5) means that an image of the dentition alone can be obtained by subtracting, from the image focused on the dentition, the image focused on the cervical spine convolved with the blur function at the dentition position, and then dividing by (1 - h_1*h_2), i.e., by one minus the superposition integral (convolution) of the blur functions corresponding to the two positions. A Fourier-domain sketch of this calculation is given below.
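A minimal sketch of equation (5), assuming the convolutions are evaluated in the Fourier domain, where they become element-wise products. The small epsilon guarding the division is an added assumption for numerical stability (the denominator can approach zero, e.g. at the zero frequency when both kernels are normalized).

```python
# Sketch of equation (5): recovering the dentition-only layer f1 from the two
# focused images g1, g2 and the blur kernels h1, h2, where '*' is convolution.
import numpy as np

def separate_dentition(g1, g2, h1, h2, eps=1e-6):
    shape = g1.shape
    G1, G2 = np.fft.fft2(g1), np.fft.fft2(g2)
    H1 = np.fft.fft2(h1, s=shape)                 # zero-pad kernels to image size
    H2 = np.fft.fft2(h2, s=shape)
    F1 = (G1 - G2 * H2) / (1.0 - H1 * H2 + eps)   # equation (5) in frequency space
    return np.real(np.fft.ifft2(F1))
```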
  • frame data relating to panoramic imaging of the jaw of the subject P is read (FIG. 27, step S51).
  • This frame data is stored in the first storage unit 34.
  • this data collection is performed so that the reference tomographic plane (cross section) preset in the dentition TR is focused.
  • Symbol CS shown in FIG. 28A indicates the cervical spine.
  • Next, the data processor 35 reconstructs a panoramic image focused on the reference tomographic plane (see FIG. 28A) of the dentition TR using the frame data stored in the first storage unit 34, and displays it (step S52). An example of this panoramic image is shown in FIG. The reconstructed panoramic image data is stored in the first storage unit 34.
  • In step S52, specifically, the optimally focused image of the reference tomographic plane is reconstructed by the tomosynthesis method, that is, by shift-and-add processing.
  • the second ROM 40B stores gain curve information for optimally focusing a reference tomographic plane of a predetermined dentition TR. An example of this gain curve is shown in FIG.
  • The gain curve is a curve indicating the amount (the differential value of the curve) by which each piece of strip-shaped frame data (frame image) is shifted relative to the others in the shift-and-add processing.
  • The shift amount given by this gain curve corresponds to the blur amount used when creating the blur function at the dentition position. The data processor 35 therefore models the blur as a Gaussian function, matches its standard deviation to this shift amount, and convolves it with the frame data as a shift-variant blur function that changes with each angular position. The data processor 35 then adds the frame data to one another at each position (mutual addition of pixel values) to create a panoramic image of the reference tomographic plane along the tooth row TR (a simplified shift-and-add sketch follows).
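The following is a simplified sketch of the shift-and-add step, in which each frame is shifted along the scan direction by an amount taken from the gain curve and the shifted frames are averaged. Wrap-around from `np.roll` and the shift-variant blur described above are ignored here for brevity; the gain curve is assumed to already give a per-frame shift in pixels.

```python
# Sketch of shift-and-add tomosynthesis: each strip-shaped frame is shifted
# along the scan direction by an amount taken from the gain curve, then the
# shifted frames are summed and averaged to focus on the chosen tomographic plane.
import numpy as np

def shift_and_add(frames, gain_curve):
    """frames: array (n_frames, height, width);
    gain_curve: per-frame shift in pixels for the chosen tomographic plane."""
    acc = np.zeros(frames[0].shape, dtype=float)
    for frame, shift in zip(frames, np.round(gain_curve).astype(int)):
        acc += np.roll(frame, shift, axis=1)   # shift along the scan direction
    return acc / len(frames)                   # focused panoramic image
```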
  • Next, the data processor 35 reconstructs and displays a panoramic image focused on a tomographic plane passing through the cervical vertebra CS (see FIG. 28A), using the frame data stored in the first storage unit 34 (step S53). An example of this panoramic image is shown in FIG. The reconstructed panoramic image data is also stored in the first storage unit 34.
  • To focus the image on the cervical vertebra CS, a gain curve is created assuming a trajectory T_CS obtained by folding back the trajectory T_TR that focuses on the dentition surface.
  • a gain curve is created depending on the position of the wire (lead phantom) of the calibration phantom.
  • the gain curve is created in the form of inverting the above-described gain curve (see FIG. 30).
  • a shift-and-add operation is performed. In this case, generally, in the reconstruction of the dentition TR, the shift amount in the vicinity of the front teeth is small, but in the reconstruction of the cervical vertebra CS, the collected frame data is moved greatly to perform the addition operation.
  • The frame data used for panoramic image reconstruction in steps S52 and S53 may be frame data with energy belonging to all of the energy regions ER_1 to ER_3, or frame data with energy belonging to only some of the energy regions. Even when frame data covering all energy regions are used, a weight can be applied to each region instead of a simple average (a short sketch follows).
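A trivial sketch of such a weighted combination of the energy-region frame data; the weights are arbitrary examples, not values from the patent.

```python
# Sketch: combining frame data from the three energy regions ER1..ER3 with
# per-region weights instead of a simple average.
import numpy as np

def combine_energy_regions(frames_er, weights=(0.2, 0.3, 0.5)):
    """frames_er: sequence of three arrays (same shape), one per energy region."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()                                  # normalise the weights
    return sum(wi * f for wi, f in zip(w, frames_er))
```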
  • the frame data can be detected in a state where energy is discriminated for each pixel. This makes it possible to use transmission data (frame data) that sensitively reflects parameters such as a change in radiation quality when X-ray photons pass through the jaw material. For this reason, a panoramic image in which a specific substance in the jaw is emphasized can be acquired, and the obstacle shadow removal method can be performed based on the panoramic image.
  • FIG. 31 shows the best focus image I_TR along the reference tomographic plane of the dentition TR, with vertical lines. These vertical lines indicate the positions of specific angles measured with the calibration phantom. FIG. 32 shows the best focus image I_CS along the tomographic plane of the cervical vertebra CS, also with vertical lines indicating the positions of specific angles measured with the calibration phantom.
  • In order to remove an obstacle shadow such as that of the cervical spine, FIGS. 31 and 32 must be scaled to the same image size.
  • the distance between the vertical lines in FIG. 31 and the distance between the vertical lines in FIG. 32 are different for each position, and the size of FIG. 32 cannot be matched to the size of FIG. 31 using one reduction ratio. Therefore, the scales of the two images are adjusted using the frame defined by the vertical lines.
  • Each of the regions defined by the 15 vertical lines of the best focus image I_CS of the cervical vertebra CS is reduced so as to match the corresponding one of the rectangular regions defined by the 15 vertical lines of the optimum focus image I_TR of the dentition TR. The sizes of the two images I_TR and I_CS thereby become the same (a strip-wise rescaling sketch is given below).
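A strip-wise rescaling of I_CS to the scale of I_TR could be sketched as follows, where the x-positions of the calibration vertical lines in both images are assumed to be known. Output widths may differ from the target by a pixel because of rounding inside `zoom`; this is only a sketch.

```python
# Sketch: matching the scale of the cervical-vertebra image I_CS to the
# dentition image I_TR strip by strip, using the x-positions of the
# calibration vertical lines in each image.
import numpy as np
from scipy.ndimage import zoom

def rescale_by_strips(i_cs, lines_cs, lines_tr):
    """i_cs: 2-D image; lines_cs / lines_tr: sorted x-positions (same count)
    of the vertical lines in I_CS and I_TR respectively."""
    strips = []
    for (a0, a1), (b0, b1) in zip(zip(lines_cs[:-1], lines_cs[1:]),
                                  zip(lines_tr[:-1], lines_tr[1:])):
        strip = i_cs[:, a0:a1]
        factor = (b1 - b0) / float(a1 - a0)       # width ratio for this strip
        strips.append(zoom(strip, (1.0, factor), order=1))
    return np.concatenate(strips, axis=1)         # I_CS resampled to I_TR's scale
```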
  • the data processor 35 executes a blurring process of the optimally focused image I CS of the cervical vertebra CS (step S55).
  • For this blurring process, a gain curve is used. More specifically, a Gaussian function whose full width at half maximum (FWHM) is given by the gain value at each angular position of the gain curve is created and convolved with the optimally focused image I_CS of the cervical vertebra CS.
  • As a result, in the blurred image the dentition is greatly blurred, whereas the blur of the cervical vertebra becomes about the same as that of the cervical vertebra in the image focused on the dentition surface (a sketch of this FWHM-based blurring follows).
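A sketch of the shift-variant blur: the gain value at each angular position (here identified with each image column) is interpreted as the FWHM of a Gaussian and converted to a standard deviation via sigma = FWHM / (2·sqrt(2·ln 2)); each output column is then a Gaussian-weighted mixture of neighbouring columns. This is a simplification of the processing described above, not the exact implementation.

```python
# Sketch of a shift-variant horizontal Gaussian blur whose width is taken from
# the gain curve (one FWHM value per angular position / image column).
import numpy as np

FWHM_TO_SIGMA = 1.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # ~0.4247

def blur_by_gain_curve(image, fwhm_per_column):
    h, w = image.shape
    xs = np.arange(w)
    img = image.astype(float)
    out = np.empty((h, w), dtype=float)
    for x, fwhm in enumerate(fwhm_per_column):
        sigma = max(fwhm * FWHM_TO_SIGMA, 1e-3)
        weights = np.exp(-0.5 * ((xs - x) / sigma) ** 2)
        weights /= weights.sum()
        out[:, x] = img @ weights            # weighted sum of neighbouring columns
    return out
```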
  • Step S55 may be omitted depending on the situation.
  • Next, the data processor 35 takes the image difference, pixel by pixel, between the optimal focus image I_TR of the dentition TR and the optimal focus image I_CS' of the cervical vertebra CS that has been reduced and blurred as described above, by subtracting or dividing the pixel values (step S56).
  • When the pixel values are expressed as natural logarithms, subtraction "I_TR - I_CS'" is performed between pixel values.
  • When the natural logarithm of the pixel values is not taken, division "I_TR / I_CS'" is performed between the pixel values.
  • In step S56, density unevenness depending on the blur functions occurs in the pixel values. Therefore, the blur function on the cervical vertebra focal plane and the blur function on the dentition surface are convolved (superposition integration), the result is subtracted from 1, and each angular position is divided by this value; an image can thereby be obtained in which the influence of the cervical spine is reduced as much as possible and the dentition surface remains in focus. It is desirable to add this process to step S56 (a combined sketch is given below).
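Putting the two variants of step S56 and the optional density correction together, a minimal sketch might look like the following. `h1h2_per_column` stands for a precomputed superposition integral of the two blur functions at each angular position (an assumed input, one value per image column).

```python
# Sketch of step S56: removing the cervical-vertebra component either by
# subtraction of log-scaled pixel values or by division of raw pixel values,
# with an optional per-column normalisation by (1 - h1*h2).
import numpy as np

def remove_obstacle_shadow(i_tr, i_cs_blurred, use_log=True,
                           h1h2_per_column=None, eps=1e-6):
    if use_log:
        diff = np.log(i_tr + eps) - np.log(i_cs_blurred + eps)   # I_TR - I_CS'
    else:
        diff = i_tr / (i_cs_blurred + eps)                       # I_TR / I_CS'
    if h1h2_per_column is not None:
        diff = diff / (1.0 - h1h2_per_column + eps)              # density correction
    return diff
```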
  • FIG. 33 shows another example of the best focus image I_TR of the dentition TR before cervical vertebra removal.
  • In this image, a white shadow (obstacle shadow) of the cervical vertebra CS appears in its central portion, the vicinity of the front teeth is blurred, and the depiction capability is low.
  • In the optimally focused image I_TR_REV of the dentition TR after cervical vertebra removal, the cervical vertebra image is almost removed and the anterior teeth are depicted more clearly.
  • the hybrid X-ray imaging apparatus 1 can function as an all-in-one X-ray diagnostic apparatus with a dental X-ray imaging apparatus capable of material identification. Substance identification is possible even with CT imaging in a small field of view. Even when there is a metal prosthesis, the substance can be identified stably. This is because the thickness of the metal part can be measured more accurately from a CT image in which metal artifacts due to the metal part are removed or reduced. Since panoramic imaging and CT imaging can be performed simultaneously in a circular orbit in one scan, the amount of X-ray exposure to the patient is small.
  • the photon counting technique is used for the detector, and images from soft tissue to hard tissue can be visualized.
  • a successive approximation CT reconstruction technique is employed, and a CT image with a clear tissue boundary can be obtained with a small field detector.
  • the substance identification can be performed in combination with the panoramic image.
  • panoramic imaging and CT imaging can be performed simultaneously in one scan, it is not necessary to align the reconstructed panoramic image and CT image.
  • One hybrid X-ray imaging apparatus can thus play the two roles of a conventional panoramic imaging apparatus and a CT apparatus. That is, highly reliable diagnosis can be performed using only the hybrid X-ray imaging apparatus 1, treatment time can be shortened, treatment accuracy can be improved, and throughput can be improved. In addition, peri-implantitis can be observed, the positional relationship between soft tissue tumors and hard tissue can be observed, and the range of clinical application is dramatically expanded.
  • block B1 corresponds to the scanning means
  • step S31 of block B3 corresponds to the panoramic image generating means
  • step S35 corresponds to the CT image generating means
  • step S32 corresponds to the panoramic image display means
  • Step S34 corresponds to the position of interest designation means
  • steps S36 to S40 correspond to the substance identification means.
  • panoramic imaging and CT imaging are completed by one scan at the same time, but the X-ray diagnostic apparatus according to the present invention is not necessarily limited to this.
  • panoramic imaging and CT imaging can be separately executed twice, the reconstructed panoramic image and CT image can be aligned, and the same processing as described above can be performed.
  • the detector used in the CT imaging apparatus may be a conventional integral X-ray detector, and it is not always necessary to use a photon counting X-ray detector.
  • The X-ray imaging apparatus 1 described above may also be configured to take images with the patient lying on his or her back on a dental chair (supine position). It is sufficient to have an imaging system that rotates with the jaw of the subject positioned between the X-ray tube and the detector; the apparatus may also be configured to image the subject in a sitting or standing position. Such an imaging system may be attached to a fixed structure such as a wall or ceiling of a building or vehicle. Furthermore, such an imaging system may be configured as a portable unit and may perform imaging while placed on a patient's shoulder or behind an ordinary chair.
  • The X-ray imaging apparatus 1 described above may also include an imaging system in which the X-ray tube 21 and the detector 22 can be driven to rotate independently of each other while varying their distances from the same rotation center.
  • In the above embodiment, the detector 22 is a photon counting type detector; however, a so-called integral type detector, in which a scintillator and a photoelectric element are combined to accumulate electrical signals for a predetermined time and output frame data, may be used instead.
  • the obstacle shadow removal method of the present application can be applied to an X-ray imaging apparatus using a detector using a scintillator and a CCD (charge coupled device), to which the tomosynthesis method cannot be applied.
  • It is sufficient to separately prepare an image taken under a rotational trajectory of the X-ray tube and detector focused on the cross-section passing through the dentition and an image taken under a rotational trajectory of the X-ray tube and detector focused on the cross-section passing through the cervical spine, and to perform the above-described obstacle shadow removal method between these images.
  • In addition, the position, distance, and angle parameters of the X-ray tube, the 3D reference tomographic plane, and the detector, which define the structure of the imaging space, can be calibrated easily, accurately, and analytically by measurement using a phantom and prepared for imaging. It is therefore possible to provide a radiation imaging apparatus that can image an object three-dimensionally with high accuracy.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Physics & Mathematics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Surgery (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Veterinary Medicine (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Optics & Photonics (AREA)
  • Pathology (AREA)
  • Public Health (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Dentistry (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Pulmonology (AREA)
  • General Physics & Mathematics (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

Provided are a device and a method for identifying (specifying or estimating) at least the type of a substance. For example, the present device and method are functionally provided by an X-ray photographing device that can perform X-ray panoramic photographing and X-ray CT photographing in combination. In this X-ray photographing device, frame data output from a photon-counting X-ray detector (22) are used for the reconstruction of a panoramic image. The panoramic image is displayed (B3; S32), and a location of concern is designated on the displayed panoramic image in accordance with an instruction from an operator (B3; S34). At least the type of one or more structures that have a thickness in the object being photographed and that are present, in correspondence with the location of concern, in the projecting direction in the photographing space between the X-ray tube and the detector is then identified on the basis of the data of said panoramic image and morphological information on said structure(s) obtained from a CT image (B3; S35 to S40).

Description

Substance identification apparatus and substance identification method using X-ray panorama / CT imaging
The present invention relates to a substance identification apparatus and a substance identification method using X-ray panoramic / CT imaging, and in particular to a substance identification apparatus and a substance identification method that use a hybrid X-ray imaging apparatus to identify (specify or estimate) at least the type of a structure (substance) at a position of interest of an imaging target by fusing the information of a panoramic image and a CT image obtained from X-ray panoramic imaging and CT imaging.
One field of diagnostic apparatus using X-rays is the dental panoramic X-ray imaging apparatus.
In this field, tomography of a subject based on the tomosynthesis method has been actively performed in recent years. The principle of the tomosynthesis method has been known for a long time (see, for example, Patent Document 1). Recently, tomography methods that seek to exploit the simplicity of image reconstruction based on the tomosynthesis method have also been proposed (see, for example, Patent Document 2 and Patent Document 3), and many examples can be found in dental imaging and mammography (see, for example, Patent Document 4, Patent Document 5, and Patent Document 6).
In recent years, as in Patent Document 7, an X-ray panoramic imaging apparatus has been developed that uses a detector capable of collecting X-ray detection data at high speed (for example, 300 fps), loads all the detection data into a computer, and executes the tomosynthesis method. With this apparatus, the detection data are processed by the tomosynthesis method to generate a panoramic image of a tomographic plane, and the position of the tomographic plane can be changed in its front-rear direction to generate a panoramic image of the changed plane.
This dental panoramic X-ray imaging apparatus can visualize the entire oral cavity with low exposure. However, since information in the tomographic direction cannot be obtained and image blur exists, it has not yet reached the point of being used for detailed diagnostic examination of the internal structure of the oral cavity.
In addition, when imaging the dentition, the shadow of the cervical spine becomes an obstacle shadow, which inevitably reduces the depiction capability of the image. Furthermore, when an integral type X-ray detector is used, accurate X-ray absorption values are not always obtained.
For this reason, detailed diagnosis currently relies exclusively on intraoral radiography, in which a film or an X-ray sensor is placed in the mouth, or on a dental X-ray CT apparatus. In particular, if a dental X-ray CT apparatus is used, three-dimensional morphological information of the oral cavity can be obtained.
However, in the case of a dental X-ray CT apparatus, because the X-ray tube voltage is as low as about 80 kV and the oral cavity contains much hard tissue, there are problems such as large changes in X-ray quality, image distortion (artifacts) caused by metal prostheses, and the inability to obtain stable CT values of the imaging target due to the effects of beam hardening and scattered radiation.
Also, dental X-ray imaging apparatuses, whether panoramic or CT, often use an integration type detector that integrates the X-ray transmission signal for a fixed time. In such a case, electrical noise generated in the preamplifier processing circuit used for detection is mixed in. In addition, due to the non-linear characteristics of the preamplifier, it is difficult to reflect absorption values faithfully from the high-dose portion to the low-dose portion, so the so-called dynamic range becomes narrow. For this reason, if the detector is optimized to visualize hard tissue such as enamel and cancellous bone, images of soft tissue such as gums and muscles lose contrast and are difficult to visualize. Conversely, if the detector is optimized for soft tissue, hard tissue is buried in noise and lacks contrast. Under these circumstances, it is difficult to perform imaging that allows both to be visualized at once.
Therefore, in recent years, as shown in Patent Document 8, a combined X-ray imaging apparatus capable of performing both panoramic imaging and CT imaging has also been proposed.
JP-A-57-203430
JP-A-6-88790
JP-A-10-295680
JP-A-4-144548
JP-A-2008-11098
US 2006/0203959 A1
JP-A-2007-136163
JP-A-2011-206534
However, the conventional combined X-ray imaging apparatus is merely switched between a panoramic imaging mode and a CT imaging mode, and for this reason the above problems have not been solved.
The present invention has been made in view of the conventional situation described above. Its object is to complementarily fuse the features of the panoramic imaging function and the CT imaging function so that, in a single imaging operation (scan), not only ordinary panoramic imaging and CT imaging can be performed, but also at least the type of a structure having a thickness at a region of interest of the imaging target can be identified (specified or estimated), and the identification result can be used at the clinical level.
In order to achieve the above object, as one aspect, a substance identification apparatus using X-ray panoramic / CT imaging is provided. This substance identification apparatus comprises: an X-ray tube that irradiates X-rays; a detector comprising a detection circuit in which cells, each outputting an electric pulse corresponding to the energy of an X-ray photon every time incidence of the photon is detected, are arranged two-dimensionally so as to form a two-dimensional pixel group, a measurement circuit that counts the number of photons detected by each cell of the detection circuit separately for each pixel in two or more energy bands (number of energy bands M ≥ 2, M being a positive integer), and an output circuit that outputs, for each energy band, a digital electric signal corresponding to the output of each cell measured by the measurement circuit as frame data; scanning means for rotating the X-ray tube and the detector around the imaging target while the two face each other across the imaging target, and collecting the frame data output by the detector for each energy band; panoramic image generating means for generating panoramic image data of the imaging target for each energy band based on the frame data; CT image generating means for generating a CT (computed tomography) image of the imaging target; panoramic image display means for displaying the panoramic image on a monitor; position-of-interest designating means for designating a position of interest on the panoramic image displayed on the monitor in accordance with an operator's instruction; and substance identifying means for identifying at least the type of one or more substances having a thickness in the imaging target and existing in the projection direction, corresponding to the position of interest, in the imaging space between the X-ray tube and the detector, based on the X-ray transmission data of the panoramic image and the morphological information of the substances obtained from the CT image.
As another aspect, a substance identification method is provided which is applied to a system comprising: an X-ray tube that irradiates X-rays; a detector comprising a detection circuit in which cells, each outputting an electric pulse corresponding to the energy of an X-ray photon every time incidence of the photon is detected, are arranged two-dimensionally so as to form a two-dimensional pixel group, a measurement circuit that counts the number of photons detected by each cell of the detection circuit separately for each pixel in two or more energy bands (number of energy bands M ≥ 2, M being a positive integer), and an output circuit that outputs, for each energy band, a digital electric signal corresponding to the output of each cell measured by the measurement circuit as frame data; and scanning means for rotating the X-ray tube and the detector around the imaging target while the two face each other across the imaging target, and collecting the frame data output by the detector for each energy band. The substance identification method includes the steps of: generating panoramic image data of the imaging target for each energy band based on the frame data; generating a CT (computed tomography) image of the imaging target; displaying the panoramic image on a monitor; designating a position of interest on the panoramic image displayed on the monitor in accordance with an operator's instruction; and identifying at least the type of one or more substances having a thickness in the imaging target and existing in the projection direction, corresponding to the position of interest, in the imaging space between the X-ray tube and the detector, based on the X-ray transmission data of the panoramic image and the morphological information of the substances obtained from the CT image.
As described above, according to the present invention, the features of the panoramic imaging function and the CT imaging function are complementarily fused, and ordinary panoramic imaging and CT imaging can be selectively performed in a single imaging operation (scan). Furthermore, according to the present invention, at least the type of a structure having a thickness at the region of interest of the imaging target can be identified (specified or estimated), and the result can be used at the clinical level.
In the accompanying drawings:
FIG. 1 is a perspective view showing an outline of the overall configuration of a hybrid X-ray imaging apparatus that functionally provides the substance identification apparatus of the present invention;
FIG. 2 is a view explaining the positional relationship between the imaging system of the apparatus and the jaw of the subject;
FIG. 3 is a diagram explaining the positional relationship between the rotation of the imaging system for scanning and the jaw of the subject;
FIG. 4 is a plan view illustrating the detection surface (collection window) of the detector;
FIG. 5 is a circuit diagram illustrating the electrical configuration of the detector;
FIG. 6 is a diagram explaining incident X-ray pulses and thresholds for photon counting;
FIG. 7 is a diagram explaining the X-ray energy distribution and the energy regions for photon counting;
FIG. 8 is a block diagram showing the main part of the overall electrical configuration of the X-ray imaging apparatus;
FIG. 9 is a functional block diagram showing the processing executed by the controller and the data processor to identify the types of the structures of the subject's jaw;
FIG. 10 is a display diagram showing an example of an optimally focused 3D autofocus image;
FIG. 11 is a diagram explaining the scan range;
FIG. 12 is a diagram explaining the positional relationship between the dentition, the reference tomographic plane of the dentition, and the scan of the X-ray tube and detector;
FIG. 13 is an explanatory diagram illustrating a tooth designated for substance identification and the irradiation direction of the X-ray beam passing through that tooth;
FIG. 14 is a schematic flowchart explaining the flow of metal artifact removal;
FIGS. 15 to 23 are diagrams each explaining a part of the metal artifact removal process;
FIG. 24 is a graph showing an example of a scatter diagram calculated for substance identification;
FIG. 25 is a graph showing an example of a reference scatter diagram prepared in advance;
FIG. 26 is a graph showing another example of a reference scatter diagram prepared in advance;
FIG. 27 is a flowchart schematically showing the flow of the scan and the obstacle shadow removal executed in the embodiment;
FIG. 28 is a diagram showing a part of the obstacle shadow removal process;
FIG. 29 is a graph showing an example of a gain curve for reconstructing a panoramic image of the dentition;
FIG. 30 is a graph showing an example of a gain curve for reconstructing a panoramic image of the cervical spine;
FIG. 31 is an image showing an example of a panoramic image of the dentition (before obstacle shadow removal);
FIG. 32 is an image showing an example of a panoramic image of the cervical spine;
FIG. 33 is an image showing another example of a panoramic image of the dentition before obstacle shadow removal (an image of a patient different from the patient in FIG. 13); and
FIG. 34 is an image showing an example of the panoramic image of the dentition after obstacle shadow removal (reduction), to be compared with FIG. 25.
Embodiments of the substance identification apparatus and substance identification method according to the present invention will now be described with reference to the accompanying drawings. Here, "identification" of a substance means specifying or estimating at least the type of the substance (a structure forming the imaging target) based on information of the X-rays transmitted through the substance.
With reference to FIGS. 1 to 24, a hybrid X-ray imaging apparatus functionally equipped with the substance identification apparatus and the substance identification method will be described. In this embodiment, the X-ray imaging apparatus is implemented as a dental hybrid X-ray imaging apparatus.
Here, "hybrid" indicates that the apparatus has functions of performing both X-ray panoramic imaging and X-ray CT imaging. By operating this hybrid X-ray imaging apparatus, a "substance identification apparatus and substance identification method using X-ray panoramic / CT imaging" can be provided functionally.
FIG. 1 shows the appearance of such a hybrid X-ray imaging apparatus 1. The imaging apparatus 1 scans the jaw of a subject with X-rays and, from the digital X-ray transmission data, provides both imaging functions of panoramic imaging and CT imaging of the jaw.
An outline of the configuration of the hybrid X-ray imaging apparatus 1 (hereinafter simply referred to as the X-ray imaging apparatus) will be described. As shown in FIG. 1, the X-ray imaging apparatus 1 includes a housing 11 that collects data from a subject (patient) P, for example while the subject P is standing or sitting in a wheelchair, and a control / arithmetic unit 12 (also called a console), constituted by a computer, which controls the data collection performed by the housing 11, takes in the collected data to create a panoramic image, and performs post-processing of the panoramic image interactively with an operator (doctor, technician, etc.) or automatically.
The housing 11 includes a stand unit 13 and an imaging unit 14 that can move up and down with respect to the stand unit 13. The imaging unit 14 is attached to the support column of the stand unit 13 so as to be movable up and down within a predetermined range.
Here, for convenience of explanation, an XYZ orthogonal coordinate system is set for the X-ray imaging apparatus 1, with the longitudinal direction of the stand unit 13, that is, the vertical direction, as the Z axis.
The imaging unit 14 includes a vertically moving unit 15 that is substantially U-shaped when viewed from the side, and a rotating unit 16 supported by the vertically moving unit 15 so as to be rotatable. The vertically moving unit 15 can move up and down relative to the stand unit 13 under control from the control / arithmetic unit 12.
The rotating unit 16 is configured as a substantially U-shaped arm as viewed from the side and is suspended from the vertically moving unit 15, with the arm facing downward, so as to be rotatable. The rotating unit 16 is also rotated under control from the control / arithmetic unit 12.
An X-ray tube 21 serving as an X-ray source and an X-ray detector 22 serving as X-ray detection means (hereinafter simply referred to as the detector) are disposed on the two vertically extending arms 16A and 16B of the rotating unit 16 so as to face each other. When the rotating unit 16 rotates along the XY plane, the pair of the X-ray tube 21 and the detector 22 also rotates while facing each other, and the space between them is defined as the imaging space S.
The jaw JW of the subject P is positioned in this imaging space S using the chin rest 19 and the headrest 20.
For this reason, in this embodiment, the geometry at the time of scanning is such that, as shown in FIG. 2, the X-rays irradiated from the X-ray tube 21 pass through the jaw JW of the subject P and enter the detector 22.
The pair of the X-ray tube 21 and the detector 22 mounted on the rotating unit 16 rotates, while maintaining a posture of directly facing each other, on two circular orbits Tx and Td of different diameters Dx and Dd about the rotation center O, as shown in FIG. 3. Dx = Dd is also possible.
The phrase "always facing (or directly facing) each other" means that, when viewed from the Z-axis direction as shown in FIG. 3, the central axis T of the X-ray beam, which is emitted from the point-like X-ray focal spot FP of the X-ray tube 21 and shaped into a cone by the slit 23 described later, always intersects the center point C in the width direction of the X-ray detection surface 22A (described later) of the detector 22 (or intersects it at 90°). In particular, intersecting at 90° is here referred to as directly facing (see FIG. 3).
The X-ray tube 21 is composed of, for example, a rotating anode X-ray tube and radiates X-rays radially from its target (anode). The focal spot of the electron beam colliding with the target is as small as about 0.1 to 0.5 mm in diameter, so the X-ray tube 21 functions as a point-like X-ray source. A slit 23 is attached at a predetermined position on the front surface of the X-ray tube 21. With this slit 23, the X-rays incident on the detector 22 can be collimated into a cone shape matching the shape of the desired collection window of the detection surface.
As shown in FIG. 4, the detector 22 has an array (sensor circuit) of a plurality of detection modules B1 to Bm in which X-ray imaging elements are arranged two-dimensionally. The detection modules B1 to Bm are fabricated as mutually independent blocks, and the detector 22 as a whole is formed by mounting them on a substrate (not shown) in a predetermined shape (for example, a rectangular shape).
The detection modules B1 to Bm are arranged two-dimensionally in the vertical (X-axis) and horizontal (Y-axis) directions (15 in the vertical direction and 8 in the horizontal direction, with 5 more arranged at each of the upper and lower ends of one column), with a certain gap between the individual modules, and each module is tilted by an angle θ with respect to the scan direction O_Y. This angle θ is set to about 14°, for example. The surface formed by the detection modules B1 to Bm, either a rectangle with a small aspect ratio (for CT imaging) or an elongated rectangle with a large aspect ratio (for panoramic imaging), constitutes the X-ray detection surface 22A. Since the detection modules B1 to Bm are arranged obliquely, the X-ray detection surface 22A is formed so as to be inscribed in the individual detection surfaces of the modules B1 to Bm.
The structure of the detector 22 having such obliquely arranged detection modules and the sub-pixel processing of its detection signals are known, for example, from WO 2012/086648 A1.
The modules arranged in the leftmost column in FIG. 4 function as modules for panoramic imaging; the opening area for panoramic imaging is indicated by reference numeral 22B. A two-dimensional module group consisting of all the modules of this left column except the two modules at its upper and lower ends, together with the remaining modules, functions as modules for CT imaging; the opening area of this small field of view for CT imaging is indicated by reference numeral 22A. The detector 22 thus has a collection window combining the opening areas 22A and 22B, i.e., "a rectangle with an elongated portion further extending vertically at one end".
The module group can be selected according to whether panoramic imaging or CT imaging is performed by controlling the opening area of the slit 23; however, one feature of this embodiment is that panoramic imaging and CT imaging are performed simultaneously in one scan. For this reason, the slit 23 collimates the fan-shaped X-ray beam so as to match the shape of the collection window combining the opening areas 22A and 22B.
Reference symbol AXd in FIG. 4 denotes the central axis about which the detector 22 itself rotates. In this embodiment, since the detector 22 is always controlled so as to directly face the X-ray tube 21, it is not always necessary to control this rotation.
Each detection module B1 (to Bm) is made of a semiconductor material that converts X-rays directly into electrical pulse signals. The detector 22 is therefore a direct-conversion, photon counting X-ray detector based on semiconductors.
As described above, the detector 22 is formed as an array of detection modules B1 to Bm. Each detection module Bm includes a detection circuit Cp (see FIG. 5) for detecting X-rays and a data counting circuit 51_n (see FIG. 5) stacked integrally with the detection circuit Cp.
For each detection module, the detection circuit Cp includes a semiconductor layer that converts X-rays directly into an electrical signal, and a charging electrode and collecting electrodes laminated on its two faces (not shown). X-rays are incident on the charging electrode, which is a single common electrode, and a high bias voltage is applied between this electrode and the collecting electrodes. The semiconductor layer and the collecting electrodes are divided into a grid pattern, and this division forms a plurality of small regions arranged in a two-dimensional array at fixed distances from one another. As a result, a plurality of stacked bodies of semiconductor cells C (see FIGS. 4 and 5) and collecting electrodes are formed, arranged two-dimensionally on the charging electrode. These stacked bodies constitute a plurality of pixels S_n arranged in a two-dimensional grid.
Consequently, the detection modules B1 to Bm as a whole (within the opening area 22A in the case of CT imaging; see FIG. 4) form a plurality of pixels S_n (n = 1 to N) occupying the predetermined area required for the detector 22. These pixels S_n constitute the detection circuit (pixel group) Cp (see FIG. 5).
The size of each pixel S_n is, for example, 200 μm × 200 μm, a value set so that incident X-rays can be detected as a collection of many photons. Each pixel S_n responds to the incidence of each X-ray photon and outputs an electric pulse whose amplitude corresponds to the energy of that photon. In other words, each pixel S_n can convert the X-rays incident on it directly into an electrical signal.
The detector 22 therefore counts the photons constituting the incident cone-beam X-rays for each pixel S_n constituting the detection surface (collection window) of the detector 22, and outputs data reflecting the counted values at a comparatively high frame rate of, for example, 75 fps. These data are also called frame data.
As the semiconductor material of the semiconductor layer, i.e., of the semiconductor cells C, cadmium telluride (CdTe), cadmium zinc telluride (CdZnTe, CZT), silicon (Si), thallium bromide (TlBr), mercury iodide, or the like is used. Instead of such a semiconductor cell, a cell combining a scintillator material subdivided into columns, each column optically shielded from the others, with a photoelectric converter composed of fine avalanche photodiodes may also be used.
When X-rays enter a semiconductor cell C, charges (electrons and holes) are generated inside the cell, and a pulse current corresponding to the amount of charge flows. This pulse current is detected by the collecting electrode. Since the amount of charge varies with the energy of the X-ray photon, the detector 22 outputs, for each pixel S_n, an electric pulse signal corresponding to the photon energy.
The detector 22 further includes a data counting circuit 51_n (n = 1 to N) on the output side of each semiconductor cell C, i.e., of each pixel S_n. The path from each pixel S_n, i.e., from each semiconductor cell C, to the corresponding data counting circuit 51_1 (to 51_N) is referred to, where necessary, as an acquisition channel CN_n (n = 1 to N) (see FIG. 5).
The structure of this group of semiconductor cells C is also known from JP-A-2000-69369, JP-A-2004-325183, and JP-A-2006-101926.
The pixel size mentioned above (200 μm × 200 μm) is small enough that the number of incident photons can be detected while counting the X-rays entering one pixel as photons (particles) with negligible count loss. In this embodiment, a size capable of detecting X-rays as particles is defined as "a size at which the superposition phenomenon (also called pile-up) between the electric pulses responding to successive incidences of radiation (e.g., X-ray) particles at or near the same position can be substantially ignored, or at which its amount can be predicted".
Even with such a pixel size, however, the superposition phenomenon cannot be avoided entirely. Even when two or more electric pulses are observed in the same pixel, no superposition occurs if they are separated from each other in time. In contrast, when two or more electric pulses in the same pixel are difficult to separate in time, superposition occurs and they are observed as a single electric pulse with an increased peak value.
When this superposition phenomenon occurs, count loss of X-ray particles (also called pile-up count loss) appears in the characteristic of "number of incident particles versus actual count value". For this reason, the size of the pixels S_n formed in the detector 22 is set so that this count loss does not occur, can be regarded as substantially not occurring, or is small enough that the amount of count loss can be estimated.
 続いて、図5を用いて、検出器22に電気的に繋がる回路を説明する。複数のデータ計数回路51(n=1~N)のそれぞれは、各半導体セルCから出力されたアナログ量の電気信号を受けるチャージアンプ52を有し、このチャージアンプ52の後段に、波形整形回路53、多段の比較器54(ここではi=1~4)、多段のカウンタ56(ここではi=1~4)、多段のD/A変換器57(ここではi=1~4)、ラッチ回路58、及びシリアル変換器59を備える。 Subsequently, a circuit electrically connected to the detector 22 will be described with reference to FIG. Each of the plurality of data counting circuits 51 n (n = 1 to N) includes a charge amplifier 52 that receives an electrical signal of an analog amount output from each semiconductor cell C. Circuit 53, multi-stage comparator 54 i (here i = 1 to 4), multi-stage counter 56 i (here i = 1 to 4), multi-stage D / A converter 57 i (here i = 1 to 4) 4) A latch circuit 58 and a serial converter 59 are provided.
 各チャージアンプ52は、各半導体セルCの各集電電極に接続され、X線粒子の入射に応答して集電される電荷をチャージアップして電気量のパルス信号として出力する。このチャージアンプ52の出力端は、ゲイン及びオフセットが調整可能な波形整形回路53に接続されており、検知したパルス信号の波形を、予め調整されているゲイン及びオフセットで処理して波形整形する。この波形整形回路53のゲイン及びオフセットは、半導体セルCから成る画素S毎の電荷チャージ特性に対する不均一性と各回路特性のバラツキを考慮して、キャリブレーションされる。これにより、不均一性を排除した波形整形信号の出力とそれに対する相対的な閾値の設定精度とを上げることができる。この結果、各画素Sに対応した、即ち、各収集チャンネルCNの波形整形回路53から出力された波形整形済みのパルス信号は実質的に入射するX線粒子のエネルギ値を反映した特性を有する。したがって、収集チャンネルCN間のばらつきは大幅に改善される。 Each charge amplifier 52 is connected to each collector electrode of each semiconductor cell C, charges up the charge collected in response to the incidence of X-ray particles, and outputs it as a pulse signal of electric quantity. The output terminal of the charge amplifier 52 is connected to a waveform shaping circuit 53 whose gain and offset can be adjusted. The waveform of the detected pulse signal is processed with the previously adjusted gain and offset to shape the waveform. The gain and offset of the waveform shaping circuit 53, in consideration of the variation in non-uniformity and the circuit characteristics for charge-charge characteristic for each pixel S n of semiconductor cell C, is calibrated. As a result, it is possible to increase the output of the waveform shaping signal from which non-uniformity has been eliminated, and the relative threshold setting accuracy. As a result, corresponding to each pixel S n, i.e., the characteristics reflecting the energy value of the X-ray particle pulse signal waveform formatted output from the waveform shaping circuit 53 for each collection channel CN n is substantially incident Have. Therefore, the variation between the collection channels CN n is greatly improved.
 The output of the waveform shaping circuit 53 is connected to the comparison inputs of the comparators 54_1 to 54_4. As shown in FIG. 5, analog thresholds (voltage values) th_i (here i = 1 to 4) having mutually different values are applied to the reference inputs of the comparators 54_1 to 54_4, respectively, so that one pulse signal can be compared with each of the different analog thresholds th_1 to th_4. FIG. 6 schematically shows the magnitude relationship (th_1 < th_2 < th_3 < th_4) between the peak value (representing energy) of the pulse voltage produced in response to one incident X-ray photon and the thresholds th_1 to th_4.
 The reason for this comparison is to determine into which of the energy regions, set in advance by dividing the energy range into several regions, the energy of the incident X-ray particle falls (discrimination). It is judged which of the analog thresholds th_1 to th_4 the peak value of the pulse signal (that is, the energy of the incident X-ray photon) exceeds, and the energy region into which the photon is discriminated differs accordingly. The lowest analog threshold th_1 is normally set so as not to detect disturbances, noise originating from circuits such as the semiconductor cells C and the charge amplifiers 52, or low-energy radiation unnecessary for imaging. The number of thresholds, that is, the number of comparators, is not necessarily limited to four; it may be three, including the analog threshold th_1, or five or more.
 The analog thresholds th_1 to th_4 described above are, specifically, supplied as digital values for each pixel S_n, that is, for each acquisition channel, from the calibration calculator 38 of the console 17 via the interface 31. For this purpose, the reference inputs of the comparators 54_1 to 54_4 are connected to the outputs of four D/A converters 57_1 to 57_4, respectively. The D/A converters 57_1 to 57_4 are connected via the latch circuit 58 to threshold receiving terminals T_1 (to T_N), which are in turn connected to the interface 31 of the console 17.
 At the time of imaging, the latch circuit 58 latches the digital thresholds th_1' to th_4' given from the threshold applier 41 via the interface 31 and the threshold receiving terminals T_1 (to T_N), and outputs them to the corresponding D/A converters 57_1 to 57_4. The D/A converters 57_1 to 57_4 can therefore supply the commanded analog thresholds th_1 to th_4 to the comparators 54_1 to 54_4 as voltages. Each acquisition channel CN_n is thus connected to one or more circuit chains running from a D/A converter 57_i (i = 1 to 4) through a comparator 54_i (i = 1 to 4) to a counter 56_i (i = 1 to 4). Each such chain is called a "discrimination circuit" DS_i (i = 1 to 4).
 FIG. 7 shows a setting example of the energy thresholds TH_i (i = 1 to 4) corresponding to the analog thresholds th_i (i = 1 to 4). These energy thresholds TH_i (i = 1 to 4) are of course set discretely, and they are discrimination values that the user can set to arbitrary values. FIG. 7 schematically shows the X-ray spectrum obtained when a suitable material is used for the anode of the X-ray tube 21; the horizontal axis represents X-ray energy and the vertical axis represents the incidence frequency of X-ray photons. This incidence frequency is a factor representative of the photon count or intensity.
 The analog threshold th_i is the analog voltage applied to the comparator 54_i in each discrimination circuit DS_i, and the energy threshold TH_i is the analog value that discriminates the X-ray energy (keV) of the energy spectrum. The waveform shown in FIG. 7 is the continuous energy spectrum of the X-rays emitted from a commonly used X-ray tube, for example one using tungsten as the anode material. The count on the vertical axis is proportional to the photon generation frequency at the energy value on the horizontal axis, and the energy value on the horizontal axis depends on the tube voltage of the X-ray tube 21. For this spectrum, the first analog threshold th_1 is set to correspond to an energy threshold TH_1 that separates a region where counting is unnecessary (a region containing no X-ray information meaningful for counting and in which circuit noise is mixed) from the lower first energy region ER_1. The second and third analog thresholds th_2 and th_3 are set so as to provide, in order, second and third energy thresholds TH_2 and TH_3 that are higher than the first energy threshold TH_1. Further, the fourth energy threshold TH_4 is set to an energy value equal to the voltage applied to the X-ray tube, at which the X-ray photon count in the energy spectrum would be zero if no superposition phenomenon occurred. Matching the fourth energy threshold TH_4, for each pixel S_n, to the energy value at which the count becomes zero is one of the important features of the present application.
 As a result, appropriate discrimination points based on the characteristics of the energy spectrum and on design values are defined, and the energy regions ER_1 to ER_4 are set.
 These energy thresholds TH_i are also determined, assuming one or more reference objects, so that the counts accumulated over a predetermined time in each energy region are approximately equal.
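 As an illustration only of this equal-count criterion, the sketch below (not part of the disclosed apparatus) picks the two intermediate thresholds as quantiles of a reference spectrum so that ER_1 to ER_3 hold roughly equal counts, with TH_4 pinned at the tube voltage; the synthetic spectrum, the 20 keV value of TH_1, and the quantile rule are assumptions.

```python
import numpy as np

def choose_energy_thresholds(energies_kev, tube_voltage_kev, th1_kev):
    """Pick TH2, TH3 so that ER1..ER3 hold roughly equal counts for a reference
    exposure; TH4 is pinned at the tube voltage (count ~ 0 above it)."""
    usable = energies_kev[(energies_kev >= th1_kev) & (energies_kev < tube_voltage_kev)]
    th2, th3 = np.quantile(usable, [1 / 3, 2 / 3])   # equal-count split of ER1..ER3
    return th1_kev, float(th2), float(th3), float(tube_voltage_kev)

# Example with a synthetic reference spectrum (assumed 90 kV tube, TH1 = 20 keV)
rng = np.random.default_rng(0)
photons = rng.triangular(left=15, mode=45, right=90, size=100_000)  # stand-in spectrum
print(choose_energy_thresholds(photons, tube_voltage_kev=90.0, th1_kev=20.0))
```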
 For this purpose, the outputs of the comparators 54_1 to 54_4 are connected to the inputs of the counters 56_1 to 56_4, respectively, as shown in FIG. 5.
 Each of the counters 56_1 to 56_4 counts up every time the output (pulse) of the corresponding comparator turns on. In this way, the number of X-ray photons having an energy equal to or higher than the energy value discriminated for the energy region ER_1 (to ER_4) that each counter 56_1 (to 56_4) is in charge of can be counted for each pixel S_n as a cumulative value W_1' (to W_4') over a fixed time.
 Specifically, this counting operation is determined by the relationship between the detection voltage V_dec (the detected energy value of the photon) input to the four comparators 54_1 to 54_4 and the thresholds th_1 to th_4. When V_dec is below all of th_1 to th_4, the outputs of all the comparators 54_1 to 54_4 remain off; in other words, the output of that pixel S_n is 0. Noise components smaller than the energy threshold TH_1, which is defined as the counting limit for the input energy, are therefore not counted. Such noise components correspond to signals whose energy values belong to the non-countable region ERx in FIG. 7.
 However, when the detection voltage V_dec reaches the minimum threshold th_1 (V_dec ≥ th_1), the photon is counted: the first-stage comparator 54_1 turns on and the count W_1' of the counter 56_1 is incremented. If V_dec exceeds every threshold, the outputs of all the comparators 54_1 to 54_4 turn on and the counts W_1' to W_4' of all the counters 56_1 to 56_4 are incremented.
 If the relationship V_dec ≥ th_2 holds, the comparators up to the second stage, 54_1 and 54_2, are on, and the counts W_1' and W_2' of the counters 56_1 and 56_2 are incremented. If V_dec ≥ th_3 holds, the comparators 54_1 to 54_3 up to the third stage are on, and the counts W_1' to W_3' of the counters 56_1 to 56_3 are incremented.
 Further, if V_dec ≥ th_4 holds, the fourth-stage comparator 54_4 also turns on and the count W_4' of the fourth-stage counter 56_4 is incremented. In this case the energy of the photon concerned belongs to the region ER_4 above the third energy region ER_3 and is not suitable for imaging or counting; it corresponds to noise, disturbances, and the like. On the other hand, this count W_4' can be used as information for estimating or excluding photons that have undergone superposition or that were incident simultaneously.
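 A minimal sketch of this counting behaviour, under the assumption (consistent with the subtraction described next) that counter i increments whenever the pulse height reaches its own threshold th_i; the pulse heights and threshold values below are arbitrary illustrative numbers, not values from the embodiment.

```python
import numpy as np

def count_pulses(pulse_heights, thresholds):
    """Cumulative counters W'_1..W'_4: W'_i counts pulses with height >= th_i."""
    thresholds = np.asarray(thresholds)
    heights = np.asarray(pulse_heights)[:, None]   # shape (events, 1)
    fired = heights >= thresholds[None, :]         # comparator outputs per event
    return fired.sum(axis=0)                       # W'_1 .. W'_4

# Illustrative thresholds (arbitrary units) and a few detected pulse heights
th = [20, 40, 60, 90]
pulses = [25, 55, 10, 70, 95, 42]
print(count_pulses(pulses, th))   # -> [5 4 2 1]
```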
 As described above, in the present embodiment each of the counters 56_1 to 56_4 counts the number of photons whose energy lies in the energy region ER_1 (to ER_4) it is in charge of or above it. Therefore, if the numbers of X-ray photons whose energies belong to the first to fourth energy regions ER_1 to ER_4, that is, the photon counts to be obtained for each energy region, are denoted W_1, W_2, W_3, W_4, their relationship to the counts W_1', W_2', W_3', W_4' of the counters 56_1 to 56_4 is
   W_1 = W_1' - W_2'
   W_2 = W_2' - W_3'
   W_3 = W_3' - W_4'
 Note that W_4 = W_4' is information rendered meaningless by the superposition phenomenon (that is, the energy region of the X-ray photon cannot be specified), so it is not calculated.
 Therefore, the truly desired counts W_1 to W_3 are obtained by the data processor described later through subtraction based on the above equations. Ideally, W_4 = W_4' = 0.
 In this way, in the present embodiment, the numbers of X-ray photons W_1 to W_4 belonging to the first to fourth energy regions ER_1 to ER_4 are obtained by calculation (subtraction) from the actual counts W_1' to W_4'. A circuit that decodes, from the on/off combination of the outputs of the comparators 54_1 to 54_4, to which of the energy regions ER_1 to ER_4 the current event, that is, the incident X-ray photon, belongs is therefore unnecessary. This simplifies the circuit configuration implemented in the data counting circuits 51_n of the detector 22.
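 Continuing the same illustration, the per-region photon numbers follow from the simple differences given above, with W_4' retained only as pile-up information; the input counts are the illustrative values from the previous sketch.

```python
import numpy as np

def per_region_counts(w_prime):
    """W_i = W'_i - W'_{i+1} for i = 1..3; W'_4 is kept as pile-up information."""
    w_prime = np.asarray(w_prime)
    w = w_prime[:3] - w_prime[1:]      # W1, W2, W3
    return w, int(w_prime[3])          # (per-region counts, W'_4)

print(per_region_counts([5, 4, 2, 1]))   # -> (array([1, 2, 1]), 1)
```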
 Note that, in the present application, "acquiring" the number of X-ray photons for each energy region includes, as described above, "obtaining it by calculation" from the actual counts.
 The counters 56_1 to 56_4 described above are given start and stop signals from a controller of the console 17, described later, via a start/stop terminal T2. Counting over a fixed time is managed externally using a reset circuit provided in each counter.
 In this way, during the fixed time until reset, the numbers of X-ray photons incident on the detector 22 are counted for each pixel S_n by the counters 56_1 to 56_4. The photon counts W_k' (k = 1 to 4) are output in parallel as digital values from each of the counters 56_1 to 56_4 and then converted into a serial format by the serial converter 59_1. This serial converter 59_1 is connected in series with the serial converters 59_2 to 59_N of all the remaining acquisition channels. The digital counts of all channels are therefore output serially from the serial converter 59_N of the last channel and sent to the console 17 via the transmission terminal T3.
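 One way to picture this serial read-out is sketched below: the per-pixel, per-region counts of one sampling interval are flattened into a single ordered stream. The channel ordering and word layout here are assumptions for illustration, not the actual frame format of the embodiment.

```python
import numpy as np

def serialize_frame(counts):
    """counts: array of shape (N_pixels, 4) holding W'_1..W'_4 for one sampling
    interval. Returns the flat, channel-ordered stream that the chained serial
    converters would emit toward the console."""
    counts = np.asarray(counts, dtype=np.uint32)
    return counts.reshape(-1)          # pixel 1 (W'_1..W'_4), pixel 2 (...), ...

frame = np.array([[5, 4, 2, 1],
                  [7, 3, 3, 0]])       # two pixels, illustrative counts
print(serialize_frame(frame))          # -> [5 4 2 1 7 3 3 0]
```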
 In the console 17, the interface 31 receives these counts and stores them in a storage unit described later.
 In this way, the counts of the pixels S_n for panoramic imaging and for CT imaging (some pixels being shared) are formatted in a predetermined order and output serially in sequence, so that frame data for panoramic imaging and for CT imaging are collected at every sampling interval. By specifying addresses within the frame data, frame data for panoramic imaging alone and frame data for CT imaging can therefore be obtained.
 In the present embodiment, the semiconductor cells C corresponding to the N pixels S_n described above and the data counting circuits 51_n are integrated in CMOS as an ASIC (Application Specific Integrated Circuit). Of course, the data counting circuits 51_n may instead be configured as circuits or devices separate from the group of semiconductor cells C.
 In the above embodiment, each of the detection modules B1 to Bm may also comprise a scintillator array in which a plurality of scintillators machined into columnar shapes are bundled, and a silicon photomultiplier that is optically coupled to the scintillator array, has a plurality of avalanche photodiodes mounted on the light-receiving surface that receives light from the scintillators, and in which, for each rectangular region of predetermined size on that surface corresponding to a cell, the avalanche photodiodes belonging to that region are electrically connected through quenching elements.
 The scintillator material may be LFS (lutetium silicate), GAGG:Ce (gadolinium aluminum gallium garnet), LuAG:Pr (praseodymium-doped lutetium aluminum garnet), or a material having a decay time, light output, and specific gravity equivalent to those of LuAG:Pr.
 As shown in FIG. 8, the console 17 includes an interface (I/F) 31 that handles signal input and output and, connected to this interface 31 so as to communicate via a bus 32, a controller 33, a first storage unit 34, a data processor 35, a display 36, an input device 37, a calibration calculator 38, a second storage unit 39, ROMs 40A to 40D, and a threshold applier 41.
 The controller 33 controls the driving of the X-ray imaging apparatus 1 in accordance with a program stored in advance in the ROM 40A. This control includes sending command values to the high-voltage generator 42 that supplies a high voltage to the X-ray tube 21, and sending drive commands to the calibration calculator 38. The first storage unit 34 stores the frame data, which are the counts sent from the detector 22 via the interface 31, as well as image data.
 The data processor 35 operates, under the management of the controller 33, according to a program stored in advance in the ROM 40B. Through this operation, the data processor 35 processes the frame data stored in the first storage unit 34 with a desired CT reconstruction method to reconstruct CT images. The data processor 35 also applies to the frame data stored in the first storage unit 34 a tomosynthesis method based on the known calculation technique called shift-and-add. A CT image and a panoramic image of the jaw JW of the subject P are thereby obtained. The display 36 shows the created images, information indicating the operating status of the apparatus, and the operator's operation information given via the input device 37. The input device 37 is used by the operator to give the apparatus the information necessary for imaging.
 Under the management of the controller 33, the calibration calculator 38 operates according to a program stored in advance in the ROM 40C and calibrates the digital thresholds for X-ray energy discrimination given to each energy discrimination circuit of each pixel S_n in the data counting circuits.
 Under the control of the controller 33, the threshold applier 41 reads out, at the time of imaging, the digital thresholds stored in the second storage unit 39 for each pixel and for each discrimination circuit, and transmits them as command values to the detector 22 via the interface 31. To execute this processing, the threshold applier 41 runs a program stored in advance in the ROM 40D.
 The controller 33, the data processor 35, the calibration calculator 38, and the threshold applier 41 each include a CPU (central processing unit) that operates according to a given program. These programs are stored in advance in the ROMs 40A to 40D, respectively.
 In the present embodiment, the data processor 35 reads the counts stored in the first storage unit 34 in response to operator commands from the input device 37 and, using these counts, executes the commanded processing such as image processing, substance identification processing, and measurement processing. The image processing includes, for example, generation of a panoramic image of a cross-section of the dentition based on the tomosynthesis method for "panoramic imaging", and generation of tomographic images based on a desired reconstruction method for "CT imaging".
 The concept of substance identification also includes identifying (specifying), using beam-hardening information, the types of the plural structures (substances) composing the jaw and the states of those structures. This substance identification processing is one of the features of the present application.
 Here, with reference to FIG. 9, an outline will be given of the scan (imaging) for substance identification, the advance preparation for substance identification, and the substance identification processing executed by the dental X-ray imaging apparatus 1 according to the present embodiment. Substance identification as used in this embodiment means identifying at least the type of the substance located at a position (site) of interest designated by a user such as a dentist in the jaw of the subject P scanned with X-rays; the substances are materials having a thickness along the X-ray irradiation direction (tongue, cancellous bone, enamel, cortical bone, metals (fillings, crowns), and so on).
[Scan (Imaging)]
 First, the scan (imaging) executed by this X-ray imaging apparatus 1 will be described (see block B1).
 In this scan, the scan for panoramic imaging can be performed by rotating the X-ray tube 21 and the detector 22 together along circular orbits, and the scan for CT imaging may be a half scan. The range over which the pair of the X-ray tube 21 and the detector 22 is actually rotated is, for example, a range of angle α from the initial position, as shown in FIG. 10; this angle α exceeds the half-scan angular range for CT imaging and is determined by the angular range required for panoramic imaging (for example, 210°).
 Moreover, these two scans, for panorama and for CT, are executed in an integrated manner while the X-ray tube 21 and the detector 22 are rotated only once around the jaw JW of the subject P over the angular range α. Since panoramic imaging and CT imaging can thus be performed simultaneously, the scan time, the X-ray exposure dose, and the operator's workload can all be reduced.
 Of course, the CT imaging may use a full scan instead of a half scan, and panoramic imaging and CT imaging may also be performed as separate scans.
 When the jaw JW of the subject P has been positioned at the predetermined position in the imaging space IS, the controller 33 rotates the rotating unit 16, thereby rotating the X-ray tube 21 and the detector 22 around the jaw JW. The rotation orbits at this time are, as shown in FIG. 3, two circular orbits Tx and Td whose radii Dx and Dd from the rotation center O differ. During this rotation, continuous X-rays, for example, are emitted radially from the X-ray focal spot FP of the X-ray tube 21. These X-rays are collimated by the slit 23 so as to match the shape of the acquisition window "22A + 22B" of the detector 22 (see FIG. 4). The collimated X-rays pass through the various substances present in the jaw JW and enter the acquisition window of the detector 22. The detector 22 samples the incident X-rays at a constant frame rate (for example, 75 fps). As a result, digital frame data corresponding to the number of X-ray photons incident on each pixel are output from the rotating detector at fixed intervals for each energy region ER_1 (to ER_3). These frame data are transferred sequentially or collectively to the first storage unit 34 of the console 17.
[Advance preparation]
 Next, the advance preparation required for the identification processing, shown in block B2 of FIG. 9, will be described. This preparation includes:
 - creation of correction data for correcting the panoramic images using tough bone (block B21);
 - calibration for analyzing the geometric relationship, at each projection angle, between the imaging space IS, the X-ray tube 21, the detector 22, and their rotation center (or preparation of its theoretical values) (block B22); and
 - creation of a database of reference scatter diagrams, described later (block B23).
 In this embodiment, as known from International Publication No. WO2011/142343 (International Application No. PCT/JP2011/060731), the structure of the imaging space IS is analyzed using a phantom and the acquisition channels CN_n of the detector 22 are calibrated. This calibration is executed at an appropriate time, such as before imaging or at maintenance and inspection.
[Substance identification processing]
 Next, the substance identification processing shown in block B3 of FIG. 9, which the data processor 35 executes interactively with the operator, will be described.
 This substance identification processing includes:
 - creation of a panoramic image (step S31),
 - creation of a panoramic 3D image (step S32),
 - removal/reduction of obstructive shadows (step S33),
 - setting of X-ray directions (step S34),
 - reconstruction of a CT image (step S35),
 - three-dimensional display of the CT image (step S36),
 - measurement of substance thicknesses (step S37),
 - calculation of the absorption coefficients of the substances (step S38),
 - creation of a scatter diagram and identification of the substance types (step S39), and
 - display of the identification result (step S40).
 The group of steps S31 to S34 may be executed in that order, and the group of steps S35 to S36 may be executed in that order; either group may be processed first. When the processing of both groups is finished, the data processor 35 executes the processing of steps S37 to S40.
 The processing of these steps is described separately below.
[Creation of a panoramic image (step S31)]
 The data processor 35 reads out the panoramic-image frame data for each X-ray irradiation angle (projection angle) stored in the first storage unit 34, applies the tomosynthesis method to these frame data, and reconstructs a panoramic image along the 3D reference tomographic plane SS for each energy region ER_n (n = 1 to 3). The frame data read out here are the frames collected at fixed intervals (for example, 75 fps) from the modules Bm that form the acquisition window 22B of the left vertical column shown in FIG. 4.
 In this embodiment a panoramic image is created for each of the three energy regions ER_1 to ER_3 shown in FIG. 7, but panoramic images may instead be reconstructed for any two or more energy regions ER_n.
 The tomosynthesis method accompanying this reconstruction, that is, the shift-and-add processing, is executed in accordance with International Publication No. WO2012/008492. Since the calibrated gain of the imaging space IS is held in advance, shifting the frame data by the shift amounts given by this gain and adding the pixel values to one another automatically generates a pseudo three-dimensional panoramic image PI_focus that reflects the actual position of the dentition (see FIG. 10). This panoramic image PI_focus is called a 3D autofocus image.
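 A minimal sketch of the shift-and-add idea is given below. In practice the per-frame shifts would come from the calibrated gain of the imaging space; the linear shift curve, the integer-pixel shift, and the wrap-around roll used here are simplifying assumptions for illustration only.

```python
import numpy as np

def shift_and_add(frames, shifts_px):
    """frames: (n_frames, rows, cols) frame data for one energy region.
    shifts_px: horizontal shift (in pixels) applied to each frame before summing;
    choosing the shifts for a given depth brings that tomographic plane into focus."""
    acc = np.zeros_like(frames[0], dtype=float)
    for frame, s in zip(frames, shifts_px):
        acc += np.roll(frame, int(round(s)), axis=1)   # integer shift for simplicity
    return acc / len(frames)

# Illustrative use: 10 small frames, shifts increasing linearly with frame index
frames = np.random.rand(10, 64, 32)
shifts = 0.8 * np.arange(10)            # stand-in for the calibrated shift curve
panorama_column = shift_and_add(frames, shifts)
print(panorama_column.shape)            # (64, 32)
```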
[Creation of a panoramic 3D image (step S32)]
 Next, the data processor 35 reads from the first storage unit 34 the tough-bone correction data prepared in advance for each energy region ER_n, and corrects the panoramic image PI_focus for each energy region ER_n using these correction data. Normalized panoramic 3D images (3D autofocus images) are thereby generated for the three energy regions ER_n (see FIGS. 10 and 11).
[Removal/reduction of obstructive shadows (step S33)]
 The data processor 35 then applies obstructive-shadow removal/reduction processing to the panoramic 3D image generated for each of the three energy regions ER_n. The resulting panoramic 3D images are displayed on the display 36.
 As shown schematically in FIG. 12, when the dentition TR in the jaw JW is the imaging target, the cervical spine CS is an object that obstructs imaging. In panoramic imaging, the shadow of the cervical spine is superimposed on the panoramic image, and this superimposition is unavoidable in a certain range of X-ray irradiation angles. When the dentition (gums) is the imaging target, it is therefore desirable to remove or reduce the obstructive shadow caused by the cervical spine from the created panoramic 3D image. The method of this removal/reduction is detailed later as a separate item.
[Setting of X-ray directions (step S34)]
 Next, through interactive information exchange with the operator (a dentist or the like), the data processor 35 designates a position of interest P_int on the panoramic 3D image currently displayed. For example, the operator, observing the panoramic 3D image, may designate a position P_int because he or she wants to know the type of substance (jaw structure) there. This designation is also made, for example, when the operator wants to know the type of metal used as a tooth filling.
 In response, the data processor 35 designates "L - 1" positions P_add in the vicinity of the position of interest P_int, so that a total of L positions P_int and P_add are set. These L points are needed to solve the C(L, N) combinations of simultaneous equations, described later, required when calculating the X-ray absorption coefficients of substances having thickness. Here N is the number of substances having thickness that lie along the L X-ray irradiation directions corresponding to the position of interest P_int and the "L - 1" nearby positions P_add, and "vicinity" means within the extent of what appears to be the same substance as the one indicated by the position of interest.
 For this purpose, the operator need only observe the CT image described later beforehand and know how many kinds of substances with thickness lie along the L X-ray irradiation directions in the three-dimensional CT image. Since this number of kinds equals N, L positions satisfying L ≥ N are set automatically or interactively so that solutions can be obtained from these equations.
 For example, as shown in FIG. 13, if there are substances of three different thicknesses along the X-ray direction (virtual X-ray beam direction) projected onto the designated position of interest P_int, then N = 3. In this case the data processor 35 may, for example, add two or more points around the position of interest P_int, equidistant from it and 0.5 mm (default value) away, so that a total of three or more points (= L) are set. The data processor 35 may also rely on instructions from the operator for the positions and number of these additional points.
 Since these L positions must not be so far apart that they pass through mutually different substances, processing for confirming this with the operator may also be added.
 When the L points have been designated in this way, the data processor 35 reads out the analysis data, processed in block B22 and stored in advance in the first storage unit 34, that describe the geometric structure of the imaging space IS and, based on these analysis data, calculates the three-dimensional X-ray direction (virtual three-dimensional X-ray beam direction) projected onto each of the L positions P_int and P_add described above. Each three-dimensional X-ray direction is calculated from the direction, projected onto the XY plane, in which the X-ray focal spot FP of the X-ray tube 21 views the corresponding position P_int or P_add on the panoramic 3D image, and from the inclination angle of that direction with respect to the XY plane; this direction and angle information is obtained from the analysis data and used in the calculation.
 The data indicating the three-dimensional directions of the X-rays projected onto the L positions (points) obtained in this way are temporarily stored in the first storage unit 34.
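 A sketch of the geometry implied here: the 3-D ray direction toward a designated point is split into its projection onto the XY plane (an azimuth) and its tilt out of that plane. The coordinate values below are invented for illustration; the real values come from the calibration analysis data of block B22.

```python
import numpy as np

def ray_direction(focal_spot_xyz, target_xyz):
    """Unit vector from the X-ray focal spot FP to a designated point, plus its
    azimuth in the XY plane and its tilt angle with respect to the XY plane."""
    fp = np.asarray(focal_spot_xyz, dtype=float)
    pt = np.asarray(target_xyz, dtype=float)
    d = pt - fp
    d /= np.linalg.norm(d)
    azimuth_deg = np.degrees(np.arctan2(d[1], d[0]))   # direction projected on XY
    tilt_deg = np.degrees(np.arcsin(d[2]))             # inclination to the XY plane
    return d, azimuth_deg, tilt_deg

print(ray_direction((0.0, -300.0, 0.0), (12.0, 40.0, 8.0)))   # illustrative numbers
```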
[Reconstruction of a CT image (step S35)]
 Meanwhile, the data processor 35 reads the CT frame data stored in the first storage unit 34. These frame data are the frames collected at fixed intervals (for example, 75 fps) from the modules Bm that form the acquisition window 22A shown in FIG. 4. The data processor 35 adds the frame data of the three energy regions ER_n collected at each X-ray irradiation angle, that is, at each projection angle, to one another pixel by pixel to generate projection data. The sets of projection data generated for the plural projection angles of the half scan are then subjected to CT reconstruction based on, for example, an iterative (successive approximation) method. A three-dimensional CT image of the jaw JW is thereby generated.
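 A minimal sketch of forming one projection from the photon-counting frames: the counts of the three energy regions are summed pixel by pixel and converted to line-integral data. The log transform against a blank (air) count is an assumption added here for completeness; it is not spelled out in the description above.

```python
import numpy as np

def projection_from_counts(frames_er, blank_counts):
    """frames_er: (3, rows, cols) counts for ER1..ER3 at one projection angle.
    Returns -log(I/I0), the line-integral data handed to the CT reconstruction."""
    total = np.sum(frames_er, axis=0).astype(float)
    total = np.clip(total, 1.0, None)                  # avoid log(0)
    return -np.log(total / np.asarray(blank_counts, dtype=float))

frames = np.random.poisson(lam=200.0, size=(3, 8, 8))  # illustrative counts
blank = np.full((8, 8), 900.0)                         # illustrative blank-scan counts
print(projection_from_counts(frames, blank).shape)     # (8, 8)
```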
 Here, when a tooth has a metal filling, that part is blurred by metal artifacts, which makes the later interpretation and measurement difficult. When metal artifacts have occurred, it is therefore desirable to redo the CT reconstruction with a procedure that removes them. A CT reconstruction method involving removal of such metal artifacts is described below.
<Removal of metal artifacts>
 Here, assuming that a metal part exists within one tomographic plane, the processing for removing the artifacts of that part will be described. An example of this processing is shown in FIG. 14; the processing of steps S1 to S11 shown in the figure is executed by the data processor 35.
 Step S1 (creation of a sinogram):
 In step S1, a sinogram is created based on the values of the projection data P_init from 0 to 360 degrees (see FIG. 15). The horizontal axis of the sinogram is the detector position and the vertical axis is the projection angle. The sinogram is obtained by dividing the number of X-rays actually emitted by the values measured at the detector and taking the logarithm.
 Step S2 (detection of the metal part):
 Next, in step S2, the values of the projection data P_init are scanned along the horizontal axis of the sinogram, and portions whose values change greatly from those of the neighboring projection data are detected (see FIG. 16). As a measure for deciding whether a portion is strongly attenuated, that is, whether it is a metal part, the standard deviation with respect to the mean of the projection data may be used, or information on the abrupt rise and fall of the projection data approaching the metal part may be used, for example through analysis of variance on a one-dimensional ROI set in the neighborhood.
 In this embodiment, the processing from step S3 onward is executed on the assumption that the positions of the metal part in the projection data were completely identified in step S2. If the metal part cannot be identified from a one-dimensional ROI, processing corresponding to step S2 may be applied using the (two-dimensional) sinogram. If, even so, the metal part cannot be identified in all the projection data, the portions where extraction of the metal part failed can be estimated by exploiting the fact that the region of the metal part in the projection data follows a sine curve. Furthermore, even if not all metal parts can be extracted, they can be estimated in the processing from step S3 onward.
 Step S3 (image reconstruction):
 In step S3, the projection data of the metal part extracted in step S2 are set to 0 and a value of 1 is assigned to the remaining projection data regions, and this operation is performed for all projection angles. These projection data are reconstructed into an image by simple back-projection (see FIG. 17). If the metal part of the projection data could not be identified in step S2 for some angle, the back-projection is computed using only the projection data excluding that angle. In the image created in this way, only the metal part is 0 and everything else has non-zero values. A suitable value, for example 1 (1/cm), is then assigned to the zero-valued pixels of this simple back-projection image, and the other pixel values are set to 0. This becomes the basic image I_metal of the metal part.
 Since this basic image I_metal corresponds to the metal part, the data of I_metal serve as the metal part even if, as mentioned above, the metal part of the projection data was not completely detected in step S2. Using these data makes it clear from which position to which position of a given projection the metal part extends.
 Step S4 (calculation of projection data):
 Next, in step S4, the projection data P_metal of only the metal part of the basic image I_metal created in step S3 are calculated (see FIG. 18). Of the projection data P_init, the projection data outside the metal part (that is, outside the region where P_metal ≠ 0) are denoted P_org (see FIG. 19).
 Step S5 (image reconstruction):
 In step S5, the non-metal projection data P_org (with P_org = 0 in the portions where P_metal ≠ 0) are reconstructed into an image by an iterative method (ML-EM, OS-EM, or the like) (see FIG. 20). Of this reconstructed image, the image in the region where I_metal ≠ 0 is referred to as I_recon.
 Step S6 (image addition):
 In step S6, the pixel values of the region of image I_recon where I_metal = 1 are set to I_recon = 0, after which the two images I_recon and I_metal are added to each other pixel by pixel. The sum image I_all is thereby created (see FIG. 21).
 Step S7 (projection processing):
 Further, in step S7, this sum image I_all is forward-projected to create the projection data P_all (see FIG. 22).
 Step S8 (difference calculation and correction):
 In step S8, the projection data P_all of the sum image I_all and the non-metal projection data P_org are compared pixel by pixel. That is, the difference between the two is calculated, and this difference data is used for correction with an iterative reconstruction method (for example an algebraic image reconstruction method such as ART) (see FIG. 22). In doing so, only the reconstruction region where P_metal = 0 is corrected; the projection data in the metal part are not processed. This is done for all projection angles, and the pixel values outside the metal part are corrected. An image I_recon(n) is created in this way; here n = 1.
 Step S9 (image addition and projection calculation):
 In step S9, the values of the region of image I_recon(1) where I_metal = 1 are set to I_recon(1) = 0, and the two images I_recon(1) and I_metal are added to each other pixel by pixel. The sum image I_all(1) is thereby created. This sum image I_all(1) is then forward-projected to create the projection data P_all(1) (see FIG. 23).
 Step S10 (difference calculation and correction):
 Next, in step S10, as in step S8, the two sets of projection data P_all(1) and P_org are compared (differenced) pixel by pixel and a correction is applied. That is, the image I_recon(1) is corrected to create I_recon(n+1).
 Step S11 (judgment of convergence):
 Next, in step S11, convergence of the computation is judged by checking whether the error between the projection P_all(n) (in the portions where P_metal = 0) and the projection data P_org has fallen below a fixed value. If this convergence condition is satisfied, the processing returns to the main processing; if it is not satisfied, the processing from step S9 onward is executed repeatedly.
 As described above, in CT imaging the metal part is recognized in the projection data, the recognized metal-part projection data are filled with a fixed value, and reconstruction is performed again, yielding a CT image in which metal artifacts originating from the metal part are removed or suppressed. The three-dimensional shape of the metal part can thereby be measured more accurately.
 In the image created once the convergence condition is satisfied (or, in some cases, once the maximum number of comparisons is exceeded), the metal artifacts are reduced; the metal part is given the preset linear attenuation coefficient, and the remaining parts are given values estimated from the correctly measured projection data.
 The metal-artifact reduction operation described above may also be performed within an iterative image reconstruction simultaneously with beam-hardening removal and scatter removal for the CT image.
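 The iterative part of steps S8 to S11 can be pictured with the toy sketch below. It only mirrors the control flow (forward-project the combined image, compare with the measured non-metal data, correct, repeat until the error is small); the projector, the corrector, and the stopping tolerance are stand-ins invented for illustration, not the actual operators of the embodiment.

```python
import numpy as np

def forward_project(image):
    """Toy parallel-beam projector: row sums and column sums (stand-in for A*x)."""
    return np.concatenate([image.sum(axis=0), image.sum(axis=1)])

def refine_outside_metal(i_recon, i_metal, p_org, n_iter=50, tol=1e-3, step=0.01):
    """Iteratively correct the non-metal pixels so that the projections of
    (I_recon outside metal) + (I_metal) approach the measured data P_org."""
    img = np.where(i_metal == 1, 0.0, i_recon)         # zero the metal region
    for _ in range(n_iter):
        p_all = forward_project(np.where(i_metal == 1, 1.0, img))
        err = p_org - p_all
        if np.abs(err).max() < tol:
            break
        # crude additive correction spread uniformly over the non-metal pixels
        img += step * err.mean() * (i_metal == 0)
    return img

metal = np.zeros((8, 8)); metal[3:5, 3:5] = 1          # illustrative metal mask
truth = np.ones((8, 8)) * 0.2
p_org = forward_project(np.where(metal == 1, 1.0, truth))
print(refine_outside_metal(np.zeros((8, 8)), metal, p_org).round(2))
```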
[Three-dimensional display of the CT image (step S36)]
 Next, the data processor 35 displays the generated three-dimensional CT image of the jaw JW three-dimensionally on the display 36. The operator (a dentist or the like) can thereby observe the form of the jaw JW three-dimensionally and estimate its state, for example how many substances with thickness lie along which X-ray irradiation direction and which substances are of the same kind.
[Measurement of substance thicknesses (step S37)]
 Here, automatically by the data processor 35 or interactively with the operator, the thicknesses t_i1 to t_iN (i = 1 to M, where M is the number of energy regions ER_n and N is the number of substances with thickness) of the substances lying along each of the total of L X-ray directions that view the L points P_int and P_add are measured along those directions. Each thickness is measured, for example, by the operator designating, with point ROIs on the three-dimensional CT image, the boundary positions of each substance along each X-ray direction, and by computing these designated positions using the analysis data mentioned above (block B22). As a result, the thicknesses of the N (L ≥ N) substances lying along each of the X-ray directions of interest are measured.
[Calculation of the absorption coefficients of the substances (step S38)]
 When the preparation is complete, the data processor 35 calculates, for each energy region ER_n, the X-ray absorption coefficients μ_1j, μ_2j, ..., μ_Mj of the substances (M is the number of energy regions; j = 1 to N, N being the number of substances) from the C(L, N) sets of simultaneous equations, based on the data of the 3D panoramic images from which the obstructive shadows were removed (or reduced) in step S33 and the substance thicknesses t_i1 to t_iN obtained in step S37.
 As an example, as shown in FIG. 13, suppose that the number of substances lying along the observed L X-ray directions is 3, that their thicknesses are t_1, t_2, t_3, and that their X-ray absorption coefficients are μ_1j, μ_2j, μ_3j (j = 1 to 3). Then the following simultaneous equations are solved:
   log(pixel value (measured value) at the position on the 3D panoramic image of energy region ER_1) = μ_11·t_1 + μ_12·t_2 + μ_13·t_3 + k_1 (k_1: constant),
   log(pixel value (measured value) at the position on the 3D panoramic image of energy region ER_2) = μ_21·t_1 + μ_22·t_2 + μ_23·t_3 + k_2 (k_2: constant),
   log(pixel value (measured value) at the position on the 3D panoramic image of energy region ER_3) = μ_31·t_1 + μ_32·t_2 + μ_33·t_3 + k_3 (k_3: constant).
 These equations hold for each of the total of L positions, namely the position P_int to be observed and the nearby positions P_add; in the present example L = 3. Nine equations are therefore available for the nine unknowns μ_1j, μ_2j, μ_3j (j = 1 to 3, N = 3), so the solution, that is, three X-ray absorption coefficients per energy region and nine in total, is obtained by a computational method such as singular value decomposition of the simultaneous equations. When L > N, the simultaneous equations can be solved for each of the C(L, N) combinations, giving C(L, N) solutions containing statistical errors, which are used to identify the types of the jaw structures (substances with thickness).
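 A sketch of solving these simultaneous equations with a least-squares (pseudo-inverse) step, which is equivalent in spirit to the SVD-based solution mentioned above. The numbers are illustrative only; rows correspond to the L measurement points and columns to the N materials for each energy region.

```python
import numpy as np

def absorption_coefficients(log_values, thicknesses, k_const):
    """log_values: (M, L) log pixel values at the L points for each of M energy regions.
    thicknesses: (L, N) path lengths t_lj of the N materials along each of the L rays.
    k_const: (M,) constants k_i. Returns mu of shape (M, N), solved by least squares
    (for L > N this yields a statistically averaged solution)."""
    log_values = np.asarray(log_values, float)
    t = np.asarray(thicknesses, float)
    mu = np.empty((log_values.shape[0], t.shape[1]))
    for i, (b, k) in enumerate(zip(log_values, k_const)):
        mu[i], *_ = np.linalg.lstsq(t, b - k, rcond=None)
    return mu

# Illustrative 3 energy regions, 3 points, 3 materials
t = np.array([[2.0, 1.0, 0.5], [2.1, 0.9, 0.6], [1.9, 1.1, 0.4]])
mu_true = np.array([[0.3, 0.5, 1.2], [0.25, 0.4, 0.9], [0.2, 0.3, 0.7]])
b = mu_true @ t.T + 0.1                       # synthetic log values, k = 0.1
print(absorption_coefficients(b, t, k_const=[0.1, 0.1, 0.1]).round(3))
```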
[Creation of a scatter diagram and identification of the substance types (step S39)]
 Next, the data processor 35 creates a two-dimensional scatter diagram for each substance from the solution for the X-ray absorption coefficients μ_Mj obtained in step S38. This two-dimensional scatter diagram consists of the relative attenuation index RAI (Relative Attenuation Index) plotted on its vertical axis and the beam-quality change index SDI (Spectrum Deformation Index) plotted on its horizontal axis. Of course, the scatter diagram only needs to include the relative attenuation index RAI and the beam-quality change index SDI; it may also be a three-dimensional or higher-dimensional scatter diagram combining other indices.
 Of these, the relative attenuation index RAI is an index corresponding to the CT value of each pixel of the three-dimensional volume space, and is defined for each substance and each pixel as RAI = μ_1j + μ_2j + ... + μ_Mj (M: number of energy regions, j = 1 to N, N being the number of substances). The beam-quality change index SDI, by contrast, is an index representing the substance-specific linear absorption coefficient and is defined, for each substance and each pixel of the three-dimensional volume space, as, for example, SDI = μ_3j - μ_1j; it may also be defined as SDI = μ_2j - μ_1j.
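 A small sketch of turning the solved coefficients into scatter-plot coordinates, using the definitions just given (RAI as the sum over energy regions, SDI as μ_3j - μ_1j). The input array is the illustrative output of the previous sketch, not measured data.

```python
import numpy as np

def scatter_coordinates(mu):
    """mu: (M, N) absorption coefficients, M energy regions x N materials.
    Returns (RAI, SDI) per material: RAI = sum_i mu_ij, SDI = mu_3j - mu_1j."""
    mu = np.asarray(mu, float)
    rai = mu.sum(axis=0)
    sdi = mu[2] - mu[0]
    return rai, sdi

mu = np.array([[0.3, 0.5, 1.2], [0.25, 0.4, 0.9], [0.2, 0.3, 0.7]])  # illustrative
print(scatter_coordinates(mu))   # RAI and SDI for each of the 3 materials
```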
 この2次元散布図の一例を図24に示す。 An example of this two-dimensional scatter diagram is shown in FIG.
 これが済むと、データプロセッサ35は、事前準備されていた第1の記憶部34のデータベースから参照用の散布図(図25参照)を読み出し(ブロックB23)、ステップS39で生成した散布図と比較する。これにより、オペレータがパノラマ3D画像上で指定した関心位置Pintに存在する物質の種類を同定することができる。つまり、図24に示す散布図上のプロット群が、図25の示す参照用散布図の複数のプロット群のどれと一致するか又は最も近いかよって、いま関心ある位置の1つ又は複数の厚さのある物質の種類を判定できる。図24の散布図は、図13に例示するインプラントを透過するX線方向に沿った物質の同定を意図しているものとすると、プロット群A=歯槽骨、プロット群B=金属として同定される。 After this, the data processor 35 reads a reference scatter diagram (see FIG. 25) from the database of the first storage unit 34 prepared in advance (block B23), and compares it with the scatter diagram generated in step S39. . Thereby, it is possible to identify the type of substance present at the position of interest P int designated on the panorama 3D image by the operator. That is, depending on which plot group on the scatter diagram shown in FIG. 24 matches or is closest to one of the plot groups in the reference scatter diagram shown in FIG. The type of a certain substance can be determined. The scatter plot of FIG. 24 is identified as plot group A = alveolar bone and plot group B = metal, assuming the identification of substances along the X-ray direction passing through the implant illustrated in FIG. .
Although not specifically illustrated, when different types of metal are present, their positions on the scatter diagram also differ according to the metal type, so different metals can be distinguished and identified. Note that the accuracy of discrimination between metals may vary depending on their X-ray transmission characteristics.
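A minimal sketch, under the assumption that each reference scatter diagram can be summarized by one centroid per substance (the substance names and centroid values below are hypothetical), of assigning each measured (SDI, RAI) point to the nearest reference cluster:

```python
# Nearest-centroid comparison of measured (SDI, RAI) points against a
# reference table; the actual reference scatter diagrams are stored in the
# first storage unit 34.
import numpy as np

reference = {                      # hypothetical centroids (SDI, RAI)
    "alveolar bone": (0.10, 1.0),
    "cortical bone": (0.18, 1.6),
    "metal":         (0.60, 3.5),
}

def identify(points):
    names = list(reference)
    cents = np.array([reference[n] for n in names])
    labels = []
    for p in np.atleast_2d(points):
        d = np.linalg.norm(cents - p, axis=1)   # Euclidean distance in the 2-D map
        labels.append(names[int(np.argmin(d))])
    return labels

print(identify([(0.12, 1.1), (0.58, 3.4)]))     # -> ['alveolar bone', 'metal']
```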
[Display of identification result (step S40)]
The result thus identified is displayed on the display 36 by the data processor 35, for example.
FIG. 26 shows another reference scatter diagram. With such a diagram it is also possible to provide information for diagnosing whether a disease is affecting a substance such as bone, for example whether osteoporosis has developed or periodontal disease is present.
As described above, because the distribution is specific to the observed site, preparing a database in advance makes it possible to draw site-specific distributions and identify substances, for example the type of metal, the diagnosis of osteoporosis, or cancellous bone being resorbed by periodontal disease. The number of data points for each site is only LCN, yet an estimate can be made even from a single point. In medical CT, substances are identified only by the RAI on the vertical axis referenced to water (the so-called CT value); with a photon counting detector, which allows the SDI axis to be added to form a two-dimensional scatter diagram, the differences between substances can be expressed with greater emphasis, which is a major advantage. Identification accuracy is improved accordingly.
In the example scatter diagram of FIG. 25, it appears unnatural that gold (a prosthesis) is distributed to the left of cortical bone on the SDI axis. In the case of gold, however, the count in the lowest energy band drops sharply, so μ in that band is calculated to be large, which is considered to be the cause. Even with this peculiar behavior, gold is still distributed at its own characteristic position, so from the viewpoint of substance identification, registering it in the database means the identification is not affected.
<How to remove obstacle shadows>
Here, the principle of the method of removing the obstacle shadow from the panoramic image executed in the present embodiment will be described.
A tomosynthesis image g(i) (where i represents a position perpendicular to the X-ray beam) can be expressed as the sum of the objects fj(i) (j = 1, 2, …, n) obtained by dividing the object into n layers along the X-ray beam direction, each convolved with the degradation function hj(i) (j = 1, 2, …, n) produced by the superposition operation at that position.
Here, all functions are assumed to have been converted to logarithms. That is, the tomosynthesis image g(i) is expressed as follows, where * denotes convolution.
   g(i)=f1(i)*h1(i)+f2(i)*h2(i)+…+fj(i)*hj(i)+…+fn(i)*hn(i)
In the tomosynthesis process, the image gj(i) of a specific layer is obtained by making hj(i) of that layer a delta function and making the degradation functions of the other layers as uniform as possible.
Now, simplifying the model, consider a three-layer model composed of two specific layers f1(i) and f2(i) and the remainder fr(i):
   g(i)=f1(i)*h1(i)+f2(i)*h2(i)+fr(i)*hr(i)
Here, when focusing on the second layer, the following equation is established.
   h2(i)=δ(i)
   g2(i)=f1(i)*h1(i)+f2(i)*δ(i)+fr(i)*hr(i)
    = f1(i)*h1(i)+f2(i)+fr(i)*hr(i)

                        … (1)
Further, when focusing on the first layer, the following formula is established.
   h1(i)=δ(i)
   g1(i)=f1(i)*δ(i)+f2(i)*h2(i)+fr(i)*hr(i)
    = f1(i)+f2(i)*h2(i)+fr(i)*hr(i)

                        … (2)
If the component fr(i)*hr(i) is very small, the following equations are derived from equations (1) and (2).
   g2(i)=f1(i)*h1(i)+f2(i)   … (3)
   g1(i)=f1(i)+f2(i)*h2(i)   … (4)
Here, if g1(i) is the image focused on the dentition and g2(i) is the image focused on the cervical vertebrae, the true dentition image f1(i) after removal of the obstacle shadow is given by
   f1(i)=(g1(i)-g2(i)*h2(i))/(1-h1(i)*h2(i)) … (5)
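For reference, the algebra that leads from equations (3) and (4) to equation (5) can be written out as follows; the division by 1 − h1(i)*h2(i) is to be read as deconvolution by δ(i) − h1(i)*h2(i).

```latex
% Sketch of the steps from (3),(4) to (5).
\begin{align*}
  f_2(i) &= g_2(i) - f_1(i) * h_1(i) && \text{from (3)} \\
  g_1(i) &= f_1(i) + \bigl(g_2(i) - f_1(i) * h_1(i)\bigr) * h_2(i) && \text{substituted into (4)} \\
  g_1(i) - g_2(i) * h_2(i) &= f_1(i) * \bigl(\delta(i) - h_1(i) * h_2(i)\bigr) \\
  f_1(i) &= \frac{g_1(i) - g_2(i) * h_2(i)}{1 - h_1(i) * h_2(i)} && \text{i.e.\ equation (5)}
\end{align*}
```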
Equation (5) means that an image of the dentition alone is obtained by taking
   (the image focused on the dentition) −
   (the image focused on the cervical vertebrae * the blur function at the dentition position)
and dividing the result by one minus the convolution of the blur functions corresponding to that position.
Even when the component fr(i)*hr(i) is not small, the above calculation still holds if it is subtracted from g1(i) and g2(i) as a bias component with a certain fixed parameter value, so the obstacle shadow can still be removed or reduced.
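A rough sketch, not the patent's code, of evaluating equation (5) in the Fourier domain, where the convolutions become multiplications and the division becomes a per-frequency division; g1, g2, h1, h2 and the toy data below are placeholders, and the small eps regularization is an added assumption to avoid division by near-zero values.

```python
# Equation (5) applied in the frequency domain:
#   F1 = (G1 - G2*H2) / (1 - H1*H2)
import numpy as np

def remove_obstacle_shadow(g1, g2, h1, h2, eps=1e-3):
    G1, G2 = np.fft.fft2(g1), np.fft.fft2(g2)
    H1, H2 = np.fft.fft2(np.fft.ifftshift(h1)), np.fft.fft2(np.fft.ifftshift(h2))
    denom = 1.0 - H1 * H2
    denom = np.where(np.abs(denom) < eps, eps, denom)   # regularise small values
    F1 = (G1 - G2 * H2) / denom
    return np.real(np.fft.ifft2(F1))                    # estimated dentition-only image

# usage with toy data (shapes only; real frame data come from the detector)
g1 = np.random.rand(64, 64); g2 = np.random.rand(64, 64)
h = np.zeros((64, 64)); h[32, 30:35] = 0.2              # crude centred blur kernel
f1 = remove_obstacle_shadow(g1, g2, h, h)
```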
[Specific flow of removal of obstacle shadow]
Next, an example of processing for removing and reducing obstacle shadows executed in accordance with the above-described principle in the X-ray imaging apparatus 1 will be specifically described with reference to FIGS. Here, it is assumed that the obstacle shadow is a shadow of the cervical vertebra reflected in the panoramic image of the dentition of the subject.
First, the frame data relating to the panoramic imaging of the jaw of the subject P are read out (FIG. 27, step S51). These frame data are stored in the first storage unit 34. As shown in FIG. 28(A), this data acquisition was performed so that a reference tomographic plane (cross section) preset in the dentition TR is in focus. Reference sign CS in FIG. 28(A) denotes the cervical vertebrae.
Next, the data processor 35 uses the frame data stored in the first storage unit 34 to reconstruct and display a panoramic image focused on the reference tomographic plane of the dentition TR (see FIG. 28(A)) (step S52). An example of this panoramic image is shown in FIG. 28(B). The reconstructed panoramic image data are stored in the first storage unit 34.
Specifically, in step S52 the optimally focused image of the reference tomographic plane is reconstructed by the tomosynthesis method, that is, by shift-and-add processing. The second ROM 40B stores gain curve information for optimally focusing the predetermined reference tomographic plane of the dentition TR. An example of this gain curve is shown in FIG. 29. The gain curve indicates, for each angle, the amount (the differential value of the curve) by which the individual strip-shaped frame data (frame images) are shifted relative to one another in the shift-and-add processing.
The shift amount given by this gain curve corresponds to the amount of blur used when creating the blur function at the dentition position. The data processor 35 therefore models the blur as a Gaussian function whose standard deviation matches this shift amount, and convolves it with the frame data as a shift-variant blur function that changes with angular position. The data processor 35 then adds the frame data to one another at each position (mutual addition of pixel values) to create a panoramic image of the reference tomographic plane along the dentition TR.
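A minimal sketch of the shift-and-add step under simplifying assumptions (integer shifts, a constant gain value, random placeholder frames); the Gaussian pre-blur and the exact mapping from rotation angle to panorama column described above are omitted.

```python
# Shift each strip-shaped frame by the position given by the gain curve and
# average the overlapping strips into the panorama.
import numpy as np

def shift_and_add(frames, shifts, width):
    """frames: (n_frames, height, strip_w); shifts: per-frame placement in pixels."""
    n, h, w = frames.shape
    pano = np.zeros((h, width))
    count = np.zeros((1, width)) + 1e-9
    for k in range(n):
        x0 = int(round(shifts[k]))              # lateral placement from the gain curve
        if x0 < 0 or x0 >= width:
            continue
        x1 = min(x0 + w, width)
        pano[:, x0:x1] += frames[k, :, :x1 - x0]
        count[:, x0:x1] += 1
    return pano / count                         # average of overlapping strips

frames = np.random.rand(300, 128, 8)
shifts = np.cumsum(np.full(300, 2.5))           # integral of a constant gain value
panorama = shift_and_add(frames, shifts, width=800)
```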
Next, in the same manner as above, the data processor 35 uses the frame data stored in the first storage unit 34 to reconstruct and display a panoramic image focused on a tomographic plane passing through the cervical vertebrae CS (see FIG. 28(A)) (step S53). An example of this panoramic image is shown in FIG. 28(C). The data of this reconstructed panoramic image are also stored in the first storage unit 34.
For the image focused on the cervical vertebrae CS, a gain curve is created by assuming a trajectory TCS obtained by folding back the trajectory TTR used to focus on the dentition plane. The gain curve is created by relying on the positions of the wires (lead phantoms) of the calibration phantom; in this case it has the shape of the gain curve described above turned upside down (see FIG. 30). The shift-and-add operation is performed based on the amount of shift indicated by this gain curve (that is, its differential value). In general, the shift amount near the front teeth is small in the reconstruction of the dentition TR, whereas in the reconstruction of the cervical vertebrae CS the acquired frame data are moved by large amounts before the addition operation.
The frame data used for reconstructing the panoramic images in steps S52 and S53 may be frame data having energies belonging to all of the energy regions ER1 to ER3, or frame data having energies belonging to only some of the energy regions. Even when frame data spanning all the energy regions are used, the regions may be weighted before being added instead of simply averaged.
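As an illustration of the two options just mentioned, a simple average over the energy bands versus a weighted sum; the weights and the placeholder frame array are arbitrary examples.

```python
# Combine per-energy-band frame data before reconstruction.
import numpy as np

frames_per_band = np.random.rand(3, 300, 128, 8)     # (band, frame, height, strip_w)

simple = frames_per_band.mean(axis=0)                # plain average over ER1..ER3

weights = np.array([0.2, 0.3, 0.5])                  # e.g. emphasise the high-energy band
weighted = np.tensordot(weights, frames_per_band, axes=(0, 0))
```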
Since the detector 22 is of the photon counting type, the frame data can be detected with the energy discriminated for each pixel. This makes it possible to use transmission data (frame data) that sensitively reflect parameters such as the change in beam quality that occurs when X-ray photons pass through the substances of the jaw. Panoramic images in which a particular substance of the jaw is emphasized can therefore be acquired, and the obstacle shadow removal method can be carried out on the basis of such images.
Next, the data processor 35 performs scaling processing (step S54). FIG. 31 shows the optimally focused image ITR along the reference tomographic plane of the dentition TR together with vertical lines; these vertical lines indicate the positions of specific angles measured with the calibration phantom. FIG. 32 likewise shows the optimally focused image ICS along the tomographic plane of the cervical vertebrae CS together with vertical lines, which also indicate the positions of specific angles measured with the calibration phantom.
To remove an obstacle shadow such as that of the cervical vertebrae, FIGS. 31 and 32 must be scaled to the same image size. The spacing of the vertical lines in FIG. 31 differs from that in FIG. 32 at every position, so the size of FIG. 32 cannot be matched to that of FIG. 31 with a single reduction ratio. The scales of the two images are therefore matched using the frames delimited by these vertical lines. For a simple match, it suffices to make each region bounded by two vertical lines the same size in both images; for higher accuracy it is preferable to match the corresponding positions for each strip-shaped frame data.
Here, each region of the image ICS is reduced so that the rectangular regions delimited by the fifteen vertical lines of the optimally focused image ICS of the cervical vertebrae CS coincide with the corresponding regions delimited by the fifteen vertical lines of the optimally focused image ITR of the dentition TR. The two images ITR and ICS thereby become the same size.
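A sketch of the piecewise scaling, assuming the calibration-line columns of both images are known; the boundary positions below are hypothetical and scipy.ndimage.zoom is used for the per-region resampling.

```python
# Resize each strip of the cervical-spine image between two calibration lines
# to the width of the corresponding strip in the dentition image.
import numpy as np
from scipy.ndimage import zoom

def match_regions(img_cs, bounds_cs, bounds_tr, height):
    pieces = []
    for k in range(len(bounds_cs) - 1):
        strip = img_cs[:, bounds_cs[k]:bounds_cs[k + 1]]
        target_w = bounds_tr[k + 1] - bounds_tr[k]
        zy = height / strip.shape[0]
        zx = target_w / strip.shape[1]
        pieces.append(zoom(strip, (zy, zx), order=1))   # bilinear resampling
    return np.hstack(pieces)

img_cs = np.random.rand(128, 1000)
bounds_cs = [0, 260, 520, 780, 1000]        # calibration-line columns in the CS image
bounds_tr = [0, 200, 400, 600, 800]         # corresponding columns in the TR image
img_cs_scaled = match_regions(img_cs, bounds_cs, bounds_tr, height=128)
```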
Next, the data processor 35 executes blurring processing of the optimally focused image ICS of the cervical vertebrae CS (step S55). The gain curve is used to determine the blur function for this operation. More specifically, a Gaussian function is created whose full width at half maximum (FWHM) equals the gain value at each angular position of the gain curve, and this Gaussian is convolved with the optimally focused image ICS of the cervical vertebrae CS.
In the blurred cervical-vertebra image ICS′ (not shown) obtained in this way, the dentition portion is strongly blurred, whereas the cervical-vertebra portion has roughly the same degree of blur as the cervical-vertebra portion of the image focused on the dentition plane.
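A small sketch of the blurring step, using a single representative gain value as the FWHM of the Gaussian; the actual processing is shift-variant, with a different width at each angular position, and the gain value shown is hypothetical.

```python
# Convert a FWHM taken from the gain curve into a Gaussian sigma and blur the
# cervical-spine image along the panorama's horizontal axis.
import numpy as np
from scipy.ndimage import gaussian_filter1d

FWHM_TO_SIGMA = 1.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # sigma = FWHM / 2.355

def blur_for_gain(image_cs, gain_value):
    sigma = gain_value * FWHM_TO_SIGMA
    return gaussian_filter1d(image_cs, sigma=sigma, axis=1)

img_cs = np.random.rand(128, 800)
img_cs_blurred = blur_for_gain(img_cs, gain_value=12.0)     # hypothetical gain value
```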
Note that this blurring processing, that is, the processing of step S55, may be omitted depending on the situation.
After this blurring processing, the data processor 35 takes the difference between the optimally focused image ITR of the dentition TR and the optimally focused image ICS′ of the cervical vertebrae CS that has been reduced and blurred as described above, by subtracting or dividing their pixel values pixel by pixel (step S56). When the pixel values are expressed as natural logarithms, subtraction is used, that is, ITR − ICS′ is computed between pixel values; when the natural logarithm has not been taken, the division ITR / ICS′ is performed. Furthermore, since density unevenness that depends on the blur functions appears in these pixel values, the blur function at the cervical-vertebra focal plane and the blur function at the dentition plane are convolved with each other, the result is subtracted from 1, and the image is divided by this value for each angular position; this yields an image focused on the dentition plane in which the influence of the cervical vertebrae is reduced as much as possible. It is desirable to add this processing to step S56.
The difference image obtained by this subtraction or division is the optimally focused image ITR_REV of the dentition TR after the obstacle shadow caused by the cervical vertebrae has been removed (or reduced); it is displayed on the display 36 and stored, for example, in the first storage unit 34 (step S57).
FIG. 33 shows another example of the optimally focused image ITR of the dentition TR before removal of the cervical vertebrae. In this image ITR, the white shadow (obstacle shadow) of the cervical vertebrae CS appears in the central portion, the region around the front teeth is blurred, and the depiction quality is low. In contrast, as shown in FIG. 34, in the optimally focused image ITR_REV of the dentition TR after removal of the cervical vertebrae, the cervical-vertebra image is almost completely removed and the front teeth are depicted correspondingly more clearly.
In this way, the shadow of the cervical vertebrae CS, which acts as an obstacle shadow, is reliably removed from or reduced in the panoramic image ITR of the dentition TR by relatively simple correction processing after data acquisition. The depiction quality of the dentition TR is improved accordingly.
As described above, the hybrid X-ray imaging apparatus 1 according to the present embodiment is a dental X-ray imaging apparatus capable of substance identification and can function as an all-in-one X-ray diagnostic apparatus. Substance identification is possible even in CT imaging with a small field of view, and it remains stable even when a metal prosthesis is present, because the thickness of the metal part can be measured more accurately from a CT image in which the metal artifacts caused by the metal part have been removed or reduced. Since panoramic imaging and CT imaging are performed simultaneously in a single scan along a circular orbit, the X-ray exposure of the patient is low.
In this embodiment, photon counting technology is adopted for the detector, so that everything from soft tissue to hard tissue can be imaged. An iterative (successive approximation) CT reconstruction technique is adopted, so CT images with sharp tissue boundaries can be obtained with a small-field detector. Moreover, since obstacle shadows in the panoramic image are removed or reduced, substance identification is possible in combination with the panoramic image.
Furthermore, since panoramic imaging and CT imaging can be performed simultaneously in a single scan, no registration between the reconstructed panoramic image and the CT image is required.
From the above, a single hybrid X-ray imaging apparatus according to the present invention can play the two roles of a conventional panoramic imaging apparatus and a conventional CT apparatus. That is, highly reliable diagnosis is possible using only this hybrid X-ray imaging apparatus 1, shortening treatment time and improving treatment accuracy and throughput. The range of clinical applications also expands dramatically, for example to observing peri-implantitis or the positional relationship between soft-tissue tumors and hard tissue.
In FIG. 9 showing the functional blocks, block B1 corresponds to the scanning means, step S31 of block B3 corresponds to the panoramic image generating means, step S35 corresponds to the CT image generating means, step S32 corresponds to the panoramic image display means, step S34 corresponds to the position-of-interest designating means, and steps S36 to S40 correspond to the substance identifying means.
[Other embodiments]
In the above-described embodiment, panoramic imaging and CT imaging are completed by one scan at the same time, but the X-ray diagnostic apparatus according to the present invention is not necessarily limited to this. For example, panoramic imaging and CT imaging can be separately executed twice, the reconstructed panoramic image and CT image can be aligned, and the same processing as described above can be performed. In this case, the detector used in the CT imaging apparatus may be a conventional integral X-ray detector, and it is not always necessary to use a photon counting X-ray detector.
The X-ray imaging apparatus 1 described above may be configured so that imaging is performed with the patient lying on his or her back on a dental chair (supine position). It suffices to have an imaging system that rotates with the jaw of the subject positioned between the X-ray tube and the detector; for example, such an imaging system may be fixed to the back of a chair or to a support column so that the patient is imaged in a sitting or standing position. Such an imaging system may also be attached to a fixed structure such as the wall or ceiling of a building or a vehicle. Furthermore, such an imaging system may be configured as a portable unit that is placed on the patient's shoulder or set up behind an ordinary chair for imaging.
The X-ray imaging apparatus 1 described above may also include an imaging system in which the X-ray tube 21 and the detector 22 can be rotationally driven around the same rotation center independently of each other while the distance between them is varied.
Furthermore, although a photon counting detector is used as the detector 22 in the embodiment of the present application, a so-called integrating detector, in which a scintillator and photoelectric elements are combined and electrical signals are accumulated for a fixed time and output as frame data, may also be used.
Furthermore, the obstacle shadow removal method of the present application can also be applied to an X-ray imaging apparatus using a detector composed of a scintillator and a CCD (charge coupled device), to which the tomosynthesis method cannot be applied. In that case, an image taken under a rotational trajectory of the X-ray tube and detector focused on the cross section passing through the dentition and an image taken under a rotational trajectory focused on the cross section passing through the cervical vertebrae are prepared separately, and the obstacle shadow removal method described above is carried out between these images.
According to the present invention, the position, distance, and angle parameters of the X-ray tube, the 3D reference tomographic plane, and the detector, which define the structure of the imaging space, can be analyzed easily and accurately by measurement using a phantom, and calibration can be performed in preparation for imaging. It is therefore possible to provide a radiographic imaging apparatus that can image an object three-dimensionally with high accuracy.
1 Dental hybrid X-ray imaging apparatus (functionally constituting a substance identification apparatus)
16 Rotating unit
17 Console
21 X-ray tube
22 Detector
23 Slit
33 Controller (one of the elements that functionally realize the various means)
34 First storage unit
35 Data processor (one of the elements that functionally realize the various means)
36 Display
37 Input device
40A to 40D ROM
C Semiconductor cell
Cp Detection circuit
Sn Pixel

Claims (15)

  1.  A substance identification apparatus using X-ray panoramic/CT imaging, comprising:
     an X-ray tube that emits X-rays;
     a detector comprising a detection circuit in which cells, each of which outputs an electric pulse corresponding to the energy of an X-ray photon every time incidence of the photon is detected, are arranged two-dimensionally so as to form a two-dimensional group of pixels, a measurement circuit that counts, for each pixel, the number of photons detected by each cell of the detection circuit separately for two or more energy bands (number of energy bands M ≥ 2, M being a positive integer), and an output circuit that outputs, for each energy band, digital electric signals corresponding to the counts of the cells as frame data;
     scanning means for rotating the X-ray tube and the detector around an object to be imaged while the X-ray tube and the detector face each other across the object, and for acquiring, for each energy band, the frame data output by the detector;
     panoramic image generating means for generating, for each energy band, data of a panoramic image of the object based on the frame data;
     CT image generating means for generating a CT (computed tomography) image of the object;
     panoramic image display means for displaying the panoramic image on a monitor;
     position-of-interest designating means for designating a position of interest on the panoramic image displayed on the monitor in accordance with an operator's instruction; and
     substance identifying means for identifying at least the type of one or more substances having a thickness in the object that are present in the projection direction, in the imaging space between the X-ray tube and the detector, corresponding to the position of interest, based on the X-ray transmission data of the panoramic image and morphological information of the substances obtained from the CT image.
  2.  The substance identification apparatus according to claim 1, wherein the substance identifying means comprises:
     CT image presenting means for presenting the CT image generated by the CT image generating means to the operator;
     determining means for determining, from the CT image and interactively with the operator, that N substances having a thickness (N being a positive integer of 1 or more) exist as the one or more substances having a thickness along the projection direction;
     projection direction specifying means for specifying a total of L mutually neighboring projection directions projected into the imaging space along the projection direction from each of L positions (L ≥ N) that include the position of interest, corresponding to the projection direction in the imaging space, and positions in its vicinity;
     thickness calculating means for calculating, as the morphological information, the thicknesses ti1, ti2, …, tiN (i = 1 to M; N being the number of substances) of the N substances of the object that are present along the passing direction of each of the L projection directions, from the data of the CT image;
     absorption coefficient calculating means for calculating, when the absorption coefficients of the N substances for the X-rays in the respective energy bands are denoted μ1j, μ2j, …, μMj (j = 1 to N), the values of the absorption coefficients μ1j, μ2j, …, μMj by solving LCN sets of simultaneous equations having the thicknesses of the substances as known quantities and the absorption coefficients as variables;
     scatter diagram creating means for creating, for each of the N substances, a two-dimensional scatter diagram in which μ1j + μ2j + … + μMj (j = 1 to N) is taken on the vertical axis as a relative attenuation index RAI and μaj − μbj (a and b being positive integers satisfying a, b ≤ M, with a > b) is taken on the horizontal axis as a quality change index SDI; and
     substance type determining means for determining the type of each of the one or more substances present in the projection direction corresponding to the position of interest by comparing the scatter diagram created for each substance by the scatter diagram creating means with a preset reference scatter diagram showing the relative attenuation index and the quality change index serving as a reference for each substance.
  3.  The substance identification apparatus according to claim 2, further comprising presenting means for presenting to a user the types of the one or more substances present in the projection direction corresponding to the position of interest, as determined by the substance type determining means.
  4.  The substance identification apparatus according to any one of claims 1 to 3, wherein the two or more energy bands are three or more energy bands.
  5.  The substance identification apparatus according to any one of claims 1 to 4, wherein the CT image generating means comprises:
     projection data generating means for generating projection data by adding the frame data acquired for each energy band for each projection angle; and
     reconstructing means for reconstructing three-dimensional data of the CT image of the object based on the projection data for each projection angle.
  6.  The substance identification apparatus according to claim 5, wherein the projection data generating means is configured to generate the projection data under a half-scan scheme that uses frame data acquired during approximately half a revolution of the X-ray tube and the detector rotated around the object by the scanning means.
  7.  The substance identification apparatus according to any one of claims 1 to 6, wherein the scanning means is configured to rotate the X-ray tube and the detector on the same circular orbit while always keeping them facing each other,
     the CT image generating means is configured to generate the CT image based on the frame data output from the detector, and
     the panoramic image generating means and the CT image generating means are configured to generate the panoramic image and the CT image, respectively, using the frame data output from the detector when the X-ray tube and the detector are rotated simultaneously only once by the scanning means.
  8.  The substance identification apparatus according to any one of claims 1 to 7, wherein the detection circuit has a semiconductor layer formed of a semiconductor material that directly converts the X-rays into the electric pulses and divided for each pixel, a charge electrode laminated on one surface of the semiconductor layer, and collecting electrodes laminated on the other surface of the semiconductor layer and divided for each pixel,
     the measurement circuit and the output circuit are built in as an ASIC (Application Specific Integrated Circuit) layer integrated with the detection circuit, and
     the semiconductor material is CdTe, CZT, or TlBr.
  9.  The substance identification apparatus according to any one of claims 1 to 8, wherein the object is the jaw of a subject, and
     the panoramic image generating means is configured to create, for each energy band, a pseudo three-dimensional panoramic image in which a two-dimensional panoramic image, automatically focused on the position of each tooth of the dentition present in the jaw based on the frame data and reflecting its actual position and shape, is developed three-dimensionally.
  10.  The substance identification apparatus according to any one of claims 1 to 9, wherein the panoramic image generating means comprises:
     first panoramic image generating means for generating, for each energy band from the frame data, a first panoramic image focused on the dentition as the object of interest;
     second panoramic image generating means for generating, for each energy band from the frame data, a second panoramic image focused on the cervical vertebrae present behind the jaw as an obstacle; and
     obstacle shadow removing/reducing means for removing or reducing the reflection of the shadow of the cervical vertebrae from the first panoramic image based on the first and second panoramic images.
  11.  The substance identification apparatus according to any one of claims 2 to 10, further comprising:
     image creating means for creating an image based on the frame data acquired in response to the X-rays emitted from the X-ray tube while a phantom, which is arranged in the imaging space and has markers that are thereby positioned on a predetermined tomographic plane in the imaging space and whose known positional information can be imaged with the X-rays, is arranged in the imaging space;
     first calculating means for calculating distance information between the X-ray tube and the detector and height information of the X-ray tube with respect to the detector, based on the known positional information of the markers and the positional information of the markers obtained from the image;
     second calculating means for calculating parameters defining the positional relationship of the X-ray tube, the detector, and the tomographic plane in the imaging space, based on the calculation result of the first calculating means and the data; and
     storage means for storing the parameters calculated by the second calculating means as calibration data,
     wherein the projection direction specifying means is configured to specify, using the calibration data, the total of M mutually neighboring projection directions projected into the imaging space.
  12.  The substance identification apparatus according to any one of claims 1 to 11, wherein the CT image generating means comprises:
     artifact reducing means for reducing metal artifacts of a metal when the metal is included as part of the one or more substances in the projection direction corresponding to the position of interest; and
     means for generating the CT image again in a state in which the artifacts have been reduced by the artifact reducing means.
  13.  A substance identification method applied to a system comprising:
     an X-ray tube that emits X-rays;
     a detector comprising a detection circuit in which cells, each of which outputs an electric pulse corresponding to the energy of an X-ray photon every time incidence of the photon is detected, are arranged two-dimensionally so as to form a two-dimensional group of pixels, a measurement circuit that counts, for each pixel, the number of photons detected by each cell of the detection circuit separately for two or more energy bands (number of energy bands M ≥ 2, M being a positive integer), and an output circuit that outputs, for each energy band, digital electric signals corresponding to the counts of the cells as frame data; and
     scanning means for rotating the X-ray tube and the detector around an object to be imaged while the X-ray tube and the detector face each other across the object, and for acquiring, for each energy band, the frame data output by the detector,
     the method comprising the steps of:
     generating, for each energy band, data of a panoramic image of the object based on the frame data;
     generating a CT (computed tomography) image of the object;
     displaying the panoramic image on a monitor;
     designating a position of interest on the panoramic image displayed on the monitor in accordance with an operator's instruction; and
     identifying at least the type of one or more substances having a thickness in the object that are present in the projection direction, in the imaging space between the X-ray tube and the detector, corresponding to the position of interest, based on the X-ray transmission data of the panoramic image and morphological information of the substances obtained from the CT image.
  14.  The substance identification method according to claim 13, wherein the identifying step comprises:
     presenting the generated CT image to the operator;
     determining, from the CT image and interactively with the operator, that N substances having a thickness (N being a positive integer of 1 or more) exist as the one or more substances having a thickness along the projection direction;
     specifying a total of L mutually neighboring projection directions projected into the imaging space along the projection direction from each of L positions (L ≥ N) that include the position of interest, corresponding to the projection direction in the imaging space, and positions in its vicinity;
     calculating, as the morphological information, the thicknesses ti1, ti2, …, tiN (i = 1 to M; N being the number of substances) of the N substances of the object that are present along the passing direction of each of the L projection directions, from the data of the CT image;
     calculating, when the absorption coefficients of the N substances for the X-rays in the respective energy bands are denoted μ1j, μ2j, …, μMj (j = 1 to N), the values of the absorption coefficients μ1j, μ2j, …, μMj by solving LCN sets of simultaneous equations having the thicknesses of the substances as known quantities and the absorption coefficients as variables;
     creating, for each of the N substances, a two-dimensional scatter diagram in which μ1j + μ2j + … + μMj (j = 1 to N) is taken on the vertical axis as a relative attenuation index RAI and μaj − μbj (a and b being positive integers satisfying a, b ≤ M, with a > b) is taken on the horizontal axis as a quality change index SDI; and
     determining the type of each of the one or more substances present in the projection direction corresponding to the position of interest by comparing the created scatter diagram for each substance with a preset reference scatter diagram showing the relative attenuation index and the quality change index serving as a reference for each substance.
  15.  The substance identification method according to claim 14, further comprising the step of presenting to a user the determined types of the one or more substances present in the projection direction corresponding to the position of interest.
PCT/JP2014/062631 2013-05-10 2014-05-12 Substance identification device and substance identification method employing x-ray panoramic/ct photographing WO2014181889A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2015515917A JPWO2014181889A1 (en) 2013-05-10 2014-05-12 Substance identification apparatus and substance identification method using X-ray panorama / CT imaging

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2013-100400 2013-05-10
JP2013100400 2013-05-10

Publications (1)

Publication Number Publication Date
WO2014181889A1 true WO2014181889A1 (en) 2014-11-13

Family

ID=51867359

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2014/062631 WO2014181889A1 (en) 2013-05-10 2014-05-12 Substance identification device and substance identification method employing x-ray panoramic/ct photographing

Country Status (2)

Country Link
JP (1) JPWO2014181889A1 (en)
WO (1) WO2014181889A1 (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0341933A (en) * 1989-03-03 1991-02-22 Matsushita Electric Ind Co Ltd Radiograph processing method and photographing device
JP2007111526A (en) * 2005-10-17 2007-05-10 Siemens Ag Method and device for segmenting substance in x-ray image
WO2009133896A1 (en) * 2008-04-30 2009-11-05 株式会社モリタ製作所 Medical x-ray ct imaging device
WO2013047788A1 (en) * 2011-09-28 2013-04-04 株式会社テレシステムズ Image processor and image processing method

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9743893B2 (en) * 2011-12-21 2017-08-29 Carestream Health, Inc. Dental imaging with photon-counting detector
US20150004558A1 (en) * 2011-12-21 2015-01-01 Carestream Health, Inc. Dental imaging with photon-counting detector
US10405813B2 (en) 2015-02-04 2019-09-10 Dental Imaging Technologies Corporation Panoramic imaging using multi-spectral X-ray source
EP3053525A3 (en) * 2015-02-04 2016-09-21 Dental Imaging Technologies Corporation Panoramic imaging using multi-spectral x-ray source
WO2016171186A1 (en) * 2015-04-20 2016-10-27 株式会社ジョブ Data processing device and data processing method for x-ray examination, and x-ray examination apparatus provided with said device
US10502698B2 (en) 2015-04-20 2019-12-10 Job Corporation Data processing apparatus and data processing method for X-ray examination, and X-ray examination system provided with the data processing apparatus
WO2017170408A1 (en) * 2016-03-31 2017-10-05 株式会社ジョブ X-ray detection system, x-ray device, and device and method for processing x-ray detection data
JPWO2017170408A1 (en) * 2016-03-31 2019-02-07 株式会社ジョブ X-ray detection system, X-ray apparatus, and apparatus and method for processing X-ray detection data
US11016040B2 (en) 2017-05-16 2021-05-25 Job Corporation Apparatus and method of processing data acquired in x-ray examination, and x-ray examination system equipped with the apparatus
JP2018196015A (en) * 2017-05-18 2018-12-06 キヤノン株式会社 Solid state imaging device, imaging apparatus, and imaging method
JP7223070B2 (en) 2017-05-18 2023-02-15 キヤノン株式会社 Solid-state imaging device and imaging device
JP2021153346A (en) * 2017-05-18 2021-09-30 キヤノン株式会社 Solid state image sensor and imaging apparatus
WO2018235823A1 (en) 2017-06-20 2018-12-27 株式会社ジョブ X-ray device, x-ray inspection method, and data processing apparatus
JPWO2019003506A1 (en) * 2017-06-30 2020-03-26 株式会社島津製作所 Tomographic image generation method and radiation imaging apparatus
WO2019003506A1 (en) * 2017-06-30 2019-01-03 株式会社島津製作所 Tomographic image generation method and radiographic apparatus
KR102182649B1 (en) * 2018-11-01 2020-11-24 오스템임플란트 주식회사 Method and Apparatus for generating panoramic image, computer-readable recording medium
KR20200050117A (en) * 2018-11-01 2020-05-11 오스템임플란트 주식회사 Method and Apparatus for generating panoramic image, computer-readable recording medium
WO2021085700A1 (en) * 2019-10-29 2021-05-06 주식회사 레이 Method for generating calibration parameter for dual-energy cone-beam ct image
CN113164152A (en) * 2019-10-29 2021-07-23 瑞丽有限公司 Calibration parameter generation method for dual-energy cone-beam CT image
KR102070373B1 (en) * 2019-10-29 2020-01-28 주식회사 레이 method of obtaining calibration parameters for dual-energy Cone-Beam Computed-Tomography images
CN113164152B (en) * 2019-10-29 2022-02-08 瑞丽有限公司 Calibration parameter generation method for dual-energy cone-beam CT image
CN113876344A (en) * 2020-07-02 2022-01-04 佳能医疗系统株式会社 X-ray CT apparatus and method
CN112200798A (en) * 2020-10-29 2021-01-08 佛山市南海区广工大数控装备协同创新研究院 Printed circuit board detection method based on X-ray layering technology
CN112200798B (en) * 2020-10-29 2024-02-02 佛山市南海区广工大数控装备协同创新研究院 Printed circuit board detection method based on X-ray layering technology
CN116433476A (en) * 2023-06-09 2023-07-14 有方(合肥)医疗科技有限公司 CT image processing method and device
CN116433476B (en) * 2023-06-09 2023-09-08 有方(合肥)医疗科技有限公司 CT image processing method and device

Also Published As

Publication number Publication date
JPWO2014181889A1 (en) 2017-02-23

Similar Documents

Publication Publication Date Title
WO2014181889A1 (en) Substance identification device and substance identification method employing x-ray panoramic/ct photographing
US20230371804A1 (en) Dental imaging with photon-counting detector
US9269168B2 (en) Volume image reconstruction using data from multiple energy spectra
US9743893B2 (en) Dental imaging with photon-counting detector
JP6080766B2 (en) Image processing apparatus and image processing method
JP5942099B2 (en) Substance identification apparatus and imaging system operating method
JP2015144898A (en) Data processing apparatus for radiation imaging
WO2014126189A1 (en) X-ray imaging device and x-ray imaging method
JP2015156966A (en) Dental x-ray imaging apparatus and image correction method
EP3738512A1 (en) Systems and methods for automatic tube potential selection in dual energy imaging
JP5944012B2 (en) Dental imaging with a photon count detector
US20210153823A1 (en) Method for local x-ray bone density tomography
US20220233162A1 (en) Counting response and beam hardening calibration method for a full size photon-counting ct system
JP2014161590A (en) Dental x-ray imaging apparatus and image correction method in dental x-ray imaging
EP4290278A1 (en) Systems and methods for computed tomography
Baek et al. Simulated dental cone beam computed tomography using Timepix

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14794650

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2015515917

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14794650

Country of ref document: EP

Kind code of ref document: A1