WO2020209313A1 - Image processing device and method - Google Patents

Image processing device and method

Info

Publication number
WO2020209313A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
interest
brightness value
value
predetermined
Prior art date
Application number
PCT/JP2020/015889
Other languages
English (en)
Japanese (ja)
Inventor
聖 正岡
Original Assignee
朝日レントゲン工業株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 朝日レントゲン工業株式会社 filed Critical 朝日レントゲン工業株式会社
Priority to JP2021513687A (granted as JP7178621B2)
Publication of WO2020209313A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N23/00 Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00
    • G01N23/02 Investigating or analysing materials by transmitting the radiation through the material
    • G01N23/04 Investigating or analysing materials by transmitting the radiation through the material and forming images of the material
    • G01N23/044 Investigating or analysing materials by transmitting the radiation through the material and forming images of the material using laminography or tomosynthesis
    • G01N23/046 Investigating or analysing materials by transmitting the radiation through the material and forming images of the material using tomography, e.g. computed tomography [CT]
    • G01N23/06 Investigating or analysing materials by transmitting the radiation through the material and measuring the absorption
    • G01N23/18 Investigating the presence of flaws, defects or foreign matter
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding

Definitions

  • the present invention relates to a technique for detecting a foreign substance that may be contained in an object to be inspected.
  • Patent Document 1 detects a highlighted line from an X-ray transmission image of the object to be inspected and detects a foreign substance based on that highlighted line.
  • It is an object of the present invention to provide an image processing apparatus, an image processing method, and an inspection apparatus with high foreign matter detection accuracy.
  • the image processing apparatus comprises a division unit that divides an image based on X-ray photography of an object to be inspected into predetermined regions, a calculation unit that calculates the average brightness value in each of the predetermined regions, and a specifying unit that, when the brightness value of a pixel of interest is smaller than the average brightness value of the predetermined region to which it belongs by a predetermined value or more, calculates the maximum first brightness value within a first set number of pixels arranged on one side of the pixel of interest in a first direction, the maximum second brightness value within a second set number of pixels arranged on the other side in the first direction, the maximum third brightness value within a third set number of pixels arranged on one side in a second direction different from the first direction, and the maximum fourth brightness value within a fourth set number of pixels arranged on the other side in the second direction, and specifies the pixel of interest as the position of a foreign object when a predetermined condition is satisfied.
  • the predetermined condition is one of (a) a first condition that the ratios of the brightness value of the pixel of interest to the first and second brightness values are each equal to or less than a first threshold value, (b) a second condition that the ratios of the brightness value of the pixel of interest to the third and fourth brightness values are each equal to or less than a second threshold value, and (c) both the first condition and the second condition.
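As a concrete illustration of this first configuration, the following minimal sketch divides the image into square regions, computes each region's average brightness, and applies the window-maximum ratio test with condition (c) (both directional conditions required). Everything here is an assumption made for the example, not the patented implementation: the function name, the choice of horizontal and vertical as the first and second directions, the equal set numbers on all four sides, and all default parameter values.

```python
import numpy as np

def find_dark_foreign_pixels(img, region=8, delta=10.0,
                             n_px=3, thr1=0.9, thr2=0.9):
    """Sketch of the window-maximum test for dark foreign objects.

    img    : 2-D array of brightness values
    region : side length of the predetermined regions (assumed square)
    delta  : pixel must be darker than its region average by at least this
    n_px   : set number of pixels examined per side (assumed equal for all four)
    thr1/2 : first and second threshold values
    """
    h, w = img.shape
    # Division unit / calculation unit: average brightness per region.
    means = np.zeros((h, w))
    for y0 in range(0, h, region):
        for x0 in range(0, w, region):
            means[y0:y0 + region, x0:x0 + region] = \
                img[y0:y0 + region, x0:x0 + region].mean()
    hits = []
    for y in range(n_px, h - n_px):
        for x in range(n_px, w - n_px):
            p = float(img[y, x])
            if p >= means[y, x] - delta:        # not dark enough: skip
                continue
            # Maximum brightness within n_px pixels on each of the four sides.
            b1 = img[y, x - n_px:x].max()       # first direction, one side
            b2 = img[y, x + 1:x + 1 + n_px].max()
            b3 = img[y - n_px:y, x].max()       # second direction
            b4 = img[y + 1:y + 1 + n_px, x].max()
            cond1 = p / b1 <= thr1 and p / b2 <= thr1   # condition (a)
            cond2 = p / b3 <= thr2 and p / b4 <= thr2   # condition (b)
            if cond1 and cond2:                 # condition (c): both hold
                hits.append((y, x))
    return hits
```

On a uniform image containing a single sufficiently dark pixel, only that pixel satisfies both ratio conditions and is reported as a candidate foreign-object position.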
  • the image processing apparatus comprises a division unit that divides an image based on X-ray photography of an object to be inspected into predetermined regions, a calculation unit that calculates the average brightness value in each of the predetermined regions, and a specifying unit that, when the brightness value of a pixel of interest is smaller than the average brightness value of the predetermined region to which it belongs by a predetermined value or more, calculates a first brightness value, namely the brightness value of a pixel that is brighter than the pixel of interest and either as far as possible from or as close as possible to it, within a first set number of pixels arranged on one side of the pixel of interest in a first direction; a second brightness value determined in the same way within a second set number of pixels arranged on the other side in the first direction; a third brightness value determined in the same way within a third set number of pixels arranged on one side in a second direction different from the first direction; and a fourth brightness value determined in the same way within a fourth set number of pixels arranged on the other side in the second direction, and specifies the pixel of interest as the position of a foreign object when a predetermined condition is satisfied.
  • the predetermined condition is one of (a) a first condition that the ratios of the brightness value of the pixel of interest to the first and second brightness values are each equal to or less than a first threshold value, (b) a second condition that the ratios of the brightness value of the pixel of interest to the third and fourth brightness values are each equal to or less than a second threshold value, and (c) both the first condition and the second condition.
  • the image processing apparatus comprises a division unit that divides an image based on X-ray photography of an object to be inspected into predetermined regions, a calculation unit that calculates the average brightness value in each of the predetermined regions, and a specifying unit that, when the brightness value of a pixel of interest is smaller than the average brightness value of the predetermined region to which it belongs by a predetermined value or more, calculates a first brightness value, namely the average brightness value of a first set number of pixels lined up at a first predetermined distance from the pixel of interest on one side in a first direction together with a second set number of pixels lined up at a second predetermined distance on the other side in the first direction, and a second brightness value, namely the average brightness value of a third set number of pixels lined up at a third predetermined distance on one side in a second direction different from the first direction together with a fourth set number of pixels lined up at a fourth predetermined distance on the other side in the second direction, and specifies the pixel of interest as the position of a foreign object when a predetermined condition is satisfied.
  • the predetermined condition is one of (a) a first condition that the ratio of the brightness value of the pixel of interest to the first brightness value is equal to or less than a first threshold value, (b) a second condition that the ratio of the brightness value of the pixel of interest to the second brightness value is equal to or less than a second threshold value, and (c) both the first condition and the second condition.
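The offset-average configuration above can be sketched for a single pixel of interest as follows. All names and default values are illustrative assumptions; for simplicity the predetermined distances and set numbers are taken to be equal on both sides of each direction, and horizontal/vertical stand in for the first and second directions.

```python
import numpy as np

def offset_average_test(img, y, x, region_mean, delta=10.0,
                        n=3, d=4, thr1=0.9, thr2=0.9):
    """Single-pixel sketch of the offset-average test for a dark pixel.

    region_mean : average brightness of the region containing (y, x)
    n : set number of pixels averaged on each side (assumed equal)
    d : predetermined distance from the pixel of interest (assumed equal)
    """
    p = float(img[y, x])
    if p >= region_mean - delta:           # pixel not dark enough
        return False
    # First brightness value: average of n pixels starting d pixels away
    # on both sides in the first (horizontal) direction.
    left = img[y, x - d - n + 1:x - d + 1]
    right = img[y, x + d:x + d + n]
    l1 = float(np.concatenate([left, right]).mean())
    # Second brightness value: same in the second (vertical) direction.
    up = img[y - d - n + 1:y - d + 1, x]
    down = img[y + d:y + d + n, x]
    l2 = float(np.concatenate([up, down]).mean())
    # Condition (c): both ratio tests hold.
    return p / l1 <= thr1 and p / l2 <= thr2
```

Averaging pixels at a fixed offset, rather than scanning a window for an extremum, makes this variant cheaper per pixel and less sensitive to isolated noisy neighbors.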
  • the image processing apparatus comprises a division unit that divides an image based on X-ray photography of an object to be inspected into predetermined regions, a calculation unit that calculates the average brightness value in each of the predetermined regions, and a specifying unit that, when the brightness value of a pixel of interest is larger than the average brightness value of the predetermined region to which it belongs by a predetermined value or more, calculates the minimum first brightness value within a first set number of pixels arranged on one side of the pixel of interest in a first direction, the minimum second brightness value within a second set number of pixels arranged on the other side in the first direction, the minimum third brightness value within a third set number of pixels arranged on one side in a second direction different from the first direction, and the minimum fourth brightness value within a fourth set number of pixels arranged on the other side in the second direction, and specifies the pixel of interest as the position of a foreign object when a predetermined condition is satisfied.
  • the predetermined condition is one of (a) a first condition that the ratios of the brightness value of the pixel of interest to the first and second brightness values are each equal to or higher than a first threshold value, (b) a second condition that the ratios of the brightness value of the pixel of interest to the third and fourth brightness values are each equal to or higher than a second threshold value, and (c) both the first condition and the second condition.
  • the image processing apparatus comprises a division unit that divides an image based on X-ray photography of an object to be inspected into predetermined regions, a calculation unit that calculates the average brightness value in each of the predetermined regions, and a specifying unit that, when the brightness value of a pixel of interest is larger than the average brightness value of the predetermined region to which it belongs by a predetermined value or more, calculates a first brightness value, namely the brightness value of a pixel that is darker than the pixel of interest and either as far as possible from or as close as possible to it, within a first set number of pixels arranged on one side of the pixel of interest in a first direction; a second brightness value determined in the same way within a second set number of pixels arranged on the other side in the first direction; a third brightness value determined in the same way within a third set number of pixels arranged on one side in a second direction different from the first direction; and a fourth brightness value determined in the same way within a fourth set number of pixels arranged on the other side in the second direction, and specifies the pixel of interest as the position of a foreign object when a predetermined condition is satisfied.
  • the predetermined condition is one of (a) a first condition that the ratios of the brightness value of the pixel of interest to the first and second brightness values are each equal to or higher than a first threshold value, (b) a second condition that the ratios of the brightness value of the pixel of interest to the third and fourth brightness values are each equal to or higher than a second threshold value, and (c) both the first condition and the second condition.
  • the image processing apparatus comprises a division unit that divides an image based on X-ray photography of an object to be inspected into predetermined regions, a calculation unit that calculates the average brightness value in each of the predetermined regions, and a specifying unit that, when the brightness value of a pixel of interest is larger than the average brightness value of the predetermined region to which it belongs by a predetermined value or more, calculates a first brightness value, namely the average brightness value of a first set number of pixels lined up at a first predetermined distance from the pixel of interest on one side in a first direction together with a second set number of pixels lined up at a second predetermined distance on the other side in the first direction, and a second brightness value, namely the average brightness value of a third set number of pixels lined up at a third predetermined distance on one side in a second direction different from the first direction together with a fourth set number of pixels lined up at a fourth predetermined distance on the other side in the second direction, and specifies the pixel of interest as the position of a foreign object when a predetermined condition is satisfied.
  • the predetermined condition is one of (a) a first condition that the ratio of the brightness value of the pixel of interest to the first brightness value is equal to or higher than a first threshold value, (b) a second condition that the ratio of the brightness value of the pixel of interest to the second brightness value is equal to or higher than a second threshold value, and (c) both the first condition and the second condition.
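The bright-foreign-object configurations mirror the dark ones: the pixel of interest must exceed its region average by the predetermined value, the surrounding minimum brightness values are taken instead of maxima, and the ratio tests flip to "equal to or higher than" the thresholds. A hedged single-pixel sketch of the window-minimum variant, with every name and default value an assumption for the example:

```python
import numpy as np

def is_bright_foreign_pixel(img, y, x, region_mean, delta=10.0,
                            n_px=3, thr1=1.1, thr2=1.1):
    """Single-pixel sketch of the mirrored bright-spot test."""
    p = float(img[y, x])
    if p <= region_mean + delta:           # pixel not bright enough
        return False
    # Minimum brightness within n_px pixels on each of the four sides.
    b1 = float(img[y, x - n_px:x].min())   # first direction, one side
    b2 = float(img[y, x + 1:x + 1 + n_px].min())
    b3 = float(img[y - n_px:y, x].min())   # second direction
    b4 = float(img[y + 1:y + 1 + n_px, x].min())
    cond1 = p / b1 >= thr1 and p / b2 >= thr1   # condition (a)
    cond2 = p / b3 >= thr2 and p / b4 >= thr2   # condition (b)
    return cond1 and cond2                 # condition (c): both hold
```

In an X-ray transmission image, a dense foreign object appears dark or bright depending on how the detector output is mapped to brightness, which is presumably why both polarities are claimed.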
  • the image processing method comprises a division step of dividing an image based on X-ray photography of an object to be inspected into predetermined regions, a calculation step of calculating the average brightness value in each of the predetermined regions, and a specifying step of, when the brightness value of a pixel of interest is smaller than the average brightness value of the predetermined region to which it belongs by a predetermined value or more, calculating the maximum first brightness value within a first set number of pixels arranged on one side of the pixel of interest in a first direction, the maximum second brightness value within a second set number of pixels arranged on the other side in the first direction, the maximum third brightness value within a third set number of pixels arranged on one side in a second direction different from the first direction, and the maximum fourth brightness value within a fourth set number of pixels arranged on the other side in the second direction, and specifying the pixel of interest as the position of a foreign object when a predetermined condition is satisfied.
  • the predetermined condition is one of (a) a first condition that the ratios of the brightness value of the pixel of interest to the first and second brightness values are each equal to or less than a first threshold value, (b) a second condition that the ratios of the brightness value of the pixel of interest to the third and fourth brightness values are each equal to or less than a second threshold value, and (c) both the first condition and the second condition.
  • the image processing method comprises a division step of dividing an image based on X-ray photography of an object to be inspected into predetermined regions, a calculation step of calculating the average brightness value in each of the predetermined regions, and a specifying step of, when the brightness value of a pixel of interest is smaller than the average brightness value of the predetermined region to which it belongs by a predetermined value or more, calculating a first brightness value, namely the brightness value of a pixel that is brighter than the pixel of interest and either as far as possible from or as close as possible to it, within a first set number of pixels arranged on one side of the pixel of interest in a first direction; a second brightness value determined in the same way within a second set number of pixels arranged on the other side in the first direction; a third brightness value determined in the same way within a third set number of pixels arranged on one side in a second direction different from the first direction; and a fourth brightness value determined in the same way within a fourth set number of pixels arranged on the other side in the second direction, and specifying the pixel of interest as the position of a foreign object when a predetermined condition is satisfied.
  • the predetermined condition is one of (a) a first condition that the ratios of the brightness value of the pixel of interest to the first and second brightness values are each equal to or less than a first threshold value, (b) a second condition that the ratios of the brightness value of the pixel of interest to the third and fourth brightness values are each equal to or less than a second threshold value, and (c) both the first condition and the second condition.
  • the image processing method comprises a division step of dividing an image based on X-ray photography of an object to be inspected into predetermined regions, a calculation step of calculating the average brightness value in each of the predetermined regions, and a specifying step of, when the brightness value of a pixel of interest is smaller than the average brightness value of the predetermined region to which it belongs by a predetermined value or more, calculating a first brightness value, namely the average brightness value of a first set number of pixels lined up at a first predetermined distance from the pixel of interest on one side in a first direction together with a second set number of pixels lined up at a second predetermined distance on the other side in the first direction, and a second brightness value, namely the average brightness value of a third set number of pixels lined up at a third predetermined distance on one side in a second direction different from the first direction together with a fourth set number of pixels lined up at a fourth predetermined distance on the other side in the second direction, and specifying the pixel of interest as the position of a foreign object when a predetermined condition is satisfied.
  • the predetermined condition is one of (a) a first condition that the ratio of the brightness value of the pixel of interest to the first brightness value is equal to or less than a first threshold value, (b) a second condition that the ratio of the brightness value of the pixel of interest to the second brightness value is equal to or less than a second threshold value, and (c) both the first condition and the second condition.
  • the image processing method comprises a division step of dividing an image based on X-ray photography of an object to be inspected into predetermined regions, a calculation step of calculating the average brightness value in each of the predetermined regions, and a specifying step of, when the brightness value of a pixel of interest is larger than the average brightness value of the predetermined region to which it belongs by a predetermined value or more, calculating the minimum first brightness value within a first set number of pixels arranged on one side of the pixel of interest in a first direction, the minimum second brightness value within a second set number of pixels arranged on the other side in the first direction, the minimum third brightness value within a third set number of pixels arranged on one side in a second direction different from the first direction, and the minimum fourth brightness value within a fourth set number of pixels arranged on the other side in the second direction, and specifying the pixel of interest as the position of a foreign object when a predetermined condition is satisfied.
  • the predetermined condition is one of (a) a first condition that the ratios of the brightness value of the pixel of interest to the first and second brightness values are each equal to or higher than a first threshold value, (b) a second condition that the ratios of the brightness value of the pixel of interest to the third and fourth brightness values are each equal to or higher than a second threshold value, and (c) both the first condition and the second condition.
  • the image processing method comprises a division step of dividing an image based on X-ray photography of an object to be inspected into predetermined regions, a calculation step of calculating the average brightness value in each of the predetermined regions, and a specifying step of, when the brightness value of a pixel of interest is larger than the average brightness value of the predetermined region to which it belongs by a predetermined value or more, calculating a first brightness value, namely the brightness value of a pixel that is darker than the pixel of interest and either as far as possible from or as close as possible to it, within a first set number of pixels arranged on one side of the pixel of interest in a first direction; a second brightness value determined in the same way within a second set number of pixels arranged on the other side in the first direction; a third brightness value determined in the same way within a third set number of pixels arranged on one side in a second direction different from the first direction; and a fourth brightness value determined in the same way within a fourth set number of pixels arranged on the other side in the second direction, and specifying the pixel of interest as the position of a foreign object when a predetermined condition is satisfied.
  • the predetermined condition is one of (a) a first condition that the ratios of the brightness value of the pixel of interest to the first and second brightness values are each equal to or higher than a first threshold value, (b) a second condition that the ratios of the brightness value of the pixel of interest to the third and fourth brightness values are each equal to or higher than a second threshold value, and (c) both the first condition and the second condition.
  • the image processing method comprises a division step of dividing an image based on X-ray photography of an object to be inspected into predetermined regions, a calculation step of calculating the average brightness value in each of the predetermined regions, and a specifying step of, when the brightness value of a pixel of interest is larger than the average brightness value of the predetermined region to which it belongs by a predetermined value or more, calculating a first brightness value, namely the average brightness value of a first set number of pixels lined up at a first predetermined distance from the pixel of interest on one side in a first direction together with a second set number of pixels lined up at a second predetermined distance on the other side in the first direction, and a second brightness value, namely the average brightness value of a third set number of pixels lined up at a third predetermined distance on one side in a second direction different from the first direction together with a fourth set number of pixels lined up at a fourth predetermined distance on the other side in the second direction, and specifying the pixel of interest as the position of a foreign object when a predetermined condition is satisfied.
  • the predetermined condition is one of (a) a first condition that the ratio of the brightness value of the pixel of interest to the first brightness value is equal to or higher than a first threshold value, (b) a second condition that the ratio of the brightness value of the pixel of interest to the second brightness value is equal to or higher than a second threshold value, and (c) both the first condition and the second condition.
  • the inspection apparatus comprises a first image generation unit that generates a two-dimensional X-ray photographed image based on the detection result of an X-ray detection unit used for X-ray photography, with the object to be inspected as the object of that photography; a reconstruction unit that generates an image by projecting the two-dimensional X-ray photographed image onto a tomographic plane set at a predetermined position other than the position of the X-ray detection unit, and then reconstructs the generated image into a projected image, which is a two-dimensional image; and a specifying unit that specifies the position of a foreign object based on the projected image. The specifying unit includes any of the above-mentioned image processing apparatuses, and its division unit divides the projected image into the predetermined regions.
  • the detection accuracy of foreign matter can be improved.
  • FIG. 5A Perspective view showing an example of the positional relationship between a pixel group which is a part of the detection surface of the X-ray detector and one voxel.
  • FIG. 6A: Diagram showing the thickness of a voxel with respect to the X-ray incidence direction; diagram showing an example of an oblique quadrangular prism that approximates a voxel; perspective view showing a rectangle formed in a voxel; diagrams showing further examples of oblique prisms that approximate a voxel.
  • FIG. 1 is a diagram showing a schematic configuration of an inspection device according to an embodiment of the present invention.
  • the inspection device 100 shown in FIG. 1 includes a first X-ray irradiation unit 1A, a second X-ray irradiation unit 1B, a first X-ray detection unit 2A, a second X-ray detection unit 2B, a belt conveyor 3, a CPU 4, a ROM 5, a RAM 6, a VRAM 7, a display unit 8, an HDD 9, and an input unit 10.
  • the image processing device for processing the image is composed of the CPU 4, the ROM 5, the RAM 6, and the HDD 9.
  • the first X-ray irradiation unit 1A and the second X-ray irradiation unit 1B each irradiate the object T1 to be inspected with X-rays.
  • the X-rays emitted from the first X-ray irradiation unit 1A and the X-rays emitted from the second X-ray irradiation unit 1B each have a fan beam shape extending along the Y axis, and more specifically, a narrow fan beam shape.
  • the first X-ray irradiation unit 1A and the second X-ray irradiation unit 1B may be shared to form a single X-ray irradiation unit.
  • the X-rays emitted from the single X-ray irradiation unit may have a wide fan beam shape or a cone beam shape.
  • the X-ray irradiation direction from the first X-ray irradiation unit 1A to the first X-ray detection unit 2A and the X-ray irradiation direction from the second X-ray irradiation unit 1B to the second X-ray detection unit 2B are different from each other.
  • the irradiation direction of the X-rays emitted from the first X-ray irradiation unit 1A toward the first X-ray detection unit 2A is the direction orthogonal to the X-axis and the Y-axis, while the irradiation direction from the second X-ray irradiation unit 1B toward the second X-ray detection unit 2B is inclined from that orthogonal direction.
  • the first X-ray detection unit 2A and the second X-ray detection unit 2B each output a digital amount of electric signals corresponding to the incident X-rays at a constant frame rate.
  • the first X-ray detection unit 2A and the second X-ray detection unit 2B are line sensors extending along the Y-axis, respectively. Since the inspection device 100 employs the tomosynthesis method, the first X-ray detection unit 2A and the second X-ray detection unit 2B each have a plurality of X-ray detection elements in the X-axis direction as well.
  • the first X-ray detection unit 2A and the second X-ray detection unit 2B can each collect incident X-rays at a predetermined frame rate as image data of a digital electric amount corresponding to the amount of the X-rays.
  • this collected data is referred to as "frame data" (an example of a two-dimensional X-ray photographed image).
  • the first X-ray detection unit 2A and the second X-ray detection unit 2B may be shared to form a single X-ray detection unit. However, when the first X-ray irradiation unit 1A and the second X-ray irradiation unit 1B are shared, the first X-ray detection unit 2A and the second X-ray detection unit 2B are not shared.
  • the belt conveyor 3 moves the object to be inspected T1 placed on its belt toward the negative side of the X-axis with respect to the pair of the first X-ray irradiation unit 1A and the first X-ray detection unit 2A and the pair of the second X-ray irradiation unit 1B and the second X-ray detection unit 2B.
  • in the present embodiment this first moving mechanism (the belt conveyor 3) is used, but instead of the first moving mechanism, a second moving mechanism that moves the pair of the first X-ray irradiation unit 1A and the first X-ray detection unit 2A and the pair of the second X-ray irradiation unit 1B and the second X-ray detection unit 2B toward the positive side of the X-axis with respect to the object to be inspected T1 may be used.
  • the CPU 4 controls the entire inspection device 100 according to the programs and data stored in the ROM 5 and the HDD 9.
  • ROM 5 records fixed programs and data.
  • the RAM 6 provides working memory.
  • the CPU 4 operates so as to perform the function of generating an image according to a program stored in the HDD 9. That is, the CPU 4 also serves as an image generation unit that generates an image.
  • VRAM 7 temporarily stores image data.
  • the display unit 8 displays an image based on the image data stored in the VRAM 7.
  • the HDD 9 stores various programs, such as a radiography control program for controlling X-ray imaging operations, an image reconstruction processing program for generating reconstructed images, a foreign matter position identification processing program for identifying the position of foreign matter, and a position correction program, as well as various data such as parameter setting values and image data used when these programs are executed.
  • the input unit 10 is, for example, a keyboard, a pointing device, or the like, and inputs the content of the user operation.
  • the inspection device 100 performs X-ray imaging (step S1). Specifically, while the belt conveyor 3 is moving the object to be inspected T1, X-rays are exposed from the first X-ray irradiation unit 1A and the second X-ray irradiation unit 1B. The X-rays emitted from the first X-ray irradiation unit 1A pass through the imaging region of the object to be inspected T1 and enter the first X-ray detection unit 2A, and the X-rays emitted from the second X-ray irradiation unit 1B pass through the imaging region of the object to be inspected T1 and enter the second X-ray detection unit 2B.
  • the first X-ray detection unit 2A and the second X-ray detection unit 2B detect incident X-rays at a predetermined frame rate and sequentially output the corresponding two-dimensional digital data in frame units. This frame data is stored in the HDD 9.
  • the inspection device 100 generates projected images obtained by projecting onto tomographic slices set at predetermined positions excluding the positions of the X-ray detection units 2A and 2B (step S2). Specifically, the inspection device 100 projects the frame data onto each of 60 tomographic slices set at predetermined positions other than the positions of the X-ray detection units 2A and 2B, and reconstructs each result into a two-dimensional image (projected image). As a method of projecting frame data onto one slice to obtain one projected image, for example, the frame data is projected onto the central plane (one tomographic plane) in the depth direction of that slice.
  • other methods of obtaining the one projected image include: projecting the frame data onto the uppermost plane (one tomographic plane) in the depth direction of the slice; projecting the frame data onto the lowermost plane (one tomographic plane) in the depth direction of the slice; and projecting the frame data onto each of a plurality of tomographic planes included in the slice and then synthesizing the resulting projected images (for example, by simple averaging or weighted averaging).
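The plane-projection and synthesis options above can be sketched as follows. This is an illustrative sketch only; the function name, the array representation of plane projections, and the use of NumPy are assumptions, not part of the patent:

```python
import numpy as np

def combine_plane_projections(plane_projections, weights=None):
    """Combine projections onto several tomographic planes of one slice
    into a single projected image (simple or weighted averaging)."""
    stack = np.stack(plane_projections, axis=0)  # shape (n_planes, H, W)
    if weights is None:
        return stack.mean(axis=0)                # simple averaging
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                              # normalize the weights
    return np.tensordot(w, stack, axes=1)        # weighted averaging
```

Projecting onto the central, uppermost, or lowermost plane corresponds to passing a single-element list; synthesis over several planes corresponds to a multi-element list with or without explicit weights.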
  • the projected image obtained by projecting onto each tomographic slice set at a predetermined position other than the positions of the X-ray detection units 2A and 2B is a projected image on each slice obtained based on the principle of tomosynthesis.
  • the direction orthogonal to the X-axis and the Y-axis is the depth direction of the slices, and 60 tomographic slices are set at a pitch of 0.5 mm at predetermined positions excluding the positions of the X-ray detection units 2A and 2B, on the first X-ray irradiation unit 1A and second X-ray irradiation unit 1B side of the first X-ray detection unit 2A and the second X-ray detection unit 2B.
  • the predetermined positions other than the positions of the X-ray detection units 2A and 2B include the position of the object to be inspected T1, but the present invention is not limited thereto.
  • the projected images of the 60 slices derived from the first X-ray irradiation unit 1A and the first X-ray detection unit 2A are stored in the HDD 9.
  • the projected images of the 60 layers derived from the second X-ray irradiation unit 1B and the second X-ray detection unit 2B are also stored in the HDD 9.
  • the inspection device 100 identifies the position of the foreign matter for each projected image (step S3).
  • the details of the method for identifying the position of the foreign matter will be described later.
  • as a result, the detection accuracy of foreign matter can be improved compared with the case where the position of the foreign matter is specified on an X-ray image such as a simple X-ray transmission image.
  • the result of specifying the foreign matter position can be, for example, a binarized image showing the position of the foreign matter and the position other than the foreign matter with different luminance values.
  • the inspection device 100 removes erroneously detected portions from the foreign matter position identification results (step S4).
  • the inspection device 100 compares, for the same depth, the foreign matter position specified based on the projected image derived from the first X-ray irradiation unit 1A and the first X-ray detection unit 2A with the foreign matter position specified based on the projected image derived from the second X-ray irradiation unit 1B and the second X-ray detection unit 2B, and removes erroneously detected portions based on the comparison result. More specifically, through this comparison the inspection device 100 adopts as a foreign matter position only the pixels identified as foreign matter positions in both projected images, and does not adopt pixels identified as a foreign matter position in only one of the projected images.
  • this is because foreign matter present in the slice to be processed is detected at the same coordinate position in both projected images even though the X-ray irradiation angles differ, whereas a falsely detected portion not present in the slice to be processed is not.
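The comparison rule described above amounts to a logical AND of the two binarized foreign-matter masks for the same depth. A minimal sketch (NumPy and the function name are illustrative assumptions):

```python
import numpy as np

def remove_false_detections(mask_a, mask_b):
    """Keep a pixel as a foreign-matter position only when it is flagged
    in BOTH projected images of the same tomographic slice; pixels
    flagged in only one image are discarded as false detections."""
    return np.logical_and(mask_a, mask_b)
```

Applying this per slice implements the removal over all slices described in the text.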
  • the inspection device 100 executes this false detection removal process on all slices.
  • the inspection device 100 corrects the position (step S5).
  • the details of the position correction will be described later.
  • the inspection device 100 generates an output image and displays the output image on the display unit 8 (step S6).
  • examples of the output image include an image obtained by adding together all the projected images showing the positions of the foreign matter, reflecting the false detection removal and the position correction, that is, an image that displays the positions of the foreign matter in two dimensions.
  • another example of the output image is an image obtained by stacking the projected images showing the positions of the foreign matter, reflecting the false detection removal and the position correction, that is, an image that displays the positions of the foreign matter in three dimensions.
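Both output forms can be sketched as follows, assuming the per-slice foreign-matter images are available as equally sized 2D arrays (the function names and NumPy usage are illustrative assumptions, not part of the patent):

```python
import numpy as np

def output_2d(slice_images):
    """Two-dimensional display: add together the per-slice images."""
    return np.sum(np.stack(slice_images, axis=0), axis=0)

def output_3d(slice_images):
    """Three-dimensional display: stack the per-slice images along depth."""
    return np.stack(slice_images, axis=0)
```

The 2D form collapses depth by summation; the 3D form preserves it as the first array axis.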
  • <Step S2: generation of projected images> An example of the process of step S2 described above, that is, the process of projecting frame data onto tomographic slices and then reconstructing the generated images into projected images, will be described with reference to the flowchart of FIG.
  • the CPU 4 reads the defect registration data and creates a defect table (step S11).
  • in step S12, the CPU 4 reads the density correction image and creates the density correction data. Note that, unlike the present embodiment, steps S11 and S12 may be executed before step S1.
  • the CPU 4 reads the projection data (frame data) (step S13), and performs defect correction and density correction on the projection data (step S14).
  • for each set of projection data read in step S13, regarding the projection data derived from the first X-ray irradiation unit 1A and the first X-ray detection unit 2A, only the voxels through which the X-rays exposed from the first X-ray irradiation unit 1A pass need to be calculated, and regarding the projection data derived from the second X-ray irradiation unit 1B and the second X-ray detection unit 2B, only the voxels through which the X-rays exposed from the second X-ray irradiation unit 1B pass need to be calculated.
  • the CPU 4 may set the range of voxels to be calculated in each slice for each frame data in advance according to the size of the object to be inspected T1.
  • the CPU 4 calculates the actual position coordinates of each voxel vertex occupying the reconstructed area (step S15).
  • the CPU 4 sequentially reads the projection data obtained in the data collection process in step S13 for each frame (step S16). In the process of one step S16, one frame of projection data is read.
  • the CPU 4 convolves and integrates the projection data and the filter function (step S17).
  • the CPU 4 performs coordinate conversion of the system so as to be the coordinate system shown in FIG. 4 for each calculation result of the convolution integral (step S18).
  • specifically, the system including the second X-ray irradiation unit 1B, the reconstruction region R1, and the second X-ray detection unit 2B is rotated and translated so that the second X-ray irradiation unit 1B is at the origin and the center position of the second X-ray detection unit 2B lies in the positive direction of the Z-axis.
  • the Z-axis is an axis orthogonal to the X-axis and the Y-axis
  • the direction from the first X-ray irradiation unit 1A to the first X-ray detection unit 2A is the positive direction of the Z-axis.
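The rotation and translation of step S18 can be illustrated as follows, under the simplifying assumption that the geometry is tilted only within the X-Z plane (so a single rotation about the Y-axis suffices); the function and its signature are illustrative, not from the patent:

```python
import numpy as np

def to_source_coordinates(points, source_pos, detector_center):
    """Translate so the X-ray source sits at the origin, then rotate about
    the Y axis so the detector center lies on the positive Z axis."""
    p = np.asarray(points, dtype=float) - source_pos
    d = np.asarray(detector_center, dtype=float) - source_pos
    theta = np.arctan2(d[0], d[2])            # tilt of the beam axis from +Z
    c, s = np.cos(theta), np.sin(theta)
    rot_y = np.array([[c, 0.0, -s],
                      [0.0, 1.0, 0.0],
                      [s, 0.0, c]])
    return p @ rot_y.T
```

After this transform, the detector center maps onto the positive Z-axis, matching the coordinate system described for FIG. 4.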
  • the X-rays incident on the pixel of interest are attenuated when passing through the subject, so the X-rays captured by the X-ray detection unit 2 are reduced. That is, when the reconstruction region R1 shown in FIG. 4, composed of a plurality of voxels, is set, the X-rays that have passed through voxels are incident on the pixel of interest, and the attenuation corresponding to the voxels the X-rays passed through is reflected in the brightness value of the pixel of interest.
  • when the X-rays incident on the pixel of interest pass through parts of multiple voxels in the slice, the X-rays are attenuated according to the volume ratio of the traversed portion of each voxel, and the degree of attenuation is reflected in the brightness value of the pixel of interest.
  • in other words, each voxel through which the X-rays incident on the pixel of interest pass contributes to the brightness value of the pixel of interest in proportion to the volume ratio of the traversed portion of that voxel.
  • that is, the brightness value of the pixel of interest, divided in proportion to the volume ratios of the portions of the respective voxels through which the X-rays passed, corresponds to the X-ray attenuation of each of those portions.
  • FIG. 5A is a perspective view showing an example of the positional relationship between the pixel group PXG, which is a part of the detection surface of the X-ray detection unit 2, and the voxels VX1 to VX4.
  • FIG. 5B is a top view showing the positional relationship shown in FIG. 5A.
  • the pixel group PXG is composed of pixels PX1 to PX16.
  • in this example, voxels VX1 to VX4 are the voxels through which the X-rays incident on the pixel of interest have passed.
  • the X-ray attenuation reflected in the pixel of interest corresponds to the sum, taken over all voxels through which the X-rays incident on the pixel of interest passed, of the product of the brightness value of the pixel of interest and the ratio of the traversed volume of each voxel to the total traversed volume of all those voxels.
  • conversely, the X-rays that pass through a voxel of interest are incident on a plurality of pixels, so the effect of the voxel of interest on the X-ray attenuation is distributed to each pixel according to the volume ratio corresponding to that pixel. Therefore, for each pixel, the ratio (volume ratio) of the voxel of interest's contribution to that pixel's X-ray attenuation is multiplied by the brightness value of the pixel, and these products are integrated over the pixels.
  • FIG. 6A is a perspective view showing an example of the positional relationship between the pixel group PXG, which is a part of the detection surface of the X-ray detection unit 2, and the voxel VX1.
  • FIG. 6B is a top view showing the positional relationship shown in FIG. 6A.
  • specifically, the X-ray attenuation of the entire voxel VX1 of interest corresponds to the sum of: (1) the product of the brightness value of the pixel PX5 and the ratio, to the volume of the voxel VX1 of interest, of the volume of the portion through which the X-rays incident on the pixel PX5 passed; (2) the corresponding product for the pixel PX6; (3) the corresponding product for the pixel PX7; (4) the corresponding product for the pixel PX9; (5) the corresponding product for the pixel PX10; and (6) the corresponding product for the pixel PX11.
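The multi-term sum above is a set of brightness values weighted by per-pixel volume fractions. A minimal sketch (the function name and the dictionary representation are illustrative assumptions):

```python
def voxel_attenuation_contribution(brightness, volume_fraction):
    """Sum over pixels of (brightness of the pixel x fraction of the
    voxel's volume traversed by the X-rays reaching that pixel)."""
    return sum(brightness[px] * volume_fraction[px] for px in volume_fraction)
```

For the voxel VX1 of interest, `brightness` would hold the values of PX5, PX6, PX7, PX9, PX10, and PX11, and `volume_fraction` the corresponding traversed-volume ratios.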
  • the detection surface of the X-ray detection unit 2 is projected to the voxel position in the X-ray incident direction for each voxel.
  • the voxel may be projected to the position of the detection surface of the X-ray detection unit 2 in the X-ray incident direction.
  • however, the thickness of the voxel with respect to the X-ray incident direction is not uniform within the voxel VX1; depending on the pixel, X-rays that passed through only a thin part of the voxel may be incident on it, and calculating this exactly would make the computation time of the back projection enormous.
  • therefore, the rectangular parallelepiped voxel is approximated by an oblique quadrangular prism having the same volume as the rectangular parallelepiped voxel and a uniform thickness with respect to the X-ray incident direction, and back projection is performed.
  • the oblique quadrangular prism has, as its two bottom surfaces, the pair of facing constituent surfaces that have overlapping regions when viewed from the X-ray incident direction, translated onto a plane containing both or one of those facing surfaces; it has the same volume as the rectangular parallelepiped voxel and a uniform thickness with respect to the X-ray incident direction. Even when such an approximation is performed, the image quality of the projected image is hardly affected.
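Because the oblique prism has a uniform thickness along the incident direction, the volume ratios reduce to area ratios between a projected rectangle and the pixel cells on the detection surface. A minimal sketch under that assumption (the axis-aligned rectangle representation `(x0, y0, x1, y1)` and the names are illustrative):

```python
def overlap_area(r1, r2):
    """Intersection area of two axis-aligned rectangles (x0, y0, x1, y1)."""
    w = min(r1[2], r2[2]) - max(r1[0], r2[0])
    h = min(r1[3], r2[3]) - max(r1[1], r2[1])
    return max(w, 0.0) * max(h, 0.0)

def backproject_voxel(rect, pixels, brightness):
    """Distribute the voxel's attenuation by area ratio: sum over pixels of
    brightness x (overlap area with the projected rectangle / its area)."""
    area = (rect[2] - rect[0]) * (rect[3] - rect[1])
    return sum(brightness[name] * overlap_area(rect, px) / area
               for name, px in pixels.items())
```

The overlap areas play the role of the "area of the portion through which the X-rays passed" in the enumerations that follow.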
  • FIG. 8A is a diagram showing an example of an oblique quadrangular prism that approximates voxel VX1.
  • the oblique quadrangular prism OP1 shown in FIG. 8A has rectangles RT2 and RT3 as bottom surfaces, and has a uniform thickness with respect to the X-ray incident direction.
  • the rectangle RT1 is a rectangle whose vertices are the midpoints of the sides of the voxel VX1 of interest other than the sides included in the pair of facing constituent surfaces that have overlapping regions when viewed from the X-ray incident direction, among the constituent surfaces of the voxel VX1 shown in FIGS. 6A and 6B.
  • a perspective view of the rectangle RT1 formed in the voxel VX1 of interest is as shown in FIG. 8B.
  • the rectangle RT2 is the rectangle RT1 translated onto a plane containing the facing constituent surface that is closer to the X-ray detection unit 2, of the pair of facing constituent surfaces of the voxel VX1 shown in FIGS. 6A and 6B that have overlapping regions when viewed from the X-ray incident direction.
  • the rectangle RT3 is the rectangle RT1 translated onto a plane containing the facing constituent surface that is farther from the X-ray detection unit 2, of the pair of facing constituent surfaces of the voxel VX1 shown in FIGS. 6A and 6B that have overlapping regions when viewed from the X-ray incident direction.
  • the outer circumferences of the rectangles RT1 to RT3 coincide with each other when viewed from the X-ray incident direction.
  • with this approximation, the X-ray attenuation of the entire voxel VX1 of interest corresponds to the sum of: (1) the product of the brightness value of the pixel PX5 and the ratio, to the area of the rectangle RT1, of the area of the portion through which the X-rays incident on the pixel PX5 passed; (2) the corresponding product for the pixel PX6; (3) the corresponding product for the pixel PX7; (4) the corresponding product for the pixel PX9; (5) the corresponding product for the pixel PX10; and (6) the corresponding product for the pixel PX11.
  • the coordinates of each vertex of the rectangle RT1 can be calculated from the coordinates of the vertices of the voxel VX1 of interest. Further, each of the above areas can be calculated from the X-coordinates and Z-coordinates of the rectangle RT1 and the X-coordinates and Y-coordinates of the lattice points of the pixels formed on the detection surface of the X-ray detection unit 2.
  • as described above, the rectangle RT1 is a rectangle whose vertices are the midpoints of the sides of the voxel VX1 of interest other than the sides included in the pair of facing constituent surfaces that have overlapping regions when viewed from the X-ray incident direction, among the constituent surfaces of the voxel VX1 of interest.
  • however, the position setting of the oblique quadrangular prism that approximates the voxel VX1 of interest is not limited to the setting of the present embodiment; another oblique quadrangular prism that approximates the voxel VX1 of interest may be used.
  • for example, when the rectangle RT4 is used, the X-ray attenuation of the entire voxel VX1 of interest corresponds to the sum of: (1) the product of the brightness value of the pixel PX5 and the ratio, to the area of the rectangle RT4, of the area of the portion through which the X-rays incident on the pixel PX5 passed; (2) the corresponding product for the pixel PX6; (3) the corresponding product for the pixel PX7; (4) the corresponding product for the pixel PX9; (5) the corresponding product for the pixel PX10; and (6) the corresponding product for the pixel PX11.
  • the rectangle RT4 is the facing constituent surface that is farther from the X-ray detection unit 2, of the pair of facing constituent surfaces of the voxel VX1 shown in FIGS. 6A and 6B that have overlapping regions when viewed from the X-ray incident direction.
  • similarly, when the rectangle RT5 is used, the X-ray attenuation of the entire voxel VX1 of interest corresponds to the sum of: (1) the product of the brightness value of the pixel PX5 and the ratio, to the area of the rectangle RT5, of the area of the portion through which the X-rays incident on the pixel PX5 passed; (2) the corresponding product for the pixel PX6; (3) the corresponding product for the pixel PX7; (4) the corresponding product for the pixel PX9; (5) the corresponding product for the pixel PX10; and (6) the corresponding product for the pixel PX11.
  • the rectangle RT5 is the facing constituent surface that is closer to the X-ray detection unit 2, of the pair of facing constituent surfaces of the voxel VX1 shown in FIGS. 6A and 6B that have overlapping regions when viewed from the X-ray incident direction.
  • likewise, when the rectangle RT6 is used, the X-ray attenuation of the entire voxel VX1 of interest corresponds to the sum of: (1) the product of the brightness value of the pixel PX5 and the ratio, to the area of the rectangle RT6, of the area of the portion through which the X-rays incident on the pixel PX5 passed; (2) the corresponding product for the pixel PX6; (3) the corresponding product for the pixel PX7; (4) the corresponding product for the pixel PX9; (5) the corresponding product for the pixel PX10; and (6) the corresponding product for the pixel PX11.
  • the rectangle RT6 is a rectangle whose vertices are the division points that divide, at a ratio of 1:2 from the side far from the X-ray detection unit 2, the sides of the voxel VX1 of interest other than the sides included in the pair of facing constituent surfaces that have overlapping regions when viewed from the X-ray incident direction, among the constituent surfaces of the voxel VX1 shown in FIGS. 6A and 6B.
  • likewise, when the rectangle RT7 is used, the X-ray attenuation of the entire voxel VX1 of interest corresponds to the sum of: (1) the product of the brightness value of the pixel PX5 and the ratio, to the area of the rectangle RT7, of the area of the portion through which the X-rays incident on the pixel PX5 passed; (2) the corresponding product for the pixel PX6; (3) the corresponding product for the pixel PX7; (4) the corresponding product for the pixel PX9; (5) the corresponding product for the pixel PX10; and (6) the corresponding product for the pixel PX11.
  • the rectangle RT7 is a rectangle whose vertices are the division points that divide, at a ratio of 2:1 from the side far from the X-ray detection unit 2, the sides of the voxel VX1 of interest other than the sides included in the pair of facing constituent surfaces that have overlapping regions when viewed from the X-ray incident direction, among the constituent surfaces of the voxel VX1 shown in FIGS. 6A and 6B.
  • FIG. 13 is a diagram showing an example of a case where X-rays transmitted through the voxel VX1 of interest are obliquely incident on the detection surface of the X-ray detection unit 2 in the Y-axis direction. Also in FIG. 13, similarly to FIGS. 7 to 12, a rectangular parallelepiped voxel (for example, the voxel VX1 of interest) is approximated by an oblique quadrangular prism.
  • this oblique quadrangular prism, too, has as its two bottom surfaces the pair of facing constituent surfaces that have overlapping regions when viewed from the X-ray incident direction, translated onto a plane containing both or one of those facing surfaces; it has the same volume as the rectangular parallelepiped voxel and a uniform thickness with respect to the X-ray incident direction.
  • in this case as well, the X-ray attenuation of the entire voxel VX1 of interest corresponds to the sum of: (1) the product of the brightness value of the pixel PX5 and the ratio, to the area of the rectangle RT1, of the area of the portion through which the X-rays incident on the pixel PX5 passed; (2) the corresponding product for the pixel PX6; (3) the corresponding product for the pixel PX7; (4) the corresponding product for the pixel PX9; (5) the corresponding product for the pixel PX10; and (6) the corresponding product for the pixel PX11.
  • here, the rectangle RT1 is a rectangle whose vertices are the midpoints of the sides of the voxel VX1 of interest other than the sides included in the pair of facing constituent surfaces that have overlapping regions when viewed from the X-ray incident direction, among the constituent surfaces of the voxel VX1 of interest.
  • although FIGS. 6A and 6B show the voxel VX1, the same back projection is performed on all voxels in the reconstruction region R1.
  • as the coordinate system, a Cartesian coordinate system defined by the X-axis and the Z-axis shown in FIG. 4 and the Y-axis orthogonal to them may be used, or a polar coordinate system defined by the radius r, the first argument θ, and the second argument φ may be used.
  • the distance from the center of the X-ray irradiation unit 1 (the X-ray source) to one voxel is defined as the radius r.
  • the vertical angle θ1 subtended at the center of the X-ray irradiation unit 1 (the X-ray source) from one end of one voxel to the other is very small, so sin θ1 can be approximated by θ1. Likewise, the lateral angle φ1 subtended at the center of the X-ray irradiation unit 1 (the X-ray source) from one end of one voxel to the other is very small, so sin φ1 can be approximated by φ1.
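The small-angle approximation sin θ ≈ θ used above can be checked numerically; the helper below is illustrative only and not part of the patent:

```python
import math

def small_angle_error(angle):
    """Relative error of approximating sin(angle) by the angle itself
    (angle in radians); negligible for the tiny angles a voxel subtends."""
    return abs(math.sin(angle) - angle) / math.sin(angle)
```

For angles on the order of milliradians the relative error is below one part in a million, which is why the approximation barely affects the reconstruction.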
  • the shape of the voxel is a rectangular parallelepiped, but the rectangular parallelepiped also includes a cube, which is a special example in which the length, width, and height are all the same.
  • the constituent planes of the voxel and the cross-sections of the voxels parallel to the constituent planes are rectangular in shape, but the rectangle also includes a square, which is a special example of equal length and width.
  • when there are two pairs of facing constituent surfaces with overlapping regions among the constituent surfaces of the voxel of interest, either pair may be selected as the pair of facing constituent surfaces having overlapping regions when viewed from the X-ray incident direction. Further, when there are, among the constituent surfaces of the voxel of interest, three pairs of facing constituent surfaces whose overlapping regions exist only at points, any one of those pairs may be selected as the pair of facing constituent surfaces having overlapping regions when viewed from the X-ray incident direction.
  • with the above, the reconstruction calculation using the FBP method in step S19 is completed.
  • the CPU 4 determines whether or not the frame data has ended (step S20), and if not, returns to step S16 and repeats the above-described operation.
  • when the frame data has ended, the CPU 4 calculates the number of times (n) that X-rays have passed through each voxel (step S21), and divides the final result by n (step S22).
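Steps S21 and S22 normalize the accumulated back-projection by the per-voxel pass count n. A minimal NumPy sketch (the names are assumptions; leaving never-traversed voxels at zero is a choice the patent does not specify):

```python
import numpy as np

def normalize_reconstruction(accumulated, pass_counts):
    """Divide the accumulated back-projection of each voxel by the number
    of times (n) X-rays passed through it; untraversed voxels stay zero."""
    out = np.zeros_like(accumulated, dtype=float)
    hit = pass_counts > 0                 # avoid division by zero
    out[hit] = accumulated[hit] / pass_counts[hit]
    return out
```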
  • <Step S3: positioning of foreign matter> An example of the process of step S3 described above, that is, the process of specifying the position of the foreign matter for each projected image, will be described with reference to the flowchart of FIG.
  • the CPU 4 divides the projected image into predetermined regions (for example, regions of 16 pixels × 16 pixels) (step S31). If part of the projected image cannot be filled with the predetermined regions, the edge of the projected image may be excluded from the inspection target, and the positions of the predetermined region group may be set so that the group is located at the center of the projected image.
  • the CPU 4 calculates the average luminance value L1 in each of the predetermined regions (step S32).
  • the CPU 4 determines whether or not the brightness value L2 of the pixel of interest is smaller than the average brightness value L1 in the predetermined region to which the pixel of interest belongs by a predetermined value V1 or more (step S33).
  • As the predetermined value V1, for example, the standard deviation of the brightness values of the pixels in the predetermined region to which the pixel of interest belongs can be used.
  • When it is not determined that the brightness value L2 of the pixel of interest is smaller than the average brightness value L1 by the predetermined value V1 or more (NO in step S33), the CPU 4 does not specify the pixel of interest as the position of the foreign matter.
  • step S33 when it is determined that the luminance value L2 of the pixel of interest is smaller than the average luminance value L1 in the predetermined region to which the pixel of interest belongs by a predetermined value V1 or more (YES in step S33), the process proceeds to step S34.
  • In step S34, the CPU 4 calculates the first brightness value, which is the maximum within the first set number of pixels arranged on the negative side in the horizontal direction of the pixel of interest; the second brightness value, which is the maximum within the second set number of pixels arranged on the positive side in the horizontal direction of the pixel of interest; the third brightness value, which is the maximum within the third set number of pixels arranged on the negative side in the vertical direction of the pixel of interest; and the fourth brightness value, which is the maximum within the fourth set number of pixels arranged on the positive side in the vertical direction of the pixel of interest.
  • The first set number to the fourth set number may all be the same value, or may include two or more (and four or fewer) different values. If the predetermined area is located at the edge of the projected image and at least one of the first to fourth set numbers of pixels cannot be secured, the number of pixels that can be secured is used; if no pixels can be secured at all, the pixel is excluded from inspection.
  • Next, the CPU 4 determines whether or not the ratio of the brightness value L2 of the pixel of interest to the first brightness value and the ratio of the brightness value L2 of the pixel of interest to the second brightness value are each equal to or less than the threshold value TH1 (step S35).
  • When it is determined that the ratio of the brightness value L2 of the pixel of interest to the first brightness value and the ratio of the brightness value L2 of the pixel of interest to the second brightness value are each equal to or less than the threshold value TH1 (YES in step S35), the process proceeds to step S37 described later.
  • When it is not determined that the ratio of the brightness value L2 of the pixel of interest to the first brightness value and the ratio of the brightness value L2 of the pixel of interest to the second brightness value are each equal to or less than the threshold value TH1 (NO in step S35), the process proceeds to step S36.
  • Next, the CPU 4 determines whether or not the ratio of the brightness value L2 of the pixel of interest to the third brightness value and the ratio of the brightness value L2 of the pixel of interest to the fourth brightness value are each equal to or less than the threshold value TH1 (step S36).
  • In the present embodiment, the threshold value used in step S35 and the threshold value used in step S36 are set to the same value, but they may be different values. Further, unlike the present embodiment, in step S35, it may be determined only whether or not the ratio of the brightness value L2 of the pixel of interest to the first brightness value is equal to or less than the threshold value TH1.
  • Alternatively, in step S35, it may be determined only whether or not the ratio of the brightness value L2 of the pixel of interest to the second brightness value is equal to or less than the threshold value TH1.
  • Likewise, in step S36, it may be determined only whether or not the ratio of the brightness value L2 of the pixel of interest to the third brightness value is equal to or less than the threshold value TH1.
  • Alternatively, in step S36, it may be determined only whether or not the ratio of the brightness value L2 of the pixel of interest to the fourth brightness value is equal to or less than the threshold value TH1.
  • When it is determined that the ratio of the brightness value L2 of the pixel of interest to the third brightness value and the ratio of the brightness value L2 of the pixel of interest to the fourth brightness value are each equal to or less than the threshold value TH1 (YES in step S36), the process proceeds to step S37.
  • Otherwise (NO in step S36), the CPU 4 does not specify the pixel of interest as the position of the foreign matter. It should be noted that, instead of immediately deciding not to specify the pixel of interest as the position of the foreign matter, the size of the predetermined region and the value of the threshold TH1 may be changed and the process returned to step S35, so that the process may still proceed from step S35 or step S36 to step S37.
  • the size of the predetermined region and the value of the threshold value TH1 may be changed not only once but also twice or more.
  • In step S37, the CPU 4 specifies the pixel of interest as the position of the foreign matter.
  • the detection accuracy of the foreign matter can be improved.
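The detection flow of steps S31 to S37 can be sketched as follows. This is a minimal illustration, not the patented implementation: the region size, the set numbers (collapsed here into a single `set_n`), the threshold TH1, and the handling of edge pixels are illustrative choices.

```python
from statistics import mean, pstdev

def detect_foreign_pixels(image, region=4, set_n=2, th1=0.8):
    """Return (row, col) positions identified as foreign matter.

    image: 2-D list of luminance values (dark foreign matter on a brighter
    background, as in the embodiment).  region: side of each predetermined
    area (16 in the text; small here for the example).
    """
    h, w = len(image), len(image[0])
    positions = []
    for r0 in range(0, h - h % region, region):
        for c0 in range(0, w - w % region, region):
            block = [image[r][c]
                     for r in range(r0, r0 + region)
                     for c in range(c0, c0 + region)]
            l1 = mean(block)    # step S32: average luminance L1 of the area
            v1 = pstdev(block)  # predetermined value V1 (standard deviation)
            for r in range(r0, r0 + region):
                for c in range(c0, c0 + region):
                    l2 = image[r][c]
                    # Step S33: L2 must be smaller than L1 by V1 or more.
                    if not (l2 < l1 and l1 - l2 >= v1):
                        continue
                    # Step S34: maxima within set_n pixels on each side.
                    left  = [image[r][k] for k in range(max(0, c - set_n), c)]
                    right = [image[r][k] for k in range(c + 1, min(w, c + set_n + 1))]
                    up    = [image[k][c] for k in range(max(0, r - set_n), r)]
                    down  = [image[k][c] for k in range(r + 1, min(h, r + set_n + 1))]
                    if not (left and right and up and down):
                        continue  # pixels with no neighbours on a side excluded
                    # Steps S35/S36: ratios of L2 to the directional maxima.
                    horiz = l2 / max(left) <= th1 and l2 / max(right) <= th1
                    vert  = l2 / max(up) <= th1 and l2 / max(down) <= th1
                    if horiz or vert:
                        positions.append((r, c))  # step S37: foreign matter
    return positions
```

On an 8×8 image of value 100 with a single dark pixel of value 50, only that pixel is reported, since it is both more than one standard deviation below the block mean and much darker than its neighbours in the horizontal direction.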
  • As the object to be inspected T1, for example, a "fish" can be mentioned.
  • the foreign matter is a "small bone”.
  • the processing of the flowchart of FIG. 14 can be applied when the attenuation coefficient of the foreign matter is larger than the attenuation coefficient of the object T1 to be inspected (excluding the foreign matter).
  • When the projected image is a black-and-white inverted image, the processing of the flowchart of FIG. 14 is applied with "small" replaced with "large", "maximum" replaced with "minimum", and "threshold value TH1 or less" replaced with "threshold value TH1 or more".
  • On the other hand, when the attenuation coefficient of the foreign matter is smaller than the attenuation coefficient of the object to be inspected T1 (excluding the foreign matter), the processing of the flowchart of FIG. 14 is not applied as it is; instead, it is applied with "small" replaced with "large", "maximum" replaced with "minimum", and "threshold TH1 or less" replaced with "threshold TH1 or more".
  • However, if the projected image is a black-and-white inverted image, the processing of the flowchart of FIG. 14 may be applied as it is.
  • Since each projected image after the reconstruction calculation is a projection onto a tomographic plane as the object to be inspected T1 is viewed from the X-ray source, the position of the projected object to be inspected T1 shifts if the irradiation angle differs. In order to correct this shift, the CPU 4 performs the position correction described above (step S5).
  • Specifically, based on the brightness, on an arbitrary tomographic plane used as the correction reference, of the voxels on tomographic planes other than that reference plane when they are projected onto the reference plane, position correction is performed to match the horizontal and vertical lengths of the projected images after the reconstruction calculation.
  • The projection, onto the reference tomographic plane, of the brightness of voxels on tomographic planes other than the reference plane would normally have to be calculated by examining the X-ray transmission path for all pixels in each frame.
  • The position correction will be described in detail with reference to FIG. 15, in which, for convenience, the voxels at a predetermined position in the height direction are arranged in a straight line for each tomographic plane.
  • a, b, and c are arbitrary natural numbers.
  • An example of the position correction in step S5 according to the above-mentioned concept will be described with reference to the flowchart shown in FIG.
  • First, the CPU 4 reads the number of voxels of each tomographic plane in the reconstruction area R1 from the HDD 9 (step S41).
  • the CPU 4 reads the data of the brightness value cal (i, j, k) of each voxel after the reconstruction calculation calculated in step S22 from the HDD 9 (step S42).
  • Here, i is a variable for specifying the coordinate (position) of the target voxel in the X-axis direction shown in FIG. 1, j is a variable for specifying the coordinate (position) of the target voxel in the Y-axis direction, and k is a variable for identifying the tomographic plane to which the target voxel belongs.
  • Next, for each frame and each tomographic plane, the CPU 4 reads from the HDD 9 the coordinates of the voxel through which the X-ray incident on the center of the X-ray detection unit 2 passes, and the ratio of the Y-axis-direction distances of the lines that connect the radiation source and the X-ray detection unit 2 and pass through the voxel whose Y-axis coordinate is maximum on each tomographic plane (step S43).
  • Next, using the reading result of step S43, the CPU 4 calculates, for each frame and each tomographic plane, the X-axis coordinate ii(i, k, n(i)) of the voxel of a tomographic plane k through which the n(i)-th X-ray is transmitted, among the N(i) frames whose X-rays pass through the voxel at the X-axis coordinate i on the reference tomographic plane lay (step S44).
  • Similarly, using the reading result of step S43, the CPU 4 calculates, for each frame and each tomographic plane, the Y-axis coordinate jj(j, k, h(j)) of the voxel of a tomographic plane k through which the h(j)-th X-ray is transmitted, among the H(j) frames whose X-rays pass through the voxel at the Y-axis coordinate j on the reference tomographic plane lay (step S45).
  • Then, in a loop over the X-axis coordinate i, the Y-axis coordinate j, n(i), and h(j) of the voxel, the CPU 4 projects the luminance value of the voxel of the tomographic plane k onto the reference tomographic plane lay, and the luminance value datawa(i, j, k) is calculated from the luminance value cal(ii, jj, k) of the point (ii, jj, k) on the tomographic plane k (step S46).
  • Let the X-axis coordinate be the real number i_k expressed by the following equation (5).
  • Here, α1 + α2 − 1 is the number of continuous voxels on the reference tomographic plane lay through which the X-rays of the frame are not transmitted.
  • As described above, instead of back-projecting onto the rectangular parallelepiped voxels constituting the reconstruction region themselves, back projection is performed onto one of the pair of facing constituent surfaces having overlapping regions when viewed from the X-ray incident direction, or onto a cross section of the voxel parallel to those facing constituent surfaces, so that the calculation time for back projection can be significantly shortened. Therefore, the projected image can be reconstructed with a small amount of reconstruction calculation using the FBP method.
  • Further, the position correction is not performed by uniformly enlarging or reducing each projected image after the reconstruction calculation in the horizontal and vertical directions; instead, it is performed based on the brightness, on the reference tomographic plane, of voxels on tomographic planes other than the arbitrary tomographic plane used as the correction reference when they are projected onto that reference plane.
  • In the present embodiment, the horizontal and vertical lengths of the projected images after the reconstruction calculation are matched by the position correction. However, if differences in the horizontal lengths of the projected images after the reconstruction calculation do not matter, only the vertical lengths may be matched by the position correction; and if differences in the vertical lengths do not matter much, only the horizontal lengths may be matched by the position correction.
  • In the present embodiment, the horizontal and vertical lengths of the projected images after the reconstruction calculation are exactly matched by the position correction, but they may instead be only substantially matched. That is, there may be differences in the horizontal and vertical lengths of the projected images after the position correction and the reconstruction calculation, as long as they cause no inconvenience when comparing the projected images of a plurality of tomographic planes with each other to compare the state of the object to be inspected T1 according to the tomographic depth.
  • In the present embodiment, the false detection portion removal process of step S4 is executed; however, if the specification that defines the required foreign matter detection accuracy is satisfied without executing it, the false detection portion removal process of step S4 may be omitted.
  • When the erroneous detection portion removal process in step S4 is omitted, either the pair of the first X-ray irradiation unit 1A and the first X-ray detection unit 2A or the pair of the second X-ray irradiation unit 1B and the second X-ray detection unit 2B may be removed from the inspection device 100.
  • In the erroneous detection portion removal process in step S4, the position of the foreign matter specified based on the projected images derived from the first X-ray irradiation unit 1A and the first X-ray detection unit 2A was compared with the position of the foreign matter specified based on the projected images derived from the second X-ray irradiation unit 1B and the second X-ray detection unit 2B, and the falsely detected portion was removed based on the comparison result; however, this removal process is merely an example.
  • For example, the inspection device 100 may be provided with first to m-th X-ray irradiation units (m is a natural number of 3 or more) having mutually different X-ray irradiation directions and first to m-th X-ray detection units, and X-rays emitted from the k-th X-ray irradiation unit (k is any natural number of m or less) and transmitted through the object to be inspected T1 may be incident on the k-th X-ray detection unit.
  • In this case, the position of the foreign matter is specified based on each k-th projected image, which is the projected image obtained from the X-rays emitted from the k-th X-ray irradiation unit; the positions of the foreign matter specified based on the first to m-th projected images of the same depth are compared with each other; and the erroneously detected portion of the foreign matter position may be removed based on the comparison result.
  • In the above-described embodiment, in step S34, the CPU 4 calculates the first brightness value, which is the maximum within the first set number of pixels arranged on the negative side in the horizontal direction of the pixel of interest; the second brightness value, which is the maximum within the second set number of pixels arranged on the positive side in the horizontal direction; the third brightness value, which is the maximum within the third set number of pixels arranged on the negative side in the vertical direction; and the fourth brightness value, which is the maximum within the fourth set number of pixels arranged on the positive side in the vertical direction. However, this calculation process is only an example.
  • For example, the negative side and the positive side in the horizontal direction can be generalized to "one side in the first direction" and "the other side in the first direction", and the negative side and the positive side in the vertical direction can be generalized to "one side in the second direction" and "the other side in the second direction", respectively.
  • the first direction and the second direction are different directions.
  • the first direction and the second direction do not have to be orthogonal to each other.
  • Further, as the first brightness value, instead of the maximum brightness value among the first set number of pixels arranged on one side in the first direction of the pixel of interest, the brightness value of the pixel that has a brightness value larger than that of the pixel of interest and is as far away as possible from the pixel of interest, among those pixels, may be used; the same substitution may be performed for the second to fourth brightness values. That is, as the second brightness value, instead of the maximum brightness value among the second set number of pixels arranged on the other side in the first direction of the pixel of interest, the brightness value of the pixel that has a brightness value larger than that of the pixel of interest and is as far away as possible from the pixel of interest, among those pixels, is used.
  • Similarly, as the third brightness value, the brightness value of the pixel that has a brightness value larger than that of the pixel of interest and is as far away as possible from the pixel of interest, among the third set number of pixels arranged on one side in the second direction of the pixel of interest, is used; and as the fourth brightness value, the brightness value of the pixel that has a brightness value larger than that of the pixel of interest and is as far away as possible from the pixel of interest, among the fourth set number of pixels arranged on the other side in the second direction of the pixel of interest, is used. Even when the above replacement is performed, since the position of the foreign matter is specified based on the luminance characteristics of the entire minute region as in the above-described embodiment, the detection accuracy of the foreign matter can be improved.
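The substitution described above can be sketched as follows, assuming the pixels along one direction are available as a 1-D list of luminance values; the helper name and its parameters are illustrative, not from the patent.

```python
def farthest_brighter_value(row, interest_idx, side, set_n):
    """Luminance of the farthest qualifying pixel within set_n pixels.

    row: 1-D list of luminance values along the chosen direction.
    side: -1 for one side of the pixel of interest, +1 for the other.
    Returns the luminance of the pixel that is brighter than the pixel of
    interest and as far away as possible, or None if no pixel qualifies.
    """
    l2 = row[interest_idx]
    # Walk inward from the farthest candidate toward the pixel of interest,
    # so the first hit is the farthest brighter pixel.
    for dist in range(set_n, 0, -1):
        idx = interest_idx + side * dist
        if 0 <= idx < len(row) and row[idx] > l2:
            return row[idx]
    return None
```

The returned value would then stand in for the corresponding directional maximum in the ratio tests of steps S35 and S36.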
  • Alternatively, the first brightness value may be, instead of the maximum brightness value among the first set number of pixels arranged on one side in the first direction of the pixel of interest, the average brightness value of the first set number of pixels arranged on one side in the first direction at a first predetermined position away from the pixel of interest (for example, in the case of four pixels arranged 3 pixels away from the pixel of interest, the third to sixth pixels lined up on one side in the first direction from the pixel of interest) and the second set number of pixels arranged on the other side in the first direction at a second predetermined position away from the pixel of interest. In this case, in steps S34 to S36, the processing related to the second luminance value is not performed.
  • Similarly, the third brightness value may be, instead of the maximum brightness value among the third set number of pixels arranged on one side in the second direction of the pixel of interest, the average brightness value of the third set number of pixels arranged on one side in the second direction at a third predetermined position away from the pixel of interest and the fourth set number of pixels arranged on the other side in the second direction at a fourth predetermined position away from the pixel of interest (this corresponds to the "second brightness value"). In this case, in steps S34 to S36, the processing related to the fourth luminance value is not performed.
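The averaging variant described above can be sketched as follows; again, the function name, the `offset` parameter, and the 1-D representation of pixels along one direction are assumptions for illustration.

```python
from statistics import mean

def offset_average_value(row, interest_idx, side, offset, set_n):
    """Average luminance of set_n pixels beginning `offset` pixels away
    from the pixel of interest on the given side (side = -1 or +1).
    E.g. offset=3, set_n=4 averages the 3rd to 6th pixels on that side.
    Pixels falling outside the row are dropped; returns None if none remain.
    """
    idxs = [interest_idx + side * (offset + d) for d in range(set_n)]
    vals = [row[i] for i in idxs if 0 <= i < len(row)]
    return mean(vals) if vals else None
```

Averaging the two sides of one direction into a single reference value matches the text's note that the processing for the now-redundant second (or fourth) luminance value is then skipped.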
  • the CPU 4 may specify the position of the foreign matter by using the artificial intelligence learned from the projected image of the object to be inspected T1 inspected in the past. By using artificial intelligence, the accuracy of detecting foreign matter can be further improved.
  • the place where artificial intelligence is provided is not particularly limited.
  • the CPU 4 may be provided with artificial intelligence.
  • artificial intelligence may be provided on a cloud that can be accessed by the inspection device 100 via a communication network.
  • Further, steps S4 and S5 may be exchanged, and in step S6, an image obtained by adding together all the projected images showing the positions of the foreign matter reflecting the position correction and the false detection portion removal process, that is, an image displaying the positions of the foreign matter in two dimensions, may be generated as the output image.
  • step S3 it is not always necessary to specify the position of the foreign matter for each projected image, and the position of the foreign matter may be specified for each of a plurality of projected images.
  • Alternatively, step S5 may be executed immediately after step S3; after the position correction is completed, the positions of the foreign matter on all tomographic planes are added for each irradiation angle; step S4 is then executed on the data for each irradiation angle after the addition; and in step S6, an image obtained by adding all the projected images showing the positions of the foreign matter reflecting the position correction and the false detection portion removal process, that is, an image displaying the positions of the foreign matter in two dimensions, may be generated as the output image.
  • In the above embodiment, the projected image obtained by projecting onto each tomographic plane set at a predetermined position other than the positions of the X-ray detection units 2A and 2B is a projected image obtained based on the principle of tomosynthesis; however, it does not have to be a projected image obtained based on the principle of tomosynthesis.
  • In the above embodiment, the position of the foreign matter is specified using the projected image; however, it may be specified using an X-ray image such as a simple X-ray transmission image of the object to be inspected T1. That is, for example, the processing of the flowchart shown in FIG. 14 may be executed on an X-ray image such as an X-ray transmission image.
  • the CPU 4 may use artificial intelligence.
  • The first X-ray detection unit 2A and the second X-ray detection unit 2B may each be a single-line sensor.
  • When the first X-ray detection unit 2A and the second X-ray detection unit 2B are each a single-line sensor, the pixel size of the line sensor and the movement amount of the object to be inspected per frame are matched.
  • the first X-ray detection unit 2A and the second X-ray detection unit 2B may not be arranged at the same height.
  • In this case, the Z-axis direction position of the first X-ray detection unit 2A and the Z-axis direction position of the second X-ray detection unit 2B differ, so the pixels on which X-rays passing through the same position of the object to be inspected T1 are incident are misaligned between the first X-ray detection unit 2A and the second X-ray detection unit 2B.
  • Therefore, the pixels of the projected image derived from the second X-ray irradiation unit 1B and the second X-ray detection unit 2B that correspond, in the Y-axis direction, to the pixels of the projected image derived from the first X-ray irradiation unit 1A and the first X-ray detection unit 2A may be specified.
  • the captured image may be generated based on the above.
  • When the captured image is generated based on such image processing, it is not always necessary to match the moving speed of the object to be inspected T1 with the pixel size of the line sensor.
  • a two-dimensional detector 2C may be used as shown in FIG.
  • the two-dimensional detector 2C can be used as a pseudo two-line sensor.
  • the first X-ray irradiation unit 1A and the second X-ray irradiation unit 1B may be shared to form a single X-ray irradiation unit.
  • a plurality of two-dimensional detectors can be simulated.
  • Similarly, a single line sensor can serve as a plurality of pseudo line sensors.
  • the single line sensor is a line sensor having a plurality of lines extending along the Y-axis direction.
  • Alternatively, the configuration shown in FIGS. 18 and 19 may be used.
  • the inspection device 100 shown in FIGS. 18 and 19 may perform the following operations, for example.
  • For example, the object to be inspected T1 is moved by the drive of the belt conveyor 3 to a position where it can be photographed by the second X-ray irradiation unit 1B and the two-dimensional detector 2E, and the belt conveyor 3 is then stopped so that the object to be inspected T1 stands still at that position (see FIG. 18). In the stationary state shown in FIG. 18, one shot of the object to be inspected T1 is taken by the second X-ray irradiation unit 1B and the two-dimensional detector 2E. Then, the object to be inspected T1 is moved by the drive of the belt conveyor 3 to a position where it can be photographed by the first X-ray irradiation unit 1A and the two-dimensional detector 2D, and the belt conveyor 3 is then stopped so that the object to be inspected T1 stands still at that position (see FIG. 19). In the stationary state shown in FIG. 19, one shot of the object to be inspected T1 is taken by the first X-ray irradiation unit 1A and the two-dimensional detector 2D.
  • Alternatively, as shown in FIGS. 20 and 21, an X-ray irradiation unit 1C and a two-dimensional detector 2F may be used.
  • the inspection device 100 shown in FIGS. 20 and 21 may perform the following operations, for example.
  • the object to be inspected T1 is moved to the first predetermined position by driving the belt conveyor 3.
  • the two-dimensional detector 2F is also moved by a moving mechanism (not shown) in conjunction with the movement of the object T1 to be inspected.
  • Then, the belt conveyor 3 and the moving mechanism are stopped, the object to be inspected T1 is stopped at the first predetermined position, and the two-dimensional detector 2F is stopped at the position corresponding to the first predetermined position (see FIG. 20).
  • In the stationary state shown in FIG. 20, one shot of the object to be inspected T1 is taken by the X-ray irradiation unit 1C and the two-dimensional detector 2F.
  • the object to be inspected T1 is moved to the second predetermined position by driving the belt conveyor 3.
  • the two-dimensional detector 2F is also moved by a moving mechanism (not shown) in conjunction with the movement of the object T1 to be inspected.
  • Then, the belt conveyor 3 and the moving mechanism are stopped, the object to be inspected T1 is stopped at the second predetermined position, and the two-dimensional detector 2F is stopped at the position corresponding to the second predetermined position (see FIG. 21).
  • In the stationary state shown in FIG. 21, one shot of the object to be inspected T1 is taken by the X-ray irradiation unit 1C and the two-dimensional detector 2F.
  • In the above examples, a plurality of two-dimensional X-ray images are generated in which the positional relationship between the object to be inspected T1 and the X-ray irradiation unit used for the X-ray photography differs only in the X-axis direction. However, a plurality of two-dimensional X-ray images in which this positional relationship differs only in the Y-axis direction, or in both the X-axis direction and the Y-axis direction, may be generated.
  • The present invention also includes an inspection device having a configuration in which such an image is regarded as a "captured image" and subjected to the subsequent processing.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Pathology (AREA)
  • Theoretical Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Immunology (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Pulmonology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Image Analysis (AREA)
  • Analysing Materials By The Use Of Radiation (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention relates to an image processing device comprising: a division unit that divides an image of an object to be inspected, obtained by X-ray imaging, into predetermined regions; a calculation unit that calculates an average brightness value for each of the predetermined regions; and a specifying unit. When the brightness value of a pixel of interest is smaller than the average brightness value in the predetermined region to which the pixel of interest belongs by a predetermined value or more, the specifying unit calculates a first brightness value, which is the maximum within a first set number of pixels arranged on one side in a first direction of the pixel of interest, and a second brightness value, which is the maximum within a second set number of pixels arranged on the other side in the first direction of the pixel of interest, and, if the ratio of the brightness value of the pixel of interest to the first brightness value and the ratio of the brightness value of the pixel of interest to the second brightness value are each equal to or less than a first threshold, specifies the pixel of interest as the position of a foreign matter.
PCT/JP2020/015889 2019-04-09 2020-04-08 Dispositif et procédé de traitement d'image WO2020209313A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2021513687A JP7178621B2 (ja) 2019-04-09 2020-04-08 画像処理装置及び画像処理方法

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2019-074244 2019-04-09
JP2019074244 2019-04-09
JP2019117735 2019-06-25
JP2019-117735 2019-06-25

Publications (1)

Publication Number Publication Date
WO2020209313A1 true WO2020209313A1 (fr) 2020-10-15

Family

ID=72751305

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/015889 WO2020209313A1 (fr) 2019-04-09 2020-04-08 Dispositif et procédé de traitement d'image

Country Status (2)

Country Link
JP (1) JP7178621B2 (fr)
WO (1) WO2020209313A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2022082331A (ja) * 2020-11-20 2022-06-01 朝日レントゲン工業株式会社 検査結果表示装置及び検査結果表示方法

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2937324B2 (ja) * 1988-05-26 1999-08-23 株式会社東芝 欠陥異物検出装置
US20050067570A1 (en) * 2003-09-05 2005-03-31 Retterath James E. System for automated detection of embedded objects
JP2006329906A (ja) * 2005-05-30 2006-12-07 Ishida Co Ltd X線検査装置
JP2007218749A (ja) * 2006-02-17 2007-08-30 Hitachi Zosen Corp 物体の判別方法および判別装置
JP2009080031A (ja) * 2007-09-26 2009-04-16 Ishida Co Ltd X線検査装置
WO2015111723A1 (fr) * 2014-01-27 2015-07-30 シャープ株式会社 Dispositif d'affichage à multiples couleurs primaires

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106030293B (zh) * 2014-01-23 2019-11-26 株式会社蛟簿 X-ray inspection device and X-ray inspection method
JP6029698B2 (ja) 2015-02-19 2016-11-24 キヤノン株式会社 Photoelectric conversion device and imaging system using the same

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2022082331A (ja) * 2020-11-20 2022-06-01 朝日レントゲン工業株式会社 Inspection result display device and inspection result display method
JP7304077B2 (ja) 2020-11-20 2023-07-06 朝日レントゲン工業株式会社 Inspection result display device and inspection result display method

Also Published As

Publication number Publication date
JPWO2020209313A1 (ja) 2021-11-11
JP7178621B2 (ja) 2022-11-28

Similar Documents

Publication Publication Date Title
US10481110B2 (en) Radiographic image generating device
CN106030293B (zh) X-ray inspection device and X-ray inspection method
US8009892B2 (en) X-ray image processing system
US5485500A (en) Digital x-ray imaging system and method
US20050078861A1 (en) Tomographic system and method for iteratively processing two-dimensional image data for reconstructing three-dimensional image data
US4183623A (en) Tomographic cross-sectional imaging using incoherent optical processing
US20090008581A1 (en) Transmission image capturing system and transmission image capturing method
US6944263B2 (en) Apparatus and methods for multiple view angle stereoscopic radiography
CN110264421B (zh) CT bad channel correction method
JP2018509600A (ja) Gap resolution for linear detector arrays
CA3109826C (fr) Dynamic radiation collimation for nondestructive analysis of test objects
CN112955735B (zh) X-ray phase imaging system
WO2020209313A1 (fr) Image processing device and method
WO2020209312A1 (fr) Inspection device and inspection method
CN109360252B (zh) Equivalent conversion method for cone-beam CL projection data based on projection transformation
JP7267611B2 (ja) Inspection device and inspection method
JP7304077B2 (ja) Inspection result display device and inspection result display method
TWI613998B (zh) Method for suppressing edge artifacts in tomosynthesis images
RU2776469C1 (ru) Dynamic radiation collimation for non-destructive analysis of test objects
JP3128036B2 (ja) X-ray imaging apparatus
JP7033779B2 (ja) Radiographic image generating device
CN103211606A (zh) X-ray diagnostic apparatus
WO2016147314A1 (fr) Computed tomography imaging method and device
WO2013051647A1 (fr) Radiography device and image processing method
JPH04156828A (ja) Pattern recognition device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20787988

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021513687

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20787988

Country of ref document: EP

Kind code of ref document: A1