WO2022176280A1 - X-ray imaging apparatus, image processing device, and image processing method

X-ray imaging apparatus, image processing device, and image processing method

Info

Publication number
WO2022176280A1
WO2022176280A1 (PCT/JP2021/040986)
Authority
WO
WIPO (PCT)
Prior art keywords
image
ray
region
interest
target object
Prior art date
Application number
PCT/JP2021/040986
Other languages
French (fr)
Japanese (ja)
Inventor
尓重 胡
和義 西野
Original Assignee
株式会社島津製作所 (Shimadzu Corporation)
Priority date
Filing date
Publication date
Application filed by 株式会社島津製作所 (Shimadzu Corporation)
Publication of WO2022176280A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00: Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
    • A61B 6/12: Devices for detecting or locating foreign bodies

Definitions

  • The present invention relates to an X-ray imaging apparatus, an image processing apparatus, and an image processing method.
  • Conventionally, biological image processing devices are known that extract an object contained in the body from an image of a subject. Such a biological image processing device is disclosed, for example, in Japanese Patent Application Laid-Open No. 2018-89301 (JP-A-2018-89301).
  • The biological image processing apparatus described in JP-A-2018-89301 performs image processing for extracting a target object from a biological image of a subject that includes the target object. Specifically, this apparatus uses a trained neural network to generate, from the biological image of the subject including the target object, a biological image that does not include the target object. It then generates a subtraction image in which the target object is extracted, by subtracting the biological image not including the target object, which is the output of the neural network, from the biological image of the subject including the target object.
  • In JP-A-2018-89301, the target object is hemostatic gauze (surgical gauze) that a worker such as a doctor may leave behind in the body of a subject (patient) after surgery.
  • To confirm that no such object remains, an X-ray image of the subject is taken after surgery. It is therefore conceivable to perform image processing using a neural network (trained model) on the captured X-ray image (biological image), as in the biological image processing apparatus of JP-A-2018-89301, to extract a target object such as surgical gauze included in the post-operative X-ray image and to generate an image in which the target object is enhanced. However, when such enhancement is applied to the entire X-ray image, structures of the human body such as bones may also be enhanced, which can make the target object difficult to identify.
  • The present invention has been made to solve the above problem, and one object of the present invention is to provide an X-ray imaging apparatus, an image processing apparatus, and an image processing method that enable easy confirmation of a target object included in an X-ray image obtained by imaging a subject when the target object is emphasized by image processing using a trained model generated by machine learning.
  • An X-ray imaging apparatus according to a first aspect includes: an X-ray irradiator that irradiates a subject with X-rays; an X-ray detector that detects the X-rays emitted from the X-ray irradiator; an X-ray image generator that generates an X-ray image based on the X-ray detection signal detected by the X-ray detector; and a controller. The controller includes: a region detection unit that detects a region of interest from the X-ray image using a first trained model generated by machine learning so as to detect a region of interest including a target object inside the body of the subject; a partial image generation unit that generates an enhanced partial image, in which the portion corresponding to the region of interest in the X-ray image is cut out and the target object is emphasized, based on the region of interest detected by the region detection unit and the output result of image processing by a second trained model generated by machine learning so as to remove or enhance the target object in the X-ray image; and an image output unit that causes a display unit to display the enhanced partial image generated by the partial image generation unit in an identifiable manner.
  • An image processing apparatus according to a second aspect includes: a region detection unit that detects a region of interest including a target object inside the body of a subject from an X-ray image, generated based on a detection signal of X-rays irradiated to the subject, using a first trained model generated by machine learning; a partial image generation unit that generates an enhanced partial image, in which the portion corresponding to the region of interest in the X-ray image is cut out and the target object is emphasized, based on the detected region of interest and the output result of image processing by a second trained model generated by machine learning so as to remove or enhance the target object in the X-ray image; and an image output unit configured to cause a display unit to display the enhanced partial image generated by the partial image generation unit in an identifiable manner.
  • An image processing method according to a third aspect includes: detecting a region of interest including a target object inside the body of a subject from an X-ray image, generated based on a detection signal of X-rays irradiated to the subject, using a first trained model generated by machine learning; generating an enhanced partial image, in which the portion corresponding to the region of interest in the X-ray image is cut out and the target object is emphasized, based on the detected region of interest and the output result of image processing by a second trained model generated by machine learning so as to remove or enhance the target object in the X-ray image; and causing a display unit to display the enhanced partial image in an identifiable manner.
  • In each aspect, the portion corresponding to the region of interest in the X-ray image is cut out, an enhanced partial image in which the target object is emphasized is generated, and a display that allows the generated enhanced partial image to be identified is displayed on the display unit.
  • FIG. 1 is a diagram for explaining the configuration of an X-ray imaging apparatus according to a first embodiment.
  • FIG. 2 is a block diagram for explaining the configuration of the X-ray imaging apparatus according to the first embodiment.
  • FIG. 3 is a diagram showing an example of an X-ray image of a subject containing a target object inside the body.
  • FIG. 4 is a diagram for explaining generation of an identification image according to the first embodiment.
  • FIG. 5 is a diagram for explaining detection of a region of interest according to the first embodiment.
  • FIG. 6 is a diagram for explaining generation of a removed image according to the first embodiment.
  • FIG. 7 is a diagram for explaining synthesis of an enhanced partial image and a contour partial image according to the first embodiment.
  • FIG. 8 is a diagram showing the display of the display unit according to the first embodiment.
  • FIG. 9 is a flowchart for explaining an image processing method according to the first embodiment.
  • FIG. 10 is a block diagram for explaining the configuration of an X-ray imaging apparatus according to a second embodiment.
  • FIG. 11 is a diagram for explaining generation of an identification image according to the second embodiment.
  • FIG. 12 is a block diagram for explaining the configuration of an X-ray imaging apparatus according to a third embodiment.
  • FIG. 13 is a diagram for explaining generation of an identification image according to the third embodiment.
  • FIG. 14 is a diagram for explaining generation of an enhanced image according to the third embodiment.
  • The X-ray imaging apparatus 100 performs X-ray imaging to identify a target object 102 inside the body of a subject 101. For example, the X-ray imaging apparatus 100 images a subject 101 who has undergone a laparotomy in an operating room by irradiating the subject with X-rays, in order to confirm whether or not a target object 102 (foreign matter) remains in the body.
  • the X-ray imaging apparatus 100 is, for example, a mobile X-ray imaging apparatus in which the entire apparatus is movable.
  • Target objects 102 include, for example, surgical gauze, suture needles, and forceps (such as hemostatic forceps).
  • X-ray imaging for confirmation is performed on the subject 101 so that a worker such as a doctor does not leave a target object 102 such as surgical gauze, a suture needle, or forceps in the body of the subject 101 after the wound is closed (so that nothing remains in the body).
  • a worker such as a doctor confirms that the target object 102 is not left inside the body of the subject 101 by viewing the captured X-ray image 10 (see FIG. 3).
  • the X-ray imaging apparatus 100 includes an X-ray irradiation unit 1, an X-ray detection unit 2, an X-ray image generation unit 3, a display unit 4, a storage unit 5, and a control unit 6.
  • the control unit 6 is an example of the "control unit” and the "image processing device” in the claims.
  • the X-ray irradiation unit 1 irradiates the subject 101 after surgery with X-rays.
  • the X-ray irradiation unit 1 includes an X-ray tube that emits X-rays when a voltage is applied.
  • The X-ray detection unit 2 detects X-rays emitted by the X-ray irradiation unit 1 and transmitted through the subject 101. Then, the X-ray detector 2 outputs a detection signal based on the detected X-rays.
  • the X-ray detection unit 2 includes, for example, an FPD (Flat Panel Detector).
  • the X-ray detector 2 is configured as a wireless type X-ray detector and outputs a detection signal as a radio signal.
  • The X-ray detection unit 2 is configured to be able to communicate with the X-ray image generation unit 3, which will be described later, through a wireless connection such as a wireless LAN, and outputs the detection signal to it.
  • The X-ray image generator 3 generates an X-ray image 10 based on the X-ray detection signal detected by the X-ray detector 2.
  • the X-ray image generation unit 3 controls X-ray imaging by controlling the X-ray irradiation unit 1 and the X-ray detection unit 2 .
  • the X-ray image generator 3 is configured to be able to communicate with the X-ray detector 2 through a wireless connection such as a wireless LAN.
  • the X-ray image generator 3 includes a processor such as an FPGA (field-programmable gate array). Then, the X-ray image generator 3 outputs the generated X-ray image 10 to the controller 6, which will be described later.
  • The X-ray image 10 is an image obtained by X-ray imaging of the abdomen of the subject 101 after surgery.
  • The X-ray image 10 includes surgical gauze as the target object 102.
  • the surgical gauze is woven with a contrast thread that does not easily transmit X-rays so that it can be visually recognized in the X-ray image 10 obtained by radiography after surgery.
  • The display unit 4 includes, for example, a touch-panel liquid crystal display. The display unit 4 displays the captured X-ray image 10, and also displays an identification image 20 (see FIGS. 4 and 8) output by the image output unit 65, which will be described later. Further, the display unit 4 is configured to receive input operations on the touch panel by an operator such as a doctor for operating the X-ray imaging apparatus 100.
  • the storage unit 5 is composed of a storage device such as a hard disk drive, for example.
  • The storage unit 5 stores image data such as the X-ray image 10 generated by the X-ray image generation unit 3 and the identification image 20 (see FIG. 4) generated by the control unit 6, which will be described later.
  • The storage unit 5 is also configured to store various setting values for operating the X-ray imaging apparatus 100.
  • The storage unit 5 also stores a program used by the control unit 6 for control processing of the X-ray imaging apparatus 100.
  • The storage unit 5 also stores in advance a detection model 51 and a removal model 52, which will be described later.
  • the detection model 51 is an example of a "first trained model” in the scope of claims.
  • the elimination model 52 is an example of a "second trained model” in the scope of claims.
  • the control unit 6 is a computer including, for example, a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), a ROM (Read Only Memory) and a RAM (Random Access Memory).
  • The control unit 6 includes an area detection unit 61, a partial image generation unit 62, a contour image generation unit 63, an identification image generation unit 64, and an image output unit 65 as functional configurations. That is, the area detection unit 61, the partial image generation unit 62, the contour image generation unit 63, the identification image generation unit 64, and the image output unit 65 are functional blocks implemented as software in the control unit 6, and are configured to function by the hardware executing a predetermined control program.
  • The control unit 6 is configured to execute image processing for generating, from the X-ray image 10, an identification image 20 for identifying a target object 102 included in the X-ray image 10. Specifically, the control unit 6 acquires the X-ray image 10 generated by the X-ray image generator 3. Then, the control unit 6 performs, on the X-ray image 10, the processes of detecting a region of interest 11, generating a removed image 12, and generating a contour image 15.
  • The region of interest 11 is a region of the X-ray image 10 that includes the target object 102 inside the body of the subject 101.
  • The removed image 12 is an image obtained by removing the target object 102 from the X-ray image 10.
  • The contour image 15 is an image showing the extracted contour of the subject 101 (including structures such as the bones of the subject 101) in the X-ray image 10.
  • The control unit 6 also generates an enhanced whole image 13, in which the target object 102 is enhanced (extracted), based on the removed image 12 and the X-ray image 10. Then, the control unit 6 generates an enhanced partial image 14 by cutting out the portion corresponding to the region of interest 11 in the enhanced whole image 13, and synthesizes the enhanced partial image 14 with the contour image 15 to generate the identification image 20.
  • The region detection unit 61 detects the region of interest 11 from the X-ray image 10 using the detection model 51. Specifically, the region detection unit 61 acquires the position (coordinates) and size of the region of interest 11 based on the output from the detection model 51 to which the X-ray image 10 is input.
  • The detection model 51 is a trained model (algorithm) generated by machine learning using deep learning so as to detect the region of interest 11 including the target object 102 in the X-ray image 10.
  • the detection model 51 is generated in advance by a learning device 103 separate from the X-ray imaging apparatus 100 .
  • the learning device 103 is, for example, a computer including a CPU, GPU, ROM, and RAM.
  • the learning device 103 generates a detection model 51 by machine learning using a plurality of teacher input X-ray images 10a and a plurality of teacher output regions of interest 11a as teacher data (training set).
  • the teacher input X-ray image 10a is generated so as to simulate the X-ray image 10 obtained by imaging the subject 101 with the target object 102 left inside the body.
  • The teacher input X-ray image 10a is generated so as to have the same conditions (size, etc.) as the X-ray image 10 used as input in inference using the detection model 51.
  • the teacher output region of interest 11a is information (annotation) indicating the position and size of the region corresponding to the target object 102 in the teacher input X-ray image 10a.
  • As the region detection (object detection) algorithm of the detection model 51, for example, Faster R-CNN (Faster Region-based Convolutional Neural Network) is used. In object detection using Faster R-CNN, one or more target objects are detected in the input image, each surrounded by a rectangular area (bounding box). A likelihood (decision value) for each detection (inference) result is estimated at the same time. The likelihood is a numerical value indicating the possibility that the target object is included in the detected rectangular area.
  • The detection model 51 detects the one region with the highest likelihood among the multiple regions estimated to contain the target object 102. That is, the region detection unit 61 is configured to detect, as the region of interest 11, the one region with the highest likelihood from the X-ray image 10 using the detection model 51.
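  • The following is a minimal sketch of how a region detection step like that of the region detection unit 61 can keep only the single highest-likelihood bounding box with Faster R-CNN, written in Python with PyTorch/torchvision. The pretrained COCO weights merely stand in for the patent's detection model 51, which is trained on simulated post-operative X-ray images; the function name and the single-channel input handling are illustrative assumptions, not details from the source.

        import torch
        import torchvision

        # Generic pretrained Faster R-CNN; the patent's detection model 51 would
        # instead be trained on teacher input X-ray images 10a with annotated
        # teacher output regions of interest 11a.
        model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
        model.eval()

        def detect_region_of_interest(xray: torch.Tensor):
            """Return (box, score) for the single highest-likelihood detection.

            xray: float tensor of shape (1, H, W) with values in [0, 1].
            """
            with torch.no_grad():
                out = model([xray.expand(3, -1, -1)])[0]  # detector expects 3 channels
            if len(out["scores"]) == 0:
                return None                               # no candidate region found
            best = torch.argmax(out["scores"])            # keep highest likelihood only
            return out["boxes"][best], out["scores"][best]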
  • In the first embodiment, the partial image generation unit 62 of the control unit 6 generates an enhanced partial image 14 based on the region of interest 11 detected by the region detection unit 61 and the output result of image processing by the removal model 52.
  • the partial image generation unit 62 (control unit 6 ) generates the removal image 12 from the X-ray image 10 as an output result of image processing by the removal model 52 .
  • The partial image generation unit 62 generates the removed image 12 corresponding to the entire X-ray image 10 from the entire X-ray image 10 using the removal model 52.
  • The removal model 52 is a trained model (algorithm) generated by machine learning using deep learning so as to remove (suppress) the target object 102 in the X-ray image 10, thereby generating the removed image 12 in which the target object 102 is removed from the X-ray image 10.
  • the removal model 52 is generated in advance by a learning device 103 separate from the X-ray imaging apparatus 100, similar to the detection model 51.
  • the learning device 103 generates a removal model 52 by machine learning using a plurality of teacher input X-ray images 10b and a plurality of teacher output removal images 12b as teacher data (training set).
  • the teacher input X-ray image 10b is generated so as to simulate the X-ray image 10 obtained by imaging the subject 101 with the target object 102 left inside the body.
  • the teacher output removed image 12b is an image obtained by removing the target object 102 from the teacher input X-ray image 10b.
  • The teacher input X-ray image 10b and the teacher output removed image 12b are generated so as to have the same conditions (size, etc.) as the X-ray image 10 used as input in inference using the removal model 52.
  • The removal model 52 is generated, for example, based on U-Net, which is one type of fully convolutional network (FCN).
  • The removal model 52 is generated by learning to perform an image transformation (image reconstruction) that removes, from the X-ray image 10, the portion estimated to be the target object 102, by transforming the pixels estimated to belong to the target object 102 among the pixels of the input X-ray image 10.
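  • As a concrete illustration of the U-Net-based image reconstruction described above, here is a compact U-Net-style network in PyTorch that could serve as a removal model. The depth, channel counts, and layer sizes are assumptions for the sketch; the patent specifies only that the model is based on U-Net and is trained on teacher input images 10b paired with teacher output removed images 12b.

        import torch
        import torch.nn as nn

        def block(cin, cout):
            return nn.Sequential(
                nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

        class TinyUNet(nn.Module):
            """Two-level U-Net: encode, bottleneck, decode with skip connections."""
            def __init__(self):
                super().__init__()
                self.enc1, self.enc2 = block(1, 32), block(32, 64)
                self.pool = nn.MaxPool2d(2)
                self.mid = block(64, 128)
                self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
                self.dec2 = block(128, 64)
                self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
                self.dec1 = block(64, 32)
                self.out = nn.Conv2d(32, 1, 1)  # reconstructed image without the object

            def forward(self, x):               # x: (N, 1, H, W), H and W divisible by 4
                e1 = self.enc1(x)
                e2 = self.enc2(self.pool(e1))
                m = self.mid(self.pool(e2))
                d2 = self.dec2(torch.cat([self.up2(m), e2], dim=1))
                d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
                return self.out(d1)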
  • The partial image generation unit 62 (control unit 6) generates the enhanced partial image 14 based on the X-ray image 10, the removed image 12, and the region of interest 11.
  • The enhanced partial image 14 is an image obtained by cutting out the portion corresponding to the region of interest 11 in the X-ray image 10 and enhancing the target object 102.
  • The partial image generator 62 generates the enhanced whole image 13, in which the target object 102 in the entire X-ray image 10 is emphasized, based on the difference between the X-ray image 10 and the removed image 12. That is, the partial image generator 62 subtracts the removed image 12 from the X-ray image 10 to generate the enhanced whole image 13 having the same size (region) as the X-ray image 10.
  • The enhanced whole image 13 is an image in which the target object 102 included in the X-ray image 10 is enhanced (extracted).
  • The enhanced whole image 13 includes structures other than the target object 102, such as the bones of the subject 101.
  • The partial image generation unit 62 is configured to generate the enhanced partial image 14 by cutting out the portion corresponding to the region of interest 11 from the generated enhanced whole image 13. That is, in the first embodiment, the enhanced partial image 14 has the same position and size as the region of interest 11 detected by the detection model 51 in the X-ray image 10.
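  • The subtraction and cutout performed by the partial image generation unit 62 reduce to a few array operations; a sketch in NumPy follows. The clipping to [0, 1] and the (x0, y0, x1, y1) box convention are assumptions.

        import numpy as np

        def enhanced_partial(xray: np.ndarray, removed: np.ndarray, roi) -> np.ndarray:
            """roi = (x0, y0, x1, y1) integer pixel coordinates from the detector."""
            enhanced_whole = np.clip(xray - removed, 0.0, 1.0)  # enhanced whole image 13
            x0, y0, x1, y1 = roi
            return enhanced_whole[y0:y1, x0:x1]                 # enhanced partial image 14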
  • The contour image generator 63 (controller 6) generates the contour image 15 by executing image processing for contour extraction (edge detection) on the X-ray image 10.
  • Specifically, the contour image generation unit 63 blurs the X-ray image 10.
  • The contour image generator 63 then generates the contour image 15 based on the difference between the X-ray image 10 after blurring and the X-ray image 10 before blurring.
  • The generated contour image 15 is an image showing the high-frequency components of the X-ray image 10.
  • the contour image generator 63 is configured to be able to extract a contour with a lower intensity (lower contrast) than the enhanced whole image 13 from the X-ray image 10 by executing the blurring process.
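  • The blur-and-difference step amounts to a high-pass filter; a sketch using OpenCV follows, with the Gaussian kernel size as an assumption (the patent does not specify the blurring parameters).

        import cv2
        import numpy as np

        def contour_image(xray: np.ndarray) -> np.ndarray:
            """xray: float32 array in [0, 1]. Returns the high-frequency residual."""
            blurred = cv2.GaussianBlur(xray, (15, 15), 0)
            # difference of pre- and post-blur images keeps edges at low contrast
            return np.clip(xray - blurred, 0.0, 1.0)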
  • The identification image generator 64 (controller 6) generates the identification image 20 based on the emphasized partial image 14 and the contour image 15. Specifically, the identification image generator 64 cuts out the portion corresponding to the emphasized partial image 14 from the contour image 15 to generate a contour partial image 16.
  • The identification image generator 64 combines the contour partial image 16 and the emphasized partial image 14 to generate a combined partial image 17.
  • The identification image generator 64 then generates the identification image 20 by synthesizing the combined partial image 17 into the clipped portion of the contour image 15.
  • The identification image generation unit 64 (control unit 6) weights the pixel values of the emphasized partial image 14 and the pixel values of the portion (contour partial image 16) of the contour image 15 corresponding to the emphasized partial image 14, and synthesizes the contour image 15 and the emphasized partial image 14 to generate the identification image 20.
  • the identification image generator 64 weights the transparency of each pixel of the outline partial image 16 and the emphasized partial image 14 based on the luminance value of the pixels of the weighted image 18 .
  • The identification image generation unit 64 generates the combined partial image 17 so that the emphasized partial image 14 is displayed in the portion (white portion) of the weighted image 18 where the luminance value is large, and the contour image 15 (contour partial image 16) is displayed in the portion (black portion) where the luminance value is small.
  • The weighted image 18 is generated by the identification image generator 64 so as to have the same size as the region of interest 11.
  • The weighted image 18 is configured such that the luminance value is large near the center and gradually decreases circularly (radially) toward the periphery.
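  • The weighted synthesis can be sketched as per-pixel alpha blending with a radial weight map that is 1 at the center and falls to 0 toward the periphery; the linear falloff profile below is an assumption.

        import numpy as np

        def radial_weight(h: int, w: int) -> np.ndarray:
            """Weighted image 18: bright at the center, dark toward the edges."""
            yy, xx = np.mgrid[0:h, 0:w]
            cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
            r = np.hypot(yy - cy, xx - cx) / np.hypot(cy, cx)  # normalized radius
            return np.clip(1.0 - r, 0.0, 1.0)

        def composite_partial(enhanced14: np.ndarray, contour16: np.ndarray) -> np.ndarray:
            """Blend the enhanced partial image over the contour partial image."""
            w = radial_weight(*enhanced14.shape)
            return w * enhanced14 + (1.0 - w) * contour16      # combined partial image 17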
  • the image output unit 65 (control unit 6) causes the display unit 4 to display a display that allows the generated emphasized partial image 14 to be identified.
  • Specifically, the image output unit 65 is configured to display the identification image 20 generated by the identification image generation unit 64 and the X-ray image 10 on the display unit 4.
  • In the first embodiment, the image output unit 65 is configured to display the X-ray image 10 and the identification image 20 side by side on the display unit 4.
  • The image output unit 65 may also be configured to switch, based on an input operation on the touch panel of the display unit 4, between displaying the identification image 20 and the X-ray image 10 side by side and displaying only the X-ray image 10 without the identification image 20.
  • In step 401, the subject 101 is irradiated with X-rays in order to detect the target object 102 inside the body of the subject 101 after surgery.
  • In step 402, the emitted X-rays are detected.
  • In step 403, an X-ray image 10 is generated based on the detected X-ray detection signal.
  • In step 404, the detection model 51 detects the region of interest 11 from the X-ray image 10. Then, the size of the detected region of interest 11 and its position (coordinates) in the X-ray image 10 are acquired.
  • In step 405, a removed image 12 is generated from the X-ray image 10 as the output result of image processing by the removal model 52.
  • In step 406, an enhanced whole image 13 is generated based on the difference between the X-ray image 10 and the removed image 12.
  • In step 407, the enhanced partial image 14 is generated by cutting out the portion corresponding to the region of interest 11 from the enhanced whole image 13. That is, through the processing in steps 405 to 407, the enhanced partial image 14 is generated based on the detected region of interest 11 and the output result of the image processing by the removal model 52.
  • In step 408, a contour image 15 is generated from the X-ray image 10.
  • In step 409, an identification image 20 is generated based on the enhanced partial image 14 and the contour image 15. Specifically, based on the weighted image 18, the contour partial image 16 cut out from the contour image 15 and the emphasized partial image 14 are combined to generate a combined partial image 17, and the identification image 20 is generated by synthesizing the combined partial image 17 with the contour image 15.
  • In step 410, the display unit 4 displays the emphasized partial image 14 in an identifiable manner. That is, the identification image 20 is displayed on the display unit 4. Specifically, the X-ray image 10 and the identification image 20 are displayed side by side on the display unit 4.
  • Note that any of the detection of the region of interest 11 in step 404, the generation of the removed image 12 in step 405, and the generation of the contour image 15 in step 408 may be executed first.
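  • Tying steps 404 to 410 together, the pipeline can be sketched as below, reusing the illustrative helpers from the earlier sketches (enhanced_partial, contour_image, composite_partial); those names, the roi coming from the step-404 detector as integer coordinates, and treating the removal model as a plain callable on a NumPy array are all assumptions.

        def make_identification_image(xray, removal_model, roi):
            """roi: (x0, y0, x1, y1) from step 404, already rounded to ints."""
            removed = removal_model(xray)                     # step 405
            part14 = enhanced_partial(xray, removed, roi)     # steps 406-407
            contour15 = contour_image(xray)                   # step 408
            x0, y0, x1, y1 = roi
            part16 = contour15[y0:y1, x0:x1]                  # contour partial image 16
            identification = contour15.copy()                 # step 409: paste composite
            identification[y0:y1, x0:x1] = composite_partial(part14, part16)
            return identification                             # displayed in step 410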
  • As described above, the X-ray imaging apparatus 100 of the first embodiment generates the enhanced partial image 14, in which the portion corresponding to the region of interest 11 in the X-ray image 10 is clipped and the target object 102 is enhanced, and causes the display unit 4 to display the generated enhanced partial image 14 in an identifiable manner. As a result, only the portion corresponding to the region of interest 11 including the target object 102 is cut out and emphasized, so structures such as human bones in regions other than the portion including the target object 102 can be suppressed from being emphasized in a manner similar to the target object 102. Therefore, it is possible to prevent the target object 102 from becoming difficult to identify due to enhancement of human body structures outside the portion including it.
  • Consequently, when the target object 102 included in the X-ray image 10 obtained by imaging the subject 101 is emphasized by image processing using a trained model generated by machine learning, the target object 102 included in the X-ray image 10 can be easily confirmed.
  • In the first embodiment, the control unit 6 includes the identification image generation unit 64, which generates the identification image 20 for identifying the target object 102 included in the X-ray image 10 based on the enhanced partial image 14 generated by the partial image generation unit 62 (control unit 6), and the image output unit 65 is configured to display the identification image 20 generated by the identification image generation unit 64 on the display unit 4.
  • In the first embodiment, the control unit 6 includes the contour image generation unit 63, which generates the contour image 15 showing the contour of the subject 101 in the X-ray image 10, and the identification image generation unit 64 (control unit 6) generates the identification image 20 based on the contour image 15 generated by the contour image generation unit 63 and the enhanced partial image 14 generated by the partial image generation unit 62 (control unit 6).
  • Since the identification image 20 is generated based on the contour image 15, in which the contour of the subject 101 is extracted, and the enhanced partial image 14, the portion of the identification image 20 other than the enhanced partial image 14 can be made the contour image 15 showing the contour of the subject 101.
  • That is, the identification image 20 can be generated so that the portion of the target object 102 is emphasized while portions other than the contour of the subject 101 included in the X-ray image 10 are removed, which makes the portion of the target object 102 stand out more. As a result, the identification image 20 can be generated so that the target object 102 can be identified more effectively, and the target object 102 included in the X-ray image 10 can be confirmed more easily.
  • In the first embodiment, the identification image generation unit 64 (control unit 6) generates the identification image 20 by synthesizing the contour image 15 and the emphasized partial image 14 with the pixel values of the emphasized partial image 14 and the pixel values of the portion of the contour image 15 corresponding to the emphasized partial image 14 each weighted.
  • With this configuration, the weighting can be applied so that the boundary between the enhanced partial image 14 and the contour image 15 switches smoothly, so the contour image 15 and the enhanced partial image 14 can be synthesized while smoothing the boundary line (outer peripheral portion) of the enhanced partial image 14.
  • the contour image generation unit 63 (the control unit 6) performs the blurring process on the X-ray image 10, and the X-ray image 10 after the blurring process is performed. , a contour image 15 is generated based on the difference from the X-ray image 10 before blurring.
  • With this configuration, the contour image 15 can be generated in a state where the intensity (contrast) of the contour of the subject 101 in the X-ray image 10 is relatively small. Therefore, the identification image 20 can be generated based on the contour image 15, in which the contrast of the structures included in the image is relatively low, and the enhanced partial image 14, in which the target object 102 is emphasized and the contrast is relatively high, so the visibility of the portion including the target object 102 in the identification image 20 can be relatively improved. As a result, the target object 102 included in the X-ray image 10 can be confirmed more easily.
  • the image output unit 65 (control unit 6) is configured to output the X-ray image 10 and the identification image 20 (220) to the display unit 4 as described above.
  • With this configuration, an operator such as a doctor can easily compare the identification image 20, in which the target object 102 is emphasized so as to be easily identifiable, with the X-ray image 10. Therefore, an operator such as a doctor can more easily confirm the target object 102 included in the X-ray image 10 by comparing the identification image 20 and the X-ray image 10.
  • In the first embodiment, the partial image generation unit 62 is configured to generate the enhanced partial image 14 having a position and size equal to the position and size, in the X-ray image 10, of the region of interest 11 detected by the detection model 51.
  • In the first embodiment, the partial image generation unit 62 (control unit 6) generates, using the removal model 52 (second trained model), the removed image 12 in which the target object 102 is removed from the entire X-ray image 10, generates the enhanced whole image 13, in which the target object 102 in the entire X-ray image 10 is enhanced, based on the difference between the X-ray image 10 and the removed image 12, and generates the enhanced partial image 14 by cutting out the portion corresponding to the region of interest 11 from the generated enhanced whole image 13. With this configuration, the enhanced whole image 13 can be easily generated from the entire X-ray image 10 without specifying in advance the portion containing the target object 102. As a result, the enhanced partial image 14 cut out from the enhanced whole image 13 can be easily generated, so the target object 102 included in the X-ray image 10 can be easily confirmed.
  • In the first embodiment, the region detection unit 61 is configured to detect, as the region of interest 11, the one region with the highest likelihood among the plurality of regions estimated by the detection model 51 (first trained model) to contain the target object 102 in the X-ray image 10.
  • If multiple regions of interest 11 were detected, multiple regions corresponding to those detected regions of interest 11 would be emphasized. In that case, identifying the portion that actually includes the target object 102 in the X-ray image 10 would become difficult because regions other than the target object 102 are also detected.
  • Therefore, the region detection unit 61 is configured to detect, from the X-ray image 10, the one region of interest 11 that has the highest likelihood, as estimated by the detection model 51, of containing the target object 102. With this configuration, unlike the case where a plurality of regions of interest 11 are detected, only the portion corresponding to one region of interest 11 is emphasized, so it is possible to prevent the portion containing the target object 102 from becoming difficult to identify. As a result, the part including the target object 102 can be easily identified by viewing the display on the display unit 4, so the target object 102 included in the X-ray image 10 can be confirmed more easily.
  • the portion corresponding to the region of interest 11 in the X-ray image 10 is cut out and the enhanced partial image 14 in which the target object 102 is enhanced is generated.
  • the display section 4 is caused to display a display that allows the generated emphasized partial image 14 to be identified.
  • With this method, only the portion corresponding to the region of interest 11 including the target object 102 is cut out and emphasized, so structures such as human bones in regions other than the portion including the target object 102 can be suppressed from being emphasized in a manner similar to the target object 102. Therefore, it is possible to prevent the target object 102 from becoming difficult to identify due to enhancement of human body structures outside the portion including it.
  • As a result, it is possible to provide an image processing method that allows the target object 102 included in the X-ray image 10 obtained by imaging the subject 101 to be easily confirmed when the target object 102 is emphasized by image processing using a trained model generated by machine learning.
  • Next, a second embodiment will be described with reference to FIG. 10. Unlike the first embodiment, in which the removed image 12 is generated from the entire X-ray image 10 by the removal model 52, in the second embodiment the portion of the X-ray image 10 corresponding to the region of interest 11 (an X-ray partial image 210) is cut out and a removed image 212 is generated from it.
  • The same components as those of the first embodiment are denoted by the same reference numerals, and description thereof is omitted.
  • the X-ray imaging apparatus 200 of the second embodiment includes a control section 206.
  • the control unit 206 performs image processing for generating an identification image 220 for identifying the target object 102 included in the X-ray image 10 from the X-ray image 10.
  • the control unit 206 includes an area detection unit 61, a partial image generation unit 262, a contour image generation unit 63, an identification image generation unit 264, and an image output unit 65 as functional configurations.
  • the control unit 206 is an example of a “control unit” and an “image processing device” in the claims.
  • the region detection unit 61 detects the region of interest 11 from the X-ray image 10 using the detection model 51, as in the first embodiment. Also, the contour image generator 63 (controller 206 ) generates the contour image 15 from the X-ray image 10 .
  • The partial image generation unit 262 (control unit 206) cuts out, from the X-ray image 10, the portion corresponding to the region of interest 11 based on the region of interest 11, to generate an X-ray partial image 210.
  • The X-ray partial image 210 has a predetermined (constant) size centered on the position of the region of interest 11 in the X-ray image 10, regardless of the size of the region of interest 11. For example, if the target object 102 is surgical gauze, the predetermined size is set to be slightly larger than the size of the surgical gauze in the X-ray image 10.
  • The partial image generation unit 262 (control unit 206) is configured to generate, as the output result of image processing by the removal model 252, a removed image 212 in which the portion corresponding to the region of interest 11 is cut out and the target object 102 is removed. That is, the partial image generator 262 uses the removal model 252 to generate, from the X-ray partial image 210 cut out to a predetermined size, a removed image 212 of the same predetermined size.
  • the removal model 252 is a trained model generated by machine learning using deep learning so as to generate the removal image 212 in which the target object 102 is removed from the X-ray partial image 210 .
  • the removal model 252 is generated in advance by the learning device 103 separate from the X-ray imaging apparatus 200 and stored in the storage unit 5, similarly to the removal model 52 according to the first embodiment.
  • The removal model 252 is an example of the "second trained model" in the claims.
  • The partial image generation unit 262 (control unit 206) is configured to generate an enhanced partial image 214 based on the difference between the portion corresponding to the region of interest 11 of the X-ray image 10 (the X-ray partial image 210) and the removed image 212, in which the portion corresponding to the region of interest 11 is cut out and the target object 102 is removed. Similar to the X-ray partial image 210, the enhanced partial image 214 has a predetermined size centered on the position, in the X-ray image 10, of the detected region of interest 11, regardless of the size of the region of interest 11. That is, the partial image generator 262 generates an enhanced partial image 214 having the same size as the X-ray partial image 210 and the removed image 212.
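  • The fixed-size cutout of the second embodiment can be sketched as follows: a patch of predetermined size centered on the detected region of interest, clamped to the image bounds. PATCH = 256 is an assumed value; the patent says only that the size is set slightly larger than the gauze in the X-ray image.

        import numpy as np

        PATCH = 256  # assumed predetermined size in pixels

        def fixed_size_patch(xray: np.ndarray, roi) -> np.ndarray:
            """Cut out the X-ray partial image 210 around the center of roi."""
            x0, y0, x1, y1 = roi
            cy, cx = (y0 + y1) // 2, (x0 + x1) // 2
            h, w = xray.shape
            top = int(np.clip(cy - PATCH // 2, 0, max(h - PATCH, 0)))
            left = int(np.clip(cx - PATCH // 2, 0, max(w - PATCH, 0)))
            return xray[top:top + PATCH, left:left + PATCH]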
  • the identification image generation unit 264 (control unit 206) generates the identification image 220 based on the emphasized partial image 214 and the contour image 15, as in the first embodiment. Specifically, the identification image generator 264 generates a combined partial image 217 by combining the contour partial image 216 cut out from the contour image 15 and the emphasized partial image 214 . A contour partial image 216 is generated by cutting out a region having a predetermined size equal to that of the emphasized partial image 214 from a position corresponding to the region of interest 11 in the contour image 15 . Further, the identification image generator 264 synthesizes the contour image 15 (contour partial image 216) and the emphasized partial image 214 in a weighted state based on the weighted image 218, as in the first embodiment.
  • the weighted image 218 likewise has the same size as the contour partial image 216 and the enhanced partial image 214 . Weighting based on the weight image 218 in synthesizing the contour image 15 (contour partial image 216) and the enhanced partial image 214 (generating the identification image 220) is the same as in the first embodiment.
  • the image output unit 65 (control unit 206) is configured to display the X-ray image 10 and the identification image 220 side by side on the display unit 4.
  • Other configurations of the second embodiment are the same as those of the first embodiment.
  • In the second embodiment, the partial image generation unit 262 (control unit 206) is configured to generate the enhanced partial image 214 having a predetermined size centered on the position, in the X-ray image 10, of the detected region of interest 11, regardless of the size of the region of interest 11.
  • the size of the region emphasized in the display on the display unit 4 can be made constant. Therefore, it is possible to suppress the size of the region to be emphasized from being too small and from being too large, so it is possible to suppress deterioration in the visibility of the target object 102 in the display on the display unit 4 .
  • In the second embodiment, the partial image generation unit 262 uses the removal model 252 (second trained model) to generate the removed image 212, in which the portion corresponding to the region of interest 11 of the X-ray image 10 (the X-ray partial image 210) is cut out and the target object 102 is removed, and generates the enhanced partial image 214 based on the difference between the X-ray partial image 210 and the removed image 212.
  • With this configuration, the image processing using the removal model 252 is performed on the X-ray partial image 210, which is the clipped portion of the X-ray image 10, so the processing load of the image processing using the removal model 252 by the partial image generation unit 262 can be reduced compared with the case where the image processing is performed on the entire X-ray image 10.
  • Other effects of the second embodiment are the same as those of the first embodiment.
  • The X-ray imaging apparatus 300 of the third embodiment includes a control unit 306. Similar to the control unit 6 of the first embodiment, the control unit 306 is configured to perform image processing for generating, from the X-ray image 10, an identification image 320 (see FIG. 13) for identifying the target object 102 included in the X-ray image 10. Further, the control unit 306 includes an area detection unit 61, a partial image generation unit 362, a contour image generation unit 63, an identification image generation unit 364, and an image output unit 65 as functional configurations. Note that the control unit 306 is an example of a "control unit" and an "image processing device" in the claims.
  • the region detection unit 61 detects the region of interest 11 from the X-ray image 10 using the detection model 51, as in the first embodiment. Further, the contour image generator 63 (controller 306 ) generates the contour image 15 from the X-ray image 10 .
  • The partial image generation unit 362 (control unit 306) generates, as the output result of image processing by the enhancement model 352, an enhanced image 313 in which the target object 102 in the X-ray image 10 is emphasized (extracted).
  • the partial image generator 362 generates an enhanced image 313 corresponding to the entire X-ray image 10 from the entire X-ray image 10 by the enhancement model 352 .
  • the enhancement model 352 is a trained model (algorithm) generated by machine learning using deep learning so as to enhance the target object 102 in the X-ray image 10 .
  • the enhanced image 313 includes not only an image in which only the target object 102 is extracted, but also an image including structures such as bones of the subject 101 other than the target object 102 .
  • the enhancement model 352 is generated in advance by the learning device 103 separate from the X-ray imaging device 100, like the removal model 52 according to the first embodiment.
  • the learning device 103 generates an enhancement model 352 by machine learning using a plurality of teacher input X-ray images 310b and a plurality of teacher output enhanced images 313b as teacher data (training set).
  • the teacher input X-ray image 310b is generated so as to simulate the X-ray image 10 obtained by imaging the subject 101 including the target object 102 inside the body.
  • the teacher output enhanced image 313b is an image obtained by extracting only the target object 102 from the teacher input X-ray image 310b.
  • the teacher input X-ray image 310b and the teacher output enhanced image 313b are generated so as to have the same conditions (size, etc.) as the X-ray image 10 used for input in inference using the enhancement model 352 .
  • The enhancement model 352 is generated, for example, based on U-Net, which is one type of fully convolutional network (FCN).
  • The enhancement model 352 is generated by learning to perform an image transformation (image reconstruction) that removes (suppresses) the portions of the X-ray image 10 presumed to be background other than the target object 102, by transforming the pixels of the input X-ray image 10 excluding the portion estimated to be the target object 102. The generated enhancement model 352 is stored in advance in the storage unit 5.
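  • One training step for such an enhancement model could look like the sketch below, pairing teacher input images 310b with teacher output enhanced images 313b as the patent describes; the Adam optimizer, learning rate, and L1 loss are assumptions, and TinyUNet is the illustrative network from the earlier sketch.

        import torch
        import torch.nn as nn

        model = TinyUNet()  # U-Net-style network, as sketched for the removal model
        opt = torch.optim.Adam(model.parameters(), lr=1e-4)
        loss_fn = nn.L1Loss()

        def train_step(teacher_input_310b: torch.Tensor,
                       teacher_output_313b: torch.Tensor) -> float:
            """One gradient step toward reproducing the object-only target image."""
            opt.zero_grad()
            pred = model(teacher_input_310b)           # (N, 1, H, W)
            loss = loss_fn(pred, teacher_output_313b)  # background should be suppressed
            loss.backward()
            opt.step()
            return loss.item()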
  • The partial image generation unit 362 (control unit 306) is configured to generate an enhanced partial image 314 based on the enhanced image 313 generated by the enhancement model 352 and the region of interest 11. The partial image generator 362 generates the enhanced partial image 314 by cutting out the portion corresponding to the region of interest 11 from the enhanced image 313, as in the first embodiment.
  • The identification image generation unit 364 (control unit 306) generates the identification image 320 based on the emphasized partial image 314 and the contour image 15, as in the first embodiment. That is, the identification image generation unit 364 generates the contour partial image 16 and the weighted image 18 by the same processing as in the first embodiment, and synthesizes the contour image 15 (contour partial image 16) and the enhanced partial image 314 in a state weighted by the weighted image 18 to generate a combined partial image 317. Then, the identification image generator 364 generates the identification image 320 based on the combined partial image 317 and the contour image 15.
  • control unit 306 is configured to display the X-ray image 10 and the identification image 320 side by side on the display unit 4, as in the first embodiment.
  • Other configurations of the third embodiment are the same as those of the first embodiment.
  • In the third embodiment, the partial image generation unit 362 (control unit 306) generates the enhanced image 313, in which the target object 102 in the X-ray image 10 is enhanced, using the enhancement model 352 (second trained model), and is configured to generate the enhanced partial image 314 based on the enhanced image 313 and the region of interest 11.
  • Compared with a configuration in which an image from which the target object 102 has been removed is generated using a trained model learned to remove the target object 102 included in the X-ray image 10, and the target object 102 is then enhanced by taking the difference between that image and the X-ray image 10, generating the enhanced image 313 directly from the X-ray image 10 using the enhancement model 352 allows the target object 102 included in the X-ray image 10 to be enhanced with higher accuracy. That is, by using the enhancement model 352, the portion (background portion) other than the target object 102 in the X-ray image 10 can be suppressed (removed) with higher accuracy.
  • the partial image generation unit 362 is configured to generate the enhanced image 313 using the enhancement model 352 and to generate the enhanced partial image 314 based on the enhanced image 313 and the region of interest 11.
  • the enhanced partial image 314 can be generated such that the target object 102 is more easily visually recognizable. As a result, an operator such as a doctor can more easily confirm the target object 102 included in the X-ray image 10 .
  • Other effects of the third embodiment are similar to those of the first and second embodiments.
  • In the first to third embodiments, examples were shown in which the identification image generators 64, 264, 364 generate the identification image based on the emphasized partial image 14 (214, 314) and the contour image 15, but the present invention is not limited to this.
  • For example, the display unit 4 may be configured to display the enhanced partial image 14 (214, 314) in an identifiable manner based on the X-ray image 10 and the enhanced partial image 14 (214, 314); that is, the portion including the target object 102 may be displayed identifiably on the display unit 4.
  • In the first to third embodiments, examples were shown in which the identification image generators 64, 264, 364 (controllers 6, 206, 306) generate the identification image 20 (220, 320) by synthesizing the contour image 15 and the enhanced partial image 14 (214, 314) with the pixel values of the enhanced partial image 14 (214, 314) and the pixel values of the corresponding portion of the contour image 15 (contour partial images 16, 216) each weighted, but the present invention is not limited to this.
  • For example, the identification image 20 (220, 320) may be generated by synthesizing the contour image 15 and the enhanced partial image 14 (214, 314) without weighting.
  • In the first to third embodiments, examples were shown in which the contour image generation unit 63 (control units 6, 206, 306) generates the contour image 15 based on the difference between the X-ray image 10 after blurring and the X-ray image 10 before blurring, but the present invention is not limited to this.
  • For example, the contour image 15 may be generated by performing edge detection processing such as a Sobel filter or a Laplacian filter, as in the sketch below.
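  • A sketch of those alternative edge detectors with OpenCV; the kernel sizes are assumptions.

        import cv2
        import numpy as np

        def contour_sobel(xray: np.ndarray) -> np.ndarray:
            """Gradient-magnitude contours via Sobel filters."""
            gx = cv2.Sobel(xray, cv2.CV_32F, 1, 0, ksize=3)
            gy = cv2.Sobel(xray, cv2.CV_32F, 0, 1, ksize=3)
            return cv2.magnitude(gx, gy)

        def contour_laplacian(xray: np.ndarray) -> np.ndarray:
            """Second-derivative contours via a Laplacian filter."""
            return np.abs(cv2.Laplacian(xray, cv2.CV_32F, ksize=3))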
  • In the first and third embodiments, the partial image generators 62 and 362 generate the enhanced partial image 14 (314) having a position and size equal to those, in the X-ray image 10, of the region of interest 11 detected by the detection model 51 (first trained model), and in the second embodiment the enhanced partial image 214 having a predetermined size is generated centered on the position of the region of interest 11 in the X-ray image 10 regardless of the size of the region of interest 11; however, the present invention is not limited to these examples.
  • For example, an enhanced partial image that is larger than the detected region of interest 11 by a predetermined ratio may be generated.
  • the image output unit 65 (control units 6, 206, 306) is configured to display the X-ray image 10 and the identification image 20 (220, 320) side by side on the display unit 4.
  • the present invention is not limited to this.
  • only the identification image 20 (220, 320) may be displayed without displaying the X-ray image 10.
  • Further, the display of the identification image 20 (220, 320) and the display of the X-ray image 10 may be switched based on an operation on the operation unit (touch panel).
  • the region detection unit 61 (control units 6, 206, 306) is configured to detect one region from the X-ray image 10 as the region of interest 11.
  • the invention is not so limited. For example, it may be configured to detect multiple regions of interest.
  • the weighted image 18 (218) is configured such that the luminance value is large near the center and gradually (circularly) decreases toward the periphery.
  • the present invention is not limited to this.
  • the weighted image 18 (218) may be configured to have a rectangular shape instead of a circular shape so that the luminance value is gradually changed.
  • Further, the contour image 15 (contour partial images 16, 216) and the enhanced partial image 14 (214, 314) may be combined by adding or multiplying them.
  • the detection model 51 (first trained model) may be configured to output a region of interest having a predetermined size (constant size).
  • In the above embodiments, examples were shown in which the target object 102 left behind in the body of the subject 101 includes surgical gauze, a suture needle, and forceps, but the present invention is not limited to this.
  • the detected target object 102 may include bolts, fixing clips, and the like.
  • In the above embodiments, an example was shown in which the X-ray image generation unit 3 and the control unit 6, which are configured as separate hardware, respectively perform the control processing for generating the X-ray image 10 and the control processing for generating the enhanced partial image 14 (214, 314), but the present invention is not limited to this.
  • one common control unit (hardware) may be configured to generate the X-ray image 10 and the enhanced partial image 14 (214, 314).
  • the region detection unit 61, the partial image generation unit 62 (262, 362), the contour image generation unit 63, the identification image generation unit 64 (264, 364), and the image output unit 65 is configured as a functional block (software) in one piece of hardware (control unit 6), but the present invention is not limited to this.
  • For example, the region detection unit 61, the partial image generation unit 62 (262, 362), the contour image generation unit 63, the identification image generation unit 64 (264, 364), and the image output unit 65 may each be provided as separate hardware (arithmetic circuits).
  • In the above embodiments, an example was shown in which the learning device 103, which is separate from the X-ray imaging apparatus 100 (200, 300), generates the detection model 51 (first trained model), the removal model 52 (252), and the enhancement model 352 (second trained model), but the present invention is not limited to this.
  • the detection model 51, the removal model 52 (252) and the enhancement model 352 may be generated by the X-ray imaging apparatus 100 (200, 300).
  • the detection model 51, the removal model 52 (252), and the enhancement model 352 may be generated by different learning devices (PCs).
  • In the above embodiments, examples were shown in which the removal model 52 (252) and the enhancement model 352 are based on U-Net, which is one type of fully convolutional network (FCN), but the present invention is not limited to this.
  • the removal model 52 (252) and the enhancement model 352 may be generated based on a CNN (Convolutional Neural Network) including fully connected layers.
  • the removal model 52 (252) and enhancement model 352 may be generated based on Encoder-Decoder models other than U-Net such as SegNet or PSPNet.
  • In the first and second embodiments, examples were shown in which the partial image generation units 62 and 262 generate images in which the target object 102 contained in the X-ray image 10 is enhanced (the enhanced whole image 13 and the enhanced partial image 214) by taking the difference between the X-ray image 10 and the removed image 12 (212) (subtraction processing), but the present invention is not limited to this. For example, an image in which the target object 102 is emphasized may be generated by performing division processing of the X-ray image 10 by the removed image 12 (212) instead of subtraction processing.
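  • The division-based alternative can be sketched as below; the epsilon guard and the mapping of the ratio to an enhancement value are assumptions, since the patent only names division in place of subtraction.

        import numpy as np

        def enhanced_by_division(xray: np.ndarray, removed: np.ndarray) -> np.ndarray:
            """Divide the X-ray image by the removed image; deviations from 1 mark the object."""
            eps = 1e-6                               # avoid division by zero
            ratio = xray / (removed + eps)           # ~1 where nothing was removed
            return np.clip(np.abs(ratio - 1.0), 0.0, 1.0)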
  • an example of displaying the generated identification image 20 (220, 320) on the display unit 4 provided in the X-ray imaging apparatus 100 (200, 300) is shown.
  • the present invention is not limited to this.
  • the identification image 20 (220, 320) may be displayed on a display device such as an external monitor provided separately from the X-ray imaging apparatus 100 (200, 300).
  • In the above embodiments, the X-ray imaging apparatus 100 is a mobile (ward-round) X-ray imaging apparatus, but the present invention is not limited to this.
  • it may be a general X-ray imaging apparatus installed in an X-ray imaging room.
  • image processing for generating the identification image 20 (220, 320) is performed by the control unit 6 (206, 306) provided in the X-ray imaging apparatus 100 (200, 300).
  • the present invention is not limited to this.
  • the identification image may be generated by an image processing device (for example, a personal computer) that performs image processing separate from the X-ray imaging device.
  • In the third embodiment, an example was shown in which the partial image generation unit 362 (control unit 306) generates, from the entire X-ray image 10, the enhanced image 313 corresponding to the entire X-ray image 10 using the enhancement model 352 (second trained model), but the present invention is not limited to this.
  • For example, the partial image generation unit 362 may be configured to use an enhancement model (second trained model) trained to generate an enhanced partial image from the portion of the X-ray image 10 corresponding to the region of interest 11. That is, the partial image generation unit 362 may generate an enhanced partial image from the portion of the X-ray image 10 corresponding to the region of interest 11 as the output result of image processing by the second trained model.
  • (Item 1) An X-ray imaging apparatus comprising: an X-ray irradiation unit that irradiates a subject with X-rays; an X-ray detection unit that detects the X-rays emitted from the X-ray irradiation unit; an X-ray image generation unit that generates an X-ray image based on an X-ray detection signal detected by the X-ray detection unit; and a control unit, wherein the control unit includes: a region detection unit that detects a region of interest from the X-ray image by a first trained model generated by machine learning so as to detect the region of interest including a target object inside the body of the subject in the X-ray image; a partial image generation unit that, based on the region of interest detected by the region detection unit and an output result of image processing by a second trained model generated by machine learning so as to remove or emphasize the target object in the X-ray image, generates an enhanced partial image in which a portion corresponding to the region of interest in the X-ray image is cut out and the target object is emphasized; and an image output unit that causes a display unit to display an identifiable display of the enhanced partial image generated by the partial image generation unit.
  • (Item 2) The X-ray imaging apparatus according to item 1, wherein the control unit further includes an identification image generation unit that generates an identification image for identifying the target object included in the X-ray image based on the enhanced partial image generated by the partial image generation unit, and the image output unit is configured to cause the display unit to display the identification image generated by the identification image generation unit.
  • (Item 3) The X-ray imaging apparatus according to item 2, wherein the control unit further includes a contour image generation unit that generates a contour image representing the contour of the subject in the X-ray image, and the identification image generation unit is configured to generate the identification image based on the contour image generated by the contour image generation unit and the enhanced partial image generated by the partial image generation unit.
  • (Item 4) The X-ray imaging apparatus according to item 3, wherein the identification image generation unit is configured to generate the identification image by combining the contour image and the enhanced partial image while weighting the pixel values of the enhanced partial image and the pixel values of the portion of the contour image corresponding to the enhanced partial image.
  • (Item 5) The X-ray imaging apparatus according to item 3 or 4, wherein the contour image generation unit is configured to execute blurring processing on the X-ray image and to generate the contour image based on the difference between the X-ray image after the blurring processing is executed and the X-ray image before the blurring processing is executed.
  • (Item 6) The X-ray imaging apparatus according to any one of items 2 to 5, wherein the image output unit is configured to display the X-ray image and the identification image on the display unit.
  • (Item 7) The X-ray imaging apparatus according to any one of items 1 to 6, wherein the partial image generation unit is configured to generate either the enhanced partial image having a position and size equal to the position and size, in the X-ray image, of the region of interest detected by the first trained model, or the enhanced partial image having a predetermined size centered on the position, in the X-ray image, of the detected region of interest regardless of the size of the region of interest.
  • (Item 8) The X-ray imaging apparatus according to any one of items 1 to 7, wherein the partial image generation unit is configured to generate a removed image in which the target object is removed from the entire X-ray image by the second trained model, and to generate the enhanced partial image based on the difference between the X-ray image and the removed image.
  • (Item 9) The X-ray imaging apparatus according to any one of items 1 to 7, wherein the partial image generation unit is configured to generate, by the second trained model, a removed image in which the portion corresponding to the region of interest is cut out of the X-ray image and the target object is removed, and to generate the enhanced partial image based on the difference between the portion of the X-ray image corresponding to the region of interest and the removed image.
  • (Item 10) The X-ray imaging apparatus according to any one of items 1 to 7, wherein the partial image generation unit is configured to generate an enhanced image in which the target object in the X-ray image is enhanced by the second trained model, and to generate the enhanced partial image based on the enhanced image and the region of interest.
  • (Item 11) The X-ray imaging apparatus according to any one of items 1 to 10, wherein the region detection unit is configured to detect, as the region of interest, the one region having the highest likelihood among a plurality of regions estimated by the first trained model to include the target object in the X-ray image.
  • (Item 12) An image processing device comprising: a region detection unit that detects a region of interest from an X-ray image, generated based on a detection signal of X-rays irradiated to a subject, by a first trained model generated by machine learning so as to detect the region of interest including a target object inside the body of the subject; a partial image generation unit that, based on the region of interest detected by the region detection unit and an output result of image processing by a second trained model generated by machine learning so as to remove or emphasize the target object in the X-ray image, generates an enhanced partial image in which a portion corresponding to the region of interest in the X-ray image is cut out and the target object is emphasized; and an image output unit that causes a display unit to display an identifiable display of the enhanced partial image generated by the partial image generation unit.
  • 1 X-ray irradiation unit
  • 2 X-ray detection unit
  • 3 X-ray image generation unit
  • 4 Display unit
  • 6, 206, 306 Control unit
  • 10 X-ray image
  • 11 Region of interest
  • 12, 212 Removed image
  • 13 Enhanced whole image
  • 14, 214, 314 Enhanced partial image
  • 15 Contour image
  • 20, 220, 320 Identification image
  • 51 Detection model (first trained model)
  • 52, 252 Removal model (second trained model)
  • 61 Region detection unit
  • 62, 262, 362 Partial image generation unit
  • 63 Contour image generation unit
  • 64, 264, 364 Identification image generation unit
  • 65 Image output unit
  • 100, 200, 300 X-ray imaging apparatus
  • 101 Subject
  • 102 Target object
  • 313 Enhanced image
  • 352 Enhancement model (second trained model)

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Engineering & Computer Science (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Optics & Photonics (AREA)
  • Pathology (AREA)
  • Physics & Mathematics (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

This X-ray imaging apparatus (100) detects a region of interest (11) including a target object (102) left behind in the body of a subject (101) from an X-ray image (10) using a detection model (51) generated by machine learning. On the basis of the detected region of interest (11) and an output result of image processing by a removal model (52) generated by machine learning, a partially highlighted image (14), in which a portion corresponding to the region of interest (11) in the X-ray image (10) is cut out and the target object (102) is highlighted, is generated. A display unit (4) is caused to display the partially highlighted image (14) in an identifiable manner.

Description

X-ray imaging apparatus, image processing device, and image processing method
The present invention relates to an X-ray imaging apparatus, an image processing device, and an image processing method.
Conventionally, biological image processing apparatuses that extract a target object contained in the body from an image of a subject have been known. Such a biological image processing apparatus is disclosed, for example, in Japanese Patent Application Laid-Open No. 2018-89301.
The biological image processing apparatus described in JP-A-2018-89301 performs image processing for extracting a target object from a biological image of a subject that includes the target object. Specifically, this biological image processing apparatus uses a trained neural network to generate, from the biological image of the subject including the target object, a biological image that does not include the target object. This biological image processing apparatus is then configured to generate a subtraction image, in which the target object is extracted, by subtracting the biological image not including the target object, which is the output image of the image processing by the neural network, from the biological image of the subject including the target object.
Japanese Unexamined Patent Application Publication No. 2018-89301
Here, although not described in JP-A-2018-89301, X-ray imaging is performed on a subject after surgery in order for a worker such as a doctor to confirm the presence or absence of a foreign body (target object), such as hemostatic gauze (surgical gauze), left behind in the body of the subject after the operation. It is therefore conceivable, as in the biological image processing apparatus described in JP-A-2018-89301, to perform image processing using a neural network (trained model) on the captured X-ray image (biological image), that is, to generate an image not including the target object from an image including it, and thereby to generate an image in which a target object such as surgical gauze included in the X-ray image of the post-operative subject is extracted (enhanced).
However, when an image in which the target object is emphasized is generated by performing image processing using the trained model on the X-ray image of the subject, structures of the human body such as bones may be emphasized in the same way as the target object. In this case, it becomes difficult to confirm the target object included in the X-ray image, because structures other than the target object are emphasized in the same way as the target object.
The present invention has been made to solve the above problems, and one object of the present invention is to provide an X-ray imaging apparatus, an image processing device, and an image processing method that make it possible to easily confirm a target object included in an X-ray image of a subject when the target object is emphasized by image processing using a trained model generated by machine learning.
In order to achieve the above object, an X-ray imaging apparatus according to a first aspect of the present invention includes: an X-ray irradiation unit that irradiates a subject with X-rays; an X-ray detection unit that detects the X-rays emitted from the X-ray irradiation unit; an X-ray image generation unit that generates an X-ray image based on an X-ray detection signal detected by the X-ray detection unit; and a control unit. The control unit includes: a region detection unit that detects a region of interest from the X-ray image by a first trained model generated by machine learning so as to detect the region of interest including a target object inside the body of the subject in the X-ray image; a partial image generation unit that, based on the region of interest detected by the region detection unit and an output result of image processing by a second trained model generated by machine learning so as to remove or emphasize the target object in the X-ray image, generates an enhanced partial image in which a portion corresponding to the region of interest in the X-ray image is cut out and the target object is emphasized; and an image output unit that causes a display unit to display an identifiable display of the enhanced partial image generated by the partial image generation unit. Note that "removal of the target object" as used here is a broad concept that also includes weakening (suppressing) the image of the target object.
An image processing device according to a second aspect of the present invention includes: a region detection unit that detects a region of interest from an X-ray image, generated based on a detection signal of X-rays irradiated to a subject, by a first trained model generated by machine learning so as to detect the region of interest including a target object inside the body of the subject; a partial image generation unit that, based on the region of interest detected by the region detection unit and an output result of image processing by a second trained model generated by machine learning so as to remove or emphasize the target object in the X-ray image, generates an enhanced partial image in which a portion corresponding to the region of interest in the X-ray image is cut out and the target object is emphasized; and an image output unit that causes a display unit to display an identifiable display of the enhanced partial image generated by the partial image generation unit.
An image processing method according to a third aspect of the present invention includes the steps of: detecting a region of interest from an X-ray image, generated based on a detection signal of X-rays irradiated to a subject, by a first trained model generated by machine learning so as to detect the region of interest including a target object inside the body of the subject; generating, based on the detected region of interest and an output result of image processing by a second trained model generated by machine learning so as to remove or emphasize the target object in the X-ray image, an enhanced partial image in which a portion corresponding to the region of interest in the X-ray image is cut out and the target object is emphasized; and causing a display unit to display an identifiable display of the generated enhanced partial image.
In the X-ray imaging apparatus according to the first aspect, the image processing device according to the second aspect, and the image processing method according to the third aspect, an enhanced partial image is generated in which a portion corresponding to the region of interest in the X-ray image is cut out and the target object is emphasized, and a display that makes the generated enhanced partial image identifiable is displayed on the display unit. As a result, only the portion corresponding to the region of interest containing the target object is cut out and emphasized, so structures such as bones of the human body in regions other than the portion containing the target object can be prevented from being emphasized in the same way as the target object. It is therefore possible to prevent identification of the target object from becoming difficult due to structures of the human body outside the portion containing the target object being emphasized. Consequently, when the target object included in an X-ray image of a subject is emphasized by image processing using a trained model generated by machine learning, the target object included in the X-ray image can be easily confirmed.
FIG. 1 is a diagram for explaining the configuration of an X-ray imaging apparatus according to a first embodiment.
FIG. 2 is a block diagram for explaining the configuration of the X-ray imaging apparatus according to the first embodiment.
FIG. 3 is a diagram showing an example of an X-ray image of a subject whose body contains a target object.
FIG. 4 is a diagram for explaining generation of an identification image according to the first embodiment.
FIG. 5 is a diagram for explaining detection of a region of interest according to the first embodiment.
FIG. 6 is a diagram for explaining generation of a removed image according to the first embodiment.
FIG. 7 is a diagram for explaining synthesis of an enhanced partial image and a contour partial image according to the first embodiment.
FIG. 8 is a diagram showing the display of the display unit according to the first embodiment.
FIG. 9 is a flowchart for explaining an image processing method according to the first embodiment.
FIG. 10 is a block diagram for explaining the configuration of an X-ray imaging apparatus according to a second embodiment.
FIG. 11 is a diagram for explaining generation of an identification image according to the second embodiment.
FIG. 12 is a block diagram for explaining the configuration of an X-ray imaging apparatus according to a third embodiment.
FIG. 13 is a diagram for explaining generation of an identification image according to the third embodiment.
FIG. 14 is a diagram for explaining generation of an enhanced image according to the third embodiment.
An embodiment embodying the present invention will be described below with reference to the drawings.
[First embodiment]
(Overall configuration of the X-ray imaging apparatus)
An X-ray imaging apparatus 100 according to a first embodiment of the present invention will be described with reference to FIGS. 1 to 8.
As shown in FIG. 1, the X-ray imaging apparatus 100 performs X-ray imaging in order to identify a target object 102 inside the body of a subject 101. For example, the X-ray imaging apparatus 100 performs X-ray imaging on a subject 101 after laparotomy in an operating room in order to confirm whether or not a target object 102 (a foreign body) has been left behind in the body. The X-ray imaging apparatus 100 is, for example, a mobile (ward-round) X-ray imaging apparatus that is movable as a whole. The target object 102 includes, for example, surgical gauze, suture needles, and forceps (such as hemostatic forceps).
In general, when a surgical operation such as a laparotomy is performed, X-ray imaging for confirmation is performed on the subject 101 so that a target object 102 such as surgical gauze, a suture needle, or forceps is not left behind (does not remain) in the body of the subject 101 after the wound is closed. A worker such as a doctor confirms that no target object 102 has been left inside the body of the subject 101 by viewing the captured X-ray image 10 (see FIG. 3).
<Each unit of the X-ray imaging apparatus>
As shown in FIG. 2, the X-ray imaging apparatus 100 includes an X-ray irradiation unit 1, an X-ray detection unit 2, an X-ray image generation unit 3, a display unit 4, a storage unit 5, and a control unit 6. The control unit 6 is an example of the "control unit" and the "image processing device" in the claims.
The X-ray irradiation unit 1 irradiates the post-operative subject 101 with X-rays. The X-ray irradiation unit 1 includes an X-ray tube that emits X-rays when a voltage is applied.
The X-ray detection unit 2 detects the X-rays emitted by the X-ray irradiation unit 1 and transmitted through the subject 101, and outputs a detection signal based on the detected X-rays. The X-ray detection unit 2 includes, for example, an FPD (Flat Panel Detector). The X-ray detection unit 2 is configured as a wireless X-ray detector and outputs the detection signal as a radio signal. Specifically, the X-ray detection unit 2 is configured to be able to communicate with the X-ray image generation unit 3, described later, through a wireless connection such as a wireless LAN, and outputs the detection signal to the X-ray image generation unit 3 as a radio signal.
As shown in FIG. 3, the X-ray image generation unit 3 generates an X-ray image 10 based on the X-ray detection signal detected by the X-ray detection unit 2. The X-ray image generation unit 3 controls X-ray imaging by controlling the X-ray irradiation unit 1 and the X-ray detection unit 2, and is configured to be able to communicate with the X-ray detection unit 2 through a wireless connection such as a wireless LAN. The X-ray image generation unit 3 includes a processor such as an FPGA (field-programmable gate array), and outputs the generated X-ray image 10 to the control unit 6, which will be described later.

The X-ray image 10 is an image obtained by X-ray imaging of the abdomen of the subject 101 after surgery. For example, the X-ray image 10 includes surgical gauze as the target object 102. The surgical gauze is woven with a contrast thread that does not easily transmit X-rays, so that it can be visually recognized in the X-ray image 10 captured after surgery.

The display unit 4 includes, for example, a touch-panel liquid crystal display. The display unit 4 displays the captured X-ray image 10, and also displays the identification image 20 (see FIGS. 4 and 8) output by the image output unit 65, which will be described later. The display unit 4 is further configured to receive input operations for operating the X-ray imaging apparatus 100 by a worker such as a doctor through operations on the touch panel.

The storage unit 5 is composed of a storage device such as a hard disk drive. The storage unit 5 stores image data such as the X-ray image 10 generated by the X-ray image generation unit 3 and the identification image 20 (see FIG. 4) generated by the control unit 6, which will be described later. The storage unit 5 is also configured to store various setting values for operating the X-ray imaging apparatus 100, and stores a program used by the control unit 6 for controlling the X-ray imaging apparatus 100. In addition, the storage unit 5 stores in advance a detection model 51 and a removal model 52, which will be described later. The detection model 51 is an example of the "first trained model" in the claims, and the removal model 52 is an example of the "second trained model" in the claims.
The control unit 6 is a computer including, for example, a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), a ROM (Read Only Memory), and a RAM (Random Access Memory). The control unit 6 includes, as functional components, a region detection unit 61, a partial image generation unit 62, a contour image generation unit 63, an identification image generation unit 64, and an image output unit 65. That is, these units are functional blocks implemented as software within the control unit 6, and are configured to function when the control unit 6 as hardware executes a predetermined control program.
(Generation of the identification image)
As shown in FIG. 4, the control unit 6 is configured to execute image processing that generates, from the X-ray image 10, an identification image 20 for identifying the target object 102 included in the X-ray image 10. Specifically, the control unit 6 acquires the X-ray image 10 generated by the X-ray image generation unit 3, and executes, on the X-ray image 10, the processes of detecting a region of interest 11, generating a removed image 12, and generating a contour image 15. The region of interest 11 is the region of the X-ray image 10 that contains the target object 102 inside the body of the subject 101. The removed image 12 is an image in which the target object 102 has been removed from the X-ray image 10. The contour image 15 is an image showing (extracting) the contour of the subject 101 (and the bones of the subject 101, etc.) in the X-ray image 10. The control unit 6 also generates, based on the removed image 12 and the X-ray image 10, an enhanced whole image 13 in which the target object 102 is enhanced (extracted). The control unit 6 then cuts out the portion of the enhanced whole image 13 corresponding to the region of interest 11 to generate an enhanced partial image 14, and generates the identification image 20 by combining the enhanced partial image 14 with the contour image 15.
<Detection of the region of interest>
As shown in FIGS. 4 and 5, in the first embodiment the region detection unit 61 (control unit 6) detects the region of interest 11 from the X-ray image 10 using the detection model 51. Specifically, the region detection unit 61 acquires the position (coordinates) and size of the region of interest 11 based on the output of the detection model 51 when the X-ray image 10 is input. The detection model 51 is a trained model (algorithm) generated by machine learning using deep learning so as to detect the region of interest 11 containing the target object 102 in the X-ray image 10.
The detection model 51 is generated in advance by a learning device 103 separate from the X-ray imaging apparatus 100. The learning device 103 is, for example, a computer including a CPU, a GPU, a ROM, and a RAM. The learning device 103 generates the detection model 51 by machine learning using a plurality of teacher input X-ray images 10a and a plurality of teacher output regions of interest 11a as teacher data (a training set). The teacher input X-ray images 10a are generated so as to simulate X-ray images 10 of a subject 101 with a target object 102 left inside the body, and are generated under the same conditions (size, etc.) as the X-ray images 10 used as input in inference with the detection model 51. The teacher output region of interest 11a is information (an annotation) indicating the position and size of the region corresponding to the target object 102 in the teacher input X-ray image 10a.
As the region detection (object detection) algorithm of the detection model 51, for example, Faster R-CNN (Faster Region-based Convolutional Neural Network) is used. In object detection using Faster R-CNN, one or more target objects in the input image are detected as enclosed rectangular regions (bounding boxes), and the likelihood (judgment value) of each detection (inference) result is estimated at the same time. The likelihood is a numerical value indicating how plausible it is that the detected rectangular region contains the target object. In the first embodiment, the detection model 51 detects the one region with the highest likelihood among the plurality of regions estimated to contain the target object 102. That is, the region detection unit 61 is configured to detect, as the region of interest 11, the one region of the X-ray image 10 to which the detection model 51 assigns the highest likelihood.
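As a concrete illustration of this selection step, the following is a minimal sketch (Python with NumPy; the detector output layout, array names, and function name are our own assumptions for illustration and are not taken from the disclosure):

```python
import numpy as np

def select_region_of_interest(boxes: np.ndarray, scores: np.ndarray):
    """Pick the single candidate region with the highest likelihood.

    boxes  : (N, 4) array of (x, y, width, height) candidates.
    scores : (N,) array of likelihoods from the detection model.
    Returns (x, y, width, height) of the region of interest, or None
    when the detector proposed no candidate region at all.
    """
    if len(scores) == 0:
        return None
    best = int(np.argmax(scores))  # keep only the most likely candidate
    return tuple(boxes[best])
```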
<Generation of the enhanced partial image>
As shown in FIGS. 4 and 6, in the first embodiment the partial image generation unit 62 of the control unit 6 generates the enhanced partial image 14 based on the region of interest 11 detected by the region detection unit 61 and the output result of image processing by the removal model 52. The partial image generation unit 62 (control unit 6) generates the removed image 12 from the X-ray image 10 as the output result of image processing by the removal model 52. Specifically, the partial image generation unit 62 uses the removal model 52 to generate, from the entire X-ray image 10, a removed image 12 corresponding to the entire X-ray image 10. The removal model 52 is a trained model (algorithm) generated by machine learning using deep learning so as to remove (suppress) the target object 102 in the X-ray image 10 and thereby generate the removed image 12, in which the target object 102 has been removed from the X-ray image 10.
Like the detection model 51, the removal model 52 is generated in advance by the learning device 103 separate from the X-ray imaging apparatus 100. The learning device 103 generates the removal model 52 by machine learning using a plurality of teacher input X-ray images 10b and a plurality of teacher output removed images 12b as teacher data (a training set). The teacher input X-ray images 10b are generated so as to simulate X-ray images 10 of a subject 101 with a target object 102 left inside the body. The teacher output removed images 12b are images in which the target object 102 has been removed from the corresponding teacher input X-ray images 10b. Both are generated under the same conditions (size, etc.) as the X-ray images 10 used as input in inference with the removal model 52.
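The disclosure does not detail how these training pairs are produced; purely for illustration, the following sketch (Python with NumPy; the additive compositing scheme, function name, and parameters are assumptions) shows one plausible way a (teacher input, teacher output) pair could be derived from a clean X-ray image and an object template:

```python
import numpy as np

def make_training_pair(clean_xray: np.ndarray, object_patch: np.ndarray,
                       x: int, y: int):
    """Compose one (teacher input, teacher output) pair.

    The teacher output is the clean X-ray itself; the teacher input is
    the same image with an attenuating object template (e.g. gauze with
    contrast thread) composited at (x, y). The additive composite below
    is only an assumption for illustration.
    """
    h, w = object_patch.shape
    teacher_input = clean_xray.astype(np.float32).copy()
    # Darken the region covered by the object; how strongly depends on
    # the template's simulated attenuation values.
    teacher_input[y:y + h, x:x + w] -= object_patch.astype(np.float32)
    teacher_output = clean_xray.astype(np.float32)
    return teacher_input, teacher_output
```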
The removal model 52 is generated based on, for example, U-Net, a type of fully convolutional network (FCN). The removal model 52 is generated by training it to execute an image transformation (image reconstruction) that removes the portion estimated to be the target object 102 from the X-ray image 10 by transforming, among the pixels of the input X-ray image 10, those pixels estimated to belong to the target object 102.
In the first embodiment, the partial image generation unit 62 (control unit 6) then generates the enhanced partial image 14 based on the X-ray image 10, the removed image 12, and the region of interest 11. The enhanced partial image 14 is an image in which the portion corresponding to the region of interest 11 in the X-ray image 10 is cut out and the target object 102 is emphasized. Specifically, the partial image generation unit 62 generates, based on the difference between the X-ray image 10 and the removed image 12, an enhanced whole image 13 in which the target object 102 is emphasized over the whole of the X-ray image 10. That is, the partial image generation unit 62 subtracts the removed image 12 from the X-ray image 10 to generate an enhanced whole image 13 of the same size (region) as the X-ray image 10. The enhanced whole image 13 is an image in which the target object 102 included in the X-ray image 10 is emphasized (extracted); it also contains structures other than the target object 102, such as the bones of the subject 101. The partial image generation unit 62 is then configured to generate the enhanced partial image 14 by cutting out the portion corresponding to the region of interest 11 from the generated enhanced whole image 13. That is, in the first embodiment, the enhanced partial image 14 has a position and size equal to the position and size, in the X-ray image 10, of the region of interest 11 detected by the detection model 51.
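A minimal sketch of this subtract-and-crop step, together with the division-based variant mentioned among the modified examples above (Python with NumPy; array names, the float conversion, and the epsilon are our own assumptions):

```python
import numpy as np

def enhanced_partial_image(xray: np.ndarray, removed: np.ndarray, roi):
    """Subtract the removal-model output from the X-ray image, then crop.

    roi is (x, y, width, height) in pixel coordinates as obtained from
    the region detection step; the crop keeps the ROI's own position
    and size, as in the first embodiment.
    """
    x, y, w, h = roi
    # Anatomy reproduced by the removal model cancels out; pixels of the
    # target object remain as the dominant signal.
    enhanced_whole = xray.astype(np.float32) - removed.astype(np.float32)
    return enhanced_whole[y:y + h, x:x + w]

def enhanced_whole_by_division(xray: np.ndarray, removed: np.ndarray,
                               eps: float = 1e-6) -> np.ndarray:
    # Division-based alternative to subtraction; eps avoids division by
    # zero in fully attenuated (dark) regions.
    return xray.astype(np.float32) / (removed.astype(np.float32) + eps)
```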
<Generation of the contour image>
As shown in FIG. 4, in the first embodiment the contour image generation unit 63 (control unit 6) generates the contour image 15 by executing contour extraction (edge detection) image processing on the X-ray image 10. For example, in the first embodiment, the contour image generation unit 63 executes blurring processing on the X-ray image 10, and generates the contour image 15 based on the difference between the X-ray image 10 after the blurring processing and the X-ray image 10 before the blurring processing. The generated contour image 15 is an image showing the high-frequency components of the X-ray image 10. By executing the blurring processing, the contour image generation unit 63 can extract from the X-ray image 10 contours of lower intensity (lower contrast) than those of the enhanced whole image 13.
<Synthesis of the enhanced partial image and the contour image>
As shown in FIGS. 4 and 7, in the first embodiment the identification image generation unit 64 (control unit 6) generates the identification image 20 based on the enhanced partial image 14 and the contour image 15. Specifically, the identification image generation unit 64 cuts out from the contour image 15 a region equal in position and size to the region of interest 11 (that is, equal in position and size to the enhanced partial image 14) to generate a contour partial image 16. The identification image generation unit 64 then combines the contour partial image 16 and the enhanced partial image 14 to generate a combined partial image 17, and generates the identification image 20 by compositing the combined partial image 17 back into the cut-out portion of the contour image 15.
Also, in the first embodiment, the identification image generation unit 64 (control unit 6) is configured to generate the identification image 20 by combining the contour image 15 and the enhanced partial image 14 while weighting the pixel values of the enhanced partial image 14 and the pixel values of the portion of the contour image 15 corresponding to the enhanced partial image 14 (the contour partial image 16). For example, the identification image generation unit 64 weights the transparency of each pixel of the contour partial image 16 and the enhanced partial image 14 based on the luminance values of the pixels of a weight image 18. That is, the identification image generation unit 64 generates the combined partial image 17 so that the enhanced partial image 14 is displayed where the luminance value of the weight image 18 is large (white portions) and the contour image 15 (contour partial image 16) is displayed where the luminance value of the weight image 18 is small (black portions). The weight image 18 is generated by the identification image generation unit 64 so as to have the same size as the region of interest 11, and is configured so that the luminance value is large near the center and decreases gradually and circularly (radially) toward the periphery.
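A minimal sketch of this radially weighted blend (Python with NumPy; the linear fall-off, helper names, and coordinate conventions are our own assumptions rather than the disclosed weighting):

```python
import numpy as np

def radial_weight(h: int, w: int) -> np.ndarray:
    # Weight map that is 1.0 at the center of the region of interest
    # and falls off radially toward 0.0 at its edges.
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.hypot((yy - cy) / max(cy, 1.0), (xx - cx) / max(cx, 1.0))
    return np.clip(1.0 - r, 0.0, 1.0)

def compose_identification_image(contour: np.ndarray,
                                 enhanced_part: np.ndarray,
                                 roi) -> np.ndarray:
    # Blend the enhanced partial image into the contour image: the
    # enhanced pixels dominate near the ROI center, the contour pixels
    # dominate near its border, so no hard seam appears.
    x, y, w, h = roi
    out = contour.astype(np.float32).copy()
    wgt = radial_weight(h, w)
    out[y:y + h, x:x + w] = (wgt * enhanced_part
                             + (1.0 - wgt) * out[y:y + h, x:x + w])
    return out
```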
(Display on the display unit)
As shown in FIG. 8, in the first embodiment the image output unit 65 (control unit 6) causes the display unit 4 to display the generated enhanced partial image 14 in an identifiable manner. Specifically, the image output unit 65 is configured to cause the display unit 4 to display the identification image 20 generated by the identification image generation unit 64 together with the X-ray image 10; for example, the X-ray image 10 and the identification image 20 are displayed side by side on the display unit 4. Note that the image output unit 65 may be configured to switch, based on input operations on the touch panel of the display unit 4, between displaying the identification image 20 and the X-ray image 10 side by side and displaying only the X-ray image 10 without the identification image 20.
(Image processing method according to the first embodiment)
Next, the control processing flow of the image processing method according to the first embodiment will be described with reference to FIG. 9. Steps 401 to 403 represent control processing by the X-ray image generation unit 3, and steps 404 to 410 represent control processing by the control unit 6.
First, in step 401, the subject 101 is irradiated with X-rays in order to detect the target object 102 inside the body of the post-operative subject 101. Next, in step 402, the irradiated X-rays are detected. Then, in step 403, the X-ray image 10 is generated based on the detected X-ray detection signal.

Next, in step 404, the region of interest 11 is detected from the X-ray image 10 by the detection model 51, and the size of the detected region of interest 11 and its position (coordinates) in the X-ray image 10 are acquired.

Next, in step 405, the removed image 12 is generated from the X-ray image 10 as the output result of image processing by the removal model 52. In step 406, the enhanced whole image 13 is generated based on the difference between the X-ray image 10 and the removed image 12. In step 407, based on the enhanced whole image 13 and the region of interest 11, the enhanced partial image 14 is generated by cutting out the region of the enhanced whole image 13 corresponding to the region of interest 11. That is, through the processing of steps 405 to 407, the enhanced partial image 14 is generated based on the detected region of interest 11 and the output result of image processing by the removal model 52.

Next, in step 408, the contour image 15 is generated from the X-ray image 10. In step 409, the identification image 20 is generated based on the enhanced partial image 14 and the contour image 15. Specifically, based on the weight image 18, the contour partial image 16 cut out from the contour image 15 and the enhanced partial image 14 are combined to generate the combined partial image 17, and the identification image 20 is generated by compositing the combined partial image 17 into the contour image 15.

Next, in step 410, a display in which the enhanced partial image 14 is identifiable is shown on the display unit 4; that is, the identification image 20 is displayed on the display unit 4. Specifically, the X-ray image 10 and the identification image 20 are displayed side by side on the display unit 4.
Note that any of the detection of the region of interest 11 in step 404, the generation of the removed image 12 in step 405, and the generation of the contour image 15 in step 408 may be executed first.
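Tying the above steps together, here is a hedged end-to-end sketch (Python, reusing the helper functions sketched above; the call interfaces of the two trained models are assumptions, not taken from the disclosure):

```python
def generate_identification_image(xray, detection_model, removal_model):
    """End-to-end sketch of steps 404-409: detect the ROI, remove the
    object, subtract, crop, extract contours, and blend.
    `detection_model` and `removal_model` stand in for the trained
    models; their call signatures are assumed for illustration."""
    boxes, scores = detection_model(xray)              # step 404
    roi = select_region_of_interest(boxes, scores)
    if roi is None:
        return None                                    # nothing to highlight
    removed = removal_model(xray)                      # step 405
    part = enhanced_partial_image(xray, removed, roi)  # steps 406-407
    contour = contour_image(xray)                      # step 408
    return compose_identification_image(contour, part, roi)  # step 409
```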
(Effects of the first embodiment)
In the first embodiment, the following effects can be obtained.
In the X-ray imaging apparatus 100 of the first embodiment, as described above, the enhanced partial image 14 is generated in which the portion corresponding to the region of interest 11 in the X-ray image 10 is cut out and the target object 102 is emphasized, and a display that makes the generated enhanced partial image 14 identifiable is shown on the display unit 4. As a result, only the portion corresponding to the region of interest 11 containing the target object 102 is cut out and emphasized, so structures such as human bones in regions other than the portion containing the target object 102 can be prevented from being emphasized in the same way as the target object 102. It is therefore possible to prevent identification of the target object 102 from becoming difficult due to such structures being emphasized. Consequently, when the target object 102 included in the X-ray image 10 of the subject 101 is emphasized by image processing using a trained model generated by machine learning, the target object 102 included in the X-ray image 10 can be easily confirmed.
In addition, in the first embodiment, the following further effects are obtained by the configurations described below.
That is, in the first embodiment, as described above, the control unit 6 includes the identification image generation unit 64, which generates the identification image 20 for identifying the target object 102 included in the X-ray image 10 based on the enhanced partial image 14 generated by the partial image generation unit 62, and the image output unit 65 is configured to cause the display unit 4 to display the identification image 20 generated by the identification image generation unit 64. With this configuration, a worker such as a doctor can easily and visually recognize the target object 102 included in the X-ray image 10 from the identification image 20 displayed on the display unit 4, and can therefore easily confirm the target object 102 included in the X-ray image 10 by viewing the display unit 4.
Also, in the first embodiment, as described above, the control unit 6 includes the contour image generation unit 63, which generates the contour image 15 showing the contour of the subject 101 in the X-ray image 10, and the identification image generation unit 64 (control unit 6) is configured to generate the identification image 20 based on the contour image 15 generated by the contour image generation unit 63 and the enhanced partial image 14 generated by the partial image generation unit 62. With this configuration, since the identification image 20 is generated based on the contour image 15, in which the contour of the subject 101 is extracted, and the enhanced partial image 14, the portion of the identification image 20 other than the target object 102 (the portion other than the enhanced partial image 14) can be the contour image 15 showing the contour of the subject 101. The identification image 20 can therefore be generated so that the portion of the target object 102 is emphasized while portions of the X-ray image 10 other than the contour of the subject 101 are removed, which further emphasizes the portion of the target object 102 in the identification image 20. As a result, the identification image 20 can be generated so that the target object 102 is more effectively identifiable, and the target object 102 included in the X-ray image 10 can be confirmed more easily.
Also, in the first embodiment, as described above, the identification image generation unit 64 (control unit 6) is configured to generate the identification image 20 by combining the contour image 15 and the enhanced partial image 14 while weighting the pixel values of the enhanced partial image 14 and the pixel values of the portion of the contour image 15 corresponding to the enhanced partial image 14. With this configuration, by weighting so that the boundary between the enhanced partial image 14 and the contour image 15 changes smoothly, the enhanced partial image 14 can be composited into the contour image 15 while its boundary line (outer periphery) is kept smooth. This prevents the boundary between the emphasized portion, regarded as containing the target object 102, and the non-emphasized portion of the identification image 20 from standing out more than necessary, and thus prevents the generated identification image 20 from differing greatly from the original X-ray image 10. As a result, the difference when comparing the identification image 20 with the X-ray image 10 can be kept small, and the target object 102 included in the X-ray image 10 can be confirmed even more easily.
Also, in the first embodiment, as described above, the contour image generation unit 63 (control unit 6) is configured to execute blurring processing on the X-ray image 10 and to generate the contour image 15 based on the difference between the X-ray image 10 after the blurring processing and the X-ray image 10 before the blurring processing. With this configuration, generating the contour image 15 through blurring processing yields a contour image 15 in which the intensity (contrast) of the contour of the subject 101 in the X-ray image 10 is comparatively small. The identification image 20 can therefore be generated from the contour image 15, whose structures have comparatively low contrast, and the enhanced partial image 14, in which the target object 102 is emphasized with comparatively high contrast, which relatively improves the visibility of the portion of the identification image 20 containing the target object 102. As a result, the target object 102 included in the X-ray image 10 can be confirmed even more easily.
Also, in the first embodiment, as described above, the image output unit 65 (control unit 6) is configured to display the X-ray image 10 and the identification image 20 (220) on the display unit 4. With this configuration, a worker such as a doctor can easily compare the X-ray image 10 with the identification image 20, in which the target object 102 is emphasized so as to be easily identifiable, and can therefore confirm the target object 102 included in the X-ray image 10 more easily by comparing the two images.
Also, in the first embodiment, as described above, the partial image generation unit 62 (control unit 6) is configured to generate the enhanced partial image 14 having a position and size equal to the position and size, in the X-ray image 10, of the region of interest 11 detected by the detection model 51 (first trained model). With this configuration, the enhanced partial image 14 has the same position and size as the portion estimated by the detection model 51 to be the target object 102, so a region equal in position and size to the detected target object 102 can be emphasized. Therefore, whether the target object 102 included in the X-ray image 10 is comparatively small or comparatively large, a loss of visibility of the target object 102 can be suppressed, and the target object 102 included in the X-ray image 10 can be easily confirmed.
 In the first embodiment, as described above, the partial image generation unit 62 (control unit 6) is configured to generate, with the removal model 52 (second trained model), a removed image 12 in which the target object 102 has been removed from the entire X-ray image 10; to generate, from the difference between the X-ray image 10 and the removed image 12, an enhanced whole image 13 in which the target object 102 is emphasized over the entire X-ray image 10; and to generate the enhanced partial image 14 by cutting out the portion corresponding to the region of interest 11 from the generated enhanced whole image 13. With this configuration, the enhanced whole image 13 can be generated easily from the entire X-ray image 10 without first identifying the portion containing the target object 102. As a result, the enhanced partial image 14 cut out from the enhanced whole image 13 can be generated easily, so the target object 102 included in the X-ray image 10 can be confirmed easily.
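 Expressed as a short sketch of this first-embodiment flow: here removal_model stands in for inference with the removal model 52, and its availability as a simple callable and the (x, y, w, h) box format are assumptions for illustration.

    import numpy as np

    def enhanced_partial_image(xray, removal_model, roi):
        """First-embodiment flow: removed image over the whole frame,
        subtraction to emphasize the object, then crop the ROI."""
        removed = removal_model(xray)                      # object suppressed
        enhanced_whole = xray.astype(np.float32) - removed.astype(np.float32)
        x, y, w, h = roi                                   # ROI from the detector
        return enhanced_whole[y:y + h, x:x + w]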
 In the first embodiment, as described above, the region detection unit 61 (control unit 6) is configured to detect, as the region of interest 11, the single region with the highest likelihood among the plural regions in the X-ray image 10 estimated by the detection model 51 (first trained model) to contain the target object 102. If plural regions of interest 11 were detected, the plural corresponding regions would all be emphasized, and the detection of regions other than the target object 102 would make it difficult to identify the portion of the X-ray image 10 that contains the target object 102. In the first embodiment, by contrast, the region detection unit 61 detects from the X-ray image 10 only the one region of interest 11 with the highest likelihood of containing the target object 102 according to the detection model 51. Unlike the case where plural regions of interest 11 are detected, only the portion corresponding to the single region of interest 11 is emphasized, so identifying the portion containing the target object 102 does not become difficult. As a result, the portion containing the target object 102 can be identified easily by viewing the display unit 4, and the target object 102 included in the X-ray image 10 can be confirmed more easily.
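 A detector such as Faster R-CNN typically returns candidate boxes with confidence scores, so picking the single region of interest reduces to an argmax over scores. A minimal sketch, where the (box, score) tuple layout is an assumption:

    def pick_region_of_interest(detections):
        """detections: list of (box, score) candidates estimated to contain
        the target object; return only the highest-likelihood box."""
        if not detections:
            return None
        box, _score = max(detections, key=lambda d: d[1])
        return box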
 (Effects of the image processing method of the first embodiment)
 The image processing method of the first embodiment can obtain the following effects.
 With the image processing method of the first embodiment configured as described above, an enhanced partial image 14 is generated in which the portion of the X-ray image 10 corresponding to the region of interest 11 is cut out and the target object 102 is emphasized, and a display in which the generated enhanced partial image 14 can be identified is shown on the display unit 4. Because only the portion corresponding to the region of interest 11 containing the target object 102 is cut out and emphasized, structures such as human bones in regions outside that portion are prevented from being emphasized in the same way as the target object 102. Identification of the target object 102 therefore does not become difficult because of emphasized anatomical structures outside the portion containing it. As a result, it is possible to provide an image processing method with which, when the target object 102 included in the X-ray image 10 of the subject 101 is emphasized by image processing using trained models generated by machine learning, the target object 102 included in the X-ray image 10 can be confirmed easily.
 [Second Embodiment]
 Next, an X-ray imaging apparatus 200 according to a second embodiment will be described with reference to FIGS. 10 and 11. Unlike the first embodiment, in which the removal model 52 generates the removed image 12 from the entire X-ray image 10, the second embodiment cuts out the portion of the X-ray image 10 corresponding to the region of interest 11 (an X-ray partial image 210) and generates a removed image 212 from it. In the second embodiment, configurations similar to those of the first embodiment are given the same reference numerals, and their description is omitted.
 As shown in FIG. 10, the X-ray imaging apparatus 200 of the second embodiment includes a control unit 206. Like the control unit 6 of the first embodiment, the control unit 206 is configured to execute image processing that generates, from the X-ray image 10, an identification image 220 for identifying the target object 102 included in the X-ray image 10. As functional components, the control unit 206 includes the region detection unit 61, a partial image generation unit 262, the contour image generation unit 63, an identification image generation unit 264, and the image output unit 65. The control unit 206 is an example of the "control unit" and the "image processing device" in the claims.
 As shown in FIG. 11, as in the first embodiment, the region detection unit 61 (control unit 206) detects the region of interest 11 from the X-ray image 10 using the detection model 51, and the contour image generation unit 63 (control unit 206) generates the contour image 15 from the X-ray image 10.
 In the X-ray imaging apparatus 200 of the second embodiment, the partial image generation unit 262 (control unit 206) cuts out, based on the region of interest 11, the portion of the X-ray image 10 corresponding to the region of interest 11 to generate the X-ray partial image 210. The X-ray partial image 210 has a predetermined (constant) size, centered on the position of the region of interest 11 in the X-ray image 10, regardless of the size of the region of interest 11. For example, when the target object 102 is surgical gauze, the predetermined size is set slightly larger than the size of the surgical gauze in the X-ray image 10.
 In the second embodiment, the partial image generation unit 262 (control unit 206) is configured to generate, as the output of image processing by a removal model 252, a removed image 212 in which the portion corresponding to the region of interest 11 has been cut out from the X-ray partial image 210 and the target object 102 has been removed. That is, from the X-ray partial image 210 cut out at the predetermined size, the partial image generation unit 262 uses the removal model 252 to generate a removed image 212 of the same predetermined size as the X-ray partial image 210. The removal model 252 is a trained model generated by machine learning using deep learning so as to generate, from the X-ray partial image 210, the removed image 212 in which the target object 102 is removed. Like the removal model 52 of the first embodiment, the removal model 252 is generated in advance by the learning device 103, which is separate from the X-ray imaging apparatus 200, and is stored in the storage unit 5. The removal model 252 is an example of the "second trained model" in the claims.
 The partial image generation unit 262 (control unit 206) is then configured to generate an enhanced partial image 214 based on the difference between the portion of the X-ray image 10 corresponding to the region of interest 11 (the X-ray partial image 210) and the removed image 212 cut out to correspond to the region of interest 11. Like the X-ray partial image 210, the enhanced partial image 214 has the predetermined size, centered on the position of the detected region of interest 11 in the X-ray image 10, regardless of the size of the region of interest 11. That is, the partial image generation unit 262 generates an enhanced partial image 214 equal in size to the X-ray partial image 210 and the removed image 212.
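 A compact sketch of this second-embodiment flow (fixed-size crop around the ROI center, patch-level removal model, subtraction) might look as follows. PATCH and removal_model_252 are illustrative assumptions, and the image is assumed to be at least PATCH pixels on each side.

    import numpy as np

    PATCH = 256  # assumed fixed patch size, e.g. slightly larger than the gauze

    def enhanced_partial_image_v2(xray, removal_model_252, roi_center):
        """Crop a fixed-size patch around the ROI center, remove the object
        with the patch-level model, and subtract to emphasize it."""
        cy, cx = roi_center
        h, w = xray.shape
        y0 = int(np.clip(cy - PATCH // 2, 0, h - PATCH))
        x0 = int(np.clip(cx - PATCH // 2, 0, w - PATCH))
        patch = xray[y0:y0 + PATCH, x0:x0 + PATCH]       # X-ray partial image 210
        removed = removal_model_252(patch)               # removed image 212
        return patch.astype(np.float32) - removed.astype(np.float32)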
 As in the first embodiment, the identification image generation unit 264 (control unit 206) generates the identification image 220 based on the enhanced partial image 214 and the contour image 15. Specifically, the identification image generation unit 264 generates a combined partial image 217 by combining the enhanced partial image 214 with a contour partial image 216 cut out from the contour image 15. The contour partial image 216 is generated by cutting out, from the position in the contour image 15 corresponding to the region of interest 11, a region having the same predetermined size as the enhanced partial image 214. As in the first embodiment, the identification image generation unit 264 combines the contour image 15 (contour partial image 216) and the enhanced partial image 214 with weighting based on a weight image 218, which likewise has the same size as the contour partial image 216 and the enhanced partial image 214. The weighting based on the weight image 218 when combining the contour image 15 (contour partial image 216) with the enhanced partial image 214 (that is, when generating the identification image 220) is the same as in the first embodiment.
 As in the first embodiment, the image output unit 65 (control unit 206) is configured to display the X-ray image 10 and the identification image 220 side by side on the display unit 4. The other configurations of the second embodiment are the same as those of the first embodiment.
 (Effects of the second embodiment)
 The second embodiment can obtain the following effects.
 In the second embodiment, as described above, the partial image generation unit 262 (control unit 206) is configured to generate the enhanced partial image 214 with a predetermined size, centered on the position of the detected region of interest 11 in the X-ray image 10, regardless of the size of the region of interest 11. Because the enhanced partial image 214 has a predetermined size, the size of the emphasized region in the display on the display unit 4 can be kept constant. The emphasized region is thus prevented from being either too small or too large, which suppresses a loss of visibility of the target object 102 in the display on the display unit 4.
 In the second embodiment, as described above, the partial image generation unit 262 (control unit 206) is configured to generate, with the removal model 252 (second trained model), the removed image 212 in which the portion corresponding to the region of interest 11 is cut out from the portion of the X-ray image 10 corresponding to the region of interest 11 (the X-ray partial image 210) and the target object 102 is removed, and to generate the enhanced partial image 214 based on the difference between the X-ray partial image 210 and the removed image 212. Because the removed image 212 is generated from the X-ray partial image 210, which is a cut-out portion of the X-ray image 10, the processing load of the image processing that the partial image generation unit 262 performs with the removal model 252 is reduced compared with performing the image processing on the entire X-ray image 10. The other effects of the second embodiment are the same as those of the first embodiment.
 [Third Embodiment]
 Next, an X-ray imaging apparatus 300 according to a third embodiment will be described with reference to FIGS. 12 to 14. Unlike the first embodiment, which uses the removal model 52 trained to generate the removed image 12 from the X-ray image 10, the third embodiment uses an enhancement model 352 trained to generate an enhanced image 313 from the X-ray image 10. The enhancement model 352 is an example of the "second trained model" in the claims. In the third embodiment, configurations similar to those of the first embodiment are given the same reference numerals, and their description is omitted.
 As shown in FIG. 12, the X-ray imaging apparatus 300 of the third embodiment includes a control unit 306. Like the control unit 6 of the first embodiment, the control unit 306 is configured to execute image processing that generates, from the X-ray image 10, an identification image 320 (see FIG. 13) for identifying the target object 102 included in the X-ray image 10. As functional components, the control unit 306 includes the region detection unit 61, a partial image generation unit 362, the contour image generation unit 63, an identification image generation unit 364, and the image output unit 65. The control unit 306 is an example of the "control unit" and the "image processing device" in the claims.
 As shown in FIG. 13, as in the first embodiment, the region detection unit 61 (control unit 306) detects the region of interest 11 from the X-ray image 10 using the detection model 51, and the contour image generation unit 63 (control unit 306) generates the contour image 15 from the X-ray image 10.
 In the third embodiment, the partial image generation unit 362 (control unit 306) generates, as the output of image processing by the enhancement model 352, an enhanced image 313 in which the target object 102 is emphasized (extracted) from the X-ray image 10. Specifically, the partial image generation unit 362 uses the enhancement model 352 to generate, from the entire X-ray image 10, an enhanced image 313 corresponding to the entire X-ray image 10. The enhancement model 352 is a trained model (algorithm) generated by machine learning using deep learning so as to emphasize the target object 102 in the X-ray image 10. Note that the enhanced image 313 is not limited to an image in which only the target object 102 is extracted; it may also contain structures of the subject 101 other than the target object 102, such as bones.
 As shown in FIG. 14, like the removal model 52 of the first embodiment, the enhancement model 352 is generated in advance by the learning device 103, which is separate from the X-ray imaging apparatus 100. The learning device 103 generates the enhancement model 352 by machine learning, using a plurality of teacher input X-ray images 310b and a plurality of teacher output enhanced images 313b as teacher data (the training set). Each teacher input X-ray image 310b is generated so as to simulate an X-ray image 10 of a subject 101 with the target object 102 inside the body. Each teacher output enhanced image 313b is an image in which only the target object 102 has been extracted from the corresponding teacher input X-ray image 310b. The teacher input X-ray images 310b and the teacher output enhanced images 313b are generated so as to match the conditions (size and the like) of the X-ray images 10 used as input during inference with the enhancement model 352.
 The enhancement model 352 is generated based on, for example, U-Net, a type of fully convolutional network (FCN). The enhancement model 352 is generated by training it to perform an image transformation (image reconstruction) that removes (suppresses) the portions of the X-ray image 10 estimated to be background other than the target object 102, by transforming the pixels of the input X-ray image 10 other than those estimated to be the target object 102. The generated enhancement model 352 is stored in advance in the storage unit 5.
 As shown in FIG. 13, in the third embodiment the partial image generation unit 362 (control unit 306) is configured to generate an enhanced partial image 314 based on the region of interest 11 and the enhanced image 313 generated by the enhancement model 352. As in the first embodiment, the partial image generation unit 362 generates the enhanced partial image 314 by cutting out the portion corresponding to the region of interest 11 from the enhanced image 313.
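 The third-embodiment flow thus replaces subtraction with a direct model output. A minimal sketch under the same assumptions as before (enhancement_model_352 is an assumed callable, not an identifier from the specification):

    def enhanced_partial_image_v3(xray, enhancement_model_352, roi):
        """Third-embodiment flow: the model directly suppresses the background,
        so the ROI crop of its output is already the enhanced partial image."""
        enhanced = enhancement_model_352(xray)   # enhanced image 313, full frame
        x, y, w, h = roi
        return enhanced[y:y + h, x:x + w]        # enhanced partial image 314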
 As in the first embodiment, the identification image generation unit 364 (control unit 306) generates the identification image 320 based on the enhanced partial image 314 and the contour image 15. That is, the identification image generation unit 364 generates the contour partial image 16 and the weight image 18 by the same processing as in the first embodiment, generates a combined partial image 317 by combining the contour image 15 (contour partial image 16) with the enhanced partial image 314 under weighting based on the weight image 18, and then generates the identification image 320 based on the combined partial image 317 and the contour image 15.
 As in the first embodiment, the image output unit 65 (control unit 306) is configured to display the X-ray image 10 and the identification image 320 side by side on the display unit 4. The other configurations of the third embodiment are the same as those of the first embodiment.
 (Effects of the third embodiment)
 The third embodiment can obtain the following effects.
 In the third embodiment, as described above, the partial image generation unit 362 (control unit 306) is configured to generate, with the enhancement model 352 (second trained model), the enhanced image 313 in which the target object 102 in the X-ray image 10 is emphasized, and to generate the enhanced partial image 314 based on the enhanced image 313 and the region of interest 11. Compared with emphasizing the target object 102 by generating an object-removed image with a trained model and taking its difference from the X-ray image 10, generating the enhanced image 313 directly from the X-ray image 10 with the enhancement model 352 can emphasize the target object 102 in the X-ray image 10 more accurately; that is, the enhancement model 352 can suppress (remove) the portions of the X-ray image 10 other than the target object 102 (the background) more accurately. By configuring the partial image generation unit 362 to generate the enhanced image 313 with the enhancement model 352 and to generate the enhanced partial image 314 from the enhanced image 313 and the region of interest 11, the enhanced partial image 314 can be generated so that the target object 102 is visually easier to recognize. As a result, an operator such as a doctor can confirm the target object 102 included in the X-ray image 10 more easily. The other effects of the third embodiment are the same as those of the first and second embodiments.
 [Modifications]
 The embodiments disclosed here should be considered illustrative in all respects and not restrictive. The scope of the present invention is defined by the claims rather than by the above description of the embodiments, and includes all changes (modifications) within the meaning and scope equivalent to the claims.
 For example, in the first to third embodiments, the identification image generation units 64, 264, and 364 (control units 6, 206, and 306) are configured to generate the identification image 20 (220, 320) based on the enhanced partial image 14 (214, 314) and the contour image 15, but the present invention is not limited to this. In the present invention, a display in which the enhanced partial image 14 (214, 314) can be identified may be shown on the display unit 4 based on the X-ray image 10 and the enhanced partial image 14 (214, 314). For example, the portion containing the target object 102 may be displayed identifiably on the display unit 4 by combining the enhanced partial image 14 (214, 314) with the X-ray image 10 whose luminance (brightness) has been reduced.
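 A short sketch of this modification, assuming a simple global dimming factor (the value 0.4 and the paste-back compositing are arbitrary assumptions for illustration):

    import numpy as np

    def dimmed_overlay(xray, enhanced_patch, roi, dim=0.4):
        """Dim the whole X-ray image, then paste the enhanced patch back
        at the ROI so only that portion stands out."""
        out = xray.astype(np.float32) * dim
        x, y, w, h = roi
        out[y:y + h, x:x + w] = enhanced_patch[:h, :w]
        return np.clip(out, 0, 255).astype(np.uint8)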
 In the first to third embodiments, the identification image generation units 64, 264, and 364 (control units 6, 206, and 306) are configured to generate the identification image 20 (220, 320) by combining the contour image 15 and the enhanced partial image 14 (214, 314) with weighting applied to both the pixel values of the enhanced partial image 14 (214, 314) and the pixel values of the portion of the contour image 15 corresponding to it (the contour partial image 16, 216), but the present invention is not limited to this. For example, the identification image 20 (220, 320) may be generated by combining the contour image 15 and the enhanced partial image 14 (214, 314) without weighting.
 In the first to third embodiments, the contour image generation unit 63 (control units 6, 206, and 306) is configured to generate the contour image 15 based on the difference between the X-ray image 10 after the blurring process and the X-ray image 10 before the blurring process, but the present invention is not limited to this. For example, the contour image 15 may be generated by executing an edge detection process such as a Sobel filter or a Laplacian filter.
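 A sketch of that alternative using OpenCV's standard Sobel and Laplacian operators; how the two gradient directions are combined and rescaled is an assumption:

    import cv2
    import numpy as np

    def contour_by_sobel(xray):
        """Edge magnitude from horizontal and vertical Sobel gradients."""
        gx = cv2.Sobel(xray, cv2.CV_32F, 1, 0, ksize=3)
        gy = cv2.Sobel(xray, cv2.CV_32F, 0, 1, ksize=3)
        mag = cv2.magnitude(gx, gy)
        return cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    def contour_by_laplacian(xray):
        """Second-derivative edges via the Laplacian operator."""
        lap = cv2.Laplacian(xray, cv2.CV_32F, ksize=3)
        return cv2.convertScaleAbs(lap)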
 In the first and third embodiments, the partial image generation units 62 and 362 (control units 6 and 306) are configured to generate the enhanced partial image 14 (314) with a position and size equal to the position and size in the X-ray image 10 of the region of interest 11 detected by the detection model 51 (first trained model), and in the second embodiment the partial image generation unit 262 (control unit 206) is configured to generate the enhanced partial image 214 with a predetermined size, centered on the position of the detected region of interest 11 in the X-ray image 10, regardless of the size of the region of interest 11; however, the present invention is not limited to these. For example, an enhanced partial image larger than the detected region of interest 11 by a predetermined ratio may be generated.
 In the first to third embodiments, the image output unit 65 (control units 6, 206, and 306) is configured to display the X-ray image 10 and the identification image 20 (220, 320) side by side on the display unit 4, but the present invention is not limited to this. For example, only the identification image 20 (220, 320) may be displayed without displaying the X-ray image 10, or the display may be switched between the identification image 20 (220, 320) and the X-ray image 10 based on an operation on the operation unit (touch panel).
 In the first to third embodiments, the region detection unit 61 (control units 6, 206, and 306) is configured to detect one region of the X-ray image 10 as the region of interest 11, but the present invention is not limited to this. For example, it may be configured to detect a plurality of regions of interest.
 In the first to third embodiments, the weight image 18 (218) is configured so that the luminance value is large near the center and decreases gradually (circularly) toward the periphery, but the present invention is not limited to this. For example, the weight image 18 (218) may be configured so that the luminance value changes gradually in a rectangular rather than circular pattern.
 In the first to third embodiments, the contour image 15 (contour partial images 16, 216) and the enhanced partial image 14 (214, 314) are combined by varying the transparency, but the present invention is not limited to this. For example, the contour image 15 (contour partial images 16, 216) and the enhanced partial image 14 (214, 314) may be combined by an addition process or a multiplication process.
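 A sketch of those two alternatives, plain per-pixel addition and multiplication; the clipping and the normalization used in the multiplicative variant are assumptions:

    import numpy as np

    def combine_add(contour_patch, enhanced_patch):
        """Additive compositing: sum the two patches and clip."""
        s = contour_patch.astype(np.float32) + enhanced_patch.astype(np.float32)
        return np.clip(s, 0, 255).astype(np.uint8)

    def combine_mul(contour_patch, enhanced_patch):
        """Multiplicative compositing: scale the contour by the (normalized)
        enhanced patch so the object region is brightened."""
        m = contour_patch.astype(np.float32) * (1.0 + enhanced_patch / 255.0)
        return np.clip(m, 0, 255).astype(np.uint8)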
 In the second embodiment, the enhanced partial image 214 is generated with a predetermined size regardless of the size of the detected region of interest 11, but the present invention is not limited to this. For example, the detection model 51 (first trained model) may be configured to output a region of interest of a predetermined (constant) size.
 In the first to third embodiments, the target object 102 left inside the body of the subject 101 includes surgical gauze, a suture needle, and forceps, but the present invention is not limited to these. For example, the target object 102 to be detected may include bolts, fixing clips, and the like.
 In the first to third embodiments, the control processing for generating the X-ray image 10 and the control processing for generating the enhanced partial image 14 (214, 314) are performed, respectively, by the X-ray image generation unit 3 and the control unit 6 (206, 306), which are configured as separate hardware, but the present invention is not limited to this. For example, the generation of the X-ray image 10 and the generation of the enhanced partial image 14 (214, 314) may be performed by one common control unit (hardware).
 In the first to third embodiments, the region detection unit 61, the partial image generation unit 62 (262, 362), the contour image generation unit 63, the identification image generation unit 64 (264, 364), and the image output unit 65 are configured as functional blocks (software) within one piece of hardware (the control unit 6), but the present invention is not limited to this. For example, each of these units may be configured as separate hardware (arithmetic circuits).
 In the first to third embodiments, the detection model 51 (first trained model), the removal model 52 (252), and the enhancement model 352 (second trained model) are generated by the learning device 103, which is separate from the X-ray imaging apparatus 100 (200, 300), but the present invention is not limited to this. For example, the detection model 51, the removal model 52 (252), and the enhancement model 352 may be generated by the X-ray imaging apparatus 100 (200, 300), or each model may be generated by a different learning device (PC).
 In the first to third embodiments, Faster R-CNN is used as the detection model 51, a trained model (algorithm) for region detection (object detection), but the present invention is not limited to this. For example, YOLO (You Only Look Once) may be used as the region detection algorithm, and Fast R-CNN, R-CNN, SSD (Single Shot MultiBox Detector), and the like may also be used.
 In the first to third embodiments, the removal model 52 (252) and the enhancement model 352 (second trained model) are generated based on U-Net, a type of fully convolutional network (FCN), but the present invention is not limited to this. For example, the removal model 52 (252) and the enhancement model 352 may be generated based on a CNN (Convolutional Neural Network) including fully connected layers, or based on an encoder-decoder model other than U-Net, such as SegNet or PSPNet.
 In the first and second embodiments, the partial image generation units 62 and 262 (control units 6 and 206) are configured to generate images in which the target object 102 included in the X-ray image 10 is emphasized (the enhanced whole image 13 and the enhanced partial image 214) by taking the difference between the X-ray image 10 and the removed image 12 (212) (a subtraction process), but the present invention is not limited to this. For example, an image in which the target object 102 is emphasized may be generated by dividing the X-ray image 10 by the removed image 12 (212) instead of subtracting.
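 A sketch of the division variant; the epsilon guard and the rescaling of the ratio image are assumptions, since the specification only names division as an alternative to subtraction:

    import numpy as np

    def enhance_by_division(xray, removed, eps=1.0):
        """Ratio image: pixels where the object was removed deviate from 1.0,
        so the object stands out after rescaling."""
        ratio = (xray.astype(np.float32) + eps) / (removed.astype(np.float32) + eps)
        ratio = np.abs(ratio - 1.0)               # deviation from 'no change'
        ratio /= max(ratio.max(), 1e-6)           # normalize to [0, 1]
        return (ratio * 255).astype(np.uint8)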
 In the first to third embodiments, the generated identification image 20 (220, 320) is displayed on the display unit 4 provided in the X-ray imaging apparatus 100 (200, 300), but the present invention is not limited to this. For example, the identification image 20 (220, 320) may be displayed on a display device such as an external monitor provided separately from the X-ray imaging apparatus 100 (200, 300).
 In the first to third embodiments, the X-ray imaging apparatus 100 (200, 300) is a mobile X-ray imaging apparatus for ward rounds, but the present invention is not limited to this. For example, it may be a general X-ray imaging apparatus installed in an X-ray imaging room.
 In the first to third embodiments, the image processing for generating the identification image 20 (220, 320) is performed by the control unit 6 (206, 306) provided in the X-ray imaging apparatus 100 (200, 300), but the present invention is not limited to this. For example, the identification image may be generated by an image processing device (for example, a personal computer) that performs image processing separately from the X-ray imaging apparatus.
 In the third embodiment, the partial image generation unit 362 (control unit 306) is configured to generate, with the enhancement model 352 (second trained model), the enhanced image 313 corresponding to the entire X-ray image 10 from the entire X-ray image 10, but the present invention is not limited to this. For example, the partial image generation unit 362 may be configured to use an enhancement model (second trained model) trained to generate an enhanced partial image from the portion of the X-ray image 10 corresponding to the region of interest 11; that is, the partial image generation unit 362 may be configured to generate the enhanced partial image from the portion of the X-ray image 10 corresponding to the region of interest 11 as the output of image processing by the second trained model.
 [Aspects]
 It will be understood by those skilled in the art that the exemplary embodiments described above are specific examples of the following aspects.
(Item 1)
 An X-ray imaging apparatus comprising:
 an X-ray irradiation unit that irradiates a subject with X-rays;
 an X-ray detection unit that detects the X-rays emitted from the X-ray irradiation unit;
 an X-ray image generation unit that generates an X-ray image based on a detection signal of the X-rays detected by the X-ray detection unit; and
 a control unit, wherein
 the control unit includes:
 a region detection unit that detects, from the X-ray image, a region of interest containing a target object inside the body of the subject, using a first trained model generated by machine learning so as to detect the region of interest;
 a partial image generation unit that generates, based on the region of interest detected by the region detection unit and an output result of image processing by a second trained model generated by machine learning so as to remove or emphasize the target object in the X-ray image, an enhanced partial image in which a portion corresponding to the region of interest in the X-ray image is cut out and the target object is emphasized; and
 an image output unit that causes a display unit to display a display in which the enhanced partial image generated by the partial image generation unit can be identified.
(Item 2)
 The X-ray imaging apparatus according to item 1, wherein
 the control unit further includes an identification image generation unit that generates, based on the enhanced partial image generated by the partial image generation unit, an identification image for identifying the target object included in the X-ray image, and
 the image output unit is configured to cause the display unit to display the identification image generated by the identification image generation unit.
(Item 3)
 The X-ray imaging apparatus according to item 2, wherein
 the control unit further includes a contour image generation unit that generates a contour image showing a contour of the subject in the X-ray image, and
 the identification image generation unit is configured to generate the identification image based on the contour image generated by the contour image generation unit and the enhanced partial image generated by the partial image generation unit.
(Item 4)
 The X-ray imaging apparatus according to item 3, wherein the identification image generation unit is configured to generate the identification image by combining the contour image and the enhanced partial image with weighting applied to each of the pixel values of the enhanced partial image and the pixel values of the portion of the contour image corresponding to the enhanced partial image.
(Item 5)
 The X-ray imaging apparatus according to item 3 or 4, wherein the contour image generation unit is configured to execute a blurring process on the X-ray image and to generate the contour image based on a difference between the X-ray image after the blurring process is executed and the X-ray image before the blurring process is executed.
(Item 6)
 The X-ray imaging apparatus according to any one of items 2 to 5, wherein the image output unit is configured to cause the display unit to display the X-ray image and the identification image.
(Item 7)
 The X-ray imaging apparatus according to any one of items 1 to 6, wherein the partial image generation unit is configured to generate either the enhanced partial image having a position and size equal to the position and size in the X-ray image of the region of interest detected by the first trained model, or the enhanced partial image having a predetermined size, centered on the position of the detected region of interest in the X-ray image, regardless of the size of the region of interest.
(Item 8)
 The X-ray imaging apparatus according to any one of items 1 to 7, wherein the partial image generation unit is configured to generate, with the second trained model, a removed image in which the target object is removed from the entire X-ray image; to generate, based on a difference between the X-ray image and the removed image, an enhanced whole image in which the target object is emphasized over the entire X-ray image; and to generate the enhanced partial image by cutting out a portion corresponding to the region of interest from the generated enhanced whole image.
(Item 9)
 The X-ray imaging apparatus according to any one of items 1 to 7, wherein the partial image generation unit is configured to generate, with the second trained model, from a portion of the X-ray image corresponding to the region of interest, a removed image in which the portion corresponding to the region of interest is cut out and the target object is removed, and to generate the enhanced partial image based on a difference between the portion of the X-ray image corresponding to the region of interest and the removed image in which the portion corresponding to the region of interest is cut out.
(Item 10)
 The X-ray imaging apparatus according to any one of items 1 to 7, wherein the partial image generation unit is configured to generate, with the second trained model, an enhanced image in which the target object in the X-ray image is emphasized, and to generate the enhanced partial image based on the enhanced image and the region of interest.
(Item 11)
 The X-ray imaging apparatus according to any one of items 1 to 10, wherein the region detection unit is configured to detect, as the region of interest, one region having the highest likelihood among a plurality of regions of the X-ray image estimated by the first trained model to contain the target object.
(Item 12)
 An image processing device comprising:
 a region detection unit that detects, from an X-ray image generated based on a detection signal of X-rays irradiated onto a subject, a region of interest containing a target object inside the body of the subject, using a first trained model generated by machine learning so as to detect the region of interest;
 a partial image generation unit that generates, based on the region of interest detected by the region detection unit and an output result of image processing by a second trained model generated by machine learning so as to remove or emphasize the target object in the X-ray image, an enhanced partial image in which a portion corresponding to the region of interest in the X-ray image is cut out and the target object is emphasized; and
 an image output unit that causes a display unit to display a display in which the enhanced partial image generated by the partial image generation unit can be identified.
(Item 13)
 An image processing method comprising:
 detecting, from an X-ray image generated based on a detection signal of X-rays irradiated onto a subject, a region of interest containing a target object inside the body of the subject, using a first trained model generated by machine learning so as to detect the region of interest;
 generating, based on the detected region of interest and an output result of image processing by a second trained model generated by machine learning so as to remove or emphasize the target object in the X-ray image, an enhanced partial image in which a portion corresponding to the region of interest in the X-ray image is cut out and the target object is emphasized; and
 causing a display unit to display a display in which the generated enhanced partial image can be identified.
 1 X-ray irradiation unit
 2 X-ray detection unit
 3 X-ray image generation unit
 4 Display unit
 6, 206, 306 Control unit
 10 X-ray image
 11 Region of interest
 12, 212 Removed image
 13 Enhanced whole image
 14, 214, 314 Enhanced partial image
 15 Contour image
 20, 220, 320 Identification image
 51 Detection model (first trained model)
 52, 252 Removal model (second trained model)
 61 Region detection unit
 62, 262, 362 Partial image generation unit
 63 Contour image generation unit
 64, 264, 364 Identification image generation unit
 65 Image output unit
 100, 200, 300 X-ray imaging apparatus
 101 Subject
 102 Target object
 313 Enhanced image
 352 Enhancement model (second trained model)

Claims (13)

  1.  An X-ray imaging apparatus comprising:
     an X-ray irradiation unit that irradiates a subject with X-rays;
     an X-ray detection unit that detects the X-rays emitted from the X-ray irradiation unit;
     an X-ray image generation unit that generates an X-ray image based on a detection signal of the X-rays detected by the X-ray detection unit; and
     a control unit, wherein
     the control unit includes:
     a region detection unit that detects, from the X-ray image, a region of interest containing a target object inside the body of the subject, using a first trained model generated by machine learning so as to detect the region of interest;
     a partial image generation unit that generates, based on the region of interest detected by the region detection unit and an output result of image processing by a second trained model generated by machine learning so as to remove or emphasize the target object in the X-ray image, an enhanced partial image in which a portion corresponding to the region of interest in the X-ray image is cut out and the target object is emphasized; and
     an image output unit that causes a display unit to display a display in which the enhanced partial image generated by the partial image generation unit can be identified.
  2.  The X-ray imaging apparatus according to claim 1, wherein
     the control unit further includes an identification image generation unit that generates, based on the enhanced partial image generated by the partial image generation unit, an identification image for identifying the target object included in the X-ray image, and
     the image output unit is configured to cause the display unit to display the identification image generated by the identification image generation unit.
  3.  The X-ray imaging apparatus according to claim 2, wherein
     the control unit further includes a contour image generation unit that generates a contour image showing a contour of the subject in the X-ray image, and
     the identification image generation unit is configured to generate the identification image based on the contour image generated by the contour image generation unit and the enhanced partial image generated by the partial image generation unit.
  4.  The X-ray imaging apparatus according to claim 3, wherein the identification image generation unit is configured to generate the identification image by combining the contour image and the enhanced partial image with weighting applied to each of the pixel values of the enhanced partial image and the pixel values of the portion of the contour image corresponding to the enhanced partial image.
  5.  The X-ray imaging apparatus according to claim 3, wherein the contour image generation unit is configured to execute a blurring process on the X-ray image and to generate the contour image based on a difference between the X-ray image after the blurring process is executed and the X-ray image before the blurring process is executed.
  6.  The X-ray imaging apparatus according to claim 2, wherein the image output unit is configured to cause the display unit to display the X-ray image and the identification image.
  7.  The X-ray imaging apparatus according to claim 1, wherein the partial image generation unit is configured to generate either one of: the enhanced partial image having a position and a size equal to the position and the size, in the X-ray image, of the region of interest detected by the first trained model; and the enhanced partial image having a predetermined size regardless of the size of the region of interest, centered on the position, in the X-ray image, of the detected region of interest.
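(Illustrative sketch, not part of the claims: the two cropping modes of claim 7, assuming the region of interest is given as a bounding box; the predetermined output size is an arbitrary parameter.)

    from typing import Optional
    import numpy as np

    def crop_partial(enhanced: np.ndarray, box: tuple,
                     fixed_size: Optional[int] = None) -> np.ndarray:
        """Claim 7: crop either at the detected position and size, or at
        a predetermined size centered on the detected position."""
        x, y, w, h = box
        if fixed_size is None:
            # Mode 1: same position and size as the detected region of interest.
            return enhanced[y:y + h, x:x + w]
        # Mode 2: predetermined size centered on the detected position,
        # clamped so the crop stays inside the image.
        h_img, w_img = enhanced.shape[:2]
        cx, cy = x + w // 2, y + h // 2
        half = fixed_size // 2
        x0 = min(max(0, cx - half), max(0, w_img - fixed_size))
        y0 = min(max(0, cy - half), max(0, h_img - fixed_size))
        return enhanced[y0:y0 + fixed_size, x0:x0 + fixed_size]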
  8.  The X-ray imaging apparatus according to claim 1, wherein the partial image generation unit is configured to generate, by the second trained model, a removed image in which the target object is removed from the entire X-ray image, to generate, based on the difference between the X-ray image and the removed image, an enhanced whole image in which the target object is emphasized over the entire X-ray image, and to generate the enhanced partial image by cutting out the portion corresponding to the region of interest from the generated enhanced whole image.
  9.  The X-ray imaging apparatus according to claim 1, wherein the partial image generation unit is configured to generate, by the second trained model, a removed image in which the portion corresponding to the region of interest is cut out from the X-ray image and the target object is removed from that portion, and to generate the enhanced partial image based on the difference between the portion of the X-ray image corresponding to the region of interest and the removed image.
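(Illustrative sketch, not part of the claims: claims 8 and 9 differ only in where the removal model is applied. The callable remove, mapping an image or patch to the same view with the target object removed, stands in for the second trained model and is an assumption for illustration.)

    import numpy as np

    def enhance_whole_then_crop(xray: np.ndarray, box: tuple, remove) -> np.ndarray:
        """Claim 8: remove the object from the whole image, take the
        difference, then cut out the region of interest."""
        x, y, w, h = box
        enhanced_whole = xray.astype(np.float32) - remove(xray).astype(np.float32)
        return enhanced_whole[y:y + h, x:x + w]

    def crop_then_enhance(xray: np.ndarray, box: tuple, remove) -> np.ndarray:
        """Claim 9: cut out the region of interest first, remove the
        object from that patch, then take the difference."""
        x, y, w, h = box
        patch = xray[y:y + h, x:x + w].astype(np.float32)
        return patch - remove(patch).astype(np.float32)

The claim 9 variant runs the second trained model on a smaller input, which would typically be cheaper, while the claim 8 variant gives the model full-image context.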
  10.  The X-ray imaging apparatus according to claim 1, wherein the partial image generation unit is configured to generate, by the second trained model, an enhanced image in which the target object in the X-ray image is emphasized, and to generate the enhanced partial image based on the enhanced image and the region of interest.
  11.  The X-ray imaging apparatus according to claim 1, wherein the region detection unit is configured to detect, as the region of interest, the one region having the highest likelihood among a plurality of regions in the X-ray image that are estimated by the first trained model to include the target object.
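(Illustrative sketch, not part of the claims: the selection in claim 11 is a plain argmax over the detector's scored proposals, assuming the first trained model returns (box, likelihood) pairs.)

    def pick_region_of_interest(candidates):
        """Claim 11: keep only the candidate region with the highest
        likelihood among those estimated to include the target object."""
        box, _ = max(candidates, key=lambda c: c[1])
        return box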
  12.  An image processing device comprising:
    a region detection unit that detects, from an X-ray image generated based on a detection signal of X-rays irradiated to a subject, a region of interest including a target object inside the body of the subject, using a first trained model generated by machine learning so as to detect the region of interest in the X-ray image;
    a partial image generation unit that generates, based on the region of interest detected by the region detection unit and an output result of image processing by a second trained model generated by machine learning so as to remove or emphasize the target object in the X-ray image, an enhanced partial image in which a portion corresponding to the region of interest in the X-ray image is cut out and the target object is emphasized; and
    an image output unit that causes a display unit to display an identifiable display of the enhanced partial image generated by the partial image generation unit.
  13.  An image processing method comprising:
    detecting, from an X-ray image generated based on a detection signal of X-rays irradiated to a subject, a region of interest including a target object inside the body of the subject, using a first trained model generated by machine learning so as to detect the region of interest in the X-ray image;
    generating, based on the detected region of interest and an output result of image processing by a second trained model generated by machine learning so as to remove or emphasize the target object in the X-ray image, an enhanced partial image in which a portion corresponding to the region of interest in the X-ray image is cut out and the target object is emphasized; and
    causing a display unit to display an identifiable display of the generated enhanced partial image.
PCT/JP2021/040986 2021-02-16 2021-11-08 X-ray imaging apparatus, image processing device, and image processing method WO2022176280A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021022591 2021-02-16
JP2021-022591 2021-02-16

Publications (1)

Publication Number Publication Date
WO2022176280A1

Family

ID=82931315

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/040986 WO2022176280A1 (en) 2021-02-16 2021-11-08 X-ray imaging apparatus, image processing device, and image processing method

Country Status (1)

Country Link
WO (1) WO2022176280A1 (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007198934A (en) * 2006-01-27 2007-08-09 Hitachi Ltd Region of interest extraction method for image data, computer program using the same and region of interest extracting system
JP2011083499A (en) * 2009-10-17 2011-04-28 Tele Systems:Kk Radiation imaging apparatus, and phantom device used therein
JP2016148629A (en) * 2015-02-13 2016-08-18 東芝メディカルシステムズ株式会社 Medical image processor and medical image processing method
WO2019208804A1 (en) * 2018-04-27 2019-10-31 キヤノンメディカルシステムズ株式会社 Medical information processing system and medical information processing program
JP2020018694A (en) * 2018-08-02 2020-02-06 株式会社日立製作所 Ultrasonic diagnostic device and ultrasonic image processing method
JP2021013685A (en) * 2019-07-16 2021-02-12 富士フイルム株式会社 Radiation image processing device and program

Similar Documents

Publication Publication Date Title
US11389132B2 (en) Radiographic image processing apparatus, radiographic image processing method, and radiographic image processing program
JP2007044485A (en) Method and device for segmentation of part with intracerebral hemorrhage
JP2011120897A (en) System and method for suppressing artificial object in medical image
CN104240271B (en) Medical image-processing apparatus
US11037672B2 (en) Medical image processing apparatus, medical image processing method, and system
JP6750425B2 (en) Radiation image processing apparatus and radiation image processing method
JP5536644B2 (en) Medical image processing apparatus, medical image processing method, and medical image processing program
JP5380231B2 (en) Medical image display apparatus and method, and program
US11069060B2 (en) Image processing apparatus and radiographic image data display method
JP4416823B2 (en) Image processing apparatus, image processing method, and computer program
WO2022176280A1 (en) X-ray imaging apparatus, image processing device, and image processing method
JP2008512161A (en) User interface for CT scan analysis
CN111353950B (en) Method for image processing, medical imaging device and electronically readable data carrier
JP2012176082A (en) X-ray diagnostic apparatus and x-ray diagnostic program
JP7178822B2 (en) medical image processor
US20220114729A1 (en) X-ray imaging apparatus, image processing method, and generation method of trained model
US11978549B2 (en) Training image generation device, training image generation method, training image generation program, learning device, learning method, learning program, discriminator, radiographic image processing device, radiographic image processing method, and radiographic image processing program
JP7366870B2 (en) Learning device, method and program, trained model, and radiographic image processing device, method and program
WO2017179255A1 (en) Method, device and computer program for automatic estimation of bone region in ct image
WO2022215303A1 (en) Image processing device, image processing method, and image processing program
JP6677263B2 (en) X-ray equipment
JP6491823B2 (en) Medical image diagnosis support apparatus, method and program
JP7413216B2 (en) Learning device, method and program, trained model, and radiographic image processing device, method and program
JP2002336242A (en) Three-dimensional image display device
JP2022090165A (en) Image processing device, image processing method and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 21926708; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 21926708; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: JP