WO2022215303A1 - X-ray imaging apparatus, image processing apparatus, and image processing program
- Publication number
- WO2022215303A1 (PCT/JP2021/048529)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- ray
- output
- target object
- unit
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/52—Devices using data or image processing specially adapted for radiation diagnosis
- A61B6/5258—Devices using data or image processing specially adapted for radiation diagnosis involving detection or reduction of artifacts or noise
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/46—Arrangements for interfacing with the operator or the patient
- A61B6/461—Displaying means of special interest
- A61B6/463—Displaying means of special interest characterised by displaying multiple images or images and diagnostic data on one display
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/60—Image enhancement or restoration using machine learning, e.g. neural networks
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/44—Constructional features of apparatus for radiation diagnosis
- A61B6/4405—Constructional features of apparatus for radiation diagnosis the apparatus being movable or portable, e.g. handheld or mounted on a trolley
Definitions
- the present invention relates to an X-ray imaging apparatus, an image processing apparatus, and an image processing program, and more particularly to an X-ray imaging apparatus, an image processing apparatus, and an image processing program for confirming a target object inside the body of a subject.
- the radiation imaging system described in JP-A-2019-180605 processes a radiation image according to a preset processing procedure according to the inspection purpose.
- when a processing procedure for checking the presence or absence of hemostatic gauze after laparotomy is set, foreign matter enhancement processing is performed on the captured image.
- an abdominal image is displayed on which foreign matter enhancement processing has been performed so that foreign matter (gauze) inside the subject's body can be easily seen.
- a doctor or the like can detect the presence or absence of a foreign body (hemostatic gauze, etc.) in the subject's body after surgery, like the radiography system described in Patent Document 1.
- edge enhancement processing may be performed as the foreign matter enhancement processing.
- in that case, an enhanced image is generated in which not only the foreign matter but also human body structures such as the subject's bones are emphasized. Since the visibility of the foreign matter in the enhanced image is thereby lowered, it is difficult to confirm the foreign matter (target object) included in the X-ray image.
- the present invention has been made to solve the above-described problems, and one object of the present invention is to provide an X-ray imaging apparatus, an image processing apparatus, and an image processing program that make it easy to confirm a target object included in an X-ray image of a subject.
- an X-ray imaging apparatus according to a first aspect of the present invention includes an X-ray irradiator that irradiates a subject with X-rays, an X-ray detector that detects the X-rays emitted from the X-ray irradiator, an X-ray image generator that generates an X-ray image based on an X-ray detection signal detected by the X-ray detector, and a controller. The controller includes an enhanced image generator that, by receiving the X-ray image generated by the X-ray image generator, generates as an enhanced image at least one of an output image of a trained model in which the position of a target object is emphasized based on the trained model for detecting the region of the target object inside the body of the subject, and a composite image generated based on the output image and the X-ray image so that the position of the target object is emphasized, and an image output unit that displays at least one of the output image and the composite image generated by the enhanced image generator, together with the X-ray image, on a display unit simultaneously or switchably.
- an image processing apparatus according to a second aspect of the present invention includes an enhanced image generator that, by receiving an X-ray image generated based on a detection signal of X-rays irradiated to a subject, generates as an enhanced image at least one of an output image of a trained model in which the position of a target object is emphasized based on the trained model for detecting the region of the target object inside the body of the subject in the X-ray image, and a composite image generated based on the output image and the X-ray image so that the position of the target object is emphasized, and an image output unit that displays at least one of the output image and the composite image generated by the enhanced image generator, together with the X-ray image, on a display unit simultaneously or switchably.
- an image processing program according to a third aspect of the present invention causes a computer to function as an enhanced image generator that, by receiving an X-ray image generated based on a detection signal of X-rays irradiated to a subject, generates as an enhanced image at least one of an output image of a trained model in which the position of a target object is emphasized based on the trained model for detecting the region of the target object inside the body of the subject in the X-ray image, and a composite image generated based on the output image and the X-ray image so that the position of the target object is emphasized, and as an image output unit that displays at least one of the output image and the composite image, together with the X-ray image, on a display unit simultaneously or switchably.
- in the X-ray imaging apparatus according to the first aspect, as described above, at least one of the output image of the trained model in which the position of the target object is emphasized based on the trained model for detecting the region of the target object inside the body of the subject, and the composite image generated based on the output image and the X-ray image so that the position of the target object is emphasized, is generated as an enhanced image, and at least one of the output image and the composite image and the X-ray image are displayed on the display unit simultaneously or switchably.
- thereby, at least one of the output image and the composite image is generated as an enhanced image in which the position of the target object is emphasized, based on the trained model that directly detects the region of the target object. Unlike the case where an enhanced image is generated in which both the target object and human body structures such as the subject's bones are emphasized, enhancement of human body structures in the enhanced image can therefore be suppressed. As a result, the target object included in the X-ray image can be easily confirmed. Moreover, since at least one of the output image and the composite image as the enhanced image and the X-ray image are displayed on the display unit simultaneously or switchably, the target object included in the X-ray image can be easily confirmed by comparing the X-ray image and the enhanced image, which is very useful.
- when the target object included in the X-ray image of the subject is to be emphasized by image processing using a trained model, it is also conceivable to generate a trained model for removing the target object from the X-ray image, generate a removed image in which the target object is removed from the X-ray image using that model, and generate an enhanced image from the difference between the X-ray image and the removed image.
- however, when the enhanced image is generated from the difference between the X-ray image and the removed image, structures of the subject similar to the target object (such as the pelvis and femur) may also be removed in the removed image, so that not only the target object but also subject structures similar to the target object may be emphasized in the enhanced image. Therefore, when image processing is performed using a trained model that removes the target object from the X-ray image, the visibility of the target object in the enhanced image decreases, making it difficult to confirm the target object included in the X-ray image.
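The drawback of the removal-and-difference approach described above can be sketched numerically; the arrays, intensities, and the perfectly uniform "removed" background below are hypothetical values chosen purely for illustration:

```python
import numpy as np

# Hypothetical 4x4 X-ray image: background 100, one "gauze" pixel and one
# bone-like pixel of similar intensity (values are illustrative only).
xray = np.full((4, 4), 100.0)
xray[1, 1] = 30.0   # target object (e.g. the contrast thread of gauze)
xray[2, 2] = 35.0   # subject structure resembling the target

# A removal model restores background wherever it believes the target is;
# here it also removes the similar-looking structure.
removed = np.full((4, 4), 100.0)

# Enhanced image from the difference: both the target AND the similar
# structure light up, which is the drawback described above.
enhanced = np.abs(xray - removed)
print(enhanced[1, 1], enhanced[2, 2])
```

Because both pixels end up with large difference values, the difference image cannot tell the target object from look-alike anatomy, which is why the approach of this document detects the target region directly instead.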
- in the image processing apparatus according to the second aspect and the image processing program according to the third aspect, as described above, by inputting the X-ray image, at least one of the output image of the trained model in which the position of the target object is emphasized based on the trained model for detecting the region of the target object inside the body of the subject, and the composite image generated based on the output image and the X-ray image so that the position of the target object is emphasized, is generated as an enhanced image.
- At least one of the output image and the synthesized image and the X-ray image are displayed on the display unit simultaneously or switchably.
- At least one of the output image and the synthesized image is generated as an enhanced image in which the position of the target object is emphasized based on the trained model that directly detects the region of the target object.
- unlike the case where a removed image is generated by a trained model that removes the target object from the X-ray image and the enhanced image is generated from the difference between the X-ray image and the removed image, it is possible to suppress structures of the subject similar to the target object from being emphasized in the enhanced image.
- as a result, when the target object included in the X-ray image of the subject is emphasized by image processing using the trained model, the target object included in the X-ray image can be easily confirmed.
- FIG. 1 is a diagram for explaining the configuration of an X-ray imaging apparatus according to one embodiment.
- FIG. 2 is a block diagram for explaining the configuration of the X-ray imaging apparatus according to one embodiment.
- FIG. 3 is a diagram illustrating an example of an X-ray image of a subject with a target object in the body according to one embodiment.
- FIG. 4 is a diagram for explaining image processing using a trained model according to one embodiment.
- FIG. 5 is a diagram for explaining an output layer image according to one embodiment.
- FIG. 6 is a diagram for explaining an intermediate layer image according to one embodiment.
- FIG. 7 is a diagram for explaining generation of a trained model according to one embodiment.
- FIG. 8 is a diagram for explaining a colored image according to one embodiment.
- FIG. 9 is a diagram for explaining a colored superimposed image according to one embodiment.
- FIG. 10 is a diagram for explaining an intermediate layer superimposed image according to one embodiment.
- FIG. 11 is a diagram for explaining display on a display unit according to one embodiment.
- FIG. 12 is a flow chart for explaining an image processing method according to one embodiment.
- (Overall configuration of X-ray imaging apparatus) An X-ray imaging apparatus 100 according to an embodiment of the present invention will be described with reference to FIGS. 1 to 11.
- the X-ray imaging apparatus 100 performs X-ray imaging to identify a target object 200 inside the body of a subject 101 .
- the X-ray imaging apparatus 100 performs X-ray imaging for confirming whether or not a target object 200 (residue) is left in the body of a subject 101 who has undergone laparotomy in an operating room.
- the X-ray imaging apparatus 100 is, for example, a mobile X-ray imaging apparatus in which the entire apparatus is movable.
- Target objects 200 are, for example, surgical gauze, suture needles, and forceps (such as hemostatic forceps).
- after the wound is closed, X-ray imaging for confirmation is performed on the subject 101 so that a worker such as a doctor does not leave a target object 200 such as surgical gauze, a suture needle, or forceps in the body of the subject 101 (so that there is no residual object).
- a worker such as a doctor then confirms whether or not the target object 200 is left inside the body of the subject 101 by visually checking the X-ray image 10 (see FIG. 3) of the subject 101.
- the X-ray imaging apparatus 100 includes an X-ray irradiation unit 1, an X-ray detection unit 2, an X-ray image generation unit 3, a display unit 4, a storage unit 5, and a control unit 6.
- the control unit 6 is an example of the "image processing device" and the "computer" in the claims.
- the X-ray irradiation unit 1 irradiates the subject 101 after surgery with X-rays.
- the X-ray irradiation unit 1 includes an X-ray tube that emits X-rays when a voltage is applied.
- the X-ray detection unit 2 detects X-rays that have passed through the subject 101 . Then, the X-ray detector 2 outputs a detection signal based on the detected X-rays.
- the X-ray detection unit 2 includes, for example, an FPD (Flat Panel Detector).
- the X-ray detector 2 is configured as a wireless type X-ray detector and outputs a detection signal as a radio signal.
- the X-ray detection unit 2 is configured to be able to communicate with the X-ray image generation unit 3 through a wireless connection such as a wireless LAN, and outputs the detection signal to the X-ray image generation unit 3.
- the X-ray image generation unit 3 controls X-ray imaging by controlling the X-ray irradiation unit 1 and the X-ray detection unit 2, as shown in FIG.
- the X-ray image generator 3 generates an X-ray image 10 based on the X-ray detection signal detected by the X-ray detector 2 .
- the X-ray image generator 3 is configured to be able to communicate with the X-ray detector 2 through a wireless connection such as a wireless LAN.
- the X-ray image generator 3 includes a processor such as an FPGA (field-programmable gate array). The X-ray image generator 3 then outputs the generated X-ray image 10 to the controller 6 .
- the X-ray image 10 shown in FIG. 3 is an image acquired by X-ray imaging the abdomen of the subject 101 after surgery.
- the X-ray image 10 shown in FIG. 3 includes surgical gauze as the target object 200 .
- the surgical gauze is woven with a contrast thread that hardly transmits X-rays so that it can be visually recognized in an X-ray image 10 obtained by X-ray photography after surgery.
- the X-ray image 10 shown in FIG. 3 also includes surgical wires and surgical clips as artificial structures 201 other than the target object 200 .
- the display unit 4 includes, for example, a touch panel type liquid crystal display.
- the display unit 4 displays various images such as an X-ray image 10 . Further, the display unit 4 is configured to receive an input operation for operating the X-ray imaging apparatus 100 by an operator such as a doctor, based on the operation on the touch panel.
- the storage unit 5 is composed of a storage device such as a hard disk drive, for example.
- the storage unit 5 stores image data such as the X-ray image 10 .
- the storage unit 5 also stores various setting values for operating the X-ray imaging apparatus 100 .
- the storage unit 5 also stores a program used for control processing of the X-ray imaging apparatus 100 by the control unit 6 .
- the storage unit 5 also stores an image processing program 51 .
- the image processing program 51 can be stored in the storage unit 5 by, for example, being read from a non-transitory portable storage medium such as an optical disk or a USB memory, or by being downloaded via a network.
- the storage unit 5 also stores a learned model 52, which will be described later.
- the control unit 6 is a computer including, for example, a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), a ROM (Read Only Memory), and a RAM (Random Access Memory).
- the control unit 6 includes an enhanced image generation unit 61 and an image output unit 62 as functional components. That is, the control unit 6 functions as an enhanced image generation unit 61 and an image output unit 62 by executing the image processing program 51 .
- the emphasized image generation unit 61 and the image output unit 62 are functional blocks as software in the control unit 6, and are configured to function based on command signals from the control unit 6 as hardware.
- in this embodiment, the emphasized image generation unit 61 (control unit 6) receives the X-ray image 10 generated by the X-ray image generation unit 3 and, based on a trained model 52 for detecting the region of the target object 200 inside the body of the subject 101, generates as enhanced images output images (11, 12) in which the position of the target object 200 is emphasized and composite images (14, 15) generated based on the output images (11, 12) and the X-ray image 10 so that the position of the target object 200 is emphasized. Then, the image output unit 62 (control unit 6) displays at least one of the output images (11, 12) and the composite images (14, 15) generated by the emphasized image generation unit 61, together with the X-ray image 10, on the display unit 4 simultaneously or switchably.
- specifically, the enhanced image generation unit 61 (control unit 6) generates the output layer image 11 and the intermediate layer image 12 as output images of the trained model 52, based on the learned model 52 generated by machine learning.
- the trained model 52 is generated by machine learning using deep learning.
- the trained model 52 is generated based on, for example, U-Net, which is one type of fully convolutional network (FCN).
- the trained model 52 is generated by learning to perform an image transformation (image reconstruction) that detects, in the input X-ray image 10, the portions estimated to be the target object 200 by transforming the pixels estimated to be the target object 200 among the pixels of the X-ray image 10.
- the output layer image 11 and the intermediate layer image 12 are examples of the "enhanced image" and the "output image” in the claims.
- the trained model 52 includes an input layer 52a, an intermediate layer 52b, and an output layer 52c.
- the input layer 52a receives an input image (X-ray image 10).
- the intermediate layer 52b acquires the feature component (target object 200) from the input image received by the input layer 52a.
- the intermediate layer 52b includes a downsampling unit that acquires the feature component (the target object 200) from the input image while reducing the size of the input image, and an upsampling unit that restores the image containing the feature component reduced by the downsampling unit to the original size (the size of the input image).
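The down/up-sampling behaviour of such an intermediate layer can be illustrated with a toy NumPy sketch; 2x2 max pooling and nearest-neighbour upsampling stand in for the learned layers, which the document does not specify:

```python
import numpy as np

def downsample(img):
    # 2x2 max pooling: halves each dimension while keeping strong responses.
    h, w = img.shape
    return img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def upsample(img):
    # Nearest-neighbour upsampling: restores the original size.
    return img.repeat(2, axis=0).repeat(2, axis=1)

x = np.arange(16, dtype=float).reshape(4, 4)
small = downsample(x)       # feature map at reduced size
restored = upsample(small)  # back to the size of the input image
print(small.shape, restored.shape)
```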
- the intermediate layer 52b outputs the intermediate layer image 12 in which the target object 200 in the X-ray image 10 is emphasized.
- the output layer 52c has a sigmoid function as an activation function.
- the output layer 52c generates and outputs the output layer image 11 representing the region of the target object 200 in the X-ray image 10 by processing the intermediate layer image 12 output from the intermediate layer 52b using the sigmoid function.
- the output layer image 11 is configured so that the pixel value of each pixel represents the probability of belonging to the region of the target object 200.
- pixels with a high probability of belonging to the region of the target object 200 (pixels whose pixel value is 1 or close to 1) are displayed in white, and pixels with a low probability of belonging to the region of the target object 200 (pixels whose pixel value is 0 or close to 0) are displayed in black.
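A minimal sketch of this output-layer behaviour, with hypothetical activation values standing in for the real intermediate layer image:

```python
import numpy as np

def sigmoid(z):
    # The activation function of the output layer 52c.
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical intermediate-layer activations: large positive values where
# the model is confident a pixel belongs to the target-object region.
logits = np.array([[ 8.0, -6.0],
                   [-7.0,  5.0]])

prob = sigmoid(logits)   # per-pixel probability in [0, 1]
mask = prob > 0.5        # pixels displayed white (target) vs black (background)
print(mask)
```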
- the output layer image 11 shown in FIG. 5 is an image acquired from the X-ray image 10 shown in FIG. 3 based on the trained model 52.
- structures such as bones of the subject 101 are removed and the target object 200 is emphasized.
- in the output layer image 11, the artificial structure 201, which is similar in shape (features) to the target object 200, is also emphasized; however, since a worker such as a doctor grasps the position, shape, and the like of the artificial structure 201, it can be easily distinguished from the target object 200.
- the intermediate layer image 12 shown in FIG. 6 is an image obtained from the X-ray image 10 shown in FIG. 3 based on the learned model 52.
- some structures such as bones of the subject 101 remain, but they are mostly removed and the target object 200 is emphasized.
- Remaining structures such as bones of the subject 101 are removed by processing using the sigmoid function of the output layer 52c.
- in the intermediate layer image 12, the artificial structure 201, which is similar in shape (features) to the target object 200, is also emphasized; however, since a worker such as a doctor grasps the position, shape, and the like of the artificial structure 201, it can be easily distinguished from the target object 200.
- the trained model 52 is generated by machine learning to detect target objects 200 such as surgical gauze, suture needles and forceps from the X-ray image 10 .
- the trained model 52 is generated in advance by a learning device 300 separate from the X-ray imaging device 100 .
- Learning device 300 is, for example, a computer including a CPU, GPU, ROM, and RAM.
- the learning device 300 generates a learned model 52 by machine learning using deep learning, using a plurality of teacher input X-ray images 310 and a plurality of teacher output images 320 as teacher data (training set).
- the teacher input X-ray image 310 is generated so as to simulate the X-ray image 10 obtained by imaging the subject 101 with the target object 200 left inside the body.
- the teacher output image 320 is generated to simulate detection of the target object 200 from the teacher input X-ray image 310 .
- the teacher input X-ray image 310 and the teacher output image 320 are generated so as to have the same conditions (such as size) as the X-ray image 10 used for input in inference using the trained model 52 .
- based on the X-ray image 10 and the output layer image 11 of the trained model 52, the enhanced image generation unit 61 (control unit 6) generates, as a composite image, a colored superimposed image 14 in which a colored image 13 generated based on the output layer image 11 is superimposed on the X-ray image 10.
- the emphasized image generator 61 generates the colored image 13 by coloring the portion corresponding to the target object 200 in the output layer image 11 based on the output layer image 11 .
- the emphasized image generator 61 generates a colored superimposed image 14 in which the generated colored image 13 is superimposed on the X-ray image 10 .
- the colored image 13 is an example of "an image generated based on an output image” in the claims.
- the colored superimposed image 14 is an example of the "enhanced image", the "composite image", and the "superimposed image” in the claims.
- specifically, based on the output layer image 11, the enhanced image generation unit 61 identifies the linear structure (shape) of the target object 200 in the output layer image 11, and generates the colored image 13 as a heat map image (color map image) colored so as to change according to the density of the linear structure in a predetermined region including the site where the identified linear structure of the target object 200 is located.
- more specifically, the enhanced image generation unit 61 binarizes the generated output layer image 11. Then, the enhanced image generation unit 61 detects the density of the linear structure in the binarized output layer image 11 to identify the portions (pixels) containing the linear structure. Specifically, the enhanced image generation unit 61 extracts a feature amount from the binarized output layer image 11 and executes pattern recognition to identify the linear structure.
- in this embodiment, the enhanced image generation unit 61 extracts a higher-order local autocorrelation (HLAC) feature as the feature quantity.
- for example, the enhanced image generator 61 acquires one pixel of the output layer image 11 as a reference point. Then, the enhanced image generation unit 61 extracts a feature quantity based on the higher-order local autocorrelation feature of a predetermined region including (centered on) the reference point. Then, the enhanced image generation unit 61 detects the linear shape in the predetermined region including the reference point by measuring the degree of matching between the extracted feature amount and a preset feature amount of the linear structure (shape).
- the enhanced image generator 61 acquires the detection value of the linear structure in a predetermined area including the reference point as the density of the linear structure in the predetermined area at the reference point.
- the enhanced image generation unit 61 acquires the local autocorrelation feature amount using each pixel of the output layer image 11 as a reference point, thereby obtaining the density (detection value) of the linear structure in each pixel of the output layer image 11. get.
- the size of the predetermined region including the reference point may be, for example, a 3 × 3 pixel region, or may be a region larger than 3 × 3 pixels, such as a 9 × 9 pixel region.
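The document relies on HLAC feature matching; as a much simplified stand-in, the sketch below just counts foreground pixels in a window around each reference point, which already yields a per-pixel "density of the linear structure" (the window size corresponds to the 3 × 3 or 9 × 9 regions mentioned above):

```python
import numpy as np

def local_density(binary, radius=1):
    # Count foreground pixels in the (2*radius+1)^2 window around each pixel.
    # radius=1 gives a 3x3 region; radius=4 would give a 9x9 region.
    h, w = binary.shape
    padded = np.pad(binary, radius)
    out = np.zeros((h, w), dtype=int)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1].sum()
    return out

# Binarized output layer image with a short horizontal "contrast thread".
img = np.zeros((5, 5), dtype=int)
img[2, 1:4] = 1
density = local_density(img)
print(density[2, 2], density[0, 0])
```

The centre of the line gets the highest count, so a heat map built from these values highlights where the linear structure is densest.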
- the enhanced image generation unit 61 generates the colored image 13 by coloring each pixel based on the density (detection value) of the linear structure obtained for each pixel of the output layer image 11 .
- the emphasized image generation unit 61 generates the colored image 13 by coloring each pixel so that the hue differs according to the value of the density of the linear structure.
- the colored image 13 is colored in the order of red, yellow, green, and blue, from the highest to the lowest density value. That is, if the density value is high, the pixel is colored red, and if the density value is low, the pixel is colored blue.
- the emphasized image generation unit 61 sets the colors in the colored image 13 by associating the acquired linear structure density (detection value) in the range of 0 to 600 with each color.
- in the drawings, the difference in color is represented by the difference in hatching.
- in this way, the enhanced image generation unit 61 generates the colored image 13 in which how well the feature amount of each pixel (each region) matches the feature amount of the linear structure corresponding to the target object 200 can be identified by the displayed color. The identification (extraction) of the linear structure (shape) when generating the colored image 13 may also be performed by pattern recognition using a feature extraction method other than the higher-order local autocorrelation feature, with the density (degree of pattern relevance) of the linear structure (shape) identified (detected) and colored accordingly. In the above description, an example of identifying surgical gauze as the target object 200 has been described; when a target object 200 other than surgical gauze is to be identified, the colored image 13 is generated by identifying the shape of that target object 200.
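The density-to-hue mapping can be sketched as follows; the four equal-width bands over the 0 to 600 range are an assumption for illustration, since the document only states the hue order and the value range:

```python
def density_to_colour(d, d_max=600):
    # Blue for the lowest density values up to red for the highest,
    # following the red/yellow/green/blue ordering described above.
    bands = ["blue", "green", "yellow", "red"]
    d = max(0, min(d, d_max))
    return bands[min(int(d / d_max * len(bands)), len(bands) - 1)]

print(density_to_colour(590), density_to_colour(20))
```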
- the colored image 13 shown in FIG. 8 is an image obtained based on the output layer image 11 shown in FIG.
- the position of the target object 200 is emphasized by coloring based on the linear structure of the target object 200 .
- the position of the artificial structure 201 whose shape (feature) is similar to the target object 200 is also emphasized by coloring.
- however, since the position of the artificial structure 201 is colored with a color (such as green) representing a lower pixel density than the position of the target object 200, a worker such as a doctor can easily distinguish it from the target object 200.
- this is because the artificial structure 201 has a lower estimated probability of belonging to the region of the target object 200 than the target object 200 does, and therefore has a lower pixel density than the position of the target object 200. Note that the colored image 13 does not include shape information of the target object 200 and the artificial structure 201.
- a colored superimposed image 14 shown in FIG. 9 is an image in which the colored image 13 shown in FIG. 8 is superimposed on the X-ray image 10 shown in FIG.
- the position of the target object 200 is emphasized by superimposing the colored image 13 on the X-ray image 10 .
- the position of the artificial structure 201 whose shape (feature) is similar to the target object 200 is also emphasized by superimposing the colored image 13 .
- however, since the position of the artificial structure 201 is colored with a color (such as green) representing a lower pixel density than the position of the target object 200, and since a worker such as a doctor grasps the position and shape of the artificial structure 201, the artificial structure 201 can be easily distinguished from the target object 200.
- the enhanced image generation unit 61 generates, as a composite image, an intermediate layer superimposed image 15 in which the intermediate layer image 12 is superimposed on the X-ray image 10, based on the X-ray image 10 and the intermediate layer image 12 of the trained model 52.
- the enhanced image generation unit 61 (control unit 6) generates the intermediate layer superimposed image 15 by superimposing the transparent intermediate layer image 12 on the X-ray image 10 .
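Superimposing a transparent layer amounts to per-pixel alpha blending; a minimal sketch follows, where the blending weight of 0.5 is an illustrative choice rather than a value from this document:

```python
import numpy as np

def superimpose(xray, overlay, alpha=0.5):
    # Blend a semi-transparent enhanced image onto the X-ray image.
    return (1.0 - alpha) * xray + alpha * overlay

xray = np.full((2, 2), 100.0)
overlay = np.array([[255.0, 0.0],
                    [0.0, 255.0]])
blended = superimpose(xray, overlay)
print(blended)
```

Pixels where the overlay is bright are pulled up, so the target region stands out while the underlying X-ray anatomy remains visible.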
- the intermediate layer superimposed image 15 is an example of the "composite image" and the "superimposed image” in the claims.
- An intermediate layer superimposed image 15 shown in FIG. 10 is an image in which the intermediate layer image 12 shown in FIG. 6 is superimposed on the X-ray image 10 shown in FIG.
- the position and shape of the target object 200 are emphasized by superimposing the intermediate layer image 12 on the X-ray image 10 .
- in the intermediate layer superimposed image 15, the position and shape of the artificial structure 201, which is similar in shape (features) to the target object 200, are also emphasized; however, since a worker such as a doctor grasps the position, shape, and the like of the artificial structure 201, it can be easily distinguished from the target object 200.
- the image output unit 62 (control unit 6) causes the display unit 4 to display at least one of the output layer image 11, the intermediate layer image 12, the colored superimposed image 14, and the intermediate layer superimposed image 15, together with the X-ray image 10, simultaneously or switchably.
- for example, the image output unit 62 causes the display unit 4 to display, side by side at the same time or alternately by switching, the X-ray image 10 together with any one of the output layer image 11, the intermediate layer image 12, the colored superimposed image 14, and the intermediate layer superimposed image 15.
- likewise, the image output unit 62 can cause the display unit 4 to display, simultaneously side by side or by switching, the X-ray image 10 together with any two, any three, or all four of the output layer image 11, the intermediate layer image 12, the colored superimposed image 14, and the intermediate layer superimposed image 15.
- Note that the switching between images on the display unit 4 can be performed, for example, by an input operation on the touch panel of the display unit 4.
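The simultaneous / switchable display behavior described above can be modeled abstractly as a selector that either returns all chosen images at once or cycles through them on each touch-panel input. This is a hypothetical sketch; the class and its interface are illustrative, not the patent's actual display control.

```python
class DisplaySelector:
    """Toy model of the display unit's simultaneous / switching modes."""

    def __init__(self, images):
        self.images = list(images)  # e.g. labels for the images to show
        self.index = 0

    def simultaneous(self):
        # Side-by-side mode: everything at once.
        return list(self.images)

    def switch(self):
        # Switching mode: one image at a time; each call advances to the next.
        current = self.images[self.index]
        self.index = (self.index + 1) % len(self.images)
        return current

sel = DisplaySelector(["xray", "output_layer", "intermediate_layer"])
```

Calling `switch()` repeatedly cycles through the selected images, mirroring repeated touch-panel taps.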
- Steps 501 to 503 indicate control processing by the X-ray image generation unit 3, and steps 504 to 507 indicate control processing by the control unit 6.
- In step 501, the subject 101 is irradiated with X-rays in order to identify the target object 200 left behind in the body of the subject 101 after surgery.
- In step 502, the emitted X-rays are detected.
- In step 503, an X-ray image 10 is generated based on the detected X-ray detection signal.
- In step 504, the output layer image 11 and the intermediate layer image 12 are generated by inputting the X-ray image 10 to the trained model 52.
- In step 505, a colored image 13 is generated based on the generated output layer image 11.
- the colored image 13 is generated by coloring the portion corresponding to the target object 200 in the output layer image 11.
- In step 506, the colored image 13 is superimposed on the X-ray image 10 to generate the colored superimposed image 14. Further, the intermediate layer superimposed image 15 is generated by superimposing the intermediate layer image 12 on the X-ray image 10.
- In step 507, at least one of the output layer image 11, the intermediate layer image 12, the colored superimposed image 14, and the intermediate layer superimposed image 15 is displayed on the display unit 4 together with the X-ray image 10, simultaneously or switchably.
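The flow of steps 501 to 507 can be summarized as a linear pipeline: detection signal → X-ray image → trained-model inference → coloring → superimposition → display. The sketch below wires stub functions together in that order; every function body is a placeholder, since neither the signal processing nor the trained model 52 is specified at this level of the description.

```python
def generate_xray_image(detection_signal):
    # Step 503: turn the detector signal into an image (stub: identity).
    return detection_signal

def run_trained_model(xray_image):
    # Step 504: the trained model yields an output layer image and an
    # intermediate layer image (stubs: thresholded / scaled copies).
    output_layer = [[1 if px > 0.8 else 0 for px in row] for row in xray_image]
    intermediate_layer = [[px * 0.5 for px in row] for row in xray_image]
    return output_layer, intermediate_layer

def colorize(output_layer):
    # Step 505: color the pixels corresponding to the target object.
    return [["red" if px else None for px in row] for row in output_layer]

def overlay_on_xray(xray_image, overlay):
    # Step 506: pair each X-ray pixel with its overlay value.
    return [list(zip(rx, ro)) for rx, ro in zip(xray_image, overlay)]

# Steps 501-507 in order, on a tiny fake detector signal.
signal = [[0.2, 0.9], [0.9, 0.1]]
xray = generate_xray_image(signal)
out_img, mid_img = run_trained_model(xray)
colored = colorize(out_img)
colored_superimposed = overlay_on_xray(xray, colored)
```

The real apparatus would replace each stub (detector readout, trained model 52, heat-map coloring), but the data flow between the units is the one shown.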
- in the X-ray imaging apparatus 100 of the present embodiment, when the X-ray image 10 is input, at least one of the output images (11, 12) of the trained model 52, in which the position of the target object 200 is emphasized, and the composite images (14, 15), generated based on the output images (11, 12) and the X-ray image 10 so that the position of the target object 200 is emphasized, is generated as an enhanced image, based on the trained model 52 for detecting the region of the target object 200 inside the body of the subject 101 in the X-ray image 10. At least one of the output images (11, 12) and the composite images (14, 15) is then displayed on the display unit 4 together with the X-ray image 10, simultaneously or switchably.
- At least one of the output images (11, 12) and the composite images (14, 15) emphasizes the position of the target object 200 based on the trained model 52 that directly detects the region of the target object 200. Therefore, unlike the case where an enhanced image is generated by edge enhancement processing, in which both the target object and human body structures such as the bones of the subject are emphasized, emphasis of human body structures such as the bones of the subject 101 in the enhanced image can be suppressed. As a result, when the target object 200 included in the X-ray image 10 of the subject 101 is emphasized, the target object 200 included in the X-ray image 10 can be easily confirmed.
- in addition, since the enhanced image and the X-ray image 10 can be compared with each other, the target object 200 included in the X-ray image 10 can be easily confirmed. Further, since the final judgment by a doctor or the like on the presence or absence of the target object 200 in the body of the subject 101 is made based on the X-ray image 10, it is very effective to be able to easily confirm the target object 200 included in the X-ray image 10 by comparing the X-ray image 10 with the enhanced image.
- consider, as a comparative example, a case in which a trained model that removes the target object 200 from the X-ray image 10 is generated, a removed image is generated by removing the target object 200 from the X-ray image 10 using that trained model, and an enhanced image is generated from the difference between the X-ray image 10 and the removed image.
- in such a case, when the enhanced image is generated from the difference between the X-ray image 10 and the removed image, structures of the subject 101 similar to the target object 200, such as the pelvis and femur, may also be removed, so the visibility of the target object 200 in the enhanced image decreases and the target object 200 included in the X-ray image 10 is difficult to confirm.
- in the present embodiment, by contrast, no removed image generated by a trained model that removes the target object 200 from the X-ray image 10, and no enhanced image generated from the difference between the X-ray image 10 and such a removed image, are used; since the target object 200 included in the X-ray image 10 of the subject 101 is emphasized by image processing using the trained model 52, the target object 200 included in the X-ray image 10 can be easily confirmed.
- the enhanced image generation unit 61 is configured to generate, as the composite image, a superimposed image (14, 15) in which the output image (12), or the image (13) generated based on the output image (11), is superimposed on the X-ray image 10, and the image output unit 62 is configured to display the superimposed images (14, 15) and the X-ray image 10 on the display unit 4 simultaneously or switchably. With this configuration, the target object 200 included in the X-ray image 10 can be confirmed by comparing the superimposed images (14, 15), whose positional relationship to the X-ray image 10 is easy to grasp, with the X-ray image 10. Therefore, the target object 200 included in the X-ray image 10 can be confirmed more easily.
- the enhanced image generation unit 61 generates a colored image 13 by coloring, based on the output image (11), the portion corresponding to the target object 200 in the output image (11), and generates a superimposed image (14) in which the generated colored image 13 is superimposed on the X-ray image 10. The image output unit 62 displays the superimposed image (14), in which the colored image 13 is superimposed on the X-ray image 10, and the X-ray image 10 on the display unit 4 simultaneously or switchably.
- the target object 200 included in the X-ray image 10 can be confirmed by comparing the superimposed image (14) in which the colored image 13 is superimposed on the X-ray image 10 and the X-ray image 10. Therefore, the position of the target object 200 included in the X-ray image 10 can be intuitively and easily grasped based on the color of the superimposed image (14).
- the enhanced image generation unit 61 is configured to identify the linear structure of the target object 200 in the output image (11) based on the output image (11), acquire the density in a predetermined region including the portion where the identified linear structure of the target object 200 is arranged, and generate the colored image 13 as a heat map image colored so as to change according to that density.
- the image output unit 62 is configured to display the superimposed image (14), on which the colored image 13 as the heat map image is superimposed, and the X-ray image 10 on the display unit 4 simultaneously or switchably.
- since the superimposed image (14) is colored according to the density in the predetermined region including the portion where the linear structure of the target object 200 is arranged, a superimposed image (14) in which the high-density portions are emphasized can be generated.
- because the density of the linear structure is high in the portion corresponding to the target object 200, the portion corresponding to the target object 200 in the superimposed image (14) can be emphasized by the coloring.
- as a result, since the target object 200 included in the X-ray image 10 can be confirmed by comparison, the position of the target object 200 included in the X-ray image 10 can be grasped even more intuitively and easily based on the color of the superimposed image (14) on which the colored image 13 as a heat map image is superimposed.
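A plausible realization of the density-based heat map: count, for each pixel, how much of the detected linear structure falls inside a surrounding window, then map that local density onto a color scale. The window radius and the green-to-red colormap below are assumptions for illustration; the patent leaves these parameters open.

```python
import numpy as np

def local_density(mask, radius=1):
    """Fraction of linear-structure pixels inside a (2r+1)^2 window around each pixel."""
    h, w = mask.shape
    padded = np.pad(mask.astype(float), radius)
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            window = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            out[i, j] = window.mean()
    return out

def heatmap_color(density):
    """Map density in [0, 1] to RGB: low -> green, high -> red (assumed colormap)."""
    r = density
    g = 1.0 - density
    b = np.zeros_like(density)
    return np.stack([r, g, b], axis=-1)

mask = np.zeros((5, 5), dtype=int)
mask[2, :] = 1  # a horizontal "linear structure" such as a gauze thread
density = local_density(mask)
colors = heatmap_color(density)
```

Dense regions of the linear structure come out redder, which is the behavior the description attributes to the portion corresponding to the target object 200.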
- the enhanced image generation unit 61 is configured to generate, as an output image, the intermediate layer image 12, which is output from the intermediate layer 52b of the trained model 52 and in which the target object 200 in the X-ray image 10 is emphasized, and to generate a superimposed image (15) in which the intermediate layer image 12 is superimposed on the X-ray image 10. The image output unit 62 is configured to display the superimposed image (15), in which the intermediate layer image 12 is superimposed on the X-ray image 10, and the X-ray image 10 on the display unit 4 simultaneously or switchably.
- since the superimposed image (15), in which the intermediate layer image 12 that makes the shape of the target object 200 easy to grasp is superimposed on the X-ray image 10, can be visually compared with the X-ray image 10, the target object 200 included in the X-ray image 10 can be confirmed more easily based on the shape and position of the target object 200 in the superimposed image (15).
- the enhanced image generation unit 61 generates, as output images, the output layer image 11, which is output from the output layer 52c of the trained model 52 and represents the region of the target object 200 in the X-ray image 10, and the intermediate layer image 12, which is output from the intermediate layer 52b of the trained model 52 and in which the target object 200 in the X-ray image 10 is emphasized. The image output unit 62 is configured to cause the display unit 4 to display at least one of the output layer image 11 and the intermediate layer image 12, together with the X-ray image 10, simultaneously or switchably.
- since the target object 200 included in the X-ray image 10 can be confirmed by comparison with at least one of the output layer image 11 and the intermediate layer image 12, from which the shape of the target object 200 is easy to grasp, the target object 200 included in the X-ray image 10 can be confirmed more easily.
- the X-ray imaging apparatus 100 is an X-ray imaging apparatus for rounds, but the present invention is not limited to this.
- the X-ray imaging apparatus 100 may be a general X-ray imaging apparatus installed in an X-ray imaging room.
- in the above embodiment, the enhanced image generation unit 61 (control unit 6) is configured to generate all of the output layer image 11, the intermediate layer image 12, the colored superimposed image 14, and the intermediate layer superimposed image 15, but the present invention is not limited to this.
- the enhanced image generator 61 may generate at least one of the output layer image 11 , the intermediate layer image 12 , the colored superimposed image 14 , and the intermediate layer superimposed image 15 .
- the enhanced image generation unit 61 (control unit 6) is configured to generate the colored image 13 based on the output layer image 11, but the present invention is not limited to this. For example, the colored image 13 may be generated based on the intermediate layer image 12.
- the enhanced image generation unit 61 (control unit 6) identifies the linear structure of the target object 200 in the output layer image 11 and performs coloring based on the identified linear structure, but the present invention is not limited to this.
- the colored image 13 may be generated by coloring based on the pixel values of the output layer image 11 instead of pattern recognition of linear structures (shapes). That is, the colored image 13 may be colored based on the density of pixel values in a predetermined region of the output layer image 11.
- the enhanced image generation unit 61 (control unit 6) generates the colored image 13 as a heat map image colored so as to change according to the density in a predetermined region including the portion where the linear structure of the target object 200 is arranged, but the present invention is not limited to this. For example, instead of varying the color continuously, a threshold may be set for the density (detection value) of the linear structure, and a colored image 13 may be generated in which the portions (regions) exceeding the threshold are colored with a single color.
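The single-color threshold variant described in this modification reduces to one comparison per pixel. A sketch with an assumed threshold value and highlight color:

```python
import numpy as np

def single_color_mask(detection_values, threshold=0.5, color=(0.0, 1.0, 0.0)):
    """Color every pixel whose detection value exceeds the threshold with one color.

    threshold and color are assumed parameters; the patent only requires
    that regions above some threshold receive a single color.
    """
    over = detection_values > threshold
    colored = np.zeros(detection_values.shape + (3,))
    colored[over] = color  # boolean-mask assignment fills the RGB triple
    return colored

values = np.array([[0.2, 0.7], [0.9, 0.4]])
colored = single_color_mask(values)
```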
- the emphasized image generation unit 61 (control unit 6) is configured to generate the intermediate layer superimposed image 15 in which the intermediate layer image 12 is superimposed on the X-ray image 10.
- the present invention is not limited to this.
- an output layer superimposed image in which the output layer image 11 is superimposed on the X-ray image 10 may be generated.
- the target object 200 includes surgical gauze, suture needles, and forceps, but the present invention is not limited to this.
- the target object 200 may include bolts, surgical wires, surgical clips, and the like.
- the X-ray imaging apparatus 100 has the display unit 4 that displays the image output by the image output unit 62 (control unit 6), but the present invention is not limited to this.
- the images output by the image output unit 62, such as the X-ray image 10, the output layer image 11, the intermediate layer image 12, the colored superimposed image 14, and the intermediate layer superimposed image 15, may be displayed on an external display device provided separately from the X-ray imaging apparatus 100.
- in the above embodiment, an example was shown in which the control unit 6 as an image processing apparatus is provided in the X-ray imaging apparatus 100, but the present invention is not limited to this.
- an image processing apparatus provided separately from the X-ray imaging apparatus 100 may generate the enhanced images such as the output layer image 11, the intermediate layer image 12, the colored superimposed image 14, and the intermediate layer superimposed image 15.
- the enhanced image generation unit 61 (control unit 6) generates the colored image 13 so that the density (detection value) of the linear structure (shape) can be identified by differences in hue, but the present invention is not limited to this.
- for example, the colored image 13 may be generated so that the density (detection value) of the linear structure (shape) can be identified by differences in luminance. That is, the colored image 13 may be generated so that the luminance increases where the detection value is large and decreases where the detection value is small.
- one common controller (hardware) may generate the X-ray image 10 and the enhanced image.
- in the above embodiment, the enhanced image generation unit 61 and the image output unit 62 are configured as functional blocks (software) in one piece of hardware (control unit 6), but the present invention is not limited to this.
- the emphasized image generator 61 and the image output unit 62 may be configured by separate hardware (arithmetic circuits).
- in the above embodiment, an example was shown in which the trained model 52 is generated by the learning device 300 that is separate from the X-ray imaging apparatus 100, but the present invention is not limited to this.
- the learned model 52 may be generated by the X-ray imaging apparatus 100 .
- in the above embodiment, the trained model 52 is generated based on U-Net, which is one type of fully convolutional network (FCN), but the present invention is not limited to this.
- the trained model 52 may be generated based on a CNN (Convolutional Neural Network) including fully connected layers.
- the trained model 52 may be generated based on an Encoder-Decoder model other than U-Net, such as SegNet or PSPNet.
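U-Net, SegNet, and PSPNet mentioned above are all encoder-decoder architectures: the input is downsampled to a compact representation, upsampled back, and (in U-Net) same-resolution encoder features are merged back in via skip connections. The NumPy sketch below shows only that wiring at the shape level, with average pooling and nearest-neighbor upsampling standing in for the learned convolutions; it is an assumption-laden illustration, not the trained model 52 itself.

```python
import numpy as np

def downsample(x):
    """2x2 average pooling (stands in for conv + pool in the encoder)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(x):
    """Nearest-neighbor 2x upsampling (stands in for the decoder's up-convolution)."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def unet_like(x):
    """Encoder -> bottleneck -> decoder with one skip connection."""
    enc = downsample(x)           # encoder feature at half resolution
    bottleneck = downsample(enc)  # compact representation
    up1 = upsample(bottleneck)    # back to the encoder resolution
    skip = np.stack([up1, enc])   # skip connection: stack same-resolution features
    merged = skip.mean(axis=0)    # stands in for a conv over concatenated channels
    return upsample(merged)       # back to the input resolution

x = np.arange(16.0).reshape(4, 4)
y = unet_like(x)
```

The skip connection is what lets the decoder recover fine spatial detail (e.g. the thin linear structure of gauze threads) that the bottleneck alone would blur away.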
- an X-ray irradiation unit that irradiates an object with X-rays
- an X-ray detection unit that detects X-rays emitted from the X-ray irradiation unit
- an X-ray image generation unit that generates an X-ray image based on an X-ray detection signal detected by the X-ray detection unit
- a control unit, wherein the control unit includes: an enhanced image generation unit that, when the X-ray image generated by the X-ray image generation unit is input, generates as an enhanced image, based on a trained model for detecting a region of a target object inside the body of the subject in the X-ray image, at least one of an output image of the trained model in which the position of the target object is emphasized and a composite image generated based on the output image and the X-ray image so that the position of the target object is emphasized
- an image output unit that causes a display unit to display at least one of the output image and the synthesized image generated by the enhanced image generation unit and the X-ray image simultaneously or switchably.
- the enhanced image generation unit is configured to generate, as the composite image, a superimposed image in which the output image or an image generated based on the output image is superimposed on the X-ray image,
- The X-ray imaging apparatus according to Item 2, wherein the enhanced image generation unit generates a colored image by coloring, based on the output image, a portion corresponding to the target object in the output image, and is configured to generate the superimposed image in which the generated colored image is superimposed on the X-ray image; and the image output unit is configured to display the superimposed image, in which the colored image is superimposed on the X-ray image, and the X-ray image on the display unit simultaneously or switchably.
- The X-ray imaging apparatus according to Item 3, wherein the enhanced image generation unit identifies a linear structure of the target object in the output image based on the output image, acquires the density in a predetermined region including a portion where the identified linear structure of the target object is arranged, and generates the colored image as a heat map image colored so as to vary according to the density; and the image output unit is configured to display the superimposed image, on which the colored image as the heat map image is superimposed, and the X-ray image on the display unit simultaneously or switchably.
- The X-ray imaging apparatus according to any one of Items 2 to 4, wherein the enhanced image generation unit is configured to generate the superimposed image in which an intermediate layer image, output from the intermediate layer of the trained model and in which the target object in the X-ray image is emphasized, is superimposed on the X-ray image as the output image; and the image output unit is configured to display the superimposed image, in which the intermediate layer image is superimposed on the X-ray image, and the X-ray image on the display unit simultaneously or switchably.
- The X-ray imaging apparatus according to any one of Items 1 to 5, wherein the enhanced image generation unit is configured to generate, as the output image, at least one of an output layer image, output from the output layer of the trained model and representing the region of the target object in the X-ray image, and an intermediate layer image, output from the intermediate layer of the trained model and in which the target object in the X-ray image is emphasized; and the image output unit is configured to display at least one of the output layer image and the intermediate layer image, together with the X-ray image, on the display unit simultaneously or switchably.
- An image processing apparatus comprising: an enhanced image generation unit that, when an X-ray image generated based on a detection signal of X-rays applied to a subject is input, generates as an enhanced image, based on a trained model for detecting a region of a target object inside the body of the subject in the X-ray image, at least one of an output image of the trained model in which the position of the target object is emphasized and a composite image generated based on the output image and the X-ray image so that the position of the target object is emphasized; and an image output unit that causes a display unit to display at least one of the output image and the composite image generated by the enhanced image generation unit, together with the X-ray image, simultaneously or switchably.
1 X-ray irradiation unit
2 X-ray detection unit
3 X-ray image generation unit
4 Display unit
6 Control unit (image processing apparatus, computer)
10 X-ray image
11 Output layer image (enhanced image, output image)
12 Intermediate layer image (enhanced image, output image)
13 Colored image
14 Colored superimposed image (enhanced image, composite image, superimposed image)
15 Intermediate layer superimposed image (enhanced image, composite image, superimposed image)
51 Image processing program
52 Trained model
52b Intermediate layer
52c Output layer
61 Enhanced image generation unit
62 Image output unit
100 X-ray imaging apparatus
101 Subject
200 Target object
Abstract
Description
An X-ray imaging apparatus 100 according to one embodiment of the present invention will be described with reference to FIGS. 1 to 11.
As shown in FIG. 2, the X-ray imaging apparatus 100 includes an X-ray irradiation unit 1, an X-ray detection unit 2, an X-ray image generation unit 3, a display unit 4, a storage unit 5, and a control unit 6. The control unit 6 is an example of the "image processing apparatus" and the "computer" in the claims.
Here, in the present embodiment, as shown in FIG. 4, when the X-ray image 10 generated by the X-ray image generation unit 3 is input, the enhanced image generation unit 61 (control unit 6) generates, as enhanced images, based on the trained model 52 that detects the region of the target object 200 of the subject 101 in the X-ray image 10, the output images (11, 12) of the trained model 52 in which the position of the target object 200 is emphasized, and the composite images (14, 15) generated based on the output images (11, 12) and the X-ray image 10 so that the position of the target object 200 is emphasized. Then, the image output unit 62 (control unit 6) causes the display unit 4 to display at least one of the output images (11, 12) and the composite images (14, 15) generated by the enhanced image generation unit 61, together with the X-ray image 10, simultaneously or switchably.
In the present embodiment, as shown in FIGS. 4 to 6, the enhanced image generation unit 61 (control unit 6) generates the output layer image 11 and the intermediate layer image 12 as the output images of the trained model 52, based on the trained model 52 generated by machine learning. The trained model 52 is generated by machine learning using deep learning. The trained model 52 is generated, for example, based on U-Net, which is one type of fully convolutional network (FCN). The trained model 52 is generated by training it to perform image conversion (image reconstruction) that detects the portions estimated to be the target object 200 in the X-ray image 10 by converting, among the pixels of the input X-ray image 10, the pixels estimated to be the target object 200. The output layer image 11 and the intermediate layer image 12 are examples of the "enhanced image" and the "output image" in the claims.
As shown in FIG. 7, in the present embodiment, the trained model 52 is generated by machine learning so as to detect target objects 200 such as surgical gauze, suture needles, and forceps from the X-ray image 10. The trained model 52 is generated in advance by a learning device 300 separate from the X-ray imaging apparatus 100. The learning device 300 is, for example, a computer including a CPU, a GPU, a ROM, a RAM, and the like. The learning device 300 generates the trained model 52 by machine learning using deep learning, with a plurality of teacher input X-ray images 310 and a plurality of teacher output images 320 as teacher data (training sets).
In the present embodiment, as shown in FIGS. 4, 8, and 9, the enhanced image generation unit 61 (control unit 6) generates, as a composite image, based on the X-ray image 10 and the output layer image 11 of the trained model 52, a colored superimposed image 14 in which the colored image 13 generated based on the output layer image 11 is superimposed on the X-ray image 10. Specifically, the enhanced image generation unit 61 generates the colored image 13 by coloring, based on the output layer image 11, the portion corresponding to the target object 200 in the output layer image 11. The enhanced image generation unit 61 then generates the colored superimposed image 14 in which the generated colored image 13 is superimposed on the X-ray image 10. The colored image 13 is an example of the "image generated based on the output image" in the claims. The colored superimposed image 14 is an example of the "enhanced image", the "composite image", and the "superimposed image" in the claims.
As shown in FIG. 11, in the present embodiment, the image output unit 62 (control unit 6) causes the display unit 4 to display at least one of the output layer image 11, the intermediate layer image 12, the colored superimposed image 14, and the intermediate layer superimposed image 15, together with the X-ray image 10, simultaneously or switchably.
Next, a control processing flow relating to the image processing method according to the present embodiment will be described with reference to FIG. 12. Steps 501 to 503 indicate control processing by the X-ray image generation unit 3, and steps 504 to 507 indicate control processing by the control unit 6.
In the present embodiment, the following effects can be obtained.
The embodiments disclosed herein should be considered illustrative in all respects and not restrictive. The scope of the present invention is indicated by the claims rather than by the description of the embodiments above, and includes all changes (modifications) within the meaning and scope equivalent to the claims.
It will be understood by those skilled in the art that the exemplary embodiments described above are specific examples of the following aspects.
(Item 1)
An X-ray imaging apparatus comprising:
an X-ray irradiation unit that irradiates a subject with X-rays;
an X-ray detection unit that detects the X-rays emitted from the X-ray irradiation unit;
an X-ray image generation unit that generates an X-ray image based on a detection signal of the X-rays detected by the X-ray detection unit; and
a control unit,
the control unit including:
an enhanced image generation unit that, when the X-ray image generated by the X-ray image generation unit is input, generates as an enhanced image, based on a trained model for detecting a region of a target object inside the body of the subject in the X-ray image, at least one of an output image of the trained model in which the position of the target object is emphasized and a composite image generated based on the output image and the X-ray image so that the position of the target object is emphasized; and
an image output unit that causes a display unit to display at least one of the output image and the composite image generated by the enhanced image generation unit, together with the X-ray image, simultaneously or switchably.
(Item 2)
The X-ray imaging apparatus according to Item 1, wherein:
the enhanced image generation unit is configured to generate, as the composite image, a superimposed image in which the output image or an image generated based on the output image is superimposed on the X-ray image; and
the image output unit is configured to display the superimposed image and the X-ray image on the display unit simultaneously or switchably.
(Item 3)
The X-ray imaging apparatus according to Item 2, wherein:
the enhanced image generation unit is configured to generate a colored image by coloring, based on the output image, a portion corresponding to the target object in the output image, and to generate the superimposed image in which the generated colored image is superimposed on the X-ray image; and
the image output unit is configured to display the superimposed image, in which the colored image is superimposed on the X-ray image, and the X-ray image on the display unit simultaneously or switchably.
(Item 4)
The X-ray imaging apparatus according to Item 3, wherein:
the enhanced image generation unit is configured to identify a linear structure of the target object in the output image based on the output image, acquire the density in a predetermined region including a portion where the identified linear structure of the target object is arranged, and generate the colored image as a heat map image colored so as to change according to the density; and
the image output unit is configured to display the superimposed image, on which the colored image as the heat map image is superimposed, and the X-ray image on the display unit simultaneously or switchably.
(Item 5)
The X-ray imaging apparatus according to any one of Items 2 to 4, wherein:
the enhanced image generation unit is configured to generate the superimposed image in which an intermediate layer image, which is output from an intermediate layer of the trained model and in which the target object in the X-ray image is emphasized, is superimposed on the X-ray image as the output image; and
the image output unit is configured to display the superimposed image, in which the intermediate layer image is superimposed on the X-ray image, and the X-ray image on the display unit simultaneously or switchably.
(Item 6)
The X-ray imaging apparatus according to any one of Items 1 to 5, wherein:
the enhanced image generation unit is configured to generate, as the output image, at least one of an output layer image, which is output from an output layer of the trained model and represents the region of the target object in the X-ray image, and an intermediate layer image, which is output from an intermediate layer of the trained model and in which the target object in the X-ray image is emphasized; and
the image output unit is configured to display at least one of the output layer image and the intermediate layer image, together with the X-ray image, on the display unit simultaneously or switchably.
(Item 7)
An image processing apparatus comprising:
an enhanced image generation unit that, when an X-ray image generated based on a detection signal of X-rays applied to a subject is input, generates as an enhanced image, based on a trained model for detecting a region of a target object inside the body of the subject in the X-ray image, at least one of an output image of the trained model in which the position of the target object is emphasized and a composite image generated based on the output image and the X-ray image so that the position of the target object is emphasized; and
an image output unit that causes a display unit to display at least one of the output image and the composite image generated by the enhanced image generation unit, together with the X-ray image, simultaneously or switchably.
(Item 8)
An image processing program that causes a computer to execute:
a process of, when an X-ray image generated based on a detection signal of X-rays applied to a subject is input, generating as an enhanced image, based on a trained model for detecting a region of a target object inside the body of the subject in the X-ray image, at least one of an output image of the trained model in which the position of the target object is emphasized and a composite image generated based on the output image and the X-ray image so that the position of the target object is emphasized; and
a process of displaying at least one of the output image and the composite image, together with the X-ray image, on a display unit simultaneously or switchably.
1 X-ray irradiation unit
2 X-ray detection unit
3 X-ray image generation unit
4 Display unit
6 Control unit (image processing apparatus, computer)
10 X-ray image
11 Output layer image (enhanced image, output image)
12 Intermediate layer image (enhanced image, output image)
13 Colored image
14 Colored superimposed image (enhanced image, composite image, superimposed image)
15 Intermediate layer superimposed image (enhanced image, composite image, superimposed image)
51 Image processing program
52 Trained model
52b Intermediate layer
52c Output layer
61 Enhanced image generation unit
62 Image output unit
100 X-ray imaging apparatus
101 Subject
200 Target object
Claims (8)
- An X-ray imaging apparatus comprising:
an X-ray irradiation unit that irradiates a subject with X-rays;
an X-ray detection unit that detects the X-rays emitted from the X-ray irradiation unit;
an X-ray image generation unit that generates an X-ray image based on a detection signal of the X-rays detected by the X-ray detection unit; and
a control unit,
wherein the control unit includes:
an enhanced image generation unit that, based on a trained model that detects a region of a target object inside the body of the subject in the X-ray image when the X-ray image generated by the X-ray image generation unit is input, generates, as an enhanced image, at least one of an output image of the trained model in which the position of the target object is emphasized and a composite image generated based on the output image and the X-ray image such that the position of the target object is emphasized; and
an image output unit that causes a display unit to display at least one of the output image and the composite image generated by the enhanced image generation unit, together with the X-ray image, simultaneously or switchably.
- The X-ray imaging apparatus according to claim 1, wherein
the enhanced image generation unit is configured to generate, as the composite image, a superimposed image in which the output image, or an image generated based on the output image, is superimposed on the X-ray image, and
the image output unit is configured to cause the display unit to display the superimposed image and the X-ray image simultaneously or switchably.
- The X-ray imaging apparatus according to claim 2, wherein
the enhanced image generation unit is configured to generate a colored image by coloring, based on the output image, a portion of the output image corresponding to the target object, and to generate the superimposed image in which the generated colored image is superimposed on the X-ray image, and
the image output unit is configured to cause the display unit to display the superimposed image in which the colored image is superimposed on the X-ray image, and the X-ray image, simultaneously or switchably.
- The X-ray imaging apparatus according to claim 3, wherein
the enhanced image generation unit is configured to identify, based on the output image, a linear structure of the target object in the output image, to acquire a density in a predetermined region including a portion where the identified linear structure of the target object is located, and to generate the colored image as a heat map image colored so as to vary with the density, and
the image output unit is configured to cause the display unit to display the superimposed image on which the colored image as the heat map image is superimposed, and the X-ray image, simultaneously or switchably.
- The X-ray imaging apparatus according to claim 2, wherein
the enhanced image generation unit is configured to generate the superimposed image in which an intermediate-layer image, which is output from an intermediate layer of the trained model and serves as the output image in which the target object in the X-ray image is emphasized, is superimposed on the X-ray image, and
the image output unit is configured to cause the display unit to display the superimposed image in which the intermediate-layer image is superimposed on the X-ray image, and the X-ray image, simultaneously or switchably.
- The X-ray imaging apparatus according to claim 1, wherein
the enhanced image generation unit is configured to generate, as the output image, at least one of an output-layer image that is output from an output layer of the trained model and represents the region of the target object in the X-ray image, and an intermediate-layer image that is output from an intermediate layer of the trained model and in which the target object in the X-ray image is emphasized, and
the image output unit is configured to cause the display unit to display at least one of the output-layer image and the intermediate-layer image, together with the X-ray image, simultaneously or switchably.
- An image processing apparatus comprising:
an enhanced image generation unit that, based on a trained model that detects a region of a target object inside the body of a subject in an X-ray image when the X-ray image, generated based on a detection signal of X-rays irradiated to the subject, is input, generates, as an enhanced image, at least one of an output image of the trained model in which the position of the target object is emphasized and a composite image generated based on the output image and the X-ray image such that the position of the target object is emphasized; and
an image output unit that causes a display unit to display at least one of the output image and the composite image generated by the enhanced image generation unit, together with the X-ray image, simultaneously or switchably.
- An image processing program causing a computer to execute:
a process of generating, based on a trained model that detects a region of a target object inside the body of a subject in an X-ray image when the X-ray image, generated based on a detection signal of X-rays irradiated to the subject, is input, at least one of an output image of the trained model in which the position of the target object is emphasized and a composite image generated based on the output image and the X-ray image such that the position of the target object is emphasized, as an enhanced image; and
a process of causing a display unit to display at least one of the output image and the composite image, together with the X-ray image, simultaneously or switchably.
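The coloring and superimposition recited in claims 3 and 4 can be illustrated with a minimal sketch: given a grayscale X-ray image and a binary detection mask (standing in for the trained model's output-layer image), a local-density map over the mask is computed and blended onto the X-ray as a red tint. The function names, the box-window density estimate, and the red color choice are illustrative assumptions, not specified in the publication.

```python
import numpy as np

def local_density(mask: np.ndarray, window: int = 5) -> np.ndarray:
    """Fraction of detected pixels in a window x window neighborhood (0..1)."""
    pad = window // 2
    m = np.pad(mask.astype(np.float64), pad)  # zero-pad the borders
    # Summed-area table with a leading zero row/column for O(1) window sums.
    sat = np.zeros((m.shape[0] + 1, m.shape[1] + 1))
    sat[1:, 1:] = m.cumsum(0).cumsum(1)
    h, w = mask.shape
    s = (sat[window:window + h, window:window + w]
         - sat[:h, window:window + w]
         - sat[window:window + h, :w]
         + sat[:h, :w])
    return s / (window * window)

def colored_overlay(xray: np.ndarray, mask: np.ndarray,
                    alpha: float = 0.6) -> np.ndarray:
    """Superimpose a density-driven red heat map on a grayscale X-ray image."""
    heat = local_density(mask)
    rgb = np.repeat(xray[..., None], 3, axis=-1).astype(np.float64)
    w = alpha * heat                                   # per-pixel blend weight
    rgb[..., 0] = (1 - w) * rgb[..., 0] + w * 255.0    # push red channel up
    rgb[..., 1] *= (1 - w)                             # dim green and blue so
    rgb[..., 2] *= (1 - w)                             # the tint stands out
    return rgb.astype(np.uint8)
```

The overlaid image and the unmodified X-ray can then be shown side by side or toggled, matching the "simultaneously or switchably" display recited in the claims.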
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2023512817A JPWO2022215303A1 (ja) | 2021-04-07 | 2021-12-27 | |
| EP21936100.3A EP4321098A1 (en) | 2021-04-07 | 2021-12-27 | Image processing device, image processing method, and image processing program |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2021065057 | 2021-04-07 | | |
| JP2021-065057 | 2021-04-07 | | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2022215303A1 (ja) | 2022-10-13 |
Family
ID=83546304
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2021/048529 WO2022215303A1 (ja) | X-ray imaging apparatus, image processing apparatus, and image processing program | | |
Country Status (3)
| Country | Link |
|---|---|
| EP (1) | EP4321098A1 (ja) |
| JP (1) | JPWO2022215303A1 (ja) |
| WO (1) | WO2022215303A1 (ja) |
Citations (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2016034451A (ja) * | 2014-08-04 | 2016-03-17 | Toshiba Corporation | X-ray diagnostic apparatus |
| WO2019138438A1 (ja) * | 2018-01-09 | 2019-07-18 | Shimadzu Corporation | Image creation device |
| JP2019180605A (ja) | 2018-04-04 | 2019-10-24 | Canon Inc. | Information processing apparatus, radiographic imaging apparatus, radiographic imaging system, information processing method, and program |
| WO2019229119A1 (en) * | 2018-05-29 | 2019-12-05 | Koninklijke Philips N.V. | Deep anomaly detection |
| JP2020036773A (ja) * | 2018-09-05 | 2020-03-12 | Konica Minolta, Inc. | Image processing apparatus, image processing method, and program |
| JP2021013685A (ja) * | 2019-07-16 | 2021-02-12 | Fujifilm Corporation | Radiographic image processing apparatus, method, and program |
| JP2021049111A (ja) * | 2019-09-25 | 2021-04-01 | Fujifilm Corporation | Radiographic image processing apparatus, method, and program |
2021
- 2021-12-27 EP EP21936100.3A patent/EP4321098A1/en active Pending
- 2021-12-27 WO PCT/JP2021/048529 patent/WO2022215303A1/ja active Application Filing
- 2021-12-27 JP JP2023512817A patent/JPWO2022215303A1/ja active Pending
Also Published As
| Publication number | Publication date |
|---|---|
| EP4321098A1 (en) | 2024-02-14 |
| JPWO2022215303A1 (ja) | 2022-10-13 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| KR102094737B1 (ko) | | Medical image processing apparatus, medical image processing method, computer-readable medical image processing program, moving object tracking apparatus, and radiation therapy system |
| US10244991B2 (en) | | Method and system for providing recommendation for optimal execution of surgical procedures |
| US20200035350A1 (en) | | Method and apparatus for processing histological image captured by medical imaging device |
| WO2023103467A1 (zh) | | Image processing method, apparatus, and device |
| EP2386999A2 (en) | | Image processing apparatus, image processing method, and image processing program |
| TW200530894A (en) | | Method and system for intelligent qualitative and quantitative analysis of digital radiography softcopy reading |
| JP2021013685A (ja) | | Radiographic image processing apparatus, method, and program |
| KR20200137178A (ko) | | Medical image processing method and apparatus using machine learning |
| US8155480B2 (en) | | Image processing apparatus and image processing method |
| WO2020174747A1 (ja) | | Medical image processing apparatus, processor device, endoscope system, medical image processing method, and program |
| WO2023095492A1 (ja) | | Surgery support system, surgery support method, and surgery support program |
| JPH0877329A (ja) | | Display device for time-series processed images |
| US20200243184A1 (en) | | Medical image processing apparatus, medical image processing method, and system |
| US11069060B2 (en) | | Image processing apparatus and radiographic image data display method |
| US20220114729A1 (en) | | X-ray imaging apparatus, image processing method, and generation method of trained model |
| WO2022215303A1 (ja) | | X-ray imaging apparatus, image processing apparatus, and image processing program |
| US20240180501A1 (en) | | X-ray imaging apparatus, image processing apparatus, and image processing program |
| JP6987342B2 (ja) | | Image processing apparatus, method, and program |
| WO2022176280A1 (ja) | | X-ray imaging apparatus, image processing apparatus, and image processing method |
| JP2022064760A (ja) | | X-ray imaging apparatus, image processing method, and method of generating a trained model |
| CN106575436B (zh) | | Contour display for visual assessment of calcified rib-cartilage joints |
| JP5324041B2 (ja) | | Ultrasonic diagnostic apparatus |
| JP6990540B2 (ja) | | Video processing apparatus, video processing method, and video processing program |
| JP2022064766A (ja) | | X-ray imaging system and foreign object checking method |
| JP7472845B2 (ja) | | X-ray imaging apparatus and image processing method |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21936100; Country of ref document: EP; Kind code of ref document: A1 |
| | WWE | Wipo information: entry into national phase | Ref document number: 2023512817; Country of ref document: JP |
| | WWE | Wipo information: entry into national phase | Ref document number: 18285144; Country of ref document: US |
| | WWE | Wipo information: entry into national phase | Ref document number: 2021936100; Country of ref document: EP |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | ENP | Entry into the national phase | Ref document number: 2021936100; Country of ref document: EP; Effective date: 20231107 |