WO2020240653A1 - X-ray imaging device and method for avoiding contact with obstacle for x-ray imaging device - Google Patents

X-ray imaging device and method for avoiding contact with obstacle for X-ray imaging device

Info

Publication number
WO2020240653A1
WO2020240653A1 (PCT/JP2019/020873; JP2019020873W)
Authority
WO
WIPO (PCT)
Prior art keywords
X-ray imaging
main body
imaging apparatus
image
Prior art date
Application number
PCT/JP2019/020873
Other languages
French (fr)
Japanese (ja)
Inventor
和俊 谷
真二 浜崎
淳平 坂口
健 代田
皓史 奥村
Original Assignee
Shimadzu Corporation (株式会社島津製作所)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shimadzu Corporation (株式会社島津製作所)
Priority to PCT/JP2019/020873 (WO2020240653A1)
Priority to JP2021521585A (JP7173321B2)
Publication of WO2020240653A1

Links

Images

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00: Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
    • A61B 6/10: Application or adaptation of safety means

Definitions

  • The present invention relates to an X-ray imaging apparatus and an obstacle contact avoidance method for an X-ray imaging apparatus.
  • Conventionally, X-ray imaging apparatuses and obstacle contact avoidance methods for X-ray imaging apparatuses have been known. Such an apparatus and method are disclosed, for example, in Japanese Patent Application Laid-Open No. 2012-205681.
  • Japanese Patent Application Laid-Open No. 2012-205681 discloses an X-ray imaging apparatus including a holding device movably installed on the ceiling of a room. The holding device holds an X-ray tube or the like that emits X-rays. The X-ray imaging apparatus calculates a movement route for the holding device according to predetermined conditions, and the holding device is then moved automatically along the calculated route.
  • The X-ray imaging apparatus of JP 2012-205681 A is also provided with a camera for imaging the room, and with an obstacle position calculation unit that calculates the positions of obstacles in the room based on the camera images. The main CPU of the X-ray imaging apparatus performs control to suppress (stop or decelerate) the movement of the holding device when the distance between the holding device and an obstacle on its movement route falls to or below a set distance.
  • Although not described in JP 2012-205681 A, one conceivable way for a conventional X-ray imaging apparatus of this kind to calculate the position of an obstacle is to calculate the vertical distance from the camera to the obstacle based on a parallax image captured by the camera. In this method, the height of the obstacle is calculated from the calculated vertical distance, an obstacle whose height is equal to or greater than that of the X-ray tube or the like is judged to be an obstacle that may come into contact with it, and the movement of the holding device is controlled so that the X-ray tube or the like does not contact that obstacle.
  • However, in addition to obstacles, the parallax image may also capture the X-ray imaging apparatus main body itself, such as the X-ray tube. In that case, a method that calculates obstacle heights from the parallax image has difficulty distinguishing whether an object appearing in the parallax image is an obstacle or the apparatus main body, so the movement of the holding device (the X-ray imaging apparatus main body) cannot always be controlled appropriately to keep the X-ray tube or the like from contacting an obstacle.
  • The present invention has been made to solve the above problem, and one object of the present invention is to provide an X-ray imaging apparatus and an obstacle contact avoidance method for an X-ray imaging apparatus that can appropriately control the movement of the X-ray imaging apparatus main body so that an obstacle and the apparatus main body do not come into contact, even when the obstacle is detected based on a parallax image.
  • To achieve the above object, an X-ray imaging apparatus according to a first aspect of the present invention includes: an X-ray imaging apparatus main body configured to be movable at least in the horizontal direction; an imaging unit that acquires a plurality of visible images as two-dimensional images of the same region around the apparatus main body; an image processing unit that identifies objects to be detected, including the apparatus main body and obstacles around it, based on a parallax image generated as a three-dimensional image from the plurality of visible images acquired by the imaging unit; and a control unit that performs control for avoiding contact between the apparatus main body and an obstacle. When the apparatus main body, specified from the learning result of machine learning that identifies the apparatus main body based on a plurality of teacher visible images (two-dimensional images containing an image of the apparatus main body, given in advance as teacher data), is identified in at least one of the visible images acquired by the imaging unit, the image processing unit discriminates between the apparatus main body and the obstacles among the detected objects based on the image of the apparatus main body in the identified visible image, and calculates the horizontal distance between the discriminated apparatus main body and the obstacle. The control unit performs control for avoiding contact between the apparatus main body and the obstacle based on the horizontal distance calculated by the image processing unit.
  • An obstacle contact avoidance method for an X-ray imaging apparatus according to a second aspect includes: specifying the X-ray imaging apparatus main body by performing machine learning that identifies the apparatus main body based on a plurality of teacher visible images as two-dimensional images containing an image of the apparatus main body, given in advance as teacher data; acquiring a plurality of visible images as two-dimensional images of the same region around the apparatus main body; identifying objects to be detected, including the apparatus main body and obstacles around it, based on a parallax image generated as a three-dimensional image from the acquired visible images; when the apparatus main body specified from the learning result of the machine learning is identified in at least one of the acquired visible images, discriminating between the apparatus main body and the obstacles among the detected objects based on the image of the apparatus main body in the identified visible image; calculating the horizontal distance between the discriminated apparatus main body and the obstacle; and performing control for avoiding contact between the apparatus main body and the obstacle based on the calculated horizontal distance.
  • According to the present invention, as described above, the X-ray imaging apparatus main body and the obstacles around it are discriminated among the detected objects identified from the parallax image, based on the image of the apparatus main body in the visible image identified from the learning result of machine learning that specifies the apparatus main body using a plurality of teacher visible images given in advance as teacher data. Because the apparatus main body and the obstacles can thus be easily distinguished among the detected objects, the movement of the apparatus main body can be appropriately controlled so that it does not come into contact with an obstacle, even when the obstacle is detected based on the parallax image.
  • the X-ray imaging apparatus 100 includes an X-ray imaging apparatus main body 100a configured to be movable in the horizontal direction.
  • the X-ray imaging apparatus main body 100a is provided in the room 200. Further, the X-ray imaging apparatus main body 100a is provided so as to be suspended from the ceiling surface 201 of the room 200.
  • a rail 202 extending in the X direction is attached to the ceiling surface 201 of the room 200.
  • a rail 203 extending in the Y direction is attached to the lower portion of the rail 202 via a roller or the like.
  • the rail 203 is configured to be movable in the X direction along the rail 202.
  • the X-ray imaging apparatus main body 100a is attached to the rail 203 via a roller or the like.
  • the X-ray imaging apparatus main body 100a is configured to be movable in the Y direction along the rail 203. Therefore, the rail 202 and the rail 203 make the X-ray imaging apparatus main body 100a movable in the horizontal direction (in the XY plane).
  • the X and Y directions are orthogonal to each other.
  • the X-ray imaging apparatus main body 100a includes an X-ray generator 10 including an X-ray tube 11 that irradiates X-rays.
  • the X-ray generating unit 10 includes a collimator 12 provided in the lower part of the X-ray tube 11. Further, the X-ray generating unit 10 is provided with an operation panel 13 for manually moving the X-ray tube 11 and the collimator 12.
  • the X-ray generating unit 10 (X-ray tube 11, collimator 12, and operation panel 13) is configured to be rotatable around an axis along the horizontal direction and an axis along the vertical direction as rotation axes.
  • Since the X-ray irradiation direction of the X-ray generating unit 10 can thus be changed, both standing-position imaging and lying-position imaging can be supported.
  • the X-ray imaging apparatus main body 100a includes a holding unit 20 that holds the X-ray generating unit 10.
  • the holding unit 20 is provided with a connecting unit 21 connected to the X-ray generating unit 10.
  • the holding portion 20 is provided with a strut portion 22 connected to the connecting portion 21.
  • the strut portion 22 is provided so as to extend in the vertical direction (Z direction).
  • the support column portion 22 is configured so that the X-ray generating portion 10 of the X-ray photographing apparatus main body 100a can be moved in the vertical direction.
  • the lower end 100b (X-ray generating portion 10) of the X-ray photographing apparatus main body 100a is moved in the vertical direction as the support column portion 22 expands and contracts.
  • the strut portion 22 is an example of a “vertical moving portion” within the scope of the claims.
  • That is, the support column portion 22, the rail 202, and the rail 203 allow the lower end 100b (the X-ray generating unit 10) to be moved to an arbitrary position in the horizontal and vertical directions (that is, within the three-dimensional space).
  • the holding portion 20 is provided with a housing portion 23 attached to the rail 203 of the ceiling surface 201.
  • the strut portion 22 is provided at the lower part of the housing portion 23.
  • the X-ray imaging apparatus main body 100a includes a cable 24 for supplying a current to the X-ray generating unit 10 (X-ray tube 11).
  • the X-ray photographing apparatus 100 includes a stereo camera 30.
  • the stereo camera 30 is attached to the side surface 23a of the housing portion 23.
  • The stereo camera 30 is configured to simultaneously acquire a plurality of (two) visible images 31 of the same region around (below) the X-ray imaging apparatus main body 100a.
  • the stereo camera 30 is an example of the "imaging unit" in the claims.
  • control board 40 is attached to the side surface 23a.
  • the stereo camera 30 is integrally attached to the control board 40.
  • the control board 40 may be provided inside the housing portion 23.
  • the X-ray imaging apparatus 100 includes a control controller 300 that controls the X-ray imaging apparatus main body 100a.
  • The control controller 300 is provided with an emergency stop button 301, an imaging program button 302, four auto-positioning buttons 303, an emergency stop release button 304, a position registration button 305, and a drive status display LED 306.
  • When the emergency stop button 301 is pressed during automatic operation of the X-ray imaging apparatus main body 100a, the operation of the apparatus main body 100a is stopped immediately. Pressing the imaging program button 302 changes the imaging program of the apparatus main body 100a. When one of the auto-positioning buttons 303 is pressed, the apparatus main body 100a is automatically moved to the position registered in advance for that button.
  • Pressing the emergency stop release button 304 restarts the movement of the apparatus main body 100a after an emergency stop. When the position registration button 305 is pressed at a given position, that position is registered as the position to which the apparatus main body 100a is automatically moved when the corresponding auto-positioning button 303 is pressed. The drive status display LED 306 changes, for example, its blinking pattern and color according to the drive status of the apparatus main body 100a (for example, during an emergency stop).
  • The X-ray imaging apparatus main body 100a is controlled by the control unit 42 (described later) of the control board 40 according to signals transmitted from the control controller 300 to the apparatus main body 100a.
  • the control board 40 includes an image processing unit 41 and a control unit 42.
  • the image processing unit 41 acquires image data of a plurality of (two) visible images 31 (see FIG. 7A) captured by the stereo camera 30 from the stereo camera 30.
  • the image processing unit 41 includes a CPU 41a and an image processing circuit 41b.
  • the image processing circuit 41b is composed of an FPGA (Field-Programmable Gate Array).
  • the control unit 42 includes the CPU 42a. Both the CPU 41a and the image processing circuit 41b may be configured as one FPGA.
  • In step S1, machine learning for identifying the X-ray imaging apparatus main body 100a in the visible images 31 (see FIG. 7) acquired by the stereo camera 30 is executed.
  • This machine learning is performed based on a plurality of teacher visible images 30a (see FIG. 6) given in advance.
  • Each teacher visible image 30a is a two-dimensional image containing an image of the X-ray imaging apparatus main body 100a and serves as teacher data.
  • A plurality of (for example, tens of thousands of) teacher visible images 30a, in which the position and orientation of the X-ray imaging apparatus main body 100a (for example, of the X-ray generating unit 10) differ, are captured by the stereo camera 30.
  • By giving the plurality of teacher visible images 30a as teacher data, the X-ray imaging apparatus 100 learns the shape of the X-ray imaging apparatus main body 100a.
  • For the machine learning, AI based on deep learning is used, for example.
  • The machine learning in step S1 is performed in advance, prior to the use of the X-ray imaging apparatus 100 in the medical field (steps S2 to S8).
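The patent states only that the apparatus main body is learned from teacher visible images by machine learning (for example, deep-learning-based AI); it gives no model or training code. Purely as an illustration, the sketch below assumes step S1 is realized as a small binary patch classifier trained on labelled crops from the teacher visible images 30a. All names (PatchNet, train, the dummy tensors) are hypothetical and are not taken from the document.

```python
# Hypothetical sketch of the machine learning in step S1: a small CNN that
# classifies fixed-size image patches as "apparatus main body" or "other".
# The patent does not specify the model; this only illustrates learning the
# apparatus shape from teacher visible images given in advance.
import torch
import torch.nn as nn

class PatchNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, 1)  # assumes 64x64 input patches

    def forward(self, x):
        x = self.features(x)
        return self.head(x.flatten(1))  # logit: > 0 means "apparatus main body"

def train(model, patches, labels, epochs=10):
    # patches: (N, 3, 64, 64) float tensor of crops from teacher images 30a
    # labels:  (N, 1) float tensor, 1.0 = apparatus main body, 0.0 = background
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(patches), labels)
        loss.backward()
        opt.step()
    return model

# Dummy data stands in for the tens of thousands of teacher images.
model = train(PatchNet(), torch.rand(32, 3, 64, 64),
              torch.randint(0, 2, (32, 1)).float())
```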
  • In step S2, the visible images 31 and the parallax image 32 are acquired. Specifically, a plurality of (two) visible images 31 (see FIG. 7A) are captured by the stereo camera 30, and the image processing unit 41 (image processing circuit 41b) generates, from the two visible images 31, a parallax image 32 (see FIG. 7B) as an image for obtaining three-dimensional depth (the distance D1 from the stereo camera 30; see FIG. 8).
  • As the objects to be detected P displayed in the parallax image 32, a person 401, a person 402, an instrument 403, and a part of the X-ray imaging apparatus main body 100a (the X-ray generating unit 10) are shown.
  • Other conceivable obstacles include a bed, a stand for standing-position imaging, a partition for X-ray protection, a shelf, a display device, and the like.
  • The capture of the visible images 31 and the generation of the parallax image 32 are performed at about 30 fps ("fps" denotes the frame rate).
  • In step S3, the height H1 (see FIG. 8) of each object to be detected P is calculated by the image processing unit 41 (CPU 41a) based on the parallax image 32.
  • In the parallax image 32, the objects to be detected P are color-coded according to their distance D1 (see FIG. 8) from the stereo camera 30 (in FIG. 7B this is indicated by hatching).
  • An object to be detected P whose height H1 (see FIG. 8) is equal to or greater than the height H2 (see FIG. 8) of the X-ray imaging apparatus main body 100a is extracted by the image processing unit 41 (image processing circuit 41b).
  • Here, the height H2 of the X-ray imaging apparatus main body 100a means the height of the lower end 100b (see FIG. 8) of the apparatus main body 100a from the floor surface 204 (see FIG. 8).
  • The height H2 of the X-ray imaging apparatus main body 100a changes as the lower end 100b moves in the vertical direction.
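Assuming the stereo camera 30 looks roughly straight down, so that the per-pixel distance D1 from the camera is a vertical distance, the height H1 of an object above the floor follows from the known camera height H3 as H1 = H3 - D1. The sketch below illustrates this relation; the camera height value is an example, not a figure taken from the document.

```python
# Sketch of step S3: per-pixel object height H1 above the floor, assuming the
# stereo camera 30 looks roughly straight down so that the depth value is the
# vertical distance D1 from the camera. H3 (camera height) is known in advance.
import numpy as np

def height_map(depth_m, camera_height_h3_m=2.8):
    h1 = camera_height_h3_m - depth_m             # H1 = H3 - D1
    return np.clip(h1, 0.0, camera_height_h3_m)   # ignore negative / invalid values
```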
  • In step S4, binarization processing is performed by the image processing unit 41 (image processing circuit 41b) based on the parallax image 32. That is, among the objects to be detected P displayed in the parallax image 32, the image processing unit 41 (image processing circuit 41b) distinguishes the objects P whose height H1 (see FIG. 8), calculated based on the parallax image 32, is equal to or greater than the height H2 (see FIG. 8) of the X-ray imaging apparatus main body 100a from the objects P whose height H1 is smaller than H2.
  • As shown in FIG. 7C, a binarized image 33, in which the extracted objects are displayed as filled regions, is generated by the image processing unit 41 (image processing circuit 41b).
  • In the binarized image 33, the person 401, the person 402, and the instrument 403 are extracted as objects to be detected P whose height H1 is equal to or greater than the height H2 of the X-ray imaging apparatus main body 100a.
  • The X-ray imaging apparatus main body 100a itself is also extracted as an object to be detected P whose height H1 is equal to or greater than the height H2 of the apparatus main body 100a.
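A minimal sketch of the binarization in step S4, using the height H2 of the lower end 100b as the threshold; it reproduces the behaviour described above (the apparatus main body itself is still included at this stage and is removed later in step S6).

```python
# Sketch of step S4: binarize the height map with the height H2 of the lower
# end 100b of the apparatus main body as the threshold. Pixels at or above H2
# are candidate contact hazards, including the apparatus body itself.
import numpy as np

def binarize(h1_map, lower_end_height_h2_m):
    return (h1_map >= lower_end_height_h2_m).astype(np.uint8) * 255
```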
  • The height H3 of the stereo camera 30 means the height of the stereo camera 30 from the floor surface 204; information on the height H3 is held in advance by the X-ray imaging apparatus 100.
  • Information on the height H4 of the X-ray generating unit 10 itself is likewise held in advance by the X-ray imaging apparatus 100.
  • When the X-ray imaging apparatus main body 100a specified from the learning result of the machine learning is identified in at least one of the plurality of visible images 31 acquired by the stereo camera 30, the image processing unit 41 (image processing circuit 41b) is configured to discriminate, among the objects to be detected P, between the X-ray imaging apparatus main body 100a and the obstacles 400, based on the image of the apparatus main body 100a in the visible image 31 identified from the learning result.
  • Specifically, the image processing unit 41 identifies the region R1 in which the X-ray imaging apparatus main body 100a appears in the visible image 31 acquired by the stereo camera 30, and, based on the parallax image 32, identifies the regions R in which objects to be detected P appear, other than the region R1a corresponding to the region R1, as regions R2 in which obstacles 400 appear.
  • The region R1 and the region R2 are examples of the "first region" and the "second region" in the claims, respectively.
  • In step S5, the image processing unit 41 (image processing circuit 41b) identifies the X-ray imaging apparatus main body 100a in at least one of the plurality of visible images 31 based on the learning result of the machine learning.
  • In FIG. 7D, a plurality of extraction unit regions R10 (hatched portions) are shown so as to cover the region R1 in which the X-ray imaging apparatus main body 100a appears in the visible image 31.
  • The extraction unit region R10 is the smallest unit region used when the image processing unit 41 (image processing circuit 41b) performs image recognition.
  • In step S6, the region R1a in the binarized image 33 that corresponds to the region R1 in which the X-ray imaging apparatus main body 100a was identified in the visible image 31 is removed from the regions R in the binarized image 33, and a binarized image 34 (see FIG. 7E) is generated by the image processing unit 41 (image processing circuit 41b). That is, by discriminating between the X-ray imaging apparatus main body 100a and the obstacles around it, only the obstacles 400 (the person 401, the person 402, and the instrument 403) relative to the apparatus main body 100a remain extracted in the binarized image 34. As a result, as shown in FIG. 7F, the image processing unit 41 (image processing circuit 41b) identifies each of the person 401, the person 402, and the instrument 403 as an obstacle 410.
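The following sketch illustrates steps S5 and S6 under the assumption that the learned model yields a binary mask of the region R1 (the apparatus main body) aligned with the binarized image 33: the corresponding region R1a is removed, and the remaining connected components are treated as individual obstacles 410. The region R3 refinement described next is sketched separately below. Function and variable names are hypothetical.

```python
# Sketch of steps S5-S6: remove the region R1a corresponding to the apparatus
# main body 100a (identified in the visible image 31 via the learning result)
# from the binarized image 33, then label the remaining components as
# individual obstacles 410 (e.g. person 401, person 402, instrument 403).
import cv2
import numpy as np

def remove_apparatus_region(binarized_33, apparatus_mask_r1):
    # apparatus_mask_r1: uint8 mask, 255 where the learned model found 100a (region R1)
    out = binarized_33.copy()
    out[apparatus_mask_r1 > 0] = 0   # drop region R1a
    return out

def label_obstacles(binarized):
    num, labels = cv2.connectedComponents((binarized > 0).astype(np.uint8))
    return [labels == i for i in range(1, num)]  # one boolean mask per obstacle 410
```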
  • The image processing unit 41 (CPU 41a) is also configured to identify, based on the parallax image 32, a region R3 that is adjacent to the region R1 identified in the visible image 31, is smaller than the extraction unit region R10 used for image recognition based on the learning result, and excludes the objects to be detected P. Among the regions R in which objects to be detected P appear, the regions R other than those corresponding to the region R1 and the region R3 are identified as the regions R2 in which obstacles 410 appear.
  • The region R3 is an example of the "third region" in the claims.
  • When the extraction unit region R10 is relatively large, a portion R1b that is not identified as the region R1 may occur.
  • In that case, in the binarized image 34a obtained by removing the region R1 from the binarized image 33 (see FIG. 7C), a region R3 smaller than the extraction unit region R10 is extracted at a position corresponding to the region R1. Since the region R3 is a portion of the X-ray imaging apparatus main body 100a that could not be identified in the visible image 31, the image processing unit 41 (CPU 41a) identifies the regions R other than those corresponding to the regions R1 and R3 as the regions R2 in which obstacles 410 appear. That is, the image processing unit 41 (CPU 41a) removes the region R3 from the binarized image 34a to generate the binarized image 34 (see FIG. 7E).
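The exact criterion for the region R3 is not spelled out beyond "adjacent to the region R1 and smaller than the extraction unit region R10". As a rough approximation only, the sketch below drops residual components of the binarized image 34a that both touch a slightly dilated region R1 and are smaller than the area of one extraction unit region; this is an assumption about how such a check could be implemented, not the patent's own algorithm.

```python
# Rough sketch of the region-R3 handling: among the residual components of the
# binarized image 34a, treat components that touch region R1 (checked via a
# small dilation of R1) and are smaller than the extraction unit area R10 as
# missed parts of the apparatus main body, and remove them as well.
import cv2
import numpy as np

def remove_region_r3(binarized_34a, apparatus_mask_r1, r10_area_px):
    grown_r1 = cv2.dilate(apparatus_mask_r1, np.ones((5, 5), np.uint8))
    num, labels = cv2.connectedComponents((binarized_34a > 0).astype(np.uint8))
    out = binarized_34a.copy()
    for i in range(1, num):
        comp = labels == i
        adjacent = np.any(grown_r1[comp] > 0)
        if adjacent and comp.sum() < r10_area_px:   # candidate region R3
            out[comp] = 0
    return out
```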
  • In step S7, the distance L (see FIG. 7F) between the X-ray imaging apparatus main body 100a discriminated in step S6 and the obstacles 410 around it is calculated by the image processing unit 41 (image processing circuit 41b).
  • FIG. 7F shows the horizontal distance La between the person 401 and the X-ray imaging apparatus main body 100a, the horizontal distance Lb between the person 402 and the apparatus main body 100a, and the horizontal distance Lc between the instrument 403 and the apparatus main body 100a.
  • As described above, the objects to be detected P (the X-ray imaging apparatus main body 100a and the obstacles 400) are extracted as objects whose height H1 is equal to or greater than the height H2 of the apparatus main body 100a. That is, the image processing unit 41 (image processing circuit 41b) is configured to calculate the distance L (see FIG. 7F) when the height of an obstacle 410 is equal to or greater than the height of the X-ray imaging apparatus main body 100a.
  • The distance L is an example of the "horizontal distance" in the claims.
  • The distance L is calculated based on the position of the X-ray imaging apparatus main body 100a identified from the learning result of the machine learning. Specifically, the image processing unit 41 (CPU 41a) calculates the distance L as the difference between the position (coordinates) on the image of the apparatus main body 100a identified from the learning result and the position (coordinates) on the image of the obstacle 410 identified by the control described above.
  • The image processing unit 41 (CPU 41a) is configured to calculate the shortest distance between the X-ray imaging apparatus main body 100a and each obstacle 410.
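A sketch of the shortest-distance computation of step S7, using a Euclidean distance transform over the image plane. Converting pixels to metres with a single metres_per_pixel factor is a simplifying assumption (in practice the scale would follow from the camera geometry and working height); the document only states that the shortest horizontal distance L is calculated.

```python
# Sketch of step S7: shortest horizontal distance L between the apparatus main
# body 100a and each obstacle 410, computed in the image plane and scaled to
# metres with an assumed metres-per-pixel factor.
import numpy as np
from scipy.ndimage import distance_transform_edt

def shortest_distances(apparatus_mask_r1, obstacle_masks, metres_per_pixel):
    # Distance from every pixel to the nearest apparatus-body pixel.
    dist_to_apparatus = distance_transform_edt(apparatus_mask_r1 == 0)
    return [float(dist_to_apparatus[m].min()) * metres_per_pixel for m in obstacle_masks]
```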
  • The control unit 42 (CPU 42a) is configured to perform control for avoiding contact between the X-ray imaging apparatus main body 100a and the obstacles 410 based on the distances L (La, Lb, and Lc) calculated by the image processing unit 41 (CPU 41a). This is explained concretely in steps S8 and S9 below.
  • In step S8, the control unit 42 (CPU 42a) determines whether or not any of the distances L (La, Lb, and Lc) calculated by the image processing unit 41 (CPU 41a) is equal to or less than a predetermined value. If a distance L is equal to or less than the predetermined value, the process proceeds to step S9; if every distance L is larger than the predetermined value, the process returns to step S2. A plurality of values may be set as the predetermined value.
  • The control unit 42 (CPU 42a) is configured to decelerate or stop the X-ray imaging apparatus main body 100a when the horizontal distance L between the apparatus main body 100a and an obstacle 410 (510) is equal to or less than the predetermined value. For example, the control unit 42 (CPU 42a) decelerates the apparatus main body 100a when the distance L is 40 cm or less, and stops it when the distance L is 20 cm or less. The control unit 42 (CPU 42a) may also perform control to generate an alarm when the distance L is equal to or less than a predetermined value.
  • After step S9, the process returns to step S2.
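A minimal sketch of the decision in steps S8 and S9, using the example thresholds given in the text (decelerate at 40 cm or less, stop at 20 cm or less); the optional alarm is left as a comment.

```python
# Sketch of steps S8-S9 using the example thresholds from the text: decelerate
# the apparatus main body 100a when any obstacle is within 40 cm and stop it
# within 20 cm; otherwise continue and return to step S2.
def decide_motion(distances_m, decel_at_m=0.40, stop_at_m=0.20):
    if not distances_m:
        return "continue"
    closest = min(distances_m)
    if closest <= stop_at_m:
        return "stop"        # stop the automatic movement
    if closest <= decel_at_m:
        return "decelerate"  # optionally also raise an alarm
    return "continue"
```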
  • Steps S2 to S9 are performed while the X-ray imaging apparatus main body 100a is moving automatically (or while the X-ray generating unit 10 is rotating automatically). Since the visible images 31 (and the parallax image 32) are acquired (generated) at a relatively short interval of about 30 fps (step S2), contact between an obstacle 410 and the apparatus main body 100a can be suppressed even when the obstacle 410 moves (for example, when a patient suddenly sits up during tomography). The apparatus main body 100a can therefore also be moved at a relatively high speed.
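For orientation only, the hypothetical glue code below strings the sketches above into the repeated cycle of steps S2 to S9 that runs at roughly the camera frame rate while the apparatus main body 100a moves automatically. It assumes the earlier sketch functions live in the same module; grab_stereo_pair and identify_apparatus_region (which would slide the step-S1 patch classifier over the visible image to obtain the region R1 mask) are placeholder callables, not anything defined in the document.

```python
# Hypothetical glue code for the cycle of steps S2-S9; all pipeline stages are
# the illustrative sketches above, with the two placeholders injected as
# plain callables.
def monitoring_cycle(grab_stereo_pair, identify_apparatus_region,
                     camera_height_m, lower_end_height_m,
                     r10_area_px, metres_per_pixel):
    left, right = grab_stereo_pair()                             # step S2: visible images 31
    _, depth = disparity_and_depth(left, right)                  # step S2: parallax image 32
    h1 = height_map(depth, camera_height_m)                      # step S3: heights H1
    bin_33 = binarize(h1, lower_end_height_m)                    # step S4: binarized image 33
    r1 = identify_apparatus_region(left)                         # step S5: region R1 (placeholder)
    bin_34a = remove_apparatus_region(bin_33, r1)                # step S6: remove region R1a
    bin_34 = remove_region_r3(bin_34a, r1, r10_area_px)          # step S6: remove region R3
    obstacles = label_obstacles(bin_34)                          # obstacles 410
    dists = shortest_distances(r1, obstacles, metres_per_pixel)  # step S7: distances L
    return decide_motion(dists)                                  # steps S8-S9: action
```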
  • In the present embodiment, as described above, the X-ray imaging apparatus 100 is configured to discriminate between the X-ray imaging apparatus main body 100a and the obstacles 400 around it among the objects to be detected P identified based on the parallax image 32, based on the image of the apparatus main body 100a in the visible image 31 identified from the learning result of machine learning that specifies the apparatus main body 100a using the plurality of teacher visible images 30a (two-dimensional images containing an image of the apparatus main body 100a) given in advance as teacher data.
  • As a result, the X-ray imaging apparatus main body 100a and the obstacles 400 can be easily discriminated among the objects to be detected P identified from the parallax image 32, so the movement of the apparatus main body 100a can be appropriately controlled so that the obstacles 400 and the apparatus main body 100a do not come into contact, even when the obstacles 400 are detected based on the parallax image 32. Furthermore, since only the apparatus main body 100a among the objects to be detected P identified from the parallax image 32 is specified from the learning result of machine learning, the amount of machine-learning results required can be kept small compared with a case where a plurality of types of objects to be detected P are each specified from machine-learning results.
  • In the present embodiment, the image processing unit 41 is configured to identify, in the visible image 31 acquired by the stereo camera 30 (imaging unit), the region R1 (first region) in which the X-ray imaging apparatus main body 100a appears, and to identify, based on the parallax image 32, the regions R other than the region R1a corresponding to the region R1 (first region), among the regions in which objects to be detected P appear, as the regions R2 (second regions) in which obstacles 400 appear.
  • As a result, the X-ray imaging apparatus main body 100a (the region R1 (first region) in the visible image 31) and the obstacles 400 (the regions R2 (second regions) in the parallax image 32) can be easily identified.
  • In the present embodiment, the image processing unit 41 is configured to identify as the region R3 (third region) a region that is adjacent to the region R1 (first region) identified in the visible image 31 based on the parallax image 32, is smaller than the extraction unit region R10 (the minimum unit region for image recognition based on the learning result), and excludes the objects to be detected P. The image processing unit 41 then identifies, among the regions in which objects to be detected P appear based on the parallax image 32, the regions R other than those corresponding to the region R1 (first region) and the region R3 (third region) as the regions R2 (second regions) in which obstacles 400 appear.
  • As a result, even when the extraction unit region R10 is relatively large and part of the region in which the X-ray imaging apparatus main body 100a appears in the visible image 31 (the region R3 (third region)) cannot be identified from the learning result of the machine learning, the region R3 (third region) is identified based on the parallax image 32, so the apparatus main body 100a can still be appropriately identified among the objects to be detected P.
  • the X-ray imaging apparatus main body 100a is configured to be provided so as to be suspended from the ceiling surface 201 of the room 200 in which the X-ray imaging apparatus main body 100a is provided.
  • In the present embodiment, the image processing unit 41 is configured to calculate the distance L (horizontal distance) when the height H1 of an obstacle 400 is equal to or greater than the height H2 of the X-ray imaging apparatus main body 100a.
  • As a result, the distance L (horizontal distance) is calculated only when there is a possibility that the X-ray imaging apparatus main body 100a and the obstacle 400 come into contact, so an increase in the processing load of the image processing unit 41 can be suppressed.
  • In the present embodiment, the X-ray imaging apparatus main body 100a includes the support column portion 22 (vertical moving portion) configured so that the lower end 100b of the apparatus main body 100a can be moved in the vertical direction, and the image processing unit 41 is configured to calculate the distance L (horizontal distance) when the height H1 of an obstacle 400 is equal to or greater than the height D2 of the support column portion 22.
  • As a result, the distance L (horizontal distance) is calculated only when there is a possibility of contact, so an increase in the processing load of the image processing unit 41 can be suppressed.
  • In the present embodiment, the imaging unit is configured as the stereo camera 30.
  • As a result, the parallax image 32 can be easily acquired by using the stereo camera 30, which images the objects to be detected P from a plurality of different directions.
  • In the present embodiment, the obstacle contact avoidance method of the X-ray imaging apparatus 100 includes a step of discriminating, among the objects to be detected P identified based on the parallax image 32, between the X-ray imaging apparatus main body 100a and the obstacles 400 around it, based on the image of the apparatus main body 100a in the visible image 31 identified from the learning result of machine learning that specifies the apparatus main body 100a using the plurality of teacher visible images 30a (two-dimensional images containing an image of the apparatus main body 100a) given in advance as teacher data.
  • As a result, the X-ray imaging apparatus main body 100a and the obstacles 400 can be easily discriminated among the objects to be detected P identified from the parallax image 32, so it is possible to provide an obstacle contact avoidance method for the X-ray imaging apparatus 100 that can appropriately control the movement of the apparatus main body 100a so that the obstacles 400 and the apparatus main body 100a do not come into contact, even when the obstacles 400 are detected based on the parallax image 32.
  • In the above embodiment, an example has been shown in which the X-ray imaging apparatus main body 100a is provided so as to be suspended from the ceiling surface 201, but the present invention is not limited to this.
  • For example, the X-ray imaging apparatus main body 100a may be mounted on a wall surface or the like.
  • In the above embodiment, an example has been shown in which the height H2 of the lower end 100b of the X-ray imaging apparatus main body 100a is calculated by the image processing unit 41 based on the parallax image 32, but the present invention is not limited to this.
  • For example, the height H2 may be calculated based on the amount of vertical movement of the X-ray generating unit 10 or the like caused by the expansion and contraction of the holding unit 20.
  • Further, the present invention is not limited to the configuration of the above embodiment: the X-ray imaging apparatus main body 100a may be configured to control avoidance of contact with the obstacles 400 based on a learning result of machine learning acquired from outside the X-ray imaging apparatus main body 100a.
  • In the above embodiment, an example has been shown in which the imaging unit is configured as the stereo camera 30, but the present invention is not limited to this.
  • For example, the visible images and the parallax image may be acquired using an imaging unit other than a stereo camera.
  • In the above embodiment, an example has been shown in which the image processing unit 41 identifies as the region R3 a region that is adjacent to the region R1 identified in the visible image 31 based on the parallax image 32, is smaller than the extraction unit region R10 (the minimum unit region for performing image recognition based on the learning result), and excludes the objects to be detected P, but the present invention is not limited to this.
  • For example, the image processing unit may be configured not to perform the process of identifying the region R3.
  • In the above embodiment, an example has been shown in which the image processing unit 41 is configured to include the CPU 41a and the image processing circuit 41b (FPGA), but the present invention is not limited to this.
  • For example, the image processing unit may be configured to include only one of the CPU and the image processing circuit.
  • In the above embodiment, for convenience of explanation, the processing of the control unit 42 and the image processing unit 41 has been described using a flow-driven flowchart, but the present invention is not limited to this.
  • For example, the processing of the control unit 42 and the image processing unit 41 may be performed in an event-driven manner in which processing is executed on an event basis. In this case, the processing may be completely event-driven, or a combination of event-driven and flow-driven processing.
  • (Item 1) An X-ray imaging apparatus comprising: an X-ray imaging apparatus main body configured to be movable at least in the horizontal direction; an imaging unit that acquires a plurality of visible images as two-dimensional images of the same region around the X-ray imaging apparatus main body; an image processing unit that identifies objects to be detected, including the X-ray imaging apparatus main body and obstacles around the X-ray imaging apparatus main body, based on a parallax image as a three-dimensional image generated from the plurality of visible images acquired by the imaging unit; and a control unit that performs control for avoiding contact between the X-ray imaging apparatus main body and the obstacle, wherein, when the X-ray imaging apparatus main body specified based on the learning result of machine learning that identifies the X-ray imaging apparatus main body based on a plurality of teacher visible images as two-dimensional images including an image of the X-ray imaging apparatus main body as teacher data given in advance is identified in at least one of the plurality of visible images acquired by the imaging unit, the image processing unit discriminates between the X-ray imaging apparatus main body and the obstacle in the objects to be detected based on the image of the X-ray imaging apparatus main body in the visible image identified based on the learning result, and calculates the horizontal distance between the discriminated X-ray imaging apparatus main body and the obstacle, and the control unit is configured to perform control for avoiding contact between the X-ray imaging apparatus main body and the obstacle based on the horizontal distance calculated by the image processing unit.
  • (Item 2) The X-ray imaging apparatus according to item 1, wherein the image processing unit is configured to identify a first region in which the X-ray imaging apparatus main body is photographed in the visible image acquired by the imaging unit, and to identify, among the regions in which the objects to be detected are photographed based on the parallax image, a region other than the region corresponding to the first region as a second region in which the obstacle is photographed.
  • (Item 3) The X-ray imaging apparatus according to item 2, wherein the image processing unit is configured to identify, as a third region, a region that is adjacent to the first region identified in the visible image based on the parallax image, is smaller than the extraction unit region that is the minimum unit region for performing image recognition based on the learning result, and excludes the objects to be detected, and to identify, among the regions in which the objects to be detected are photographed, a region other than the regions corresponding to the first region and the third region as the second region in which the obstacle is photographed.
  • (Item 4) The X-ray imaging apparatus according to any one of items 1 to 3, wherein the X-ray imaging apparatus main body is provided so as to be suspended from a ceiling surface of the room in which the X-ray imaging apparatus main body is provided, and the image processing unit is configured to calculate the horizontal distance when the height of the obstacle is equal to or higher than the height of the X-ray imaging apparatus main body.
  • (Item 5) The X-ray imaging apparatus according to item 4, further comprising a vertical moving unit configured so that the lower end of the X-ray imaging apparatus main body can be moved in the vertical direction, wherein the image processing unit is configured to calculate the horizontal distance when the height of the obstacle is equal to or higher than the height of the vertical moving unit.
  • (Item 6) An obstacle contact avoidance method for an X-ray imaging apparatus, comprising: a step of specifying the X-ray imaging apparatus main body by performing machine learning that identifies the X-ray imaging apparatus main body based on a plurality of teacher visible images as two-dimensional images including an image of the X-ray imaging apparatus main body as teacher data given in advance; a step of acquiring a plurality of visible images as two-dimensional images of the same region around the X-ray imaging apparatus main body; a step of identifying objects to be detected, including the X-ray imaging apparatus main body and obstacles around the X-ray imaging apparatus main body, based on a parallax image as a three-dimensional image generated from the plurality of acquired visible images; a step of, when the X-ray imaging apparatus main body specified based on the learning result of the machine learning is identified in at least one of the plurality of acquired visible images, discriminating between the X-ray imaging apparatus main body and the obstacle in the objects to be detected based on the image of the X-ray imaging apparatus main body in the visible image identified based on the learning result; a step of calculating the horizontal distance between the discriminated X-ray imaging apparatus main body and the obstacle; and a step of performing control for avoiding contact between the X-ray imaging apparatus main body and the obstacle based on the calculated horizontal distance.

Abstract

This X-ray imaging device (100) is configured to distinguish between an X-ray imaging device body (100a) and an obstacle (400) in the vicinity of the X-ray imaging device body (100a) in detected objects (P) identified on the basis of a parallax image (32), on the basis of an image of the X-ray imaging device body (100a) in a visible image (31) which is identified on the basis of learning results of machine learning to identify the X-ray imaging device body (100a) on the basis of a plurality of visible images for teaching (30a).

Description

X-ray imaging apparatus and obstacle contact avoidance method for X-ray imaging apparatus
 The present invention relates to an X-ray imaging apparatus and an obstacle contact avoidance method for an X-ray imaging apparatus.
 Conventionally, X-ray imaging apparatuses and obstacle contact avoidance methods for X-ray imaging apparatuses have been known. Such an apparatus and method are disclosed, for example, in Japanese Patent Application Laid-Open No. 2012-205681.
 Japanese Patent Application Laid-Open No. 2012-205681 discloses an X-ray imaging apparatus including a holding device movably installed on the ceiling of a room. The holding device holds an X-ray tube or the like that emits X-rays. The X-ray imaging apparatus calculates a movement route for the holding device according to predetermined conditions, and the holding device is then moved automatically along the calculated route.
 The X-ray imaging apparatus of JP 2012-205681 A is also provided with a camera for imaging the room, and with an obstacle position calculation unit that calculates the positions of obstacles in the room based on the camera images. The main CPU of the X-ray imaging apparatus performs control to suppress (stop or decelerate) the movement of the holding device when the distance between the holding device and an obstacle on its movement route falls to or below a set distance.
 Although not described in JP 2012-205681 A, one conceivable way for a conventional X-ray imaging apparatus of this kind to calculate the position of an obstacle is to calculate the vertical distance from the camera to the obstacle based on a parallax image captured by the camera. In this method, the height of the obstacle is calculated from the calculated vertical distance, an obstacle whose height is equal to or greater than that of the X-ray tube or the like is judged to be an obstacle that may come into contact with it, and the movement of the holding device is controlled so that the X-ray tube or the like does not contact that obstacle.
Japanese Unexamined Patent Publication No. 2012-205681
 However, in addition to obstacles, the parallax image may also capture the X-ray imaging apparatus main body itself, such as the X-ray tube. In that case, a method that calculates obstacle heights from the parallax image has difficulty distinguishing whether an object appearing in the parallax image is an obstacle or the apparatus main body, so the movement of the holding device (the X-ray imaging apparatus main body) cannot always be controlled appropriately to keep the X-ray tube or the like from contacting an obstacle.
 The present invention has been made to solve the above problem, and one object of the present invention is to provide an X-ray imaging apparatus and an obstacle contact avoidance method for an X-ray imaging apparatus that can appropriately control the movement of the X-ray imaging apparatus main body so that an obstacle and the apparatus main body do not come into contact, even when the obstacle is detected based on a parallax image.
 To achieve the above object, an X-ray imaging apparatus according to a first aspect of the present invention includes: an X-ray imaging apparatus main body configured to be movable at least in the horizontal direction; an imaging unit that acquires a plurality of visible images as two-dimensional images of the same region around the apparatus main body; an image processing unit that identifies objects to be detected, including the apparatus main body and obstacles around it, based on a parallax image generated as a three-dimensional image from the plurality of visible images acquired by the imaging unit; and a control unit that performs control for avoiding contact between the apparatus main body and an obstacle. When the apparatus main body, specified from the learning result of machine learning that identifies the apparatus main body based on a plurality of teacher visible images (two-dimensional images containing an image of the apparatus main body, given in advance as teacher data), is identified in at least one of the visible images acquired by the imaging unit, the image processing unit discriminates between the apparatus main body and the obstacles among the detected objects based on the image of the apparatus main body in the identified visible image, and calculates the horizontal distance between the discriminated apparatus main body and the obstacle. The control unit performs control for avoiding contact between the apparatus main body and the obstacle based on the horizontal distance calculated by the image processing unit.
 To achieve the above object, an obstacle contact avoidance method for an X-ray imaging apparatus according to a second aspect includes: specifying the X-ray imaging apparatus main body by performing machine learning that identifies the apparatus main body based on a plurality of teacher visible images as two-dimensional images containing an image of the apparatus main body, given in advance as teacher data; acquiring a plurality of visible images as two-dimensional images of the same region around the apparatus main body; identifying objects to be detected, including the apparatus main body and obstacles around it, based on a parallax image generated as a three-dimensional image from the acquired visible images; when the apparatus main body specified from the learning result of the machine learning is identified in at least one of the acquired visible images, discriminating between the apparatus main body and the obstacles among the detected objects based on the image of the apparatus main body in the identified visible image; calculating the horizontal distance between the discriminated apparatus main body and the obstacle; and performing control for avoiding contact between the apparatus main body and the obstacle based on the calculated horizontal distance.
 According to the present invention, as described above, the X-ray imaging apparatus main body and the obstacles around it are discriminated among the detected objects identified from the parallax image, based on the image of the apparatus main body in the visible image identified from the learning result of machine learning that specifies the apparatus main body using a plurality of teacher visible images given in advance as teacher data. Because the apparatus main body and the obstacles can thus be easily distinguished among the detected objects, the movement of the apparatus main body can be appropriately controlled so that it does not come into contact with an obstacle, even when the obstacle is detected based on the parallax image.
FIG. 1 is a perspective view showing an X-ray imaging apparatus according to an embodiment.
FIG. 2 is a side view showing the X-ray imaging apparatus according to the embodiment.
FIG. 3 is a diagram showing the control controller of the X-ray imaging apparatus according to the embodiment.
FIG. 4 is a diagram showing the configuration of the control board of the X-ray imaging apparatus according to the embodiment.
FIG. 5 is a flowchart showing a control method for avoiding contact between the X-ray imaging apparatus and an obstacle according to the embodiment.
FIG. 6 is a diagram for explaining the machine learning method of the X-ray imaging apparatus according to the embodiment.
FIG. 7 is a diagram for explaining the control performed by the image processing unit in each step of FIG. 5.
FIG. 8 is a diagram for explaining a method for calculating the heights of a subject and the X-ray imaging apparatus main body in the image processing unit according to the embodiment.
FIG. 9 is a diagram for explaining a portion of the X-ray imaging apparatus main body that was not identified based on the machine learning among the portions of the apparatus main body captured in the visible image of the X-ray imaging apparatus according to the embodiment.
 Hereinafter, an embodiment of the present invention will be described with reference to the drawings.
 (Configuration of the X-ray imaging apparatus)
 The configuration of the X-ray imaging apparatus 100 according to the present embodiment will be described with reference to FIGS. 1 to 9.
 As shown in FIG. 1, the X-ray imaging apparatus 100 includes an X-ray imaging apparatus main body 100a configured to be movable in the horizontal direction. The apparatus main body 100a is provided in a room 200 and is suspended from the ceiling surface 201 of the room 200.
 A rail 202 extending in the X direction is attached to the ceiling surface 201 of the room 200, and a rail 203 extending in the Y direction is attached to the underside of the rail 202 via rollers or the like. The rail 203 is thereby movable in the X direction along the rail 202.
 The X-ray imaging apparatus main body 100a is attached to the rail 203 via rollers or the like and is therefore movable in the Y direction along the rail 203. Thus, the rail 202 and the rail 203 make the apparatus main body 100a movable in the horizontal direction (within the XY plane). The X and Y directions are orthogonal to each other.
 The X-ray imaging apparatus main body 100a includes an X-ray generating unit 10 that includes an X-ray tube 11 for emitting X-rays and a collimator 12 provided below the X-ray tube 11. The X-ray generating unit 10 is also provided with an operation panel 13 for manually moving the X-ray tube 11 and the collimator 12.
 The X-ray generating unit 10 (the X-ray tube 11, the collimator 12, and the operation panel 13) is rotatable about a horizontal axis and about a vertical axis. Since the X-ray irradiation direction of the X-ray generating unit 10 can thus be changed, both standing-position imaging and lying-position imaging can be supported.
 また、X線撮影装置本体100aは、X線発生部10を保持する保持部20を含む。保持部20には、X線発生部10と接続される接続部21が設けられている。 Further, the X-ray imaging apparatus main body 100a includes a holding unit 20 that holds the X-ray generating unit 10. The holding unit 20 is provided with a connecting unit 21 connected to the X-ray generating unit 10.
 また、保持部20には、接続部21と接続される支柱部22が設けられている。支柱部22は、鉛直方向(Z方向)に延びるように設けられている。また、支柱部22は、X線撮影装置本体100aのX線発生部10を鉛直方向に移動可能に構成されている。これにより、X線撮影装置本体100aの下端100b(X線発生部10)は、支柱部22の伸縮に伴って鉛直方向に移動される。なお、支柱部22は、特許請求の範囲の「鉛直方向移動部」の一例である。 Further, the holding portion 20 is provided with a strut portion 22 connected to the connecting portion 21. The strut portion 22 is provided so as to extend in the vertical direction (Z direction). Further, the support column portion 22 is configured so that the X-ray generating portion 10 of the X-ray photographing apparatus main body 100a can be moved in the vertical direction. As a result, the lower end 100b (X-ray generating portion 10) of the X-ray photographing apparatus main body 100a is moved in the vertical direction as the support column portion 22 expands and contracts. The strut portion 22 is an example of a “vertical moving portion” within the scope of the claims.
That is, the support column portion 22, the rail 202, and the rail 203 allow the lower end 100b (X-ray generating unit 10) to be moved to an arbitrary position in the horizontal and vertical directions (that is, anywhere within the three-dimensional space).
 また、保持部20には、天井面201のレール203に取り付けられている筐体部23が設けられている。なお、支柱部22は、筐体部23の下部に設けられている。 Further, the holding portion 20 is provided with a housing portion 23 attached to the rail 203 of the ceiling surface 201. The strut portion 22 is provided at the lower part of the housing portion 23.
 また、X線撮影装置本体100aは、X線発生部10(X線管11)に電流を供給するためのケーブル24を含む。 Further, the X-ray imaging apparatus main body 100a includes a cable 24 for supplying a current to the X-ray generating unit 10 (X-ray tube 11).
 図2に示すように、X線撮影装置100は、ステレオカメラ30を備える。ステレオカメラ30は、筐体部23の側面23aに取り付けられている。ステレオカメラ30は、X線撮影装置本体100aの周囲(下方)の同一領域における複数(2つ)の可視画像31を(同時に)取得するように構成されている。なお、ステレオカメラ30は、特許請求の範囲の「撮像部」の一例である。 As shown in FIG. 2, the X-ray photographing apparatus 100 includes a stereo camera 30. The stereo camera 30 is attached to the side surface 23a of the housing portion 23. The stereo camera 30 is configured to (simultaneously) acquire a plurality of (two) visible images 31 in the same region around (lower) the X-ray imaging apparatus main body 100a. The stereo camera 30 is an example of the "imaging unit" in the claims.
 また、側面23aには、制御基板40が取り付けられている。なお、ステレオカメラ30は、制御基板40に一体的に取り付けられている。なお、制御基板40は、筐体部23の内部に設けられていてもよい。 Further, the control board 40 is attached to the side surface 23a. The stereo camera 30 is integrally attached to the control board 40. The control board 40 may be provided inside the housing portion 23.
As shown in FIG. 3, the X-ray imaging apparatus 100 includes a control controller 300 that controls the X-ray imaging apparatus main body 100a. The control controller 300 is provided with an emergency stop button 301, a photographing program button 302, four auto-positioning buttons 303, an emergency stop release button 304, a position registration button 305, and a drive status display LED 306.
 X線撮影装置本体100aの自動運転時に、緊急停止ボタン301が押下されることにより、X線撮影装置本体100aの運転が緊急停止される。また、撮影プログラムボタン302が押下されることにより、X線撮影装置本体100aの撮影プログラムが変更される。また、オートポジショニングボタン303が押下されることにより、各オートポジショニングボタン303に対応して予め登録がされていた位置にX線撮影装置本体100aが自動的に移動される。 When the emergency stop button 301 is pressed during the automatic operation of the X-ray imaging apparatus main body 100a, the operation of the X-ray imaging apparatus main body 100a is urgently stopped. Further, by pressing the photographing program button 302, the photographing program of the X-ray photographing apparatus main body 100a is changed. Further, when the auto-positioning button 303 is pressed, the X-ray imaging apparatus main body 100a is automatically moved to a position registered in advance corresponding to each auto-positioning button 303.
Further, when the emergency stop release button 304 is pressed, movement of the X-ray imaging apparatus main body 100a that has been brought to an emergency stop is resumed. Further, when the position registration button 305 is pressed at a given position, that position is registered as the position to which the X-ray imaging apparatus main body 100a is automatically moved when the corresponding auto-positioning button 303 is pressed. Further, the drive status display LED 306 changes, for example, its blinking pattern and color according to the drive status of the X-ray imaging apparatus main body 100a (for example, during an emergency stop).
 なお、制御コントローラ300からX線撮影装置本体100aに送信される送信信号に従って、制御基板40の後述する制御部42によりX線撮影装置本体100aの制御が行われる。 The X-ray imaging apparatus main body 100a is controlled by the control unit 42 described later of the control board 40 according to the transmission signal transmitted from the control controller 300 to the X-ray imaging apparatus main body 100a.
 図4に示すように、制御基板40は、画像処理部41と、制御部42と、を含む。画像処理部41は、ステレオカメラ30により撮像された複数(2つ)の可視画像31(図7(A)参照)の画像データをステレオカメラ30から取得する。 As shown in FIG. 4, the control board 40 includes an image processing unit 41 and a control unit 42. The image processing unit 41 acquires image data of a plurality of (two) visible images 31 (see FIG. 7A) captured by the stereo camera 30 from the stereo camera 30.
 また、画像処理部41は、CPU41aと、画像処理回路41bと、を含む。画像処理回路41bは、FPGA(Field-Programmable Gate Array)により構成されている。また、制御部42は、CPU42aを含む。なお、CPU41aおよび画像処理回路41bの両方が、1つのFPGAとして構成されていてもよい。 Further, the image processing unit 41 includes a CPU 41a and an image processing circuit 41b. The image processing circuit 41b is composed of an FPGA (Field-Programmable Gate Array). Further, the control unit 42 includes the CPU 42a. Both the CPU 41a and the image processing circuit 41b may be configured as one FPGA.
 (X線撮影装置本体と障害物との接触を回避するための制御)
 次に、図5~図9を参照して、X線撮影装置本体100aと障害物400との接触を回避するための制御について説明する。
(Control to avoid contact between the X-ray imaging device body and obstacles)
Next, control for avoiding contact between the X-ray imaging apparatus main body 100a and the obstacle 400 will be described with reference to FIGS. 5 to 9.
First, as shown in FIG. 5, in step S1, machine learning for identifying the X-ray imaging apparatus main body 100a in the visible images 31 (see FIG. 7) acquired by the stereo camera 30 is carried out. This machine learning is performed on the basis of a plurality of teacher visible images 30a (see FIG. 6) given in advance. Each teacher visible image 30a is a two-dimensional image that includes an image of the X-ray imaging apparatus main body 100a as teacher data. Specifically, as shown in FIG. 6, a large number (for example, tens of thousands) of teacher visible images 30a, in which the position, orientation, and so on of the X-ray imaging apparatus main body 100a (for example, the X-ray generating unit 10) differ from one another, are captured by the stereo camera 30. By being given the plurality of teacher visible images 30a as teacher data, the X-ray imaging apparatus 100 learns the shape of the X-ray imaging apparatus main body 100a. In the X-ray imaging apparatus 100, deep learning (AI) is used as the machine learning. The machine learning in step S1 is performed in advance, prior to use of the X-ray imaging apparatus 100 at a medical site (steps S2 to S8).
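The text specifies only that deep learning is applied to the teacher visible images 30a so that the apparatus learns the shape of the main body; no network architecture is disclosed. The following is a minimal sketch of one conceivable realization, assuming a tile-wise binary classifier over patches of extraction-unit size; the class layout, network shape, and the names TileClassifier and train_step are illustrative assumptions, not the implementation used in the apparatus.

```python
# Minimal sketch: a small CNN that classifies each extraction-unit tile of a visible
# image as "apparatus body" or "other". Architecture and names are illustrative only.
import torch
import torch.nn as nn

class TileClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)  # classes: 0 = other, 1 = apparatus body

    def forward(self, x):             # x: (N, 3, H, W) batch of tiles
        f = self.features(x).flatten(1)
        return self.head(f)

def train_step(model, tiles, labels, optimizer, loss_fn):
    """One supervised step on labelled tiles cut from the teacher images."""
    optimizer.zero_grad()
    logits = model(tiles)
    loss = loss_fn(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()

model = TileClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
# Training iterates train_step over tiles labelled from the teacher visible images 30a.
```

In practice an off-the-shelf detection or segmentation network trained on the same labelled teacher images would serve the same role; the sketch only illustrates the supervised-learning step described above.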
Next, in step S2, the visible images 31 and a parallax image 32 are acquired. Specifically, a plurality of (two) visible images 31 (see FIG. 7(A)) are captured by the stereo camera 30, and, based on the two visible images 31, the image processing unit 41 (image processing circuit 41b) generates a parallax image 32 (see FIG. 7(B)) as an image for obtaining three-dimensional depth (the separation distance D1 from the stereo camera 30 (see FIG. 8)). Assume, for example, that a person 401, a person 402, an instrument 403, and a part of the X-ray imaging apparatus main body 100a (the X-ray generating unit 10) appear in the visible images 31 as detected objects P shown in the parallax image 32. Other possible detected objects P include a bed, a stand for standing-position imaging, an X-ray protection screen, shelves, and a display device. The capture of the visible images 31 and the generation of the parallax image 32 are each performed at a rate of about 30 fps, where "fps" denotes the frame rate.
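As a rough illustration of how the two visible images could yield a parallax (disparity) image and a per-pixel depth value, a standard block-matching sketch using OpenCV is shown below. The matcher settings, the calibration values focal_length_px and baseline_m, and the assumption of a rectified grayscale pair are illustrative; the text does not specify how the parallax image 32 is computed inside the image processing circuit 41b.

```python
# Sketch: disparity image from a rectified stereo pair, then depth Z = f * B / d.
# Assumes calibrated, rectified 8-bit grayscale images; calibration values are placeholders.
import cv2
import numpy as np

def disparity_and_depth(left_gray, right_gray, focal_length_px, baseline_m):
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disp = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0  # fixed-point -> pixels
    depth = np.full_like(disp, np.inf)
    valid = disp > 0
    depth[valid] = focal_length_px * baseline_m / disp[valid]  # distance from the camera
    return disp, depth
```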
 次に、ステップS3において、視差画像32に基づいて、画像処理部41(CPU41a)により、被検出物Pの高さH1(図8参照)が算出される。なお、視差画像32では、ステレオカメラ30からの離間距離D1(図8参照)に応じて、被検出物Pが色分け(図7(B)では視差画像32内の斜線により図示)されている。 Next, in step S3, the height H1 (see FIG. 8) of the object to be detected P is calculated by the image processing unit 41 (CPU 41a) based on the parallax image 32. In the parallax image 32, the object to be detected P is color-coded according to the distance D1 (see FIG. 8) from the stereo camera 30 (in FIG. 7B, it is shown by diagonal lines in the parallax image 32).
Next, in step S4, detected objects P having a height H1 (see FIG. 8) equal to or greater than the height H2 (see FIG. 8) of the X-ray imaging apparatus main body 100a are identified by the image processing unit 41 (image processing circuit 41b). The height H2 of the X-ray imaging apparatus main body 100a means the height of the lower end 100b (see FIG. 8) of the X-ray imaging apparatus main body 100a from the floor surface 204 (see FIG. 8). When the position of the lower end 100b changes (moves) as the support column portion 22 extends or retracts in the vertical direction, the height H2 means the height of the lower end 100b from the floor surface 204 after that change (movement). Although FIG. 8 shows the lower end 100b of the X-ray imaging apparatus main body 100a as the lower end of the collimator 12, the lower end 100b can change depending on, for example, the angle of the X-ray generating unit 10.
Specifically, as shown in FIG. 7(C), binarization processing is performed by the image processing unit 41 (image processing circuit 41b) based on the parallax image 32. That is, among the detected objects P shown in the parallax image 32, the image processing unit 41 (image processing circuit 41b) distinguishes detected objects P whose height H1 (see FIG. 8), calculated from the parallax image 32, is equal to or greater than the height H2 (see FIG. 8) of the X-ray imaging apparatus main body 100a from detected objects P whose height H1 is smaller than the height H2. As shown in FIG. 7(C), the image processing unit 41 (image processing circuit 41b) generates a binarized image 33 in which only detected objects P whose height H1 is equal to or greater than the height H2 of the X-ray imaging apparatus main body 100a are displayed in white (regions R), and the portions other than the regions R are filled in gray. In the example of this embodiment, the person 401, the person 402, and the instrument 403 are extracted in the binarized image 33 as detected objects P having a height H1 equal to or greater than the height H2 of the X-ray imaging apparatus main body 100a. At this point, the X-ray imaging apparatus main body 100a itself is also extracted as a detected object P having a height H1 equal to or greater than the height H2.
As a method of calculating the height H1 of a detected object P, it is conceivable, for example, to calculate the difference (H1 = H3 − D1) between the height H3 of the stereo camera 30 (see FIG. 8) and the vertical (Z-direction) separation distance D1 between the stereo camera 30 and the detected object P calculated from the parallax image 32 (see FIG. 8). The height H3 of the stereo camera 30 means the height of the stereo camera 30 from the floor surface 204. The information on the height H3 of the stereo camera 30 is held in advance by the X-ray imaging apparatus 100.
As a method of calculating the height H2 of the lower end 100b of the X-ray imaging apparatus main body 100a, it is conceivable, for example, to subtract from the height H3 of the stereo camera 30 the sum (D2 + H4) of the separation distance D2 between the stereo camera 30 and the upper end 100c of the X-ray generating unit 10, calculated from the parallax image 32, and the height H4 of the X-ray generating unit 10 itself (H2 = H3 − (D2 + H4)). The information on the height H4 of the X-ray generating unit 10 itself is held in advance by the X-ray imaging apparatus 100.
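The two relationships above (H1 = H3 − D1 and H2 = H3 − (D2 + H4)), together with the threshold test of step S4, can be summarized in a short sketch. Variable names are illustrative, and all quantities are assumed to be expressed in the same units (for example, metres).

```python
# Sketch of the height relationships described above. depth_map holds the vertical
# distance D1 from the stereo camera to each detected point; the remaining values are
# scalars known to, or measured by, the system.
import numpy as np

def object_height(depth_map, camera_height_h3):
    """H1 = H3 - D1, evaluated per pixel of the parallax-derived depth map."""
    return camera_height_h3 - depth_map

def apparatus_lower_end_height(dist_to_upper_end_d2, generator_height_h4, camera_height_h3):
    """H2 = H3 - (D2 + H4): height of the lower end 100b above the floor."""
    return camera_height_h3 - (dist_to_upper_end_d2 + generator_height_h4)

def binarize_by_height(h1_map, h2):
    """Step S4: keep only detected points at or above the apparatus height H2."""
    return (h1_map >= h2).astype(np.uint8)  # 1 = candidate region R, 0 = background
```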
Here, in the present embodiment, when the image processing unit 41 (image processing circuit 41b) identifies, in at least one of the plurality of visible images 31 acquired by the stereo camera 30, the X-ray imaging apparatus main body 100a specified on the basis of the learning result of the machine learning, the image processing unit 41 is configured to discriminate, among the detected objects P, between the X-ray imaging apparatus main body 100a and the obstacles 400, based on the image of the X-ray imaging apparatus main body 100a in the visible image 31 identified on the basis of the learning result.
More specifically, the image processing unit 41 (image processing circuit 41b) is configured to identify, in the visible image 31 acquired by the stereo camera 30, a region R1 in which the X-ray imaging apparatus main body 100a is captured, and, based on the parallax image 32, to identify, among the regions R in which detected objects P are captured, the regions R other than the region R1a corresponding to the region R1 as regions R2 in which the obstacles 400 are captured. The region R1 and the regions R2 are examples of the "first region" and the "second region" in the claims, respectively.
Specifically, in step S5, as shown in FIG. 7(D), the X-ray imaging apparatus main body 100a is identified by the image processing unit 41 (image processing circuit 41b) in at least one of the plurality of visible images 31, on the basis of the learning result of the machine learning. In FIG. 7(D), a plurality of extraction unit regions R10 (hatched portions) are shown so as to include the region R1 in which the X-ray imaging apparatus main body 100a appears in the visible image 31. The extraction unit region R10 is the minimum unit region used when the image processing unit 41 (image processing circuit 41b) performs image recognition.
Then, in step S6, the image processing unit 41 (image processing circuit 41b) generates a binarized image 34 (see FIG. 7(E)) by removing, from the regions R in the binarized image 33, the region R1a of the binarized image 33 that corresponds to the region R1 in which the X-ray imaging apparatus main body 100a was identified in the visible image 31. That is, by discriminating between the X-ray imaging apparatus main body 100a and the obstacles 400 around it, only the obstacles 400 with respect to the X-ray imaging apparatus main body 100a (the person 401, the person 402, and the instrument 403) are extracted in the binarized image 34. As a result, as shown in FIG. 7(F), the image processing unit 41 (image processing circuit 41b) identifies each of the person 401, the person 402, and the instrument 403 as an obstacle 400.
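A minimal sketch of step S6 follows, assuming the machine-learning result is available as a binary pixel mask of the region R1 aligned with the binarized image 33; the function name and the mask representation are assumptions made for illustration.

```python
# Sketch of step S6: subtract the region R1 (apparatus body, per the learned model)
# from the binarized image 33 so that only obstacle regions remain. Both inputs are
# assumed to be aligned binary masks of the same shape.
import numpy as np

def remove_apparatus_region(binarized_33, apparatus_mask_r1):
    obstacles_only_34 = binarized_33.copy()
    obstacles_only_34[apparatus_mask_r1 > 0] = 0  # clear pixels covered by R1
    return obstacles_only_34
```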
In the present embodiment, based on the parallax image 32, the image processing unit 41 (CPU 41a) is configured to identify, as a region R3, a detected-object region that is adjacent to the region R1 identified in the visible image 31 and is smaller than the extraction unit region R10 based on the learning result, and to exclude it from the obstacle candidates. The image processing unit 41 is further configured to identify, among the regions R in which detected objects P are captured, the regions R other than those corresponding to the region R1 and the region R3 as the regions R2 in which the obstacles 400 are captured. The region R3 is an example of the "third region".
Specifically, as shown in FIG. 7(D), when the extraction unit region R10 is relatively large, a portion R1b that is not identified as part of the region R1 may remain. In this case, as shown in FIG. 9, in the binarized image 34a obtained by removing the region R1 from the binarized image 33 (see FIG. 7(C)), a region R3 smaller than the extraction unit region R10 is extracted at a position corresponding to the region R1 of the binarized image 33 (see FIG. 7(C)). Since the region R3 is a portion of the X-ray imaging apparatus main body 100a that failed to be identified in the visible image 31, the image processing unit 41 (CPU 41a) identifies the regions R other than those corresponding to the region R1 and the region R3 as the regions R2 in which the obstacles 400 are captured. That is, the image processing unit 41 (CPU 41a) removes the region R3 from the binarized image 34a to generate the binarized image 33 (see FIG. 7(C)).
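One conceivable way to realize this clean-up is a connected-component pass over the mask 34a: components that touch the removed region R1 and are smaller than the extraction-unit area are treated as the region R3 and cleared. The sketch below is illustrative only; the dilation-based adjacency test and the area threshold unit_area_px are assumptions, not the disclosed implementation.

```python
# Sketch: components in mask_34a that are adjacent to R1 and smaller than the
# extraction-unit area are taken to be mis-identified apparatus pixels (R3) and cleared.
import cv2
import numpy as np

def remove_residual_r3(mask_34a, apparatus_mask_r1, unit_area_px):
    num, labels, stats, _ = cv2.connectedComponentsWithStats(
        mask_34a.astype(np.uint8), connectivity=8)
    # Dilating R1 by one pixel gives a simple "touches R1" test.
    r1_neighborhood = cv2.dilate(apparatus_mask_r1.astype(np.uint8), np.ones((3, 3), np.uint8))
    cleaned = mask_34a.copy()
    for i in range(1, num):  # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        touches_r1 = np.any(r1_neighborhood[labels == i] > 0)
        if area < unit_area_px and touches_r1:
            cleaned[labels == i] = 0  # this component is region R3, not an obstacle
    return cleaned
```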
Then, in step S7, the distance L (see FIG. 7(F)) between the X-ray imaging apparatus main body 100a determined in step S6 and the obstacles 400 around it is calculated by the image processing unit 41 (image processing circuit 41b). FIG. 7(F) shows the horizontal distance La between the person 401 and the X-ray imaging apparatus main body 100a, the horizontal distance Lb between the person 402 and the X-ray imaging apparatus main body 100a, and the horizontal distance Lc between the instrument 403 and the X-ray imaging apparatus main body 100a. In the binarized image 33, the detected objects P (the X-ray imaging apparatus main body 100a and the obstacles 400) have been extracted as having a height H1 equal to or greater than the height H2 of the X-ray imaging apparatus main body 100a. That is, the image processing unit 41 (image processing circuit 41b) is configured to calculate the distance L (see FIG. 7(F)) when the height of an obstacle 400 is equal to or greater than the height of the X-ray imaging apparatus main body 100a. The distance L is an example of the "horizontal distance" in the claims.
The distance L is calculated based on the position of the X-ray imaging apparatus main body 100a identified from the learning result of the machine learning. Specifically, the image processing unit 41 (CPU 41a) calculates the distance L as the difference between the position (coordinates) on the image of the X-ray imaging apparatus main body 100a identified from the learning result of the machine learning and the position (coordinates) on the image of an obstacle 400 identified by the control described above. The image processing unit 41 (CPU 41a) is configured to calculate the shortest distance between the X-ray imaging apparatus main body 100a and the obstacle 400.
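A simple sketch of the shortest-distance computation is given below, assuming that both the apparatus region and an obstacle region are available as pixel masks and that a constant metres_per_pixel scale relates image distance to horizontal distance at floor level. That scale factor is an illustrative simplification; the text only states that the difference of image coordinates is used.

```python
# Sketch: shortest distance L between the apparatus region and an obstacle region,
# computed from their pixel coordinates. metres_per_pixel is an assumed calibration.
import numpy as np
from scipy.spatial.distance import cdist

def shortest_distance_m(apparatus_mask, obstacle_mask, metres_per_pixel):
    apparatus_pts = np.argwhere(apparatus_mask > 0)   # (row, col) coordinates
    obstacle_pts = np.argwhere(obstacle_mask > 0)
    if len(apparatus_pts) == 0 or len(obstacle_pts) == 0:
        return None                                    # nothing to compare
    pixel_dists = cdist(apparatus_pts, obstacle_pts)   # all pairwise pixel distances
    return float(pixel_dists.min()) * metres_per_pixel
```

For large masks, restricting the point sets to region contours would keep the pairwise computation cheap without changing the result.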
Further, in the present embodiment, the control unit 42 (CPU 42a) is configured to perform control for avoiding contact between the X-ray imaging apparatus main body 100a and the obstacles 400 based on the distances L (La, Lb, and Lc) calculated by the image processing unit 41 (CPU 41a). Specific details are given in the description of steps S8 and S9 below.
As shown in FIG. 5, in step S8, the control unit 42 (CPU 42a) determines whether or not the distances L (La, Lb, and Lc) calculated by the image processing unit 41 (CPU 41a) are equal to or less than a predetermined value. If a distance L (La, Lb, or Lc) is equal to or less than the predetermined value, the process proceeds to step S9; if each of the distances L (La, Lb, and Lc) is greater than the predetermined value, the process returns to step S2. A plurality of values may be set as the predetermined value.
Then, in step S9, the control unit 42 (CPU 42a) is configured to decelerate or stop the X-ray imaging apparatus main body 100a when the horizontal distance L between the X-ray imaging apparatus main body 100a and an obstacle 400 is equal to or less than a predetermined value. For example, the control unit 42 (CPU 42a) decelerates the X-ray imaging apparatus main body 100a when the distance L is 40 cm or less, and stops the X-ray imaging apparatus main body 100a when the distance L is 20 cm or less. The control unit 42 (CPU 42a) may also perform control to issue an alarm when the distance L is equal to or less than a predetermined value. Contact avoidance control based on the distance L is performed not only for movement of the X-ray imaging apparatus main body 100a but also for rotation of the X-ray generating unit 10. When the processing of step S9 is completed, the process returns to step S2.
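The threshold behaviour of steps S8 and S9 can be summarized in a few lines; the 40 cm and 20 cm thresholds are the example values given in the text, and the returned command strings are illustrative.

```python
# Sketch of the step S8/S9 decision, using the example thresholds from the text:
# decelerate at or below 0.40 m, stop at or below 0.20 m.
def contact_avoidance_command(distance_l_m, decel_at=0.40, stop_at=0.20):
    if distance_l_m <= stop_at:
        return "stop"        # also a natural point to raise an alarm
    if distance_l_m <= decel_at:
        return "decelerate"
    return "continue"        # above both thresholds: return to step S2
```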
Steps S2 to S9 described above are performed while the X-ray imaging apparatus main body 100a is moving automatically (or while the X-ray generating unit 10 is rotating automatically). In addition, since the acquisition (generation) of the visible images 31 (parallax image 32) in step S2 is performed at a comparatively high rate of about 30 fps, contact between an obstacle 400 and the X-ray imaging apparatus main body 100a can be suppressed even when the obstacle 400 moves (for example, when a patient suddenly sits up during tomography). Furthermore, the X-ray imaging apparatus main body 100a can be moved at a relatively high speed.
(実施形態の効果)
 本実施形態の装置では、以下のような効果を得ることができる。
(Effect of embodiment)
With the device of this embodiment, the following effects can be obtained.
In the present embodiment, as described above, the X-ray imaging apparatus 100 is configured so that, based on the image of the X-ray imaging apparatus main body 100a in the visible image 31 identified on the basis of the learning result of machine learning, which specifies the X-ray imaging apparatus main body 100a from a plurality of teacher visible images 30a given in advance as two-dimensional images including an image of the X-ray imaging apparatus main body 100a as teacher data, the X-ray imaging apparatus main body 100a and the obstacles 400 around it are discriminated among the detected objects P identified based on the parallax image 32. This makes it possible to easily discriminate between the X-ray imaging apparatus main body 100a and the obstacles 400 among the detected objects P identified based on the parallax image 32, so that even when the obstacles 400 are detected based on the parallax image 32, the movement of the X-ray imaging apparatus main body 100a can be appropriately controlled so that the obstacles 400 and the X-ray imaging apparatus main body 100a do not come into contact. Furthermore, since only the X-ray imaging apparatus main body 100a among the detected objects P identified based on the parallax image 32 is specified from the learning result of machine learning, the amount of machine learning required to specify the detected objects P can be kept small compared with the case where a plurality of kinds of detected objects P are specified from learning results.
Further, in the present embodiment, as described above, the image processing unit 41 is configured to identify, in the visible image 31 acquired by the stereo camera 30 (imaging unit), the region R1 (first region) in which the X-ray imaging apparatus main body 100a is captured, and, based on the parallax image 32, to identify, among the regions in which the detected objects P are captured, the regions other than the region R1a corresponding to the region R1 (first region) as the regions R2 (second regions) in which the obstacles 400 are captured. This makes it possible to easily discriminate between the X-ray imaging apparatus main body 100a (the region R1 (first region) in the visible image 31) and the obstacles 400 (the regions R2 (second regions) in the parallax image 32) among the detected objects P identified based on the parallax image 32.
Further, in the present embodiment, as described above, the image processing unit 41 is configured to identify, based on the parallax image 32, as a region R3 (third region), a region that is adjacent to the region R1 (first region) identified in the visible image 31 and is smaller than the extraction unit region R10, which is the minimum unit region used for image recognition based on the learning result. The image processing unit 41 is further configured to identify, among the regions in which the detected objects P are captured, the regions other than those corresponding to the region R1 (first region) and the region R3 (third region) as the regions R2 (second regions) in which the obstacles 400 are captured. As a result, even when the extraction unit region R10 is relatively large and a part of the region in which the X-ray imaging apparatus main body 100a is captured (the region R3 (third region)) cannot be identified in the visible image 31 from the learning result of the machine learning, the region R3 (third region) is identified based on the parallax image 32, so the X-ray imaging apparatus main body 100a can be appropriately identified among the detected objects P.
Further, in the present embodiment, as described above, the X-ray imaging apparatus main body 100a is provided so as to be suspended from the ceiling surface 201 of the room 200 in which it is installed, and the image processing unit 41 is configured to calculate the distance L (horizontal distance) when the height H1 of an obstacle 400 is equal to or greater than the height H2 of the X-ray imaging apparatus main body 100a. Since the distance L (horizontal distance) is calculated only when the X-ray imaging apparatus main body 100a and the obstacle 400 could come into contact, an increase in the processing load of the image processing unit 41 can be suppressed.
Further, in the present embodiment, as described above, the X-ray imaging apparatus main body 100a includes the support column portion 22 (vertical moving portion) configured to move the lower end 100b of the X-ray imaging apparatus main body 100a in the vertical direction, and the image processing unit 41 is configured to calculate the distance L (horizontal distance) when the height H1 of an obstacle 400 is equal to or greater than the height of the support column portion 22. Thus, even when movement of the support column portion 22 (vertical moving portion) changes the height at which the X-ray imaging apparatus main body 100a could contact an obstacle 400, the distance L (horizontal distance) is calculated only when contact is possible, so an increase in the processing load of the image processing unit 41 can be suppressed.
 また、本実施形態では、上記のように、撮像部を、ステレオカメラ30として構成する。これにより、被検出物Pを異なる複数の方向から撮像するステレオカメラを用いて、容易に視差画像32を取得することができる。 Further, in the present embodiment, as described above, the imaging unit is configured as the stereo camera 30. As a result, the parallax image 32 can be easily acquired by using a stereo camera that images the object P to be detected from a plurality of different directions.
In the present embodiment, as described above, the obstacle contact avoidance method for the X-ray imaging apparatus 100 includes a step of discriminating, among the detected objects P identified based on the parallax image 32, between the X-ray imaging apparatus main body 100a and the obstacles 400 around it, based on the image of the X-ray imaging apparatus main body 100a in the visible image 31 identified on the basis of the learning result of machine learning, which specifies the X-ray imaging apparatus main body 100a from a plurality of teacher visible images 30a given in advance as two-dimensional images including an image of the X-ray imaging apparatus main body 100a as teacher data. This makes it possible to easily discriminate between the X-ray imaging apparatus main body 100a and the obstacles 400 among the detected objects P identified based on the parallax image 32, and thus provides an obstacle contact avoidance method for the X-ray imaging apparatus 100 capable of appropriately controlling the movement of the X-ray imaging apparatus main body 100a so that the obstacles 400 and the X-ray imaging apparatus main body 100a do not come into contact even when the obstacles 400 are detected based on the parallax image 32. Furthermore, since only the X-ray imaging apparatus main body 100a among the detected objects P identified based on the parallax image 32 is specified from the learning result of machine learning, the method can keep the amount of machine learning required to specify the detected objects P small compared with the case where a plurality of kinds of detected objects P are specified from learning results.
 [変形例]
 なお、今回開示された実施形態は、すべての点で例示であって制限的なものではないと考えられるべきである。本発明の範囲は、上記した実施形態の説明ではなく特許請求の範囲によって示され、さらに特許請求の範囲と均等の意味および範囲内でのすべての変更(変形例)が含まれる。
[Modification example]
It should be noted that the embodiments disclosed this time are exemplary in all respects and are not considered to be restrictive. The scope of the present invention is shown by the scope of claims rather than the description of the above-described embodiment, and further includes all modifications (modifications) within the meaning and scope equivalent to the scope of claims.
 たとえば、上記実施形態では、X線撮影装置本体100aが天井面201に吊り下げられるように設けられている例を示したが、本発明はこれに限られない。たとえば、X線撮影装置本体100aが壁面等に取り付けられていてもよい。 For example, in the above embodiment, an example is shown in which the X-ray imaging apparatus main body 100a is provided so as to be suspended from the ceiling surface 201, but the present invention is not limited to this. For example, the X-ray imaging apparatus main body 100a may be mounted on a wall surface or the like.
In the above embodiment, an example was shown in which the height H2 of the lower end 100b of the X-ray imaging apparatus main body 100a is calculated by the image processing unit 41 based on the parallax image 32, but the present invention is not limited to this. For example, the height H2 may be calculated based on the amount of vertical movement of the X-ray generating unit 10 and the like accompanying extension and retraction of the holding unit 20.
In the above embodiment, an example was shown in which the control for avoiding contact between the X-ray imaging apparatus main body 100a and the obstacles 400 is performed based on a machine learning result learned by the X-ray imaging apparatus main body 100a itself, but the present invention is not limited to this. In the present invention, the control for avoiding contact between the X-ray imaging apparatus main body 100a and the obstacles 400 may be performed based on a machine learning result acquired by the X-ray imaging apparatus main body 100a from outside.
 また、上記実施形態では、撮像部を、ステレオカメラ30として構成した例を示したが、本発明はこれに限られない。本発明では、ステレオカメラ以外の撮像部を用いて、可視画像および視差画像を取得するように構成してもよい。 Further, in the above embodiment, an example in which the imaging unit is configured as a stereo camera 30 is shown, but the present invention is not limited to this. In the present invention, a visible image and a parallax image may be acquired by using an imaging unit other than the stereo camera.
In the above embodiment, an example was shown in which the image processing unit 41 identifies, based on the parallax image 32, as the region R3, a region that is adjacent to the region R1 identified in the visible image 31 and is smaller than the extraction unit region R10, which is the minimum unit region used for image recognition based on the learning result, but the present invention is not limited to this. In the present invention, the image processing unit may be configured not to perform the process of identifying the region R3.
 また、上記実施形態では、画像処理部41を、CPU41aと、画像処理回路41b(FPGA)と、を含むように構成した例を示したが、本発明はこれに限られない。本発明では、画像処理部を、CPUと、画像処理回路とのいずれか一方のみを含むように構成してもよい。 Further, in the above embodiment, an example in which the image processing unit 41 is configured to include the CPU 41a and the image processing circuit 41b (FPGA) is shown, but the present invention is not limited to this. In the present invention, the image processing unit may be configured to include only one of the CPU and the image processing circuit.
 また、上記実施形態では、機械学習として、深層学習(AI)が用いられる例を示したが、本発明はこれに限られない。たとえば、機械学習として、深層学習以外の機械学習を用いてもよい。 Further, in the above embodiment, an example in which deep learning (AI) is used as machine learning is shown, but the present invention is not limited to this. For example, as machine learning, machine learning other than deep learning may be used.
 また、上記実施形態では、説明の便宜上、制御部42および画像処理部41による処理を「フロー駆動型」のフローチャートを用いて説明したが、本発明はこれに限られない。本発明では、制御部42および画像処理部41の処理をイベント単位で実行する「イベント駆動型」により行ってもよい。この場合、完全なイベント駆動型で行ってもよいし、イベント駆動およびフロー駆動を組み合わせて行ってもよい。 Further, in the above embodiment, for convenience of explanation, the processing by the control unit 42 and the image processing unit 41 has been described using a “flow-driven” flowchart, but the present invention is not limited to this. In the present invention, the processing of the control unit 42 and the image processing unit 41 may be performed by an "event-driven type" that executes the processing in event units. In this case, it may be completely event-driven, or it may be a combination of event-driven and flow-driven.
 [態様]
 上述した例示的な実施形態は、以下の態様の具体例であることが当業者により理解される。
[Aspect]
Those skilled in the art will appreciate that the exemplary embodiments described above are specific examples of the following embodiments.
 (項目1)
 少なくとも水平方向に移動可能に構成されているX線撮影装置本体と、
 前記X線撮影装置本体の周囲の同一領域における複数の2次元画像としての可視画像を取得する撮像部と、
 前記撮像部により取得された前記複数の可視画像から生成された3次元画像としての視差画像に基づいて、前記X線撮影装置本体と前記X線撮影装置本体の周囲の障害物とを含む被検出物を識別する画像処理部と、
 前記X線撮影装置本体と前記障害物との接触を回避するための制御を行う制御部と、
を備え、
 前記画像処理部は、予め与えられた教師データとしての前記X線撮影装置本体の画像を含む2次元画像としての複数の教師用可視画像に基づいて前記X線撮影装置本体を特定する機械学習の学習結果に基づいて特定された前記X線撮影装置本体を、前記撮像部により取得された前記複数の可視画像のうちの少なくとも1つにおいて識別した場合に、前記学習結果に基づいて識別された前記可視画像における前記X線撮影装置本体の画像に基づいて、前記被検出物において前記X線撮影装置本体と前記障害物とを判別するとともに、判別した前記X線撮影装置本体と前記障害物との間の水平距離を算出するように構成されており、
 前記制御部は、前記画像処理部により算出された前記水平距離に基づいて、前記X線撮影装置本体と前記障害物との接触を回避するための制御を行うように構成されている、X線撮影装置。
(Item 1)
An X-ray imaging apparatus comprising:
an X-ray imaging apparatus main body configured to be movable at least in a horizontal direction;
an imaging unit that acquires visible images as a plurality of two-dimensional images of a same region around the X-ray imaging apparatus main body;
an image processing unit that identifies detected objects, including the X-ray imaging apparatus main body and an obstacle around the X-ray imaging apparatus main body, based on a parallax image as a three-dimensional image generated from the plurality of visible images acquired by the imaging unit; and
a control unit that performs control for avoiding contact between the X-ray imaging apparatus main body and the obstacle, wherein
the image processing unit is configured, when it identifies in at least one of the plurality of visible images acquired by the imaging unit the X-ray imaging apparatus main body specified based on a learning result of machine learning that specifies the X-ray imaging apparatus main body from a plurality of teacher visible images given in advance as two-dimensional images including an image of the X-ray imaging apparatus main body as teacher data, to discriminate between the X-ray imaging apparatus main body and the obstacle among the detected objects based on the image of the X-ray imaging apparatus main body in the visible image identified based on the learning result, and to calculate a horizontal distance between the discriminated X-ray imaging apparatus main body and the obstacle, and
the control unit is configured to perform the control for avoiding contact between the X-ray imaging apparatus main body and the obstacle based on the horizontal distance calculated by the image processing unit.
 (項目2)
 前記画像処理部は、前記撮像部により取得された前記可視画像において、前記X線撮影装置本体が撮影されている第1領域を識別するとともに、前記視差画像に基づいて、前記被検出物が撮影されている領域のうち、前記第1領域に対応する領域以外の領域を、前記障害物が撮影されている第2領域として識別するように構成されている、項目1に記載のX線撮影装置。
(Item 2)
The X-ray imaging apparatus according to item 1, wherein the image processing unit is configured to identify, in the visible image acquired by the imaging unit, a first region in which the X-ray imaging apparatus main body is captured, and to identify, based on the parallax image, among the regions in which the detected objects are captured, a region other than the region corresponding to the first region as a second region in which the obstacle is captured.
 (項目3)
 前記画像処理部は、前記視差画像に基づいて、前記可視画像において識別された前記第1領域に隣接するとともに前記学習結果に基づく画像認識を行う際の最小単位領域である抽出単位領域よりも小さい前記被検出物を除いた領域を第3領域として識別するとともに、前記被検出物が撮影されている領域のうち、前記第1領域および前記第3領域に対応する領域以外の領域を、前記障害物が撮影されている第2領域として識別するように構成されている、項目2に記載のX線撮影装置。
(Item 3)
The X-ray imaging apparatus according to item 2, wherein the image processing unit is configured to identify, based on the parallax image, as a third region, a region, excluding the detected objects, that is adjacent to the first region identified in the visible image and is smaller than an extraction unit region, which is the minimum unit region used for image recognition based on the learning result, and to identify, among the regions in which the detected objects are captured, a region other than the regions corresponding to the first region and the third region as the second region in which the obstacle is captured.
 (項目4)
 前記X線撮影装置本体は、前記X線撮影装置本体が設けられる室内の天井面から吊り下げられるように設けられており、
 前記画像処理部は、前記障害物の高さが前記X線撮影装置本体の高さ以上である場合に、前記水平距離を算出するように構成されている、項目1~3のいずれか1項に記載のX線撮影装置。
(Item 4)
The X-ray imaging apparatus according to any one of items 1 to 3, wherein the X-ray imaging apparatus main body is provided so as to be suspended from a ceiling surface of a room in which the X-ray imaging apparatus main body is installed, and the image processing unit is configured to calculate the horizontal distance when a height of the obstacle is equal to or greater than a height of the X-ray imaging apparatus main body.
 (項目5)
 前記X線撮影装置本体の下端を鉛直方向に移動可能に構成されている鉛直方向移動部をさらに備え、
 前記画像処理部は、前記障害物の高さが前記鉛直方向移動部の高さ以上である場合に、前記水平距離を算出するように構成されている、項目4に記載のX線撮影装置。
(Item 5)
The X-ray imaging apparatus according to item 4, further comprising a vertical moving unit configured to move a lower end of the X-ray imaging apparatus main body in a vertical direction, wherein the image processing unit is configured to calculate the horizontal distance when the height of the obstacle is equal to or greater than a height of the vertical moving unit.
 (項目6)
 前記撮像部は、ステレオカメラを含む、項目1~5のいずれか1項に記載のX線撮影装置。
(Item 6)
The X-ray imaging apparatus according to any one of items 1 to 5, wherein the imaging unit includes a stereo camera.
 (項目7)
 予め与えられた教師データとしてのX線撮影装置本体の画像を含む2次元画像としての複数の教師用可視画像に基づいて前記X線撮影装置本体を特定する機械学習を行うことにより、前記X線撮影装置本体を特定するステップと、
 前記X線撮影装置本体の周囲の同一領域における複数の2次元画像としての可視画像を取得するステップと、
 取得された前記複数の可視画像から生成された3次元画像としての視差画像に基づいて、前記X線撮影装置本体と前記X線撮影装置本体の周囲の障害物とを含む前記被検出物を識別するステップと、
 前記機械学習の学習結果に基づいて特定された前記X線撮影装置本体を、取得された前記複数の可視画像のうちの少なくとも1つにおいて識別した場合に、前記学習結果に基づいて識別された前記可視画像における前記X線撮影装置本体の画像に基づいて、前記被検出物において前記X線撮影装置本体と前記障害物とを判別するステップと、
 判別した前記X線撮影装置本体と前記障害物との間の水平距離を算出するステップと、
 算出された前記水平距離に基づいて、前記X線撮影装置本体と前記障害物との接触を回避するための制御を行うステップと、を備える、X線撮影装置の障害物接触回避方法。
(Item 7)
An obstacle contact avoidance method for an X-ray imaging apparatus, comprising:
a step of specifying an X-ray imaging apparatus main body by performing machine learning that specifies the X-ray imaging apparatus main body based on a plurality of teacher visible images given in advance as two-dimensional images including an image of the X-ray imaging apparatus main body as teacher data;
a step of acquiring visible images as a plurality of two-dimensional images of a same region around the X-ray imaging apparatus main body;
a step of identifying detected objects, including the X-ray imaging apparatus main body and an obstacle around the X-ray imaging apparatus main body, based on a parallax image as a three-dimensional image generated from the plurality of acquired visible images;
a step of discriminating, when the X-ray imaging apparatus main body specified based on the learning result of the machine learning is identified in at least one of the plurality of acquired visible images, between the X-ray imaging apparatus main body and the obstacle among the detected objects, based on the image of the X-ray imaging apparatus main body in the visible image identified based on the learning result;
a step of calculating a horizontal distance between the discriminated X-ray imaging apparatus main body and the obstacle; and
a step of performing control for avoiding contact between the X-ray imaging apparatus main body and the obstacle based on the calculated horizontal distance.
 22 支柱部(鉛直方向移動部)
 30 ステレオカメラ(撮像部)
 30a 教師用可視画像
 31 可視画像
 32 視差画像
 41 画像処理部
 42 制御部
 100 X線撮影装置
 100a X線撮影装置本体
 100b (X線撮影装置本体の)下端
 200 室内
 201 天井面
 400 障害物
 H1 (障害物の)高さ
 H2 (X線撮影装置本体の)高さ
 L(La、Lb、Lc) 距離(水平距離)
 P 被検出物
 R (被検出物が撮影されている)領域
 R1 領域(第1領域)
 R2 領域(第2領域)
 R3 領域(第3領域)
 R10 抽出単位領域
22 Strut part (vertical movement part)
30 Stereo camera (imaging unit)
30a Teacher visible image
31 Visible image
32 Parallax image
41 Image processing unit
42 Control unit
100 X-ray imaging apparatus
100a X-ray imaging apparatus main body
100b Lower end (of the X-ray imaging apparatus main body)
200 Room
201 Ceiling surface
400 Obstacle
H1 Height (of the obstacle)
H2 Height (of the X-ray imaging apparatus main body)
L (La, Lb, Lc) Distance (horizontal distance)
P Detected object
R Region (in which a detected object is captured)
R1 Region (first region)
R2 area (second area)
R3 area (third area)
R10 extraction unit area

Claims (7)

  1.  少なくとも水平方向に移動可能に構成されているX線撮影装置本体と、
     前記X線撮影装置本体の周囲の同一領域における複数の2次元画像としての可視画像を取得する撮像部と、
     前記撮像部により取得された前記複数の可視画像から生成された3次元画像としての視差画像に基づいて、前記X線撮影装置本体と前記X線撮影装置本体の周囲の障害物とを含む被検出物を識別する画像処理部と、
     前記X線撮影装置本体と前記障害物との接触を回避するための制御を行う制御部と、
    を備え、
     前記画像処理部は、予め与えられた教師データとしての前記X線撮影装置本体の画像を含む2次元画像としての複数の教師用可視画像に基づいて前記X線撮影装置本体を特定する機械学習の学習結果に基づいて特定された前記X線撮影装置本体を、前記撮像部により取得された前記複数の可視画像のうちの少なくとも1つにおいて識別した場合に、前記学習結果に基づいて識別された前記可視画像における前記X線撮影装置本体の画像に基づいて、前記被検出物において前記X線撮影装置本体と前記障害物とを判別するとともに、判別した前記X線撮影装置本体と前記障害物との間の水平距離を算出するように構成されており、
     前記制御部は、前記画像処理部により算出された前記水平距離に基づいて、前記X線撮影装置本体と前記障害物との接触を回避するための制御を行うように構成されている、X線撮影装置。
An X-ray imaging apparatus comprising:
an X-ray imaging apparatus main body configured to be movable at least in a horizontal direction;
an imaging unit that acquires visible images as a plurality of two-dimensional images of a same region around the X-ray imaging apparatus main body;
an image processing unit that identifies detected objects, including the X-ray imaging apparatus main body and an obstacle around the X-ray imaging apparatus main body, based on a parallax image as a three-dimensional image generated from the plurality of visible images acquired by the imaging unit; and
a control unit that performs control for avoiding contact between the X-ray imaging apparatus main body and the obstacle, wherein
the image processing unit is configured, when it identifies in at least one of the plurality of visible images acquired by the imaging unit the X-ray imaging apparatus main body specified based on a learning result of machine learning that specifies the X-ray imaging apparatus main body from a plurality of teacher visible images given in advance as two-dimensional images including an image of the X-ray imaging apparatus main body as teacher data, to discriminate between the X-ray imaging apparatus main body and the obstacle among the detected objects based on the image of the X-ray imaging apparatus main body in the visible image identified based on the learning result, and to calculate a horizontal distance between the discriminated X-ray imaging apparatus main body and the obstacle, and
the control unit is configured to perform the control for avoiding contact between the X-ray imaging apparatus main body and the obstacle based on the horizontal distance calculated by the image processing unit.
  2.  前記画像処理部は、前記撮像部により取得された前記可視画像において、前記X線撮影装置本体が撮影されている第1領域を識別するとともに、前記視差画像に基づいて、前記被検出物が撮影されている領域のうち、前記第1領域に対応する領域以外の領域を、前記障害物が撮影されている第2領域として識別するように構成されている、請求項1に記載のX線撮影装置。 The image processing unit identifies the first region in which the X-ray imaging apparatus main body is photographed in the visible image acquired by the imaging unit, and the object to be detected is photographed based on the parallax image. The X-ray imaging according to claim 1, wherein an region other than the region corresponding to the first region is identified as a second region in which the obstacle is photographed. apparatus.
  3.  前記画像処理部は、前記視差画像に基づいて、前記可視画像において識別された前記第1領域に隣接するとともに前記学習結果に基づく画像認識を行う際の最小単位領域である抽出単位領域よりも小さい前記被検出物を除いた領域を第3領域として識別するとともに、前記被検出物が撮影されている領域のうち、前記第1領域および前記第3領域に対応する領域以外の領域を、前記障害物が撮影されている第2領域として識別するように構成されている、請求項2に記載のX線撮影装置。 The image processing unit is adjacent to the first region identified in the visible image based on the parallax image, and is smaller than the extraction unit region, which is the minimum unit region for performing image recognition based on the learning result. The region excluding the object to be detected is identified as the third region, and among the regions in which the object to be detected is photographed, the regions other than the regions corresponding to the first region and the third region are the obstacles. The X-ray imaging apparatus according to claim 2, which is configured to identify an object as a second region in which an object is imaged.
  4.  The X-ray imaging apparatus according to any one of claims 1 to 3, wherein
     the X-ray imaging apparatus main body is provided so as to be suspended from a ceiling surface of the room in which the X-ray imaging apparatus main body is installed, and
     the image processing unit is configured to calculate the horizontal distance when a height of the obstacle is equal to or greater than a height of the X-ray imaging apparatus main body.
  5.  The X-ray imaging apparatus according to claim 4, further comprising a vertical movement unit configured to be able to move a lower end of the X-ray imaging apparatus main body in a vertical direction, wherein
     the image processing unit is configured to calculate the horizontal distance when the height of the obstacle is equal to or greater than a height of the vertical movement unit.
  6.  The X-ray imaging apparatus according to any one of claims 1 to 5, wherein the imaging unit includes a stereo camera.
  7.  An obstacle contact avoidance method for an X-ray imaging apparatus, comprising:
     specifying an X-ray imaging apparatus main body by performing machine learning that specifies the X-ray imaging apparatus main body based on a plurality of teacher visible images, which are two-dimensional images including an image of the X-ray imaging apparatus main body given in advance as teacher data;
     acquiring visible images as a plurality of two-dimensional images of the same region around the X-ray imaging apparatus main body;
     identifying a detected object including the X-ray imaging apparatus main body and an obstacle around the X-ray imaging apparatus main body, based on a parallax image as a three-dimensional image generated from the acquired plurality of visible images;
     when the X-ray imaging apparatus main body specified based on a learning result of the machine learning is identified in at least one of the acquired plurality of visible images, distinguishing the X-ray imaging apparatus main body from the obstacle in the detected object based on the image of the X-ray imaging apparatus main body in the visible image identified based on the learning result;
     calculating a horizontal distance between the distinguished X-ray imaging apparatus main body and the obstacle; and
     performing control for avoiding contact between the X-ray imaging apparatus main body and the obstacle based on the calculated horizontal distance.
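As an illustration of the parallax-image generation recited in claims 1 and 6, the following is a minimal Python sketch, assuming a pair of rectified grayscale frames from a stereo camera and using OpenCV's StereoSGBM matcher; the claims do not prescribe any particular stereo-matching algorithm, and the focal length and baseline used in the depth conversion are assumed values.

import cv2
import numpy as np

def parallax_image(left_gray: np.ndarray, right_gray: np.ndarray) -> np.ndarray:
    """Compute a disparity (parallax) map; larger disparity means a closer object."""
    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=64,  # must be a multiple of 16
        blockSize=7,
    )
    # StereoSGBM returns fixed-point disparities scaled by 16.
    return matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0

def depth_from_disparity(disparity: np.ndarray, focal_px: float, baseline_m: float) -> np.ndarray:
    """Convert disparity to metric depth with Z = f * B / d (valid where d > 0)."""
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(disparity > 0, focal_px * baseline_m / disparity, np.inf)

if __name__ == "__main__":
    # Synthetic stand-ins for the two camera views; real use would feed rectified frames.
    left = np.random.randint(0, 255, (480, 640), dtype=np.uint8)
    right = np.roll(left, -5, axis=1)  # crude horizontal shift mimicking parallax
    disp = parallax_image(left, right)
    depth = depth_from_disparity(disp, focal_px=700.0, baseline_m=0.1)
    print("finite-depth pixels:", int(np.isfinite(depth).sum()))

Any dense stereo method that yields a per-pixel disparity would serve equally well at this step.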
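The claims leave the machine-learning model that identifies the apparatus main body unspecified. The sketch below therefore only illustrates the general idea of classifying fixed-size tiles of a visible image, loosely following the extraction-unit-region wording of claim 3; the tile size EXTRACTION_UNIT and the placeholder classifier is_body_tile are assumptions introduced for illustration.

import numpy as np

EXTRACTION_UNIT = 32  # tile size in pixels (assumed value, not taken from the claims)

def is_body_tile(tile: np.ndarray) -> bool:
    """Placeholder for the learned classifier (e.g. a small CNN trained offline on
    teacher images of the apparatus main body); a dummy brightness rule keeps the
    sketch runnable."""
    return float(tile.mean()) > 200.0

def first_region_mask(visible: np.ndarray) -> np.ndarray:
    """Boolean mask of pixels judged to show the X-ray imaging apparatus main body."""
    h, w = visible.shape[:2]
    mask = np.zeros((h, w), dtype=bool)
    for y in range(0, h - EXTRACTION_UNIT + 1, EXTRACTION_UNIT):
        for x in range(0, w - EXTRACTION_UNIT + 1, EXTRACTION_UNIT):
            tile = visible[y:y + EXTRACTION_UNIT, x:x + EXTRACTION_UNIT]
            if is_body_tile(tile):
                mask[y:y + EXTRACTION_UNIT, x:x + EXTRACTION_UNIT] = True
    return mask

if __name__ == "__main__":
    frame = np.random.randint(0, 255, (480, 640), dtype=np.uint8)
    frame[96:192, 320:416] = 255  # bright patch standing in for the apparatus body
    print("first-region pixels:", int(first_region_mask(frame).sum()))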
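Building on the same assumptions, a sketch of splitting the detected-object mask derived from the parallax image into the first region (apparatus main body), a third region of small fragments adjacent to it, and the remaining second region treated as obstacles, in the spirit of claims 2 and 3; the dilation-based adjacency test and the pixel-area threshold are illustrative choices, not taken from the disclosure.

import cv2
import numpy as np

EXTRACTION_UNIT_AREA = 32 * 32  # assumed area of the minimum recognition unit, in pixels

def split_regions(detected_mask: np.ndarray, first_region: np.ndarray):
    """Return (second_region, third_region) boolean masks from the detected-object mask."""
    remainder = detected_mask & ~first_region
    n_labels, labels = cv2.connectedComponents(remainder.astype(np.uint8))
    # A fragment counts as adjacent if a one-pixel dilation of the first region overlaps it.
    grown_first = cv2.dilate(first_region.astype(np.uint8), np.ones((3, 3), np.uint8)) > 0
    third_region = np.zeros_like(detected_mask)
    for label in range(1, n_labels):  # label 0 is the background
        component = labels == label
        if component.sum() < EXTRACTION_UNIT_AREA and (component & grown_first).any():
            third_region |= component
    second_region = remainder & ~third_region
    return second_region, third_region

if __name__ == "__main__":
    detected = np.zeros((120, 160), dtype=bool)
    detected[20:60, 20:60] = True      # apparatus main body
    detected[20:25, 60:65] = True      # small fragment touching the body -> third region
    detected[80:110, 100:150] = True   # separate object -> obstacle (second region)
    first = np.zeros_like(detected)
    first[20:60, 20:60] = True
    second, third = split_regions(detected, first)
    print("second-region pixels:", int(second.sum()), "third-region pixels:", int(third.sum()))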
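A sketch of calculating the horizontal distance between the identified apparatus main body and an obstacle, as recited in claims 1 and 7, assuming a ceiling-mounted camera facing straight down so that image coordinates map onto the horizontal plane; the pinhole intrinsics fx, fy, cx, cy are assumed values.

import numpy as np

def horizontal_points(depth: np.ndarray, mask: np.ndarray,
                      fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Back-project masked pixels to (X, Y) coordinates in the horizontal plane, in metres."""
    v, u = np.nonzero(mask)
    z = depth[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y], axis=1)

def min_horizontal_distance(depth, body_mask, obstacle_mask, fx, fy, cx, cy) -> float:
    body = horizontal_points(depth, body_mask, fx, fy, cx, cy)
    obstacle = horizontal_points(depth, obstacle_mask, fx, fy, cx, cy)
    # Brute-force pairwise distances between the two point sets; fine for modest masks.
    diffs = body[:, None, :] - obstacle[None, :, :]
    return float(np.sqrt((diffs ** 2).sum(axis=2)).min())

if __name__ == "__main__":
    depth = np.full((120, 160), 2.5)  # everything 2.5 m below the camera
    body = np.zeros((120, 160), dtype=bool)
    body[50:70, 40:60] = True
    obstacle = np.zeros((120, 160), dtype=bool)
    obstacle[50:70, 100:120] = True
    d = min_horizontal_distance(depth, body, obstacle, fx=300.0, fy=300.0, cx=80.0, cy=60.0)
    print(f"horizontal distance: {d:.2f} m")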
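Finally, a compact sketch of the height-gated avoidance decision suggested by claims 4, 5, and 7: obstacles lower than the suspended main body's lower end are ignored, and otherwise movement is slowed or stopped once the horizontal distance falls below a threshold; the numeric thresholds and the three discrete commands are assumptions for illustration.

from dataclasses import dataclass

STOP_DISTANCE_M = 0.3        # assumed threshold
DECELERATE_DISTANCE_M = 1.0  # assumed threshold

@dataclass
class Obstacle:
    height_m: float
    horizontal_distance_m: float

def avoidance_command(obstacles: list[Obstacle], body_lower_end_m: float) -> str:
    """Decide how the moving apparatus main body should react to the detected obstacles."""
    # Claims 4 and 5: only obstacles tall enough to reach the suspended body
    # (or its vertically movable lower end) matter for the distance check.
    relevant = [o for o in obstacles if o.height_m >= body_lower_end_m]
    if not relevant:
        return "move"
    nearest = min(o.horizontal_distance_m for o in relevant)
    if nearest <= STOP_DISTANCE_M:
        return "stop"
    if nearest <= DECELERATE_DISTANCE_M:
        return "decelerate"
    return "move"

if __name__ == "__main__":
    scene = [
        Obstacle(height_m=0.9, horizontal_distance_m=0.2),  # low cart: ignored
        Obstacle(height_m=1.9, horizontal_distance_m=0.8),  # tall IV stand: decelerate
    ]
    print(avoidance_command(scene, body_lower_end_m=1.7))

In an actual device the lower-end height would come from the vertical movement unit's position sensor rather than a constant.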
PCT/JP2019/020873 2019-05-27 2019-05-27 X-ray imaging device and method for avoiding contact with obstacle for x-ray imaging device WO2020240653A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/JP2019/020873 WO2020240653A1 (en) 2019-05-27 2019-05-27 X-ray imaging device and method for avoiding contact with obstacle for x-ray imaging device
JP2021521585A JP7173321B2 (en) 2019-05-27 2019-05-27 X-ray imaging apparatus and obstacle contact avoidance method for X-ray imaging apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2019/020873 WO2020240653A1 (en) 2019-05-27 2019-05-27 X-ray imaging device and method for avoiding contact with obstacle for x-ray imaging device

Publications (1)

Publication Number Publication Date
WO2020240653A1 true WO2020240653A1 (en) 2020-12-03

Family

ID=73553670

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/020873 WO2020240653A1 (en) 2019-05-27 2019-05-27 X-ray imaging device and method for avoiding contact with obstacle for x-ray imaging device

Country Status (2)

Country Link
JP (1) JP7173321B2 (en)
WO (1) WO2020240653A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11178818A (en) * 1997-10-01 1999-07-06 Siemens Ag Medical device
JP2012205681A (en) * 2011-03-29 2012-10-25 Toshiba Corp X-ray imaging system
JP2014097131A (en) * 2012-11-13 2014-05-29 Toshiba Corp X-ray diagnostic device
US20150117601A1 (en) * 2012-04-25 2015-04-30 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. X-ray source with module and detector for optical radiation
JP2015156896A (en) * 2014-02-21 2015-09-03 キヤノン株式会社 Radiographic apparatus, control method thereof, and program
WO2017043041A1 (en) * 2015-09-08 2017-03-16 富士フイルム株式会社 Monitor image display method for radiation emitting device, and radiation emitting device
WO2017043040A1 (en) * 2015-09-08 2017-03-16 富士フイルム株式会社 Conveyance assistance method and conveyance assistance device for radiation emitting device, and radiological imaging device


Also Published As

Publication number Publication date
JP7173321B2 (en) 2022-11-16
JPWO2020240653A1 (en) 2020-12-03

Similar Documents

Publication Publication Date Title
KR102471422B1 (en) Method and system for non-contact control in surgical environment
US20220283563A1 (en) Safety in dynamic 3d healthcare environment
US11694355B2 (en) Predictive visualization of medical imaging scanner component movement
US20170181808A1 (en) Surgical system with haptic feedback based upon quantitative three-dimensional imaging
US11123031B2 (en) Augmented reality for radiation dose monitoring
CN102970928A (en) Radiological image photography display method and system
TWI551137B (en) Field display system, field display method, and computer-readable recording medium in which field display program is recorded
US20140314205A1 (en) Positioning distance control for x-ray imaging systems
Beyl et al. Time-of-flight-assisted Kinect camera-based people detection for intuitive human robot cooperation in the surgical operating room
JP2008220553A (en) Radiation therapy system
WO2020240653A1 (en) X-ray imaging device and method for avoiding contact with obstacle for x-ray imaging device
WO2020235099A1 (en) X-ray imaging device and method of using x-ray imaging device
CN115005858A (en) Guide device for TEE probe
CN110650686A (en) Device and corresponding method for providing spatial information of an interventional device in live 2D X radiographs
JP7265392B2 (en) MEDICAL IMAGE PROCESSING APPARATUS, MEDICAL OBSERVATION SYSTEM, IMAGE PROCESSING METHOD AND PROGRAM
JP2020188865A (en) Learning method for x-ray imaging device, and learned data generating program for x-ray imaging device
JP6245801B2 (en) Image display apparatus and medical image diagnostic apparatus
JPWO2020188802A1 (en) X-ray imager
WO2023104055A1 (en) Safety protection method and system, readable storage medium, and surgical robot system
EP4342385A1 (en) Medical device movement control apparatus
JP2021045285A (en) X-ray imaging device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19931118

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021521585

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19931118

Country of ref document: EP

Kind code of ref document: A1