WO2022110132A1 - Unmanned smart radiographing system and radiographing method - Google Patents

Unmanned smart radiographing system and radiographing method Download PDF

Info

Publication number
WO2022110132A1
WO2022110132A1 PCT/CN2020/132728 CN2020132728W
Authority
WO
WIPO (PCT)
Prior art keywords
camera
target
camera assembly
control platform
beamer
Prior art date
Application number
PCT/CN2020/132728
Other languages
French (fr)
Chinese (zh)
Inventor
赵杰
奥雪
Original Assignee
江苏康众数字医疗科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 江苏康众数字医疗科技股份有限公司 filed Critical 江苏康众数字医疗科技股份有限公司
Priority to PCT/CN2020/132728 priority Critical patent/WO2022110132A1/en
Publication of WO2022110132A1 publication Critical patent/WO2022110132A1/en

Links

Images

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
    • A61B6/02 Devices for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
    • A61B6/03 Computerised tomographs
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/20 ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS

Definitions

  • the invention relates to the field of medical imaging equipment, in particular to an unmanned intelligent filming system and a filming method.
  • the shooting technician adjusts the opening of the beamer so that the X-ray irradiation area just covers the part to be shot;
  • the shooting technician sets the exposure parameters such as kV, mAs, etc. according to the shooting site, patient size and other factors;
  • the doctor makes a diagnosis based on the X-ray film.
  • the technician plays a very important role in the entire filming process, and the level of experience directly determines the accuracy of parameter selection, which in turn affects the radiation dose and imaging quality received by the patient.
  • the two most important steps are patient setup and exposure dose control.
  • the present invention provides an unmanned intelligent filming system and a filming method; the technical scheme is as follows:
  • the present invention provides an unmanned intelligent filming system, comprising:
  • a ray source for emitting rays
  • a camera assembly which is used to obtain image information of a target to be filmed and its surrounding environment
  • a beamer, which is used to adjust the radiation field and to switch the filtering of the rays
  • a control platform, which is electrically connected to both the camera assembly and the beamer; the control platform is provided with a processing operation module and an auxiliary module; the processing operation module can obtain, from the image information acquired by the camera assembly, the operating parameters of the camera assembly and the beamer as well as the positional relationship between the current position of the target and a preset test position, and the auxiliary module can guide the target to move to the preset test position according to the positional relationship
  • the control platform can control the working parameters of the ray source, the camera assembly and the beamer according to the operating parameters.
  • the auxiliary module includes a speaker for guiding the movement of the target to be filmed by voice and/or a display screen for guiding the movement of the target to be filmed by graphics and text.
  • the camera assembly includes a first RGB camera, and the first RGB camera is a zoom RGB camera.
  • the camera assembly includes a first RGB camera and a second RGB camera, and the second RGB camera is a wide-angle RGB camera.
  • the camera assembly includes a first RGB camera and a third RGB camera, and the third RGB camera is a telephoto RGB camera.
  • the camera assembly further includes a depth camera, and the depth camera is used to obtain the depth information of the target to be filmed and its surrounding environment.
  • the depth camera is a ToF camera or a structured light camera.
  • both the camera assembly and the control platform are arranged on the beamer.
  • the control platform is also provided with an interface for communicating with external devices.
  • the present invention provides an unmanned intelligent filming method, comprising the following steps:
  • in step S3, if the positional relationship exceeds the preset threshold range, the control platform adjusts the camera according to the operating parameters obtained in step S2, the auxiliary module guides the target to move to the preset test position according to the positional relationship, and steps S1-S3 are executed again; otherwise, steps S4-S5 are executed;
  • in step S5, the control platform controls the beamer and the X-ray source to radiate the target to be filmed according to the operating parameters obtained in step S4.
  • according to the shooting site designated by the doctor, the current posture of the patient is automatically recognized and correct positioning is guided;
  • FIG. 1 is a schematic diagram of a framework of an unmanned intelligent filming system provided by an embodiment of the present invention
  • FIG. 2 is a schematic diagram of different camera shooting ranges of an unmanned intelligent filming system provided by an embodiment of the present invention
  • FIG. 3 is a schematic diagram of successful placement of an unmanned intelligent filming system provided by an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of an auxiliary positioning guide of an unmanned intelligent filming system provided by an embodiment of the present invention.
  • the reference numerals are as follows: 1-camera assembly, 2-beamer, 3-control platform, 11-first RGB camera, 12-depth camera, 13-second RGB camera.
  • an unmanned intelligent filming system including a ray source, a camera assembly 1, a beamer 2 and a control platform 3, where the ray source is used for emitting rays;
  • the camera assembly 1 is used to obtain the image information of the target to be filmed and its surrounding environment
  • the beamer 2 is used to adjust the radiation field and to switch the filtering of the rays; the beamer 2 is arranged in front of the output end of the X-ray source, is an intelligent beamer, and also provides a mounting structure for installing the camera assembly 1 and the control platform 3.
  • the control platform 3 is electrically connected to the camera assembly 1 and the beamer 2; the control platform 3 is provided with a processing operation module and an auxiliary module; the processing operation module can obtain, from the image information acquired by the camera assembly 1, the operating parameters of the camera assembly 1 and the beamer 2 as well as the positional relationship between the current position of the target and the preset test position, and the auxiliary module can guide the target to move to the preset test position according to the positional relationship;
  • it should be noted that moving to the preset test position also includes guiding the target to pose a posture that meets the shooting requirements; the auxiliary module includes a speaker for voice guidance of the target to be filmed and/or a display screen for graphic and text guidance of the target to be filmed, and the control platform 3 can control the working parameters of the ray source, the camera assembly 1 and the beamer 2 according to the operating parameters;
  • the control platform 3 is also provided with a control input module and an interface for communicating with external devices; the control input module can be used by the doctor to input the area of the patient to be irradiated, for example via a touch display screen, while the interface for external device communication facilitates information output and direct control of external devices; the control platform 3 controls the other components, performs the high-intensity computation of the processing operation module, and provides network communication with external devices.
  • the processing operation module uses the acquired RGB images and depth information, together with traditional image processing algorithms and neural network algorithms, to perform contour/silhouette extraction, 2D/3D key point extraction and pose determination of the object to be photographed, positioning guidance through the auxiliary module, exposure area calculation, and determination of the region of interest for AEC (automatic exposure control); it is the intelligent brain of the entire system.
  • AEC Automatic Exposure Control Technology
  • the camera assembly 1 has the following composition schemes:
  • the camera assembly 1 includes a first RGB camera 11; the first RGB camera 11 is an ordinary RGB camera, which can be used to obtain image information of the object to be photographed and its surrounding environment at a general distance; this scheme is suitable for application scenarios in which the distance between the camera and the photographed object is moderate and does not change much, and it can provide basic positioning assistance and automatic exposure control assistance functions;
  • the camera assembly 1 includes a first RGB camera 11; the first RGB camera 11 is a zoom RGB camera whose focal length can be adjusted so that it replaces a plurality of cameras with different focal lengths; this scheme is compact, meets near, medium and long-distance image acquisition needs at the same time, and can provide basic positioning assistance and automatic exposure control assistance functions;
  • the camera assembly 1 includes a first RGB camera 11 and a second RGB camera 13; the first RGB camera 11 is an ordinary RGB camera, which can be used to obtain image information of the object and its surrounding environment at a general distance, and the second RGB camera 13 is a wide-angle RGB camera, which can be used to obtain image information of the entire object and its surrounding environment when the camera is very close to the object to be photographed; this scheme is suitable for application scenarios in which the distance between the camera and the photographed object ranges from close to moderate, covers most filming scenarios, and can provide basic positioning assistance and automatic exposure control assistance functions;
  • the camera assembly 1 includes a first RGB camera 11 and a third RGB camera; the first RGB camera 11 is an ordinary RGB camera, which can be used to obtain image information of the object and its surrounding environment at a general distance, and the third RGB camera is a telephoto RGB camera, which can be used to obtain high-resolution image information of the shooting part when the camera is far away from the subject or the part to be shot is small; this scheme is suitable for application scenarios in which the distance between the camera and the subject ranges from moderate to far, can provide local high-resolution images of small shooting parts, and can provide basic positioning assistance and automatic exposure control assistance functions;
  • the camera assembly 1 includes a first RGB camera 11, a second RGB camera 13, a third RGB camera and a depth camera 12; the first RGB camera 11 is an ordinary RGB camera, the second RGB camera 13 is a wide-angle RGB camera, the third RGB camera is a telephoto RGB camera, and the depth camera 12 is a ToF camera or a structured light camera used to obtain the depth information of the target to be filmed and its surrounding environment; this scheme provides the most complete and accurate results and is suitable for the vast majority of DR filming applications.
  • the composition scheme of the camera assembly 1 includes but is not limited to the above, and the depth camera 12 can also be freely combined with the above scheme.
  • the camera assembly 1 includes a first RGB camera 11, a second RGB camera 13 and a depth camera 12; the first RGB camera 11 is an ordinary RGB camera, the second RGB camera 13 is a wide-angle RGB camera, and the depth camera 12 is a ToF camera used to obtain the depth information of the target to be filmed and its surrounding environment.
  • the ToF camera can obtain three-dimensional information about the object and its surrounding environment, which helps the system perceive the subject's positioning more accurately; the cameras selected in this embodiment and their fields of view at different distances are shown in Figure 2; the maximum shooting angles of the first RGB camera 11, the second RGB camera 13 and the depth camera 12 are 63°, 80° and 100°, respectively; at 0.8 m in front of the camera assembly the second RGB camera 13 has the widest shooting range, and at 1.8 m in front of the camera assembly it still has the widest shooting range (not drawn in the figure because the range is large), while the first RGB camera 11 has the smallest shooting range; therefore, when a shot is required, the second RGB camera 13 captures the target to be filmed first, and the filming system then activates the depth camera 12 and the first RGB camera 11 for precise shooting and positioning in the next step.
  • the control platform is an embedded high-performance computing platform, including a CPU (central processing unit), an MCU (microcontroller unit), a general-purpose parallel computing unit and a network communication interface; it is small in size, low in power consumption, rich in hardware interfaces and strong in computing power, and the Jetson Nano chip produced by NVIDIA is preferred.
  • the processing operation module consists of multiple parts, mainly: 1) a basic unit, covering image acquisition, camera calibration, synchronization and mapping of the data streams of the different cameras, and definition of the positioning requirements; 2) a middle layer, which uses traditional image processing methods and neural network methods to extract the silhouette and key points of the photographed object (see Figure 3), for example using human skeleton points as key points to locate the preset radiation area; 3) an application layer, which, based on the filming and positioning requirements and the extracted information, determines the exposure area, guides the positioning, and then controls the beamer adjustment, calculates the AEC region of interest and so on, as shown in Figure 4; 4) an automatic evolution and upgrade module, which organizes video streams and uploads them to cloud space, where the neural network is trained and upgraded online, after which the upgrade file is sent back to the terminal device.
  • the unmanned intelligent filming system also includes a flat panel detector.
  • the flat panel detector can be communicatively connected to the above-mentioned control platform 3, and the target being photographed moves, under the guidance of the auxiliary module, to the specified area between the beamer 2 and the flat panel detector, so as to realize intelligent filming.
  • the unmanned intelligent filming system and filming method provided by the present invention solve the problems of low accuracy, stability and efficiency caused by the key filming parameters currently being determined entirely by the shooting technician; according to the shooting site designated by the doctor, the system automatically recognizes the patient's current posture and guides correct positioning, automatically adjusts the opening of the beamer so that the X-ray irradiation area just covers the shooting site, and automatically controls the exposure dose together with AEC technology, thereby greatly improving filming efficiency, effectively reducing unnecessary radiation dose, reducing unnecessary contact between doctors and patients, and pushing medical imaging equipment further in the direction of unmanned, intelligent operation.

Abstract

An unmanned smart radiographing system and radiographing method. The radiographing system comprises a ray source, a camera assembly (1), a beam limiter (2) and a control platform (3); the camera assembly (1) is used for acquiring image information of a target to be radiographed and the surrounding environment thereof, the beam limiter (2) is used for adjusting the radiation field of a ray and switching the filtering of the ray, and the control platform (3) is electrically connected to both the camera assembly (1) and the beam limiter (2); the control platform (3) is provided with a processing computation module and an auxiliary module; the processing computation module can obtain, according to the image information acquired by the camera assembly (1), operation parameters of the camera assembly (1) and the beam limiter (2), and the positional relationship between the current position of the target and a preset detection position; the auxiliary module can guide the target to move to the preset detection position according to the positional relationship and guide the target to make a posture complying with radiographing requirements; and the control platform (3) can control the operation parameters of the ray source, the camera assembly (1) and the beam limiter (2) according to the operation parameters. The radiographing system and radiographing method can automatically recognize a current posture of a patient and guide same to be in a correct posture.

Description

一种无人化智能拍片系统及拍片方法A kind of unmanned intelligent filming system and filming method 技术领域technical field
本发明涉及医疗影像设备领域,尤其涉及一种无人化智能拍片系统及拍片方法。The invention relates to the field of medical imaging equipment, in particular to an unmanned intelligent filming system and a filming method.
背景技术Background technique
经过数十年的不断发展,数字化X射线摄影系统已经成为目前影像科检查首选的设备。随着人们对医疗条件和X射线辐射剂量等各方面的关注和期望越来越高,对医疗设备本身以及拍摄过程的要求也越来越高。对于DR来说,整个拍片过程,包括指导患者正确摆位、调整束光器以对准曝光区域、设置管电流、管电压等曝光参数,仍然完全依赖于拍摄技师的经验判断。而由于人体拍摄体位繁多,且体型大小各异,以致无法做到精准、稳定、可重复。After decades of continuous development, digital X-ray imaging systems have become the preferred equipment for imaging examinations. As people's concerns and expectations about medical conditions and X-ray radiation doses increase, so do the demands on the medical equipment itself and the filming process. For DR, the entire filming process, including instructing the patient to position correctly, adjusting the beamer to align the exposure area, and setting exposure parameters such as tube current and tube voltage, still relies entirely on the experience and judgment of the shooting technician. However, due to the variety of human body positions and sizes, it is impossible to achieve precise, stable and repeatable results.
目前在DR摄影应用中,一个典型的拍片流程包括:Currently in DR photography applications, a typical filming process includes:
1)医生根据诊断需求开具检查单,拍摄技师根据拍摄部位要求,调整球管和平板探测器之间的距离;1) The doctor issues a checklist according to the diagnostic needs, and the shooting technician adjusts the distance between the tube and the flat panel detector according to the requirements of the shooting site;
2)拍摄技师指导患者摆出正确的拍片姿势;2) The shooting technician instructs the patient to put on the correct shooting posture;
3)拍摄技师调整束光器开口,使得X射线照射区域刚好覆盖被拍摄部位;3) The shooting technician adjusts the opening of the beamer so that the X-ray irradiation area just covers the part to be shot;
4)拍摄技师根据拍摄部位、患者体型等因素,设定kV、mAs等曝光参数;4) The shooting technician sets the exposure parameters such as kV, mAs, etc. according to the shooting site, patient size and other factors;
5)拍摄技师按下手闸进行曝光;5) The shooting technician presses the handbrake for exposure;
6)拍摄技师判断成像质量,如果效果不好(过曝或欠曝),则需要重拍;6) The shooting technician judges the image quality. If the effect is not good (overexposure or underexposure), it needs to be retaken;
7)医生根据X光片给出诊断结论。7) The doctor makes a diagnosis based on the X-ray film.
由上述流程可以看出,技师在整个拍片过程中起到了非常重要的作用,其经验的多寡直接决定了参数选择的准确性,并进而影响患者受到的辐射剂量和成像质量。其中,最重要的两个步骤分别是患者摆位和曝光剂量控制。It can be seen from the above process that the technician plays a very important role in the entire filming process, and the level of experience directly determines the accuracy of parameter selection, which in turn affects the radiation dose and imaging quality received by the patient. Among them, the two most important steps are patient setup and exposure dose control.
发明内容SUMMARY OF THE INVENTION
为了解决现有技术的问题,本发明提供了一种无人化智能拍片系统及拍片 方法,所述技术方案如下:In order to solve the problem of the prior art, the present invention provides a kind of unmanned intelligent filming system and filming method, and the technical scheme is as follows:
一方面,本发明提供了一种无人化智能拍片系统,包括In one aspect, the present invention provides an unmanned intelligent filming system, comprising:
射线源,所述射线源用于射出射线;a ray source for emitting rays;
摄像头组件,所述摄像头组件用于获取待拍片的目标及其周边环境的图像信息;a camera assembly, which is used to obtain image information of a target to be filmed and its surrounding environment;
束光器,所述束光器用于调节射线的射野以及切换射线的滤过;a beamer, the beamer is used to adjust the radiation field and switch the filtering of the radiation;
控制平台,所述控制平台与所述摄像头组件和所述束光器均电连接,所述控制平台设有处理运算模块和与辅助模块,所述处理运算模块能够根据所述摄像头组件获取的图像信息得到所述摄像头组件和所述束光器的运行参数,以及目标当前位置与预设测试位置的位置关系,所述辅助模块能够根据所述位置关系引导所述目标移动至预设测试位置,所述控制平台能够根据所述运行参数控制所述射线源、摄像头组件和所述束光器的工作参数。a control platform, the control platform is electrically connected to the camera assembly and the beamer, the control platform is provided with a processing operation module and an auxiliary module, and the processing operation module can be based on the images obtained by the camera assembly The information obtains the operating parameters of the camera assembly and the beamer, as well as the positional relationship between the current position of the target and the preset test position, and the auxiliary module can guide the target to move to the preset test position according to the positional relationship, The control platform can control the operating parameters of the ray source, the camera assembly and the light beamer according to the operating parameters.
进一步地,所述辅助模块包括用于语音引导待拍片的目标移动的扬声器和/或用于图文引导待拍片的目标移动的显示屏。Further, the auxiliary module includes a speaker for guiding the movement of the target to be filmed by voice and/or a display screen for guiding the movement of the target to be filmed by graphics and text.
进一步地,摄像头组件包括第一RGB摄像头,所述第一RGB摄像头为变焦RGB摄像头。Further, the camera assembly includes a first RGB camera, and the first RGB camera is a zoom RGB camera.
进一步地,摄像头组件包括第一RGB摄像头和第二RGB摄像头,所述第二RGB摄像头为广角RGB摄像头。Further, the camera assembly includes a first RGB camera and a second RGB camera, and the second RGB camera is a wide-angle RGB camera.
进一步地,摄像头组件包括第一RGB摄像头和第三RGB摄像头,所述第三RGB摄像头为长焦RGB摄像头。Further, the camera assembly includes a first RGB camera and a third RGB camera, and the third RGB camera is a telephoto RGB camera.
进一步地,摄像头组件还包括深度摄像头,所述深度摄像头用于获取待拍片的目标及其周边环境的深度信息。Further, the camera assembly further includes a depth camera, and the depth camera is used to obtain the depth information of the target to be filmed and its surrounding environment.
进一步地,所述深度摄像头为ToF摄像头或结构光摄像头。Further, the depth camera is a ToF camera or a structured light camera.
进一步地,所述摄像头组件和所述控制平台均设置在所述束光器上。Further, both the camera assembly and the control platform are arranged on the light beamer.
进一步地,所述控制平台还设有与外部设备通信的接口。Further, the control platform is also provided with an interface for communicating with external devices.
另一方面,本发明提供了一种无人化智能拍片方法,包括以下步骤:On the other hand, the present invention provides an unmanned intelligent filming method, comprising the following steps:
S1、通过一个或多个摄像头获取待拍片的目标及其周边环境的图像信息;S1. Obtain the image information of the target to be filmed and its surrounding environment through one or more cameras;
S2、通过处理运算模块处理所述摄像头获取的图像信息,以得到目标当前位置与预设测试位置的位置关系,以及控制所述摄像头所需的运行参数;S2, processing the image information obtained by the camera through the processing operation module, to obtain the positional relationship between the current position of the target and the preset test position, and the operating parameters required for controlling the camera;
S3、若所述位置关系超出预设阈值范围,控制平台根据S2步骤中的运行参数调整所述摄像头,辅助模块根据所述位置关系引导所述目标移动至预设测试位置,执行S1-S3,否则,执行S4-S5;S3, if the positional relationship exceeds the preset threshold range, the control platform adjusts the camera according to the operating parameters in step S2, the auxiliary module guides the target to move to the preset test position according to the positional relationship, and executes S1-S3, Otherwise, execute S4-S5;
S4、通过处理运算模块处理当前所述摄像头获取的图像信息,以得到束光器和X射线源的运行参数;S4, processing the image information currently obtained by the camera through the processing operation module to obtain the operating parameters of the beamer and the X-ray source;
S5、所述控制平台根据S4步骤中的运行参数控制所述束光器和所述X射线源对待拍片的目标进行放射。S5. The control platform controls the beamer and the X-ray source to radiate the target to be filmed according to the operating parameters in the step S4.
本发明提供的技术方案带来的有益效果如下:The beneficial effects brought by the technical scheme provided by the invention are as follows:
a.根据医生指定的拍摄部位,自动识别患者当前姿态并指引其正确摆位;a. According to the shooting position designated by the doctor, the current posture of the patient is automatically recognized and the correct positioning is guided;
b.自动调整束光器的开口使得X射线照射区域刚好覆盖拍摄部位、配合AEC技术自动控制曝光剂量,从而大幅提高拍片效率、有效降低不必要的辐射剂量;b. Automatically adjust the opening of the beamer so that the X-ray irradiation area just covers the shooting site, and cooperate with the AEC technology to automatically control the exposure dose, thereby greatly improving the filming efficiency and effectively reducing unnecessary radiation dose;
c.减少不必要的医患接触。c. Reduce unnecessary doctor-patient contact.
附图说明Description of drawings
为了更清楚地说明本发明实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the following briefly introduces the accompanying drawings used in the description of the embodiments. Obviously, the accompanying drawings in the following description are only some embodiments of the present invention. For those of ordinary skill in the art, other drawings can also be obtained from these drawings without creative effort.
图1是本发明实施例提供的无人化智能拍片系统框架示意图;1 is a schematic diagram of a framework of an unmanned intelligent filming system provided by an embodiment of the present invention;
图2是本发明实施例提供的无人化智能拍片系统不同摄像头拍摄范围示意图;2 is a schematic diagram of different camera shooting ranges of an unmanned intelligent filming system provided by an embodiment of the present invention;
图3是本发明实施例提供的无人化智能拍片系统辅助摆位成功示意图;3 is a schematic diagram of successful placement of an unmanned intelligent filming system provided by an embodiment of the present invention;
图4是本发明实施例提供的无人化智能拍片系统辅助摆位引导示意图。FIG. 4 is a schematic diagram of an auxiliary positioning guide of an unmanned intelligent filming system provided by an embodiment of the present invention.
其中，附图标记如下：1-摄像头组件，2-束光器，3-控制平台，11-第一RGB摄像头，12-深度摄像头，13-第二RGB摄像头。Wherein, the reference numerals are as follows: 1-camera assembly, 2-beamer, 3-control platform, 11-first RGB camera, 12-depth camera, 13-second RGB camera.
具体实施方式Detailed ways
为了使本技术领域的人员更好地理解本发明方案,下面将结合本发明实施 例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本发明一部分的实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都应当属于本发明保护的范围。In order to make those skilled in the art better understand the solutions of the present invention, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only Embodiments are part of the present invention, but not all embodiments. Based on the embodiments of the present invention, all other embodiments obtained by persons of ordinary skill in the art without creative efforts shall fall within the protection scope of the present invention.
需要说明的是,本发明的说明书和权利要求书及上述附图中的术语“第一”、“第二”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便这里描述的本发明的实施例能够以除了在这里图示或描述的那些以外的顺序实施。此外,术语“包括”和“具有”以及他们的任何变形,意图在于覆盖不排他的包含,例如,包含了一系列步骤或单元的过程、方法、装置、产品或设备不必限于清楚地列出的那些步骤或单元,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其他步骤或单元。It should be noted that the terms "first", "second" and the like in the description and claims of the present invention and the above drawings are used to distinguish similar objects, and are not necessarily used to describe a specific sequence or sequence. It is to be understood that the data so used may be interchanged under appropriate circumstances such that the embodiments of the invention described herein can be practiced in sequences other than those illustrated or described herein. Furthermore, the terms "comprising" and "having", and any variations thereof, are intended to cover non-exclusive inclusion, for example, a process, method, apparatus, product or device comprising a series of steps or units is not necessarily limited to those expressly listed Rather, those steps or units may include other steps or units not expressly listed or inherent to these processes, methods, products or devices.
在本发明的一个实施例中,提供了一种无人化智能拍片系统,如图1所示,包括射线源、摄像头组件1、束光器2和控制平台3,所述射线源用于射出射线;所述摄像头组件1用于获取待拍片的目标及其周边环境的图像信息,所述束光器2用于调节射线的射野以及切换射线的滤过,所述束光器设置在所述X射线源输出端的前方,所述束光器2为智能束光器,所述束光器2还提供用于安装所述摄像头组件1和所述控制平台3的安装结构。In an embodiment of the present invention, an unmanned intelligent filming system is provided, as shown in FIG. 1 , including a ray source, a camera assembly 1 , a beamer 2 and a control platform 3 , and the ray source is used for shooting out rays; the camera assembly 1 is used to obtain the image information of the target to be filmed and its surrounding environment, the beamer 2 is used to adjust the field of radiation and switch the filtering of rays, and the beamer is arranged in the In front of the output end of the X-ray source, the beamer 2 is an intelligent beamer, and the beamer 2 also provides a mounting structure for installing the camera assembly 1 and the control platform 3 .
所述控制平台3与所述摄像头组件1和所述束光器2均电连接,所述控制平台3设有处理运算模块和与辅助模块,所述处理运算模块能够根据所述摄像头组件1获取的图像信息得到所述摄像头组件1和所述束光器2的运行参数,以及目标当前位置与预设测试位置的位置关系,所述辅助模块能够根据所述位置关系引导所述目标移动至预设测试位置,需要说明的是移动至预设测试位置还包括引导目标摆出符合拍摄要求的姿势,所述辅助模块包括用于语音引导待拍片的目标移动的扬声器和/或用于图文引导待拍片的目标移动的显示屏,所述控制平台3能够根据所述运行参数控制所述射线源、摄像头组件1和所述束光器2的工作参数,所述控制平台3还设有控制输入模块与外部设备通信的接口,控制输入模块能够用于医生输入患者需要放射的区域,可用显示触摸屏的方式进行输入,外部设备通信的接口便于信息输出以及外部设备的直接控制,所述 控制平台3具有控制其他部件及进行处理运算模块的高强度运算,并提供与外部设备进行网络通信的能力。The control platform 3 is electrically connected to the camera assembly 1 and the beamer 2, and the control platform 3 is provided with a processing operation module and an auxiliary module, and the processing operation module can be obtained according to the camera assembly 1. The operating parameters of the camera assembly 1 and the beamer 2, as well as the positional relationship between the current position of the target and the preset test position, are obtained from the image information obtained by the auxiliary module, and the auxiliary module can guide the target to move to the preset position according to the positional relationship. Assuming the test position, it should be noted that moving to the preset test position also includes guiding the target to pose a posture that meets the shooting requirements, and the auxiliary module includes a speaker for voice guidance to move the target to be filmed and/or for graphic guidance. The display screen of the moving target to be filmed, the control platform 3 can control the working parameters of the ray source, the camera assembly 1 and the beamer 2 according to the operating parameters, and the control platform 3 is also provided with a control input The interface for the module to communicate with external equipment, the control input module can be used for the doctor to input the area that the patient needs to be irradiated, and the input can be performed by displaying a touch screen. The interface for external equipment communication is convenient for information output and direct control of external equipment. The control platform 3 It has the ability to control other components and perform high-intensity operations of processing and computing modules, and provide the ability to communicate with external devices on the network.
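As an illustration of how the auxiliary module described above might turn the positional relationship into spoken or on-screen guidance, the following minimal Python sketch maps an offset between the current and preset positions to a short instruction. It is a sketch under stated assumptions only: the (dx, dy) offset convention, the tolerance value and the message wording are not specified by the patent.

```python
# Illustrative only: converting a positional offset into a guidance message.
# Offsets are taken as (dx, dy) in centimetres; positive dx means the target
# stands to the right of the preset test position, positive dy means too far forward.
def guidance_message(dx_cm: float, dy_cm: float, tolerance_cm: float = 2.0) -> str:
    parts = []
    if abs(dx_cm) > tolerance_cm:
        parts.append(f"move {'left' if dx_cm > 0 else 'right'} about {abs(dx_cm):.0f} cm")
    if abs(dy_cm) > tolerance_cm:
        parts.append(f"step {'back' if dy_cm > 0 else 'forward'} about {abs(dy_cm):.0f} cm")
    if not parts:
        return "Hold still, position is correct."   # within tolerance: ready to expose
    return "Please " + " and ".join(parts) + "."


print(guidance_message(6.0, -1.0))   # -> "Please move left about 6 cm."
print(guidance_message(0.5, 0.4))    # -> "Hold still, position is correct."
```

The same message could be spoken through the speaker or rendered as graphics and text on the display screen; only the delivery channel differs.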
其中,所述处理运算模块利用获取到的RGB图像和深度信息,利用传统图像处理算法和神经网络算法,实现对被拍摄物的轮廓/剪影提取、2D/3D关键点提取、摆位姿态的判定、通过辅助模块摆位引导、曝光区域计算以及确定AEC(自动曝光控制技术)的感兴趣区等功能,是整个系统的智能大脑。Wherein, the processing operation module utilizes the acquired RGB images and depth information, and utilizes traditional image processing algorithms and neural network algorithms to achieve contour/silhouette extraction, 2D/3D key point extraction, and pose determination of the object to be photographed. , Through the auxiliary module positioning guidance, exposure area calculation and determination of the area of interest of AEC (Automatic Exposure Control Technology), it is the intelligent brain of the entire system.
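The functions listed above (key point extraction, exposure area calculation, AEC region of interest, positioning check) can be pictured with the short sketch below. It is a minimal illustration under stated assumptions, not the patent's implementation: the keypoint input format, the safety margin, the choice of the central half of the field as the AEC region of interest, and all function and field names are invented here.

```python
# Illustrative sketch only -- not the patent's implementation.
# Keypoints are assumed to arrive from an upstream detector (e.g. a skeleton model);
# the part-to-keypoint mapping and the margin are assumptions for this example.
from dataclasses import dataclass
from typing import Dict, List, Tuple

Point = Tuple[float, float]          # (x, y) in the camera image plane, in millimetres


@dataclass
class ExposurePlan:
    field: Tuple[float, float, float, float]    # x_min, y_min, x_max, y_max for the beamer opening
    aec_roi: Tuple[float, float, float, float]  # region of interest for automatic exposure control
    offset: Tuple[float, float]                 # current part centre minus preset test position


def plan_exposure(keypoints: Dict[str, Point],
                  part_keypoints: List[str],
                  preset_center: Point,
                  margin: float = 30.0) -> ExposurePlan:
    """Derive a beamer opening and AEC ROI from detected 2D key points."""
    pts = [keypoints[name] for name in part_keypoints]
    xs, ys = [p[0] for p in pts], [p[1] for p in pts]
    # Exposure field: bounding box of the relevant key points plus a safety margin.
    field = (min(xs) - margin, min(ys) - margin, max(xs) + margin, max(ys) + margin)
    # AEC ROI: here simply the central half of the exposure field.
    cx, cy = (field[0] + field[2]) / 2, (field[1] + field[3]) / 2
    w, h = (field[2] - field[0]) / 4, (field[3] - field[1]) / 4
    aec_roi = (cx - w, cy - h, cx + w, cy + h)
    # Positional relationship: how far the part centre sits from the preset test position.
    offset = (cx - preset_center[0], cy - preset_center[1])
    return ExposurePlan(field, aec_roi, offset)


if __name__ == "__main__":
    detected = {"left_shoulder": (410.0, 260.0), "right_shoulder": (590.0, 262.0),
                "left_hip": (430.0, 620.0), "right_hip": (570.0, 618.0)}
    plan = plan_exposure(detected, ["left_shoulder", "right_shoulder", "left_hip", "right_hip"],
                         preset_center=(512.0, 440.0))
    print(plan.field, plan.aec_roi, plan.offset)
```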
在本发明的一个实施例中，所述摄像头组件1有以下几种组成方案：In an embodiment of the present invention, the camera assembly 1 has the following composition schemes (summarized in an illustrative configuration sketch after this list):
(1)所述摄像头组件1包括第一RGB摄像头11,第一RGB摄像头11为普通RGB摄像头,可用于一般距离上获取被拍摄物及其周边环境的图像信息,该方案适用于摄像头与被拍摄物之间的距离适中且变化不大的应用场景,可提供基本的摆位辅助和自动曝光控制辅助功能;(1) The camera assembly 1 includes a first RGB camera 11, and the first RGB camera 11 is an ordinary RGB camera, which can be used to obtain image information of the object to be photographed and its surrounding environment from a general distance. This solution is suitable for the camera and the photographed object. For application scenarios where the distance between objects is moderate and does not change much, it can provide basic positioning assistance and automatic exposure control assistance functions;
(2)所述摄像头组件1包括第一RGB摄像头11,所述第一RGB摄像头11为变焦RGB摄像头,可以调节焦距,以代替多个不同焦距的摄像头,该方案结构紧凑,可同时满足近、中、远距离的图像获取需求,可提供基本的摆位辅助和自动曝光控制辅助功能;(2) The camera assembly 1 includes a first RGB camera 11, the first RGB camera 11 is a zoom RGB camera, and the focal length can be adjusted to replace a plurality of cameras with different focal lengths. For medium and long-distance image acquisition needs, it can provide basic positioning assistance and automatic exposure control assistance functions;
(3)所述摄像头组件1包括第一RGB摄像头11和第二RGB摄像头13,第一RGB摄像头11为普通RGB摄像头,可用于一般距离上获取被拍摄物及其周边环境的图像信息,所述第二RGB摄像头13为广角RGB摄像头,可用于在摄像头距离被拍摄物太近时获取被拍摄物整体及其周边环境的图像信息,该方案适用于摄像头与被拍摄物之间的距离较近至适中的应用场景,能够覆盖大部分拍片场景,可提供基本的摆位辅助和自动曝光控制辅助功能;(3) The camera assembly 1 includes a first RGB camera 11 and a second RGB camera 13. The first RGB camera 11 is an ordinary RGB camera, which can be used to obtain image information of the object and its surrounding environment from a general distance. The second RGB camera 13 is a wide-angle RGB camera, which can be used to obtain image information of the entire object and its surrounding environment when the camera is too close to the object to be photographed. Moderate application scenarios, which can cover most filming scenarios, and can provide basic setup assistance and automatic exposure control assistance functions;
(4)所述摄像头组件1包括第一RGB摄像头11和第三RGB摄像头,第一RGB摄像头11为普通RGB摄像头,可用于一般距离上获取被拍摄物及其周边环境的图像信息,所述第三RGB摄像头为长焦RGB摄像头,可用于在摄像头距离被拍摄物较远或被拍摄部位较小时获取拍摄部位的高分辨率图像信息,该方案适用于摄像头与被拍摄物之间的距离适中至较远的应用场景,能够针对较小的拍摄部位提供局部的高分辨率图像,可提供基本的摆位辅助和自动曝光控制辅助功能;(4) The camera assembly 1 includes a first RGB camera 11 and a third RGB camera. The first RGB camera 11 is an ordinary RGB camera, which can be used to obtain image information of the object and its surrounding environment from a general distance. The triple RGB camera is a telephoto RGB camera, which can be used to obtain high-resolution image information of the shooting part when the camera is far away from the subject or the part to be shot is small. This solution is suitable for the distance between the camera and the subject being moderate to For distant application scenarios, it can provide local high-resolution images for small shooting parts, and can provide basic positioning assistance and automatic exposure control assistance functions;
(5)所述摄像头组件1包括第一RGB摄像头11、第二RGB摄像头13、 第三RGB摄像头和深度摄像头12,第一RGB摄像头11为普通RGB摄像头,所述第二RGB摄像头13为广角RGB摄像头,所述第三RGB摄像头为长焦RGB摄像头,所述深度摄像头12为ToF摄像头或结构光摄像头,所述深度摄像头12用于获取待拍片的目标及其周边环境的深度信息,该方案提供最完整和精确的结果,适用于绝大多数DR拍片的应用场景。需要说明的是所述摄像头组件1的组成方案包括但不限于上述,深度摄像头12还可与上述方案自由组合。(5) The camera assembly 1 includes a first RGB camera 11, a second RGB camera 13, a third RGB camera and a depth camera 12, the first RGB camera 11 is an ordinary RGB camera, and the second RGB camera 13 is a wide-angle RGB camera camera, the third RGB camera is a telephoto RGB camera, the depth camera 12 is a ToF camera or a structured light camera, and the depth camera 12 is used to obtain the depth information of the target to be filmed and its surrounding environment, and the solution provides The most complete and accurate results for the vast majority of DR filming applications. It should be noted that the composition scheme of the camera assembly 1 includes but is not limited to the above, and the depth camera 12 can also be freely combined with the above scheme.
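For readability, the five composition schemes above can be summarized as plain data, as in the sketch below; the dataclass and its field names are assumptions made for this summary only and are not part of the patent.

```python
# Illustrative data-only summary of the camera-assembly schemes described above.
from dataclasses import dataclass
from typing import Tuple


@dataclass(frozen=True)
class CameraScheme:
    cameras: Tuple[str, ...]   # cameras present in the assembly
    suited_distance: str       # camera-to-subject distance range the scheme targets
    notes: str


SCHEMES = (
    CameraScheme(("ordinary RGB",), "moderate, little variation",
                 "basic positioning and AEC assistance"),
    CameraScheme(("zoom RGB",), "near to far",
                 "one adjustable-focal-length camera replaces several fixed ones"),
    CameraScheme(("ordinary RGB", "wide-angle RGB"), "close to moderate",
                 "wide-angle camera keeps the whole subject in view at short range"),
    CameraScheme(("ordinary RGB", "telephoto RGB"), "moderate to far",
                 "telephoto camera gives high-resolution detail of small parts"),
    CameraScheme(("ordinary RGB", "wide-angle RGB", "telephoto RGB", "depth (ToF/structured light)"),
                 "most DR scenarios", "most complete and accurate option"),
)

for scheme in SCHEMES:
    print(", ".join(scheme.cameras), "->", scheme.suited_distance)
```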
在本发明的一个实施例中,所述摄像头组件1包括第一RGB摄像头11、第二RGB摄像头13和深度摄像头12,第一RGB摄像头11为普通RGB摄像头,所述第二RGB摄像头13为广角RGB摄像头,所述深度摄像头12为ToF摄像头,用于获取待拍片的目标及其周边环境的深度信息,ToF摄像头可以获取被拍摄物及其周边环境的三维立体信息,能够帮助系统更精确地感知被拍摄者的摆位情况,本实施例所选用的摄像头及其在不同距离时的视野情况如图2所示,第一RGB摄像头11、第二RGB摄像头13和深度摄像头12的最大拍摄角度分别为63°、80°和100°,在距离摄像头组件前方0.8m处,第二RGB摄像头13拍摄范围最广,在距离摄像头组件前方1.8m处,第二RGB摄像头13拍摄范围还是最广,由于范围较大未图示画出,第一RGB摄像头11拍摄范围最小,因而当需要拍摄时,第二RGB摄像头13能够最先捕捉到待拍片的目标,然后所述拍片系统便启动深度摄像头12和第一RGB摄像头11,以进行下一步的精准拍摄定位。在本实施例中,控制平台为嵌入式高性能计算平台,包括CPU(中央处理单元)、MCU(微控制器单元)、通用并行计算单元以及网络通信接口,其满足体积小、功耗低、硬件接口丰富、计算能力强大等特点,优选采用英伟达公司出品的Jetson Nano芯片。处理运算模块由多个部分组成,主要有:1)基础单元,包括图像采集、摄像头校准、不同摄像头的数据流同步和映射、摆位要求定义;2)中间层,利用传统图像处理方法和神经网络方法,实现对被拍摄物剪影、关键点的提取,参见图3,比如人体骨骼点作为关键点来定位预设的放射区域;3)应用层,基于拍片、摆位要求和提取到的信息,确定曝光区域,指导摆位,并进而实现控制束光器调节、计算AEC感兴趣区等功能,如图4所示;4)自动进化、升级模块,将视频流进行整理、上传到云空间,神经网络在线训练升级,再将升级文件下发到终端设备。无人化智能拍片系统还包括平板探测 器,平板探测器可与上述控制平台3通信连接,被拍摄的目标根据辅助模块的引导移动至束光器2和平板探测器之间规定的区域,以实现智能拍片。In an embodiment of the present invention, the camera assembly 1 includes a first RGB camera 11, a second RGB camera 13 and a depth camera 12, the first RGB camera 11 is a common RGB camera, and the second RGB camera 13 is a wide-angle camera RGB camera, the depth camera 12 is a ToF camera, which is used to obtain the depth information of the target to be filmed and its surrounding environment. The ToF camera can obtain the three-dimensional information of the object and its surrounding environment, which can help the system to perceive more accurately The position of the subject, the camera selected in this embodiment and its field of view at different distances are shown in Figure 2. The maximum shooting angles of the first RGB camera 11, the second RGB camera 13 and the depth camera 12 are respectively It is 63°, 80° and 100°. The second RGB camera 13 has the widest shooting range at 0.8m from the front of the camera assembly, and the second RGB camera 13 has the widest shooting range at 1.8m from the front of the camera assembly. The range is large and not shown in the figure, and the first RGB camera 11 has the smallest shooting range. Therefore, when shooting is required, the second RGB camera 13 can first capture the target to be filmed, and then the filming system will activate the depth camera 12 and the camera. The first RGB camera 11 is used for precise shooting and positioning in the next step. In this embodiment, the control platform is an embedded high-performance computing platform, including a CPU (central processing unit), an MCU (microcontroller unit), a general parallel computing unit and a network communication interface, which meet the requirements of small size, low power consumption, With rich hardware interfaces and powerful computing power, the Jetson Nano chip produced by NVIDIA is preferred. 
The processing operation module consists of multiple parts, mainly: 1) a basic unit, covering image acquisition, camera calibration, synchronization and mapping of the data streams of the different cameras, and definition of the positioning requirements; 2) a middle layer, which uses traditional image processing methods and neural network methods to extract the silhouette and key points of the photographed object (see Figure 3), for example using human skeleton points as key points to locate the preset radiation area; 3) an application layer, which, based on the filming and positioning requirements and the extracted information, determines the exposure area, guides the positioning, and then controls the beamer adjustment, calculates the AEC region of interest and so on, as shown in Figure 4; 4) an automatic evolution and upgrade module, which organizes video streams and uploads them to cloud space, where the neural network is trained and upgraded online, after which the upgrade file is sent back to the terminal device. The unmanned intelligent filming system also includes a flat panel detector; the flat panel detector can be communicatively connected to the above-mentioned control platform 3, and the target being photographed moves, under the guidance of the auxiliary module, to the specified area between the beamer 2 and the flat panel detector, so as to realize intelligent filming.
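To relate the quoted maximum shooting angles to coverage, the following back-of-the-envelope sketch uses the idealized pinhole relation w = 2 * d * tan(theta / 2) to estimate the linear width each camera could cover at 0.8 m and 1.8 m. This is added purely as illustrative geometry: the ranges actually drawn in Figure 2 also depend on sensor format, resolution and mounting, which are ignored here.

```python
# Rough pinhole-geometry illustration of the stated maximum shooting angles.
# Note: Figure 2 compares the drawn shooting ranges, which need not coincide
# with this pure-angle estimate (sensor aspect ratio and distortion are ignored).
import math


def coverage_width(angle_deg: float, distance_m: float) -> float:
    """Approximate linear width covered at distance_m by a field of view of angle_deg."""
    return 2.0 * distance_m * math.tan(math.radians(angle_deg) / 2.0)


for distance in (0.8, 1.8):
    for name, angle in (("first RGB (63 deg)", 63.0),
                        ("second RGB, wide-angle (80 deg)", 80.0),
                        ("depth camera (100 deg)", 100.0)):
        print(f"{name} at {distance} m: ~{coverage_width(angle, distance):.2f} m wide")
```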
在本发明的一个实施例中,提供了一种无人化智能拍片方法,包括以下步骤:In one embodiment of the present invention, an unmanned intelligent filming method is provided, comprising the following steps:
S1、通过一个或多个摄像头获取待拍片的目标及其周边环境的图像信息;S1. Obtain the image information of the target to be filmed and its surrounding environment through one or more cameras;
S2、通过处理运算模块处理所述摄像头获取的图像信息,以得到目标当前位置与预设测试位置的位置关系,以及控制所述摄像头所需的运行参数;S2, processing the image information obtained by the camera through the processing operation module, to obtain the positional relationship between the current position of the target and the preset test position, and the operating parameters required for controlling the camera;
S3、若所述位置关系超出预设阈值范围,控制平台根据S2步骤中的运行参数调整所述摄像头,辅助模块根据所述位置关系引导所述目标移动至预设测试位置,执行S1-S3,否则,执行S4-S5;S3, if the positional relationship exceeds the preset threshold range, the control platform adjusts the camera according to the operating parameters in step S2, the auxiliary module guides the target to move to the preset test position according to the positional relationship, and executes S1-S3, Otherwise, execute S4-S5;
S4、通过处理运算模块处理当前所述摄像头获取的图像信息,以得到束光器和X射线源的运行参数;S4, processing the image information currently obtained by the camera through the processing operation module to obtain the operating parameters of the beamer and the X-ray source;
S5、所述控制平台根据S4步骤中的运行参数控制所述束光器和所述X射线源对待拍片的目标进行放射。S5. The control platform controls the beamer and the X-ray source to radiate the target to be filmed according to the operating parameters obtained in step S4 (the loop structure of steps S1-S5 is sketched below).
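Read as control flow, steps S1-S5 form an acquire-evaluate-guide loop that repeats until the target is within the preset threshold, and then performs a single exposure. The sketch below captures that flow under loose assumptions: the callable parameters are placeholders standing in for the camera assembly, processing operation module, auxiliary module and control platform, and are not interfaces defined by the patent.

```python
# Minimal control-loop sketch of steps S1-S5; all callables are placeholders.
from typing import Callable, Tuple


def run_filming_workflow(acquire_images: Callable[[], object],
                         evaluate: Callable[[object], Tuple[float, dict]],
                         guide_target: Callable[[float], None],
                         adjust_cameras: Callable[[dict], None],
                         compute_exposure: Callable[[object], dict],
                         expose: Callable[[dict], None],
                         threshold: float,
                         max_rounds: int = 20) -> bool:
    for _ in range(max_rounds):
        images = acquire_images()                    # S1: image info of target and surroundings
        deviation, camera_params = evaluate(images)  # S2: positional relationship + camera parameters
        if deviation > threshold:                    # S3: still outside the preset threshold range
            adjust_cameras(camera_params)
            guide_target(deviation)                  # auxiliary module guides the target, then repeat S1-S3
            continue
        exposure_params = compute_exposure(images)   # S4: beamer and X-ray source parameters
        expose(exposure_params)                      # S5: control platform triggers the exposure
        return True
    return False                                     # gave up: target never reached the preset position
```

In the actual system the positional relationship would be richer than one scalar (for example a 2D or 3D offset plus posture information); a single deviation value is used here only to keep the loop readable.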
本发明提供的无人化智能拍片系统及拍片方法解决了目前拍片关键参数完全由拍摄技师决定导致的准确性、稳定性和效率较低的问题,实现了根据医生指定的拍摄部位,自动识别患者当前姿态并指引其正确摆位、自动调整束光器的开口使得X射线照射区域刚好覆盖拍摄部位、配合AEC技术自动控制曝光剂量,从而大幅提高拍片效率、有效降低不必要的辐射剂量,并且可以减少不必要的医患接触,推动医疗影像设备在无人化、智能化的方向上再进一步。The unmanned intelligent filming system and filming method provided by the present invention solve the problems of low accuracy, stability and efficiency caused by the current key parameters of filming being completely determined by the shooting technician, and realize the automatic identification of patients according to the shooting position designated by the doctor The current posture and the correct positioning are guided, the opening of the beamer is automatically adjusted so that the X-ray irradiation area just covers the shooting site, and the exposure dose is automatically controlled with the AEC technology, thereby greatly improving the filming efficiency, effectively reducing unnecessary radiation dose, and can Reduce unnecessary contact between doctors and patients, and promote medical imaging equipment to go further in the direction of unmanned and intelligent.
以上所述仅为本发明的较佳实施例,并不用以限制本发明,凡在本发明的精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本发明的保护范围之内。The above are only preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention shall be included in the protection of the present invention. within the range.

Claims (10)

  1. 一种无人化智能拍片系统,其特征在于,包括An unmanned intelligent filming system, characterized in that it includes
    射线源,所述射线源用于射出射线;a ray source for emitting rays;
    摄像头组件(1),所述摄像头组件(1)用于获取待拍片的目标及其周边环境的图像信息;a camera assembly (1), the camera assembly (1) is used to obtain image information of a target to be filmed and its surrounding environment;
    束光器(2),所述束光器(2)用于调节射线的射野以及切换射线的滤过;a beamer (2), the beamer (2) is used for adjusting the radiation field and switching the filtering of the radiation;
    控制平台(3),所述控制平台(3)与所述摄像头组件(1)和所述束光器(2)均电连接,所述控制平台(3)设有处理运算模块和与辅助模块,所述处理运算模块能够根据所述摄像头组件(1)获取的图像信息得到所述摄像头组件(1)和所述束光器(2)的运行参数,以及目标当前位置与预设测试位置的位置关系,所述辅助模块能够根据所述位置关系引导所述目标移动至预设测试位置,所述控制平台(3)能够根据所述运行参数控制所述射线源、摄像头组件(1)和所述束光器(2)的工作参数。A control platform (3), the control platform (3) is electrically connected with the camera assembly (1) and the beamer (2), and the control platform (3) is provided with a processing operation module and an auxiliary module , the processing operation module can obtain the operating parameters of the camera assembly (1) and the beamer (2) according to the image information obtained by the camera assembly (1), and the difference between the current position of the target and the preset test position a positional relationship, the auxiliary module can guide the target to move to a preset test position according to the positional relationship, and the control platform (3) can control the ray source, the camera assembly (1) and all the The working parameters of the beamer (2).
  2. 根据权利要求1所述的无人化智能拍片系统,其特征在于,所述辅助模块包括用于语音引导待拍片的目标移动的扬声器和/或用于图文引导待拍片的目标移动的显示屏。The unmanned intelligent filming system according to claim 1, wherein the auxiliary module comprises a speaker for guiding the movement of the target to be filmed by voice and/or a display screen for guiding the movement of the target to be filmed with pictures and texts .
  3. 根据权利要求1所述的无人化智能拍片系统,其特征在于,所述摄像头组件(1)包括第一RGB摄像头(11),所述第一RGB摄像头(11)为变焦RGB摄像头。The unmanned intelligent filming system according to claim 1, wherein the camera assembly (1) comprises a first RGB camera (11), and the first RGB camera (11) is a zoom RGB camera.
  4. 根据权利要求1所述的无人化智能拍片系统,其特征在于,所述摄像头组件(1)包括第一RGB摄像头(11)和第二RGB摄像头(13),所述第二RGB摄像头(13)为广角RGB摄像头。The unmanned intelligent filming system according to claim 1, wherein the camera assembly (1) comprises a first RGB camera (11) and a second RGB camera (13), and the second RGB camera (13) ) is a wide-angle RGB camera.
  5. 根据权利要求1所述的无人化智能拍片系统,其特征在于,所述摄像头组件(1)包括第一RGB摄像头(11)和第三RGB摄像头,所述第三RGB摄像头为长焦RGB摄像头。The unmanned intelligent filming system according to claim 1, wherein the camera assembly (1) comprises a first RGB camera (11) and a third RGB camera, and the third RGB camera is a telephoto RGB camera .
  6. 根据权利要求3-5任一项所述的无人化智能拍片系统,其特征在于,所述摄像头组件(1)还包括深度摄像头(12),所述深度摄像头(12)用于 获取待拍片的目标及其周边环境的深度信息。The unmanned intelligent filming system according to any one of claims 3-5, wherein the camera assembly (1) further comprises a depth camera (12), and the depth camera (12) is used to obtain the film to be filmed The depth information of the target and its surrounding environment.
  7. 根据权利要求6所述的无人化智能拍片系统,其特征在于,所述深度摄像头(12)为ToF摄像头或结构光摄像头。The unmanned intelligent filming system according to claim 6, wherein the depth camera (12) is a ToF camera or a structured light camera.
  8. 根据权利要求1所述的无人化智能拍片系统,其特征在于,所述摄像头组件(1)和所述控制平台(3)均设置在所述束光器(2)上。The unmanned intelligent filming system according to claim 1, wherein the camera assembly (1) and the control platform (3) are both arranged on the beamer (2).
  9. 根据权利要求1所述的无人化智能拍片系统,其特征在于,所述控制平台(3)还设有与外部设备通信的接口。The unmanned intelligent filming system according to claim 1, characterized in that, the control platform (3) is further provided with an interface for communicating with external devices.
  10. 一种无人化智能拍片方法,其特征在于,包括以下步骤:An unmanned intelligent filming method, characterized in that it comprises the following steps:
    S1、通过一个或多个摄像头获取待拍片的目标及其周边环境的图像信息;S1. Obtain the image information of the target to be filmed and its surrounding environment through one or more cameras;
    S2、通过处理运算模块处理所述摄像头获取的图像信息,以得到目标当前位置与预设测试位置的位置关系,以及控制所述摄像头所需的运行参数;S2, processing the image information obtained by the camera through the processing operation module, to obtain the positional relationship between the current position of the target and the preset test position, and the operating parameters required for controlling the camera;
    S3、若所述位置关系超出预设阈值范围,控制平台根据S2步骤中的运行参数调整所述摄像头,辅助模块根据所述位置关系引导所述目标移动至预设测试位置,执行S1-S3,否则,执行S4-S5;S3, if the positional relationship exceeds the preset threshold range, the control platform adjusts the camera according to the operating parameters in step S2, the auxiliary module guides the target to move to the preset test position according to the positional relationship, and executes S1-S3, Otherwise, execute S4-S5;
    S4、通过处理运算模块处理当前所述摄像头获取的图像信息,以得到束光器和X射线源的运行参数;S4, processing the image information currently obtained by the camera through the processing operation module to obtain the operating parameters of the beamer and the X-ray source;
    S5、所述控制平台根据S4步骤中的运行参数控制所述束光器和所述X射线源对待拍片的目标进行放射。S5. The control platform controls the beamer and the X-ray source to radiate the target to be filmed according to the operating parameters in the step S4.
PCT/CN2020/132728 2020-11-30 2020-11-30 Unmanned smart radiographing system and radiographing method WO2022110132A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/132728 WO2022110132A1 (en) 2020-11-30 2020-11-30 Unmanned smart radiographing system and radiographing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/132728 WO2022110132A1 (en) 2020-11-30 2020-11-30 Unmanned smart radiographing system and radiographing method

Publications (1)

Publication Number Publication Date
WO2022110132A1 true WO2022110132A1 (en) 2022-06-02

Family

ID=81753874

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/132728 WO2022110132A1 (en) 2020-11-30 2020-11-30 Unmanned smart radiographing system and radiographing method

Country Status (1)

Country Link
WO (1) WO2022110132A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070025525A1 (en) * 2005-07-12 2007-02-01 Chaim Gilath Means for improving patient positioning during X-ray imaging
CN203576524U (en) * 2013-11-06 2014-05-07 上海西门子医疗器械有限公司 X-ray camera equipment and auxiliary positioning system thereof
CN109924994A (en) * 2019-04-02 2019-06-25 晓智科技(成都)有限公司 A kind of x photo-beat take the photograph during detection position automatic calibrating method and system
CN110960243A (en) * 2019-12-28 2020-04-07 上海健康医学院 Self-service radiation photograph imaging system
CN111528880A (en) * 2020-05-08 2020-08-14 江苏康众数字医疗科技股份有限公司 X-ray imaging system and method
CN111870268A (en) * 2020-07-30 2020-11-03 上海联影医疗科技有限公司 Method and system for determining target position information of beam limiting device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070025525A1 (en) * 2005-07-12 2007-02-01 Chaim Gilath Means for improving patient positioning during X-ray imaging
CN203576524U (en) * 2013-11-06 2014-05-07 上海西门子医疗器械有限公司 X-ray camera equipment and auxiliary positioning system thereof
CN109924994A (en) * 2019-04-02 2019-06-25 晓智科技(成都)有限公司 A kind of x photo-beat take the photograph during detection position automatic calibrating method and system
CN110960243A (en) * 2019-12-28 2020-04-07 上海健康医学院 Self-service radiation photograph imaging system
CN111528880A (en) * 2020-05-08 2020-08-14 江苏康众数字医疗科技股份有限公司 X-ray imaging system and method
CN111870268A (en) * 2020-07-30 2020-11-03 上海联影医疗科技有限公司 Method and system for determining target position information of beam limiting device

Similar Documents

Publication Publication Date Title
CN112386270A (en) Unmanned intelligent shooting system and shooting method
KR102085178B1 (en) The method and apparatus for providing location related information of a target object on a medical device
US11562471B2 (en) Arrangement for generating head related transfer function filters
CN112311965B (en) Virtual shooting method, device, system and storage medium
CN104287756B (en) Radioscopic image acquisition methods and device
CN105832290A (en) Intra-oral image acquisition alignment
US10849589B2 (en) X-ray imaging apparatus and control method thereof
US20100290707A1 (en) Image acquisition method, device and radiography system
CN104068886A (en) Medical photography system and photography method implemented by same
CN112450955A (en) CT imaging automatic dose adjusting method, CT imaging method and system
CN111528880A (en) X-ray imaging system and method
CN111467690A (en) Pulse exposure image acquisition system and method for double-flat-panel detector
CN107334487A (en) A kind of medical image system and its scan method
CN110960243A (en) Self-service radiation photograph imaging system
CN110811654A (en) X-ray exposure control system and control method thereof
JP6970203B2 (en) Computed tomography and positioning of anatomical structures to be imaged
US11207048B2 (en) X-ray image capturing apparatus and method of controlling the same
CN111161297B (en) Method and device for determining edge of beam limiter and X-ray system
US10702714B2 (en) Radiation therapy system
WO2022110132A1 (en) Unmanned smart radiographing system and radiographing method
CN104414660A (en) DR image obtaining and splicing method and system
US20230102782A1 (en) Positioning method, processing device, radiotherapy system, and storage medium
US10715787B1 (en) Depth imaging system and method for controlling depth imaging system thereof
KR102150143B1 (en) Positioning of partial volumes of an anatomy
CN110211681B (en) Remote control device and method for treatment bed

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20963020

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20963020

Country of ref document: EP

Kind code of ref document: A1