WO2022028439A1 - Medical device control method and system - Google Patents

Medical device control method and system

Info

Publication number
WO2022028439A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
medical
motion
target object
model
Prior art date
Application number
PCT/CN2021/110409
Other languages
English (en)
French (fr)
Inventor
涂佳丽
韩业成
邹林融
周毅峰
衣星越
Original Assignee
上海联影医疗科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN202010767758.3A external-priority patent/CN111772655A/zh
Priority claimed from CN202011114024.1A external-priority patent/CN112274166A/zh
Priority claimed from CN202011114737.8A external-priority patent/CN112071405A/zh
Application filed by 上海联影医疗科技股份有限公司
Priority to EP21852413.0A (published as EP4176812A4)
Publication of WO2022028439A1
Priority to US18/163,923 (published as US20230172577A1)

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7271Specific aspects of physiological measurement analysis
    • A61B5/7285Specific aspects of physiological measurement analysis for synchronising or triggering a physiological measurement or image acquisition with a physiological event or waveform, e.g. an ECG signal
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/54Control of apparatus or devices for radiation diagnosis
    • A61B6/547Control of apparatus or devices for radiation diagnosis involving tracking of position of the device or parts of the device
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/0033Features or image-related aspects of imaging apparatus classified in A61B5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/08Auxiliary means for directing the radiation beam to a particular spot, e.g. using light beams
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/44Constructional features of apparatus for radiation diagnosis
    • A61B6/4429Constructional features of apparatus for radiation diagnosis related to the mounting of source units and detector units
    • A61B6/4435Constructional features of apparatus for radiation diagnosis related to the mounting of source units and detector units the source unit and the detector unit being coupled by a rigid structure
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/44Constructional features of apparatus for radiation diagnosis
    • A61B6/4494Means for identifying the diagnostic device
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/46Arrangements for interfacing with the operator or the patient
    • A61B6/461Displaying means of special interest
    • A61B6/466Displaying means of special interest adapted to display 3D data
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/54Control of apparatus or devices for radiation diagnosis
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/54Control of apparatus or devices for radiation diagnosis
    • A61B6/545Control of apparatus or devices for radiation diagnosis involving automatic set-up of acquisition parameters
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/58Testing, adjusting or calibrating thereof
    • A61B6/587Alignment of source unit to detector unit
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • G06T13/203D [Three Dimensional] animation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/44Constructional features of apparatus for radiation diagnosis
    • A61B6/4429Constructional features of apparatus for radiation diagnosis related to the mounting of source units and detector units
    • A61B6/4452Constructional features of apparatus for radiation diagnosis related to the mounting of source units and detector units the source unit and the detector unit being able to move relative to each other
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/54Control of apparatus or devices for radiation diagnosis
    • A61B6/542Control of apparatus or devices for radiation diagnosis involving control of exposure
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00Indexing scheme for image generation or computer graphics
    • G06T2210/41Medical
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03Recognition of patterns in medical or anatomical images
    • G06V2201/034Recognition of patterns in medical or anatomical images of medical instruments

Definitions

  • the present application generally relates to the field of medical equipment, and more particularly, to a method and system for controlling medical equipment.
  • a medical device may be an automated device that performs medical diagnosis or research tasks.
  • Medical equipment may include: diagnostic equipment for diagnosing the physiological state of a target object (eg, a patient), such as X-ray diagnostic equipment, ultrasound diagnostic equipment, functional examination equipment, endoscopy equipment, nuclear medicine equipment, laboratory diagnostic equipment, and pathological diagnostic equipment; therapeutic equipment for treating the target object (eg, the patient), such as operating beds, contact therapy machines, superficial therapy machines, deep therapy machines, semiconductor cold knives, gas cold knives, solid cold knives, cardiac defibrillation and pacing equipment, ventilators, and ultrasonic nebulizers; and auxiliary equipment used to assist the diagnostic equipment and/or therapeutic equipment in diagnosing and/or treating the target object, such as disinfection and sterilization equipment, refrigeration equipment, central suction and oxygen supply systems, air conditioning equipment, pharmaceutical machinery, blood bank equipment, medical data processing equipment, and medical video and photography equipment.
  • One of the embodiments of the present application provides a medical device control method, including: acquiring relevant information of the medical device and/or relevant information of a target object; and controlling the medical device based on the relevant information of the medical device and/or the relevant information of the target object.
  • the medical device includes a scanning device; the acquiring relevant information of the medical device and/or the relevant information of the target object includes: acquiring position information of one or more ionization chambers in the scanning device, the scanning device being used for scanning the target object; the controlling the medical device based on the relevant information of the medical device and/or the relevant information of the target object includes: determining, based on the position information of the one or more ionization chambers, a detection region of at least one of the one or more ionization chambers; determining projection data for a projection device, the projection data including image data corresponding to the detection region of the at least one ionization chamber; and controlling the projection device to project the projection data onto the target object.
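  • As an illustrative sketch of the item above (not the application's implementation: the class and function names, the field-of-view size, and the pixel scale are assumptions, and NumPy is used only for convenience), the detection region of each ionization chamber could be derived from its position information and rasterized into projection data roughly as follows:

```python
# Hypothetical sketch: turn ionization-chamber position information into
# projection data that marks each chamber's detection area.
from dataclasses import dataclass
from typing import List, Tuple
import numpy as np

@dataclass
class IonizationChamber:
    chamber_id: str
    center_xy: Tuple[float, float]   # chamber center in detector-plane coordinates (mm)
    size_xy: Tuple[float, float]     # width/height of the detection area (mm)

def detection_region(ch: IonizationChamber) -> Tuple[float, float, float, float]:
    """Return (x_min, y_min, x_max, y_max) of the chamber's detection area."""
    cx, cy = ch.center_xy
    w, h = ch.size_xy
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

def build_projection_image(chambers: List[IonizationChamber],
                           fov_mm=(430.0, 430.0), px_per_mm=1.0) -> np.ndarray:
    """Rasterize the detection areas into an image that a projector could cast
    onto the target object (0 = background, 255 = detection area).
    Coordinates are assumed to be non-negative and inside the field of view."""
    h_px, w_px = int(fov_mm[1] * px_per_mm), int(fov_mm[0] * px_per_mm)
    img = np.zeros((h_px, w_px), dtype=np.uint8)
    for ch in chambers:
        x0, y0, x1, y1 = detection_region(ch)
        img[int(y0 * px_per_mm):int(y1 * px_per_mm),
            int(x0 * px_per_mm):int(x1 * px_per_mm)] = 255
    return img

# Example: three chest-stand chambers laid out left / center / right.
chambers = [IonizationChamber("L", (120, 215), (80, 100)),
            IonizationChamber("C", (215, 215), (80, 100)),
            IonizationChamber("R", (310, 215), (80, 100))]
projection = build_projection_image(chambers)
```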
  • the projection data further includes image data corresponding to a region of interest of the target object to be scanned.
  • the method further includes: acquiring a reference image of the target object, the reference image being captured by a camera after the projection device projects the projection data onto the target object to be scanned; identifying a first region in the reference image, the first region corresponding to the region of interest of the target object to be scanned; identifying a second region in the reference image, the second region corresponding to the detection region of the at least one ionization chamber projected onto the target object; and determining, based on the first region and the second region, whether the at least one ionization chamber includes one or more candidate ionization chambers, wherein the detection regions of the one or more candidate ionization chambers are covered by the region of interest of the target object to be scanned.
  • the method further comprises: in response to determining that the at least one ionization chamber does not include any candidate ionization chamber, causing a terminal device to generate a prompt message; or, in response to determining that the at least one ionization chamber does not include any candidate ionization chamber, moving one or more reference ionization chambers of the at least one ionization chamber relative to the region of interest of the target object.
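  • The coverage test described in the two preceding items can be pictured with the following minimal sketch (the binary masks, function name, and coverage threshold are assumptions made for illustration, not taken from the application):

```python
# Hypothetical sketch: decide which chambers are "candidates", i.e. whose
# detection area is covered by the region of interest seen in the reference image.
import numpy as np

def candidate_chambers(roi_mask: np.ndarray,
                       chamber_masks: dict,
                       min_coverage: float = 0.99) -> list:
    """roi_mask      -- bool array, True inside the region of interest to be scanned
    chamber_masks -- {chamber_id: bool array}, detection areas found in the reference image
    Returns the ids of chambers whose detection area is (almost) fully inside the ROI."""
    covered = []
    for cid, mask in chamber_masks.items():
        area = mask.sum()
        if area and (mask & roi_mask).sum() / area >= min_coverage:
            covered.append(cid)
    return covered

# If the returned list is empty, the workflow above would prompt the operator
# or move a reference ionization chamber relative to the region of interest.
```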
  • the method further comprises: obtaining identification information for one or more target ionization chambers selected from the at least one ionization chamber; and based on the identification information, adjusting the projection data such that a first characteristic value of the image data corresponding to the detection areas of the one or more target ionization chambers is different from a second characteristic value of the image data corresponding to the detection areas of the other ionization chambers, wherein the first characteristic value and the second characteristic value correspond to the same image feature.
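  • For instance, if the shared image feature is color, the adjustment above could be sketched as follows (the specific RGB values and the function name are arbitrary assumptions):

```python
# Hypothetical sketch: draw target chambers and the remaining chambers in
# different colors so the projected marks are visually distinguishable.
import numpy as np

def colorize_chambers(shape_hw, chamber_masks: dict, target_ids: set) -> np.ndarray:
    """shape_hw      -- (height, width) of the projection image
    chamber_masks -- {chamber_id: bool array} detection-area masks
    target_ids    -- ids of the selected target ionization chambers"""
    rgb = np.zeros((*shape_hw, 3), dtype=np.uint8)
    for cid, mask in chamber_masks.items():
        color = (0, 255, 0) if cid in target_ids else (255, 255, 0)  # green vs. yellow
        rgb[mask] = color
    return rgb
```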
  • acquiring the relevant information of the medical device and/or the relevant information of the target object includes: acquiring a virtual model of the medical device, wherein the medical device includes at least one movable first component and, correspondingly, the virtual model includes a second component that simulates the first component, and the equipment coordinate system in which the first component is located has a mapping relationship with the model coordinate system in which the second component is located; and acquiring current motion information of one of the first component of the medical device and the second component of the virtual model; the controlling the medical device based on the relevant information of the medical device and/or the relevant information of the target object includes: controlling the other of the first component of the medical diagnosis and treatment equipment and the second component of the virtual model to perform the same motion as the one of the first component and the second component from which the motion instruction is obtained.
  • the acquiring current motion information of one of the first component of the medical device and the second component of the virtual model includes: acquiring the current motion information of the first component of the medical diagnosis and treatment equipment; wherein, before the current motion information of the first component of the medical diagnosis and treatment equipment is acquired, the first component receives motion control information indicating that the current motion will be executed.
  • the acquiring current motion information of one of the first component of the medical device and the second component of the virtual model includes: acquiring the current motion information of the second component of the virtual model; wherein, before the current motion information of the second component of the virtual model is acquired, the second component receives motion control information indicating that the current motion will be executed.
  • the virtual model is displayed on a display interface, and real-time position information of the second component under the current motion is also displayed on the display interface and updated with the motion.
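  • The items above rely on a mapping between the equipment coordinate system and the model coordinate system so that a motion commanded on one side can be replayed on the other. A minimal sketch of that idea, assuming the mapping can be expressed as a homogeneous 4x4 transform (the class, function names, and transform are hypothetical), is:

```python
# Hypothetical sketch: map poses between the physical component ("device")
# and its virtual counterpart ("model"), and mirror a motion to the other side.
import numpy as np

class ModelDeviceMapping:
    def __init__(self, device_to_model: np.ndarray):
        self.d2m = device_to_model                 # homogeneous 4x4 transform
        self.m2d = np.linalg.inv(device_to_model)

    def to_model(self, device_xyz):
        p = np.append(np.asarray(device_xyz, dtype=float), 1.0)
        return (self.d2m @ p)[:3]

    def to_device(self, model_xyz):
        p = np.append(np.asarray(model_xyz, dtype=float), 1.0)
        return (self.m2d @ p)[:3]

def mirror_motion(mapping: ModelDeviceMapping, source: str, target_xyz):
    """Replicate a commanded motion so that the physical component and the
    virtual component perform the same motion."""
    if source == "device":                 # physical part moved -> update the model
        return ("model", mapping.to_model(target_xyz))
    return ("device", mapping.to_device(target_xyz))   # model moved -> command the device
```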
  • the acquiring the relevant information of the medical equipment and/or the relevant information of the target object includes: acquiring a model of the X-ray imaging gantry based on the physical structure of the X-ray imaging gantry, the model being used to simulate the motion trajectory of the physical structure of the X-ray imaging gantry; and acquiring a motion instruction for the current physical structure of the X-ray imaging gantry, the motion instruction including a target position to which one part of the current X-ray imaging gantry needs to move and the related movement time information; the controlling the medical equipment based on the relevant information of the medical equipment and/or the relevant information of the target object includes: the physical structure of the X-ray imaging gantry reaching the target position based on the motion instruction, and the model of the X-ray imaging gantry synchronously simulating the motion trajectory of the model based on the motion instruction and the movement time information; and displaying the simulation of the motion trajectory of the model on a display device.
  • the model is obtained by: acquiring an image of the X-ray imaging gantry; extracting feature points of the image; and reconstructing the model based on the feature points.
  • the medical device is a medical imaging device; the acquiring relevant information of the medical device and/or the relevant information of the target object includes: acquiring sign information of the target object; the controlling the medical device based on the relevant information of the medical device and/or the relevant information of the target object includes: analyzing the sign information to determine abnormal sign information; and adaptively determining scanning parameters and/or image processing parameters of the medical imaging device based on the abnormal sign information of the target object.
  • the adaptively determining scanning parameters and/or image processing parameters of the medical imaging device based on the abnormal sign information of the target object includes: determining the disease type of the target object based on the abnormal sign information of the target object; and adaptively determining the scanning parameters and/or image processing parameters of the medical imaging device based on the disease type.
  • the adaptively determining scanning parameters and/or image processing parameters of the medical imaging device based on the disease type includes: determining an abnormality level of the abnormal sign information; and adaptively determining the scanning parameters and/or image processing parameters of the medical imaging device according to the abnormality level.
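  • A hedged, rule-based sketch of this adaptive determination is shown below; the sign names, thresholds, abnormality levels, and parameter adjustments are invented for illustration and are not taken from the application:

```python
# Hypothetical sketch: grade abnormal sign information and adapt scan parameters.
def abnormality_level(sign: str, value: float) -> str:
    normal_ranges = {"heart_rate": (60, 100), "body_temp": (36.0, 37.3)}
    lo, hi = normal_ranges.get(sign, (float("-inf"), float("inf")))
    if lo <= value <= hi:
        return "normal"
    return "severe" if (value < lo * 0.8 or value > hi * 1.2) else "mild"

def adapt_scan_parameters(abnormal_signs: dict, base_params: dict) -> dict:
    """abnormal_signs -- {sign_name: measured_value}
    base_params    -- expected to contain 'kv' and 'slice_thickness_mm'"""
    params = dict(base_params)
    level = max((abnormality_level(k, v) for k, v in abnormal_signs.items()),
                key=["normal", "mild", "severe"].index, default="normal")
    if level == "mild":
        params["kv"] += 5                         # small adjustment for mild abnormality
    elif level == "severe":
        params["kv"] += 10
        params["slice_thickness_mm"] = min(params["slice_thickness_mm"], 1.0)
    return params

# Example: adapt_scan_parameters({"heart_rate": 130}, {"kv": 100, "slice_thickness_mm": 5.0})
```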
  • the analyzing the sign information to determine abnormal sign information includes: inputting the sign information into a trained sign recognition model and acquiring the abnormal sign information output by the sign recognition model; or, comparing the sign information with normal sign parameters to determine the abnormal sign information.
  • the analyzing the sign information to determine abnormal sign information further includes: comparing the sign information with standard sign information corresponding to the type of the target object, and determining, as the abnormal sign information, the sign information that does not conform to the standard sign information of the type of the target object; wherein the type of the target object is determined according to basic human body information and/or historical medical records of the target object, and the basic human body information at least includes gender, age, weight, and height.
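  • A minimal sketch of this comparison, assuming a made-up grouping by gender and age and an invented reference table (none of which come from the application), could look like:

```python
# Hypothetical sketch: flag sign values that fall outside the standard ranges
# defined for the target object's type.
def subject_type(gender: str, age: int) -> str:
    group = "child" if age < 12 else "adult" if age < 65 else "elderly"
    return f"{gender}_{group}"

STANDARD_SIGNS = {   # illustrative reference ranges per subject type
    "female_adult": {"heart_rate": (60, 100), "resp_rate": (12, 20)},
    "male_adult":   {"heart_rate": (60, 100), "resp_rate": (12, 20)},
}

def abnormal_signs(signs: dict, gender: str, age: int) -> dict:
    """Return the subset of `signs` that does not conform to the standard
    sign information of the subject's type."""
    ref = STANDARD_SIGNS.get(subject_type(gender, age), {})
    out = {}
    for name, value in signs.items():
        lo, hi = ref.get(name, (float("-inf"), float("inf")))
        if not lo <= value <= hi:
            out[name] = value
    return out

# Example: abnormal_signs({"heart_rate": 120, "resp_rate": 16}, "female", 34)
# -> {"heart_rate": 120}
```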
  • the medical device is a medical imaging device; the acquiring relevant information of the medical device and/or the relevant information of the target object includes: acquiring scanning parameters of the medical imaging device for the target object, and acquiring abnormal sign information of the target object; the controlling the medical device based on the relevant information of the medical device and/or the relevant information of the target object includes: adaptively adjusting the scanning parameters and/or image processing parameters of the medical imaging device based on the abnormal sign information; and scanning the target object and obtaining a medical image, wherein the scanning is based on the adjusted scanning parameters and/or the medical image is processed with the adjusted image processing parameters.
  • One of the embodiments of the present application provides a medical device control system, including: an information acquisition module for acquiring relevant information of a medical device and/or relevant information of a target object; and a control module for controlling the medical device based on the relevant information of the medical device and/or the relevant information of the target object.
  • One of the embodiments of the present application provides a computer-readable storage medium, the storage medium storing computer instructions; after a computer reads the computer instructions in the storage medium, the computer executes the medical device control method described in any embodiment of the present application.
  • One of the embodiments of the present application provides a medical device control apparatus, the apparatus including at least one processor and at least one memory; the at least one memory is used to store computer instructions; the at least one processor is used to execute at least part of the computer instructions to implement the medical device control method according to any embodiment of the present application.
  • One of the embodiments of the present application provides a method for marking a detection area of an ionization chamber, the method comprising: acquiring position information of one or more ionization chambers in a scanning device, the scanning device being used to scan an object; determining, based on the position information of the one or more ionization chambers, a detection area of at least one of the one or more ionization chambers; determining projection data of a projection device, the projection data including image data corresponding to the detection area of the at least one ionization chamber; and controlling the projection device to project the projection data onto the object.
  • the projection data further includes image data corresponding to a region of interest of the object to be scanned.
  • the method further includes: acquiring a reference image of the object, the reference image being captured by a camera after the projection device projects the projection data onto the object to be scanned; identifying a first region in the reference image, the first region corresponding to the region of interest of the object to be scanned; identifying a second region in the reference image, the second region corresponding to the detection region of the at least one ionization chamber projected onto the object; and determining, based on the first region and the second region, whether the at least one ionization chamber includes one or more candidate ionization chambers, wherein the detection regions of the one or more candidate ionization chambers are covered by the region of interest of the object to be scanned.
  • the method further comprises: in response to determining that the at least one ionization chamber does not include any candidate ionization chambers, causing the terminal device to generate an alert message.
  • in response to determining that no candidate ionization chamber is included in the at least one ionization chamber, the method further comprises: moving one or more reference ionization chambers of the at least one ionization chamber relative to the region of interest of the object.
  • the method further comprises: in response to determining that the at least one ionization chamber includes one or more candidate ionization chambers, selecting one or more target ionization chambers from the one or more candidate ionization chambers , the one or more target ionization chambers will operate during scanning of the object.
  • the method further comprises: obtaining identification information for one or more target ionization chambers selected from the at least one ionization chamber; and based on the identification information, adjusting the projection data such that a first characteristic value of the image data corresponding to the detection areas of the one or more target ionization chambers is different from a second characteristic value of the image data corresponding to the detection areas of the other ionization chambers, wherein the first characteristic value and the second characteristic value correspond to the same image feature.
  • One of the embodiments of the present application provides a system for marking a detection area of an ionization chamber, including: an acquisition module for acquiring position information of one or more ionization chambers in a scanning device; a detection area determination module for determining, based on the position information of the one or more ionization chambers, a detection area of at least one of the one or more ionization chambers; a projection data determination module for determining projection data of a projection device, the projection data including image data corresponding to the detection area of the at least one ionization chamber; and a control module for controlling the projection device to project the projection data onto the object to be scanned.
  • One of the embodiments of the present application provides a computer-readable storage medium, the storage medium storing computer instructions; after a computer reads the computer instructions in the storage medium, the computer executes the method described in any embodiment of the present application.
  • One of the embodiments of the present application provides an apparatus for marking a detection area of an ionization chamber, the apparatus including a program for marking the detection area of an ionization chamber, the program implementing the method according to any embodiment of the present application.
  • One of the embodiments of the present application provides a method for controlling medical diagnosis and treatment equipment, the method including: acquiring a virtual model of the medical diagnosis and treatment equipment, wherein the medical diagnosis and treatment equipment includes at least one movable first component and, correspondingly, the virtual model includes a second component that simulates the first component, and the equipment coordinate system in which the first component is located has a mapping relationship with the model coordinate system in which the second component is located; acquiring current motion information of one of the first component of the medical diagnosis and treatment equipment and the second component of the virtual model; and controlling the other of the first component of the medical diagnosis and treatment equipment and the second component of the virtual model to perform the same motion as the one of the first component and the second component from which the motion instruction is obtained.
  • the same motion includes synchronized motion.
  • the medical diagnostic equipment includes an X-ray photography system.
  • the first component includes a gantry of an X-ray imaging system.
  • the gantry includes a bulb (X-ray tube), a detector, a support element for the bulb, or a support element for the detector.
  • acquiring the current motion information of one of the first part of the medical diagnosis and treatment equipment and the second part of the virtual model includes: acquiring the current motion information of the first part of the medical diagnosis and treatment equipment; Wherein, before acquiring the current motion information of the first component of the medical diagnosis and treatment equipment, the first component receives motion control information that will execute the current motion.
  • the motion control information includes control instructions for automatic motion of the first component or manual operation of the first component.
  • acquiring the current motion information of one of the first part of the medical diagnosis and treatment equipment and the second part of the virtual model comprises: acquiring the current motion information of the second part of the virtual model; wherein , before acquiring the current motion information of the second part of the virtual model, the second part receives motion control information that will perform the current motion.
  • the motion control information is input by mouse, keyboard or voice, or by touch input.
  • the virtual model is displayed on a display interface, and real-time position information of the second component under the current motion is also displayed on the display interface and updated with the motion.
  • the display interface is a computer or a mobile terminal or a public display interface.
  • the virtual model is obtained by modeling data of the gantry of the X-ray imaging system.
  • the virtual model is obtained by: acquiring an image of a gantry in the X-ray imaging system; extracting feature points of the image; reconstructing based on the feature points to obtain the model.
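  • The "image → feature points → reconstructed model" route in the item above can be sketched with OpenCV; the ORB features and the two-view triangulation used here are illustrative assumptions, not the application's actual modelling pipeline:

```python
# Hypothetical sketch: extract feature points from gantry photographs and
# triangulate matched points from two calibrated views into 3-D points that a
# model of the gantry could be fitted to.
import cv2
import numpy as np

def extract_feature_points(image_path: str):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError(image_path)
    orb = cv2.ORB_create(nfeatures=2000)
    keypoints, descriptors = orb.detectAndCompute(img, None)
    return keypoints, descriptors

def reconstruct_points(kp1, kp2, matches, P1, P2) -> np.ndarray:
    """kp1/kp2 -- keypoints from two views; matches -- list of cv2.DMatch;
    P1/P2   -- 3x4 camera projection matrices for the two views."""
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches]).T   # 2xN
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches]).T   # 2xN
    pts4d = cv2.triangulatePoints(P1, P2, pts1, pts2)            # 4xN homogeneous
    return (pts4d[:3] / pts4d[3]).T                              # Nx3 points
```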
  • when one part of the first component moves, the display interface highlights the movement track of the part of the second component corresponding to that part of the first component.
  • One of the embodiments of the present application provides a method for controlling medical diagnosis and treatment equipment, the method comprising: acquiring a model of the X-ray imaging gantry based on the physical structure of the X-ray imaging gantry, the model being used to simulate the motion trajectory of the physical structure of the X-ray imaging gantry; acquiring a motion instruction for the current physical structure of the X-ray imaging gantry, the motion instruction including a target position to which one part of the current X-ray imaging gantry needs to move and the related movement time information; the physical structure of the X-ray imaging gantry reaching the target position based on the motion instruction, and the model of the X-ray imaging gantry synchronously simulating the motion trajectory of the model based on the motion instruction and the movement time information; and displaying the simulation of the motion trajectory of the model on a display device.
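  • A minimal sketch of driving the on-screen model in step with the physical gantry motion, assuming linear interpolation toward the target position over the movement time (the timing loop, function name, and display callback are illustrative assumptions):

```python
# Hypothetical sketch: update the virtual model at the same pace as the
# physical gantry so that the displayed trajectory stays synchronized.
import time
import numpy as np

def simulate_synchronized_motion(start_pos, target_pos, move_time_s,
                                 update_model, tick_s=0.05):
    """start_pos / target_pos -- gantry-part positions (iterables of floats)
    move_time_s            -- movement time carried by the motion instruction
    update_model           -- callback that refreshes the displayed model pose"""
    start = np.asarray(start_pos, dtype=float)
    target = np.asarray(target_pos, dtype=float)
    move_time_s = max(move_time_s, 1e-6)          # guard against zero duration
    t0 = time.monotonic()
    while True:
        frac = min((time.monotonic() - t0) / move_time_s, 1.0)
        update_model(start + frac * (target - start))   # redraw the model
        if frac >= 1.0:
            break
        time.sleep(tick_s)

# Example: simulate_synchronized_motion([0, 0, 0], [0, 0, 350], 2.0, print)
```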
  • the model is obtained by modeling data from the X-ray gantry.
  • the model is obtained by: acquiring an image of the X-ray camera gantry; extracting feature points of the image; reconstructing based on the feature points to obtain the model.
  • the displaying of the simulation of the movement trajectory of the model on the display device includes: highlighting, on the display device, the movement trajectory of the part of the model of the X-ray imaging gantry that corresponds to the moving part of the X-ray imaging gantry.
  • the display device is provided outside the machine room of the X-ray imaging gantry, and the method further includes: acquiring interaction data through the display device; and controlling, based on the interaction data, the physical structure of the current X-ray imaging gantry to move.
  • the display device includes a touch screen.
  • the acquiring interaction data through the display device includes: controlling the model of the X-ray imaging gantry by touch operations on the touch screen to generate the interaction data.
  • One of the embodiments of the present application provides a control system for medical diagnosis and treatment equipment, the system including: a model acquisition module for obtaining a virtual model of the medical diagnosis and treatment equipment, wherein the medical diagnosis and treatment equipment includes at least one movable first component and, correspondingly, the virtual model includes a second component that simulates the first component, and the equipment coordinate system in which the first component is located has a mapping relationship with the model coordinate system in which the second component is located; a motion information acquisition module for obtaining current motion information of one of the first component of the medical diagnosis and treatment equipment and the second component of the virtual model; and a motion execution module for controlling the other of the first component of the medical diagnosis and treatment equipment and the second component of the virtual model to perform the same motion as the one of the first component and the second component from which the motion instruction is obtained.
  • One of the embodiments of the present application provides a control system for medical diagnosis and treatment equipment, the system including: a motion simulation module for acquiring a model of the X-ray imaging gantry based on the physical structure of the X-ray imaging gantry and simulating, with the model, the motion trajectory of the physical structure of the X-ray imaging gantry; an instruction acquisition module for acquiring a motion instruction for the current physical structure of the X-ray imaging gantry, the motion instruction including a target position to which one part of the current X-ray imaging gantry needs to move and the related movement time information; a simulation control module for controlling the physical structure of the X-ray imaging gantry to reach the target position based on the motion instruction, and for making the model of the X-ray imaging gantry synchronously simulate the motion trajectory of the model based on the motion instruction and the movement time information; and a display module for displaying the simulation of the motion trajectory of the model on a display device.
  • One of the embodiments of the present application provides a method for controlling medical diagnosis and treatment equipment, the method including: acquiring a motion instruction for the physical structure of the current X-ray imaging gantry, the motion instruction including a target position to which one part of the current X-ray imaging gantry needs to move and the related movement time information; the physical structure of the X-ray imaging gantry reaching the target position based on the motion instruction, and the model of the X-ray imaging gantry synchronously simulating the motion trajectory of the model based on the motion instruction and the movement time information; and displaying the simulation of the motion trajectory of the model on a display device.
  • One of the embodiments of the present application provides a control system for medical diagnosis and treatment equipment, the system including: an instruction acquisition module for acquiring a motion instruction for the physical structure of the current X-ray imaging gantry, the motion instruction including a target position to which one part of the current X-ray imaging gantry needs to move and the related movement time information; a simulation control module for controlling the physical structure of the X-ray imaging gantry to reach the target position based on the motion instruction, and for making the model of the X-ray imaging gantry synchronously simulate the motion trajectory of the model based on the motion instruction and the movement time information; and a display module for displaying the simulation of the motion trajectory of the model on a display device.
  • One of the embodiments of the present application provides a control apparatus for medical diagnosis and treatment equipment, including a processor, where the processor is configured to execute computer instructions to implement the method described in any of the embodiments of the present application.
  • One of the embodiments of the present application provides a method for determining parameters of a medical imaging device, including: acquiring sign information of a target object; analyzing the sign information to determine abnormal sign information; and based on the abnormal sign information of the target object, Scanning parameters and/or image processing parameters of the medical imaging device are adaptively determined.
  • the scanning parameters include: scanning voltage, scanning current, scanning field of view, number of scanning slices, or scanning slice thickness.
  • the image processing parameters include: image contrast or image equalization.
  • the acquiring of the sign information of the target object is achieved by a sensor.
  • the sensor includes a camera, a temperature sensor, a heartbeat sensor, or a respiration sensor.
  • the adaptively determining scanning parameters and/or image processing parameters of the medical imaging device based on the abnormal sign information of the target object includes: determining the disease type of the target object based on the abnormal sign information of the target object; and adaptively determining the scanning parameters and/or image processing parameters of the medical imaging device based on the disease type.
  • the adaptively determining scanning parameters and/or image processing parameters of the medical imaging device based on the disease type includes: determining an abnormality level of the abnormal sign information; and adaptively determining the scanning parameters and/or image processing parameters of the medical imaging device according to the abnormality level.
  • the analyzing the sign information to determine abnormal sign information includes: inputting the sign information into a trained sign recognition model and acquiring the abnormal sign information output by the sign recognition model; or, comparing the sign information with normal sign parameters to determine the abnormal sign information.
  • the analyzing the sign information to determine abnormal sign information further includes: comparing the sign information with standard sign information corresponding to the type of the target object, and determining, as the abnormal sign information, the sign information that does not conform to the standard sign information of the type of the target object; wherein the type of the target object is determined according to basic human body information and/or historical medical records of the target object, and the basic human body information at least includes gender, age, weight, and height.
  • the method further includes: determining a target scanning protocol of the target object based on the scanning parameters; and scanning the target object based on the target scanning protocol and the image processing parameters to obtain a scanned image of the target object.
  • the image processing parameters are parameters of the algorithm used to process the scan of the region of interest of the target object.
  • One of the embodiments of the present application provides an imaging method for a medical imaging device, including: acquiring scanning parameters of the medical imaging device for a target object; acquiring abnormal sign information of the target object; adaptively adjusting the scanning parameters and/or image processing parameters of the medical imaging device based on the abnormal sign information; and scanning the target object and obtaining a medical image, wherein the scanning is based on the adjusted scanning parameters and/or the medical image is processed with the adjusted image processing parameters.
  • the medical imaging device comprises an X-ray imaging device, an MR device, a CT device, a PET device, an ultrasound device or a DSA device, or a multimodal imaging device.
  • One of the embodiments of the present application provides a device for determining parameters of medical imaging equipment, including: a sign information acquisition module for acquiring sign information of a target object; an abnormal sign information determination module for analyzing the sign information to determine abnormal sign information; and a parameter determination module for adaptively determining scanning parameters and/or image processing parameters of a medical imaging device based on the abnormal sign information of the target object.
  • One of the embodiments of the present application provides a medical imaging device, including: an imaging component for scanning a target object to obtain a medical image; a sensor for acquiring abnormal sign information of the target object; and a controller coupled to the imaging component and the sensor, which adaptively adjusts scanning parameters of the imaging component based on the abnormal sign information and/or adaptively adjusts image processing parameters of the medical image based on the abnormal sign information.
  • the sensor includes a camera, a temperature sensor, a heartbeat or pulse sensor, or a respiration sensor.
  • One of the embodiments of the present application provides a device, the device including: one or more processors; and a storage device for storing one or more programs; when the one or more programs are executed by the one or more processors, the one or more processors implement the method for determining parameters of a medical imaging device and/or the imaging method of a medical imaging device as described in any embodiment of the present application.
  • One of the embodiments of the present application provides a storage medium containing computer-executable instructions, when executed by a computer processor, the computer-executable instructions are used to execute the method for determining a parameter of a medical imaging device according to any embodiment of the present application and/or imaging methods of medical imaging equipment.
  • FIG. 1 is a schematic diagram of an application scenario of a medical device control system according to some embodiments of the present application
  • FIG. 2 is an exemplary flowchart of a method for controlling a medical device according to some embodiments of the present application
  • FIG. 3 is an exemplary flowchart of a method for marking a detection region of an ionization chamber according to some embodiments of the present application
  • FIG. 4 is an exemplary flowchart of automatically selecting a target ionization chamber according to some embodiments of the present application
  • FIG. 5 is an exemplary flowchart of adjusting projection data according to some embodiments of the present application.
  • FIG. 6 is a block diagram of a system for marking a detection area of an ionization chamber according to some embodiments of the present application.
  • FIG. 7 is an exemplary flowchart of a method for controlling a medical diagnosis and treatment device according to some embodiments of the present application.
  • FIG. 8A is an exemplary structural schematic diagram of a medical diagnosis and treatment device according to some embodiments of the present application.
  • FIG. 8B is a schematic diagram of an exemplary structure of a virtual model according to some embodiments of the present application.
  • FIG. 9 is an exemplary flowchart of another method for controlling a medical diagnosis and treatment device according to some embodiments of the present application.
  • FIG. 10 is an exemplary flowchart of another method for controlling a medical diagnosis and treatment device according to some embodiments of the present application.
  • FIG. 11 is a schematic block diagram of a control system of a medical diagnosis and treatment equipment according to some embodiments of the present application.
  • FIG. 12 is a schematic block diagram of a control system of another medical diagnosis and treatment equipment according to some embodiments of the present application.
  • FIG. 13 is an exemplary flowchart of a method for determining parameters of a medical imaging device according to some embodiments of the present application.
  • FIG. 14 is an exemplary flowchart of a method for determining parameters of a medical imaging device according to some embodiments of the present application.
  • FIG. 15 is an exemplary flowchart of a method for determining parameters of a medical imaging device according to some embodiments of the present application.
  • FIG. 16 is an exemplary flowchart of an imaging method of a medical imaging device according to some embodiments of the present application.
  • FIG. 17 is a schematic diagram of an exemplary structure of a device for determining parameters of medical imaging equipment according to some embodiments of the present application.
  • FIG. 18 is a schematic diagram of an exemplary structure of an imaging device of a medical imaging device according to some embodiments of the present application.
  • Fig. 19 is a schematic diagram of an exemplary structure of a medical imaging device according to some embodiments of the present application.
  • FIG. 20 is a schematic diagram of an exemplary structure of a device according to some embodiments of the present application.
  • The terms "system", "device", "unit", and/or "module" used herein are one way of distinguishing different components, elements, parts, sections, or assemblies at different levels.
  • in systems according to embodiments of the present specification, any number of different modules or units may be used and run on clients and/or servers.
  • the modules are illustrative only, and different aspects of the systems and methods may use different modules.
  • the term "image" in this application is used to collectively refer to image data (eg, scan data, projection data) and/or various forms of images, including two-dimensional (2D) images, three-dimensional (3D) images, four-dimensional (4D) images, etc.
  • the medical equipment may include digital subtraction angiography (DSA) equipment, digital breast tomosynthesis (DBT) equipment, cone beam CT (CBCT), direct digital radiography (DR), X-ray computed tomography (CT), mobile C-arms, etc.
  • the medical device control system and method provided by the embodiments of the present application may be used to control the medical device, and may also be used to control the components in the medical device.
  • the systems and methods provided by the embodiments of the present application can be used to control components such as bulbs, ionization chambers, detectors, and scanning beds.
  • the medical device control system and method can be applied to mark the detection area of the ionization chamber.
  • ROI: region of interest.
  • the operator cannot obtain the exact position of the detection area of the ionization chamber when the detection area lacks a position marking or is occluded by the object, and therefore cannot accurately judge whether the object to be scanned or the region of interest covers the detection area of the ionization chamber, which may result in reduced image quality.
  • the system and method for marking the detection area of the ionization chamber provided in this application can help the operator obtain the accurate position of the detection area of the ionization chamber, so that the operator can accurately determine whether the object to be scanned or the region of interest covers the detection area of the ionization chamber, which can improve the quality of the scanned image and reduce the operator's time and effort.
  • the medical device control system and method may also be applied to an angiography machine or a DR device.
  • when an angiography machine performs a radiographic task, due to the limitations of its working environment, the operator cannot observe in real time the specific movement positions of each joint or structure of the angiography machine gantry.
  • the movement of the angiography machine gantry can be synchronously displayed on a visual display device through the model, which is convenient for the operator to observe and monitor the movement of the angiography machine gantry and its movement parameters.
  • the medical equipment control system and method can also control the motion of the angiography machine gantry by controlling the motion of the model on the display equipment, which further improves the interaction between the angiography machine and the user and is convenient for the operator to use.
  • the medical device control system and method may be applied to medical imaging devices.
  • the medical imaging equipment may include at least one of X-ray imaging equipment, MR equipment, CT equipment, PET equipment, ultrasound equipment, DSA equipment and multimodal imaging equipment.
  • by analyzing the sign information of the target object, abnormal sign information is determined, and scanning parameters and/or image processing parameters of the medical imaging device are determined based on the abnormal sign information of the target object.
  • the scanning parameters of the medical imaging device can be determined in time based on the sign information of the target object, so as to improve diagnosis efficiency; at the same time, this avoids the situation in which doctors have to continuously adjust scanning parameters and/or image processing parameters, which can lead to poor image quality, repeated scanning, low efficiency, and delayed diagnosis, and it also avoids the target object receiving an excessive radiation dose.
  • FIG. 1 is a schematic diagram of an application scenario of a medical device control system according to some embodiments of the present application.
  • the medical device control system 100 may include a processing device 110 , a storage device 120 , one or more terminals 130 , a network 140 , a medical device 150 and an information acquisition device 160 .
  • the processing device 110, storage device 120, terminal 130, medical device 150, and/or information acquisition device 160 may be interconnected and/or communicate via wireless connections, wired connections, or a combination thereof.
  • the connections between the components of the system 100 may be variable.
  • the information acquisition device 160 may be connected to the processing device 110 through the network 140 or directly.
  • storage device 120 may be connected to processing device 110 through network 140 or directly.
  • the processing device 110 may process data and/or information acquired from the storage device 120 and/or the terminal 130 .
  • the processing device 110 may acquire image data from the scanning device 151.
  • the processing device 110 may acquire the user instruction from the terminal 130 .
  • the processing device 110 may obtain information about the medical device 150 and/or the target object from the information obtaining device 160 .
  • Processing device 110 may also send control instructions to one or more components of system 100 (eg, storage device 120, terminal 130, medical device 150, and/or information acquisition device 160). For example, processing device 110 may send control commands to medical device 150 to move active components of medical device 150 (eg, ionization chambers, detectors, etc.) to specified locations. For another example, the processing device 110 may send a control instruction to the terminal 130, so that the terminal 130 displays image data on its display interface. For another example, the processing device 110 may determine the detection area of the ionization chamber of the medical device 150 and control the projection device accordingly to project the detection area of the ionization chamber of the medical device 150 onto the object (ie, the target object).
  • the information acquisition device 160 may acquire a virtual model of the medical device 150 and current motion information of one of the first part of the medical device 150 and the second part of the virtual model, wherein the second part of the virtual model is used to simulate the first part of the medical device 150 and the device coordinate system where the first part is located has a mapping relationship with the model coordinate system where the second part is located; the processing device 110 can then control the other of the first part of the medical device 150 and the second part of the virtual model to perform the same motion as the one of the first part and the second part from which the motion instruction is obtained.
  • the information acquisition device 160 can acquire the sign information of the target object, and the processing device 110 can analyze the sign information, determine abnormal sign information, and adaptively determine the scanning parameters and/or image processing parameters of the medical device 150 based on the abnormal sign information of the target object.
  • processing device 110 may be a single server or a group of servers. Server groups can be centralized or distributed. In some embodiments, processing device 110 may be local to or remote from system 100. For example, processing device 110 may access information and/or data from storage device 120, terminal 130, medical device 150, and/or information acquisition device 160 via network 140. As another example, processing device 110 may be directly connected to storage device 120, terminal 130, medical device 150, and/or information acquisition device 160 to access information and/or data. In some embodiments, the processing device 110 may be implemented on a cloud platform. For example, cloud platforms may include private clouds, public clouds, hybrid clouds, community clouds, distributed clouds, inter-clouds, multi-clouds, etc., or a combination thereof.
  • processing device 110 may include one or more processors (eg, a single-chip processor or a multi-chip processor).
  • the processing device 110 may include a central processing unit (CPU), an application specific integrated circuit (ASIC), an application specific instruction set processor (ASIP), a graphics processing unit (GPU), a physical processing unit (PPU), digital signal processing device (DSP), field programmable gate array (FPGA), programmable logic device (PLD), controller, microcontroller unit, reduced instruction set computer (RISC), microprocessor, etc., or any combination thereof.
  • Storage device 120 may store data, instructions, and/or any other information.
  • storage device 120 may store data acquired from processing device 110 , terminal 130 , medical device 150 , and/or information acquisition device 160 .
  • storage device 120 may store data and/or instructions executable by processing device 110 or used to perform the example methods described herein.
  • storage device 120 may include mass storage devices, removable storage devices, volatile read-write memory, read-only memory (ROM), the like, or any combination thereof.
  • Exemplary mass storage devices may include magnetic disks, optical disks, solid state drives, and the like.
  • Exemplary removable storage devices may include flash drives, floppy disks, optical disks, memory cards, compact disks, magnetic tapes, and the like.
  • Exemplary volatile read-write memory may include random access memory (RAM).
  • RAM may include dynamic random access memory (DRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), static random access memory (SRAM), thyristor random access memory (T-RAM), zero-capacitor random access memory (Z-RAM), etc.
  • Exemplary ROM may include mask read-only memory (MROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), compact disc read-only memory (CD-ROM), digital versatile disc read-only memory, etc.
  • storage device 120 may be implemented on a cloud platform as described elsewhere in this application.
  • storage device 120 may be connected to network 140 to communicate with one or more other components of system 100 (eg, terminal 130, medical device 150, and/or information acquisition device 160). One or more components of system 100 may access data or instructions stored in storage device 120 via network 140 . In some embodiments, storage device 120 may be part of processing device 110 .
  • Terminal 130 may enable interaction between a user and one or more components of system 100 .
  • the terminal 130 may display image data, eg, image data of a detection area of an ionization chamber, image data of a region of interest of an object to be scanned (ie, a target object), and the like.
  • the user can issue an instruction through the terminal 130 according to the image data, for example, send an instruction to the medical device 150 to specify the selected target ionization chamber, send an instruction to the medical device 150 to start imaging and/or scanning, or send an instruction to the storage device 120 to store image data, etc.
  • terminal 130 may include mobile device 130-1, tablet computer 130-2, laptop computer 130-3, etc., or any combination thereof.
  • mobile device 130-1 may include a mobile phone, personal digital assistant (PDA), gaming device, navigation device, point-of-sale (POS) device, laptop computer, tablet computer, desktop computer, etc., or any combination thereof.
  • terminal 130 may include input devices, output devices, and the like.
  • terminal 130 may be part of processing device 110 .
  • Network 140 may include any suitable network that may facilitate the exchange of information and/or data for system 100 .
  • one or more components of system 100 (eg, processing device 110, storage device 120, terminal 130, medical device 150, and/or information acquisition device 160) may exchange information and/or data with each other via network 140.
  • processing device 110 may obtain medical image data from medical device 150 via network 140 .
  • the processing device 110 may acquire the user instruction from the terminal 130 via the network 140 .
  • the projection device 152 may acquire projection data from the scanning device 151, the processing device 110, the storage device 120, and/or the information acquisition device 160 via the network 140.
  • the network 140 may be or include a public network (eg, the Internet), a private network (eg, a local area network (LAN)), a wired network, a wireless network (eg, an 802.11 network, a Wi-Fi network), a frame relay network, a virtual private network (VPN), a satellite network, a telephone network, routers, hubs, switches, server computers, and/or any combination thereof.
  • the network 140 may include a cable network, a wired network, a fiber optic network, a telecommunications network, an intranet, a wireless local area network (WLAN), a metropolitan area network (MAN), a public switched telephone network (PSTN), a Bluetooth network, a ZigBee network, Near Field Communication (NFC) networks, etc., or any combination thereof.
  • network 140 may include one or more network access points.
  • network 140 may include wired and/or wireless network access points, such as base stations and/or Internet exchange points, through which one or more components of system 100 may be connected to network 140 to exchange data and/or information.
  • the medical device 150 may be an automated device that performs medical diagnosis or research tasks.
  • the medical diagnosis or research tasks may include, but are not limited to, medical photography tasks, surgical tasks, rehabilitation treatment tasks, and the like.
  • the medical device 150 may include a scanning device 151 and a projection device 152 .
  • the scanning device 151 may generate or provide image data related to the object (ie, the target object) by scanning the object (ie, the target object).
  • objects (ie, target objects) may include biological objects and/or non-biological objects.
  • scanning device 151 may be a non-invasive biomedical imaging device used for disease diagnosis or research purposes.
  • Scanning device 151 may include a single-modality scanner and/or a multi-modality scanner.
  • Single-modality scanners may include, for example, X-ray scanners, computed tomography (CT) scanners, digital radiography (DR) scanners (eg, mobile digital radiography), digital subtraction angiography (DSA) scanners, dynamic spatial reconstruction (DSR) scanners, X-ray microscope scanners, etc., or any combination thereof.
  • an X-ray imaging device may include an X-ray source and a detector.
  • the X-ray source may be configured to emit X-rays at the object to be scanned.
  • the detector may be configured to detect X-rays transmitted through the object.
  • the X-ray imaging device may be, for example, a C-shaped X-ray imaging device, an upright X-ray imaging device, a ceiling X-ray imaging device, or the like.
  • Multi-modality scanners may include, for example, X-ray imaging-magnetic resonance imaging (X-ray-MRI) scanners, positron emission tomography-X-ray imaging (PET-X-ray) scanners, positron emission tomography-computed tomography (PET-CT) scanners, digital subtraction angiography-magnetic resonance imaging (DSA-MRI) scanners, etc.
  • The term "imaging modality" or "modality" broadly refers to an imaging method or technique that collects, generates, processes, and/or analyzes imaging information of a subject.
  • this application primarily describes systems and methods related to X-ray imaging systems. It should be noted that the X-ray imaging systems described below are provided by way of example only, and are not intended to limit the scope of this application. The systems and methods disclosed herein may be applied to any other imaging system.
  • the scanning device 151 may include a gantry 151-1, a detector 151-2, a detection area 151-3, a scanning table 151-4, and a radiation source 151-5.
  • the gantry 151-1 may support the detector 151-2 and the radiation source 151-5.
  • the object may be placed on the scanning table 151-4 and then moved into the detection area 151-3 for scanning.
  • the radioactive source 151-5 may emit radioactive rays to the subject. Radioactive rays may include particle rays, photon rays, etc., or combinations thereof.
  • radioactive rays may include at least two types of radiation particles (eg, neutrons, protons, electrons, muons, heavy ions), at least two types of radiation photons (eg, X-rays, gamma rays, ultraviolet rays, lasers), etc., or a combination thereof.
  • the detector 151-2 can detect radiation emitted from the detection area 151-3.
  • detector 151-2 may include at least two detector units. The detector units can be single-row detectors or multi-row detectors.
  • Projection device 152 may include any suitable device capable of projecting image data.
  • projection device 152 may be a cathode ray tube (CRT) projector, a liquid crystal display (LCD) projector, a digital light processor (DLP) projector, a digital light path vacuum tube (DLV) projector, or other device that can project image data .
  • the projection device 152 may be configured to project the projection data to be projected onto the object to be scanned.
  • the projection data may include image data corresponding to a detection region of at least one of the one or more ionization chambers.
  • the projection device 152 may acquire image data of the detection region of at least one of the one or more ionization chambers and project the image data onto the object to be scanned.
  • the projection data may also include image data corresponding to a region of interest (ROI) of the object to be scanned.
  • projection device 152 may be a separate device from scanning device 151 .
  • the projection device 152 may be a projector mounted on the ceiling of the examination room, and the scanning device 151 may be located in the examination room.
  • projection device 152 may be integrated into or mounted on scanning device 151 (eg, gantry 111).
  • the medical device 150 may include a medical imaging device, and the medical imaging device may include an X-ray imaging device, an MR (Magnetic Resonance) device, a CT (Computed Tomography) device, a PET (Positron Emission Computed Tomography) device, an ultrasound device , DSA (Digital subtraction angiography) equipment, or multi-modal imaging equipment, etc.
  • the medical device 150 may further include medical diagnosis and treatment equipment, including but not limited to Digital Subtraction Angiography (DSA), Digital Breast Tomosynthesis (DBT), Cone Beam CT (CBCT), digital radiography system (DR), X-ray computed tomography (CT), mobile C-arm, etc.
  • the information acquisition device 160 may be used to acquire relevant information of the medical device 150 and/or relevant information of the target object.
  • the target subject may be a subject undergoing non-invasive imaging for disease diagnosis or research purposes, eg, may be a human or an animal, or the like.
  • the related information of the medical device 150 and/or the related information of the target object acquired by the information acquisition device 160 may include location information of one or more ionization chambers in the scanning device 151 .
  • the related information of the medical device 150 and/or the related information of the target object acquired by the information acquisition device 160 may include a virtual model of the medical device 150, wherein the medical device 150 may include at least one movable first part and the virtual model may include a second part simulating the first part, and the equipment coordinate system where the first part is located has a mapping relationship with the model coordinate system where the second part is located; the related information of the medical device 150 and/or the related information of the target object acquired by the information acquisition device 160 may also include current motion information of one of the first part of the medical device and the second part of the virtual model.
  • the relevant information of the medical device 150 and/or the relevant information of the target object acquired by the information acquisition device 160 may include physical sign information of the target object, wherein the physical sign information may be basic physical sign information of the target object, for example, but not limited to, body temperature information, blood pressure information, blood lipid information, respiration information, pulse information, eye sign information, hand sign information, leg sign information, or head sign information of the target object.
  • the information acquisition device 160 may include a sensor for acquiring characteristic information of the target object, wherein the sensor may include a position sensor, an image sensor, a temperature sensor, a heartbeat sensor, or a breathing sensor, and the like. In some embodiments, information acquisition device 160 may be part of processing device 110 .
  • the system 100 may also include an image capture device (eg, a camera or video camera) for capturing image data of the object.
  • the image capturing device may capture projection data projected on the object at the same time as capturing the image data of the object.
  • the image capture device may be a type of information acquisition device 160 .
  • the image capture device may be and/or include any suitable device capable of capturing image data of an object.
  • an image capture device may include a camera (eg, digital camera, analog camera, etc.), red-green-blue (RGB) sensor, RGB depth (RGB-D) sensor, or other device that can capture color image data of an object.
  • the image capture device may be a separate device from scanning device 151 .
  • the image capture device may be integrated into or mounted on the scanning device 151 (eg, the gantry 111).
  • image data acquired by the image capture device may be transmitted to processing device 110 for further analysis. Additionally or alternatively, image data acquired by the image capture apparatus may be sent to a terminal device (eg, terminal 130 ) for display and/or a storage device (eg, storage device 120 ) for storage.
  • the image capture device may capture image data of the subject continuously or intermittently (eg, periodically) before, during, and/or after scanning of the subject is performed by the scanning device 151 .
  • the acquisition of image data by the image capture device, the transmission of the captured image data to the processing device 110, and the analysis of the image data may be performed substantially in real-time, such that the image data may provide substantially real-time indication of the object status information.
  • system 100 may include one or more additional components. Additionally or alternatively, one or more components of system 100, such as an image capture device, may be omitted. As another example, two or more components of system 100 may be integrated into a single component. For example only, processing device 110 (or a portion thereof) may be integrated into scanning device 151 .
  • the medical device control system 100 may be used to perform a method of marking a detection region of an ionization chamber, a control method of a medical diagnosis and treatment device, and/or a medical imaging device parameter determination method, an imaging method.
  • the method of marking the detection area of the ionization chamber, the control method of the medical diagnosis and treatment equipment, the parameter determination method of the medical imaging equipment, and the imaging method may be implemented individually or in combination with each other.
  • FIG. 2 is an exemplary flowchart of a method for controlling a medical device according to some embodiments of the present application.
  • the process 200 may be performed by the processing device 110 .
  • process 200 may be stored in a storage device (eg, storage device 120 ) in the form of programs or instructions that, when executed by system 100 , may implement process 200 for marking a detection region of an ionization chamber.
  • the process 200 may include the following steps.
  • Step 210 Acquire relevant information of the medical device and/or relevant information of the target object.
  • the medical device may be an automated device that performs medical diagnosis and treatment tasks.
  • the medical diagnosis and treatment tasks may include, but are not limited to, medical photography tasks, surgery tasks, rehabilitation treatment tasks, and the like.
  • the medical device may include a scanning device and a projection device, wherein the scanning device may include a single-modality scanner (eg, an X-ray scanner, a computed tomography (CT) scanner, a digital radiography (DR) scanner (eg, mobile digital radiography), a digital subtraction angiography (DSA) scanner, a dynamic spatial reconstruction (DSR) scanner, an X-ray microscope scanner, etc.) and/or a multi-modality scanner (eg, an X-ray imaging-magnetic resonance imaging (X-ray-MRI) scanner, a positron emission tomography-X-ray imaging (PET-X-ray) scanner, a positron emission tomography-computed tomography (PET-CT) scanner, a digital subtraction angiography-magnetic resonance imaging (DSA-MRI) scanner, etc.).
  • the medical equipment may include medical imaging equipment, for example, X-ray imaging equipment, MR (Magnetic Resonance) equipment, CT (Computed Tomography) equipment, PET (Positron Emission Computed Tomography) equipment, ultrasound equipment, DSA (Digital Subtraction Angiography) equipment, or multi-modal imaging equipment, etc.
  • the medical equipment may also include medical diagnosis and treatment equipment, for example, Digital Subtraction Angiography (DSA), Digital Breast Tomosynthesis (DBT), Cone Beam CT (CBCT), Direct Digital X-ray Radiography system (DR), X-ray computed tomography (CT), mobile C-arm, etc. More descriptions of the medical device can be found in Figure 1 and its associated description.
  • the medical device-related information may include imaging-related information, eg, location information of one or more ionization chambers, wherein the location information of an ionization chamber may include position information of the ionization chamber relative to one or more components (eg, the detector) of the scanning device and/or the position of the ionization chamber in a 3D coordinate system. In some embodiments, the position of the ionization chamber relative to the detector may be fixed, and the operator and/or the scanning device may adjust the position of the detector according to the position of the ROI of the object before scanning.
  • the processing device 110 may acquire the position of the detector of the scanning device in the examination room and the fixed position of the ionization chamber relative to the detector of the scanning device to determine the position information of the ionization chamber in the examination room.
  • the position of the ionization chamber relative to the detector of the scanning device is adjustable.
  • the ionization chamber may be mounted within a movable cassette, which and/or other components of the scanning device may have position sensors mounted therein.
  • the processing device 110 may obtain data detected by the position sensor to determine the position of the ionization chamber.
  • For more descriptions, please refer to FIG. 3 and the related descriptions.
  • the relevant information of the medical device may further include a virtual model of the medical device and current motion information, wherein the medical device may include at least one movable first component and, correspondingly, the virtual model may include a second component simulating the first component; the equipment coordinate system where the first component is located has a mapping relationship with the model coordinate system where the second component is located; and the current motion information may be the motion information of at least one of the first component of the medical equipment and the second component of the virtual model.
  • the virtual model can be obtained by modeling the data of the medical diagnosis and treatment equipment, and the motion information can be obtained based on the motion control information.
  • target objects may include biological objects and/or non-biological objects.
  • an object may include a particular part of the body, such as the head, chest, abdomen, etc., or a combination thereof.
  • an object may be a man-made object composed of animate or inanimate organic and/or inorganic matter.
  • the relevant information of the target object may include location information and physical information of the target object, and the like.
  • the sign information may be basic sign information of the target object, such as, but not limited to, body temperature information, blood pressure information, blood lipid information, respiration information, pulse information, eye sign information, hand sign information, leg sign information, or head sign information of the target object, etc.
  • the relevant information of the target object may be acquired through sensors, wherein the sensors may include a position sensor, a camera, a temperature sensor, a heartbeat sensor, or a breathing sensor, and the like. For more description of the physical sign information of the target object, see FIG. 13 and its associated description.
  • Step 220 control the medical device based on the related information of the medical device and/or the related information of the target object.
  • the processing device 110 may control the medical device to project onto the target object based on the relevant information of the medical device. Furthermore, the processing device 110 may determine projection data based on the position information of one or more ionization chambers, and control the projection device to project the projection data onto the target object. For example, the processing device 110 may determine a detection area of at least one of the one or more ionization chambers based on the location information of the one or more ionization chambers, determine projection data of the projection device, the projection data including image data corresponding to the detection area of the at least one ionization chamber, and control the projection device to project the projection data onto the object. For more descriptions about controlling the medical device to project onto the target object, please refer to FIG. 3 and related descriptions.
  • the processing device 110 may control the movement of the medical device based on the relevant information of the medical device. Further, the processing device 110 may control the movement of the medical device based on the virtual model of the medical device. For example, the processing device 110 may acquire current motion information of one of the first part of the medical diagnosis and treatment device and the second part of the virtual model, and control the other of the first part of the medical diagnosis and treatment device and the second part of the virtual model to perform the same motion as the one that acquired the motion instruction. For more descriptions on controlling the movement of medical equipment, see FIG. 7 and its associated description.
  • the processing device 110 may control the medical device to scan the target object based on the relevant information of the target object. Furthermore, the processing device 110 may control the medical device to scan the target object based on the physical sign information of the target object. For example, the processing device 110 may analyze the sign information of the target object, determine abnormal sign information, and adaptively determine scanning parameters and/or image processing parameters of the medical imaging device based on the abnormal sign information of the target object. For more descriptions on controlling the medical device to scan the target object, please refer to FIG. 13 and related descriptions.
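  • As a hedged illustration only, the following Python sketch shows how abnormal sign information might be mapped to adjusted scanning and image processing parameters; the sign names, thresholds, and parameter values are hypothetical assumptions for illustration and are not specified by this application.

```python
# Illustrative sketch only: the thresholds and parameter values below are
# hypothetical examples, not values specified by this application.
from dataclasses import dataclass


@dataclass
class ScanParameters:
    tube_voltage_kv: float      # example scanning parameter
    tube_current_ma: float      # example scanning parameter
    denoise_strength: float     # example image processing parameter


def detect_abnormal_signs(signs: dict) -> list:
    """Return a list of abnormal sign labels based on simple example rules."""
    abnormal = []
    if signs.get("body_temperature_c", 36.5) > 37.5:
        abnormal.append("fever")
    if signs.get("respiration_rate_bpm", 16) > 24:
        abnormal.append("rapid_breathing")
    return abnormal


def adapt_parameters(abnormal: list, base: ScanParameters) -> ScanParameters:
    """Adaptively adjust parameters, e.g. allow a shorter exposure for rapid breathing."""
    params = ScanParameters(base.tube_voltage_kv, base.tube_current_ma, base.denoise_strength)
    if "rapid_breathing" in abnormal:
        params.tube_current_ma *= 1.2   # higher current so the exposure time can be shorter
        params.denoise_strength += 0.1  # compensate for possible motion artifacts
    return params


if __name__ == "__main__":
    signs = {"body_temperature_c": 38.2, "respiration_rate_bpm": 28}
    base = ScanParameters(tube_voltage_kv=120.0, tube_current_ma=200.0, denoise_strength=0.3)
    print(adapt_parameters(detect_abnormal_signs(signs), base))
```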
  • the processing device 110 may control the medical device based on the relevant information of the medical device and the relevant information of the target object. For example, the processing device 110 may control the medical device to scan the target object based on the relevant information of the medical device and the relevant information of the target object. In some embodiments, before scanning the target object, the processing device 110 may analyze the sign information of the target object, determine abnormal sign information, and adaptively determine the scanning parameters and/or image processing parameters of the medical device based on the abnormal sign information of the target object.
  • the processing device 110 may determine the detection area of at least one of the one or more ionization chambers based on the position information of the one or more ionization chambers, determine the projection data of the projection device, and then control the projection device to project the projection data onto the target object. Further, the processing device 110 may identify one or more target ionization chambers based on the projection result of the projection data of the detection area of the at least one ionization chamber (eg, if the image data of the detection area of an ionization chamber is projected onto the body surface of the target object, the ionization chamber may be marked as a candidate ionization chamber).
  • the processing device 110 may scan the target object using the one or more target ionization chambers according to the determined scanning parameters of the medical device, and/or the processing device 110 may, after acquiring the scanned images, process the scanned images according to the determined image processing parameters of the medical device.
  • the processing device 110 may control the scanning table 151-4 to move based on the location information of the target object, so as to move the target object to the detection area of at least one ionization chamber for scanning. For more descriptions of candidate ionization chambers and target ionization chambers, see FIG. 7 and its associated description.
  • the automatic exposure control (Automatic Exposure Control, AEC) technology uses the ionization chamber to detect the amount of radiation that has passed through the scanned object, so as to control the exposure time and the total amount of X-rays of the X-ray machine; in this way, the X-ray images obtained for different body parts and different patients have the same level of light sensitivity, avoiding the phenomenon that the dose difference between the captured images is too large and the image quality is uneven.
  • the detection area of the ionization chamber is generally marked on the surface of the equipment by means of a marking frame or a marking line.
  • In actual operation, because the position marking of the detection area of the ionization chamber is easily blocked by the human body or clothing, or the surface of some movable objects (such as the movable table surface of the examination bed) cannot be marked with the detection area of the ionization chamber, it is difficult for the operator to obtain the exact position of the detection area of the ionization chamber, and therefore impossible to accurately determine whether the object to be scanned or the region of interest covers the detection area of the ionization chamber. Therefore, there is a need to provide a method and system for marking the detection region of an ionization chamber.
  • FIG. 3 is a flowchart of a method of marking a detection region of an ionization chamber according to some embodiments of the present application.
  • the process 300 may be performed by the processing device 110 .
  • process 300 may be stored in a storage device (eg, storage device 120 ) in the form of programs or instructions that, when executed by system 100 , may implement process 300 for marking a detection region of an ionization chamber.
  • the process 300 may include the following steps.
  • Step 310 Obtain position information of one or more ionization chambers in the examination room in the scanning device, where the scanning device is used to scan the object.
  • this step can be completed by the obtaining module 610 .
  • a scanning device (eg, scanning device 151 ) is used to scan an object (ie, a target object) located in an examination room, such as a patient.
  • the scanning device may be a medical imaging device, such as a ceiling-suspended X-ray imaging device, an upright X-ray imaging device, a digital radiography (DR) device (eg, a mobile digital X-ray medical imaging device), etc., or similar equipment as described elsewhere in this application.
  • the scanning device may be an upright X-ray imaging device.
  • the radiation emitted by the X-ray source passes through a region of interest (ROI) of a standing patient, and the image receiver of the scanning device can detect the intensity of the X-rays passing through the patient. The ROI may contain one or more body parts (eg, tissues, organs) of the subject that need to be scanned.
  • the processing device 110 may determine a region of interest (ROI) of the target object based on the physical sign information of the target object (eg, a patient). Further, the processing device 110 may determine abnormal sign information based on the sign information of the target object, determine the disease type of the target object based on the abnormal sign information, and then determine the region of interest (ROI) of the target object based on the disease type. For example, if the target subject's body temperature is too high, the target subject's lungs may have abnormalities, and the lungs may be determined as the target subject's region of interest (ROI).
  • the ionization chamber in the scanning device may be configured to detect the amount of radiation reaching the detectors of the scanning device (eg, the amount of radiation in the detection region of the ionization chamber over a period of time).
  • the ionization chamber can usually be placed between the detector and the object to be scanned.
  • the ionization chamber may include a solid ionization chamber, a liquid ionization chamber, an air ionization chamber, and other ionization chambers suitable for medical imaging processes, which are not limited in this application.
  • one or more target ionization chambers may be selected among the plurality of ionization chambers (described in conjunction with operation 460 in FIG. 4).
  • One or more of the target ionization chambers can be activated while the target object is being scanned, while other ionization chambers (if any) can be turned off during the scanning of the target object.
  • an Automatic Exposure Control (AEC) method may be implemented while scanning the object.
  • a radiation controller (eg, a component of the scanning device or the control module 640) may cause the radiation source of the scanning device to stop scanning when the cumulative amount of radiation detected by the one or more target ionization chambers exceeds a threshold.
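  • A minimal sketch of the AEC stopping rule described above is given below, assuming a simple polling loop, an illustrative dose threshold, and a hypothetical per-chamber dose-reading callback; none of these values or interfaces are taken from this application.

```python
# Minimal sketch of the AEC stopping rule: exposure stops once the cumulative radiation
# detected by the target ionization chambers exceeds a threshold. The sampling interval,
# threshold value, and reading source are illustrative assumptions.
import time
from typing import Callable, Sequence


def run_exposure_with_aec(read_chamber_dose: Callable[[int], float],
                          target_chambers: Sequence[int],
                          dose_threshold: float,
                          max_exposure_s: float = 2.0,
                          sample_interval_s: float = 0.01) -> float:
    """Accumulate dose readings and return the elapsed exposure time when stopping."""
    cumulative = 0.0
    start = time.monotonic()
    while time.monotonic() - start < max_exposure_s:
        # sum the dose increments reported by every active target ionization chamber
        cumulative += sum(read_chamber_dose(ch) for ch in target_chambers)
        if cumulative >= dose_threshold:
            break  # the radiation controller would switch the radiation source off here
        time.sleep(sample_interval_s)
    return time.monotonic() - start


if __name__ == "__main__":
    # fake reading: each chamber reports a constant small dose increment per sample
    elapsed = run_exposure_with_aec(lambda ch: 0.5, target_chambers=[0, 1], dose_threshold=40.0)
    print(f"exposure stopped after {elapsed:.3f} s")
```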
  • the location information of the ionization chamber in the examination room may include information about the location of the ionization chamber relative to one or more components (eg, detectors) of the scanning device and/or the location of the ionization chamber in a 3D coordinate system.
  • the position of the ionization chamber relative to a detector (eg, a flat panel detector) of the scanning device may be fixed.
  • the ionization chamber can be fixed at a constant position relative to the detector, and the position of the ionization chamber relative to the detector does not change during different scanning operations.
  • the operator and/or the scanning device can adjust the position of the detector according to the position of the ROI of the object before scanning.
  • the processing device 110 may acquire the position of the detector of the scanning device in the examination room and the fixed position of the ionization chamber relative to the detector of the scanning device to determine the position information of the ionization chamber in the examination room.
  • the position of the ionization chamber relative to the detector of the scanning device is adjustable.
  • the ionization chamber may be mounted within a movable cassette, which and/or other components of the scanning device may have position sensors mounted therein.
  • the processing device 110 may obtain data detected by the position sensor to determine the position of the ionization chamber.
  • the location information of the ionization chamber in the examination room may be the location of the ionization chamber in a 3D coordinate system.
  • a 3D coordinate system may be established throughout the examination room for describing the location of the ionization chamber and/or other components of the system 100 (eg, detectors, projection device 152).
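  • As a small illustrative sketch, assuming the ionization chamber sits at a fixed offset from the detector, the chamber's position in the examination-room 3D coordinate system can be obtained by adding that offset to the detector's room position; the coordinate values below are hypothetical.

```python
# Illustrative sketch: if an ionization chamber sits at a fixed offset from the detector,
# its room (3D) coordinates follow from the detector's room coordinates by vector addition.
from typing import Tuple

Vec3 = Tuple[float, float, float]


def chamber_position_in_room(detector_position: Vec3, chamber_offset: Vec3) -> Vec3:
    """Add the fixed chamber-to-detector offset to the detector's room position."""
    return tuple(d + o for d, o in zip(detector_position, chamber_offset))


if __name__ == "__main__":
    detector_in_room = (1.20, 0.00, 1.10)     # metres, example 3D coordinates
    offset_from_detector = (0.05, 0.08, 0.02)  # example fixed offset
    print(chamber_position_in_room(detector_in_room, offset_from_detector))
```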
  • Step 320 Determine a detection area of at least one of the one or more ionization chambers based on the location information of the one or more ionization chambers.
  • this step can be completed by the detection area determination module 620 .
  • the detection area of the ionization chamber and the location information of the ionization chamber may be correlated.
  • the detection area of an ionization chamber may be a fixed range around the ionization chamber, such as a fixed size circular area, square area, triangular area or other shaped area.
  • the size and shape of the detection region can be related to the size of the ionization chamber; for example, the size of the region (eg, radius, side length, area, etc.) may be determined according to the size of the ionization chamber.
  • the processing device 110 may determine the actual detection area of a certain ionization chamber based on the location information of the chamber and the size of the detection area.
  • the processing device 110 may determine the detection area of only one of the one or more ionization chambers. In some embodiments, the processing device 110 may determine a detection area for each of the plurality of ionization chambers. In some embodiments, the processing device 110 may determine the detection area of only a portion of the one or more ionization chambers. For example, the processing device 110 may, after determining the position information of the plurality of ionization chambers, further determine the position of a part of the ionization chambers in the ionization chambers close to a certain position (eg, the center point, the upper half, the lower half, etc.) of the flat panel detector. detection area.
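  • The following is a minimal sketch of how a detection area might be derived from a chamber's position and a size related to the chamber; the circular shape and the scaling rule are assumptions for illustration only.

```python
# Minimal sketch of step 320: derive a detection area from a chamber's position and a
# size related to the chamber. A circular area is assumed purely for illustration.
from dataclasses import dataclass


@dataclass
class DetectionArea:
    center_x: float
    center_y: float
    radius: float

    def contains(self, x: float, y: float) -> bool:
        """Return True if the point (x, y) lies inside the circular detection area."""
        return (x - self.center_x) ** 2 + (y - self.center_y) ** 2 <= self.radius ** 2


def detection_area_for_chamber(chamber_center: tuple, chamber_width: float) -> DetectionArea:
    """Assume the detection radius scales with the chamber width (illustrative rule)."""
    return DetectionArea(chamber_center[0], chamber_center[1], radius=0.6 * chamber_width)


if __name__ == "__main__":
    area = detection_area_for_chamber(chamber_center=(0.10, 0.25), chamber_width=0.08)
    print(area, area.contains(0.12, 0.26))
```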
  • Step 330 Determine projection data of the projection device, where the projection data includes image data corresponding to the detection area of the at least one ionization chamber.
  • this step can be completed by the projection data determination module 630 .
  • processing device 110 may determine projection data for projection device 152, the projection data including image data corresponding to the detection area of at least one of the one or more ionization chambers (hereinafter referred to as "at least one ionization chamber").
  • the image data may be data generated based on the actual detection area of the at least one ionization chamber.
  • the image data may be color image data or grayscale image data.
  • if the processing device 110 determines the detection regions of multiple ionization chambers, the image data may include multiple graphics (eg, circles, squares, etc.), wherein each graphic corresponds to the detection area of one ionization chamber.
  • the graph can be a color-filled graph or a line that outlines the detection area of the ionization chamber.
  • the detection areas may be represented by the same color and/or graphics, or may be represented by different colors and/or graphics.
  • the processing device 110 may obtain the location of the projection device in the examination room and determine, based on one or more of the location of the projection device, the location of the ionization chamber, the detection area of the ionization chamber, the body thickness of the subject, and the like, the size of the graphic corresponding to the detection area of the ionization chamber, so that the detection area of the ionization chamber projected onto the object is consistent with the actual detection area of the ionization chamber.
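  • As a hedged geometric sketch of the size adjustment described above, assuming a projector mounted above the detector plane, the graphic intercepted by the body surface can be related to the actual detection area by the ratio of the projector-to-surface and projector-to-detector distances; all distances below are hypothetical.

```python
# Illustrative geometry sketch: rays that span the actual detection area at the detector
# plane are intercepted earlier by the patient's body surface, which is closer to the
# projector by roughly the body thickness, so the mark on the surface is smaller by the
# distance ratio. All distances are assumptions for illustration.
def projected_mark_size(actual_size_m: float,
                        projector_to_detector_m: float,
                        body_thickness_m: float) -> float:
    """Return the physical size of the mark where the body surface intercepts rays that
    would span the actual detection area at the detector plane."""
    projector_to_surface_m = projector_to_detector_m - body_thickness_m
    return actual_size_m * projector_to_surface_m / projector_to_detector_m


if __name__ == "__main__":
    # e.g. projector 1.8 m above the detector plane, patient 0.25 m thick
    print(projected_mark_size(actual_size_m=0.06,
                              projector_to_detector_m=1.8,
                              body_thickness_m=0.25))
```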
  • the projection device is used for projecting the projection data onto the object to be scanned, so as to mark the detection area of the ionization chamber.
  • the projection data also includes other forms of data or parameters, such as projection direction, brightness, resolution, contrast, etc., or combinations thereof. At least some of these data or parameters may have default values or values manually set by a user (eg, an operator).
  • the projection direction may be along the direction from the lens of the projection device to the center point of the detector (or the center point of the detection area of the at least one ionization chamber).
  • the image data may include graphics, text, patterns, icons, numbers, and the like.
  • Step 340 Control the projection device to project the projection data onto the object.
  • this step may be completed by the control module 640 .
  • processing device 110 may control projection device 152 to project the projection data onto the object.
  • the projection device 152 may be and/or include any suitable device capable of projecting image data.
  • projection device 152 may be a cathode ray tube (CRT) projector, a liquid crystal display (LCD) projector, a digital light processor (DLP) projector, a digital light path vacuum tube (DLV) projector, or other device that can project image data .
  • the projection device 152 can project the projection data onto the object by means of center projection or parallel projection.
  • the projection device 152 may use orthographic projection or oblique projection to project the projection data onto the object; preferably, orthographic projection may be used.
  • the projection data may include image data corresponding to the detection region of the at least one ionization chamber.
  • the projection device may project the data to be projected toward the center point of the detector, and, due to the occlusion of the object, the image data corresponding to the detection area of the at least one ionization chamber may be projected onto the body surface of the object, thereby marking the detection area of the at least one ionization chamber. The operator can easily observe the projected detection area of the at least one ionization chamber, so as to judge whether the at least one ionization chamber includes one or more candidate ionization chambers.
  • the term "candidate ionization chamber” refers to an ionization chamber covered by the subject's ROI to be scanned.
  • the operator can visually determine the approximate extent of the ROI.
  • the projection data projected by the projection device onto the object may also include image data corresponding to the ROI.
  • the ROI can be represented by a colored-filled figure (eg, a rectangle), or the outline of the ROI can be outlined with a colored line.
  • Different display modes can be used to distinguish the detection area of the ROI and the ionization chamber. For example, different colors can be used to represent the detection area of the ROI and the ionization chamber.
  • a line may be used to outline the ROI, and a color-filled graph may be used to represent the detection area of the ionization chamber.
  • a projection manner facilitates the operator to visually observe whether the ROI of the object covers the detection area of at least a part of the ionization chambers in the at least one ionization chamber.
  • the operator can view the extent of the ROI by projecting a laser light on the subject.
  • the processing device 110 may automatically determine the ROI of the object.
  • the manner of determining the ROI may adopt a manner commonly used by those skilled in the art, which is not limited in this application.
  • the processing device 110 may acquire an image (eg, the reference image described in FIG. 4 ) of the to-be-scanned part (eg, chest, lower extremity) containing the object, and use a template matching algorithm, a machine learning model, or the like to determine the ROI of the object.
  • the operator may adjust the location of one or more ionization chambers (also referred to as reference ionization chambers) of the at least one ionization chamber relative to the ROI of the subject (eg, a patient). For example, the operator can instruct the patient to change position or posture. As another example, if the ionization chamber is movable relative to the detector, the operator can adjust the position of one or more reference ionization chambers relative to the detector.
  • the operator may adjust the position of the detector (eg, move the detector up and down and/or side to side), thereby changing the position of one or more reference ionization chambers relative to the ROI of the subject.
  • the processing device 110 may repeat the process 300 so that the operator can determine whether the detection area of at least one reference ionization chamber is covered by the ROI in the one or more reference ionization chambers.
  • the processing device 110 may update the projection data in real time, so that the projection data can reflect the actual detection area of the one or more reference ionization chambers after their positions change, and the user can observe and judge whether it is necessary to continue to adjust the position of the one or more reference ionization chambers.
  • the processing device 110 may also adjust the position of one or more ionization chambers (also referred to as reference ionization chambers) of the at least one ionization chamber relative to the ROI of the object (eg, the patient). For example, the processing device 110 may generate motion control instructions based on the position of the one or more reference ionization chambers relative to the detector, control the movement of the scanning table 151-4 based on the motion control instructions, and thereby adjust the position of one or more of the at least one ionization chamber relative to the ROI of the object. Further, the processing device 110 may acquire a virtual model of the scanning table 151-4, and control the movement of the scanning table 151-4 on which the target object is placed based on the virtual model of the scanning table 151-4.
  • the processing device 110 may generate motion control instructions based on the position of the one or more reference ionization chambers relative to the detector, control one of the virtual model and the scanning table 151-4 to execute the motion control instructions, and control the other of the virtual model and the scanning table 151-4 to perform the same motion as the one that acquired the motion control instructions.
  • the scanning stage 151-4 may obtain motion control instructions from the processing device 110 and automatically perform corresponding motions based on the motion control instructions.
  • the processing device 110 may obtain current motion information of the scanning table 151-4 when the scanning table 151-4 receives a motion control instruction to perform the current motion.
  • the process of performing the same motion may be implemented based on a mapping relationship between the device coordinate system of the scanning stage 151-4 and the model coordinate system of the virtual model.
  • the processing device 110 may acquire the current motion information of the scanning table 151-4, map the position information in the current motion information (eg, the coordinates in the device coordinate system) to the model coordinate system, and then control the virtual model to move to the corresponding position based on the mapped position information (eg, the coordinates in the model coordinate system).
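  • A minimal sketch of the device-to-model coordinate mapping assumed above is given below, using a fixed homogeneous transform (rotation, scale, and translation); the transform values and the scanning-table position are illustrative assumptions.

```python
# Minimal sketch of the device-to-model coordinate mapping: a fixed rigid transform
# (rotation + translation + scale) relates the scanning table's device coordinate system
# to the virtual model's coordinate system. The transform values are illustrative.
import numpy as np


def make_mapping(scale: float, rotation_deg: float, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from device coordinates to model coordinates."""
    theta = np.radians(rotation_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                    [np.sin(theta),  np.cos(theta), 0.0],
                    [0.0,            0.0,           1.0]])
    transform = np.eye(4)
    transform[:3, :3] = scale * rot
    transform[:3, 3] = translation
    return transform


def device_to_model(point_device: np.ndarray, transform: np.ndarray) -> np.ndarray:
    """Map a 3D point in the device coordinate system to the model coordinate system."""
    homogeneous = np.append(point_device, 1.0)
    return (transform @ homogeneous)[:3]


if __name__ == "__main__":
    mapping = make_mapping(scale=1.0, rotation_deg=90.0, translation=np.array([0.5, 0.0, 0.0]))
    table_position_device = np.array([0.2, 0.0, 0.9])   # current scanning table position
    print(device_to_model(table_position_device, mapping))  # drive the virtual model here
```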
  • the motion of the medical diagnosis and treatment equipment is displayed synchronously through the virtual model, which makes it convenient for the operator to observe and monitor the motion and parameters of the medical diagnosis and treatment equipment, further improves the interaction between the medical diagnosis and treatment equipment and the user, and facilitates operation.
  • the user can select one or more target ionization chambers from the one or more candidate ionization chambers through a terminal (eg, terminal 130).
  • the processing device 110 may obtain user input from the terminal 130 to determine the selected target ionization chamber.
  • the processing device 110 may adjust the projection data of the projection device 152 such that the image data corresponding to the detection areas of the one or more target ionization chambers and the image data corresponding to the detection areas of the other ionization chambers in the projection data have visually distinct differences. See FIG. 5 for a more detailed description of how the projection data is adjusted after the target ionization chamber is determined.
  • processing device 110 may automatically determine whether one or more candidate ionization chambers are included in the at least one ionization chamber. Specifically, the processing device 110 may automatically determine whether the ROI of the object covers the detection area of at least a part of the ionization chambers in the at least one ionization chamber. In response to the ROI of the object covering the detection regions of at least a portion of the ionization chambers of the at least one ionization chamber, the processor 110 may designate the ionization chambers whose detection regions are covered as candidate ionization chambers. Optionally, the processing device 110 may also automatically select one or more target ionization chambers from the one or more candidate ionization chambers. For the description of the above automatic determination process, reference may be made to FIG. 4 .
  • FIG. 4 is a flow chart of automatically selecting a target ionization chamber according to some embodiments of the present application.
  • the process 400 may be performed by the processing device 110 .
  • process 400 may be stored in a storage device (eg, storage device 120 ) in the form of programs or instructions, which, when executed by system 100 , may implement process 400 for automatically selecting a target ionization chamber.
  • the process 400 may include the following steps.
  • Step 410 Obtain a reference image of the object, where the reference image is captured by a camera after the projection device projects the projection data onto the object to be scanned.
  • step 410 may be performed by acquisition module 610 .
  • an image capture device (eg, a camera) of the system 100 may acquire a reference image of the object.
  • the processing device 110 (eg, the control module 640) may control the camera to capture a reference image of the object. While capturing the image data of the object, the camera can also capture the graphics projected on the object.
  • the reference image may include an ROI of an object and a marked detection region of at least one ionization chamber projected on the object.
  • the graphics projected by the projection device on the object may include graphics corresponding to the detection area of the at least one ionization chamber and graphics corresponding to the ROI of the object. Accordingly, the reference image may include the ROI marked by projection and the detection area of at least one ionization chamber.
  • the image capture device may be and/or include any suitable device capable of capturing image data of an object.
  • an image capture device may include a camera (eg, digital camera, analog camera, etc.), camera, red-green-blue (RGB) sensor, RGB-depth (RGB-D) sensor, or other device that can capture image data of an object.
  • the image capturing device (eg, a camera) can capture the reference image of the object in the form of an orthophoto.
  • Step 420 identifying a first region in the reference image, where the first region corresponds to a region of interest to be scanned of the object.
  • image data corresponding to the ROI is not included in the projection data of the projection device.
  • the processing device 110 may identify the first region corresponding to the ROI from the reference image.
  • the processing device 110 may use a template matching algorithm, a machine learning model, or the like to determine the ROI of the object from the reference image, which is not limited in this application.
  • the reference image may be input into a trained machine learning model, and the trained machine learning model may output the identified first region after processing the reference image.
  • the training samples used to train the machine learning model may include multiple sample images and ROIs manually labeled according to the sample images.
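  • As a hedged illustration of the template matching option mentioned above (a trained machine learning model could expose an analogous interface), the sketch below locates the first region as the best template match in the reference image; it assumes OpenCV (cv2) is available and uses placeholder file names.

```python
# Hedged sketch of the template-matching option for locating the first region (ROI) in the
# reference image. The reference image and anatomy template paths are placeholders.
import cv2
import numpy as np


def find_first_region(reference_image: np.ndarray, roi_template: np.ndarray):
    """Return ((x, y, w, h), score) of the best template match in the reference image."""
    result = cv2.matchTemplate(reference_image, roi_template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    h, w = roi_template.shape[:2]
    return (*max_loc, w, h), max_val  # bounding box and match confidence


if __name__ == "__main__":
    reference = cv2.imread("reference_image.png", cv2.IMREAD_GRAYSCALE)    # placeholder path
    template = cv2.imread("chest_roi_template.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
    if reference is not None and template is not None:
        box, score = find_first_region(reference, template)
        print("first region:", box, "score:", score)
```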
  • the projection data of the projection device may include image data corresponding to the ROI.
  • the processing device 110 may employ an image recognition algorithm to identify the first region from the reference image.
  • the image recognition algorithm may include a recognition algorithm based on image features such as color features, texture features, shape features, and local feature points.
  • Step 430 identifying a second area in the reference image, where the second area corresponds to the detection area of at least one ionization chamber projected onto the object.
  • the processing device 110 may employ an image recognition algorithm to identify a second region corresponding to the detection region of the at least one ionization chamber from the reference image.
  • the image recognition algorithm may include a recognition algorithm based on image features such as color features, texture features, shape features, and local feature points.
  • the second region corresponding to the detection region of the at least one ionization chamber projected onto the object may consist of a plurality of separated regions.
  • for example, if the projection data includes image data of the detection areas of two ionization chambers, the second area may be composed of two separated sub-areas, each of which corresponds to the detection area of one ionization chamber.
  • Step 440 based on the first region and the second region, determine whether the at least one ionization chamber includes one or more candidate ionization chambers.
  • step 440 may be determined by candidate ionization chamber determination module 650 .
  • processing device 110 can determine whether the first area covers at least one of the second area or one or more sub-areas of the second area.
  • if the first area covers at least one of the second area or one or more sub-areas of the second area, processing device 110 may determine that the at least one ionization chamber includes at least one candidate ionization chamber.
  • the target ionization chamber can be selected from candidate ionization chambers. If the first region does not cover the second region or any sub-region of the second region, the processing device 110 may determine that the at least one ionization chamber does not include any candidate ionization chambers.
  • the above judgment may be automatically performed by a component of the system 100 (eg, the processing device 110 ) based on the first area and the second area, or may be manually judged by a user.
  • the user may perform human judgment through the first area and the second area displayed on the display interface of the component of the system 100 (eg, the terminal 130 ), and input the judgment result into the terminal 130 .
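  • A minimal sketch of the automatic coverage judgment in this step is given below, assuming the first region and the sub-areas of the second region are available as binary masks and that a candidate chamber requires its sub-area to lie entirely inside the first region; the containment criterion and mask layout are assumptions for illustration.

```python
# Illustrative sketch of the automatic coverage check in step 440: given a binary mask of
# the first region (ROI) and binary masks of the second region's sub-areas (one per
# ionization chamber), a chamber is a candidate if its sub-area lies inside the ROI mask.
import numpy as np


def find_candidate_chambers(first_region_mask: np.ndarray,
                            second_region_submasks: list) -> list:
    """Return indices of sub-areas fully covered by the first region."""
    candidates = []
    for idx, submask in enumerate(second_region_submasks):
        outside = np.logical_and(submask, np.logical_not(first_region_mask))
        if not outside.any():          # no sub-area pixel falls outside the ROI
            candidates.append(idx)
    return candidates


if __name__ == "__main__":
    roi = np.zeros((100, 100), dtype=bool)
    roi[20:80, 20:80] = True
    chamber_a = np.zeros_like(roi); chamber_a[30:40, 30:40] = True   # inside the ROI
    chamber_b = np.zeros_like(roi); chamber_b[5:15, 5:15] = True     # outside the ROI
    print(find_candidate_chambers(roi, [chamber_a, chamber_b]))       # -> [0]
```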
  • Step 450 in response to determining that the at least one ionization chamber does not include any candidate ionization chamber, cause the terminal device to generate a prompt message.
  • step 450 may be determined by candidate ionization chamber determination module 650 .
  • the prompt information may be in the form of text, voice, image, video, alarm, etc., or any combination thereof.
  • the prompt information may be in the form of text and voice.
  • the display interface of the terminal 130 may display a prompting text (for example, "The ROI does not cover any detection area of the ionization chamber"), and the terminal 130 can issue a voice prompt corresponding to the text at the same time.
  • the prompt information may be in the form of an image.
  • when a component of the system 100 (eg, the processing device 110) determines that the region of interest does not cover the detection area of any of the at least one ionization chamber, the part of the display interface of the terminal 130 used for displaying the first area and/or the second area may change color and/or flash to prompt the user.
  • after receiving the prompt information, the user can manually change the position of one or more reference ionization chambers in the at least one ionization chamber relative to the ROI to be scanned, so that the ROI can cover the detection area of at least one of the one or more reference ionization chambers. For example, the user can adjust the position of one or more reference ionization chambers.
  • the user can adjust the pose and/or position of the object to move the ROI of the object relative to the one or more reference ionization chambers so that the ROI can cover the one or more reference ionization chambers At least one of the reference ionization chamber detection areas.
  • a component of the system 100 may move one or more reference ionization chambers in the at least one ionization chamber relative to the ROI of the subject so that the ROI can cover the detection area of at least one of the one or more reference ionization chambers.
  • Step 460 in response to determining that the at least one ionization chamber includes one or more candidate ionization chambers, selecting one or more target ionization chambers from the one or more candidate ionization chambers, wherein the one or more target ionization chambers will operate during the scan of the object.
  • step 460 may be performed by target ionization chamber determination module 660 .
  • the processing device 110 may select one or more target ionization chambers from the one or more candidate ionization chambers, so that the one or more target ionization chambers will operate during scanning of the object.
  • the processing device 110 may select one or more target ionization chambers near the ROI of the subject among the one or more candidate ionization chambers. For example, the processing device 110 may select one or more target ionization chambers near the ROI from the candidate ionization chambers based on the distance between the candidate ionization chambers and the ROI.
  • the distance between the candidate ion chamber and the ROI may refer to the distance between a point (eg, center point) of the candidate ion chamber and a point (eg, center point) of the ROI.
  • the processing device 110 may determine the distance between the candidate ion chamber and the ROI based on the position information of the candidate ion chamber and the position information of the ROI.
  • the processing device 110 may determine the distance between the candidate ionization chamber and the ROI. Processing device 110 may determine whether the distance is less than a distance threshold (eg, 2 centimeters). If the distance between the candidate ionization chamber and the ROI is less than the distance threshold, processing device 110 may determine that the candidate ionization chamber is near the ROI and designate the candidate ionization chamber as a target ionization chamber. For another example, the processing device 110 may select, among the multiple candidate ionization chambers, the candidate ionization chamber closest to the ROI, that is, the candidate ionization chamber with the smallest distance from the ROI, as the target ionization chamber.
  • the processing device 110 may also select the candidate ionization chamber closest to the important part of the ROI as the target ionization chamber. For example, when the ROI is the chest cavity, the processing device 110 may select the candidate ionization chamber closest to the heart site as the target ionization chamber. Optionally, the processing device 110 may also randomly select one or more target ionization chambers from the candidate ionization chambers.
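  • As a hedged sketch of the distance-based selection described above, the following code keeps the candidate chambers whose centers lie within a threshold of the ROI center and otherwise falls back to the nearest candidate; the 2 cm threshold follows the example in the text, while the chamber ids and coordinates are hypothetical.

```python
# Hedged sketch of distance-based target selection: compute the distance from each
# candidate chamber's center to the ROI center and keep the chambers within a threshold,
# falling back to the single nearest candidate if none qualifies.
import math
from typing import Dict, Tuple

Point = Tuple[float, float]


def select_target_chambers(candidates: Dict[str, Point],
                           roi_center: Point,
                           distance_threshold_m: float = 0.02) -> list:
    """Return candidate chamber ids whose center lies within the threshold of the ROI center."""
    targets = []
    for chamber_id, center in candidates.items():
        if math.dist(center, roi_center) < distance_threshold_m:
            targets.append(chamber_id)
    if not targets and candidates:   # fall back to the single nearest candidate
        targets = [min(candidates, key=lambda cid: math.dist(candidates[cid], roi_center))]
    return targets


if __name__ == "__main__":
    candidates = {"left": (0.10, 0.30), "center": (0.15, 0.31), "right": (0.20, 0.30)}
    print(select_target_chambers(candidates, roi_center=(0.15, 0.30)))
```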
  • FIG. 5 is a flowchart of adjusting projection data according to some embodiments of the present application.
  • the process 500 may be performed by the processing device 110 .
  • process 500 may be stored in a storage device (eg, storage device 120 ) in the form of programs or instructions, and when system 100 executes the program or instructions, process 500 for adjusting projection data may be implemented.
  • the process 500 may include the following steps.
  • Step 510 Obtain identification information of one or more target ionization chambers selected from the at least one ionization chamber.
  • step 510 may be performed by the target ionization chamber determination module 660.
  • processing device 110 may obtain identification information for one or more target ionization chambers selected from the at least one ionization chamber.
  • the identification information is information for distinguishing different ionization chambers.
  • the identification information can be the serial number of the ionization chamber, or the location information of the ionization chamber and/or other information that can distinguish the target ionization chamber from other ionization chambers.
  • the one or more target ionization chambers may be determined manually by the user after observing the projection result (for example, the image data projected by the projection device in step 340 of the process 300), or may be determined automatically by the processing device 110 (for example, according to the process 400).
  • Step 520: based on the identification information, adjust the projection data so that the first feature value of the image data corresponding to the detection areas of the one or more target ionization chambers in the projection data is different from the second feature value of the image data corresponding to the detection areas of the other ionization chambers, wherein the first feature value and the second feature value correspond to the same image feature.
  • the image feature may be a feature characterizing different attributes of the image, such as the fill color of the image, the color of the image outline, the thickness of the image outline, and the like.
  • the first feature value and the second feature value may be different feature values corresponding to the same image feature.
  • For example, the first feature value may be red and the second feature value may be green; before the projection data is adjusted, the color of the graphics corresponding to the detection areas of all ionization chambers in the at least one ionization chamber may be green.
  • the first feature value may be 5 mm, and the second feature value may be 1 mm.
  • the image features may also be words and/or symbols.
  • the image feature may be an arrow, and in this case, the feature value of the image feature may be whether or not the image feature contains the arrow.
  • the first feature value may contain the arrow, and the second feature value may not contain the arrow.
  • the image feature can also be text, such as "selected" and so on.
  • the first characteristic value and/or the second characteristic value may be preset in the system 100, or may be set by the user during operation (for example, through the terminal 130). For example, the user may set the first feature value of the image feature corresponding to the color of the image to yellow through the terminal 130 .
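  • As a minimal sketch of the feature-value adjustment in step 520, the code below assumes the image feature is fill color and uses a hypothetical mapping from chamber identifiers to colors; the data structures are illustrative only.

```python
def assign_fill_colors(chamber_ids, target_ids,
                       first_feature_value="red", second_feature_value="green"):
    """Assign a fill color to each chamber's detection-area graphic: detection
    areas of the target ionization chambers receive the first feature value,
    all other detection areas receive the second feature value."""
    return {cid: (first_feature_value if cid in target_ids else second_feature_value)
            for cid in chamber_ids}

# Example: three ionization chambers, of which "IC2" was selected as the target chamber.
colors = assign_fill_colors(["IC1", "IC2", "IC3"], target_ids={"IC2"})
print(colors)  # {'IC1': 'green', 'IC2': 'red', 'IC3': 'green'}
```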
  • FIG. 6 is a block diagram of a system for marking a detection region of an ionization chamber according to some embodiments of the present application.
  • the system 600 for marking a detection region of an ionization chamber (eg, implemented on the processing device 110) may include an acquisition module 610, a detection region determination module 620, a projection data determination module 630, and a control module 640.
  • components of the system for marking the detection region of an ionization chamber may also include a candidate ionization chamber determination module 650 and a target ionization chamber determination module 660 .
  • Acquisition module 610 may be used to acquire location information for one or more ionization chambers in a scanning device (eg, scanning device 151).
  • the position of the ionization chamber relative to the detector of the scanning device is fixed, and the acquisition module 610 may obtain the position of the detector of the scanning device in the examination room and the fixed position of the ionization chamber relative to the detector to determine the location of the ionization chamber in the examination room.
  • the position of the ionization chamber relative to the detector of the scanning device is adjustable.
  • the ionization chamber may be mounted within a movable cassette, which and/or other components of the scanning device may have position sensors mounted therein.
  • the acquisition module 610 may acquire the data detected by the position sensor to determine the position of the ionization chamber.
  • the location information of the ionization chamber in the examination chamber may be the location of the ionization chamber in a 3D coordinate system.
  • a 3D coordinate system may be established throughout the examination room for describing the location of the ionization chamber and/or other components of the system 100 (eg, detectors, projection device 152).
  • For more information about the acquisition module 610, reference may be made to the rest of this specification (eg, step 310), which will not be repeated here.
  • the detection area determination module 620 may be configured to determine a detection area of at least one of the one or more ionization chambers based on the location information of the one or more ionization chambers. In some embodiments, the detection area of the ionization chamber and the location information of the ionization chamber may be correlated. For example, the size and shape of the detection region can be related to the size of the ionization chamber. In some embodiments, the size (eg, radius, side length, area, etc.) of the detection area may be preset in the system 100 . The detection area determination module 620 may determine the actual detection area of a certain ionization chamber based on the location information of the ionization chamber and the size of the detection area. For more information about the detection area determination module 620, reference may be made to the rest of this specification (eg, step 320), which will not be repeated here.
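  • As an illustration of the steps above, the following sketch derives the chamber position from the detector position plus a fixed offset and then forms a square detection area of preset size around it; the coordinate values, the offset, and the side length are assumptions for the example.

```python
def chamber_position(detector_position, chamber_offset):
    """Position of the ionization chamber in the examination-room (3D) coordinate
    system: the detector position plus the fixed offset of the chamber."""
    return tuple(d + o for d, o in zip(detector_position, chamber_offset))

def detection_area(position, side_length_cm=4.0):
    """Square detection area in the detector plane (x, y), centered on the
    chamber position, with a preset side length."""
    x, y, _z = position
    half = side_length_cm / 2.0
    return {"x_min": x - half, "x_max": x + half, "y_min": y - half, "y_max": y + half}

pos = chamber_position(detector_position=(100.0, 50.0, 20.0), chamber_offset=(0.0, -10.0, 0.0))
print(pos)                  # (100.0, 40.0, 20.0)
print(detection_area(pos))  # square region centered at (100.0, 40.0)
```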
  • the projection data determination module 630 may be configured to determine projection data of the projection device, the projection data including image data corresponding to the detection area of the at least one ionization chamber. In some embodiments, the projection data further includes image data corresponding to the ROI of the object to be scanned. For more information about the projection data determination module 630, reference may be made to the rest of this specification (eg, step 330), which will not be repeated here.
  • the control module 640 may be used to control the projection device 152 to project the projection data onto the object to be scanned.
  • the projection device 152 may be and/or include any suitable device capable of projecting image data.
  • projection device 152 may be a cathode ray tube (CRT) projector, a liquid crystal display (LCD) projector, a digital light processor (DLP) projector, a digital light path vacuum tube (DLV) projector, or other device that can project image data .
  • the candidate ionization chamber determination module 650 may be used to determine whether one or more candidate ionization chambers are included in the at least one ionization chamber. For example, the candidate ionization chamber determination module 650 may identify a first region corresponding to the ROI from the reference image. In some embodiments, the candidate ionization chamber determination module 650 may also be used to determine whether one or more candidate ionization chambers are included in the at least one ionization chamber based on the first region and the second region.
  • the candidate ionization chamber determination module 650 may also be configured to, in response to determining that the at least one ionization chamber does not include any candidate ionization chambers, cause a terminal device (eg, terminal 130 ) to generate an alert message.
  • the target ionization chamber determination module 660 may determine one or more target ionization chambers. For example, in response to determining that the at least one ionization chamber includes one or more candidate ionization chambers, the target ionization chamber determination module 660 may select one or more target ionization chambers from the one or more candidate ionization chambers, the One or more target ionization chambers will operate during the scanning of the object. In some embodiments, the target ionization chamber determination module 660 may select one or more target ionization chambers near the ROI of the subject among the plurality of candidate ionization chambers. For more information about the target ionization chamber determination module 660, reference may be made to the rest of this specification (eg, step 460 ), which will not be repeated here.
  • the acquisition module 610, the detection area determination module 620, the projection data determination module 630, and the control module 640 disclosed in FIG. 6 may be different modules in one device (eg, the processing device 110), or the functions of two or more of the above modules may be implemented by one module.
  • the detection area determination module 620 and the projection data determination module 630 may be two modules, or one module may have the functions of the above two modules at the same time.
  • the candidate ion chamber determination module 650 and the target ion chamber determination module 660 may be omitted. Such variations are all within the protection scope of this specification.
  • the embodiment of this specification also discloses a computer-readable storage medium, which can store computer instructions. After the computer reads the computer instructions in the storage medium, the computer can execute the method for marking the detection area of an ionization chamber provided by the present application.
  • An embodiment of the present specification also discloses an apparatus for marking a detection area of an ionization chamber.
  • the apparatus includes a program for marking a detection area of an ionization chamber, and the program can implement the method for marking a detection area of an ionization chamber provided in this application.
  • the possible beneficial effects of the embodiments of the present specification include, but are not limited to: (1) projecting the projection data that needs to be projected (for example, the image data of the detection area of the ionization chamber) onto the object through the projection device can clearly and effectively mark or display the detection area of the ionization chamber, which overcomes the problem that the detection area of the ionization chamber lacks an identification or that the identification is blocked by the object, thereby improving the quality of the scanned image and reducing the time and effort required by the operator; (2) by obtaining a reference image of the object and identifying the reference image, it can be automatically determined whether the region of interest to be scanned of the object covers the detection region of at least one of the one or more ionization chambers; and, by generating prompt information and/or controlling the movement of the ionization chamber, the accuracy with which the object to be scanned or the region of interest covers the detection area of the ionization chamber can be improved, which also reduces the time and effort required by the operator. It should be noted that different embodiments may have different beneficial effects, and in different embodiments, the possible beneficial effects may be any one or a combination of the above, or any other possible beneficial effects.
  • the working environment of medical diagnosis and treatment equipment (such as X-ray photography equipment, angiography machines, etc.) is often isolated from the operator.
  • the operator usually sends control instructions to the medical diagnosis and treatment equipment to control its performance of medical diagnosis and treatment operations (such as image acquisition).
  • the operator cannot observe the specific motion of the medical diagnosis and treatment equipment in real time, nor know the motion parameters of the moving parts of the medical diagnosis and treatment equipment. Therefore, it is necessary to propose a control method for medical diagnosis and treatment equipment to enhance the interaction between the operator and the medical diagnosis and treatment equipment.
  • FIG. 7 is a flowchart of a control method of a medical diagnosis and treatment device according to some embodiments of the present application.
  • process 700 includes:
  • Step 710 obtaining a virtual model of the medical diagnosis and treatment equipment.
  • step 710 may be performed by model acquisition module 1110.
  • the medical diagnosis and treatment equipment may be an automated device that performs medical diagnosis and treatment tasks.
  • the medical diagnosis and treatment tasks may include, but are not limited to, medical photography tasks, surgical tasks, rehabilitation treatment tasks, and the like.
  • the medical diagnostic equipment may include an X-ray imaging system, a digital subtraction angiography (DSA) apparatus, and the like.
  • the medical diagnostic apparatus includes at least one movable first component.
  • the first component may comprise an X-ray imaging gantry, that is, a gantry of the X-ray imaging system.
  • an X-ray gantry may include multiple parts, which may include, but are not limited to, one or more of a bulb, a detector, a support element of the bulb, and a support element of the detector.
  • the virtual model is a virtual outline structure of a medical diagnosis and treatment device (eg, an X-ray imaging system) constructed by the processing device.
  • the virtual model may have the same or similar appearance as the physical structure of the medical device.
  • the virtual model can be visually displayed through a display device.
  • the virtual model may be three-dimensional or two-dimensional.
  • the virtual model may include a second component corresponding to the first component of the medical diagnostic apparatus.
  • the device coordinate system where the first component is located has a mapping relationship with the model coordinate system where the second component is located.
  • the equipment coordinate system refers to a coordinate system constructed according to the actual environment in which the medical diagnosis and treatment equipment is located
  • the model coordinate system refers to a coordinate system constructed in a virtual model.
  • the mapping relationship may be a correspondence relationship between the coordinates of any point on the first component in the device coordinate system and the coordinates of the corresponding point on the second component in the model coordinate system.
  • the corresponding relationship may be that the coordinate values are the same. For example, the coordinates of point A on the first part in the device coordinate system are (10, 10, 10), correspondingly, the coordinates of the corresponding point A' on the second part in the model coordinate system are (10, 10, 10), then point A has a corresponding relationship with point A'.
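  • A minimal sketch of the device-to-model coordinate mapping described above: in the simplest case the coordinates are identical, as in the example of point A and point A', while a scaled or translated mapping is shown here as an assumed generalization (the scale and offset values are illustrative).

```python
import numpy as np

def device_to_model(point_device, scale=1.0, offset=(0.0, 0.0, 0.0)):
    """Map a point from the device coordinate system to the model coordinate
    system. With scale=1 and zero offset this reproduces the identity
    correspondence of the example: A = (10, 10, 10) maps to A' = (10, 10, 10)."""
    return scale * np.asarray(point_device, dtype=float) + np.asarray(offset, dtype=float)

print(device_to_model((10, 10, 10)))              # [10. 10. 10.]
print(device_to_model((10, 10, 10), scale=0.01))  # a model built at 1:100 scale (assumed)
```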
  • the virtual model can be obtained by modeling the data of the medical diagnosis and treatment equipment.
  • the data used for modeling may be geometric data of the physical structure of the medical diagnostic equipment (eg, geometric coordinates of each endpoint, length of each edge, etc.).
  • the processing device may build a virtual model of the medical diagnosis and treatment device in the virtual modeling environment after scaling the geometric data of the physical structure of the medical diagnosis and treatment device according to a certain proportion.
  • the virtual model may be obtained based on images of the physical structure of the medical device.
  • the processing device may acquire images of the medical diagnosis and treatment equipment captured by the photographing device.
  • the processing device may extract the feature points of the images through a preset algorithm.
  • the feature points may be points capable of expressing the spatial distribution and surface characteristics of the medical diagnosis and treatment equipment.
  • the preset algorithm may include, but is not limited to, the Harris algorithm, the SIFT algorithm, the SURF algorithm, and the like.
  • the processing device may extract a large number of feature points and form a Point Cloud.
  • the processing device may reconstruct the point cloud to obtain a virtual model of the medical diagnosis and treatment device.
  • the reconstruction process may be implemented based on an Iterative Closest Point (ICP) algorithm.
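  • As an illustration of the point-cloud reconstruction step, the sketch below shows one rigid-alignment iteration of the kind used inside ICP (nearest-neighbour correspondences followed by a least-squares rotation and translation); it is a simplification under assumed data, not the full ICP pipeline used by the embodiments.

```python
import numpy as np

def icp_iteration(source, target):
    """One ICP-style step: match each source point to its nearest target point,
    then compute the rigid transform (R, t) that best aligns the matched pairs.

    source, target: (N, 3) and (M, 3) arrays of 3D feature points."""
    # Nearest-neighbour correspondences (brute force, acceptable for a sketch).
    d = np.linalg.norm(source[:, None, :] - target[None, :, :], axis=2)
    matched = target[np.argmin(d, axis=1)]

    # Least-squares rigid transform via SVD (Kabsch algorithm).
    src_c, tgt_c = source.mean(axis=0), matched.mean(axis=0)
    H = (source - src_c).T @ (matched - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = tgt_c - R @ src_c
    return R, t, source @ R.T + t     # the source cloud after applying (R, t)

# Tiny example: the target cloud is the source cloud shifted by (0.1, 0, 0).
src = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
R, t, aligned = icp_iteration(src, src + np.array([0.1, 0., 0.]))
print(np.round(t, 3))  # approximately [0.1, 0, 0]
```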
  • the virtual model can be visually displayed through the display interface.
  • the display interface may be a visual interface on a display device, and the display device may include, but is not limited to, a computer, a mobile terminal, a public display screen or a projection screen, and the like.
  • Step 720 Obtain current motion information of one of the first part of the medical diagnosis and treatment equipment and the second part of the virtual model.
  • step 720 may be performed by the motion information acquisition module 1120 .
  • the current motion information refers to information generated when the first component and/or the second component performs current motion.
  • the current motion information may include motion information for any one or more portions of the first component and/or the second component.
  • current motion information may include, but is not limited to, position, time, or velocity information, such as, but not limited to, one or more of start position, target position, motion time, and motion speed, among others.
  • the first component of the medical diagnosis and treatment device receives motion control information that will perform the current motion, and the first component may perform the current motion based on the motion control information.
  • the motion control information includes control instructions for automatic motion of the first component or manual manipulation of the first component.
  • the first component may acquire control instructions for automatic movement from the medical task, and automatically execute the corresponding medical task based on the control instructions.
  • the operator of the medical diagnostic equipment may manually operate the first part to move.
  • the processing device may obtain current motion information for the first part when the first part performs the current motion.
  • the second part of the virtual model receives motion control information that will perform the current motion, and the second part can perform the current motion based on the motion control information.
  • the motion control information may be input by mouse, keyboard, voice, gesture, or by touch.
  • the system may include a touch screen, and the operator can click or drag on the touch screen for input.
  • a microphone may be included in the system, and the operator may perform voice input by using the microphone.
  • a camera may be included in the system, and the camera may acquire the operator's gesture as an input.
  • an external mouse may be included in the system, and the operator may perform input through the mouse.
  • the system may include an external keyboard, and the operator can input characters through the keyboard.
  • the operator may drag one part of the second component through the touch screen to generate motion control information, and the second component performs the current motion based on the motion control information.
  • the processing device may obtain current motion information for the second part when the second part performs the current motion.
  • real-time position information of the second component under the current motion is also displayed on the display interface and updated with the motion.
  • the height information of one part of the second component may be displayed on the display interface.
  • Step 730: the other of the first part of the medical diagnosis and treatment equipment and the second part of the virtual model performs the same movement as the one of the two that received the motion instruction.
  • step 730 may be performed by motion execution module 1130 .
  • the motion instruction refers to motion control information that is received by the first component of the medical diagnosis and treatment equipment or the second component of the virtual model and will perform the current motion.
  • a second part of the virtual model when a first part of the medical device receives a motion instruction to perform a current motion, a second part of the virtual model will perform the same motion. In some embodiments, when the second part of the virtual model receives the movement instruction to perform the current movement, the first part of the medical diagnosis and treatment device will also perform the same movement. In some embodiments, the process of performing the same motion may be implemented based on a mapping relationship between the device coordinate system where the first component is located and the model coordinate system where the second component is located.
  • the processing device may obtain the current motion information of the first part, and map the position information (eg, the coordinates in the device coordinate system) in the current motion information of the first part to the model coordinates system, and then control the second part to move to the position based on the mapped position information (eg, the coordinates in the model coordinate system).
  • the same movement may be a synchronized movement.
  • the processing device may obtain position and/or velocity information (eg, the starting position, the target position, and/or the motion speed) from the current motion information of the one performing the current motion, and control the other to perform a synchronized movement based on this information.
  • the processing device may sample the current motion information of the one performing the current motion at a preset time interval, acquire the sampling position of the one performing the current motion, and control the other to move to the corresponding location based on the sampling position. The preset time interval may be less than a threshold (eg, 0.1 seconds, 0.01 seconds, or 0.001 seconds, etc.).
  • the same movement may not be synchronized, and there may be a time gap between the current movement of one and the same movement of the other.
  • the time interval may be 5 seconds or longer.
  • the processing device may achieve the same motion based on the real-time position of both.
  • the processing device may generate the same motion based only on the starting position and the target position of performing one of the current motions.
  • the processing device may also generate the same motion based on the starting position, the target position, and the time of the motion of one of the current motions being performed.
  • the processing device may also generate the same motion based on the starting position, the target position, and the speed of the motion of one of the current motions being performed.
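  • A minimal sketch of the sampling-based synchronization described above, assuming callable position interfaces and a simple coordinate mapping; the function names, the sampling interval, and the toy trajectory are assumptions for the example.

```python
import time

def follow(get_leader_position, move_follower_to, map_coords,
           sample_interval_s=0.01, duration_s=1.0):
    """Periodically sample the position of the component performing the current
    motion and command the other component to the mapped position, so that the
    two movements stay approximately synchronized."""
    t_end = time.monotonic() + duration_s
    while time.monotonic() < t_end:
        leader_pos = get_leader_position()        # eg, first component, device coordinates
        move_follower_to(map_coords(leader_pos))  # eg, second component, model coordinates
        time.sleep(sample_interval_s)             # preset sampling interval

# Toy usage: the "leader" moves along x; the "follower" simply records mapped positions.
positions = []
start = time.monotonic()
follow(get_leader_position=lambda: (100.0 * (time.monotonic() - start), 0.0, 0.0),
       move_follower_to=positions.append,
       map_coords=lambda p: p,   # identity mapping, as in the coordinate example above
       duration_s=0.05)
print(len(positions) > 0)        # True: several sampled positions were forwarded
```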
  • In some embodiments, when one part of the first component performs the current motion, the display interface highlights the movement track of the part of the second component corresponding to that part of the first component.
  • the motion trajectory is a trajectory generated when a part of the second component performs the same motion as one of the parts of the first component.
  • FIG. 8A is a schematic structural diagram of a medical diagnosis and treatment device according to some embodiments of the present application
  • FIG. 8B is a schematic structural diagram of a virtual model according to some embodiments of the present application.
  • the medical diagnosis and treatment equipment may be an X-ray camera gantry.
  • the X-ray photography frame may include a detector assembly 801, a guide rail 802, a radiation source assembly 803 and a bed board 804.
  • the detector assembly 801 and the radiation source assembly 803 constitute a photography module, wherein the radiation source assembly 803 generally includes a high voltage generator, a bulb and a beam limiting element.
  • the detector unit 801 can move on the upright guide rail 802, the ray source module 803 can slide on the suspended, horizontal guide rail 802 through the bracket 801, and can move or rotate in the vertical direction relative to the bracket 801,
  • the bed board 804 can be raised and lowered relative to the ground.
  • the display interface 810 can display a virtual model of the X-ray camera gantry, and the virtual model of the X-ray camera gantry has a structure similar to that of the X-ray camera gantry.
  • the virtual model of the X-ray camera gantry may include a detector assembly model 811 , a guide rail model 812 , a radiation source assembly model 813 , and a bed plate model 814 .
  • the positions of the detector assembly model 811, the guide rail model 812, the ray source assembly model 813, and the bed plate model 814 in the virtual model correspond one-to-one to the positions of the detector assembly 801, the guide rail 802, the ray source assembly 803, and the bed plate 804 in the X-ray camera gantry.
  • when the X-ray camera gantry moves, the virtual model of the X-ray gantry also performs the same movement.
  • the display interface 810 can also display parameters of each part of the virtual model of the X-ray camera gantry, and the parameters can reflect the current motion information of each part of the X-ray camera gantry.
  • the display interface 810 can display the horizontal position 540mm of the bracket of the ray source assembly, and this parameter can indicate that the current horizontal position of the bracket 801 of the ray source assembly of the X-ray camera gantry is 540mm.
  • a direction indication may be displayed in the display interface 810 to indicate the movement direction of each part in the virtual model, and the movement direction is the current movement direction of each part of the X-ray camera gantry.
  • FIG. 9 is a flowchart of another method for controlling a medical diagnosis and treatment device according to some embodiments of the present application.
  • process 900 includes:
  • Step 910: Obtain a model of the X-ray camera gantry based on the physical structure of the X-ray camera gantry, and use the model of the X-ray camera gantry to simulate a corresponding model motion trajectory based on the movement trajectory of the physical structure of the X-ray camera gantry.
  • step 910 may be performed by the motion simulation module 1210.
  • the X-ray photography rack refers to the shape structure of the X-ray photography equipment.
  • the X-ray camera gantry may include a plurality of components, which may include, but are not limited to, one or more of a base, a bracket, a bed board, a rail, a console, a robotic arm, a display module of the X-ray camera equipment, a bulb, a detector, a support element of the bulb, a support element of the detector, and the like.
  • the plurality of components may include one or more drive devices for driving movement of one or more of the plurality of parts.
  • the physical structure of the X-ray camera gantry refers to the real shape structure of the X-ray camera gantry.
  • the model of the X-ray gantry (ie, the virtual model) may have the same or similar appearance as the physical structure of the X-ray gantry.
  • the model of the X-ray gantry can be visually displayed by a display device.
  • the model of the X-ray gantry may be three-dimensional or two-dimensional.
  • the model of the X-ray gantry may include one or more model parts corresponding to one or more parts of the physical structure of the X-ray gantry.
  • the one or more model parts may be movable in the model of the X-ray gantry relative to other parts of the model.
  • the method of obtaining the model of the X-ray camera gantry can be obtained by referring to the obtaining method of the virtual model described in FIG. 7 of the present application, which is not repeated here.
  • the model of the X-ray camera gantry can be used to simulate the movement trajectory of the physical structure of the X-ray camera gantry.
  • the movement trajectory of the physical structure of the X-ray camera gantry refers to the trajectory generated when any part of the X-ray camera gantry moves from the current position to the target position.
  • the simulation refers to a process of reproducing the motion trajectory of the physical structure of the X-ray camera gantry in a virtual environment through a model of the X-ray camera gantry.
  • the processing device may establish a correspondence between the real coordinates of the physical structure of the X-ray gantry and the virtual coordinates of the model of the X-ray gantry, and implement the simulation based on the correspondence.
  • further reference may be made to the relevant descriptions of steps 920 and 930 .
  • Step 920 Obtain a motion instruction of the physical structure of the current X-ray camera gantry, where the motion instruction includes a target position to be reached by one of the components of the current X-ray camera gantry and related motion time information.
  • step 920 may be performed by the instruction acquisition module 1220.
  • the current X-ray camera gantry refers to the X-ray camera gantry that is currently performing the imaging operation.
  • the motion instruction may be an instruction to control the current X-ray gantry to perform imaging operations.
  • the motion instructions may be determined based on the medical task of the current X-ray gantry. For example, if the current medical task is to photograph the lumbar spine, the motion instruction includes moving the imaging module of the X-ray camera gantry to the vicinity of the lumbar spine of the patient. In some embodiments, the motion instruction includes a target position to which at least one of the components of the current X-ray gantry needs to be moved.
  • the movement instruction may be to control the movement of the camera module to reach the photographing area.
  • the motion instruction may also be to control the camera module (eg, the geometric center of the tube) to move to a specified coordinate point (405, 100, 405).
  • the aforementioned related movement time information may be the movement time, determined based on historical data, required for one part of the current X-ray camera gantry to reach the target position.
  • the processing device may acquire historical data of the current X-ray camera gantry, the historical data including the historical movement time taken by one of the components of the current X-ray camera gantry to reach the target position when the current X-ray camera gantry received a historical motion instruction, for example, 2 seconds or 0.5 seconds; this time is related to the current position of the X-ray camera gantry and the historical motion instruction.
  • the historical movement instruction may include the movement speed of the current X-ray gantry, and the historical movement time may be determined based on the position of the current X-ray gantry, the target position, and the movement speed.
  • the processing device may determine, based on the historical movement times, a movement time required for one of the components of the current X-ray gantry to reach the target position.
  • the components may include, but are not limited to, a base, a console, a robotic arm, and a camera module.
  • the processing device may also determine, based on the historical movement time, the movement time required for the multiple components of the current X-ray gantry to reach the target position.
  • the processing device may calculate an average of a plurality of historical exercise times to determine the exercise time.
  • In this way, the model motion trajectory of the X-ray camera gantry and the movement trajectory of the physical structure of the X-ray camera gantry can be better synchronized, so the simulation can be smoother.
  • the aforementioned related movement time information may include multiple real-time time points related to multiple positions on the way of one part of the current X-ray camera gantry reaching the target position.
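  • As a sketch of the historical-data approach mentioned above (averaging the historical movement times of comparable moves), the code below assumes a simple record format and a tolerance for deciding that two target positions are comparable; both are illustrative.

```python
def estimate_movement_time(history, component, target_position, tolerance=1.0):
    """Estimate the movement time of a component as the average of historical
    movement times recorded for moves of that component to approximately the
    same target position.

    history: list of dicts such as
        {"component": "camera_module", "target": (x, y, z), "seconds": 2.0}.
    """
    def close(a, b):
        return all(abs(ai - bi) <= tolerance for ai, bi in zip(a, b))

    times = [h["seconds"] for h in history
             if h["component"] == component and close(h["target"], target_position)]
    return sum(times) / len(times) if times else None

history = [
    {"component": "camera_module", "target": (405, 100, 405), "seconds": 2.0},
    {"component": "camera_module", "target": (405, 100, 405), "seconds": 0.5},
]
print(estimate_movement_time(history, "camera_module", (405, 100, 405)))  # 1.25
```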
  • Step 930: the physical structure of the X-ray camera gantry reaches the target position based on the motion instruction, and the model of the X-ray camera gantry synchronously simulates the model motion trajectory based on the motion instruction and the motion time information.
  • step 930 may be performed by the simulation control module 1230.
  • the processing device may input the motion instructions to the drive devices of one or more parts of the X-ray gantry, and the drive devices may drive the one or more parts of the X-ray gantry to move to one or more target positions based on the motion instructions.
  • the processing device can synchronously input the motion instructions into the model of the X-ray camera gantry.
  • the processing device may map the target position in the motion instruction to the model of the camera frame to determine the specific position of the target position in the model. For example, the processing device may determine the specific position of the target position in the model of the X-ray gantry based on the correspondence between the real coordinates of the target position and the virtual coordinates in the model of the X-ray gantry.
  • the processing device can control one or more parts of the model of the X-ray camera gantry to move to the specific position, and the trajectory formed by this process is the model movement trajectory.
  • the processing device may set the determined movement time of the physical structure of the X-ray gantry as the time in which the model of the X-ray gantry completes the model movement trajectory, so that the movement trajectory of the physical structure of the X-ray camera gantry and the movement trajectory of the model can be completed simultaneously, which improves the real-time accuracy of the simulation. As mentioned earlier, this can also be achieved by matching the real-time time points of the gantry and the model at multiple intermediate positions on the way to the target.
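  • As a sketch of completing the model trajectory in the same movement time as the physical structure, the code below linearly interpolates the model position over the determined duration; the frame rate, the linear interpolation, and the coordinates are assumptions for illustration.

```python
def model_trajectory(start, target, movement_time_s, frame_rate_hz=30):
    """Yield (timestamp, position) samples so that the model reaches the mapped
    target exactly at movement_time_s, i.e., simultaneously with the gantry."""
    frames = max(1, int(round(movement_time_s * frame_rate_hz)))
    for i in range(frames + 1):
        alpha = i / frames
        pos = tuple(s + alpha * (t - s) for s, t in zip(start, target))
        yield alpha * movement_time_s, pos

# Example: a 2-second move rendered at 30 frames per second.
samples = list(model_trajectory(start=(0.0, 0.0, 0.0),
                                target=(405.0, 100.0, 405.0),
                                movement_time_s=2.0))
print(samples[0])   # (0.0, (0.0, 0.0, 0.0))        -- starts at t = 0
print(samples[-1])  # (2.0, (405.0, 100.0, 405.0))  -- ends at t = 2.0 at the target
```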
  • the simulation may also be implemented by: arranging an optical monitoring device in the machine room of the X-ray camera gantry for monitoring the movement of the X-ray camera gantry.
  • the monitoring point may be an endpoint on one or more components on the X-ray gantry, and there may be at least one monitoring point on the one component.
  • the reference point may be a certain fixed point in the X-ray camera rack or the machine room.
  • the processing device can map the movement of the X-ray camera gantry monitored by the optical monitoring device into an animation of a model of the X-ray camera gantry, so as to simulate the movement trajectory of the X-ray camera gantry.
  • the mapping can be performed based on the position change relationship between the monitoring point and the reference point.
  • the effect of controlling the model to simulate the movement trajectory can be achieved.
  • Step 940 displaying the simulation of the motion trajectory of the model on the display device.
  • step 940 may be performed by display module 1240 .
  • the display device may include, but is not limited to, a computer display screen, a mobile phone display screen, a projection display screen, a public display screen, and the like.
  • the display device may be located inside the room of the X-ray gantry or outside the room of the X-ray gantry.
  • the display device may be a local display device or a remote display device.
  • the processing device can send the model motion trajectory to a remote display device for display through the network.
  • the display device may also display parameter changes of various parts of the model.
  • the parameters may include, but are not limited to, height, lateral movement distance, vertical movement distance, location coordinates, device model, and the like.
  • the display device can simultaneously display the spatial coordinate change of the part.
  • a display device may include multiple display areas. In some embodiments, the display device may highlight a portion of the model in one of the plurality of display areas. In some embodiments, when one of the components of the physical structure of the x-ray gantry moves, the display device may highlight on one of the display areas the movement of a portion of the model of the x-ray gantry corresponding to that component trajectory. For example, the display device may include a main display area for displaying the model as a whole, and a plurality of display areas for highlighting portions of the model. In some embodiments, the display device may display an optional viewing angle in one of the multiple display areas, and the operator may select the optional viewing angle to cause the display device to display the model motion trajectory from different viewing angles.
  • the display device may also receive operator interaction data.
  • the interaction data is input by the operator, and is used to realize the instructions for the exchange of information between the operator and the physical structure of the X-ray camera gantry and its model.
  • the operator may input an instruction in the display device to display and control the model of the X-ray camera gantry.
  • the display device may include a touch screen on which an operator can operate to input interactive data.
  • the operator can click on the corresponding list or option on the touch screen to input interaction data.
  • the operator can zoom in or zoom out the display of the model controlling the X-ray camera gantry on the touch screen.
  • the operator may drag one part of the model of the X-ray camera gantry on the touch screen to input interactive data.
  • the processing device may generate motion instructions based on interaction data input by the user on the display device, and control the physical structure of the X-ray gantry to move through the motion instructions.
  • the display device can display an optional medical task list in one of the multiple display areas, and the operator can generate corresponding motion instructions for the X-ray camera gantry by clicking on a medical task in the medical task list, and then control the X-ray camera gantry to move. For controlling the physical structure of the X-ray camera gantry to move based on the interaction data input by the user on the display device, reference may be made to the related description of FIG. 10.
  • FIG. 10 is a flowchart of another method for controlling a medical diagnosis and treatment device according to some embodiments of the present application.
  • process 1000 includes:
  • Step 1010 Acquire interaction data through the display device.
  • the interaction data is input by the operator, and is used to realize the instructions for the exchange of information between the operator and the physical structure of the X-ray camera gantry and its model.
  • the operator can input interaction data in the display device in various ways, including but not limited to touch input, voice input, image recognition input, and external device input.
  • the display device may include a touch screen on which the operator can click or drag for input.
  • the display device may include a microphone, and the operator may perform voice input by using the microphone.
  • the display device may include a camera that may acquire the operator's gesture as input.
  • the display device may include an external mouse, and the operator may perform input through the mouse.
  • the display device may include an external keyboard, and the operator may input text through the keyboard.
  • Step 1020 controlling the physical structure of the X-ray camera gantry to move based on the interaction data.
  • the interaction data may include instructions to change the display state of the model of the X-ray gantry.
  • the operator may switch the display angle of view of the model of the X-ray gantry based on the interaction data.
  • the operator can zoom in or zoom out the display of the model based on the interaction data.
  • the interaction data may include instructions to change the motion state of the model of the X-ray gantry.
  • the operator can generate interactive data by causing the model to pause or start motion.
  • the operator can generate interactive data by dragging one of the parts of the model to move.
  • the processing device may generate corresponding motion instructions based on this type of interaction data (instructions to change the motion state of the model of the X-ray gantry).
  • the generated motion instructions may be used to control the physical structure of the X-ray gantry to perform movements corresponding to the model of the X-ray gantry.
  • For example, when the operator drags one of the parts of the model, the processing device may generate corresponding motion instructions, based on which the part of the physical structure of the X-ray gantry corresponding to the dragged part also moves along the drag track.
  • the motion instruction may be generated based on the following manner: the processing device may acquire the coordinates of a plurality of sampling points on the dragging trajectory of the model, and determine a plurality of physical structures of the X-ray camera gantry based on the coordinates of the sampling points Target positions and their sequence, which in turn generate motion commands.
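  • A minimal sketch of turning a drag trajectory on the model into an ordered list of physical target positions, assuming a model-to-device coordinate mapping and uniform sampling of the recorded trajectory; the names and the sampling count are illustrative.

```python
def motion_instruction_from_drag(drag_points_model, model_to_device, num_samples=5):
    """Sample the drag trajectory (model coordinates), map each sample into the
    device coordinate system, and return the ordered target positions that the
    corresponding physical component should move through."""
    if len(drag_points_model) < 2:
        return [model_to_device(p) for p in drag_points_model]
    step = (len(drag_points_model) - 1) / (num_samples - 1)
    indices = [round(i * step) for i in range(num_samples)]
    return [model_to_device(drag_points_model[i]) for i in indices]

# Example: a straight drag recorded as 11 points, mapped 1:1 into device coordinates.
drag = [(x, 0.0) for x in range(0, 101, 10)]
targets = motion_instruction_from_drag(drag, model_to_device=lambda p: p)
print(targets)  # 5 ordered target positions along the drag track
```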
  • FIG. 11 is a schematic block diagram of a control system of a medical diagnosis and treatment device according to some embodiments of the present application.
  • control system 1100 of the medical diagnosis and treatment equipment may include a model acquisition module 1110 , a motion information acquisition module 1120 and a motion execution module 1130 .
  • the model obtaining module 1110 may be used to obtain a virtual model of a medical diagnosis and treatment device, wherein the medical diagnosis and treatment device includes at least one movable first component, and accordingly, the virtual model includes a second component simulating the first component; the device coordinate system where the first component is located has a mapping relationship with the model coordinate system where the second component is located.
  • the motion information acquisition module 1120 may be configured to acquire current motion information of one of the first part of the medical diagnosis and treatment equipment and the second part of the virtual model. In some embodiments, the motion information acquisition module 1120 may also be configured to acquire current motion information of the first part of the medical diagnosis and treatment equipment; wherein, before acquiring the current motion information of the first part of the medical diagnosis and treatment equipment, all The first component receives motion control information that will perform the current motion. In some embodiments, the motion information acquisition module 1120 may be further configured to acquire current motion information of the second part of the virtual model; wherein, before acquiring the current motion information of the second part of the virtual model, the The second component receives motion control information that will perform the current motion.
  • the motion execution module 1130 may be used for the other of the first part of the medical diagnosis and treatment equipment and the second part of the virtual model, and the first part and One of the second parts of the virtual model performs the same movement.
  • FIG. 12 is a schematic block diagram of a control system of another medical diagnosis and treatment equipment according to some embodiments of the present application.
  • control system 1200 of the medical diagnosis and treatment equipment may include a motion simulation module 1210 , an instruction acquisition module 1220 , a simulation control module 1230 and a display module 1240 .
  • the motion simulation module 1210 may be configured to obtain a model of the X-ray camera gantry based on the physical structure of the X-ray camera gantry, and based on the motion trajectory of the physical structure of the X-ray camera gantry, use The model of the X-ray camera gantry is simulated corresponding to the movement trajectory of the model.
  • the instruction acquisition module 1220 may be configured to acquire a motion instruction of the physical structure of the current X-ray camera gantry, where the motion instruction includes a target position to which one part of the current X-ray camera gantry needs to move, based on The historical data determines the movement time required for one of the parts of the current X-ray gantry to reach the target position.
  • the simulation control module 1230 may be used to control the physical structure of the X-ray gantry to reach the target position based on the motion instructions, and to control the model of the X-ray gantry, based on the motion instructions, to synchronously simulate the model motion trajectory within the aforementioned motion time.
  • the display module 1240 may be configured to display the simulation of the motion trajectory of the model on the display device. In some embodiments, the display module 1240 may also be configured to highlight on the display device a movement trajectory of a portion of the model of the X-ray gantry corresponding to one of the parts of the X-ray gantry .
  • the system 1200 may further include a data acquisition module, which may be used to acquire interaction data through the display device.
  • the data acquisition module may be further configured to control the model of the X-ray camera gantry by touch on the touch screen to generate the interaction data.
  • system 1200 may further include a motion control module for controlling the physical structure of the current X-ray gantry to move based on the interaction data.
  • Embodiments of the present application further provide a control apparatus for medical diagnosis and treatment equipment, including a processor configured to execute computer instructions to implement the control method for medical diagnosis and treatment equipment in any one or more of the foregoing embodiments of the present application.
  • the possible beneficial effects of the embodiments of the present application include, but are not limited to: (1) the movement of the medical diagnosis and treatment equipment is displayed synchronously through the display device, which is convenient for the operator to observe and monitor the movement conditions and parameters of the medical diagnosis and treatment equipment; (2) the diagnosis and treatment process of the medical diagnosis and treatment equipment can be displayed synchronously, which is convenient for the operator to prepare or perform follow-up operations; (3) by determining the movement time used to simulate the movement trajectory of the X-ray camera gantry, the model motion trajectory of the X-ray camera gantry can be synchronized with the motion trajectory of the physical structure of the X-ray camera gantry, and the simulation can be made smoother; (4) by controlling the motion of the model in the display device to control the motion of the photographic equipment, the interaction between the medical diagnosis and treatment equipment and the user can be further improved, which is convenient for users. It should be noted that different embodiments may have different beneficial effects, and in different embodiments, the possible beneficial effects may be any one or a combination of the above, or any other possible beneficial effects.
  • the embodiments of the present application provide a method for determining parameters of a medical imaging device.
  • the method 1300 may be performed by an apparatus for determining parameters of medical imaging equipment, the apparatus for determining parameters for medical imaging equipment may be implemented by software and/or hardware, and the apparatus for determining parameters for medical imaging equipment may be configured on a computing device, and specifically includes the following steps:
  • the target object may be an object to be image-scanned, for example, a person or an animal.
  • the sign information can be the basic sign information of the target object, for example, it can be, but is not limited to, the target object's body temperature information, blood pressure information, blood lipid information, respiration information, pulse information, eye sign information, hand sign information, leg sign information, or head sign information, etc.
  • the sign information of the target object may be obtained with professional instruments according to certain manifestations of the target object.
  • acquiring the physical information of the target object may be achieved by using a sensor.
  • the sensor here may be a camera, a temperature sensor, a heartbeat sensor, or a breathing sensor.
  • the physician obtains the temperature information of the patient based on the fever symptoms of the patient and the temperature sensor.
  • the physical information of the target object can be automatically acquired based on the corresponding sensor.
  • the physical sign information of the target object can be automatically acquired, so that the scanning parameters and/or image processing parameters of the medical imaging device can be subsequently determined based on the physical sign information.
  • the abnormal sign information may be sign information that is inconsistent with standard sign information, or may be sign information that is inconsistent with normal sign information of the target object, or the like.
  • the analysis of the sign information to determine the abnormal sign information may specifically be: inputting the sign information into a trained sign recognition model and obtaining the abnormal sign information output by the sign recognition model; or comparing the sign information with normal sign parameters to determine the abnormal sign information.
  • the sign recognition model may be a model that analyzes the input sign information and outputs abnormal sign information.
  • it can be a support vector machine, a fully convolutional network (Fully Convolutional Networks, FCN), a U-net neural network, a two-dimensional convolutional neural network (CNN-2d), a feature pyramid network (Feature Pyramid Networks, FPN), and the like.
  • the sign recognition model is obtained by training based on historical sign information.
  • the acquired sign information of the target object is input into the trained sign recognition model, and the model can output the abnormal sign information of the target object. In this way, the abnormal sign information of the target object is obtained.
  • the normal state sign information may be the sign information of the target subject in a normal state. For example, for a target object whose body temperature is normally 36-36.4°, if the body temperature of the target object is measured to be 37°, the body temperature at this time is determined to be abnormal sign information.
  • the analyzing the sign information to determine abnormal sign information may further specifically include: comparing the sign information with standard sign information corresponding to the type of the target object, and determining abnormal sign information that does not meet the standard sign information.
  • the type of the target object may be determined according to basic human body information and/or historical medical records of the target object, where the basic human body information at least includes information such as gender, age, weight, and height.
  • the standard sign information may be the sign information specified for different genders, ages, weights, and heights, for example, by national standards. For example, a man aged 45-55 years with a height of 170-175 cm has a standard body temperature of 36-36.4°.
  • For example, if a target object is a 50-year-old male with a height of 172 cm, and the standard sign information for the body temperature of a male aged 45-55 years with a height of 170-175 cm is 36-36.4°, then if the body temperature of the target object is measured to be 37°, the body temperature sign information is abnormal sign information.
  • the historical medical record may be the historical medical record information of the target subject, for example, the target subject suffers from diseases such as hypertension all the year round. This may cause the sign information of the target object to be inconsistent with the normal standard sign information, but this does not mean that the sign information of the target object is abnormal.
  • the standard sign information for body temperature is 36-36.4°.
  • a patient is a 50-year-old male with hypertension who is 172 cm tall, but the patient's body temperature is measured to be 35.7°.
  • considering the historical medical record, this body temperature is normal for the patient, so it is not abnormal sign information.
  • the abnormal sign information in the sign information of the target object is determined through the above three methods, so that the abnormal sign information of the target object can be quickly determined, and the diagnosis efficiency is improved.
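  • A minimal sketch of the comparison with standard sign information described above, using an illustrative table keyed by sex and age range; the range values, the record format, and the function names are assumptions taken from the example in the text rather than clinical reference data.

```python
# Illustrative standard sign information: (sex, age range) -> {sign: (low, high)}.
STANDARD_SIGNS = {
    ("male", (45, 55)): {"body_temperature": (36.0, 36.4)},
}

def find_abnormal_signs(sign_info, sex, age):
    """Return the signs whose measured values fall outside the standard range
    for the target object's type (here determined by sex and age)."""
    for (std_sex, (age_lo, age_hi)), ranges in STANDARD_SIGNS.items():
        if sex == std_sex and age_lo <= age <= age_hi:
            return {name: value for name, value in sign_info.items()
                    if name in ranges
                    and not (ranges[name][0] <= value <= ranges[name][1])}
    return {}

# Example from the text: a 50-year-old male with a measured body temperature of 37 degrees.
print(find_abnormal_signs({"body_temperature": 37.0}, sex="male", age=50))
# {'body_temperature': 37.0} -> abnormal sign information
```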
  • After acquiring the sign information of the target object, the sign information can be analyzed to determine the abnormal sign information of the target object. In this way, the abnormal sign information of the target object can be quickly determined, so that the scanning parameters and/or image processing parameters of the medical imaging device can be adjusted in a targeted manner based on the abnormal sign information.
  • the scan parameters may be the parameters used to perform an image scan of the target object, for example, the parameters of a magnetic resonance scan or a computed tomography scan of the target object. Specific examples can be: scan voltage, scan current, scan field of view, number of scan slices, or slice thickness, etc.
  • the voltage parameter can be reduced, and the current parameter can be increased.
  • the image processing parameters may be parameters used by the scanning algorithm to process the region of interest of the target object.
  • the region of interest here may be a region where the target object determined by the abnormal sign information may have abnormality. For example, if the body temperature of the target subject is too high, the lungs of the target subject may have abnormalities, and the lungs are the region of interest of the target subject.
  • For example, if the target object is to undergo a magnetic resonance scan and the abnormal sign information is determined to be abnormal body temperature, such as excessive body temperature, targeted lung image processing can be performed; for example, the image processing parameters of the lung can be adjusted, such as the contrast and equalization of lung soft tissue and bone tissue. Specifically, the equalization intensity can be reduced and the contrast enhancement intensity can be increased to make the lung texture of the target object clearer, so that a clearer and more targeted scan image can be obtained.
  • the medical imaging equipment here may include, but is not limited to, X-ray imaging equipment, MR equipment, CT equipment, PET equipment, ultrasound equipment or DSA equipment, or multimodal imaging equipment.
  • the scan voltage can be reduced and the scan current can be increased in the scan parameters; the image balance can be reduced and the image contrast can be increased in the image processing parameters.
  • the scan voltage can be reduced and the scan current can be increased in the scan parameters; the image balance can be increased and the image contrast can be reduced in the image processing parameters.
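  • To make the adjustment logic above concrete, the following hypothetical Python sketch maps an abnormal sign to scan-parameter and image-processing-parameter deltas. The table entries, delta values and function names are illustrative assumptions for this sketch, not values taken from the disclosure.

```python
# Hypothetical mapping from an abnormal sign to parameter adjustments;
# the concrete deltas are illustrative, not values from the disclosure.
ADJUSTMENT_TABLE = {
    # abnormal sign -> (scan parameter changes, image processing changes)
    "high_body_temperature": (
        {"scan_voltage_kv": -10, "scan_current_ma": +50},
        {"equalization": -0.2, "contrast_enhancement": +0.3},
    ),
    "shortness_of_breath": (
        {"scan_voltage_kv": -5, "scan_current_ma": +20},
        {"equalization": -0.1, "contrast_enhancement": +0.2},
    ),
}

def adapt_parameters(scan_params, image_params, abnormal_signs):
    """Apply the per-sign deltas to copies of the current parameter sets."""
    scan_params = dict(scan_params)
    image_params = dict(image_params)
    for sign in abnormal_signs:
        scan_delta, image_delta = ADJUSTMENT_TABLE.get(sign, ({}, {}))
        for key, delta in scan_delta.items():
            scan_params[key] = scan_params.get(key, 0) + delta
        for key, delta in image_delta.items():
            image_params[key] = image_params.get(key, 0) + delta
    return scan_params, image_params

if __name__ == "__main__":
    scan, image = adapt_parameters(
        {"scan_voltage_kv": 120, "scan_current_ma": 200},
        {"equalization": 1.0, "contrast_enhancement": 1.0},
        ["high_body_temperature"],
    )
    print(scan)   # {'scan_voltage_kv': 110, 'scan_current_ma': 250}
    print(image)  # {'equalization': 0.8, 'contrast_enhancement': 1.3}
```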
  • the adaptive determination of the scanning parameters and/or image processing parameters of the medical imaging device may be adjusting the scanning parameters and/or image processing parameters of the medical imaging device according to the condition of the target object.
  • the term "adaptively" here refers to the situation in which the abnormal sign information is matched with the scanning parameters and/or the image processing parameters.
  • the scanning parameters and/or image processing parameters of the target object can be obtained by inputting the abnormal sign information of the target object into a trained neural network model or by looking them up in a corresponding database. In this way, the scanning parameters and/or image processing parameters of the target object can be obtained quickly, thereby improving diagnosis efficiency.
  • In this way, the scanning parameters and/or image processing parameters of the target object can be determined automatically. This solves the problems in the prior art that physicians are required to continuously adjust scanning parameters and/or image processing parameters, resulting in poor image quality, repeated scanning, low efficiency and compromised diagnosis, while also avoiding exposure of the target object to an excessive radiation dose.
  • In the embodiments of the present application, the abnormal sign information is determined by acquiring the sign information of the target object and analyzing it, and the scan parameters and/or image processing parameters of the medical imaging device are then determined based on the abnormal sign information. In this way, the scanning parameters of the medical imaging device can be determined in time based on the sign information of the target object, which improves diagnosis efficiency.
  • FIG. 14 is a flowchart of a method for determining parameters of a medical imaging device according to some embodiments of the present application.
  • the embodiments of the present application may be combined with various optional solutions in the foregoing embodiments.
  • the adaptively determining the scanning parameters and/or image processing parameters of the medical imaging device based on the abnormal sign information of the target object includes: determining the disease type of the target object based on the abnormal sign information of the target object; and adaptively determining the scanning parameters and/or image processing parameters of the medical imaging device based on the disease type.
  • the method 1400 of the embodiment of the present application specifically includes the following steps:
  • the disease type may be a disease that the target object may have, as determined from the abnormal sign information.
  • For example, if the body temperature of the target subject is too high, the lungs of the target subject may be abnormal, and it is determined that the target subject may have a lung disease such as pneumonia.
  • If the target subject is short of breath, the respiratory tract of the target subject may be abnormal, and it is determined that the target subject may have a disease such as asthma.
  • If the whites of the eyes of the target object are yellow, the eyes of the target object may be abnormal, and it is determined that the target object may have a disease such as liver disease.
  • the disease type of the target object can be directly determined, so that scanning parameters and/or image processing parameters can be adjusted subsequently based on the disease type.
  • the scanning parameters and/or image processing parameters of the target object can be adaptively adjusted in a targeted manner, so as to obtain an accurate and targeted scanned image.
  • In this way, the scanning parameters can be determined in time based on the sign information of the target object, thereby improving diagnosis efficiency. This solves the problems in the prior art that physicians are required to continuously adjust scanning parameters and/or image processing parameters, resulting in poor image quality, repeated scanning, low efficiency and compromised diagnosis.
  • the adaptively determining the scanning parameters and/or image processing parameters of the medical imaging device based on the disease type may specifically include: determining the abnormality level of the abnormal sign information; and adaptively determining the scanning parameters and/or image processing parameters of the medical imaging device based on the disease type and the abnormality level.
  • the abnormality level may be the degree of severity of the abnormal sign information.
  • For example, the body temperature of a target object is 38°C and the standard body temperature is 36-37°C, where 37-37.5°C is defined as mild fever, 37.6-38°C as moderate fever, and above 38°C as severe fever. It is then determined that the abnormality level of the target object is moderate fever.
  • If it is further determined that the target object may have pneumonia, the scanning parameters and/or image processing parameters of the target object are adjusted according to the determined disease type (pneumonia) and the determined abnormality level (moderate fever).
  • Note that the scanning parameters and/or image processing parameters differ between different disease types.
  • The scanning parameters and/or image processing parameters may also differ between different abnormality levels. Therefore, the scanning parameters and/or image processing parameters of the medical imaging device can be determined in a targeted manner according to the disease type and the abnormality level, so as to obtain accurate and targeted scan images from which doctors can make a better diagnosis. A sketch of this two-key lookup is given below.
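  • The grading of abnormal sign information into levels, and the use of (disease type, abnormality level) pairs to select parameters, could be expressed along the following lines. This is a hedged sketch: the fever thresholds mirror the example above, while the function name and the parameter entries in `PARAMETERS_BY_CASE` are illustrative assumptions rather than values from the disclosure.

```python
# Illustrative grading of body temperature into abnormality levels,
# following the fever thresholds in the example above.
def fever_level(temperature_c):
    if temperature_c > 38.0:
        return "severe"
    if temperature_c >= 37.6:
        return "moderate"
    if temperature_c >= 37.0:
        return "mild"
    return "normal"

# Hypothetical lookup from (disease type, abnormality level) to parameters;
# the entries are illustrative, not values from the disclosure.
PARAMETERS_BY_CASE = {
    ("pneumonia", "moderate"): {
        "scan_voltage_kv": 110,
        "scan_current_ma": 250,
        "equalization": 0.8,
        "contrast_enhancement": 1.3,
    },
    ("pneumonia", "severe"): {
        "scan_voltage_kv": 100,
        "scan_current_ma": 300,
        "equalization": 0.7,
        "contrast_enhancement": 1.4,
    },
}

if __name__ == "__main__":
    level = fever_level(38.0)                     # -> "moderate"
    params = PARAMETERS_BY_CASE[("pneumonia", level)]
    print(level, params)
```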
  • In the embodiments of the present application, the disease type of the target object is determined based on the abnormal sign information of the target object, and the scanning parameters and/or image processing parameters of the medical imaging device are adaptively determined based on the disease type. Accurate and targeted scan images can thus be obtained, and the scanning parameters can be determined in time based on the sign information of the target object, thereby improving diagnosis efficiency. This solves the problems in the prior art that physicians are required to continuously adjust scanning parameters and/or image processing parameters, resulting in poor image quality, repeated scanning, low efficiency and compromised diagnosis.
  • FIG. 15 is a flowchart of a method for determining parameters of a medical imaging device according to some embodiments of the present application, and the embodiments of the present application may be combined with various optional solutions in the foregoing embodiments.
  • the method further includes: determining a target scanning protocol of the target object based on the scanning parameters; and scanning the target object based on the target scanning protocol and the image processing parameters to obtain a scanned image of the target object.
  • the method 1500 of the embodiment of the present application specifically includes the following steps:
  • the target scanning protocol may be a scanning protocol that is ultimately used for image scanning of the target object. After the scanning parameters of the target object are determined, the target scanning protocol of the target object can be generated.
  • S1560 Scan the target object based on the target scanning protocol and the image processing parameters to obtain a scanned image of the target object.
  • In this way, an image scan of the target object can be performed such that a targeted, high-quality and effective scanned image that clearly reflects the abnormality of the target object is obtained.
  • A whole-body scan may be performed on the target object, with the abnormal parts of the target object highlighted. For example, if it is determined that a target subject may have pneumonia, then when the target subject is scanned based on the target scanning protocol and the image processing parameters, the whole body may be scanned while clear lung texture is highlighted in the image. It is also possible to scan only the abnormal parts of the target object; for example, after it is determined that a target object may have pneumonia, only the lungs of the target object may be scanned, highlighting clear lung texture. Whether a whole-body scan is performed or only the abnormal parts are scanned in a targeted manner can be set according to the user's needs, which is not limited here.
  • In the embodiments of the present application, the target scanning protocol of the target object is determined based on the scanning parameters, and the target object is scanned based on the target scanning protocol and the image processing parameters to obtain a scanned image of the target object, i.e., a high-quality scanned image that clearly reflects the abnormality of the target object. A rough sketch of how such a protocol might be assembled is given below.
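  • As a rough sketch of how a target scanning protocol might be assembled from the determined scan parameters and the user's preference for whole-body versus region-only scanning. All structure names, fields and defaults here are assumptions made for illustration, not a format used by any particular scanner.

```python
from dataclasses import dataclass, field

# Illustrative data structure for a target scanning protocol; the fields and
# defaults are assumptions, not the format used by any particular scanner.
@dataclass
class TargetScanProtocol:
    scan_params: dict
    scan_range: str                      # "whole_body" or a named region, e.g. "lungs"
    highlighted_regions: list = field(default_factory=list)

def build_target_protocol(scan_params, region_of_interest, whole_body=True):
    """Build a protocol that either scans the whole body while highlighting
    the region of interest, or scans only the region of interest."""
    if whole_body:
        return TargetScanProtocol(scan_params, "whole_body", [region_of_interest])
    return TargetScanProtocol(scan_params, region_of_interest, [region_of_interest])

if __name__ == "__main__":
    params = {"scan_voltage_kv": 110, "scan_current_ma": 250}
    print(build_target_protocol(params, "lungs", whole_body=True))
    print(build_target_protocol(params, "lungs", whole_body=False))
```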
  • FIG. 16 is a flowchart of an imaging method of a medical imaging device according to some embodiments of the present application. This embodiment can be applied to the case where the scanning parameters and image processing parameters of the medical imaging device are determined based on the sign information of the target object and the target object is then scanned and imaged. The method can be performed by an imaging apparatus of a medical imaging device; the imaging apparatus can be implemented in software and/or hardware and can be configured on a computing device.
  • the explanations of terms in the embodiments of the present application that are the same as or corresponding to the above embodiments are not repeated here.
  • the imaging method 1600 of the medical imaging device specifically includes the following steps:
  • S1640: Scan the target object and obtain a medical image, wherein the scan is performed based on the adjusted scanning parameters and/or the medical image is obtained by processing with the adjusted image processing parameters.
  • In the embodiments of the present application, the scanning parameters of the medical imaging equipment for the target object and the abnormal sign information of the target object are acquired; based on the abnormal sign information, the scanning parameters and/or image processing parameters of the medical imaging equipment are adaptively adjusted; and the target object is scanned to obtain a medical image, wherein the scan is performed based on the adjusted scanning parameters and/or the medical image is obtained by processing with the adjusted image processing parameters. In this way, the scanning parameters of the medical imaging equipment can be adjusted in time based on the sign information of the target object, the target object can be scanned with the adjusted scanning parameters, and the scanned image can then be processed with the adjusted image processing parameters to obtain a better and clearer scanned image, thereby improving diagnosis efficiency.
  • FIG. 17 is a schematic structural diagram of an apparatus for determining parameters of medical imaging equipment according to some embodiments of the present application. As shown in FIG. 17 , the apparatus 1700 includes: a sign information acquisition module 1710 , an abnormal sign information determination module 1720 and a parameter determination module 1730 .
  • the physical sign information acquisition module 1710 is used to acquire the physical sign information of the target object
  • Abnormal sign information determination module 1720 configured to analyze the sign information to determine abnormal sign information
  • a parameter determination module 1730 configured to adaptively determine scanning parameters and/or image processing parameters of the medical imaging device based on the abnormal sign information of the target object.
  • the scanning parameters include: scanning voltage, scanning current, scanning field of view, scanning layer number or scanning layer thickness.
  • the image processing parameter includes: image contrast or image equalization.
  • the acquisition of the sign information of the target object is achieved by using a sensor.
  • the sensor includes a camera, a temperature sensor, a heartbeat sensor or a respiration sensor.
  • the parameter determination module 1730 includes:
  • a disease type determination unit configured to determine the disease type of the target object based on the abnormal sign information of the target object
  • a parameter determination unit configured to adaptively determine scanning parameters and/or image processing parameters of the target object based on the disease type.
  • the parameter determination unit includes:
  • an abnormality level determination subunit used for determining the abnormality level of the abnormal sign information
  • a parameter determination subunit for adaptively determining scan parameters and/or image processing parameters of the medical imaging device based on the disease type and the abnormality level.
  • the abnormal sign information determination module 1720 includes:
  • a first abnormal sign information determination unit configured to input the sign information into the trained sign recognition model, and obtain abnormal sign information output by the sign recognition model
  • the third abnormal sign information determining unit is configured to compare the sign information with normal sign parameters to determine abnormal sign information.
  • the abnormal sign information determination module 1720 further includes:
  • a second abnormal sign information determination unit configured to compare the sign information with standard sign information corresponding to the type of the target object, and determine abnormal sign information that does not conform to the standard sign information; wherein the target object The type is determined according to the basic human body information and/or historical medical records of the target object, and the basic human body information at least includes: gender, age, weight, and height.
  • the device further includes:
  • a target scan protocol determination module configured to determine the target scan protocol of the target object based on the scan parameters
  • a scanned image acquisition module configured to scan the target object based on the target scanning protocol and the image processing parameters to obtain a scanned image of the target object.
  • the image processing parameters are parameters for processing the scanning algorithm of the region of interest of the target object.
  • the apparatus for determining parameters of medical imaging equipment provided in the embodiments of the present application can execute the method for determining parameters of medical imaging equipment provided in any embodiment of the present application, and has functional modules and beneficial effects corresponding to the execution methods.
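  • For readers who prefer code over block diagrams, the module decomposition of apparatus 1700 might be organized roughly as follows. The class and method names are illustrative assumptions and the bodies are placeholders; this is not the patented implementation.

```python
# Illustrative skeleton mirroring the module decomposition of apparatus 1700;
# the bodies are placeholders, not the patented implementation.
class SignInfoAcquisitionModule:            # cf. module 1710
    def acquire(self, sensors):
        """Read sign information (temperature, heartbeat, respiration, ...) from sensors."""
        return {name: sensor.read() for name, sensor in sensors.items()}

class AbnormalSignDeterminationModule:      # cf. module 1720
    def analyze(self, sign_info):
        """Return the abnormal subset of the sign information, e.g. via a trained
        recognition model or by comparison with standard/normal ranges."""
        raise NotImplementedError

class ParameterDeterminationModule:         # cf. module 1730
    def determine(self, abnormal_signs):
        """Adaptively determine scan parameters and/or image processing parameters."""
        raise NotImplementedError

class ParameterDeterminationApparatus:      # cf. apparatus 1700
    def __init__(self, acquisition, determination, parameters):
        self.acquisition = acquisition
        self.determination = determination
        self.parameters = parameters

    def run(self, sensors):
        signs = self.acquisition.acquire(sensors)
        abnormal = self.determination.analyze(signs)
        return self.parameters.determine(abnormal)
```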
  • FIG. 18 is a schematic structural diagram of an imaging apparatus of a medical imaging device according to some embodiments of the present application.
  • the apparatus 1800 includes: a scan parameter acquisition module 1810, an abnormal sign information acquisition module 1820, a parameter adjustment module 1830, and a scan module 1840.
  • the scan parameter acquisition module 1810 is used to acquire scan parameters of the medical imaging device for the target object
  • Abnormal sign information acquisition module 1820 configured to acquire abnormal sign information of the target object
  • a parameter adjustment module 1830 configured to adaptively adjust scanning parameters and/or image processing parameters of the medical imaging device based on the abnormal sign information
  • Scanning module 1840 configured to scan the target object and obtain a medical image, wherein the scan is performed based on the adjusted scan parameters and/or the medical image is obtained by processing with the adjusted image processing parameters.
  • the imaging apparatus of the medical imaging device provided by the embodiment of the present application can execute the imaging method of the medical imaging device provided by any embodiment of the present application, and has corresponding functional modules and beneficial effects for executing the method.
  • FIG. 19 is a schematic structural diagram of a medical imaging device according to some embodiments of the present application. As shown in FIG. 19, the medical imaging device 1900 includes an imaging component 1910, a sensor 1920 and a controller 1930.
  • the imaging component 1910 which is used to scan the target object to obtain medical images
  • a sensor 1920 for acquiring abnormal sign information of the target object
  • a controller 1930, coupled to the imaging component and the sensor, which adaptively adjusts the scan parameters of the imaging component based on the abnormal sign information and/or adaptively adjusts the image processing parameters of the medical images based on the abnormal sign information.
  • the sensor includes a camera, a temperature sensor, a heartbeat or pulse sensor, or a respiration sensor.
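  • One way to picture the coupling between the sensor 1920, the controller 1930 and the imaging component 1910 is the following hypothetical control sketch. The interfaces of `imaging_component` and `sensor`, and the injected `adapt_parameters` function (e.g. a mapping like the earlier sketch), are assumptions made purely for illustration.

```python
# Hypothetical controller loop for the device of FIG. 19; the interfaces of
# `imaging_component` and `sensor` are assumptions for illustration only.
class Controller:
    def __init__(self, imaging_component, sensor, adapt_parameters):
        self.imaging_component = imaging_component
        self.sensor = sensor
        self.adapt_parameters = adapt_parameters   # e.g. the adapt_parameters() sketch above

    def acquire_image(self):
        # Read abnormal sign information from the sensor, adapt the parameters,
        # then scan and post-process with the adjusted parameters.
        abnormal_signs = self.sensor.read_abnormal_signs()
        scan_params, image_params = self.adapt_parameters(
            self.imaging_component.default_scan_params(),
            self.imaging_component.default_image_params(),
            abnormal_signs,
        )
        raw = self.imaging_component.scan(scan_params)
        return self.imaging_component.process(raw, image_params)
```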
  • FIG. 20 is a schematic structural diagram of a device according to some embodiments of the present application.
  • the device 2000 includes a processor 2010, a memory 2020, an input device 2030, and an output device 2040; the number of processors 2010 in the device can be one or more, and one processor 2010 is taken as an example here. The processor 2010, the memory 2020, the input device 2030 and the output device 2040 in the device can be connected by a bus or in other ways; connection by a bus is taken as an example.
  • the memory 2020 can be used to store software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the method for determining parameters of medical imaging equipment in the embodiments of the present application (for example, the sign information acquisition module 1710, the abnormal sign information determination module 1720 and the parameter determination module 1730) and/or the program instructions/modules corresponding to the imaging method of the medical imaging device in the embodiments of the present application (for example, the scan parameter acquisition module 1810, the abnormal sign information acquisition module 1820, the parameter adjustment module 1830, and the scan module 1840).
  • the processor 2010 executes various functional applications and data processing of the device by running the software programs, instructions and modules stored in the memory 2020, that is, it implements the above-mentioned parameter determination method and/or imaging method.
  • the memory 2020 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the terminal, and the like. Additionally, memory 2020 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, memory 2020 may further include memory located remotely from processor 2010, which may be connected to the device through a network. Examples of such networks include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
  • the input device 2030 may be used to receive input numerical or character information, and generate key signal input related to user settings and function control of the device.
  • the output device 2040 may include a display device such as a display screen.
  • the ninth embodiment of the present application further provides a storage medium containing computer-executable instructions, where the computer-executable instructions, when executed by a computer processor, are used to execute a method for determining parameters of a medical imaging device and/or an imaging method of a medical imaging device.
  • In the storage medium containing computer-executable instructions provided by the embodiments of the present application, the computer-executable instructions are not limited to the above-mentioned method operations, and can also perform related operations in the method for determining parameters of a medical imaging device and/or the imaging method of a medical imaging device provided by any embodiment of the present application.
  • Aspects of the disclosure herein may be illustrated and described in any of a number of patentable classes or contexts, including any new and useful process, machine, manufacture or composition of matter, or any new and useful improvement thereof. Accordingly, various aspects of the present application may be implemented entirely in hardware, entirely in software (including firmware, resident software, microcode, etc.), or in a combination of software and hardware, which may generally be referred to herein as a "unit", "module" or "system". Furthermore, aspects of the present application may take the form of a computer program product having computer-readable program code embodied in one or more computer-readable media.
  • a computer-readable signal medium may include a propagated data signal having computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such propagating signals may take a variety of forms, including electromagnetic, optical, etc., or any suitable combination.
  • a computer-readable signal medium can be any computer-readable medium, other than a computer-readable storage medium, that can communicate, propagate, or transport a program for use by or in connection with the instruction execution system, apparatus, or device.
  • Program code embodied on a computer-readable signal medium may be transmitted using any appropriate medium, including wireless, wireline, fiber optic cable, RF, or the like, or any suitable combination of the foregoing.
  • Computer program code for carrying out operations of various aspects of the present application may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++ and the like, as well as conventional procedural programming languages such as C.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • the remote computer can be connected to the user's computer over any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection can be established with an external computer (for example, through the Internet using an Internet service provider), in a cloud computing environment, or as a service such as software as a service (SaaS).


Abstract

本说明书实施例公开了一种医学设备控制方法和系统。所述医学设备控制方法包括:获取医学设备的相关信息和/或目标对象的相关信息;基于医学设备的相关信息和/或目标对象的相关信息,控制医学设备。

Description

一种医学设备控制方法及系统
交叉引用
本申请要求2020年08月03日提交的中国专利申请202010767758.3、2020年10月18日提交的中国专利申请202011114737.8和2020年10月18日提交的中国专利申请202011114024.1的优先权,其内容全部通过引用并入本文。
技术领域
本申请一般地涉及医学设备领域,更具体地,涉及医学设备控制方法及系统。
背景技术
随着科技水平的发展,人类对于医疗方面的需求越来越大。医学设备可以是执行医学诊疗或研究任务的自动化设备。医学设备可以包括:用于对目标对象(例如,病人)的生理状态进行诊断的诊断设备(例如,X射线诊断设备、超声诊断设备、功能检查设备、内窥镜检查设备、核医学设备、实验诊断设备及病理诊断装备等)、用于对目标对象(例如,病人)进行治疗的治疗设备(例如,手术床、接触治疗机、浅层治疗机、深度治疗机、半导体冷刀、气体冷刀、固体冷刀、心脏除颤起搏设备、人工呼吸机、超声雾化器等)及用于辅助诊断设备和/或治疗设备对目标对象(例如,病人)进行诊断和/或治疗的辅助设备类(例如,消毒灭菌设备、制冷设备、中心吸引及供氧系统、空调设备、制药机械设备、血库设备、医用数据处理设备、医用录像摄影设备等)。
发明内容
本申请实施例之一提供一种医学设备控制方法,包括:获取医学设备的相关信息和/或目标对象的相关信息;基于所述医学设备的相关信息和/或目标对象的相关信息,控制所述医学设备。
在一些实施例中,所述医学设备包括扫描设备;所述获取医学设备的相关信息和/或目标对象的相关信息包括:获取所述扫描设备中一个或多个电离室的位置信息,所述扫描设备用于扫描所述目标对象;所述基于所述医学设备的相关信息和/或目标对象的相关信息,控制所述医学设备包括:基于所述一个或多个电离室的位置信息,确定所述一个或多个电离室中至少一个电离室的探测区域;确定投影设备的投影数据,所述投影数据包括对应于所述至少一个电离室的探测区域的图像数据;以及控制所述投影设备将所述投影数据投影到所述目标对象上。
在一些实施例中,所述投影数据还包括对应于所述目标对象的待扫描的感兴趣区域的图像数据。
在一些实施例中,所述方法进一步包括:获取所述目标对象的参考图像,所述参考图像由摄像头在所述投影设备将所述投影数据投影到待扫描的目标对象上之后拍摄;识别参考图像中的第一区域,所述第一区域对应于所述目标对象的待扫描的感兴趣区域;识别参考图像中的第二区域,所述第二区域对应于投影到所述目标对象上的至少一个电离室的探测区域;以及基于所述第一区域和第二区域,确定所述至少一个电离室中是否包括一个或多个候选电离室,其中所述一个或多个候选电离室的探测区域被所述目标对象的待扫描的感兴趣区域覆盖。
在一些实施例中,所述方法进一步包括:响应于确定所述至少一个电离室中不包括任何候选电离室,使终端设备生成提示信息;响应于确定所述至少一个电离室中不包括任何候选电离室,使所述至少一个电离室中的一个或多个参考电离室相对于所述目标对象的感兴趣区域进行移动。
在一些实施例中,所述方法进一步包括:获取从所述至少一个电离室中选择的一个或多个目标电离室的识别信息;基于所述识别信息,调整所述投影数据,使所述投影数据中对应于所述一个或多个目标电离室的探测区域的图像数据的第一特征值与对应于其他电离室的探测区域的图像数据的第二特征值不同,其中,所述第一特征值和所述第二特征值对应于相同的图像特征。
在一些实施例中,所述获取医学设备的相关信息和/或目标对象的相关信息包括:获取所述医学设备的虚拟模型,其中,所述医学设备包括至少一个可运动的第一部件,相应地,所述虚拟模型包括模拟所述第一部件的第二部件,所述第一部件所在的设备坐标系与所述第二部件所在的模型坐标系具有映射关系;以及获取所述医学设备的第一部件和所述虚拟模型的第二部件其中之一者的当前运动信息;所述基于所述医学设备的相关信息和/或目标对象的相关信息,控制所述医学设备包括:控制所述医学诊疗设备的第一部件和虚拟模型的第二部件的其中另一者,与获取运动指令的所述医学诊疗设备的第一部件和虚拟模型的第二部件的其中之一者执行相同的运动。
在一些实施例中,所述获取所述医学设备的第一部件和所述虚拟模型的第二部件其中之一者的当前运动信息,包括:获取所述医学诊疗设备的第一部件的当前运动信息;其中,在获取所述医学诊疗设备的第一部件的当前运动信息前,所述第一部件收到将执行所述当前运动的运动控制信息。
在一些实施例中,所述获取所述医学设备的第一部件和所述虚拟模型的第二部件其中 之一者的当前运动信息包括:获取所述虚拟模型的第二部件的当前运动信息;其中,在获取所述虚拟模型的第二部件的当前运动信息前,所述第二部件收到将执行所述当前运动的运动控制信息。
在一些实施例中,所述虚拟模型在显示界面上显示,且所述第二部件在当前运动下的实时位置信息也在所述显示界面上显示并随着运动更新。
在一些实施例中,所述获取医学设备的相关信息和/或目标对象的相关信息,包括:基于X射线摄影机架的实体结构获取所述X射线摄影机架的模型,并基于所述X射线摄影机架的实体结构的运动轨迹,采用所述X射线摄影机架的模型进行相对应的模型运动轨迹的模拟;以及获取当前X射线摄影机架的实体结构的运动指令,所述运动指令包括所述当前X射线摄影机架的其中一个部位需要运动到达的目标位置和相关的运动时间信息;所述基于所述医学设备的相关信息和/或目标对象的相关信息,控制所述医学设备,包括:所述X射线摄影机架的实体结构基于所述运动指令到达目标位置,所述X射线摄影机架的模型基于所述运动指令,基于所述运动时间信息同步进行模型运动轨迹的模拟;以及将所述模型运动轨迹的模拟显示在显示设备上。
在一些实施例中,所述模型通过以下方式获得:获取所述X射线摄影机架的图像;
提取所述图像的特征点;基于所述特征点进行重建,获得所述模型。
在一些实施例中,所述医学设备为医疗成像设备;所述获取医学设备的相关信息和/或目标对象的相关信息包括:获取目标对象的体征信息;所述基于所述医学设备的相关信息和/或目标对象的相关信息,控制所述医学设备包括:对所述体征信息进行分析,确定异常体征信息;以及基于所述目标对象的所述异常体征信息,适应性地确定医疗成像设备的扫描参数及/或图像处理参数。
在一些实施例中,所述基于所述目标对象的所述异常体征信息,适应性地确定医疗成像设备的扫描参数及/或图像处理参数,包括:基于所述目标对象的所述异常体征信息,确定所述目标对象的疾病类型;基于所述疾病类型,适应性地确定医疗成像设备的扫描参数及/或图像处理参数。
在一些实施例中,所述基于所述疾病类型,适应性地确定医疗成像设备的扫描参数及/或图像处理参数,包括:确定所述异常体征信息的异常等级;基于所述疾病类型和所述异常等级,适应性地确定医疗成像设备的扫描参数及/或图像处理参数。
在一些实施例中,所述对所述体征信息的进行分析,确定异常体征信息,包括:将所述体征信息输入至被训练的体征识别模型中,获取所述体征识别模型输出的异常体征信息;或者,将所述体征信息与常态体征参数进行比对,确定异常体征信息。
在一些实施例中,所述对所述体征信息的进行分析,确定异常体征信息,还包括:将所述体征信息与所述目标对象的类型对应的标准体征信息进行比对,确定不符合所述标准体征信息的异常体征信息;其中,所述目标对象的类型根据所述目标对象的人体基础信息和/或历史病历确定,所述人体基础信息至少包括:性别、年龄、体重和身高。
在一些实施例中,所述医学设备为医疗成像设备;所述获取医学设备的相关信息和/或目标对象的相关信息,包括:获取针对目标对象的医疗成像设备的扫描参数;获取所述目标对象的异常体征信息;所述基于所述医学设备的相关信息和/或目标对象的相关信息,控制所述医学设备,包括:基于所述异常体征信息,适应性地调整所述医疗成像设备的扫描参数和/或图像处理参数;对所述目标对象进行扫描并获得医学图像,其中,所述扫描是基于所述调整后的扫描参数和/或所述医学图像是经过调整后的图像处理参数处理得到的。
本申请实施例之一提供一种医学设备控制系统,包括:信息获取模块,用于获取医学设备的相关信息和/或目标对象的相关信息;控制模块,用于基于所述医学设备的相关信息和/或目标对象的相关信息,控制所述医学设备。
本申请实施例之一提供一种计算机可读存储介质,所述存储介质存储计算机指令,当计算机读取存储介质中的计算机指令后,计算机执行如本申请任一实施例所述的医学设备控制方法。
本申请实施例之一提供一种医学设备控制装置,所述装置包括至少一个处理器以及至少一个存储器;所述至少一个存储器用于存储计算机指令;所述至少一个处理器用于执行所述计算机指令中的至少部分指令以实现如本申请任一实施例所述的医学设备控制方法。
本申请实施例之一提供一种标记电离室的探测区域的方法,所述方法包括:获取扫描设备中一个或多个电离室的位置信息,所述扫描设备用于扫描对象;基于所述一个或多个电离室的位置信息,确定所述一个或多个电离室中至少一个电离室的探测区域;确定投影设备的投影数据,所述投影数据包括对应于所述至少一个电离室的探测区域的图像数据;以及控制所述投影设备将所述投影数据投影到所述对象上。
在一些实施例中,所述投影数据还包括对应于所述对象的待扫描的感兴趣区域的图像数据。
在一些实施例中,所述方法进一步包括:获取所述对象的参考图像,所述参考图像由摄像头在所述投影设备将所述投影数据投影到待扫描的对象上之后拍摄;识别参考图像中的第一区域,所述第一区域对应于所述对象的待扫描的感兴趣区域;识别参考图像中的第二区域,所述第二区域对应于投影到所述对象上的至少一个电离室的探测区域;以及基于所述第一区域和第二区域,确定所述至少一个电离室中是否包括一个或多个候选电离室,其中所述 一个或多个候选电离室的探测区域被所述对象的待扫描的感兴趣区域覆盖。
在一些实施例中,所述方法进一步包括:响应于确定所述至少一个电离室中不包括任何候选电离室,使终端设备生成提示信息。
在一些实施例中,响应于确定所述至少一个电离室中不包括任何候选电离室,所述方法进一步包括:使所述至少一个电离室中的一个或多个参考电离室相对于所述对象的感兴趣区域进行移动。
在一些实施例中,所述方法进一步包括:响应于确定所述至少一个电离室中包括一个或多个候选电离室,从所述一个或多个候选电离室中选择一个或多个目标电离室,所述一个或多个目标电离室将会在扫描所述对象的过程中运行。
在一些实施例中,所述方法进一步包括:获取从所述至少一个电离室中选择的一个或多个目标电离室的识别信息;基于所述识别信息,调整所述投影数据,使所述投影数据中对应于所述一个或多个目标电离室的探测区域的图像数据的第一特征值与对应于其他电离室的探测区域的图像数据的第二特征值不同,其中,所述第一特征值和所述第二特征值对应于相同的图像特征。
本申请实施例之一提供一种用于标记电离室的探测区域的系统,包括:获取模块,用于获取扫描设备中一个或多个电离室的位置信息;探测区域确定模块,用于基于所述一个或多个电离室的位置信息,确定所述一个或多个电离室中至少一个电离室的探测区域;投影数据确定模块,用于确定投影设备的投影数据,所述投影数据包括对应于所述至少一个电离室的探测区域的图像数据;以及控制模块,用于控制所述投影设备将所述投影数据投影到待扫描的对象上。
本申请实施例之一提供一种计算机可读存储介质,所述存储介质存储计算机指令,当计算机读取所述存储介质中的计算机指令后,所述计算机执行如本申请任一实施例所述的方法。
本申请实施例之一提供一种用于标记电离室的探测区域的装置,所述装置包括用于标记电离室的探测区域的程序,所述程序实现如本申请任一实施例所述的方法。
本申请实施例之一提供一种医学诊疗设备的控制方法,所述方法包括:获取医学诊疗设备的虚拟模型,其中,所述医学诊疗设备包括至少一个可运动的第一部件,相应地,所述虚拟模型包括模拟所述第一部件的第二部件,所述第一部件所在的设备坐标系与所述第二部件所在的模型坐标系具有映射关系;获取所述医学诊疗设备的第一部件和所述虚拟模型的第二部件其中之一者的当前运动信息;所述医学诊疗设备的第一部件和虚拟模型的第二部件的其中另一者,与获取运动指令的所述医学诊疗设备的第一部件和虚拟模型的第二部件的其中 之一者执行相同的运动。
在一些实施例中,所述相同的运动包括同步运动。
在一些实施例中,所述医学诊疗设备包括X射线摄影系统。
在一些实施例中,所述第一部件包括X射线摄影系统的机架。
在一些实施例中,所述机架包括球管、探测器或者所述球管的支撑元件或者所述探测器的支撑元件。
在一些实施例中,获取所述医学诊疗设备的第一部件和所述虚拟模型的第二部件其中之一者的当前运动信息包括:获取所述医学诊疗设备的第一部件的当前运动信息;其中,在获取所述医学诊疗设备的第一部件的当前运动信息前,所述第一部件收到将执行所述当前运动的运动控制信息。
在一些实施例中,所述运动控制信息包括第一部件的自动运动的控制指令或者对第一部件的手动操作。
在一些实施例中,获取所述医学诊疗设备的第一部件和所述虚拟模型的第二部件其中之一者的当前运动信息包括:获取所述虚拟模型的第二部件的当前运动信息;其中,在获取所述虚拟模型的第二部件的当前运动信息前,所述第二部件收到将执行所述当前运动的运动控制信息。
在一些实施例中,所述运动控制信息由鼠标、键盘或者语音输入或者通过触控输入。
在一些实施例中,所述虚拟模型在显示界面上显示,且所述第二部件在当前运动下的实时位置信息也在所述显示界面上显示并随着运动更新。
在一些实施例中,所述显示界面为电脑的或者移动终端的或者公用的显示界面。
在一些实施例中,所述虚拟模型通过对所述X射线摄影系统的机架的数据进行建模获得。
在一些实施例中,所述虚拟模型通过以下方式获得:获取所述X射线摄影系统中机架的图像;提取所述图像的特征点;基于所述特征点进行重建,获得所述模型。
在一些实施例中,所述第一部件的其中一个部位运动时,所述显示界面上突出显示与所述第一部件的其中一个部位相对应的所述第二部件的一部分的运动轨迹。
本申请实施例之一提供一种医学诊疗设备的控制方法,所述方法包括:基于X射线摄影机架的实体结构获取所述X射线摄影机架的模型,并基于所述X射线摄影机架的实体结构的运动轨迹,采用所述X射线摄影机架的模型进行相对应的模型运动轨迹的模拟;获取当前X射线摄影机架的实体结构的运动指令,所述运动指令包括所述当前X射线摄影机架的其中一个部位需要运动到达的目标位置和相关的运动时间信息;所述X射线摄影机架的实体结 构基于所述运动指令到达目标位置,所述X射线摄影机架的模型基于所述运动指令,基于所述运动时间信息同步进行模型运动轨迹的模拟;以及将所述模型运动轨迹的模拟显示在显示设备上。
在一些实施例中,所述模型通过对所述X射线摄影机架的数据进行建模获得。
在一些实施例中,所述模型通过以下方式获得:获取所述X射线摄影机架的图像;提取所述图像的特征点;基于所述特征点进行重建,获得所述模型。
在一些实施例中,所述X射线摄影机架的其中一个部位运动时,所述将所述模型运动轨迹的模拟显示在显示设备上包括:在所述显示设备上突出显示与所述X射线摄影机架的其中一个部位相对应的所述X射线摄影机架的模型的一部分的运动轨迹。
在一些实施例中,所述显示设备设置在所述X射线摄影机架的机房外,所述方法还包括:通过所述显示设备获取交互数据;基于所述交互数据控制所述当前X射线摄影机架的实体结构进行运动。
在一些实施例中,所述显示设备包括触控屏,所述通过所述显示设备获取交互数据,包括:在所述触控屏上通过触摸控制所述X射线摄影机架的模型,生成所述交互数据。
本申请实施例之一提供一种医学诊疗设备的控制系统,所述系统包括:模型获取模块,用于获取医学诊疗设备的虚拟模型,其中,所述医学诊疗设备包括至少一个可运动的第一部件,相应地,所述虚拟模型包括模拟所述第一部件的第二部件,所述第一部件所在的设备坐标系与所述第二部件所在的模型坐标系具有映射关系;运动信息获取模块,用于获取所述医学诊疗设备的第一部件和所述虚拟模型的第二部件其中之一者的当前运动信息;运动执行模块,用于所述医学诊疗设备的第一部件和虚拟模型的第二部件的其中另一者,与获取运动指令的所述医学诊疗设备的第一部件和虚拟模型的第二部件的其中之一者执行相同的运动。
本申请实施例之一提供一种医学诊疗设备的控制系统,所述系统包括:运动模拟模块,用于基于X射线摄影机架的实体结构获取所述X射线摄影机架的模型,并基于所述X射线摄影机架的实体结构的运动轨迹,采用所述X射线摄影机架的模型进行相对应的模型运动轨迹的模拟;指令获取模块,用于获取当前X射线摄影机架的实体结构的运动指令,所述运动指令包括当前X射线摄影机架的其中一个部位需要运动到达的目标位置和相关的运动时间信息;模拟控制模块,用于控制所述X射线摄影机架的实体结构基于所述运动指令到达目标位置,所述X射线摄影机架的模型基于所述运动指令,基于所述运动时间信息同步进行模型运动轨迹的模拟;显示模块,用于将所述模型运动轨迹的模拟显示在显示设备上。
本申请实施例之一提供一种医学诊疗设备的控制方法,所述方法包括:获取当前X射线摄影机架的实体结构的运动指令,所述运动指令包括所述当前X射线摄影机架的其中一个 部位需要运动到达的目标位置和相关的运动时间信息;X射线摄影机架的实体结构基于所述运动指令到达目标位置,X射线摄影机架的模型基于所述运动指令,基于所述运动时间信息同步进行模型运动轨迹的模拟;以及将所述模型运动轨迹的模拟显示在显示设备上。
本申请实施例之一提供一种医学诊疗设备的控制系统,所述系统包括:指令获取模块,用于获取当前X射线摄影机架的实体结构的运动指令,所述运动指令包括当前X射线摄影机架的其中一个部位需要运动到达的目标位置和相关运动时间信息;模拟控制模块,用于控制X射线摄影机架的实体结构基于所述运动指令到达目标位置,X射线摄影机架的模型基于所述运动指令,基于所述运动时间信息同步进行模型运动轨迹的模拟;显示模块,用于将所述模型运动轨迹的模拟显示在显示设备上。
本申请实施例之一提供一种医学诊疗设备的控制装置,包括处理器,所述处理器用于执行计算机指令,以实现如本申请任一实施例所述的方法。
本申请实施例之一提供一种医疗成像设备参数确定方法,包括:获取目标对象的体征信息;对所述体征信息进行分析,确定异常体征信息;基于所述目标对象的所述异常体征信息,适应性地确定医疗成像设备的扫描参数及/或图像处理参数。
在一些实施例中,所述扫描参数包括:扫描电压、扫描电流、扫描视野、扫描层数或者扫描层厚。
在一些实施例中,所述图像处理参数包括:图像对比度或者图像均衡度。
在一些实施例中,所述获取目标对象的体征信息是通过传感器来实现的。
在一些实施例中,所述传感器包括摄像头、温度传感器、心跳传感器或呼吸传感器。
在一些实施例中,所述基于所述目标对象的所述异常体征信息,适应性地确定医疗成像设备的扫描参数及/或图像处理参数,包括:基于所述目标对象的所述异常体征信息,确定所述目标对象的疾病类型;基于所述疾病类型,适应性地确定医疗成像设备的扫描参数及/或图像处理参数。
在一些实施例中,所述基于所述疾病类型,适应性地确定医疗成像设备的扫描参数及/或图像处理参数,包括:确定所述异常体征信息的异常等级;基于所述疾病类型和所述异常等级,适应性地确定医疗成像设备的扫描参数及/或图像处理参数。
在一些实施例中,所述对所述体征信息的进行分析,确定异常体征信息,包括:将所述体征信息输入至被训练的体征识别模型中,获取所述体征识别模型输出的异常体征信息;或者,将所述体征信息与常态体征参数进行比对,确定异常体征信息。
在一些实施例中,所述对所述体征信息的进行分析,确定异常体征信息,还包括:将所述体征信息与所述目标对象的类型对应的标准体征信息进行比对,确定不符合所述标准体 征信息的异常体征信息;其中,所述目标对象的类型根据所述目标对象的人体基础信息和/或历史病历确定,所述人体基础信息至少包括:性别、年龄、体重和身高。
在一些实施例中,所述方法还包括:基于所述扫描参数,确定所述目标对象的目标扫描协议;基于所述目标扫描协议和所述图像处理参数对所述目标对象进行扫描,得到所述目标对象的扫描图像。
在一些实施例中,所述图像处理参数为对所述目标对象的感兴趣区域的扫描算法进行处理的参数。
本申请实施例之一提供一种医疗成像设备的成像方法,包括:获取针对目标对象的医疗成像设备的扫描参数;获取所述目标对象的异常体征信息;基于所述异常体征信息,适应性地调整所述医疗成像设备的扫描参数和/或图像处理参数;对所述目标对象进行扫描并获得医学图像,其中,所述扫描是基于所述调整后的扫描参数和/或所述医学图像是经过调整后的图像处理参数处理得到的。
在一些实施例中,所述医疗成像设备包括X射线摄影设备、MR设备、CT设备、PET设备、超声设备或DSA设备,或多模态成像设备。
本申请实施例之一提供一种医疗成像设备参数确定装置,包括:体征信息获取模块,用于获取目标对象的体征信息;异常体征信息确定模块,用于对所述体征信息进行分析,确定异常体征信息;参数确定模块,用于基于所述目标对象的所述异常体征信息,适应性地确定医疗成像设备的扫描参数及/或图像处理参数。
本申请实施例之一提供一种医疗成像设备,包括:成像组件,其用于扫描目标对象以获得医学图像;传感器,获取所述目标对象的异常体征信息;控制器,其与所述成像组件和所述传感器耦接,基于所述异常体征信息适应性地调整所述成像组件的扫描参数,和/或基于所述异常体征信息适应性地调整所述医学图像的图像处理参数。
在一些实施例中,所述传感器包括摄像头、温度传感器、心跳或脉搏传感器或呼吸传感器。
本申请实施例之一提供一种设备,所述设备包括:一个或多个处理器;存储装置,用于存储一个或多个程序;当所述一个或多个程序被所述一个或多个处理器执行,使得所述一个或多个处理器实现如本申请任一实施例所述的医疗成像设备参数确定方法和/或医疗成像设备的成像方法。
本申请实施例之一提供一种包含计算机可执行指令的存储介质,所述计算机可执行指令在由计算机处理器执行时用于执行如本申请任一实施例所述的医疗成像设备参数确定方法和/或医疗成像设备的成像方法。
附图说明
本申请将通过示例性实施例进行进一步描述。这些示例性实施例将通过附图进行详细描述。附图未按比例绘制。这些实施例是非限制性的示例性实施例,在这些实施例中,各图中相同的编号表示相似的结构,其中:
图1是根据本申请一些实施例所示的医学设备控制系统的应用场景的示意图;
图2是根据本申请一些实施例所示的医学设备控制方法的示例性流程图;
图3是根据本申请一些实施例所示的标记电离室的探测区域的方法的示例性流程图;
图4是根据本申请一些实施例所示的自动选择目标电离室的示例性流程图;
图5是根据本申请一些实施例所示的调整投影数据的示例性流程图;
图6是根据本申请一些实施例所示的用于标记电离室的探测区域的系统的模块图;
图7是根据本申请一些实施例所示的一种医学诊疗设备的控制方法的示例性流程图;
图8A是根据本申请一些实施例所示的医学诊疗设备的示例性结构示意图;
图8B是根据本申请一些实施例所示的虚拟模型的示例性结构示意图;
图9是根据本申请一些实施例所示的另一种医学诊疗设备的控制方法的示例性流程图;
图10是根据本申请一些实施例所示的另一种医学诊疗设备的控制方法的示例性流程图;
图11是根据本申请一些实施例所示的一种医学诊疗设备的控制系统的模块示意图;
图12是根据本申请一些实施例所示的另一种医学诊疗设备的控制系统的模块示意图;
图13是根据本申请一些实施例所示的医疗成像设备参数确定方法的示例性流程图;
图14是根据本申请一些实施例所示的医疗成像设备参数确定方法的示例性流程图;
图15是根据本申请一些实施例所示的医疗成像设备参数确定方法的示例性流程图;
图16是根据本申请一些实施例所示的医疗成像设备的成像方法的示例性流程图;
图17是根据本申请一些实施例所示的医疗成像设备参数确定装置的示例性结构示意图;
图18是根据本申请一些实施例所示的医疗成像设备的成像装置的示例性结构示意图;
图19是根据本申请一些实施例所示的一种医疗成像设备的示例性结构示意图;
图20是根据本申请一些实施例所示的一种设备的示例性结构示意图。
具体实施方式
为了更清楚地说明本说明书实施例的技术方案,下面将对实施例描述中所需要使用的附图作简单的介绍。显而易见地,下面描述中的附图仅仅是本说明书的一些示例或实施例,对于本领域的普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图将本说明书应用于其它类似情景。除非从语言环境中显而易见或另做说明,图中相同标号代表相同结构或操作。
应当理解,本文使用的“系统”、“装置”、“单元”和/或“模组”是用于区分不同级别的不同组件、元件、部件、部分或装配的一种方法。然而,如果其他词语可实现相同的目的,则可通过其他表达来替换所述词语。
如本说明书和权利要求书中所示,除非上下文明确提示例外情形,“一”、“一个”、“一种”和/或“该”等词并非特指单数,也可包括复数。一般说来,术语“包括”与“包含”仅提示包括已明确标识的步骤和元素,而这些步骤和元素不构成一个排它性的罗列,方法或者设备也可能包含其它的步骤或元素。
虽然本说明书对根据本说明书的实施例的系统中的某些模块或单元做出了各种引用,然而,任何数量的不同模块或单元可以被使用并运行在客户端和/或服务器上。所述模块仅是说明性的,并且所述系统和方法的不同方面可以使用不同模块。
本说明书中使用了流程图用来说明根据本说明书的实施例的系统所执行的操作。应当理解的是,前面或后面操作不一定按照顺序来精确地执行。相反,可以按照倒序或同时处理各个步骤。同时,也可以将其他操作添加到这些过程中,或从这些过程移除某一步或数步操作。
在本申请中,术语“和/或”可包括任何一个或以上相关所列条目或其组合。本申请中的“图像”一词用于统称图像数据(例如,扫描数据、投影数据)和/或各种形式的图像,包括二维(2D)图像、三维(3D)图像、四维(4D)图像等。
本申请提供了用于控制医学设备的系统和方法。在一些实施例中,医学设备可以包括血管造影机(Digital SubtractionAngiography,DSA)、数字乳腺断层摄影(Digital Breast Tomosynthesis,DBT)、锥形束CT(CBCT)、直接数字化X射线摄影系统(DR)、X射线计算机断层摄影(CT)、移动C形臂等。在一些实施例中,本申请实施例提供的医学设备控制系统和方法可以用于对医学设备进行控制,也可以用于对医学设备中的部件进行控制。例如,本申请实施例提供的系统和方法可以用于对球管、电离室、探测器、扫描床等部件进行控制。
在一些实施例中,医学设备控制系统和方法可以应用于标记电离室的探测区域,在临床使用过程中,需要使待扫描对象或感兴趣区域(ROI)正确覆盖电离室的探测区域,否则可能会导致曝光剂量偏低,影响图像质量。而在实际操作中,由于电离室探测区域的位置标识 缺失或被对象遮挡,操作者不能获取电离室的探测区域的准确位置,从而无法准确判断待扫描对象或感兴趣区域是否覆盖了电离室的探测区域,可能导致图像质量降低。本申请中提供的用于标记电离室的探测区域的系统和方法可以帮助操作者获取电离室的探测区域的准确位置,使其能够准确判断待扫描对象或感兴趣区域是否覆盖了电离室的探测区域,可以提高扫描图像的质量,也可以减少操作者需付出的时间和精力。
在一些实施例中,医学设备控制系统和方法还可以应用于血影造影机或DR设备。以血影造影机为例,血管造影机在执行摄影任务时,由于其工作环境的限制,操作人员无法实时观察到血管造影机机架各关节或结构的具体运动位置,医学设备控制系统和方法可以将血管造影机机架的运动同步显示到可视化的显示设备,通过模型对血管造影机的运动进行同步显示,便于操作者观察以及监控血管造影机机架的运动情况以及运动参数。医学设备控制系统和方法还可以通过控制显示设备中的模型的运动来控制血管造影机机架的运动,进一步提高血管造影机与用户的交互性,便于操作者使用。
在一些实施例中,医学设备控制系统和方法可以应用于医疗成像设备。其中,医疗成像设备可以包括X射线摄影设备、MR设备、CT设备、PET设备、超声设备、DSA设备和多模态成像设备中的至少一个。通过获取目标对象的体征信息,对体征信息进行分析,确定异常体征信息,基于目标对象的异常体征信息以及确定医疗成像设备的扫描参数及/或图像处理参数。能够通过基于目标对象的体征信息,及时确定医疗成像设备的扫描参数,以实现提高诊断效率的目的。同时,能够避免需要医师不断调整扫描参数及/或图像处理参数,导致图像质量不佳,重复扫描,效率低下,影响诊断的问题,而且能够避免目标对象过多接收射线剂量的问题。
应当理解的是,本申请的医学设备控制方法及系统的应用场景仅仅是本申请的一些示例或实施例,对于本领域的普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图将本申请应用于其它类似情景。例如,本申请实施例的方法及系统可以应用于工业上使用的机械臂或机器人。
图1是根据本申请一些实施例所示的医学设备控制系统的应用场景的示意图。如图1所示,医学设备控制系统100可以包括处理设备110、存储设备120、一个或以上终端130、网络140、医学设备150和信息获取设备160。在一些实施例中,扫描设备110、处理设备110、存储设备120、终端130、投影设备150和/或信息获取设备160可以相互连接和/通过无线连接、有线连接或其组合通信。系统100的组件之间的连接可以是可变的。仅作为示例,信息获取设备160可以通过网络140或直接连接到处理设备110。再例如,存储设备120可以通过网络140或直接连接到处理设备110。
处理设备110可以处理从存储设备120和/或终端130获取的数据和/或信息。例如,处理设备110可以使扫描设备151从扫描设备151获取图像数据。又例如,处理设备110可以从终端130获取用户指令。又例如,处理设备110可以从信息获取设备160获取医学设备150和/或目标对象的相关信息。
处理设备110还可以向系统100的一个或多个组件(例如,存储设备120、终端130、医学设备150和/或信息获取设备160)发送控制指令。例如,处理设备110可以医学设备150发送控制指令,使医学设备150的活动组件(例如,电离室、探测器等)移动到指定位置。又例如,处理设备110可以向终端130发送控制指令,使终端130在其显示界面上显示图像数据。再例如,处理设备110可以确定医学设备150的电离室的探测区域,并相应地控制投影设备将医学设备150的电离室的探测区域投影到对象(即目标对象)上。再例如,信息获取设备160可以获取医学设备150的虚拟模型、医学设备150的第一部件和虚拟模型的第二部件其中之一者的当前运动信息,其中,虚拟模型的第二部件用于模拟医学设备150的第一部件,第一部件所在的设备坐标系与第二部件所在的模型坐标系具有映射关系,处理设备110可以控制医学诊疗设备的第一部件和虚拟模型的第二部件的其中另一者,与获取运动指令的医学诊疗设备的第一部件和虚拟模型的第二部件的其中之一者执行相同的运动。再例如,信息获取设备160可以获取目标对象的体征信息,处理设备110可以对体征信息进行分析,确定异常体征信息,并基于目标对象的异常体征信息,适应性地确定医学设备150的扫描参数及/或图像处理参数。
在一些实施例中,处理设备110可以是单个服务器或服务器组。服务器组可以是集中式或分布式的。在一些实施例中,处理设备110可以是系统100本地的或远端的。例如,处理设备110可以经由网络140访问来自扫描设备110、存储设备120、终端130、医学设备150和/或信息获取设备160的信息和/或数据。又例如,处理设备110可以直接连接到存储设备120、终端130、医学设备150和/或信息获取设备160以访问信息和/或数据。在一些实施例中,处理设备110可以在云平台上实现。例如,云平台可以包括私有云、公共云、混合云、社区云、分布式云、云间、多云等,或其组合。
在一些实施例中,处理设备110可以包括一个或以上处理器(例如,单芯片处理器或多芯片处理器)。仅作为示例,处理设备110可以包括中央处理单元(CPU)、专用集成电路(ASIC)、专用指令集处理器(ASIP)、图像处理单元(GPU)、物理运算处理单元(PPU)、数字信号处理器(DSP)、现场可编程门阵列(FPGA)、可编程逻辑器件(PLD)、控制器、微控制器单元、精简指令集计算机(RISC)、微处理器等或其任意组合。
存储设备120可以存储数据、指令和/或任何其他信息。在一些实施例中,存储设备 120可以存储从处理设备110,终端130、医学设备150和/或信息获取设备160获取的数据。在一些实施例中,存储设备120可以存储由处理设备110可以执行或用来执行本申请中描述的示例性方法的数据和/或指令。在一些实施例中,存储设备120可以包括大容量存储设备、可移动存储设备、易失性读写内存、只读存储器(ROM)等或其任意组合。示例性大容量存储设备可以包括磁盘、光盘、固态驱动器等。示例性可移动存储设备可以包括闪存驱动器、软盘、光盘、内存卡、压缩盘、磁带等。示例性易失性读写内存可以包括随机存取内存(RAM)。示例性RAM可以包括动态随机存取内存(DRAM)、双倍数据速率同步动态访问内存(DDR SDRAM)、静态随机存取内存(SRAM)、晶闸管随机存取内存(T-RAM)和零电容随机存取内存(Z-RAM)等。示例性ROM可以包括掩模式只读存储器(MROM)、可编程只读存储器(PROM)、可擦除可编程只读存储器(EPROM)、电可擦除只读存储器(EEPROM)、光盘只读存储器(CD-ROM)和数字通用光盘只读存储器等。在一些实施例中,可以在本申请中其他地方描述的云平台上实现存储设备120。
在一些实施例中,存储设备120可以连接到网络140以与系统100的一个或以上其他组件(例如,终端130、医学设备150和/或信息获取设备160)通信。系统100的一个或以上组件可以经由网络140访问存储在存储设备120中的数据或指令。在一些实施例中,存储设备120可以是处理设备110的一部分。
终端130可以实现用户与系统100的一个或多个组件之间的交互。例如,终端130可以显示图像数据,例如,电离室的探测区域的图像数据、待扫描的对象(即目标对象)的感兴趣区域的图像数据等。用户可以根据该图像数据通过终端130发出指令,例如,向医学设备150发送指定选择的目标电离室的指令,向医学设备150发送开始成像和/或扫描的指令,再例如,向存储设备120发送存储图像数据的指令等。在一些实施例中,终端130可以包括移动设备130-1、平板计算机130-2、膝上型计算机130-3等,或其任意组合。例如,移动设备130-1可以包括移动电话、个人数字助理(PDA)、游戏设备、导航设备、销售点(POS)设备、膝上型计算机、平板计算机、台式计算机等,或其任何组合。在一些实施例中,终端130可以包括输入设备、输出设备等。在一些实施例中,终端130可以是处理设备110的一部分。
网络140可以包括可以促进系统100的信息和/或数据的交换的任何合适的网络。在一些实施例中,一个或以上系统100的组件(例如,处理设备110、存储设备120、终端130、医学设备150和/或信息获取设备160)可以经由网络140与一个或以上系统100的其他组件通信信息和/或数据。例如,处理设备110可以通过网络140从医学设备150获取医疗图像数据。又例如,处理设备110可以经由网络140从终端130获取用户指令。再例如,投影设备 150可以经由网络140从扫描设备110、处理设备110、存储设备120和/或信息获取设备160获取投影数据。
网络140可以是或包括公共网络(例如,因特网)、专用网络(例如,局部区域网络(LAN))、有线网络、无线网络(例如,802.11网络、Wi-Fi网络)、帧中继网络、虚拟专用网(VPN)、卫星网络、电话网络、路由器、集线器、交换机、服务器计算机和/或其任何组合。例如,网络140可以包括电缆网络、有线网络、光纤网络、电信网络、内联网、无线局部区域网络(WLAN)、城域网(MAN)、公共电话交换网络(PSTN)、蓝牙网络、ZigBee网络、近场通信(NFC)网络等,或其任意组合。在一些实施例中,网络140可以包括一个或以上网络接入点。例如,网络140可以包括诸如基站和/或互联网交换点之类的有线和/或无线网络接入点,系统100的一个或以上组件可以通过该有线和/或无线接入点连接到网络140以交换数据和/或信息。
医学设备150可以是执行医学诊疗或研究任务的自动化设备。所述医学诊疗或研究任务可以包括但不限于医学摄影任务、手术任务、康复治疗任务等等。在一些实施例中,医学设备150可以包括扫描设备151和投影设备152。扫描设备151可以通过扫描对象(即目标对象)来生成或提供与对象(即目标对象)有关的图像数据。在一些实施例中,对象(即目标对象)可以包括生物对象和/或非生物对象。例如,对象(即目标对象)可以包括身体的特定部分,例如头部、胸部、腹部等,或其组合。又例如,对象(即目标对象)可以是有生命或无生命的有机和/或无机物质的人造物体。在一些实施例中,扫描设备151可以是用于疾病诊断或研究目的的非侵入性生物医疗成像装置。扫描设备151可以包括单模态扫描仪和/或多模态扫描仪。单模态扫描仪可以包括例如X射线扫描仪、计算机断层扫描(CT)扫描仪、数字射线照相(DR)扫描仪(例如,移动数字射线照相)、数字减法血管造影(DSA)扫描仪、动态空间重建(DSR)扫描仪、X射线显微镜扫描仪等,或其任意组合。例如,X射线成像设备可以包括X射线源和探测器。X射线源可以被配置为向待扫描的对象发射X射线。探测器可以被配置为检测透过对象的X射线。在一些实施例中,X射线成像设备可以是,例如,C形X射线成像设备、直立X射线成像设备、悬挂式X射线成像设备等。多模态扫描仪可以包括例如X射线成像-磁共振成像(X射线-MRI)扫描仪、正电子发射断层扫描-X射线成像(PET-X射线)扫描仪、正电子发射断层扫描-计算机断层摄影(PET-CT)扫描仪、数字减影血管造影-磁共振成像(DSA-MRI)扫描仪等。
上面提供的扫描仪仅用于说明目的,而无意限制本申请的范围。如本文所用,术语“成像模态”或“模态”广泛地是指收集、生成、处理和/或分析对象的成像信息的成像方法或技术。
为了说明目的,本申请主要描述与X射线成像系统有关的系统和方法。应当注意的是,以下说明的X射线成像系统仅作为示例提供,并不用于限制本申请的范围。本文公开的系统和方法可以是任何其他成像系统。
在一些实施例中,扫描设备151可以包括机架151-1、探测器151-2、探测区域151-3、扫描台151-4和放射源151-5。机架151-1可以支撑探测器151-2和放射源151-5。对象可以被放置在扫描台151-4上,然后被移动到探测区域151-3中进行扫描。放射源151-5可以向对象发射放射性射线。放射性射线可以包括粒子射线、光子射线等或其组合。在一些实施例中,放射性射线可以包括至少两个辐射粒子(例如,中子、质子、电子、μ介子、重离子)、至少两个辐射光子(例如,X射线、γ射线、紫外线、激光)等,或其组合。探测器151-2可以检测从探测区域151-3发出的辐射。在一些实施例中,探测器151-2可以包括至少两个探测器单元。探测器单元可以是单行探测器或多行探测器。
投影设备152可以包括能够投影图像数据的任何合适的设备。例如,投影设备152可以是阴极射线管(CRT)投影机、液晶显示器(LCD)投影机、数字光处理器(DLP)投影机、数码光路真空管(DLV)投影机或可以投影图像数据的其他设备。
在扫描设备151对对象执行扫描之前,可以将投影设备152配置为向待扫描的对象投影需要投影的投影数据。在一些实施例中,该投影数据可以包括对应于一个或多个电离室中至少一个电离室的探测区域的图像数据。例如,在扫描之前,投影设备152可以获取所述一个或多个电离室中至少一个电离室的探测区域的图像数据,并将该图像数据投影到待扫描的对象上。在一些实施例中,该投影数据还可以包括对应于所述对象的待扫描的感兴趣区域(ROI)的图像数据。
在一些实施例中,如图1所示,投影设备152可以是独立于扫描设备151的设备。例如,投影设备152可以是安装在检查室的天花板上的投影机,扫描设备151可以位于检查室中。可替代地,投影设备152可以被集成到或安装在扫描设备151(例如,机架111)。
在一些实施例中,医学设备150可以包括医疗成像设备,医疗成像设备可以包括X射线摄影设备、MR(Magnetic Resonance)设备、CT(Computed Tomography)设备、PET(Positron Emission Computed Tomography)设备、超声设备、DSA(Digital subtraction angiography)设备,或多模态成像设备等。
在一些实施例中,医学设备150还可以包括医学诊疗设备,医学诊疗设备包括但不限于血管造影机(Digital Subtraction Angiography,DSA)、数字乳腺断层摄影(Digital Breast Tomosynthesis,DBT)、锥形束CT(CBCT)、数字化X射线摄影系统(DR)、X射线计算机断层摄影(CT)、移动C形臂等。
在一些实施例中,信息获取设备160可以用于获取医学设备150的相关信息和/或目标对象的相关信息。在一些实施例中,目标对象可以为进行用于疾病诊断或研究目的的非侵入式成像的对象,例如,可以是人或动物等。在一些实施例中,信息获取设备160获取的医学设备150的相关信息和/或目标对象的相关信息可以包括扫描设备151中一个或多个电离室的位置信息。在另一些实施例中,信息获取设备160获取的医学设备150的相关信息和/或目标对象的相关信息可以包括医学设备150的虚拟模型,其中,医学设备150可以包括至少一个可运动的第一部件,相应地,虚拟模型可以包括模拟第一部件的第二部件,第一部件所在的设备坐标系与第二部件所在的模型坐标系具有映射关系;信息获取设备160获取的医学设备150的相关信息和/或目标对象的相关信息还可以包括医学设备的第一部件和虚拟模型的第二部件其中之一者的当前运动信息。在另一些实施例中,信息获取设备160获取的医学设备150的相关信息和/或目标对象的相关信息可以包括目标对象的体征信息,其中,体征信息可以是目标对象的基本体征信息,例如,可以是但不限于是目标对象的体温信息、血压信息、血脂信息、呼吸信息、脉搏信息、眼部体征信息、手部体征信息、腿部体征信息或头部体征信息等。
在一些实施例中,信息获取设备160可以包括用于获取目标对象的特征信息的传感器,其中,传感器可以包括位置传感器、图像传感器、温度传感器、心跳传感器或呼吸传感器等。在一些实施例中,信息获取设备160可以是处理设备110的一部分。
在一些实施例中,系统100还可以包括用于捕获对象的图像数据的图像捕获装置(例如,相机或摄像头)。在一些实施例中,该图像捕获装置在捕获对象的图像数据的同时,还可以捕获投影在该对象上的投影数据。在一些实施例中,图像捕获装置可以为信息获取设备160的一种。
该图像捕获装置可以是和/或包括能够捕获对象的图像数据的任何合适的设备。例如,图像捕获装置可以包括相机(例如,数码相机,模拟相机等)、红绿蓝(RGB)传感器、RGB深度(RGB-D)传感器或可以捕获对象的颜色图像数据的其他设备。
在一些实施例中,该图像捕获装置可以是独立于扫描设备151的设备。可替代地,该图像捕获装置可以被集成到或安装在扫描设备151(例如,机架111)上。在一些实施例中,由该图像捕获装置获取的图像数据可以被传送到处理设备110以供进一步分析。附加地或替代地,由该图像捕获装置获取的图像数据可以被发送到终端设备(例如,终端130)用于显示和/或存储设备(例如,存储设备120)用于存储。
在一些实施例中,在由扫描设备151对对象执行扫描之前、期间和/或之后,该图像捕获装置可以连续地或间歇地(例如,周期性地)捕获对象的图像数据。
在一些实施例中,由该图像捕获装置获取图像数据、将捕获的图像数据传输到处理设备110并对图像数据进行分析可以基本上是实时进行的,以使图像数据可以提供指示对象的基本实时状态的信息。
应当注意的是,关于系统100的以上描述旨在是说明性的,而不是限制本申请的范围。许多替代、修改和变化对本领域普通技术人员将是显而易见的。本文描述的示例性实施方式的特征、结构、方法和其它特征可以以各种方式组合以获取另外的和/或替代的示例性实施例。例如,系统100可以包括一个或以上附加组件。附加地或替代地,可以省略系统100的一个或以上组件,例如图像捕获装置。又例如,系统100的两个或以上组件可以集成到单个组件中。仅作为示例,处理设备110(或其一部分)可以集成到扫描设备151中。
在一些实施例中,医学设备控制系统100(如处理设备110)可以用于执行标记电离室的探测区域的方法、医学诊疗设备的控制方法和/或医疗成像设备参数确定方法、成像方法。在一些实施例中,标记电离室的探测区域的方法、医学诊疗设备的控制方法以及医疗成像设备参数确定方法、成像方法可以单独实施或者相互结合实施。
图2是根据本申请一些实施例所示的医学设备控制方法的示例性流程图。具体地,流程200可以由处理设备110执行。例如,流程200可以以程序或指令的形式存储在存储装置(如存储设备120)中,当系统100执行该程序或指令时,可以实现用于标记电离室的探测区域的流程200。如图2所示,流程200可以包括以下步骤。
步骤210,获取医学设备的相关信息和/或目标对象的相关信息。
医学设备可以是执行医学诊疗任务的自动化设备。所述医学诊疗任务可以包括但不限于医学摄影任务、手术任务、康复治疗任务等等。在一些实施例中,医学设备可以包括扫描设备和投影设备,其中,扫描设备可以包括单模态扫描仪(例如,X射线扫描仪、计算机断层扫描(CT)扫描仪、数字射线照相(DR)扫描仪(例如,移动数字射线照相)、数字减法血管造影(DSA)扫描仪、动态空间重建(DSR)扫描仪、X射线显微镜扫描仪等)和/或多模态扫描仪(例如,X射线成像-磁共振成像(X射线-MRI)扫描仪、正电子发射断层扫描-X射线成像(PET-X射线)扫描仪、正电子发射断层扫描-计算机断层摄影(PET-CT)扫描仪、数字减影血管造影-磁共振成像(DSA-MRI)扫描仪等);投影设备可以为能够投影图像数据的设备,例如,阴极射线管(CRT)投影机、液晶显示器(LCD)投影机、数字光处理器(DLP)投影机、数码光路真空管(DLV)投影机或可以投影图像数据的其他设备。在一些实施例中,医学设备可以包括医疗成像设备,例如,X射线摄影设备、MR(Magnetic Resonance)设备、CT(Computed Tomography)设备、PET(Positron Emission Computed Tomography)设备、超声设备、DSA(Digital subtraction angiography)设备,或多模态成像设备等。在一些实施例中, 医学设备还可以包括医学诊疗设备,例如,血管造影机(Digital SubtractionAngiography,DSA)、数字乳腺断层摄影(Digital Breast Tomosynthesis,DBT)、锥形束CT(CBCT)、直接数字化X射线摄影系统(DR)、X射线计算机断层摄影(CT)、移动C形臂等。关于医学设备的更多描述可以参见图1及其相关描述。
在一些实施例中,医学设备的相关信息可以包括与成像相关的信息,例如,一个或多个电离室的位置信息,其中,电离室的位置信息可以包括电离室相对于扫描设备的一个或多个组件(例如,探测器)的位置信息和/或电离室在3D坐标系中的位置。在一些实施例中,电离室相对于扫描设备的探测器(例如平板探测器)的位置可以是固定的,操作者和/或扫描设备可以在扫描前根据对象的ROI的位置来调整探测器的位置。处理设备110可以获取扫描设备的探测器在检查室中的位置以及电离室相对于扫描设备的探测器的固定位置,来确定电离室在检查室中的位置信息。在一些实施例中,电离室相对于扫描设备的探测器的位置是可以调整的。例如,电离室可以安装在可移动的片盒内,该片盒和/或扫描设备的其他组件中可以安装有位置传感器。处理设备110可以获取位置传感器检测到的数据来确定电离室的位置。更多关于电离室的位置信息的描述,可以参见图3及其相关描述。
在一些实施例中,医学设备的相关信息还可以包括医学设备的虚拟模型以及当前运动信息,其中,医学设备可以包括至少一个可运动的第一部件,相应地,虚拟模型可以包括模拟第一部件的第二部件,第一部件所在的设备坐标系与第二部件所在的模型坐标系具有映射关系;当前运动信息可以为医学设备的第一部件和虚拟模型的第二部件中的至少一个的运动信息。在一些实施例中,虚拟模型可以通过对医学诊疗设备的数据进行建模得到,运动信息可以基于运动控制信息获取。更多关于虚拟模型和/或当前运动信息的描述,可以参见图7及其相关描述。
目标对象可以为进行用于疾病诊断或研究目的的非侵入式成像的对象。在一些实施例中,目标对象可以包括生物对象和/或非生物对象。例如,对象可以包括身体的特定部分,例如头部、胸部、腹部等,或其组合。又例如,对象可以是有生命或无生命的有机和/或无机物质的人造物体。
在一些实施例中,目标对象的相关信息可以包括目标对象的位置信息和体征信息等。在一些实施例中,体征信息可以是目标对象的基本体征信息,例如,可以是但不限于是目标对象的体温信息、血压信息、血脂信息、呼吸信息、脉搏信息、眼部体征信息、手部体征信息、腿部体征信息或头部体征信息等。在一些实施例中,可以通过传感器获取目标对象的相关信息,其中,传感器可以包括、位置传感器、摄像头、温度传感器、心跳传感器或呼吸传感器等。更多关于目标对象的体征信息的描述。可以参见图13及其相关描述。
步骤220,基于医学设备的相关信息和/或目标对象的相关信息,控制医学设备。
在一些实施例中,处理设备110可以基于医学设备的相关信息,控制医学设备对目标对象进行投影。更进一步地,处理设备110可以基于一个或多个电离室的位置信息确定投影数据,并控制投影设备将投影数据投影到目标对象上。例如,处理设备110可以基于一个或多个电离室的位置信息,确定一个或多个电离室中至少一个电离室的探测区域,并确定投影设备的投影数据,投影数据包括对应于至少一个电离室的探测区域的图像数据,控制投影设备将投影数据投影到对象上,更多关于控制医学设备对目标对象进行投影的描述,可以参见图3及其相关描述。
在一些实施例中,处理设备110可以基于医学设备的相关信息,控制医学设备运动。更进一步地,处理设备110可以基于医学设备的虚拟模型,控制医学设备运动。例如,处理设备110可以获取医学诊疗设备的第一部件和虚拟模型的第二部件其中之一者的当前运动信息,并控制医学诊疗设备的第一部件和虚拟模型的第二部件的其中另一者,与获取运动指令的医学诊疗设备的第一部件和虚拟模型的第二部件的其中之一者执行相同的运动。更多关于控制医学设备运动的描述。可以参见图7及其相关描述。
在一些实施例中,处理设备110可以基于目标对象的相关信息,控制医学设备对目标对象进行扫描。更进一步地,处理设备110可以基于目标对象的体征信息,控制医学设备对目标对象进行扫描。例如,处理设备110可以对目标对象的体征信息进行分析,确定异常体征信息,基于目标对象的异常体征信息,适应性地确定医疗成像设备的扫描参数及/或图像处理参数。更多关于控制医学设备运动的描述,可以参见图13及其相关描述。
在一些实施例中,处理设备100可以基于医学设备的相关信息和目标对象的相关信息,控制医学设备。例如,处理设备100可以基于医学设备的相关信息和目标对象的相关信息,控制医学设备对目标对象进行扫描。在一些实施例中,在对目标对象进行扫描前,处理设备110可以对目标对象的体征信息进行分析,确定异常体征信息,基于目标对象的异常体征信息,适应性地确定医学设备的扫描参数及/或图像处理参数。与此同时,在对目标对象进行扫描前,处理设备110可以基于一个或多个电离室的位置信息,确定一个或多个电离室中至少一个电离室的探测区域,并确定投影设备的投影数据,进而控制投影设备将投影数据投影到目标对象上。进一步,处理设备110可以基于至少一个电离室的探测区域的投影数据的投影结果(例如,一个电离室的探测区域的图像数据若被投影到目标对象的身体表面,该电离室可以为标记为候选电离室),确定一个或多个目标电离室。处理设备110可以根据确定的医学设备的扫描参数对一个或多个目标电离室中的目标对象进行扫描,和/或处理设备110可以在获取扫描图像后,根据确定的医学设备的图像处理参数对扫描图像进行处理。在一些 实施例中,在对目标对象进行扫描前,响应于确定至少一个电离室中不包括任何候选电离室,处理设备110可以基于目标对象的位置信息控制扫描台151-4移动,将目标对象移动到至少一个电离室的探测区域中进行扫描。更多关于候选电离室、目标电离室的描述。可以参见图7及其相关描述。
自动曝光控制(Automatic Exposure Control,AEC)技术是利用电离室探测穿过被扫描对象之后的射线量,从而控制X射线机曝光时间以及X射线的总量,可以使对不同部位、不同病人所拍摄的X线图像具有相同水平的感光量,避免出现拍摄所得图像之间剂量差异过大、图像质量参差不齐的现象。在临床使用过程中,需要使待扫描对象或待扫描的感兴趣区域(ROI)正确覆盖电离室的探测区域,否则可能会导致曝光剂量偏低,降低图像质量。传统方式中,一般采用标记框或标记线的方式在设备表面标识出电离室的探测区域。而在实际操作中,由于电离室探测区域的位置标识非常容易被人体或衣物遮挡,或者部分可移动的物体表面(如检查床的活动床面)无法标记电离室的探测区域,这使得操作者难以获取电离室的探测区域的准确位置,从而无法准确判断待扫描对象或感兴趣区域是否覆盖了电离室的探测区域。因此,有必要提供一种标记电离室的探测区域的方法和系统。
图3是根据本申请一些实施例所示的标记电离室的探测区域的方法的流程图。具体地,流程300可以由处理设备110执行。例如,流程300可以以程序或指令的形式存储在存储装置(如存储设备120)中,当系统100执行该程序或指令时,可以实现用于标记电离室的探测区域的流程300。如图3所示,流程300可以包括以下步骤。
步骤310,获取扫描设备中一个或多个电离室在检查室中的位置信息,所述扫描设备用于扫描对象。
具体地,该步骤可以由获取模块610完成。
在一些实施例中,扫描设备(例如,扫描设备151)用于扫描位于检查室中的对象(即,目标对象),例如病人。仅作为示例,该扫描设备可以是医学成像设备,例如悬挂式X射线成像设备、直立X射线成像设备、数字射线照相(DR)设备(例如,移动数字X射线医学成像装置)等,或如本申请其他部分所述类似设备。例如,该扫描设备可以是立式X射线成像设备,扫描时,X射线源发射的射线穿过站立的病人的感兴趣区域(ROI),该扫描设备的图像接收器可以检测到穿过病人的ROI的X射线的强度。ROI可以包含需要扫描的对象的一个或以上身体部位(例如,组织、器官)。
在一些实施例中,处理设备110可以基于目标对象(例如,病人)的体征信息,确定目标对象的感兴趣区域(ROI)。更进一步地,处理设备110可以基于目标对象(例如,病人)的体征信息,确定异常体征信息,并基于目标对象的异常体征信息,确定目标对象的疾病类 型,再基于目标对象的疾病类型确定目标对象的感兴趣区域(ROI)。例如,若目标对象的体温过高,则目标对象的肺部可能具有异常,则肺部可以被确定为目标对象的感兴趣区域(ROI)。
扫描设备中的电离室可以被配置为检测到达扫描设备的探测器的辐射量(例如,在一定时间内电离室的探测区域中的辐射量)。电离室通常可以设于探测器与待扫描对象之间。在一些实施例中,所述电离室可包括固体电离室、液体电离室、空气电离室等各种适用于医学成像过程的电离室,本申请对此不作限制。在一些实施例中,可以在多个电离室中,选择一个或多个目标电离室(将结合图4中的操作460进行描述)。一个或多个目标电离室可以在对目标对象进行扫描时启动,而其他的电离室(如果有的话)在扫描目标对象的过程中可以关闭。在一些实施例中,在扫描对象时可以实施自动曝光控制(Automatic Exposure Control,AEC)方法。当一个或多个目标电离室检测到的辐射累积量超过阈值时,辐射控制器(例如扫描设备110的组件或控制模块640)可以使扫描设备的放射源停止扫描。
在一些实施例中,电离室在检查室中的位置信息可以包括电离室相对于扫描设备110的一个或多个组件(例如,探测器)的位置信息和/或电离室在3D坐标系中的位置。在一些实施例中,电离室相对于扫描设备的探测器(例如平板探测器)的位置可以是固定的。例如,电离室可以固定在相对于探测器不变的某一位置,在不同的扫描操作中,电离室相对于探测器的位置是不发生变化的。而操作者和/或扫描设备可以在扫描前根据对象的ROI的位置来调整探测器的位置。处理设备110可以获取扫描设备的探测器在检查室中的位置以及电离室相对于扫描设备的探测器的固定位置,来确定电离室在检查室中的位置信息。在一些实施例中,电离室相对于扫描设备的探测器的位置是可以调整的。例如,电离室可以安装在可移动的片盒内,该片盒和/或扫描设备的其他组件中可以安装有位置传感器。处理设备110可以获取位置传感器检测到的数据来确定电离室的位置。在一些实施例中,电离室在检查室中的位置信息可以是电离室在3D坐标系中的位置。例如,可以在整个检查室中建立3D坐标系,用于描述电离室和/或系统100的其他组件(例如探测器、投影设备152)的位置。
步骤320,基于所述一个或多个电离室的位置信息,确定所述一个或多个电离室中至少一个电离室的探测区域。
具体地,该步骤可以由探测区域确定模块620完成。
在一些实施例中,电离室的探测区域与电离室的位置信息可以是相关的。一个电离室的探测区域可以是该电离室周围的一个固定范围,例如可以是固定大小的圆形区域、方形区域、三角形区域或其他形状的区域。该探测区域的大小和形状可以与电离室的大小相关。在一些实施例中,该区域的尺寸(例如半径、边长、面积等)可以是预先设置在系统100中的。处理设备110可以基于某个电离室的位置信息和探测区域的尺寸来确定所述电离室的实际探 测区域。在一些实施例中,处理设备110可以只确定一个或多个电离室中的一个电离室的探测区域。在一些实施例中,处理设备110可以确定多个电离室中的每个电离室的探测区域。在一些实施例中,处理设备110可以只确定所述一个或多个电离室中的一部分电离室的探测区域。例如,处理设备110可以在确定多个电离室的位置信息后,进一步确定这些电离室中靠近平板检测器的某个位置(例如中心点、上半部分、下半部分等)的一部分电离室的探测区域。
步骤330,确定投影设备的投影数据,所述投影数据包括对应于所述至少一个电离室的探测区域的图像数据。
具体地,该步骤可以由投影数据确定模块630完成。
在一些实施例中,处理设备110可以确定投影设备152的投影数据,所述投影数据包括对应于所述一个或多个电离室中至少一个电离室(下文中简称为“至少一个电离室”)的探测区域的图像数据。该图像数据可以是基于所述至少一个电离室的实际探测区域而生成的数据。该图像数据可以是彩色图像数据,也可以是灰度图像数据。在一些实施例中,当处理设备110确定了多个电离室的探测区域时,该图像数据中可以包括多个图形(例如,圆形、方形等),其中每个图形分别对应于一个电离室的探测区域。该图形可以是有颜色填充的图形,也可以是描绘电离室的探测区域的轮廓的线条。可以用相同的颜色和/或图形表示这些探测区域,也可以用不同的颜色和/或图形表示这些探测区域。在一些实施例中,处理设备110可以获取投影设备在检查室中的位置,并根据投影设备的位置、电离室的位置、电离室的探测区域、对象的体厚等中的一项或多项来确定对应于电离室的探测区域的图形的尺寸,使得投影到对象身上的电离室的探测区域与电离室的实际探测区域大小相符。所述投影设备用于将所述投影数据投影到待扫描的对象身上,以标记电离室的探测区域。在一些实施例中,所述投影数据还包括其他形式的数据或参数,例如投影方向、亮度、分辨率、对比度等或其组合。这些数据或参数中的至少一部分可以具有默认值或由用户(如操作者)手动设置的值。投影方向可以是沿着投影设备的镜头至探测器的中心点(或所述至少一个电离室的探测区域的中心点)的方向。在一些实施例中,该图像数据可以包括图形、文字、图案、图标、数字等内容。
步骤340,控制所述投影设备将所述投影数据投影到所述对象上。
具体地,该步骤可以由控制模块640完成。
在一些实施例中,处理设备110可以控制投影设备150将投影数据投影到对象上。该投影设备152可以是和/或包括能够投影图像数据的任何合适的设备。例如,投影设备152可以是阴极射线管(CRT)投影机、液晶显示器(LCD)投影机、数字光处理器(DLP)投影机、 数码光路真空管(DLV)投影机或可以投影图像数据的其他设备。在一些实施例中,投影设备152可以采用中心投影或平行投影的方式将投影数据投影到对像上。当采用平行投影的方式时,投影设备152可以采用正投影或斜投影的方式将投影数据投影到对像上,优选地,可以采用正投影的方式将投影数据投影到对像上。
该投影数据可以包括对应于至少一个电离室的探测区域的图像数据。例如,投影设备可以朝着探测器的中心点来投影需要投影的数据,由于对象的遮挡,对应于至少一个电离室的探测区域的图像数据可以被投影到对象的身体表面,从而标记所述至少一个电离室的探测区域。操作者可以方便地观察到投影出来的至少一个电离室的探测区域,从而判断至少一个电离室中是否包括一个或多个候选电离室。如本文中所使用的,术语“候选电离室”是指被所述对象的待扫描的ROI所覆盖的电离室。在一些实施例中,操作者可以通过目测来判断ROI的大致范围。在一些实施例中,投影设备所投影到对象上的投影数据还可以包括对应于ROI的图像数据。例如,可以用有颜色填充的图形(如矩形)来表示ROI,或者用有颜色的线条来描绘ROI的轮廓。可以用不同显示方式来区分ROI和电离室的探测区域。例如,可以用不同的颜色来表示ROI和电离室的探测区域。再例如,可以用线条来描绘ROI的轮廓,而用有颜色填充的图形来表示电离室的探测区域。这样的投影方式便于操作者直观地观察到对象的ROI是否覆盖了至少一个电离室中至少一部分电离室的探测区域。可选地,操作者可以用激光定位灯投射在对象身上,来观察ROI的范围。在一些实施例中,处理设备110可以自动确定对象的ROI。ROI的确定方式可以采用本领域技术人员常用的方式,本申请对此不做限制。例如,处理设备110可以获取含有对象的待扫描部位(例如胸部,下肢)的图像(例如图4中描述的参考图像),并采用模板匹配算法、机器学习模型等方式来确定对象的ROI。
如果操作者确定所述至少一个电离室中不包括任何候选电离室(即对象的ROI没有覆盖所述至少一个电离室中任何电离室的探测区域),则操作者可以调整至少一个电离室中的一个或多个电离室(又称为参考电离室)相对于对象(例如病人)的ROI的位置。例如,操作者可以指导病人改变位置或改变姿势。再例如,若电离室可以相对于探测器进行移动,操作者可以调整一个或多个参考电离室相对于探测器的位置。又例如,操作者可以调整探测器的位置(例如向上下和/或左右方向移动探测器),从而改变一个或多个参考电离室相对于对象的ROI的位置。调整电离室相对于ROI的位置后,处理设备110可以重复进行流程300,以便于操作者确定所述一个或多个参考电离室中是否有至少一个参考电离室的探测区域被ROI覆盖。在一些实施例中,若一个或多个参考电离室的位置发生了改变,处理设备110可以实时更新所述投影数据,使投影数据能够反映出所述一个或多个参考电离室的改变后的实际探测区域,以便用户观察和判断是否需要继续调整所述一个或多个参考电离室的位置。
在一些实施例中,处理设备110还可以通过控制放置有目标对象的扫描台151-4运动,调整至少一个电离室中的一个或多个电离室(又称为参考电离室)相对于对象(例如病人)的ROI的位置。例如,处理设备110可以基于一个或多个参考电离室相对于探测器的位置生成运动控制指令,基于运动控制指令控制扫描台151-4运动,调整至少一个电离室中的一个或多个电离室相对于对象的ROI的位置。更进一步地,处理设备110可以获取扫描台151-4的虚拟模型,基于扫描台151-4的虚拟模型控制放置有目标对象的扫描台151-4运动。例如,处理设备110可以基于一个或多个参考电离室相对于探测器的位置生成运动控制指令,处理设备110可以控制虚拟模型或扫描台151-4执行运动控制指令,虚拟模型或扫描台151-4中的另一个与获取运动控制指令的虚拟模型或扫描台151-4执行相同的运动。例如,扫描台151-4可以从处理设备110获取运动控制指令,并基于该运动控制指令自动执行相应的运动。在一些实施例中,当扫描台151-4接收到运动控制指令执行当前运动时,处理设备110可以获取扫描台151-4的当前运动信息。在一些实施例中,执行相同的运动的过程可以基于扫描台151-4的设备坐标系与虚拟模型的模型坐标系具有映射关系实现。例如,当扫描台151-4执行当前运动时,处理设备110可以获取扫描台151-4的当前运动信息,并将扫描台151-4的当前运动信息中的位置信息(如,设备坐标系中的坐标)映射至模型坐标系,进而基于映射后的位置信息(如,模型坐标系中的坐标)控制虚拟模型运动至该位置。通过虚拟模型对医学诊疗设备的运动进行同步显示,便于操作者观察以及监控医学诊疗设备的运动情况及参数,进一步提高医学诊疗设备与用户的交互性,便于操作者使用。
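An illustrative sketch (added for clarity; not part of the original description) of keeping a virtual model synchronized with a physical component through a fixed mapping between the device coordinate system and the model coordinate system, as described above. The transform values, function names and the `move_to` interface are assumptions made for illustration.

```python
import numpy as np

# Assumed fixed affine mapping from the device coordinate system to the
# model coordinate system (the scale/rotation R and offset t are illustrative).
R = np.eye(3) * 0.01            # e.g. millimetres in the device -> metres in the model
t = np.array([0.0, 0.0, 0.5])

def device_to_model(p_device):
    """Map a position from device coordinates to model coordinates."""
    return R @ np.asarray(p_device, dtype=float) + t

def synchronize(model, device_position):
    """Move the virtual model's part so that it mirrors the physical part's position."""
    model.move_to(device_to_model(device_position))

class _StubModel:
    """Stand-in for a displayed virtual model, used only to make the sketch runnable."""
    def move_to(self, p):
        print("model part moved to", p)

if __name__ == "__main__":
    synchronize(_StubModel(), [100.0, 200.0, 0.0])   # device position, e.g. in mm
```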
进一步地,如果操作者确定所述至少一个电离室中包括一个或多个候选电离室(即对象的ROI覆盖了所述至少一个电离室中的至少一部分电离室的探测区域),则用户可以通过终端(例如终端130)从所述一个或多个候选电离室中选取一个或多个目标电离室。处理设备110可以从终端130获取用户的输入来确定被选取的目标电离室。可选地,处理设备110可以调整投影设备152的投影数据,使得所述投影数据中对应于所述一个或多个目标电离室的探测区域的图像数据与对应于其他电离室的探测区域的图像数据具有视觉上的明显差异。更多关于确定目标电离室之后调整投影数据的方式的描述,可以参见图5。
在一些实施例中,处理设备110可以自动确定所述至少一个电离室中是否包括一个或多个候选电离室。具体地,处理设备110可以自动确定对象的ROI是否覆盖了所述至少一个电离室中至少一部分电离室的探测区域。响应于对象的ROI覆盖了所述至少一个电离室中至少一部分电离室的探测区域,处理器110可以将这些探测区域被覆盖的电离室指定为候选电离室。可选地,处理设备110还可以自动从一个或多个候选电离室中选取一个或多个目标电离室。关于上述自动确定过程的描述可以参考图4。
图4是根据本申请一些实施例所示的自动选择目标电离室的流程图。具体地,流程400可以由处理设备110执行。例如,流程400可以以程序或指令的形式存储在存储装置(如存储设备120)中,当系统100执行该程序或指令时,可以实现用于自动选择目标电离室的流程400。如图4所示,流程400可以包括以下步骤。
步骤410,获取所述对象的参考图像,所述参考图像由摄像头在所述投影设备将所述投影数据投影到待扫描的对象上之后拍摄。
在一些实施例中,步骤410可以由获取模块610执行。
在一些实施例中,系统100的图像捕获装置(例如,摄像头)可以采集所述对象的参考图像。例如,在所述投影设备将所述投影数据投影到待扫描的对象上之后,处理设备110(例如控制模块640)可以控制摄像头采集对象的参考图像。摄像头在捕获对象的图像数据的同时,还可以捕获投影在该对象上的图形。在一些实施例中,该参考图像可以包括对象的ROI以及标记出来的投影在该对象上的至少一个电离室的探测区域。在一些实施例中,所述投影设备投影在该对象上的图形可以包括对应于至少一个电离室的探测区域的图形以及对应于对象的ROI的图形。相应地,该参考图像可以包括通过投影方式标记的ROI以及至少一个电离室的探测区域。
该图像捕获装置可以是和/或包括能够捕获对象的图像数据的任何合适的设备。例如,图像捕获装置可以包括相机(例如,数码相机,模拟相机等)、摄像头、红绿蓝(RGB)传感器、RGB深度(RGB-D)传感器或可以捕获对象的图像数据的其他设备。优选地,图像捕获装置(例如,摄像头)可以采用正射影像的方式拍摄对象的参考图像。图像捕获装置(例如,摄像头)也可以采用其他方式(例如,倾斜摄影等)拍摄对象的参考图像,本申请对此不作限制。
步骤420,识别参考图像中的第一区域,所述第一区域对应于所述对象的待扫描的感兴趣区域。
在一些实施例中,投影设备的投影数据中不包括对应于ROI的图像数据。处理设备110(例如候选电离室确定模块650)可以从参考图像中识别出对应于ROI的第一区域。例如,处理设备110可以采用模板匹配算法、机器学习模型等方式来从参考图像中确定对象的ROI,本申请对此不做限制。仅作为示例,若采用机器学习模型方式,可以将参考图像输入训练后的机器学习模型,所述训练后的机器学习模型对参考图像进行处理后可以输出识别出来的第一区域。用于训练机器学习模型的训练样本可以包括多个样本图像和根据样本图像手工标记的ROI。
在一些实施例中,投影设备的投影数据中可以包括对应于ROI的图像数据。处理设 备110可以采用图像识别算法,从参考图像中识别出所述第一区域。例如,所述图像识别算法可以包括基于颜色特征、纹理特征、形状特征以及局部特征点等图像特征的识别算法。在一些实施例中,系统100的组件(例如,终端130)可以将该第一区域显示在终端的显示界面上。
步骤430,识别参考图像中的第二区域,所述第二区域对应于投影到所述对象上的至少一个电离室的探测区域。
在一些实施例中,处理设备110可以采用图像识别算法,从参考图像中识别出对应于至少一个电离室的探测区域的第二区域。例如,所述图像识别算法可以包括基于颜色特征、纹理特征、形状特征以及局部特征点等图像特征的识别算法。在一些实施例中,对应于投影到对象上的至少一个电离室的探测区域的第二区域可以由多个分隔的区域组成。例如,当投影数据中包含2个电离室的探测区域的图像数据时,该第二区域可以是由2个分隔的子区域组成,其中的每一个子区域分别对应一个电离室的探测区域。
步骤440,基于所述第一区域和第二区域,确定所述至少一个电离室中是否包括一个或多个候选电离室。
在一些实施例中,步骤440可以由候选电离室确定模块650确定。例如,系统100的组件(例如,处理设备110)可以判断该第一区域是否覆盖了第二区域或该第二区域的一个或多个子区域中的至少一个。响应于第一区域覆盖了第二区域或第二区域的一个或多个子区域中的至少一个,则处理设备110可以确定所述至少一个电离室中包括至少一个候选电离室。目标电离室可以从候选电离室中选择。若第一区域没有覆盖第二区域或第二区域的任何子区域,处理设备110可以确定所述至少一个电离室中不包括任何候选电离室。
需要说明的是,上述判断可以由系统100的组件(例如,处理设备110)基于第一区域和第二区域自动执行,也可以由用户进行人为判断。例如,用户可以通过显示在系统100的组件(例如,终端130)的显示界面上的第一区域和第二区域进行人为判断,并将判断结果输入到终端130中。
步骤450,响应于确定所述至少一个电离室中不包括任何候选电离室,使终端设备生成提示信息。
在一些实施例中,步骤450可以由候选电离室确定模块650确定。
例如,该提示信息可以是文本、语音、图像、视频、警报等或其任何组合的形式。例如,该提示信息可以是文本和语音形式,当处理设备110确定ROI未覆盖所述至少一个电离室中的任何一个电离室的探测区域时,终端130的显示界面上可以显示用于提示的文字(例如,“ROI未覆盖任何电离室探测区域”),同时终端130可以发出对应于该文字的语音提示。 再例如,该提示信息可以是图像形式,当系统100的组件(例如,处理设备110)确定所述至少一个电离室中的任何一个电离室的探测区域未覆盖所述感兴趣区域时,终端130的显示界面上用于显示该第一区域和/或第二区域的部分可以改变颜色和/或进行闪烁以提示用户。在一些实施例中,用户在收到提示信息后,可以人为改变所述至少一个电离室中一个或多个参考电离室相对于待扫描的ROI的位置,使所述ROI可以覆盖一个或多个参考电离室中的至少一个参考电离室的探测区域。例如,用户可以调整一个或多个参考电离室的位置。再例如,用户可以调整对象的摆位姿势和/或位置,使所述对象的ROI相对于所述一个或多个参考电离室进行移动,以使所述ROI可以覆盖一个或多个参考电离室中的至少一个参考电离室的探测区域。
在一些实施例中,响应于确定所述至少一个电离室中不包括任何候选电离室,系统100的组件(例如,处理设备110)可以使所述至少一个电离室中的一个或多个参考电离室相对于所述对象的ROI进行移动,以使所述ROI可以覆盖一个或多个参考电离室中的至少一个参考电离室的探测区域。
步骤460,响应于确定所述至少一个电离室中包括一个或多个候选电离室,从所述一个或多个候选电离室中选择一个或多个目标电离室,所述一个或多个目标电离室将会在扫描所述对象的过程中运行。
在一些实施例中,步骤460可以由目标电离室确定模块660执行。
在一些实施例中,响应于确定所述至少一个电离室中包括一个或多个候选电离室,处理设备110可以从所述一个或多个候选电离室中选择一个或多个目标电离室,所述一个或多个目标电离室将会在扫描所述对象的过程中运行。
在一些实施例中,处理设备110可以在一个或多个候选电离室中选择对象的ROI附近的一个或多个目标电离室。例如,处理设备110可以基于候选电离室和ROI之间的距离,从候选电离室中选择ROI附近的一个或多个目标电离室。候选电离室和ROI之间的距离可以指候选电离室的点(例如,中心点)和ROI的点(例如,中心点)之间的距离。处理设备110可以基于候选电离室的位置信息和ROI的位置信息来确定候选电离室和ROI之间的距离。仅作为示例,对于一个候选电离室,处理设备110可以确定该候选电离室与ROI之间的距离。处理设备110可以确定该距离是否小于距离阈值(例如,2厘米)。若该候选电离室与ROI之间的距离小于距离阈值,则处理设备110可以确定该候选电离室在ROI附近,并且将该候选电离室指定为一个目标电离室。又例如,处理设备110可以在多个候选电离室中选择最接近ROI的候选电离室,即与ROI的距离最小的候选电离室,作为目标电离室。可选地,处理设备110还可以选择距离ROI的重要部位最近的候选电离室作为目标电离室。例如,当ROI 为胸腔时,处理设备110可以选择距离心脏部位最近的候选电离室作为目标电离室。可选地,处理设备110还可以从所述候选电离室中随机选取一个或多个目标电离室。
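下面用一段简化的Python代码示意"基于候选电离室与ROI中心点之间的距离选择目标电离室"的思路。距离阈值（2厘米）取自上文示例，坐标单位、函数名与数据结构均为假设性的，仅作为示意，并非本申请的唯一实现方式。

```python
import math

def select_target_chambers(candidates, roi_center, dist_threshold=2.0):
    """candidates: {候选电离室编号: 探测区域中心点坐标(x, y)}（单位假设为厘米）；
    roi_center: ROI中心点坐标；
    返回 (距离小于阈值的目标电离室列表, 距离ROI最近的候选电离室编号)。"""
    distances = {cid: math.dist(center, roi_center) for cid, center in candidates.items()}
    targets = [cid for cid, d in distances.items() if d < dist_threshold]  # 小于阈值即视为在ROI附近
    nearest = min(distances, key=distances.get)                            # 与ROI距离最小的候选电离室
    return targets, nearest

targets, nearest = select_target_chambers({1: (10.5, 8.0), 2: (14.0, 8.0)}, roi_center=(11.0, 8.5))
print(targets, nearest)  # 例如输出: [1] 1
```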
图5是根据本申请一些实施例所示的调整投影数据的流程图。具体地,流程500可以由处理设备110执行。例如,流程500可以以程序或指令的形式存储在存储装置(如存储设备120)中,当系统100执行该程序或指令时,可以实现用于调整投影数据的流程500。如图5所示,流程500可以包括以下步骤。
步骤510,获取从所述至少一个电离室中选择的一个或多个目标电离室的识别信息。
在一些实施例中，步骤510可以由目标电离室确定模块660执行。
在一些实施例中,处理设备110可以获取从所述至少一个电离室中选择的一个或多个目标电离室的识别信息。该识别信息是用于区分不同电离室的信息。例如,该识别信息可以是电离室的编号,也可以是电离室的位置信息和/或其它可以将目标电离室与其它电离室区分开来的信息。所述一个或多个目标电离室可以是由用户在观察投影结果(例如流程300中步骤340中投影设备所投影的图像数据)后人为确定的,也可以是由处理设备110自动确定的(例如根据流程400确定的)。
步骤520,基于所述识别信息,调整所述投影数据,使所述投影数据中对应于所述一个或多个目标电离室的探测区域的图像数据的第一特征值与对应于其他电离室的探测区域的图像数据的第二特征值不同,其中,所述第一特征值和所述第二特征值对应于相同的图像特征。
在一些实施例中,该图像特征可以是表征图像不同属性的特征,例如图像的填充颜色、图像轮廓的颜色、图像轮廓的粗细等。该第一特征值和第二特征值可以是对应于相同的图像特征的不同特征值。通过使所述投影数据中对应于所述一个或多个目标电离室的探测区域的图像数据的第一特征值与对应于其他电离室的探测区域的图像数据的第二特征值不同,可以便于用户区分投影到对象上的目标电离室的探测区域和其他电离室的探测区域。例如,对应于图像的颜色这一图像特征,第一特征值可以是红色,第二特征值可以是绿色;在调整投影数据之前,对应于所述至少一个电离室中的所有电离室的探测区域的图形的颜色可以均为绿色。系统100的组件(例如,处理设备110)可以基于识别信息,使目标电离室的探测区域的图形的颜色改变为红色,其他电离室的探测区域的图形的颜色保持绿色不变,以达到将目标电离室与其它电离室区分开来的目的。再例如,对应于图像边框的宽度这一图像特征,第一特征值可以是5mm,第二特征值可以是1mm。在一些实施例中,该图像特征还可以是文字和/或符号。例如,该图像特征可以是一个箭头,此时,该图像特征的特征值可以是是否含有该箭头。例如,对应于该图像特征,第一特征值可以是含有该箭头,第二特征值可以是不含有 该箭头。类似地,该图像特征还可以是文字,例如“选中”等。在一些实施例中,该第一特征值和/或第二特征值可以是预先设置在系统100中的,也可以是用户在操作中(例如,通过终端130)设定的。例如,用户可以通过终端130将对应于图像的颜色这一图像特征的第一特征值设定为黄色。
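作为示意，下面的Python片段展示"基于识别信息调整投影数据中对应图形的特征值"这一处理的一种可能实现。这里将投影数据简化为每个电离室探测区域图形的属性字典，颜色取值沿用上文"目标为红色、其他为绿色"的示例；函数名与数据结构均为假设，并非本申请限定的投影数据格式。

```python
def adjust_projection_colors(projection_items, target_ids,
                             target_color=(255, 0, 0), other_color=(0, 255, 0)):
    """projection_items: {电离室编号: {"color": (R, G, B), ...}}，假设的简化投影数据；
    target_ids: 目标电离室的识别信息（编号集合）。
    将目标电离室探测区域图形的第一特征值（颜色）改为红色，其余电离室保持绿色。"""
    for chamber_id, item in projection_items.items():
        item["color"] = target_color if chamber_id in target_ids else other_color
    return projection_items

items = {1: {"color": (0, 255, 0)}, 2: {"color": (0, 255, 0)}, 3: {"color": (0, 255, 0)}}
print(adjust_projection_colors(items, target_ids={2}))
# 仅编号2的图形颜色变为(255, 0, 0)，其余仍为(0, 255, 0)
```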
图6是根据本申请一些实施例所示的用于标记电离室的探测区域的系统的模块图。如图6所示,用于标记电离室的探测区域的系统600的组件(例如,处理设备110)可以包括获取模块610、探测区域确定模块620、投影数据确定模块630、控制模块640。可选地,在一些实施例中,用于标记电离室的探测区域的系统的组件(例如,处理设备110)还可以包括候选电离室确定模块650以及目标电离室确定模块660。
获取模块610可以用于获取扫描设备(例如,扫描设备151)中一个或多个电离室的位置信息。在一些实施例中,电离室相对于扫描设备的探测器的位置是固定的,获取模块610可以获取扫描设备的探测器在检查室中的位置以及电离室相对于扫描设备的探测器的固定位置,来确定电离室在检查室中的位置信息。在一些实施例中,电离室相对于扫描设备的探测器的位置是可以调整的。例如,电离室可以安装在可移动的片盒内,该片盒和/或扫描设备的其他组件中可以安装有位置传感器。在此情况下,获取模块610可以获取位置传感器检测到的数据来确定电离室的位置。在一些实施例中,电离室在检查室中的位置信息可以是电离室在3D坐标系中的位置。例如,可以在整个检查室中建立3D坐标系,用于描述电离室和/或系统100的其他组件(例如探测器、投影设备152)的位置。关于获取模块610的更多信息可以参考本说明书的其余部分(例如,步骤310),此处不再赘述。
探测区域确定模块620可以用于基于所述一个或多个电离室的位置信息,确定所述一个或多个电离室中至少一个电离室的探测区域。在一些实施例中,电离室的探测区域与电离室的位置信息可以是相关的。例如,该探测区域的大小和形状可以与电离室的大小相关。在一些实施例中,该探测区域的尺寸(例如半径、边长、面积等)可以是预先设置在系统100中的。探测区域确定模块620可以基于某个电离室的位置信息和探测区域的尺寸来确定所述电离室的实际探测区域。关于探测区域确定模块620的更多信息可以参考本说明书的其余部分(例如,步骤320),此处不再赘述。
投影数据确定模块630可以用于确定投影设备的投影数据,所述投影数据包括对应于所述至少一个电离室的探测区域的图像数据。在一些实施例中,所述投影数据还包括对应于所述对象的待扫描的ROI的图像数据。关于投影数据确定模块630的更多信息可以参考本说明书的其余部分(例如,步骤330),此处不再赘述。
控制模块640可以用于控制所述投影设备152将所述投影数据投影到待扫描的对象上。该投影设备152可以是和/或包括能够投影图像数据的任何合适的设备。例如，投影设备152可以是阴极射线管（CRT）投影机、液晶显示器（LCD）投影机、数字光处理器（DLP）投影机、数码光路真空管（DLV）投影机或可以投影图像数据的其他设备。关于控制模块640的更多信息可以参考本说明书的其余部分（例如，步骤340），此处不再赘述。
候选电离室确定模块650可以用于确定所述至少一个电离室中是否包括一个或多个候选电离室。例如，候选电离室确定模块650可以从参考图像中识别出对应于ROI的第一区域。在一些实施例中，候选电离室确定模块650还可以用于基于第一区域和第二区域，确定至少一个电离室中是否包括一个或多个候选电离室。在一些实施例中，候选电离室确定模块650还可以用于，响应于确定所述至少一个电离室中不包括任何候选电离室，使终端设备（例如，终端130）生成提示信息。关于候选电离室确定模块650的更多信息可以参考本说明书的其余部分（例如，步骤420、步骤440、步骤450等），此处不再赘述。
目标电离室确定模块660可以确定一个或多个目标电离室。例如,响应于确定所述至少一个电离室中包括一个或多个候选电离室,目标电离室确定模块660可以从所述一个或多个候选电离室中选择一个或多个目标电离室,所述一个或多个目标电离室将会在扫描对象的过程中运行。在一些实施例中,目标电离室确定模块660可以在多个候选电离室中选择对象的ROI附近的一个或多个目标电离室。关于目标电离室确定模块660的更多信息可以参考本说明书的其余部分(例如,步骤460),此处不再赘述。
需要注意的是,以上对于用于标记电离室的探测区域的系统及其装置/模块的描述,仅为描述方便,并不能把本说明书限制在所举实施例范围之内。可以理解,对于本领域的技术人员来说,在了解该系统的原理后,可能在不背离这一原理的情况下,对各个装置/模块进行任意组合,或者构成子系统与其他装置/模块连接。例如,图6中披露的获取模块610、探测区域确定模块620、投影数据确定模块630、控制模块640可以是一个装置(例如处理设备110)中的不同模块,也可以是一个模块实现上述的两个或两个以上模块的功能。例如,探测区域确定模块620、投影数据确定模块630可以是两个模块,也可以是一个模块同时具有上述两个模块的功能。又例如,候选电离室确定模块650和目标电离室确定模块660可以被省略。以诸如此类的变形,均在本说明书的保护范围之内。
本说明书实施例还公开了一种计算机可读存储介质,该存储介质可以存储计算机指令,当计算机读取所述存储介质中的计算机指令后,计算机可以执行本申请提供的标记电离室的探测区域的方法。
本说明书实施例还公开了一种用于标记电离室的探测区域的装置,该装置包括用于标记电离室的探测区域的程序,该程序可以实现本申请提供的标记电离室的探测区域的方法。
本说明书实施例可能带来的有益效果包括但不限于:(1)通过投影设备将需要投影的投影数据(例如,电离室的探测区域的图像数据)投影到对象上,可以清楚有效地标记或表示出电离室的探测区域,克服了缺乏电离室探测区域标识或者探测区域标识被对象遮挡的障碍,可以提高扫描图像的质量,也可以减少操作者所需付出的时间和精力;(2)通过获取对象的参考图像,并对参考图像进行识别,可以自动确定所述对象的待扫描的感兴趣区域是否覆盖了所述一个或多个电离室中至少一个电离室的探测区域;并且,通过生成提示信息和/或控制电离室的移动,可以提高待扫描的对象或感兴趣区域覆盖电离室的探测区域的准确性,也可以减少操作者所需付出的时间和精力。需要说明的是,不同实施例可能产生的有益效果不同,在不同的实施例里,可能产生的有益效果可以是以上任意一种或几种的组合,也可以是其他任何可能获得的有益效果。
医学诊疗设备(如,X射线摄影设备、血管造影机等)的工作环境往往与操作者相互隔离,操作者在使用医学诊疗设备进行医学诊疗操作(如,影像采集)时,往往只依靠控制指令来控制其执行诊疗动作。通常情况下,医学诊疗过程中,操作者不能实时观察医学诊疗设备的具体运动情况,也无法获知医学诊疗设备上的运动部件的运动参数。因此,有必要提出一种医学诊疗设备的控制方法,增强操作者与医学诊疗设备之间的交互性。
图7是根据本申请一些实施例所示的一种医学诊疗设备的控制方法的流程图。在一些实施例中,流程700包括:
步骤710,获取医学诊疗设备的虚拟模型。在一些实施例中,步骤710可以由模型获取模块1110执行。
所述医学诊疗设备可以是执行医学诊疗任务的自动化设备。在一些实施例中，所述医学诊疗任务可以包括但不限于医学摄影任务、手术任务、康复治疗任务等等。在一些实施例中，医学诊疗设备可以包括X射线摄影系统、数字减影血管造影设备（DSA）等。在一些实施例中，所述医学诊疗设备包括至少一个可运动的第一部件。在一些实施例中，当所述医学诊疗设备为X射线摄影系统时，所述第一部件可以包括X射线摄影机架，即X射线摄影系统的机架。在一些实施例中，X射线摄影机架可以包括多个部位，所述多个部位可以包括但不限于球管、探测器、所述球管的支撑元件以及所述探测器的支撑元件中的一个或多个。
所述虚拟模型是由处理设备构建的医学诊疗设备(例如,X射线摄影系统)的虚拟外形结构。在一些实施例中,虚拟模型可以具有与医学诊疗设备的实体结构相同或相近的外观。在一些实施例中,虚拟模型可以通过显示设备进行可视化的显示。在一些实施例中,虚拟模型可以是三维的,也可以是二维的。在一些实施例中,虚拟模型可以包括与医学诊疗设备的第一部件相对应的第二部件。在一些实施例中,所述第一部件所在的设备坐标系与所述第二 部件所在的模型坐标系具有映射关系。所述设备坐标系是指依据医学诊疗设备所处的实际环境构建的坐标系,所述模型坐标系是指虚拟模型中构建的坐标系。在一些实施例中,所述映射关系可以是第一部件上的任意一个点在设备坐标系中的坐标与第二部件上相对应的点在模型坐标系中的坐标之间的对应关系。在一些实施例中,所述对应关系可以是坐标值相同。例如,第一部件上的点A在设备坐标系中的坐标为(10,10,10),相对应的,第二部件上相对应的点A’在模型坐标系中的坐标为(10,10,10),则点A与点A’具有对应关系。
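为便于理解上述点A与点A'的对应关系，下面给出一段极简的Python示意代码，展示设备坐标系与模型坐标系之间映射关系的一种最简形式（坐标值相同，或按固定比例缩放、平移）。其中的比例因子、偏移量与函数名均为假设，仅用于说明概念。

```python
def device_to_model(point_device, scale=1.0, offset=(0.0, 0.0, 0.0)):
    """将第一部件上某点在设备坐标系中的坐标映射为第二部件上对应点
    在模型坐标系中的坐标；scale=1.0、offset=0 时即为坐标值相同的对应关系。"""
    return tuple(p * scale + o for p, o in zip(point_device, offset))

point_a = (10.0, 10.0, 10.0)            # 设备坐标系中第一部件上的点A
point_a_prime = device_to_model(point_a)
print(point_a_prime)                    # (10.0, 10.0, 10.0)，即模型坐标系中的点A'
```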
在一些实施例中,虚拟模型可以通过对医学诊疗设备的数据进行建模得到。在一些实施例中,用于建模的数据可以是医学诊疗设备的实体结构的几何数据(例如,各端点的几何坐标、各边线的长度等)。例如,处理设备可以基于医学诊疗设备的实体结构的几何数据按照一定比例进行缩放后,在虚拟建模环境中构建医学诊疗设备的虚拟模型。
在一些实施例中,虚拟模型可以基于医学诊疗设备的实体结构的图像获得。示例性的,在一些实施例中,处理设备可以获取拍摄装置拍摄到的医学诊疗设备的图像或影像。在一些实施例中,处理设备可以通过预设算法提取所述图像或影像的特征点。所述特征点可以是能够表达医学诊疗设备的空间分布和表面特性的点。在一些实施例中,所述预设算法可以包括但不限于Harris算法、Sift算法、SURF算法等。在一些实施例中,处理设备可以提取大量的特征点,并形成点云(Point Cloud)。在一些实施例中,处理设备可以对点云进行重建获得医学诊疗设备的虚拟模型。在一些实施例中,重建过程可以基于迭代最近点(Iterative Closest Point,ICP)算法实现。
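下面用一段极简的Python代码示意迭代最近点（ICP）配准步骤中的核心计算（寻找最近点对并求解刚性变换）。它只是对ICP思想的一个演示性草案，省略了特征点提取、点云去噪与网格重建等环节；所有函数与参数均为假设，并非本申请限定的重建流程。

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_align(source, target, iterations=20):
    """最简化的ICP示意：将source点云逐步刚性配准到target点云。
    source/target: 形状为(N, 3)的点云数组；返回配准后的source点云。"""
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iterations):
        _, idx = tree.query(src)               # 为每个源点寻找最近的目标点
        matched = target[idx]
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)  # 去中心化后的协方差矩阵
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T                         # 最优旋转（Kabsch解）
        if np.linalg.det(R) < 0:               # 排除反射解
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s                    # 最优平移
        src = (R @ src.T).T + t
    return src
```

实际重建中，上述配准结果还需结合表面重建算法生成可显示的虚拟模型外形，此处不再展开。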
在一些实施例中,虚拟模型可以通过显示界面进行可视化的显示。在一些实施例中,显示界面可以是显示设备上的可视化界面,显示设备可以包括但不限于电脑、移动终端、公用显示屏或者投影屏幕等等。
步骤720,获取所述医学诊疗设备的第一部件和所述虚拟模型的第二部件其中之一者的当前运动信息。在一些实施例中,步骤720可以由运动信息获取模块1120执行。
所述当前运动信息是指第一部件和/或第二部件执行当前运动时产生的信息。在一些实施例中,当前运动信息可以包括第一部件和/或第二部件中任意一个或多个部分的运动信息。在一些实施例中,当前运动信息可以包括但不限于位置、时间或速度信息,诸如,包括但不限于起始位置、目标位置、运动时间以及运动速度等中的一个或多个。
在一些实施例中,医学诊疗设备的第一部件收到将执行所述当前运动的运动控制信息,第一部件可以基于该运动控制信息执行当前运动。在一些实施例中,该运动控制信息包括第一部件的自动运动的控制指令或者对第一部件的手动操作。例如,第一部件可以从医疗任务中获取自动运动的控制指令,并基于该控制指令自动执行相应的医疗任务。又例如,医学诊 疗设备的操作者可以手动操作第一部件进行运动。在一些实施例中,当第一部件执行当前运动时,处理设备可以获取第一部件的当前运动信息。
在一些实施例中,虚拟模型的第二部件收到将执行所述当前运动的运动控制信息,第二部件可以基于该运动控制信息执行当前运动。在一些实施例中,该运动控制信息可以由鼠标、键盘、语音、手势或者通过触控的方式输入。例如,系统中可以包括触控屏,操作者可以在触控屏上点击或拖动进行输入。又例如,系统中可以包括麦克风,操作者可以通过使用麦克风进行语音输入。又例如,系统中可以包括摄像头,所述摄像头可以获取操作者的手势作为输入。又例如,系统中可以包括外接鼠标,操作者可以通过鼠标进行输入。又例如,系统中可以包括外接键盘,操作者可以通过键盘进行文字输入。示例性的,操作者可以通过触控屏拖动第二部件的其中一个部分从而产生运动控制信息,第二部件基于该运动控制信息执行当前运动。在一些实施例中,当第二部件执行当前运动时,处理设备可以获取第二部件的当前运动信息。
在一些实施例中，第二部件在当前运动下的实时位置信息也在所述显示界面上显示并随着运动更新。例如，显示界面上可以显示第二部件的其中一个部分的高度信息。关于显示界面的显示内容具体可以参见图8B的相关描述，此处不再赘述。
步骤730,所述医学诊疗设备的第一部件和虚拟模型的第二部件的其中另一者,与获取运动指令的所述医学诊疗设备的第一部件和虚拟模型的第二部件的其中之一者执行相同的运动。在一些实施例中,步骤730可以由运动执行模块1130执行。
运动指令是指医学诊疗设备的第一部件或虚拟模型的第二部件接收到的、将执行所述当前运动的运动控制信息。
在一些实施例中,当医学诊疗设备的第一部件接收到运动指令执行当前运动时,虚拟模型的第二部件将执行相同的运动。在一些实施例中,当虚拟模型的第二部件接收到运动指令执行当前运动时,医学诊疗设备的第一部件也将执行相同的运动。在一些实施例中,执行相同的运动的过程可以基于所述第一部件所在的设备坐标系与所述第二部件所在的模型坐标系具有映射关系实现。例如,当第一部件执行当前运动时,处理设备可以获取第一部件的当前运动信息,并将第一部件的当前运动信息中的位置信息(如,设备坐标系中的坐标)映射至模型坐标系,进而基于映射后的位置信息(如,模型坐标系中的坐标)控制第二部件运动至该位置。
在一些实施例中,所述相同的运动可以是同步运动。在一些实施例中,处理设备可以从执行当前运动的一者的当前运动信息中获取位置和/或速度信息,诸如,起始位置、目标位置和/或速度信息,并基于这些信息控制另一者进行运动,以此来实现同步运动。在一些实施例中,处理设备可以从执行当前运动的一者的当前运动信息中按照预设的时间间隔进行采样, 获取执行当前运动的一者的采样位置,并基于采样位置控制另一者运动至相对应的位置。当预设的时间间隔足够小或者小于阈值(例如,0.1秒、0.01秒或0.001秒等)时,二者的运动的同步性更佳。
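以下Python片段示意"按照预设时间间隔对执行当前运动的一者进行位置采样，并控制另一者运动至对应位置"的同步过程。其中 get_position 与 move_to 仅为假设的接口名，对应于实际系统中设备或虚拟模型的位置读取与运动控制接口；采样间隔等参数也仅为示例。

```python
import time

def synchronize(leader, follower, interval=0.01, duration=1.0, to_model=lambda p: p):
    """leader/follower: 假设提供 get_position() 与 move_to(pos) 接口的对象；
    interval: 预设采样时间间隔（秒），足够小（如0.01秒）时两者运动的同步性更佳；
    to_model: 设备坐标到模型坐标的映射（反向同步时换用其逆映射）。"""
    t_end = time.time() + duration
    while time.time() < t_end:
        pos = leader.get_position()         # 采样执行当前运动的一者的位置
        follower.move_to(to_model(pos))     # 控制另一者运动至映射后的对应位置
        time.sleep(interval)
```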
在一些实施例中,所述相同的运动也可以不是同步运动,一者的当前运动和另一者的相同的运动之间可以存在时间间隔。例如,时间间隔可以是5秒或者更长。在一些实施例中,处理设备可以基于两者的实时位置来实现相同的运动。在一些实施例中,处理设备可以仅基于执行当前运动的一者的起始位置和目标位置来生成相同的运动。在一些实施例中,处理设备也可以基于执行当前运动的一者的起始位置、目标位置以及运动时间来生成相同的运动。在一些实施例中,处理设备还可以基于执行当前运动的一者的起始位置、目标位置以及运动速度来生成相同的运动。
在一些实施例中,当第一部件的其中一个部位执行当前运动时,所述显示界面上突出显示与所述第一部件的其中一个部位相对应的所述第二部件的一部分的运动轨迹。所述运动轨迹即为第二部件的一部分执行与第一部件的其中一个部位相同的运动时所产生的轨迹。
图8A是根据本申请一些实施例所示的医学诊疗设备的结构示意图;图8B是根据本申请一些实施例所示的虚拟模型的结构示意图。
如图8A所示，医学诊疗设备可以是X射线摄影机架。X射线摄影机架可以包括探测器组件801、导轨802、射线源组件803以及床板804，探测器组件801和射线源组件803构成摄影模块，其中射线源组件803通常包括高压发生器、球管和限束元件。其中，探测器组件801可以在竖立的导轨802上运动，射线源组件803可以通过支架在悬吊的、横向的导轨802上滑动，且可以相对于支架在竖直方向上移动或旋转，床板804可以相对于地面升降。
如图8B所示,显示界面810可以显示X射线摄影机架的虚拟模型,X射线摄影机架的虚拟模型具有与X射线摄影机架相近的结构。X射线摄影机架的虚拟模型可以包括探测器组件模型811、导轨模型812、射线源组件模型813以及床板模型814。其中,探测器组件模型811、导轨模型812、射线源组件模型813以及床板模型814在虚拟模型中的位置与探测器组件801、导轨802、射线源组件803以及床板804在X射线摄影机架中的位置一一对应。在一些实施例中,当X射线摄影机架的其中任意一个部位运动时,X射线摄影机架的虚拟模型也执行相同的运动。
在一些实施例中，显示界面810中还可以显示X射线摄影机架的虚拟模型中各部分的参数，该参数即可反映出X射线摄影机架的各部位的当前运动信息。例如，显示界面810中可以显示射线源组件的支架所在水平位置540mm，该参数即可表示X射线摄影机架的射线源组件的支架的当前水平位置为540mm。又例如，显示界面810中可以显示方向指示，用于指示出虚拟模型中各部分的运动方向，该运动方向即为X射线摄影机架的各部件的当前运动方向。
图9是根据本申请一些实施例所示的另一种医学诊疗设备的控制方法的流程图。在一些实施例中,流程900包括:
步骤910,基于X射线摄影机架的实体结构获取所述X射线摄影机架的模型,并基于所述X射线摄影机架的实体结构的运动轨迹,采用所述X射线摄影机架的模型进行相对应的模型运动轨迹的模拟。
在一些实施例中，步骤910可以由运动模拟模块1210执行。
X射线摄影机架是指X射线摄影设备的外形结构。在一些实施例中,所述X射线摄影机架可以包括多个部件,所述多个部件可以包括但不限于X射线摄影设备的基座、支架、床板、导轨、操控台、机械臂、显示模块、球管、探测器、球管的支撑元件以及探测器的支撑元件等中的一个或多个。在一些实施例中,所述多个部件可以包括一个或多个驱动装置,用于驱动所述多个部位中的一个或多个进行运动。
X射线摄影机架的实体结构是指X射线摄影机架的真实外形结构。X射线摄影机架的模型(即虚拟模型)是由处理设备构建的X射线摄影机架的虚拟外形结构。在一些实施例中,X射线摄影机架的模型可以具有与X射线摄影机架的实体结构相同或相近的外观。在一些实施例中,X射线摄影机架的模型可以通过显示设备进行可视化的显示。在一些实施例中,X射线摄影机架的模型可以是三维的,也可以是二维的。在一些实施例中,X射线摄影机架的模型可以包括与X射线摄影机架的实体结构的一个或多个部位相对应的一个或多个模型部分。在一些实施例中,所述一个或多个模型部分可以在X射线摄影机架的模型中相对于模型的其它部分运动。
在一些实施例中，X射线摄影机架的模型可以参照本申请图7中描述的虚拟模型的获得方式类似地得到，此处不再赘述。
在一些实施例中,处理设备获取X射线摄影机架的模型后,可以采用X射线摄影机架的模型对X射线摄影机架的实体结构的运动轨迹进行模拟。所述X射线摄影机架的实体结构的运动轨迹是指X射线摄影机架任意一个部位从当前位置运动至目标位置时所产生的轨迹。所述模拟是指通过X射线摄影机架的模型在虚拟环境中复现X射线摄影机架的实体结构的运动轨迹的过程。在一些实施例中,处理设备可以建立X射线摄影机架的实体结构的真实坐标与X射线摄影机架的模型的虚拟坐标的对应关系,并基于所述对应关系实现所述模拟。在一些实施例中,所述模拟的具体方法可以进一步参见步骤920、步骤930的相关说明。
步骤920,获取当前X射线摄影机架的实体结构的运动指令,所述运动指令包括当前X射线摄影机架的其中一个部件需要运动到达的目标位置以及相关的运动时间信息。
在一些实施例中,步骤920可以由指令获取模块1220执行。
当前X射线摄影机架是指当前正在执行摄影操作的X射线摄影机架。在一些实施例中，所述运动指令可以是控制当前X射线摄影机架执行摄影操作的指令。在一些实施例中，运动指令可以根据当前X射线摄影机架的医疗任务确定。例如，当前医疗任务为腰椎拍摄，则运动指令包括将X射线摄影机架的摄影模块移动至患者的腰椎附近。在一些实施例中，运动指令包括当前X射线摄影机架的至少其中一个部件需要运动到达的目标位置。例如，运动指令可以是控制摄影模块运动到达摄影区域。又例如，运动指令也可以是控制摄影模块（例如，球管的几何中心）运动到达指定的坐标点（405，100，405）。
在一些实施例中,前述的相关运动时间信息可以是基于历史数据确定所述当前X射线摄影机架的其中一个部位到达所述目标位置需要的运动时间。处理设备可以获取当前X射线摄影机架的历史数据,所述历史数据包括当前X射线摄影机架接收历史运动指令时,当前X射线摄影机架的其中一个部件到达所述目标位置的历史运动时间,例如:2秒或者0.5秒等,该时间与当前X射线摄影机架所处的位置以及历史运动指令相关。例如,历史运动指令可以包括当前X射线摄影机架的运动速度,历史运动时间可以基于当前X射线摄影机架所处的位置、目标位置以及运动速度确定。在一些实施例中,处理设备可以基于所述历史运动时间确定当前X射线摄影机架的其中一个部件到达所述目标位置需要的运动时间。在一些实施例中,所述部件可以包括但不限于基座、操控台、机械臂以及摄影模块。在一些实施例中,处理设备也可以基于所述历史运动时间确定当前X射线摄影机架的多个部件到达所述目标位置需要的运动时间。例如,处理设备可以计算多个历史运动时间的平均值确定所述运动时间。通过确定运动时间对X射线摄影机架的运动轨迹进行模拟,可以使X射线摄影机架的模型与X射线摄影机架的实体结构的运动轨迹同步性更好,因此能够使模拟更加平滑。可以理解,在一些实施例中,前述的相关运动时间信息可以包括所述当前X射线摄影机架的其中一个部位到达所述目标位置途径的多个位置相关的多个实时时间点。
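作为示意，下面的Python片段演示"基于历史运动时间确定当前部件到达目标位置所需运动时间"的一种最简方式（取平均值）。历史数据的来源、筛选条件与默认值均为假设，仅用于说明上文所述的计算思路。

```python
def estimate_motion_time(history, default=1.0):
    """history: 同一部件、相近起止位置条件下的历史运动时间列表（秒），例如[2.0, 0.5, 1.2]；
    返回多个历史运动时间的平均值作为本次运动时间的估计；
    历史数据为空时返回假设的默认值 default。"""
    return sum(history) / len(history) if history else default

print(estimate_motion_time([2.0, 0.5, 1.2]))  # 约1.23秒
```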
步骤930,所述X射线摄影机架的实体结构基于所述运动指令到达目标位置,所述X射线摄影机架的模型基于所述运动指令,基于运动时间信息同步进行模型运动轨迹的模拟。
在一些实施例中,步骤930可以由模拟控制模块1230执行。
在一些实施例中,处理设备可以将所述运动指令输入到X射线摄影机架的一个或多个部位的驱动装置,所述驱动装置可以基于所述运动指令驱动X射线摄影机架的一个或多个部位运动至一个或多个目标位置。在此过程中,处理设备可以将所述运动指令同步输入到X 射线摄影机架的模型中。在一些实施例中,处理设备可以将所述运动指令中的目标位置映射到摄影机架的模型中,以确定所述目标位置在模型中的具体位置。例如,处理设备可以基于所述目标位置的真实坐标与所述X射线摄影机架的模型中的虚拟坐标的对应关系确定所述目标位置在X射线摄影机架的模型中的具体位置。在一些实施例中,处理设备可以控制X射线摄影机架的模型中的一个或多个部分运动至所述具体位置,该过程所形成的轨迹即为模型运动轨迹。在一些实施例中,处理设备可以将确定的X射线摄影机架的实体结构的运动时间设定为X射线摄影机架的模型完成所述模型运动轨迹的时间,以使得X射线摄影机架的实体结构的运动轨迹与X射线摄影机架的模型的运动轨迹能够同时完成,提高模拟的实时准确性。如前所述,也可以通过在多个途径位置处跟踪机架和模型之间时间点的实时性来完成。
在一些实施例中,所述模拟还可以通过以下方式实现:在X射线摄影机架的机房内设置光学监控装置,用于监控X射线摄影机架的运动。例如,可以监控X射线摄影机架上的监控点相对于参考点的位置变化关系。所述监控点可以是X射线摄影机架上一个或多个部件上的端点,所述一个部件上可以有至少一个监控点。所述参考点可以是X射线摄影机架或机房中的某一个固定点。处理设备可以将光学监控装置监控到的X射线摄影机架的运动映射为X射线摄影机架的模型的动画,实现X射线摄影机架的运动轨迹的模拟。例如,可以基于监控点和参考点的位置变化关系进行映射。在该实施例中,需要将监控装置监控到的X射线摄影机架的运动映射为模型的动画后,才能实现控制模型模拟运动轨迹的效果。
步骤940,将所述模型运动轨迹的模拟显示在显示设备上。
在一些实施例中,步骤940可以由显示模块1240执行。
在一些实施例中,显示设备可以包括但不限于电脑显示屏、手机显示屏、投影显示屏以及公用显示屏等等。在一些实施例中,显示设备可以位于X射线摄影机架的机房内部,也可以位于X射线摄影机架的机房外部。在一些实施例中,显示设备可以是本地的显示设备,也可以是远程的显示设备。例如,处理设备可以通过网络将模型运动轨迹发送至远程的显示设备进行显示。
在一些实施例中,显示设备还可以显示模型各部分的参数变化。在一些实施例中,所述参数可以包括但不限于高度、横向移动距离、纵向移动距离、位置坐标、设备型号等等。例如,显示设备可以在显示模型其中一部分的运动轨迹时,同步显示该部分的空间坐标变化。
在一些实施例中,显示设备可以包括多个显示区域。在一些实施例中,显示设备可以在多个显示区域的其中一个显示区域对模型的一部分进行突出显示。在一些实施例中,当X射线摄影机架的实体结构的其中一个部件运动时,显示设备可以在其中一个显示区域上突出显示与该部件相对应的X射线摄影机架的模型的一部分的运动轨迹。例如,显示设备可以包 括一个用于显示模型整体的主显示区域,以及多个用于突出显示模型的一部分的显示区域。在一些实施例中,显示设备可以在多个显示区域的其中一个显示区域显示可选视角,操作者可以通过选择可选视角使显示设备从不同视角展示模型运动轨迹。
在一些实施例中,显示设备还可以接收操作者的交互数据。所述交互数据是操作者输入的,用于实现操作者与X射线摄影机架的实体结构及其模型之间的信息交换的指令。例如,操作者可以在显示设备中输入指令对X射线摄影机架的模型进行显示控制。在一些实施例中,显示设备可以包括触控屏,操作者可以在触控屏上操作输入交互数据。例如,操作者可以在触控屏上点击相应的列表或选项输入交互数据。又例如,操作者可以在触控屏上对控制X射线摄影机架的模型的显示进行放大或缩小。又例如,操作者可以在触控屏上拖动X射线摄影机架的模型的其中一个部分输入交互数据。在一些实施例中,处理设备可以基于用户在显示设备上输入的交互数据生成运动指令,并通过该运动指令控制X射线摄影机架的实体结构进行运动。例如,显示设备可以在多个显示区域的其中一个显示区域显示可选的医疗任务列表,操作者可以通过点击医疗任务列表中的医疗任务为X射线摄影机架生成相应的运动指令,进而控制X射线摄影机架运动。关于基于用户在显示设备上输入的交互数据控制X射线摄影机架的实体结构进行运动可以进一步参见图10的相关描述。
图10是根据本申请一些实施例所示的另一种医学诊疗设备的控制方法的流程图。在一些实施例中,流程1000包括:
步骤1010,通过所述显示设备获取交互数据。
所述交互数据是操作者输入的,用于实现操作者与X射线摄影机架的实体结构及其模型之间的信息交换的指令。在一些实施例中,操作者可以通过多种方式在显示设备中输入交互数据,所述多种方式包括但不限于触控输入、语音输入、图像识别输入以及外接设备输入。例如,显示设备可以包括触控屏,操作者可以在触控屏上点击或拖动进行输入。又例如,显示设备可以包括麦克风,操作者可以通过使用麦克风进行语音输入。又例如,显示设备可以包括摄像头,所述摄像头可以获取操作者的手势作为输入。又例如,显示设备可以包括外接鼠标,操作者可以通过鼠标进行输入。又例如,显示设备可以包括外接键盘,操作者可以通过键盘进行文字输入。
步骤1020,基于所述交互数据控制所述X射线摄影机架的实体结构进行运动。
在一些实施例中，所述交互数据可以包括改变X射线摄影机架的模型的显示状态的指令。例如，操作者可以基于交互数据切换X射线摄影机架的模型的显示视角。又例如，操作者可以基于交互数据对模型的显示进行放大或缩小。
在一些实施例中,所述交互数据可以包括改变X射线摄影机架的模型的运动状态的 指令。例如,操作者可以通过使模型暂停运动或开始运动产生交互数据。又例如,操作者可以通过拖动模型的其中一个部分进行运动产生交互数据。在一些实施例中,处理设备可以基于这一类交互数据(改变X射线摄影机架的模型的运动状态的指令)生成相应的运动指令。生成的运动指令可以用于控制X射线摄影机架的实体结构执行与X射线摄影机架的模型相对应的运动。例如,当操作者在显示设备中将X射线摄影机架的模型的运动暂停时,对应地,X射线摄影机架的实体结构也暂停运动。在一些实施例中,当操作者在显示设备中拖动X射线摄影机架的模型的其中一个部分运动时,处理设备可以生成相对应的运动指令,基于该运动指令,X射线摄影机架的实体结构中与该部分相对应的部位也随着拖动轨迹运动。在一些实施例中,该运动指令可以基于如下方式生成:处理设备可以获取模型拖动轨迹上的多个采样点的坐标,并基于采样点的坐标确定X射线摄影机架的实体结构的多个目标位置及其顺序,进而生成运动指令。
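下面的Python片段示意"从模型拖动轨迹上获取多个采样点坐标，映射为实体结构的多个目标位置并生成运动指令"的流程。采样步长、坐标映射函数与指令的数据结构均为假设，仅用于说明上述生成方式，并非本申请限定的实现。

```python
def build_motion_command(drag_trace, step=5, model_to_device=lambda p: p):
    """drag_trace: 模型拖动轨迹上的坐标点序列（模型坐标系）；
    step: 采样间隔（每隔step个点取一个采样点，假设值）；
    model_to_device: 模型坐标到设备坐标的映射。
    返回按顺序排列的目标位置列表，作为实体结构的运动指令。"""
    sampled = drag_trace[::step] or drag_trace[-1:]
    targets = [model_to_device(p) for p in sampled]
    if drag_trace and targets[-1] != model_to_device(drag_trace[-1]):
        targets.append(model_to_device(drag_trace[-1]))   # 确保包含拖动终点
    return {"targets": targets}

trace = [(0, 0), (1, 0), (2, 1), (3, 1), (4, 2), (5, 3)]
print(build_motion_command(trace, step=2))
# {'targets': [(0, 0), (2, 1), (4, 2), (5, 3)]}
```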
图11是根据本申请一些实施例所示的一种医学诊疗设备的控制系统的模块示意图。
如图11所示,该医学诊疗设备的控制系统1100可以包括模型获取模块1110、运动信息获取模块1120以及运动执行模块1130。
在一些实施例中,模型获取模块1110可以用于获取医学诊疗设备的虚拟模型,其中,所述医学诊疗设备包括至少一个可运动的第一部件,相应地,所述虚拟模型包括模拟所述第一部件的第二部件,所述第一部件所在的设备坐标系与所述第二部件所在的模型坐标系具有映射关系。
在一些实施例中,运动信息获取模块1120可以用于获取所述医学诊疗设备的第一部件和所述虚拟模型的第二部件其中之一者的当前运动信息。在一些实施例中,运动信息获取模块1120还可以用于获取所述医学诊疗设备的第一部件的当前运动信息;其中,在获取所述医学诊疗设备的第一部件的当前运动信息前,所述第一部件收到将执行所述当前运动的运动控制信息。在一些实施例中,运动信息获取模块1120还可以用于获取所述虚拟模型的第二部件的当前运动信息;其中,在获取所述虚拟模型的第二部件的当前运动信息前,所述第二部件收到将执行所述当前运动的运动控制信息。
在一些实施例中,运动执行模块1130可以用于所述医学诊疗设备的第一部件和虚拟模型的第二部件的其中另一者,与获取运动指令的所述医学诊疗设备的第一部件和虚拟模型的第二部件的其中之一者执行相同的运动。
图12是根据本申请一些实施例所示的另一种医学诊疗设备的控制系统的模块示意图。
如图12所示,该医学诊疗设备的控制系统1200可以包括运动模拟模块1210、指令获取模块1220、模拟控制模块1230以及显示模块1240。
在一些实施例中,运动模拟模块1210可以用于基于X射线摄影机架的实体结构获取所述X射线摄影机架的模型,并基于所述X射线摄影机架的实体结构的运动轨迹,采用所述X射线摄影机架的模型进行相对应的模型运动轨迹的模拟。
在一些实施例中,指令获取模块1220可以用于获取当前X射线摄影机架的实体结构的运动指令,所述运动指令包括当前X射线摄影机架的其中一个部位需要运动到达的目标位置,基于历史数据确定所述当前X射线摄影机架的其中一个部位到达所述目标位置需要的运动时间。
在一些实施例中,模拟控制模块1230可以用于控制所述X射线摄影机架的实体结构基于所述运动指令到达目标位置,所述X射线摄影机架的模型基于所述运动指令,在所述运动时间内同步进行模型运动轨迹的模拟。
在一些实施例中,显示模块1240可以用于将所述模型运动轨迹的模拟显示在显示设备上。在一些实施例中,显示模块1240还可以用于在所述显示设备上突出显示与所述X射线摄影机架的其中一个部位相对应的所述X射线摄影机架的模型的一部分的运动轨迹。
在一些实施例中,系统1200还可以包括数据获取模块,可以用于通过所述显示设备获取交互数据。在一些实施例中,获取模块还可以用于在所述触控屏上通过触摸控制所述X射线摄影机架的模型,生成所述交互数据。
在一些实施例中,系统1200还可以包括运动控制模块,用于基于所述交互数据控制所述当前X射线摄影机架的实体结构进行运动。
本申请实施例还提供一种医学诊疗设备的控制装置,包括处理器,所述处理器用于执行计算机指令,以实现本申请上述任意一个或多个实施例中的医学诊疗设备的控制方法。
本申请实施例可能带来的有益效果包括但不限于:(1)通过显示设备对医学诊疗设备的运动进行同步显示,便于操作者观察以及监控医学诊疗设备的运动情况及参数;(2)可以同步展示医学诊疗设备的诊疗进程,便于操作者准备或进行后续操作;(3)通过确定运动时间对X射线摄影机架的运动轨迹进行模拟,可以使X射线摄影机架的模型与X射线摄影机架的实体结构的运动轨迹同步,并能够使模拟更加平滑;(4)通过控制显示设备中的模型的运动来控制摄影设备的运动,可以进一步提高医学诊疗设备与用户的交互性,便于操作者使用。需要说明的是,不同实施例可能产生的有益效果不同,在不同的实施例里,可能产生的有益效果可以是以上任意一种或几种的组合,也可以是其他任何可能获得的有益效果。
随着科技水平的发展，人类对于医疗方面的需求越来越大。同时，对于第一次扫描即获得最佳扫描效果的要求也越来越高。目前医疗设备普遍存在在拍摄之前无法知道患者病情的问题，医疗成像设备的扫描参数主要依赖医生的经验或者通过额外进行的预扫描获得。通过依赖医生的经验来设置扫描参数时，存在扫描参数调整不准确，需要多次调整的情况，这样病人可能会接收过多的射线剂量，对病人身体造成影响。通过额外进行的预扫描来获取扫描参数，这样病人同样需多经过一次甚至多次预扫描才可确定扫描参数，同时病人在预扫描过程中也会接收射线剂量，同样会造成病人接收过多射线剂量的情况。这样会使医生无法及时了解病人情况，无法及时对其进行诊断，诊断效率低下，且病人会接收过多剂量，对病人身体造成影响。因此，本申请实施例提供了一种医疗成像设备参数确定方法。
图13为根据本申请一些实施例所示的医疗成像设备参数确定方法的流程图,本实施例可适用于基于目标对象的体征信息来确定医疗成像设备的扫描参数和图像处理参数的情况,该方法1300可以由医疗成像设备参数确定装置来执行,该医疗成像设备参数确定装置可以由软件和/或硬件来实现,该医疗成像设备参数确定装置可以配置在计算设备上,具体包括如下步骤:
S1310、获取目标对象的体征信息。
示例性的,目标对象可以是要进行图像扫描的对象,例如,可以是人或动物等。
体征信息可以是目标对象的基本体征信息,例如,可以是但不限于是目标对象的体温信息、血压信息、血脂信息、呼吸信息、脉搏信息、眼部体征信息、手部体征信息、腿部体征信息或头部体征信息等。
需要说明的是，目标对象的体征信息可以是根据目标对象的一些表现，基于专业仪器获取的。可选的，获取目标对象的体征信息可以是通过传感器来实现的。示例性的，这里的传感器可以是摄像头、温度传感器、心跳传感器或呼吸传感器等。例如，一患者出现发热的症状，则医师基于患者的发热症状，基于温度传感器来获取该患者的体温信息。这样可基于对应的传感器自动获取目标对象的体征信息，以便后续基于该体征信息，确定医疗成像设备的扫描参数及/或图像处理参数。
S1320、对体征信息进行分析,确定异常体征信息。
示例性的,异常体征信息可以是与标准体征信息不符的体征信息,或者可以是与目标对象的平常的体征信息不符的体征信息等。
可选的，所述对体征信息进行分析，确定异常体征信息，具体可以是：将体征信息输入至被训练的体征识别模型中，获取体征识别模型输出的异常体征信息；或者，将体征信息与常态体征参数进行比对，确定异常体征信息。
示例性的，体征识别模型可以是对输入的体征信息进行分析，输出异常体征信息的模型。例如，可以是支持向量机、全卷积神经网络（Fully Convolutional Networks，FCN）、U-net神经网络、二维卷积神经网络（CNN-2d）、特征金字塔网络（Feature Pyramid Networks，FPN）等。该体征识别模型是基于历史体征信息进行训练而得到的。
将获取的目标对象的体征信息输入至被训练的体征识别模型中,该模型即可输出目标对象的异常体征信息。以此来得到目标对象的异常体征信息。
常态体征信息可以是目标对象的平常状态的体征信息。例如,一目标对象,平常情况下,该目标对象的体温在36-36.4°,若经测量,该目标对象此时的体温为37°,则确定此时该目标对象的体温这个体征信息为异常体征信息。
可选的,所述对体征信息进行分析,确定异常体征信息,还可以具体是:将体征信息与目标对象的类型对应的标准体征信息进行比对,确定不符合标准体征信息的异常体征信息。
示例性的,目标对象的类型可以是根据目标对象的人体基础信息和/或历史病历确定,这里的人体基础信息至少包括:性别、年龄、体重、身高等信息。
标准体征信息可以是国家确定的不同性别、年龄、体重、身高所应对应的体征信息。例如,年龄为45-55岁、身高在170-175厘米的男性,体温为36-36.4°。
例如，有一目标对象是年龄为50岁、身高为172厘米的男性，属于年龄为45-55岁、身高在170-175厘米的男性这一类型，针对体温这个体征信息来说，标准体征信息为36-36.4°。若经测量该目标对象的体温为37°，则体温这个体征信息为异常体征信息。
历史病历可以是该目标对象的历史病历信息,例如,该目标对象常年患有高血压等疾病。这样可能会引起该目标对象的体征信息与正常的标准体征信息不符,但这并不代表该目标对象的该项体征信息异常。例如,年龄为45-55岁、身高在170-175厘米的男性,针对体温这个体征信息来说,标准体征信息为36-36.4°。若某患者是年龄为50岁、身高在172厘米的患有高血压的男性,但经测量该患者的体温为35.7°。但是对于高血压患者来说,该体温是正常的,因此,体温这个体征信息不属于异常体征信息。
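下面用一段简化的Python代码示意"将体征信息与目标对象类型对应的标准体征信息进行比对，并结合历史病历剔除可被解释的读数"的判断逻辑。标准范围数值沿用上文示例，字典结构、函数名以及"高血压患者的可接受范围"均为假设，仅用于说明思路。

```python
STANDARD = {("男", "45-55岁", "170-175cm"): {"体温": (36.0, 36.4)}}   # 示例的标准体征信息
HISTORY_TOLERANCE = {"高血压": {"体温": (35.5, 36.4)}}                # 假设：高血压患者的可接受体温范围

def find_abnormal_signs(subject_type, signs, history=()):
    """signs: {体征名: 测量值}；history: 历史病历中记录的疾病列表。
    返回不符合标准体征信息、且无法用历史病历解释的异常体征信息。"""
    abnormal = {}
    for name, value in signs.items():
        low, high = STANDARD[subject_type].get(name, (float("-inf"), float("inf")))
        if low <= value <= high:
            continue
        # 若历史病历对应的可接受范围能够解释该读数，则不视为异常体征信息
        if any(name in HISTORY_TOLERANCE.get(d, {}) and
               HISTORY_TOLERANCE[d][name][0] <= value <= HISTORY_TOLERANCE[d][name][1]
               for d in history):
            continue
        abnormal[name] = value
    return abnormal

print(find_abnormal_signs(("男", "45-55岁", "170-175cm"), {"体温": 37.0}))               # {'体温': 37.0}
print(find_abnormal_signs(("男", "45-55岁", "170-175cm"), {"体温": 35.7}, ("高血压",)))  # {}
```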
经上述三种方式来确定目标对象的体征信息中的异常体征信息，可快速确定目标对象的异常体征信息，提高了诊断效率。
当获取到目标对象的体征信息后,可对目标对象的体征信息进行分析,以此来确定目标对象的异常体征信息。这样可快速确定目标对象的异常体征信息,以便后续基于异常体征信息,来对医疗成像设备的扫描参数及/或图像处理参数进行针对性的调整。
S1330、基于目标对象的异常体征信息,适应性地确定医疗成像设备的扫描参数及/或图像处理参数。
示例性的，扫描参数可以是目标对象进行图像扫描的扫描参数，例如，可以是目标对象进行磁共振扫描的扫描参数或者电子计算机断层扫描的扫描参数等。具体的例如可以是：扫描电压、扫描电流、扫描视野、扫描层数或者扫描层厚等。比如，若目标对象体温过高，则可降低电压参数，提高电流参数等。
图像处理参数可以是对目标对象的感兴趣区域的扫描算法进行处理的参数。这里的感兴趣区域可以是经异常体征信息确定的目标对象可能具有异常的区域。例如,若目标对象的体温过高,则目标对象的肺部可能具有异常,则肺部为目标对象的感兴趣区域。
图像处理参数可以是对目标对象的感兴趣区域的扫描算法进行处理的参数，例如，目标对象要进行磁共振扫描，确定异常体征信息为体温异常，例如体温过高，则可有针对性的对肺部图像进行处理，比如，可以是调整肺部的图像处理参数，比如是调整肺部软组织与骨组织的对比度和均衡度等，具体的可以是降低均衡强度，增加对比度的增强强度，使目标对象的肺部纹理更加清晰，这样可得到更为清晰、更有针对性的扫描图像。
可选的,这里的医疗成像设备可以但不限于包括:X射线摄影设备、MR设备、CT设备、PET设备、超声设备或DSA设备,或多模态成像设备。
具体的,当异常体征信息为体温时,例如,可以是体温过高,则在扫描参数上可以降低扫描电压、提高扫描电流;在图像处理参数上,可以降低图像均衡度、增加图像对比度。
当异常体征信息为呼吸信息时，例如，可以是呼吸急促，则在扫描参数上可以降低扫描电压、提高扫描电流；在图像处理参数上，可以降低图像均衡度、增加图像对比度。
当异常体征信息为眼部体征信息时,例如,可以是眼白发黄,则在扫描参数上可以降低扫描电压、提高扫描电流;在图像处理参数上,可以增加图像均衡度、降低图像对比度。
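作为示意，以下Python片段把上文三条"异常体征信息→参数调整方向"的对应关系整理为一张可查询的规则表。调整的具体幅度未在本说明书中给出，这里仅以方向（升/降）表示；规则表的内容与数据结构均为对上文示例的假设性整理。

```python
# 异常体征信息 → 扫描参数与图像处理参数的调整方向（示例规则，非定量）
ADJUST_RULES = {
    "体温过高": {"扫描电压": "降低", "扫描电流": "提高", "图像均衡度": "降低", "图像对比度": "增加"},
    "呼吸急促": {"扫描电压": "降低", "扫描电流": "提高", "图像均衡度": "降低", "图像对比度": "增加"},
    "眼白发黄": {"扫描电压": "降低", "扫描电流": "提高", "图像均衡度": "增加", "图像对比度": "降低"},
}

def adjust_parameters(abnormal_sign):
    """根据异常体征信息查表，返回扫描参数及/或图像处理参数的调整方向。"""
    return ADJUST_RULES.get(abnormal_sign, {})

print(adjust_parameters("体温过高"))
```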
需要说明的是,这里的适应性地确定医疗成像设备的扫描参数及/或图像处理参数,可以是根据目标对象的自身情况等状况来调整医疗成像设备的扫描参数及/或图像处理参数。这里的“适应性地”是以达到将异常体征信息与扫描参数及/或图像处理参数相匹配的情况为准。
需要说明的是，这里根据目标对象的异常体征信息，确定目标对象的扫描参数及/或图像处理参数，可以是通过将目标对象的异常体征信息输入神经网络模型或者对应数据库中，以此来确定目标对象的扫描参数及/或图像处理参数。这样可快速得到目标对象的扫描参数及/或图像处理参数，进而提高诊断效率。
这样根据目标对象的异常体征信息,即可确定目标对象的扫描参数及/或图像处理参数。解决了现有技术中,需要医师不断调整扫描参数及/或图像处理参数,导致图像质量不佳,重复扫描,效率低下,影响诊断的问题,同时避免目标对象过多接收射线剂量的问题。这样可以基于目标体征信息,及时确定医疗成像设备的扫描参数,以实现提高诊断效率的目的。
本申请实施例的技术方案，通过获取目标对象的体征信息，对体征信息进行分析，确定异常体征信息。基于目标对象的异常体征信息，确定医疗成像设备的扫描参数及/或图像处理参数。这样可以基于目标体征信息，及时确定医疗成像设备的扫描参数，以实现提高诊断效率的目的。这样解决了现有技术中，需要医师不断调整扫描参数及/或图像处理参数，导致图像质量不佳，重复扫描，效率低下，影响诊断的问题，同时避免目标对象过多接收射线剂量的问题。
图14为根据本申请一些实施例所示的医疗成像设备参数确定方法的流程图,本申请实施例与上述实施例中各个可选方案可以结合。在本申请实施例中,可选的,所述基于目标对象的异常体征信息,适应性地确定医疗成像设备的扫描参数及/或图像处理参数,包括:基于目标对象的异常体征信息,确定目标对象的疾病类型;基于疾病类型,适应性地确定医疗成像设备的扫描参数及/或图像处理参数。
如图14所示,本申请实施例的方法1400具体包括如下步骤:
S1410、获取目标对象的体征信息。
S1420、对体征信息进行分析,确定异常体征信息。
S1430、基于目标对象的异常体征信息,确定目标对象的疾病类型。
示例性的,疾病类型可以是根据异常体征信息,判断目标对象可能存在的疾病。例如,若目标对象的体温过高,则目标对象的肺部可能具有异常,则判断该目标对象可能有肺炎等肺部疾病。再例如,若目标对象的呼吸急促,则目标对象可能呼吸道具有异常,判断该目标对象可能有哮喘等疾病。再例如,若目标对象的眼白发黄,则目标对象的眼部可能具有异常,判断该目标对象可能具有肝部病变等疾病。
这样基于异常体征信息,可直接确定目标对象的疾病类型,以便后续基于疾病类型,来调整扫描参数及/或图像处理参数。
S1440、基于疾病类型,适应性地确定医疗成像设备的扫描参数及/或图像处理参数。
示例性的,根据确定的疾病类型,可有针对性的适应性地调整目标对象的扫描参数及/或图像处理参数,以得到精确的、有针对性的扫描图像。这样可以基于目标对象的体征信息,及时确定扫描参数,以实现提高诊断效率的目的。解决了现有技术中,需要医师不断调整扫描参数及/或图像处理参数,导致图像质量不佳,重复扫描,效率低下,影响诊断的问题。
可选的,所述基于疾病类型,适应性地确定医疗成像设备的扫描参数及/或图像处理参数,具体可以是:确定异常体征信息的异常等级;基于疾病类型和异常等级,适应性地确定医疗成像设备的扫描参数及/或图像处理参数。
示例性的,异常等级可以是异常体征信息的等级。例如,一目标对象的体温为38°,标准体温为36-37°,这里规定37-37.5°为轻度发热,37.6-38°为中度发热,38°以上为重度发热。则确定该目标对象的异常等级为中度发热。
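下面的Python片段按上文给出的体温分级示例，示意"确定异常体征信息的异常等级"这一判断。边界取值直接取自上文示例，函数名为假设，实际系统中的分级标准与实现方式可以不同。

```python
def fever_level(temperature):
    """依据上文示例划分：37-37.5°为轻度发热，37.6-38°为中度发热，38°以上为重度发热。"""
    if temperature > 38.0:
        return "重度发热"
    if temperature >= 37.6:
        return "中度发热"
    if temperature >= 37.0:
        return "轻度发热"
    return "正常"

print(fever_level(38.0))  # 输出: 中度发热
```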
根据目标对象的体温，确定该目标对象可能具有肺炎，根据确定的肺炎，以及确定该目标对象为中度发热，来调节目标对象的扫描参数及/或图像处理参数。
需要说明的是,这里不同的疾病类型的扫描参数及/或图像处理参数是不同的。相同疾病类型,不同异常等级的扫描参数及/或图像处理参数可能也是不同的。因此,可根据疾病类型和异常等级,有针对性的确定医疗成像设备的扫描参数及/或图像处理参数,以得到精确的、有针对性的扫描图像,以便医师更好的对图像进行诊断。
本申请实施例的技术方案,通过基于目标对象的异常体征信息,确定医疗成像设备的疾病类型,基于疾病类型,适应性地确定医疗成像设备的扫描参数及/或图像处理参数,这样可得到精确的、有针对性的扫描图像,可以基于目标对象的体征信息,及时确定扫描参数,以实现提高诊断效率的目的。解决了现有技术中,需要医师不断调整扫描参数及/或图像处理参数,导致图像质量不佳,重复扫描,效率低下,影响诊断的问题。
图15为根据本申请一些实施例所示的医疗成像设备参数确定方法的流程图,本申请实施例与上述实施例中各个可选方案可以结合。在本申请实施例中,可选的,所述方法还包括:基于扫描参数,确定目标对象的目标扫描协议;基于目标扫描协议和图像处理参数对所述目标对象进行扫描,得到目标对象的扫描图像。
如图15所示,本申请实施例的方法1500具体包括如下步骤:
S1510、获取目标对象的体征信息。
S1520、对体征信息进行分析,确定异常体征信息。
S1530、基于目标对象的异常体征信息,确定目标对象的疾病类型。
S1540、基于疾病类型,适应性地确定医疗成像设备的扫描参数及/或图像处理参数。
S1550、基于扫描参数,确定目标对象的目标扫描协议。
示例性的,目标扫描协议可以是最终用于对目标对象进行图像扫描的扫描协议。当确定了目标对象的扫描参数后,即可生成该目标对象的目标扫描协议。
S1560、基于目标扫描协议和图像处理参数对目标对象进行扫描,得到目标对象的扫描图像。
示例性的,根据目标扫描协议和图像处理参数,可对目标对象进行图像扫描,这样可得到具有针对性的、质量效果好的、可很好反映目标对象的异常的扫描图像。
需要说明的是，在基于目标扫描协议和图像处理参数对目标对象进行扫描时，可以是针对目标对象进行全身扫描，但是突出目标对象的异常部位。例如，经判断一目标对象可能具有肺炎，则在基于目标扫描协议和图像处理参数对目标对象进行扫描时，可对目标对象进行全身扫描，但是在图像中突出显示清晰肺部纹理。还可以是仅针对目标对象有异常的部位进行扫描，例如，经判断一目标对象可能具有肺炎，则在基于目标扫描协议和图像处理参数对目标对象进行扫描时，可仅对目标对象的肺部进行扫描，突出显示清晰肺部纹理。这里具体是对目标对象进行全身扫描还是有针对性地仅对异常部位进行扫描，可根据用户需求自行设定，这里不做限定。
本申请实施例的技术方案,通过基于扫描参数,确定目标对象的目标扫描协议,基于目标扫描协议和图像处理参数对目标对象进行扫描,得到目标对象的扫描图像,这样可得到具有针对性的、质量效果好的、可很好反映目标对象的异常的扫描图像。
图16为根据本申请一些实施例所示的医疗成像设备的成像方法的流程图，本实施例可适用于基于目标对象的体征信息来确定医疗成像设备的扫描参数和图像处理参数，进而对目标对象进行扫描成像的情况，该方法可以由医疗成像设备的成像装置来执行，该医疗成像设备的成像装置可以由软件和/或硬件来实现，该医疗成像设备的成像装置可以配置在计算设备上。本申请实施例中与上述各实施例相同或相应的术语的解释在此不再赘述。
参见图16,本申请实施例的医疗成像设备的成像方法1600具体包括如下步骤:
S1610、获取针对目标对象的医疗成像设备的扫描参数。
S1620、获取目标对象的异常体征信息。
S1630、基于异常体征信息,适应性地调整医疗成像设备的扫描参数和/或图像处理参数。
S1640、对目标对象进行扫描并获得医学图像,其中,所述扫描是基于调整后的扫描参数和/或医学图像是经过调整后的图像处理参数处理得到的。
本申请实施例的技术方案,通过获取针对目标对象的医疗成像设备的扫描参数,获取目标对象的异常体征信息,基于异常体征信息,适应性地调整医疗成像设备的扫描参数和/或图像处理参数,对目标对象进行扫描并获得医学图像,其中,所述扫描是基于调整后的扫描参数和/或医学图像是经过调整后的图像处理参数处理得到的,这样可基于目标对象的体征信息,及时调整医疗成像设备的扫描参数,以利用调整后的扫描参数对目标对象进行扫描,进而利用调整后的图像处理参数对扫描的图像进行处理,以得到质量更加好、更加清晰的扫描图像,以实现提高诊断效率的目的。这样解决了现有技术中,需要医师不断调整扫描参数及/或图像处理参数,导致图像质量不佳,重复扫描,效率低下,影响诊断的问题,同时避免目标对象过多接收射线剂量的问题。
图17为根据本申请一些实施例所示的医疗成像设备参数确定装置的结构示意图,如图17所示,该装置1700包括:体征信息获取模块1710、异常体征信息确定模块1720和参数确定模块1730。
其中,体征信息获取模块1710,用于获取目标对象的体征信息;
异常体征信息确定模块1720,用于对所述体征信息进行分析,确定异常体征信息;
参数确定模块1730,用于基于所述目标对象的所述异常体征信息,适应性地确定医疗成像设备的扫描参数及/或图像处理参数。
可选的,所述扫描参数包括:扫描电压、扫描电流、扫描视野、扫描层数或者扫描层厚。
可选的,所述图像处理参数包括:图像对比度或者图像均衡度。
可选的,所述获取目标对象的体征信息是通过传感器来实现的。
可选的,所述传感器包括摄像头、温度传感器、心跳传感器或呼吸传感器。
在上述实施例的技术方案的基础上，参数确定模块1730包括：
疾病类型确定单元,用于基于所述目标对象的所述异常体征信息,确定所述目标对象的疾病类型;
参数确定单元,用于基于所述疾病类型,适应性地确定所述目标对象的扫描参数及/或图像处理参数。
在上述实施例的技术方案的基础上,参数确定单元包括:
异常等级确定子单元,用于确定所述异常体征信息的异常等级;
参数确定子单元,用于基于所述疾病类型和所述异常等级,适应性地确定所述医疗成像设备的扫描参数及/或图像处理参数。
在上述实施例的技术方案的基础上，异常体征信息确定模块1720包括：
第一异常体征信息确定单元,用于将所述体征信息输入至被训练的体征识别模型中,获取所述体征识别模型输出的异常体征信息;
或者,
第三异常体征信息确定单元,用于将所述体征信息与常态体征参数进行比对,确定异常体征信息。
在上述实施例的技术方案的基础上，异常体征信息确定模块1720还包括：
第二异常体征信息确定单元,用于将所述体征信息与所述目标对象的类型对应的标准体征信息进行比对,确定不符合所述标准体征信息的异常体征信息;其中,所述目标对象的类型根据所述目标对象的人体基础信息和/或历史病历确定,所述人体基础信息至少包括:性别、年龄、体重、身高。
在上述实施例的技术方案的基础上,该装置还包括:
目标扫描协议确定模块,用于基于所述扫描参数,确定所述目标对象的目标扫描协议;
扫描图像获取模块，用于基于所述目标扫描协议和所述图像处理参数对所述目标对象进行扫描，得到所述目标对象的扫描图像。
可选的,所述图像处理参数为对所述目标对象的感兴趣区域的扫描算法进行处理的参数。
本申请实施例所提供的医疗成像设备参数确定装置可执行本申请任意实施例所提供的医疗成像设备参数确定方法,具备执行方法相应的功能模块和有益效果。
图18为根据本申请一些实施例所示的医疗成像设备的成像装置的结构示意图,如图18所示,该装置1800包括:扫描参数获取模块1810、异常体征信息获取模块1820、参数调整模块1830和扫描模块1840。
其中,扫描参数获取模块1810,用于获取针对目标对象的医疗成像设备的扫描参数;
异常体征信息获取模块1820,用于获取所述目标对象的异常体征信息;
参数调整模块1830,用于基于所述异常体征信息,适应性地调整所述医疗成像设备的扫描参数和/或图像处理参数;
扫描模块1840,用于对所述目标对象进行扫描并获得医学图像,其中,所述扫描是基于所述调整后的扫描参数和/或所述医学图像是经过调整后的图像处理参数处理得到的。
本申请实施例所提供的医疗成像设备的成像装置可执行本申请任意实施例所提供的医疗成像设备的成像方法,具备执行方法相应的功能模块和有益效果。
图19为根据本申请一些实施例所示的一种医疗成像设备的结构示意图，如图19所示，该医疗成像设备1900包括：成像组件1910、传感器1920和控制器1930。
其中,成像组件1910,其用于扫描目标对象以获得医学图像;
传感器1920,获取所述目标对象的异常体征信息;
控制器1930,其与所述成像组件和所述传感器耦接,基于所述异常体征信息适应性地调整所述成像组件的扫描参数,和/或基于所述异常体征信息适应性地调整所述医学图像的图像处理参数。
可选的,所述传感器包括摄像头、温度传感器、心跳或脉搏传感器或呼吸传感器。
图20为根据本申请一些实施例所示的一种设备的结构示意图，如图20所示，该设备2000包括处理器2010、存储器2020、输入装置2030和输出装置2040；设备中处理器2010的数量可以是一个或多个，图20中以一个处理器2010为例；设备中的处理器2010、存储器2020、输入装置2030和输出装置2040可以通过总线或其他方式连接，图20中以通过总线连接为例。
存储器2020作为一种计算机可读存储介质，可用于存储软件程序、计算机可执行程序以及模块，如本申请实施例中的医疗成像设备参数确定方法对应的程序指令/模块（例如，体征信息获取模块1710、异常体征信息确定模块1720和参数确定模块1730），和/或，本申请实施例中的医疗成像设备的成像方法对应的程序指令/模块（例如，扫描参数获取模块1810、异常体征信息获取模块1820、参数调整模块1830和扫描模块1840）。处理器2010通过运行存储在存储器2020中的软件程序、指令以及模块，从而执行设备的各种功能应用以及数据处理，即实现上述的参数确定方法。
存储器2020可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序;存储数据区可存储根据终端的使用所创建的数据等。此外,存储器2020可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他非易失性固态存储器件。在一些实例中,存储器2020可进一步包括相对于处理器2010远程设置的存储器,这些远程存储器可以通过网络连接至设备。上述网络的实例包括但不限于互联网、企业内部网、局域网、移动通信网及其组合。
输入装置2030可用于接收输入的数字或字符信息,以及产生与设备的用户设置以及功能控制有关的键信号输入。输出装置2040可包括显示屏等显示设备。
本申请实施例还提供一种包含计算机可执行指令的存储介质，所述计算机可执行指令在由计算机处理器执行时用于执行一种医疗成像设备参数确定方法和/或医疗成像设备的成像方法。
当然,本申请实施例所提供的一种包含计算机可执行指令的存储介质,其计算机可执行指令不限于如上所述的方法操作,还可以执行本申请任意实施例所提供的医疗成像设备参数确定方法和/或医疗成像设备的成像方法中的相关操作。
如此描述了基本概念,对于本领域技术人员而言,在阅读了该详细公开之后,可以很明显地认识到,上述详细公开仅旨在通过示例的方式进行描述,而并非是限制性的。尽管这里没有明确说明,但是可能发生各种改变,改进和修改,并且它们是本领域技术人员想要的。这些改变,改进和修改旨在由本公开提出,并且在本公开的示例性实施方式的精神和范围内。
而且,某些术语已经被用来描述本公开的实施例。例如,术语“一个实施例”,“一个实施例”和/或“一些实施例”表示结合该实施例描述的特定特征,结构或特性包括在本公开的至少一个实施例中。因此,应当强调并且应当理解,在本说明书的各个部分中对“一个实施例”或“一个实施例”或“替代实施例”的两次或更多次引用不一定都指同一实施例。此外,可以在本公开的一个或以上实施例中适当地组合特定特征,结构或特性。
此外，本领域的技术人员将意识到，本文中的公开内容的各个方面可以以许多可授予专利的类别或环境中的任何一种进行说明和描述，包括任何新的和有用的过程，机器，制造或物质组成，或其任何新的有用的改进。因此，本申请的各个方面可以完全以硬件，完全以软件（包括固件，驻留软件，微代码等）实现，或者可以将软件和硬件实现组合在一起，在此通常将其统称为"单元"、"模块"或"系统"。此外，本申请的各方面可以采取在一个或以上计算机可读介质中包含计算机可读程序代码的计算机程序产品的形式。
计算机可读信号介质可以包括例如在基带中或作为载波的一部分的传播的数据信号,该传播的数据信号具有体现在其中的计算机可读程序代码。此类传播信号可以有多种形式,包括电磁形式、光形式等或任何合适的组合。计算机可读信号介质可以是不是计算机可读存储介质的任何计算机可读介质,并且可以通信,传播或传输供指令执行系统,装置或设备使用或与其结合使用的程序。包含在计算机可读信号介质上的程序代码可以使用任何适当的介质来传输,包括无线,有线,光纤电缆,RF或类似介质,或前述的任何适当组合。
用于执行本申请各方面的操作的计算机程序代码可以使用一种或多种编程语言的组合来编写，包括Java、Scala、Smalltalk、Eiffel、JADE、Emerald、C++、C#等面向对象的编程语言。程序代码可以完全在用户计算机上，部分在用户计算机上，作为独立软件包执行，部分在用户计算机上并且部分在远程计算机上执行，或者完全在远程计算机或服务器上执行。在后一种情况下，远程计算机可以通过任何类型的网络（包括局域网（LAN）或广域网（WAN））连接到用户计算机，或者可以与外部计算机建立连接（例如，通过使用Internet服务提供商的Internet）或在云计算环境中或作为服务（例如软件即服务（SaaS））提供。
此外,陈述的处理元素或序列的顺序,或数字的使用,字母,或其其他名称,除非在权利要求中指定,否则本申请无意将所要求保护的过程和方法限制为任何顺序。尽管以上公开内容通过各种示例讨论了当前被认为是本公开内容的各种有用实施例,但是应当理解,这种细节仅是出于该目的,并且所附权利要求不限于所公开的实施例,但是,相反,其旨在覆盖在所公开的实施例的精神和范围内的修改和等同布置。例如,尽管上述各种组件的实现可以体现在硬件设备中,但是它也可以实现为纯软件解决方案,例如,在现有服务器或移动设备上的安装。
类似地，应当理解，在本公开的实施例的前述描述中，有时将各种特征组合在单个实施例，附图或其描述中，以简化本公开，以帮助理解一个或以上各种实施例。然而，本公开的方法不应被解释为反映了这样一种意图，即所要求保护的主题需要比每个权利要求中明确叙述的特征更多的特征。相反，要求保护的主题可以少于单个前述公开的实施例的所有特征。

Claims (74)

  1. 一种医学设备控制方法,其特征在于,包括:
    获取医学设备的相关信息和/或目标对象的相关信息;
    基于所述医学设备的相关信息和/或目标对象的相关信息,控制所述医学设备。
  2. 如权利要求1所述的方法,其特征在于,所述医学设备包括扫描设备;
    所述获取医学设备的相关信息和/或目标对象的相关信息包括:获取所述扫描设备中一个或多个电离室的位置信息,所述扫描设备用于扫描所述目标对象;
    所述基于所述医学设备的相关信息和/或目标对象的相关信息,控制所述医学设备包括:
    基于所述一个或多个电离室的位置信息,确定所述一个或多个电离室中至少一个电离室的探测区域;
    确定投影设备的投影数据,所述投影数据包括对应于所述至少一个电离室的探测区域的图像数据;以及
    控制所述投影设备将所述投影数据投影到所述目标对象上。
  3. 如权利要求2所述的方法,其特征在于,所述投影数据还包括对应于所述目标对象的待扫描的感兴趣区域的图像数据。
  4. 如权利要求2所述的方法,其特征在于,所述方法进一步包括:
    获取所述目标对象的参考图像,所述参考图像由摄像头在所述投影设备将所述投影数据投影到待扫描的目标对象上之后拍摄;
    识别参考图像中的第一区域,所述第一区域对应于所述目标对象的待扫描的感兴趣区域;
    识别参考图像中的第二区域,所述第二区域对应于投影到所述目标对象上的至少一个电离室的探测区域;以及
    基于所述第一区域和第二区域,确定所述至少一个电离室中是否包括一个或多个候选电离室,其中所述一个或多个候选电离室的探测区域被所述目标对象的待扫描的感兴趣区域覆盖。
  5. 如权利要求4所述的方法,其特征在于,所述方法进一步包括:
    响应于确定所述至少一个电离室中不包括任何候选电离室,使终端设备生成提示信息;
    响应于确定所述至少一个电离室中不包括任何候选电离室,使所述至少一个电离室中的一个或多个参考电离室相对于所述目标对象的感兴趣区域进行移动。
  6. 如权利要求2所述的方法,其特征在于,所述方法进一步包括:
    获取从所述至少一个电离室中选择的一个或多个目标电离室的识别信息;
    基于所述识别信息,调整所述投影数据,使所述投影数据中对应于所述一个或多个目标电离室的探测区域的图像数据的第一特征值与对应于其他电离室的探测区域的图像数据的第二特征值不同,其中,所述第一特征值和所述第二特征值对应于相同的图像特征。
  7. 如权利要求1所述的方法,其特征在于,
    所述获取医学设备的相关信息和/或目标对象的相关信息包括:
    获取所述医学设备的虚拟模型,其中,所述医学设备包括至少一个可运动的第一部件,相应地,所述虚拟模型包括模拟所述第一部件的第二部件,所述第一部件所在的设备坐标系与所述第二部件所在的模型坐标系具有映射关系;以及
    获取所述医学设备的第一部件和所述虚拟模型的第二部件其中之一者的当前运动信息;
    所述基于所述医学设备的相关信息和/或目标对象的相关信息,控制所述医学设备包括:
    控制所述医学诊疗设备的第一部件和虚拟模型的第二部件的其中另一者,与获取运动指令的所述医学诊疗设备的第一部件和虚拟模型的第二部件的其中之一者执行相同的运动。
  8. 如权利要求7所述的方法,其特征在于,所述获取所述医学设备的第一部件和所述虚拟模型的第二部件其中之一者的当前运动信息,包括:
    获取所述医学诊疗设备的第一部件的当前运动信息;
    其中,在获取所述医学诊疗设备的第一部件的当前运动信息前,所述第一部件收到将执行所述当前运动的运动控制信息。
  9. 如权利要求7所述的方法,其特征在于,所述获取所述医学设备的第一部件和所述虚拟模型的第二部件其中之一者的当前运动信息包括:
    获取所述虚拟模型的第二部件的当前运动信息;
    其中,在获取所述虚拟模型的第二部件的当前运动信息前,所述第二部件收到将执行所述当前运动的运动控制信息。
  10. 如权利要求7所述的方法,其特征在于,所述虚拟模型在显示界面上显示,且所述第二部件在当前运动下的实时位置信息也在所述显示界面上显示并随着运动更新。
  11. 如权利要求1所述的方法,其特征在于,
    所述获取医学设备的相关信息和/或目标对象的相关信息,包括:
    基于X射线摄影机架的实体结构获取所述X射线摄影机架的模型,并基于所述X射线摄影机架的实体结构的运动轨迹,采用所述X射线摄影机架的模型进行相对应的模型运动轨迹的模拟;以及
    获取当前X射线摄影机架的实体结构的运动指令,所述运动指令包括所述当前X射线摄影机架的其中一个部位需要运动到达的目标位置和相关的运动时间信息;
    所述基于所述医学设备的相关信息和/或目标对象的相关信息,控制所述医学设备,包括:
    所述X射线摄影机架的实体结构基于所述运动指令到达目标位置,所述X射线摄影机架的模型基于所述运动指令,基于所述运动时间信息同步进行模型运动轨迹的模拟;
    以及将所述模型运动轨迹的模拟显示在显示设备上。
  12. 如权利要求11所述的方法,其特征在于,所述模型通过以下方式获得:
    获取所述X射线摄影机架的图像;
    提取所述图像的特征点;
    基于所述特征点进行重建,获得所述模型。
  13. 如权利要求1所述的方法,其特征在于,所述医学设备为医疗成像设备;
    所述获取医学设备的相关信息和/或目标对象的相关信息包括:
    获取目标对象的体征信息;
    所述基于所述医学设备的相关信息和/或目标对象的相关信息,控制所述医学设备包括:
    对所述体征信息进行分析,确定异常体征信息;以及
    基于所述目标对象的所述异常体征信息,适应性地确定医疗成像设备的扫描参数及/或图像处理参数。
  14. 如权利要求13所述的方法,其特征在于,所述基于所述目标对象的所述异常体征信息,适应性地确定医疗成像设备的扫描参数及/或图像处理参数,包括:
    基于所述目标对象的所述异常体征信息,确定所述目标对象的疾病类型;
    基于所述疾病类型,适应性地确定医疗成像设备的扫描参数及/或图像处理参数。
  15. 如权利要求14所述的方法,其特征在于,所述基于所述疾病类型,适应性地确定医疗成像设备的扫描参数及/或图像处理参数,包括:
    确定所述异常体征信息的异常等级;
    基于所述疾病类型和所述异常等级,适应性地确定医疗成像设备的扫描参数及/或图像处理参数。
  16. 如权利要求13所述的方法，其特征在于，所述对所述体征信息进行分析，确定异常体征信息，包括：
    将所述体征信息输入至被训练的体征识别模型中,获取所述体征识别模型输出的异常体征信息;
    或者,
    将所述体征信息与常态体征参数进行比对,确定异常体征信息。
  17. 如权利要求13所述的方法，其特征在于，所述对所述体征信息进行分析，确定异常体征信息，还包括：
    将所述体征信息与所述目标对象的类型对应的标准体征信息进行比对,确定不符合所述标准体征信息的异常体征信息;
    其中,所述目标对象的类型根据所述目标对象的人体基础信息和/或历史病历确定,所述人体基础信息至少包括:性别、年龄、体重和身高。
  18. 如权利要求1所述的方法,其特征在于,所述医学设备为医疗成像设备;
    所述获取医学设备的相关信息和/或目标对象的相关信息,包括:
    获取针对目标对象的医疗成像设备的扫描参数;
    获取所述目标对象的异常体征信息;
    所述基于所述医学设备的相关信息和/或目标对象的相关信息,控制所述医学设备,包括:
    基于所述异常体征信息,适应性地调整所述医疗成像设备的扫描参数和/或图像处理参数;
    对所述目标对象进行扫描并获得医学图像,其中,所述扫描是基于所述调整后的扫描参数和/或所述医学图像是经过调整后的图像处理参数处理得到的。
  19. 一种医学设备控制系统,其特征在于,包括:
    信息获取模块,用于获取医学设备的相关信息和/或目标对象的相关信息;
    控制模块,用于基于所述医学设备的相关信息和/或目标对象的相关信息,控制所述医学设备。
  20. 一种计算机可读存储介质,其特征在于,所述存储介质存储计算机指令,当计算机读取存储介质中的计算机指令后,计算机执行如权利要求1~18任意一项所述的医学设备控制方法。
  21. 一种医学设备控制装置,其特征在于,所述装置包括至少一个处理器以及至少一个存储器;
    所述至少一个存储器用于存储计算机指令;
    所述至少一个处理器用于执行所述计算机指令中的至少部分指令以实现如权利要求1~18任意一项所述的医学设备控制方法。
  22. 一种标记电离室的探测区域的方法,其特征在于,所述方法包括:
    获取扫描设备中一个或多个电离室的位置信息,所述扫描设备用于扫描对象;
    基于所述一个或多个电离室的位置信息,确定所述一个或多个电离室中至少一个电离室的探测区域;
    确定投影设备的投影数据,所述投影数据包括对应于所述至少一个电离室的探测区域的图像数据;以及
    控制所述投影设备将所述投影数据投影到所述对象上。
  23. 如权利要求22所述的方法,其特征在于,所述投影数据还包括对应于所述对象的待扫描的感兴趣区域的图像数据。
  24. 如权利要求22所述的方法,其特征在于,所述方法进一步包括:
    获取所述对象的参考图像,所述参考图像由摄像头在所述投影设备将所述投影数据投影到待扫描的对象上之后拍摄;
    识别参考图像中的第一区域,所述第一区域对应于所述对象的待扫描的感兴趣区域;
    识别参考图像中的第二区域,所述第二区域对应于投影到所述对象上的至少一个电离室的探测区域;以及
    基于所述第一区域和第二区域，确定所述至少一个电离室中是否包括一个或多个候选电离室，其中所述一个或多个候选电离室的探测区域被所述对象的待扫描的感兴趣区域覆盖。
  25. 如权利要求24所述的方法,其特征在于,所述方法进一步包括:
    响应于确定所述至少一个电离室中不包括任何候选电离室,使终端设备生成提示信息。
  26. 如权利要求24所述的方法,其特征在于,响应于确定所述至少一个电离室中不包括任何候选电离室,所述方法进一步包括:
    使所述至少一个电离室中的一个或多个参考电离室相对于所述对象的感兴趣区域进行移动。
  27. 如权利要求24所述的方法,其特征在于,所述方法进一步包括:
    响应于确定所述至少一个电离室中包括一个或多个候选电离室,从所述一个或多个候选电离室中选择一个或多个目标电离室,所述一个或多个目标电离室将会在扫描所述对象的过程中运行。
  28. 如权利要求22所述的方法,其特征在于,所述方法进一步包括:
    获取从所述至少一个电离室中选择的一个或多个目标电离室的识别信息;
    基于所述识别信息,调整所述投影数据,使所述投影数据中对应于所述一个或多个目标电离室的探测区域的图像数据的第一特征值与对应于其他电离室的探测区域的图像数据的第二特征值不同,其中,所述第一特征值和所述第二特征值对应于相同的图像特征。
  29. 一种用于标记电离室的探测区域的系统,包括:
    获取模块,用于获取扫描设备中一个或多个电离室的位置信息;
    探测区域确定模块,用于基于所述一个或多个电离室的位置信息,确定所述一个或多个电离室中至少一个电离室的探测区域;
    投影数据确定模块,用于确定投影设备的投影数据,所述投影数据包括对应于所述至少一个电离室的探测区域的图像数据;以及
    控制模块,用于控制所述投影设备将所述投影数据投影到待扫描的对象上。
  30. 一种计算机可读存储介质，其特征在于，所述存储介质存储计算机指令，当计算机读取所述存储介质中的计算机指令后，所述计算机执行如权利要求22-28中任一项所述的方法。
  31. 一种用于标记电离室的探测区域的装置,其特征在于,所述装置包括用于标记电离室的探测区域的程序,所述程序实现如权利要求22-28中任一项所述的方法。
  32. 一种医学诊疗设备的控制方法,其特征在于,所述方法包括:
    获取医学诊疗设备的虚拟模型,其中,所述医学诊疗设备包括至少一个可运动的第一部件,相应地,所述虚拟模型包括模拟所述第一部件的第二部件,所述第一部件所在的设备坐标系与所述第二部件所在的模型坐标系具有映射关系;
    获取所述医学诊疗设备的第一部件和所述虚拟模型的第二部件其中之一者的当前运动信息;
    所述医学诊疗设备的第一部件和虚拟模型的第二部件的其中另一者,与获取运动指令的所述医学诊疗设备的第一部件和虚拟模型的第二部件的其中之一者执行相同的运动。
  33. 根据权利要求32所述的方法,其特征在于,所述相同的运动包括同步运动。
  34. 根据权利要求32所述的方法,其特征在于,所述医学诊疗设备包括X射线摄影系统。
  35. 根据权利要求34所述的方法,其特征在于,所述第一部件包括X射线摄影系统的机架。
  36. 根据权利要求35所述的方法,其特征在于,所述机架包括球管、探测器或者所述球管的支撑元件或者所述探测器的支撑元件。
  37. 根据权利要求32所述的方法,其特征在于,获取所述医学诊疗设备的第一部件和所述虚拟模型的第二部件其中之一者的当前运动信息包括:
    获取所述医学诊疗设备的第一部件的当前运动信息;
    其中,在获取所述医学诊疗设备的第一部件的当前运动信息前,所述第一部件收到将执行所述当前运动的运动控制信息。
  38. 根据权利要求37所述的方法,其特征在于,所述运动控制信息包括第一部件的自动运动的控制指令或者对第一部件的手动操作。
  39. 根据权利要求32所述的方法,其特征在于,获取所述医学诊疗设备的第一部件和所述虚拟模型的第二部件其中之一者的当前运动信息包括:
    获取所述虚拟模型的第二部件的当前运动信息;
    其中,在获取所述虚拟模型的第二部件的当前运动信息前,所述第二部件收到将执行所述当前运动的运动控制信息。
  40. 根据权利要求39所述的方法,其特征在于,所述运动控制信息由鼠标、键盘或者语音输入或者通过触控输入。
  41. 根据权利要求32所述的方法,其特征在于,所述虚拟模型在显示界面上显示,且所述第二部件在当前运动下的实时位置信息也在所述显示界面上显示并随着运动更新。
  42. 根据权利要求41所述的方法,其特征在于,所述显示界面为电脑的或者移动终端的或者公用的显示界面。
  43. 根据权利要求34所述的方法,其特征在于,所述虚拟模型通过对所述X射线摄影系统的机架的数据进行建模获得。
  44. 根据权利要求34所述的方法,其特征在于,所述虚拟模型通过以下方式获得:
    获取所述X射线摄影系统中机架的图像;
    提取所述图像的特征点;
    基于所述特征点进行重建,获得所述模型。
  45. 根据权利要求41所述的方法,其特征在于,所述第一部件的其中一个部位运动时,所述显示界面上突出显示与所述第一部件的其中一个部位相对应的所述第二部件的一部分的运动轨迹。
  46. 一种医学诊疗设备的控制方法,其特征在于,所述方法包括:
    基于X射线摄影机架的实体结构获取所述X射线摄影机架的模型,并基于所述X射线摄影机架的实体结构的运动轨迹,采用所述X射线摄影机架的模型进行相对应的模型运动轨迹的模拟;
    获取当前X射线摄影机架的实体结构的运动指令,所述运动指令包括所述当前X射线摄影机架的其中一个部位需要运动到达的目标位置和相关的运动时间信息;
    所述X射线摄影机架的实体结构基于所述运动指令到达目标位置,所述X射线摄影机架的模型基于所述运动指令,基于所述运动时间信息同步进行模型运动轨迹的模拟;
    以及将所述模型运动轨迹的模拟显示在显示设备上。
  47. 根据权利要求46所述的方法,其特征在于,所述模型通过对所述X射线摄影机架的数据进行建模获得。
  48. 根据权利要求46所述的方法,其特征在于,所述模型通过以下方式获得:
    获取所述X射线摄影机架的图像;
    提取所述图像的特征点;
    基于所述特征点进行重建,获得所述模型。
  49. 根据权利要求46所述的方法,其特征在于,所述X射线摄影机架的其中一个部位运动时,所述将所述模型运动轨迹的模拟显示在显示设备上包括:
    在所述显示设备上突出显示与所述X射线摄影机架的其中一个部位相对应的所述X射线摄影机架的模型的一部分的运动轨迹。
  50. 根据权利要求46所述的方法,其特征在于,所述显示设备设置在所述X射线摄影机架的机房外,所述方法还包括:
    通过所述显示设备获取交互数据;
    基于所述交互数据控制所述当前X射线摄影机架的实体结构进行运动。
  51. 根据权利要求50所述的方法,其特征在于,所述显示设备包括触控屏,所述通过所述显示设备获取交互数据,包括:
    在所述触控屏上通过触摸控制所述X射线摄影机架的模型,生成所述交互数据。
  52. 一种医学诊疗设备的控制系统,其特征在于,所述系统包括:
    模型获取模块,用于获取医学诊疗设备的虚拟模型,其中,所述医学诊疗设备包括至少一个可运动的第一部件,相应地,所述虚拟模型包括模拟所述第一部件的第二部件,所述第一部件所在的设备坐标系与所述第二部件所在的模型坐标系具有映射关系;
    运动信息获取模块,用于获取所述医学诊疗设备的第一部件和所述虚拟模型的第二部件其中之一者的当前运动信息;
    运动执行模块,用于所述医学诊疗设备的第一部件和虚拟模型的第二部件的其中另一者,与获取运动指令的所述医学诊疗设备的第一部件和虚拟模型的第二部件的其中之一者执行相同的运动。
  53. 一种医学诊疗设备的控制系统,其特征在于,所述系统包括:
    运动模拟模块,用于基于X射线摄影机架的实体结构获取所述X射线摄影机架的模型,并基于所述X射线摄影机架的实体结构的运动轨迹,采用所述X射线摄影机架的模型进行相对应的模型运动轨迹的模拟;
    指令获取模块,用于获取当前X射线摄影机架的实体结构的运动指令,所述运动指令包括当前X射线摄影机架的其中一个部位需要运动到达的目标位置和相关的运动时间信息;
    模拟控制模块,用于控制所述X射线摄影机架的实体结构基于所述运动指令到达目标位置,所述X射线摄影机架的模型基于所述运动指令,基于所述运动时间信息同步进行模型运动轨迹的模拟;
    显示模块,用于将所述模型运动轨迹的模拟显示在显示设备上。
  54. 一种医学诊疗设备的控制方法,其特征在于,所述方法包括:
    获取当前X射线摄影机架的实体结构的运动指令,所述运动指令包括所述当前X射线摄影机架的其中一个部位需要运动到达的目标位置和相关的运动时间信息;
    X射线摄影机架的实体结构基于所述运动指令到达目标位置,X射线摄影机架的模型基于所述运动指令,基于所述运动时间信息同步进行模型运动轨迹的模拟;
    以及将所述模型运动轨迹的模拟显示在显示设备上。
  55. 一种医学诊疗设备的控制系统,其特征在于,所述系统包括:
    指令获取模块,用于获取当前X射线摄影机架的实体结构的运动指令,所述运动指令包括当前X射线摄影机架的其中一个部位需要运动到达的目标位置和相关运动时间信息;
    模拟控制模块,用于控制X射线摄影机架的实体结构基于所述运动指令到达目标位置,X射线摄影机架的模型基于所述运动指令,基于所述运动时间信息同步进行模型运动轨迹的模拟;
    显示模块,用于将所述模型运动轨迹的模拟显示在显示设备上。
  56. 一种医学诊疗设备的控制装置,包括处理器,其特征在于,所述处理器用于执行计算机指令,以实现权利要求32~51中任一项所述的方法。
  57. 一种医疗成像设备参数确定方法,其特征在于,包括:
    获取目标对象的体征信息;
    对所述体征信息进行分析,确定异常体征信息;
    基于所述目标对象的所述异常体征信息,适应性地确定医疗成像设备的扫描参数及/或图像处理参数。
  58. 根据权利要求57所述的方法,所述扫描参数包括:扫描电压、扫描电流、扫描视野、扫描层数或者扫描层厚。
  59. 根据权利要求57所述的方法,所述图像处理参数包括:图像对比度或者图像均衡度。
  60. 根据权利要求57所述的方法,所述获取目标对象的体征信息是通过传感器来实现的。
  61. 根据权利要求60所述的方法,所述传感器包括摄像头、温度传感器、心跳传感器或呼吸传感器。
  62. 根据权利要求57所述的方法,其特征在于,所述基于所述目标对象的所述异常体征信息,适应性地确定医疗成像设备的扫描参数及/或图像处理参数,包括:
    基于所述目标对象的所述异常体征信息,确定所述目标对象的疾病类型;
    基于所述疾病类型,适应性地确定医疗成像设备的扫描参数及/或图像处理参数。
  63. 根据权利要求62所述的方法,其特征在于,所述基于所述疾病类型,适应性地确定医疗成像设备的扫描参数及/或图像处理参数,包括:
    确定所述异常体征信息的异常等级;
    基于所述疾病类型和所述异常等级,适应性地确定医疗成像设备的扫描参数及/或图像处理参数。
  64. 根据权利要求57所述的方法，其特征在于，所述对所述体征信息进行分析，确定异常体征信息，包括：
    将所述体征信息输入至被训练的体征识别模型中,获取所述体征识别模型输出的异常体征信息;
    或者,
    将所述体征信息与常态体征参数进行比对,确定异常体征信息。
  65. 根据权利要求57所述的方法，其特征在于，所述对所述体征信息进行分析，确定异常体征信息，还包括：
    将所述体征信息与所述目标对象的类型对应的标准体征信息进行比对,确定不符合所述标准体征信息的异常体征信息;
    其中,所述目标对象的类型根据所述目标对象的人体基础信息和/或历史病历确定,所述人体基础信息至少包括:性别、年龄、体重和身高。
  66. 根据权利要求57所述的方法,其特征在于,所述方法还包括:
    基于所述扫描参数,确定所述目标对象的目标扫描协议;
    基于所述目标扫描协议和所述图像处理参数对所述目标对象进行扫描,得到所述目标对象的扫描图像。
  67. 根据权利要求57或66所述的方法,其特征在于,所述图像处理参数为对所述目标对象的感兴趣区域的扫描算法进行处理的参数。
  68. 一种医疗成像设备的成像方法,其特征在于,包括:
    获取针对目标对象的医疗成像设备的扫描参数;
    获取所述目标对象的异常体征信息;
    基于所述异常体征信息,适应性地调整所述医疗成像设备的扫描参数和/或图像处理参数;
    对所述目标对象进行扫描并获得医学图像，其中，所述扫描是基于所述调整后的扫描参数和/或所述医学图像是经过调整后的图像处理参数处理得到的。
  69. 根据权利要求68所述的医疗成像设备的成像方法,所述医疗成像设备包括X射线摄影设备、MR设备、CT设备、PET设备、超声设备或DSA设备,或多模态成像设备。
  70. 一种医疗成像设备参数确定装置,其特征在于,包括:
    体征信息获取模块,用于获取目标对象的体征信息;
    异常体征信息确定模块,用于对所述体征信息进行分析,确定异常体征信息;
    参数确定模块,用于基于所述目标对象的所述异常体征信息,适应性地确定医疗成像设备的扫描参数及/或图像处理参数。
  71. 一种医疗成像设备,其特征在于,包括:
    成像组件,其用于扫描目标对象以获得医学图像;
    传感器,获取所述目标对象的异常体征信息;
    控制器,其与所述成像组件和所述传感器耦接,基于所述异常体征信息适应性地调整所述成像组件的扫描参数,和/或基于所述异常体征信息适应性地调整所述医学图像的图像处理参数。
  72. 根据权利要求71所述的医疗成像设备,其中,所述传感器包括摄像头、温度传感器、心跳或脉搏传感器或呼吸传感器。
  73. 一种设备,其特征在于,所述设备包括:
    一个或多个处理器;
    存储装置,用于存储一个或多个程序;
    当所述一个或多个程序被所述一个或多个处理器执行,使得所述一个或多个处理器实现如权利要求57-67中任一所述的医疗成像设备参数确定方法和/或权利要求68所述的医疗成像设备的成像方法。
  74. 一种包含计算机可执行指令的存储介质,其特征在于,所述计算机可执行指令在由计算机处理器执行时用于执行如权利要求57-67中任一所述的医疗成像设备参数确定方法和/或权利要求68所述的医疗成像设备的成像方法。
PCT/CN2021/110409 2020-08-03 2021-08-03 一种医学设备控制方法及系统 WO2022028439A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21852413.0A EP4176812A4 (en) 2020-08-03 2021-08-03 METHOD AND SYSTEM FOR CONTROLLING A MEDICAL DEVICE
US18/163,923 US20230172577A1 (en) 2020-08-03 2023-02-03 Methods and systems for controlling medical devices

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
CN202010767758.3 2020-08-03
CN202010767758.3A CN111772655A (zh) 2020-08-03 2020-08-03 医疗成像设备参数确定方法、成像方法及装置
CN202011114024.1A CN112274166A (zh) 2020-10-18 2020-10-18 一种医学诊疗设备的控制方法、系统及装置
CN202011114024.1 2020-10-18
CN202011114737.8A CN112071405A (zh) 2020-10-18 2020-10-18 一种标记电离室的探测区域的方法、系统及装置
CN202011114737.8 2020-10-18

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/163,923 Continuation US20230172577A1 (en) 2020-08-03 2023-02-03 Methods and systems for controlling medical devices

Publications (1)

Publication Number Publication Date
WO2022028439A1 true WO2022028439A1 (zh) 2022-02-10

Family

ID=80117029

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/110409 WO2022028439A1 (zh) 2020-08-03 2021-08-03 一种医学设备控制方法及系统

Country Status (3)

Country Link
US (1) US20230172577A1 (zh)
EP (1) EP4176812A4 (zh)
WO (1) WO2022028439A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118266977A (zh) * 2024-04-19 2024-07-02 北京友通上昊科技有限公司 射线成像方法、系统和介质

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102934526A (zh) * 2010-04-13 2013-02-13 卡尔斯特里姆保健公司 使用数字射线摄影探测器的曝光控制
CN107644686A (zh) * 2016-07-20 2018-01-30 郎焘 基于虚拟现实的医学数据采集系统及方法
CN108766589A (zh) * 2018-05-31 2018-11-06 湖北民族学院 一种远程医疗监护方法和装置
WO2020115307A1 (en) * 2018-12-07 2020-06-11 Koninklijke Philips N.V. Positioning a medical x-ray imaging apparatus
CN110742631A (zh) * 2019-10-23 2020-02-04 深圳蓝韵医学影像有限公司 一种医学图像的成像方法和装置
CN110811623A (zh) * 2019-11-21 2020-02-21 上海联影医疗科技有限公司 医学图像扫描计划方法、装置、设备及存储介质
CN111772655A (zh) * 2020-08-03 2020-10-16 上海联影医疗科技有限公司 医疗成像设备参数确定方法、成像方法及装置
CN112071405A (zh) * 2020-10-18 2020-12-11 上海联影医疗科技股份有限公司 一种标记电离室的探测区域的方法、系统及装置
CN112274166A (zh) * 2020-10-18 2021-01-29 上海联影医疗科技股份有限公司 一种医学诊疗设备的控制方法、系统及装置

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4176812A4 *

Also Published As

Publication number Publication date
EP4176812A1 (en) 2023-05-10
US20230172577A1 (en) 2023-06-08
EP4176812A4 (en) 2023-10-18

Similar Documents

Publication Publication Date Title
CN111938678B (zh) 一种成像系统和方法
CN110139607B (zh) 用于患者扫描设置的方法和系统
EP3669942B1 (en) Systems and methods for determining a region of interest of a subject
JP7451460B2 (ja) マンモグラフィにおけるユーザおよび/または患者の経験の改善のための方法およびシステム
US20230172577A1 (en) Methods and systems for controlling medical devices
US12064268B2 (en) Systems and methods for medical imaging
CN113940691A (zh) 用于图像采集的患者定位的系统和方法
US10398395B2 (en) Medical image diagnostic apparatus
CN112071405A (zh) 一种标记电离室的探测区域的方法、系统及装置
JP2020188991A (ja) 医用画像処理装置、医用画像処理プログラム、x線診断装置及びx線診断システム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21852413

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021852413

Country of ref document: EP

Effective date: 20230206

NENP Non-entry into the national phase

Ref country code: DE