WO2022022723A1 - Method and system for determining relevant parameters of a medical operation - Google Patents

Method and system for determining relevant parameters of a medical operation

Info

Publication number
WO2022022723A1
Authority
WO
WIPO (PCT)
Prior art keywords
information, target, target object, optical image, medical
Prior art date
Application number
PCT/CN2021/109902
Other languages
English (en)
French (fr)
Inventor
冯娟 (Feng Juan)
韩业成 (Han Yecheng)
李伟 (Li Wei)
Original Assignee
上海联影医疗科技股份有限公司 (Shanghai United Imaging Healthcare Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN202010751784.7A (published as CN111870268A)
Priority claimed from CN202010786489.5A (published as CN114067994A)
Application filed by 上海联影医疗科技股份有限公司 (Shanghai United Imaging Healthcare Co., Ltd.)
Priority to EP21849927.5A (published as EP4169450A4)
Publication of WO2022022723A1
Priority to US18/157,796 (published as US20230148986A1)

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/06 Diaphragms
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/54 Control of apparatus or devices for radiation diagnosis
    • A61B6/545 Control of apparatus or devices for radiation diagnosis involving automatic set-up of acquisition parameters
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/08 Auxiliary means for directing the radiation beam to a particular spot, e.g. using light beams
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/46 Arrangements for interfacing with the operator or the patient
    • A61B6/461 Displaying means of special interest
    • A61B6/463 Displaying means of special interest characterised by displaying multiple images or images and diagnostic data on one display
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/48 Diagnostic techniques
    • A61B6/488 Diagnostic techniques involving pre-scan acquisition
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/54 Control of apparatus or devices for radiation diagnosis
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/54 Control of apparatus or devices for radiation diagnosis
    • A61B6/542 Control of apparatus or devices for radiation diagnosis involving control of exposure
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/54 Control of apparatus or devices for radiation diagnosis
    • A61B6/547 Control of apparatus or devices for radiation diagnosis involving tracking of position of the device or parts of the device
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/09 Supervised learning
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/02 Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
    • A61B6/03 Computed tomography [CT]
    • A61B6/032 Transmission computed tomography [CT]
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/50 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment specially adapted for specific body parts; specially adapted for specific clinical applications
    • A61B6/502 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment specially adapted for specific body parts; specially adapted for specific clinical applications for diagnosis of breast, i.e. mammography
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G06N3/0442 Recurrent networks, e.g. Hopfield networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]

Definitions

  • The present application relates to methods for determining relevant parameters and, in particular, to a method and system for determining relevant parameters of a medical operation.
  • Radiation equipment such as DR equipment, CT equipment, X-ray machines, linear accelerators, and C-arm machines images and/or treats patients by emitting radiation (such as X-rays, beta-rays, gamma-rays, etc.).
  • A beam limiter is used to set a corresponding opening, and the radiation is irradiated onto the human body through that opening. If the area irradiated through the beam limiter opening does not match the area to be irradiated on the human body, the body receives unnecessary radiation, which may cause harm. Therefore, it is necessary to provide a method for determining the target position information of the beam limiting device, so as to improve the match between the opening of the beam limiter and the area to be irradiated on the human body.
  • The present application provides a method for determining parameters related to medical operations. The method includes: acquiring optical image information of a target object; determining target part information of the target object; and determining, based at least on the optical image information and the target part information, the relevant parameters of the medical operation.
  • the determining the target part information of the target object comprises: acquiring the target part information of the target object.
  • the determining the target part information of the target object includes: processing the optical image information to determine the target part information in the target object.
  • the relevant parameters of the medical operation include the position information to be irradiated on the target object and/or the target position information of the beam limiting device; determining the relevant parameters of the medical operation based at least on the optical image information and the target part information includes: determining the position information to be irradiated on the target object and/or the target position information of the beam limiting device at least according to the optical image information and the target part information.
  • the acquiring the target part information of the target object includes: acquiring a medical image of the target part in the target object;
  • determining the position information to be irradiated on the target object and/or the target position information of the beam limiting device at least according to the optical image information and the target part information includes: determining the position information to be irradiated on the target object and/or the target position information of the beam limiting device at least according to the optical image information and the medical image.
  • acquiring the target part information of the target object further comprises: acquiring protocol information related to the target object, the protocol information at least including the target part information of the target object;
  • determining the position information to be irradiated on the target object and/or the target position information of the beam limiting device according to the optical image information and the target part information includes: determining the position information to be irradiated on the target object and/or the target position information of the beam limiting device at least according to the optical image information and the protocol information.
  • acquiring the target part information of the target object further includes: acquiring label information of a medical image corresponding to the target object.
  • the method further includes: acquiring initial position information of the beam limiting device; when determining the position information to be irradiated on the target object according to the optical image information and the target part information, the method further includes: determining target position information of the beam limiting device according to the to-be-irradiated position information and the initial position information.
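  • The step just described is essentially geometric. The following is a minimal, illustrative sketch (not the patent's specification) of deriving a beam-limiter target position from a to-be-irradiated region and the device's initial position; the coordinate convention, field names, and safety margin are assumptions made for illustration only.

```python
# Hedged sketch: derive beam-limiter target position information from a
# to-be-irradiated region and the device's initial position. Coordinates,
# field names, and the 5 mm margin are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Region:
    x: float       # left edge in the imaging plane (mm)
    y: float       # top edge (mm)
    width: float   # mm
    height: float  # mm

def target_position(to_irradiate: Region, initial: Region,
                    margin: float = 5.0) -> Region:
    """Expand the to-be-irradiated region by a safety margin to obtain the
    beam-limiter opening, and report the travel from the initial position."""
    opening = Region(
        x=to_irradiate.x - margin,
        y=to_irradiate.y - margin,
        width=to_irradiate.width + 2 * margin,
        height=to_irradiate.height + 2 * margin,
    )
    dx = opening.x - initial.x  # horizontal travel required
    dy = opening.y - initial.y  # vertical travel required
    print(f"move beam limiter by ({dx:.1f}, {dy:.1f}) mm")
    return opening
```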
  • the determining the position information to be irradiated on the target object according to the optical image information and the target part information includes: inputting the optical image information and the target part information into a first machine learning model , to determine the location information to be irradiated.
  • the first machine learning model is obtained by: obtaining an initial machine learning model; obtaining initial sample training data, the initial sample training data including historical optical images of historical target objects and historical medical images of one or more target parts on the historical target objects; determining the marking information of the historical optical images according to the fusion result information of the historical optical images and the historical medical images, the marking information including the position information of the target parts in the historical optical images; and using the historical optical images and the historical medical images as input data and the marking information as output data, which are input into the initial machine learning model for training.
  • the determining the target position information of the beam limiting device according to the optical image information and the target part information includes: inputting the optical image information and the target part information into a second machine learning model to determine the target position information of the beam limiting device.
  • the second machine learning model is obtained by: obtaining an initial machine learning model; obtaining initial sample training data, the initial sample training data including historical optical images of historical target objects and historical medical images of one or more target parts on the historical target objects; determining the historical target position information of the corresponding beam limiting device according to the fusion result information of the historical optical images and the historical medical images; and using the historical optical images and the historical medical images as input data and the historical target position information of the corresponding beam limiting device as output data, which are input into the initial machine learning model for training.
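  • To make the two training procedures above concrete, here is a minimal sketch assuming PyTorch, a simple convolutional regressor, and a data loader that already yields the fusion-derived labels; the architecture, tensor shapes, and label encoding are illustrative assumptions, not the patent's method. The same loop trains the first model (labels = to-be-irradiated positions) or the second model (labels = beam-limiter target positions).

```python
# Hedged sketch of the training described above. The first and second
# machine learning models differ only in their labels: to-be-irradiated
# positions vs. beam-limiter target positions, both derived from fusing
# the historical optical and medical images.
import torch
import torch.nn as nn

class RegionRegressor(nn.Module):
    def __init__(self, num_part_classes: int = 32):
        super().__init__()
        self.backbone = nn.Sequential(  # encodes the optical image
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # predict a region (x, y, w, h) from image features + part encoding
        self.head = nn.Linear(32 + num_part_classes, 4)

    def forward(self, image, part_onehot):
        feat = self.backbone(image)
        return self.head(torch.cat([feat, part_onehot], dim=1))

def train(model, loader, epochs: int = 10):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.SmoothL1Loss()
    for _ in range(epochs):
        for image, part_onehot, label in loader:  # label: fusion-derived region
            opt.zero_grad()
            loss = loss_fn(model(image, part_onehot), label)
            loss.backward()
            opt.step()
```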
  • the movement of the beam limiting device is controlled according to the target position information of the beam limiting device.
  • if the target position information is greater than a preset threshold range, prompt information is issued.
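  • As a small illustration of that safety check (the threshold value and prompt mechanism below are assumptions, not the patent's specification):

```python
# Hedged sketch: apply the movement only if it stays within a preset
# threshold range; otherwise issue prompt information.
def apply_or_prompt(dx_mm: float, dy_mm: float, limit_mm: float = 200.0) -> bool:
    """Return True if the movement was applied, False if a prompt was issued."""
    if max(abs(dx_mm), abs(dy_mm)) > limit_mm:
        print("prompt: target position information exceeds the preset threshold range")
        return False
    # ... drive the beam limiting device by (dx_mm, dy_mm) here ...
    return True
```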
  • the position information to be irradiated includes at least two sub-areas to be irradiated; the target position information of the beam limiting device includes sub-target position information corresponding to the two sub-areas to be irradiated.
  • the protocol information related to the target object includes at least two sub-target parts; correspondingly, the sub-areas to be irradiated may be determined according to the sub-target parts in the protocol information.
  • the sub-areas to be irradiated are determined by a preset algorithm; correspondingly, the position information of at least two sub-targets of the beam limiting device is determined according to the sub-areas to be irradiated.
  • the beam limiting device includes a multi-leaf grating collimator.
  • the relevant parameters of the medical operation include marking information of a medical image corresponding to the target object; the target part information includes orientation information of the target part; determining the relevant parameters of the medical operation based at least on the optical image information and the target part information includes: determining the marking information of the medical image based on the orientation information of the target part.
  • the method further comprises: marking the medical image of the target object based on the orientation information of the target part.
  • the method includes: acquiring a medical image of the target object; and marking the marking information in the medical image.
  • marking the medical image of the target object based on the orientation information includes: determining corresponding protocol information based on the orientation information; and marking the medical image of the target object based on the protocol information.
  • the orientation information includes at least one of a left-right orientation, a front-rear orientation, and an up-down orientation of the target part relative to the target object.
  • the optical image information of the target object includes a still image or a video image.
  • the processing of the optical image information is performed by a preset algorithm, wherein the preset algorithm includes a machine learning model; correspondingly, processing the optical image information to determine the orientation information of the target part in the target object includes: inputting the optical image information into the machine learning model, and determining the orientation information of the target part according to the output data of the machine learning model.
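  • A minimal inference sketch for the step above, assuming a pre-trained PyTorch classifier and an assumed label set; both are illustrative, not the patent's model:

```python
# Hedged sketch: feed the optical image into a machine learning model and
# map its output to an orientation label.
import torch

ORIENTATIONS = ["left", "right", "front", "back", "up", "down"]  # assumed labels

def infer_orientation(model: torch.nn.Module, image: torch.Tensor) -> str:
    model.eval()
    with torch.no_grad():
        logits = model(image.unsqueeze(0))  # add a batch dimension
    return ORIENTATIONS[int(logits.argmax(dim=1))]
```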
  • the optical image information is obtained by a camera, and the medical image is an MRI, XR, PET, SPECT, CT, or ultrasound image, or a fusion image of two or more of these.
  • the method further includes: automatically adjusting the radiation source of the medical imaging device based on the optical image information of the target part, so that the target part is in the ray path of the radiation source.
  • marking the medical image of the target object based on the orientation information includes marking with color, text, or graphics.
  • the method further includes manually adjusting the marking of the marked medical image.
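  • As an illustration of text/graphic marking, here is a minimal sketch using Pillow; the label text, position, color, and file names are assumptions for illustration only:

```python
# Hedged sketch: burn an orientation label (e.g., "L" for left) into a
# medical image for display purposes.
from PIL import Image, ImageDraw

def mark_orientation(path: str, label: str, out_path: str) -> None:
    img = Image.open(path).convert("RGB")
    draw = ImageDraw.Draw(img)
    draw.text((10, 10), label, fill=(255, 255, 0))  # top-left corner marker
    img.save(out_path)

mark_orientation("knee.png", "L", "knee_marked.png")  # hypothetical files
```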
  • The present application also provides a system for determining parameters related to medical operations. The system includes an optical image information acquisition module, a target part information determination module, and a medical operation related parameter determination module. The optical image information acquisition module is used to acquire the optical image information of a target object; the target part information determination module is used to determine the target part information of the target object; and the medical operation related parameter determination module is used to determine the relevant parameters of the medical operation based at least on the optical image information and the target part information.
  • the target part information determination module is further configured to acquire target part information of the target object.
  • the target part information determination module is further configured to process the optical image information to determine target part information in the target object.
  • the relevant parameters of the medical operation include position information to be irradiated on the target object and/or target position information of the beam limiting device; the medical operation related parameter determination module is further configured to determine, at least according to the optical image information and the target part information, the position information to be irradiated on the target object and/or the target position information of the beam limiting device.
  • the acquiring the target part information of the target object includes: acquiring a medical image of the target part in the target object;
  • determining the position information to be irradiated on the target object and/or the target position information of the beam limiting device at least according to the optical image information and the target part information includes: determining the position information to be irradiated on the target object and/or the target position information of the beam limiting device at least according to the optical image information and the medical image.
  • acquiring the target part information of the target object further comprises: acquiring protocol information related to the target object, the protocol information at least including the target part information of the target object;
  • determining the position information to be irradiated on the target object and/or the target position information of the beam limiting device according to the optical image information and the target part information includes: determining the position information to be irradiated on the target object and/or the target position information of the beam limiting device at least according to the optical image information and the protocol information.
  • acquiring the target part information of the target object further includes: acquiring label information of a medical image corresponding to the target object.
  • the system further includes: acquiring initial position information of the beam limiting device; when determining the position information to be irradiated on the target object according to the optical image information and the target part information, the system further determines target position information of the beam limiting device according to the to-be-irradiated position information and the initial position information.
  • the determining the position information to be irradiated on the target object according to the optical image information and the target part information includes: inputting the optical image information and the target part information into a first machine learning model , to determine the location information to be irradiated.
  • the first machine learning model is obtained by: obtaining an initial machine learning model; obtaining initial sample training data, the initial sample training data including historical optical images of historical target objects and historical medical images of one or more target parts on the historical target objects; determining the marking information of the historical optical images according to the fusion result information of the historical optical images and the historical medical images, the marking information including the position information of the target parts in the historical optical images; and using the historical optical images and the historical medical images as input data and the marking information as output data, which are input into the initial machine learning model for training.
  • the determining the target position information of the beam limiting device according to the optical image information and the target part information includes: inputting the optical image information and the target part information into a second machine learning model to determine the target position information of the beam limiting device.
  • the second machine learning model is obtained by: obtaining an initial machine learning model; obtaining initial sample training data, the initial sample training data including historical optical images of historical target objects and historical medical images of one or more target parts on the historical target objects; determining the historical target position information of the corresponding beam limiting device according to the fusion result information of the historical optical images and the historical medical images; and using the historical optical images and the historical medical images as input data and the historical target position information of the corresponding beam limiting device as output data, which are input into the initial machine learning model for training.
  • the movement of the beam limiting device is controlled according to the target position information of the beam limiting device.
  • if the target position information is greater than a preset threshold range, prompt information is issued.
  • the position information to be irradiated includes at least two sub-areas to be irradiated; the target position information of the beam limiting device includes sub-target position information corresponding to the two sub-areas to be irradiated.
  • the protocol information related to the target object includes at least two sub-target parts; correspondingly, the sub-areas to be irradiated may be determined according to the sub-target parts in the protocol information.
  • the sub-areas to be irradiated are determined by a preset algorithm; correspondingly, the position information of at least two sub-targets of the beam limiting device is determined according to the sub-areas to be irradiated.
  • the beam limiting device includes a multi-leaf grating collimator.
  • the relevant parameters of the medical operation include marking information of the medical image corresponding to the target object; the target part information includes orientation information of the target part; the medical operation related parameter determination module is further configured to determine the marking information of the medical image based on the orientation information of the target part.
  • the system further marks the medical image of the target object based on the orientation information of the target part.
  • a medical image of the target object is acquired; and the marking information is marked in the medical image.
  • marking the medical image of the target object based on the orientation information includes: determining corresponding protocol information based on the orientation information; and marking the medical image of the target object based on the protocol information.
  • the orientation information includes at least one of a left-right orientation, a front-rear orientation, and an up-down orientation of the target part relative to the target object.
  • the optical image information of the target object includes a still image or a video image.
  • the processing of the optical image information is performed by a preset algorithm, wherein the preset algorithm includes a machine learning model; correspondingly, processing the optical image information to determine the orientation information of the target part in the target object includes: inputting the optical image information into the machine learning model, and determining the orientation information of the target part according to the output data of the machine learning model.
  • the optical image information is obtained by a camera, and the medical image is an MRI, XR, PET, SPECT, CT, or ultrasound image, or a fusion image of two or more of these.
  • the system further includes: automatically adjusting the radiation source of the medical imaging device based on the optical image information of the target part, so that the target part is in the ray path of the radiation source.
  • marking the medical image of the target object based on the orientation information includes marking with color, text, or graphics.
  • the system further includes manually adjusting the marking of the marked medical image.
  • The present application provides a system for determining target position information of a beam limiting device, comprising an optical image information acquisition module, a target part information acquisition module, and a determination module. The optical image information acquisition module is used to acquire optical image information of a target object; the target part information acquisition module is used to acquire the target part information of the target object; and the determination module is used to determine, at least according to the optical image information and the target part information, the position information to be irradiated on the target object and/or the target position information of the beam limiting device.
  • The present application provides an apparatus for determining an operating position, including a processor configured to execute any of the above methods for determining target position information of a beam limiting device.
  • The present application provides a computer-readable storage medium storing computer instructions; after a computer reads the computer instructions in the storage medium, the computer executes any of the above methods for determining target position information of a beam limiting device.
  • The present application provides an orientation marking system for a target part. The system comprises: an image information acquisition module for acquiring image information of a target part of a target object; an orientation information determination module for processing the image information to determine the orientation information of the target part in the target object; and an orientation information marking module for marking the medical image of the target object based on the orientation information.
  • the system further includes a camera for acquiring the image information, and the medical image is an MRI, XR, PET, SPECT, CT, or ultrasound image, or a fusion image of two or more of these.
  • An apparatus for marking the orientation of a target part includes a processor, wherein the processor is configured to execute computer instructions to implement any of the above orientation marking methods for a target part.
  • The present application provides a system for marking the orientation of a target part, characterized in that the system comprises: a camera device for acquiring image information of the target part of the target object; a medical imaging device for acquiring a medical image of the target object; and an information processing device for processing the image information to determine the orientation information of the target part in the target object and marking the orientation information in the medical image.
  • the camera device is fixedly or movably disposed on the medical imaging device.
  • the camera device is a camera.
  • FIG. 1 is a schematic diagram of an application scenario of a system for determining target position information of a beam limiting device according to some embodiments of the present application;
  • FIG. 2 is a block diagram of a system for determining target position information of a beam limiting device according to some embodiments of the present application
  • FIG. 3 is a flowchart of a method for determining target position information of an exemplary beam limiting device according to some embodiments of the present application
  • FIG. 4 is an exemplary flow diagram of a target orientation marking system according to some embodiments of the present specification.
  • FIG. 5 is an exemplary flowchart of marking a target position according to some embodiments of the present application.
  • FIG. 6 is a schematic diagram of a medical image according to some embodiments of the present application.
  • FIG. 7 is a schematic diagram of a medical image according to other embodiments of the present application.
  • The terms "system", "device", "unit", and/or "module" used in this specification are means for distinguishing different components, elements, parts, or assemblies at different levels.
  • Visual recognition is performed by a camera device, and relevant parameters of the medical operation are generated according to the recognition result, alone or in combination with other information.
  • the optical image information of the target object may be acquired by the camera device, then the target part information of the target object is determined, and finally the relevant parameters of the medical operation are determined at least based on the optical image information and the target part information.
  • the relevant parameters of the medical operation may include the position information on the target object to be irradiated and/or the target position information of the beam limiting device.
  • the relevant parameters of the medical operation may include labeling information of the medical image corresponding to the target object.
  • One or more embodiments of the present application relate to a method and system for determining parameters related to medical procedures.
  • the relevant parameters of the medical operation include the position information to be irradiated on the target object and/or the target position information of the beam limiting device
  • the method and system for determining the relevant parameters of the medical operation in one or more embodiments of the present application may also be called a method and system for determining target position information of a beam limiting device.
  • the method for determining the target position information of the beam limiting device can be applied to beam limiters on radiation equipment (eg, DR equipment, CT equipment, etc.).
  • the method can automatically determine the target position information of the beam limiting device at least according to the automatically acquired optical image information of the target object (such as a human body or another experimental subject) and target part information (such as the organ to be irradiated in a medical task), so that the area irradiated onto the target object through the beam limiting device matches the area to be irradiated as closely as possible, ensuring imaging/treatment quality and avoiding unnecessary radiation dose damage to the target object.
  • the method is particularly suitable for use when children are exposed to radiation, so as to protect children against radiation damage.
  • the method and system for determining the relevant parameters of the medical operation in one or more embodiments of the present application may also be referred to as the orientation marking method and system.
  • During actual imaging, the operating physician can also select a corresponding imaging protocol according to the known imaging site, and the medical imaging device can mark the orientation information according to the selected protocol. During this process, errors in judgment, calibration, or protocol selection may occur, which in turn may affect the diagnosis results and subsequent treatment.
  • the method for marking a target part provided by some embodiments of the present application can obtain image information of the target object, process the image information based on a preset algorithm to determine the orientation information of the target part in the target object, and then mark the medical image of the target part based on the orientation information.
  • FIG. 1 is a schematic diagram of an application scenario of a system for determining relevant parameters of medical operations according to some embodiments of the present application.
  • the system 100 for determining relevant parameters of medical operations may include a medical imaging device 110 , a network 120 , a processing device 140 , a storage device 150 and a camera device 160 .
  • the system 100 may also include at least one terminal 130 .
  • the various components in the system 100 may be connected to each other through the network 120 .
  • the camera 160 and the processing device 140 may be connected or communicated through the network 120 .
  • the medical imaging device 110 may perform data acquisition on the target object to obtain a medical image of the target object or a target part of the target object.
  • the medical imaging device may include a digital radiography (DR) imaging device, a computed tomography (CT) scanner, a magnetic resonance imaging (MRI) scanner, a B-mode ultrasound (B-scan ultrasonography) scanner, a thermal texture map (TTM) scanning device, or a positron emission tomography (PET) scanner, etc.
  • the medical imaging device 110 is described by taking a CT scanner as an example.
  • For example, if the system determines, according to the image information obtained by the camera 160, that the target part is the left knee, the target object can lie face up on the scanning bed, and the scanning bed is moved so that the left knee is within the scanning area, so as to obtain a medical image of the left knee.
  • the medical imaging device 110 when the medical imaging device 110 is a CT device, the device includes a beam limiting device (eg, a beam limiter) for limiting the light transmission area of the radiation of the CT device.
  • the medical imaging device 110 may also be any other radiation equipment.
  • the radiation equipment can photograph and/or treat the target object by emitting radiation (eg, X-rays, beta-rays, gamma-rays, etc.).
  • radiation equipment may include, but is not limited to, DR equipment, X-ray machines, linear accelerators, C-arm machines, and the like.
  • the beam limiting device may include a multi-leaf grating collimator, which can adapt to areas to be irradiated of different or irregular shapes, thereby improving the accuracy of the fit and reducing unnecessary radiation dose to the human body.
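  • To illustrate how a multi-leaf collimator can conform to an irregular region, here is a minimal sketch that computes one opening per leaf pair from a binary mask of the area to be irradiated; the mask resolution and leaf geometry are illustrative assumptions, not the patent's design:

```python
# Hedged sketch: for each leaf pair (one horizontal band of the mask),
# open between the leftmost and rightmost columns that need irradiation.
import numpy as np

def leaf_openings(mask: np.ndarray, n_leaf_pairs: int):
    """mask: 2-D boolean array of the region to irradiate.
    Returns a (left, right) column index pair per leaf pair, or None if
    that pair stays closed."""
    bands = np.array_split(mask, n_leaf_pairs, axis=0)
    openings = []
    for band in bands:
        cols = np.where(band.any(axis=0))[0]
        openings.append((int(cols[0]), int(cols[-1])) if cols.size else None)
    return openings
```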
  • the camera 160 may perform data collection on the target object to obtain optical image information of the target object or the target portion of the target object.
  • the camera device 160 may be provided on the medical imaging device 110 , or may be provided independently of the medical imaging device 110 .
  • camera 160 may be an optical device, such as a camera or other image sensor, or the like.
  • the camera 160 may also be a non-optical device, which obtains a map reflecting the shape, size, and other characteristics of the target object from collected distance data.
  • the optical image information collected by the camera device 160 may be a still image or a video image.
  • the camera device 160 includes a camera.
  • Network 120 may include any suitable network capable of facilitating the exchange of information and/or data for system 100 .
  • at least one component of the system 100 eg, the camera device 160 , the processing device 140 , the storage device 150 , the medical imaging device 110 , the at least one terminal 130
  • the processing device 140 may obtain the optical image information of the target object or the target part of the target object from the camera 160 through the network 120 .
  • the processing device 140 may obtain user (eg, doctor) instructions from the at least one terminal 130 through the network 120 .
  • the network 120 may be or include a public network (e.g., the Internet), a private network (e.g., a local area network (LAN)), a wired network, a wireless network (e.g., an 802.11 network, a Wi-Fi network), a frame relay network, a virtual private network (VPN), a satellite network, a telephone network, routers, hubs, switches, server computers, and/or any combination thereof.
  • the network 120 may include a cable network, a wired network, a fiber optic network, a telecommunications network, an intranet, a wireless local area network (WLAN), a metropolitan area network (MAN), a public switched telephone network (PSTN), a Bluetooth™ network, a ZigBee™ network, a near field communication (NFC) network, etc., or any combination thereof.
  • network 120 may include at least one network access point.
  • network 120 may include wired and/or wireless network access points, such as base stations and/or Internet exchange points, through which at least one component of the beam limiting device target position information determination system 100 may connect to the network 120 to exchange data and/or information.
  • At least one terminal 130 may be in communication connection with at least one of the camera device 160 , the medical imaging device 110 , the processing device 140 and the storage device 150 .
  • at least one terminal 130 may also obtain the position information to be irradiated on the target object and/or the target position information of the beam limiting device from the processing device 140 and display and output it.
  • at least one terminal 130 may acquire an operation instruction of the user, and then send the operation instruction to the camera device 160 and/or the medical imaging device 110 to control them (e.g., adjusting the image capture angle, setting the working parameters of the beam limiting device, etc.).
  • At least one terminal 130 may obtain the orientation analysis result of the target part from the processing device 140, or obtain the collected image information from the camera 160. For example, at least one terminal 130 can acquire the user's operation instruction, and then send the operation instruction to the medical imaging device 110 or the camera device 160 to control it (e.g., adjusting the image acquisition angle, setting the working parameters of the medical imaging device, etc.).
  • At least one terminal 130 may include a mobile device 131, a tablet computer 132, a laptop computer 133, the like, or any combination thereof.
  • mobile device 131 may include a mobile phone, a personal digital assistant (PDA), a medical device, the like, or any combination thereof.
  • at least one terminal 130 may include input devices, output devices, and the like.
  • the input device may include alphanumeric and other keys for inputting control commands to control the camera device 160 and/or the medical imaging device 110 .
  • the input device may be a keyboard, a touch screen (e.g., with haptic or tactile feedback), voice input, gesture input, or any other similar input mechanism.
  • Input information received through the input device may be transmitted, eg, via a bus, to the processing device 140 for further processing.
  • Other types of input devices may include cursor control devices such as a mouse, trackball, or cursor direction keys, among others.
  • the output device may include a display, a speaker, a printer, etc., or any combination thereof, for outputting the optical image information of the target object collected by the camera device 160 and/or the medical image detected by the medical imaging device 110 .
  • at least one terminal 130 may be part of processing device 140 .
  • Processing device 140 may process data and/or instructions obtained from camera 160 , storage device 150 , at least one terminal 130 , or other components of system 100 .
  • the processing device 140 may process the optical image information of the target object obtained from the camera 160 to obtain the posture information of the target object.
  • the posture information may include, but is not limited to, the height, body width, and bone joint point information of the target object.
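  • As a small illustration, coarse posture information can be derived from detected bone joint points; the keypoint format (and whatever detector produces it) is an assumption made for illustration only:

```python
# Hedged sketch: estimate height and body width in pixels from an
# (N, 2) array of detected joint coordinates.
import numpy as np

def posture_from_keypoints(keypoints: np.ndarray) -> dict:
    xs, ys = keypoints[:, 0], keypoints[:, 1]
    return {
        "height_px": float(ys.max() - ys.min()),  # head-to-foot extent
        "width_px": float(xs.max() - xs.min()),   # widest body extent
    }
```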
  • the processing device 140 may obtain optical image information of the target object from the camera 160 and process it to obtain orientation information of the target part of the target object.
  • the processing device 140 may acquire a pre-stored instruction from the storage device 150, and execute the instruction to implement the method for determining the target position information of the beam limiting device as described below.
  • processing device 140 may be a single server or a server group. Server groups can be centralized or distributed. In some embodiments, processing device 140 may be local or remote. For example, processing device 140 may access information and/or data from camera 160, storage device 150, and/or at least one terminal 130 via network 120. As another example, processing device 140 may be directly connected to camera 160, at least one terminal 130, and/or storage device 150 to access information and/or data. In some embodiments, processing device 140 may be implemented on a cloud platform. For example, cloud platforms may include private clouds, public clouds, hybrid clouds, community clouds, distributed clouds, inter-clouds, multi-clouds, etc., or any combination thereof.
  • the medical imaging device 110 may operate based on the position information to be irradiated on the target object and/or the target position information of the beam limiting device obtained by the processing device 140. For example, the medical imaging device 110 may set the position and opening size of the beam limiting device according to the target position information of the beam limiting device (e.g., the position of the beam limiting device, the size of the opening of the beam limiting device, etc.) obtained by the processing device 140. In some embodiments, the medical imaging device 110 may also determine the target position information of the beam limiting device according to the position information to be irradiated on the target object obtained by the processing device 140 and the initial position information of the beam limiting device.
  • the medical imaging device 110 may scan based on the orientation information of the target part on the target object determined by the processing device 140 .
  • the medical imaging device 110 may scan the target part according to the orientation information (e.g., the left knee) of the target part of the target object determined by the processing device 140, so as to obtain a medical image of the target part.
  • Storage device 150 may store data, instructions, and/or any other information.
  • the storage device 150 may store optical image information collected by the camera device 160 and medical images collected by the medical imaging device 110 .
  • storage device 150 may store data obtained from camera 160 , at least one terminal 130 , and/or processing device 140 .
  • the storage device 150 may store a historical image information database of the target object, each historical image in the historical image information database corresponding to an optical image of the target object.
  • the storage device 150 may further store protocol information related to the target object, the protocol information includes at least target part information of the target object, and the processing device 140 may obtain the target part information of the target object based on the protocol information.
  • the storage device 150 may further store the target position information of the beam limiting device; the medical imaging device 110 may acquire the pre-stored target position information of the beam limiting device from the storage device 150 and control the movement of the beam limiting device according to that target position information.
  • the storage device 150 may also store a preset threshold range and prompt information; the processing device 140 may make a judgment based on the stored prompt information, the preset threshold range, and the target position information, and if the target position information is greater than the preset threshold range, a prompt message is issued.
  • the storage device 150 may store medical images acquired by the medical imaging device 110 . In some embodiments, storage device 150 may store data obtained from medical imaging device 110 , at least one terminal 130 , and/or processing device 140 . In some embodiments, the storage device 150 may further store the correspondence between the target part and the orientation information, and the processing device 140 may obtain orientation information of the target part based on the correspondence and the processed target part.
  • storage device 150 may store data and/or instructions used by processing device 140 to perform or use to accomplish the example methods described herein.
  • storage device 150 may include mass storage, removable storage, volatile read-write memory, read-only memory (ROM), the like, or any combination thereof.
  • Exemplary mass storage may include magnetic disks, optical disks, solid state disks, and the like.
  • Exemplary removable storage may include flash drives, floppy disks, optical disks, memory cards, compact disks, magnetic tapes, and the like.
  • Exemplary volatile read-write memory may include random access memory (RAM).
  • storage device 150 may be implemented on a cloud platform.
  • storage device 150 may be connected to network 120 to communicate with at least one other component in system 100 (eg, processing device 140, at least one terminal 130). At least one component in system 100 may access data or instructions stored in storage device 150 via network 120 . In some embodiments, storage device 150 may be part of processing device 140 .
  • the storage device 150 may be a data storage device including a cloud computing platform, such as a public cloud, a private cloud, a community and hybrid cloud, and the like.
  • the system for determining parameters related to medical operations may further include an optical image information acquisition module, a target site information determination module, and a medical operation related parameter determination module.
  • the optical image information acquisition module is used to acquire the optical image information of the target object.
  • the target part information determination module is used to determine the target part information of the target object.
  • the medical operation related parameter determination module is configured to determine the medical operation related parameters based on at least the optical image information and the target site information.
  • the target part information determination module is further configured to acquire target part information of the target object. In some embodiments, the target part information determination module is further configured to process the optical image information to determine target part information in the target object. In some embodiments, when the relevant parameters of the medical operation include the position information to be irradiated on the target object and/or the target position information of the beam limiting device, the medical operation-related parameter determination module is further configured to: at least according to the optical The image information and the target position information determine the position information to be irradiated on the target object and/or the target position information of the beam limiting device.
  • the medical operation related parameter determination module is further configured to determine the marking information of the medical image based on the orientation information of the target part.
  • the system and method for determining relevant parameters of a medical operation are exemplarily described below with reference to different scenarios.
  • FIGS. 2-3 provide further exemplary descriptions of the system and method for determining target position information of a beam limiting device.
  • FIGS. 4-7 provide further exemplary descriptions of the system and method for orientation marking of the target part.
  • FIG. 2 is a block diagram of a system 200 for determining target position information of a beam limiting device according to some embodiments of the present application.
  • the target position information determination system 200 may include an optical image information acquisition module 210, a target part information acquisition module 220, and a target position information determination module 230.
  • the target part information acquisition module is included in the target part information determination module.
  • the target location information determination module includes a medical operation-related parameter determination module.
  • the optical image information acquisition module 210 may be used to acquire optical image information of the target object.
  • the target part information acquisition module 220 may be configured to acquire target part information of the target object. In some embodiments, the target part information acquisition module 220 may also be used to acquire protocol information related to the target object. The protocol information includes at least target part information of the target object. In some embodiments, the target part information acquisition module 220 may also be used to acquire a medical image of the target part in the target object.
  • the target position information determination module 230 may be configured to determine the position information to be irradiated on the target object and/or the target position information of the beam limiting device according to at least the optical image information and the target part information. In some embodiments, the target position information determination module 230 may be further configured to determine the position information to be irradiated on the target object and/or the target position information of the beam limiting device according to at least the optical image information and the medical image. . In some embodiments, the target position information determination module 230 may be further configured to determine the position information to be irradiated on the target object and/or the target position information of the beam limiting device according to at least the optical image information and the protocol information .
  • the target position information determination module 230 may be configured to input the optical image information and the target part information into the second machine learning model to determine the target position information of the beam limiting device. In some embodiments, the target position information determination module 230 may be configured to determine target position information of the beam limiting device according to the to-be-irradiated position information and the initial position information. In some embodiments, the target location information determination module 230 may be configured to input the optical image information and the target part information into the first machine learning model to determine the location information to be irradiated.
  • the target location information determination system 200 further includes a control module.
  • the control module can be used to determine whether the target location information is greater than a preset threshold range. If the target position information is less than or equal to a preset threshold range, the control module may be configured to control the movement of the beam limiting device according to the target position information of the beam limiting device. If the target location information is greater than the preset threshold range, the control module can be used to send out prompt information.
  • the system 200 for determining target position information further includes an initial position acquisition module for acquiring initial position information of the beam limiting device.
  • the system 200 for determining target position information further includes a training module for obtaining the first machine learning model as follows: obtaining an initial machine learning model; obtaining initial sample training data, where the initial sample training data includes historical optical images of historical target objects and historical medical images of one or more target parts on the historical target objects; determining the marking information of the historical optical images according to the fusion result information of the historical optical images and the historical medical images, the marking information including the position information of the target parts in the historical optical images; and using the historical optical images and the historical medical images as input data and the marking information as output data or reference standard, which are input into the initial machine learning model for training.
  • the training module may also be used to obtain the second machine learning model as follows: obtaining an initial machine learning model; obtaining initial sample training data, the initial sample training data including historical optical images of historical target objects and historical medical images of one or more target parts on the historical target objects; determining the historical target position information of the corresponding beam limiting device according to the fusion result information of the historical optical images and the historical medical images; and using the historical optical images and the historical medical images as input data and the historical target position information of the corresponding beam limiting device as output data or reference standard, which are input into the initial machine learning model for training.
  • system and its modules shown in FIG. 2 may be implemented in various ways.
  • the system and its modules may be implemented in hardware, software, or a combination of software and hardware.
  • the hardware part can be realized by using dedicated logic;
  • the software part can be stored in a memory and executed by a suitable instruction execution system, such as a microprocessor or specially designed hardware.
  • the methods and systems described above may be implemented using computer-executable instructions and/or embodied in processor control code, provided, for example, on a carrier medium such as a disk, CD, or DVD-ROM, on a programmable memory such as read-only memory (firmware), or on a data carrier such as an optical or electronic signal carrier.
  • the system and its modules of the present application may be implemented not only by hardware circuits such as very-large-scale integrated circuits or gate arrays, semiconductors such as logic chips and transistors, or programmable hardware devices such as field-programmable gate arrays and programmable logic devices, but also by software executed by various types of processors, or by a combination of the above hardware circuits and software (e.g., firmware).
  • the optical image information acquisition module 210, the target part information acquisition module 220, and the target position information determination module 230 disclosed in FIG. 2 may be different modules in one system, or one module may implement the functions of two or more of the above-mentioned modules.
  • each module may share one storage module, or each module may have its own storage module. Such variations are all within the protection scope of the present application.
  • the orientation marking system of the target part in some embodiments of the present application may include an optical image information acquisition module and an orientation information determination module.
  • the orientation marking system of the target site may further include an orientation information marking module.
  • the optical image information acquisition module is used to acquire the optical image information of the target part of the target object.
  • the azimuth information determination module is used for processing the optical image information to determine the azimuth information of the target part in the target object.
  • An orientation information marking module configured to mark the medical image of the target object based on the orientation information.
  • the orientation information marking module is further configured to: acquire a medical image of the target part; and mark the orientation information in the medical image.
  • the orientation information marking module is further configured to: determine corresponding protocol information based on the orientation information; and mark the medical image of the target part based on the protocol information.
  • FIG. 3 is a flowchart of an exemplary method for determining target position information of a beam limiting device according to some embodiments of the present application. Specifically, the determination method 300 may be executed by the system 200 for determining the target position information. As shown in FIG. 3, the method 300 for determining the target position information may include:
  • Step 310 obtaining optical image information of the target object. Specifically, this step 310 may be performed by the optical image information acquisition module 210 .
  • the target object can be understood as the object to be irradiated, which can include a human body or other experimental objects.
  • the other experimental subjects may include other living animals, or non-living experimental models.
  • the optical image information may be visible light image information of the target object.
  • the optical image information may be a visible-light whole-body image of a human body or other experimental subject, or a video that can reflect the whole-body image of the human body or other experimental subject.
  • the optical image information acquisition module 210 may acquire the optical image information of the target object through a camera device.
  • the camera device may be fixedly arranged on the medical imaging device, or may be arranged at a fixed position outside the medical imaging device. This specification does not specifically limit the fixed position of the imaging device, as long as the imaging device can acquire the whole body image of the target object through one or more pictures.
  • Step 320 Obtain target part information of the target object. Specifically, this step 320 may be performed by the target part information acquisition module 220 .
  • the target site refers to the organ to be irradiated on the target object in the medical task.
  • the target site information refers to information that can reflect the organ to be irradiated.
  • the target site information may be the name of the organ to be irradiated.
  • the target site information may be specific location information of the organ to be irradiated.
  • the target part information acquisition module 220 may acquire protocol information related to the target object, and acquire target part information of the target object according to the protocol information, wherein the protocol information may include target part information of the target object.
  • the target part information acquisition module 220 may obtain a medical image of the target part of the target object, and the doctor determines the target part of the target object from the medical image.
  • the target part information acquisition module 220 may acquire label information of the medical image corresponding to the target object, and then determine the target part of the target object according to the label information.
  • the medical image corresponding to the target object includes the target shooting part on the target object, and the label information is used to reflect the name of the target shooting part and the orientation of the target shooting part relative to the target object. More descriptions of labeling information of medical images can be found elsewhere in this specification, eg, at least part of the contents of FIGS. 4-7 .
  • the target part information acquisition module 220 may acquire the target part information of the target object in any other manner.
  • for example, the target object may directly inform the doctor of the target part information.
  • the medical image may be understood as a medical image acquired by a medical imaging device.
  • the medical imaging device 110 may include, but is not limited to, DR equipment, CT equipment, X-ray machines, linear accelerators, C-arm machines, and the like.
  • the target part information of the target object in step 320 may also be determined by processing the optical image information of the target object.
  • Step 330 Determine target position information of the beam limiting device. Specifically, this step 330 may be performed by the target location information determination module 230 .
  • the processing device may process the optical image information and the target part information to directly determine the target position information of the beam limiting device; see step 336 for details;
  • the processing device can also process the optical image information and the target part information to determine the position information to be irradiated, and then determine the target position information of the beam limiting device based on the position information to be irradiated and the initial position information of the beam limiting device; see steps 332 and 334.
  • Step 332 Determine the position information to be irradiated according to the optical image information and the target site information.
  • the to-be-irradiated position may be understood as the position of the area on the target object that needs to be irradiated, and may also be called the position of the to-be-irradiated area.
  • the location information to be irradiated refers to information that can reflect the location of the area to be irradiated.
  • the position information to be irradiated may be the position information of the organ to be irradiated of the target object determined on the optical image.
  • the position information to be irradiated may include one or more of the position of the organ to be irradiated reflected on the optical image, the size of the area of the organ to be irradiated reflected on the optical image, and the like.
  • the processing device may process the optical image based on the target part information, and then output to-be-irradiated position information corresponding to the target part.
  • the processing device may perform image fusion processing on the optical image and the medical image, and determine the position of the target part reflected on the surface of the target object on the optical image. For example, the outline of the target site can be displayed directly on the optical image.
  • the processing device may perform processing and analysis on the optical image to determine the approximate organ positions of the target object in the optical image, and then, based on the target part in the protocol information, determine the position information of the target part reflected on the surface of the target object. For example, contours or regions of organs corresponding to target parts in the protocol information can be displayed directly in the optical image.
  • the processing device may use a preset algorithm to process one or more of the above steps.
  • the preset algorithm may include, but is not limited to, a machine learning model and the like.
  • the processing device can use the machine learning model to directly determine the position of the target part reflected on the surface of the target object, that is, the position to be irradiated, based on the optical image information and the target part information.
  • the preset algorithm may be the first machine learning model.
  • when the target part information includes a medical image, the optical image of the target object and the medical image of the target part can be input into the first machine learning model, and the first machine learning model can directly output the position information to be irradiated.
  • the position information to be irradiated output by the first machine learning model may include an optical image with position markers.
  • the position information to be irradiated output by the first machine learning model may include coordinate information of the position to be irradiated.
  • when the target part information is obtained from corresponding protocol information, the protocol information can be processed to extract the target part information, the extracted target part information can be subjected to feature processing, and the resulting feature information together with the optical image of the target object can be input into the first machine learning model.
  • the first machine learning model can directly output an optical image with a position mark or directly output the coordinate information of the position to be irradiated. Specifically, the training process of the first machine learning model is described later in detail.
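  • By way of illustration only, the following Python sketch shows what such an inference call might look like, assuming a hypothetical pre-trained TorchScript artifact first_model.pt that takes an optical image and a target-part feature vector and returns a bounding box; the file name, tensor shapes, and helper name are illustrative, not part of this application.

```python
import torch

# Hypothetical trained "first machine learning model": maps an optical image
# of the target object plus target-part feature information to the position
# to be irradiated (here a bounding box in optical-image pixel coordinates).
first_model = torch.jit.load("first_model.pt")  # assumed pre-trained artifact
first_model.eval()

def predict_irradiation_position(optical_image: torch.Tensor,
                                 part_features: torch.Tensor) -> torch.Tensor:
    """Return [x_min, y_min, x_max, y_max] of the area to be irradiated."""
    with torch.no_grad():
        # Add a batch dimension of 1; the model is assumed to take two inputs.
        box = first_model(optical_image.unsqueeze(0),
                          part_features.unsqueeze(0))
    return box.squeeze(0)

# Example call with dummy data: a 3x512x512 optical image and a 16-dim
# feature vector encoding the target part from the protocol information.
box = predict_irradiation_position(torch.rand(3, 512, 512), torch.rand(16))
```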
  • Step 334 Determine the target position information of the beam limiting device according to the position information to be irradiated and the initial position information of the beam limiting device.
  • the initial position of the beam limiting device refers to the position before the beam limiting device moves when irradiation has not yet started.
  • the initial position information of the beam limiting device refers to information that can reflect the initial position of the beam limiting device.
  • the initial position information of the beam limiting device can be understood as the distance between the beam limiting device and the target object to be irradiated before the beam limiting device moves.
  • the target position of the beam limiting device refers to a position that the beam limiting device needs to reach after moving, and the position corresponds to the position information to be irradiated.
  • the target position information of the beam limiting device refers to information that can reflect the target position of the beam limiting device.
  • the target position information of the beam-limiting device may include the position of the blades after the beam-limiting device reaches the target position (e.g., the spatial coordinate position of the blades after the beam-limiting device reaches the target position), or the size of the opening in the end face of the beam-limiting device (e.g., the positions of the blades on the end face of the beam-limiting device after the beam-limiting device reaches the target position), etc.
  • the initial position information of the beam limiting device may be obtained through protocol information related to the target object, and the protocol information may include initial position information of the beam limiting device.
  • the initial position information of the beam limiting device may also be acquired in other ways.
  • Other methods may include automatic acquisition methods and manual acquisition methods.
  • the automatic acquisition method may include the system directly acquiring corresponding measurement data from distance detection sensors, laser detection devices, or infrared detection devices.
  • Manual acquisition methods may include, but are not limited to, a doctor manually measuring the position of the blade on the beam limiting device with an additional laser detection device, a doctor manually measuring the position of the blade on the beam limiting device with an additional infrared detector, and the like.
  • the doctor can place the laser detection device in a suitable position and emit laser light toward the beam limiting device; the laser receiver on the laser detection device then receives the laser signal, so that the laser detection device can determine the position of the blade on the beam limiting device, and the doctor manually inputs the position of the blade to the determination module 230 through an external input device.
  • External input devices may include, but are not limited to, a mouse, a keyboard, and the like.
  • the initial position information of the beam limiting device may also be preset in the algorithm.
  • the determining module 230 may determine the target position information of the beam limiting device according to the position information to be irradiated and the initial position information of the beam limiting device. Specifically, the determining module 230 determines the distance between the beam-limiting device and the target object from the initial position of the beam-limiting device, and then calculates the target position information based on the position information of the area to be irradiated on the target object and that distance, so that the area of the target object irradiated by rays passing through the beam limiting device at the target position matches the area to be irradiated as closely as possible.
  • the processing device can accurately determine the position information to be irradiated according to the optical image information and the target part information, and then accurately determine the target position information of the beam limiting device according to the position information to be irradiated and the initial position information of the beam limiting device.
  • This embodiment is suitable for the situation that the initial position of the beam limiting device will change frequently.
  • in this approach, the position of the area to be irradiated is determined first, and the corresponding target position of the beam limiting device is then calculated based on the current position of the beam limiting device; this can adapt to more scenarios with different initial positions of the beam limiting device and offers greater flexibility. A geometric sketch of this calculation follows.
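  • As a purely illustrative aside, the geometric step of scaling the area to be irradiated back to the collimator plane can be sketched with similar triangles; the distances below (source-to-object and source-to-collimator) are assumed inputs, not values defined in this application.

```python
def collimator_opening(field_w_mm: float, field_h_mm: float,
                       sod_mm: float, scd_mm: float) -> tuple:
    """Scale the desired field size at the patient surface back to the
    collimator blade plane using similar triangles (beams diverge linearly
    from the focal spot).

    field_w_mm, field_h_mm: size of the area to be irradiated at the patient.
    sod_mm: source-to-object distance (focal spot to patient surface).
    scd_mm: source-to-collimator distance (focal spot to blade plane).
    """
    scale = scd_mm / sod_mm
    return field_w_mm * scale, field_h_mm * scale

# Example: a 300 x 200 mm field at 1000 mm SOD, with blades 200 mm from the
# source, requires roughly a 60 x 40 mm collimator opening.
print(collimator_opening(300, 200, sod_mm=1000, scd_mm=200))
```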
  • Step 336 Determine the target position information of the beam limiting device according to the optical image information and the target part information.
  • the target position of the beam limiting device is the same as the target position in step 334 , and details are not repeated here. For details, please refer to the corresponding part description in step 334 .
  • the processing device may use a preset algorithm to process one or more of the above steps.
  • the preset algorithm may include any algorithm capable of determining the target position information of the beam limiting device. Such an algorithm can be understood as preset instructions that reflect the correspondence between the optical image information and the target part information on the one hand and the target position information of the beam limiting device on the other.
  • the determination module 230 may input the optical image information and the target part information into the preset algorithm, and the preset algorithm directly outputs the target position information of the corresponding beam limiting device.
  • the initial position of the beam limiting device needs to be considered in advance, and if the initial position changes, it needs to be adjusted accordingly in the algorithm.
  • the preset algorithm may include, but is not limited to, machine learning models and the like.
  • the preset algorithm may be a second machine learning model, and the optical image information and the target part information are input into the second machine learning model to determine the target position information of the beam limiting device.
  • when the initial position of the beam limiting device during actual irradiation is consistent with the initial position during training, the determination module 230 may input the optical image information and the target part information into the second machine learning model, and the second machine learning model outputs the coordinate values of the target position of the beam limiting device, thereby directly determining the target position information of the beam limiting device.
  • the optical image of the target object and the medical image of the target part can be input into the second machine learning model, and the second machine learning model can directly output the target position information of the beam limiting device, for example, the target position coordinates of the beam limiting device.
  • when the target part information is obtained from corresponding protocol information, the protocol information can be processed to extract the target part information, the extracted target part information can be subjected to feature processing, and the resulting feature information together with the optical image of the target object can be input into the second machine learning model; correspondingly, the second machine learning model can directly output the target position information of the beam limiting device, for example, the coordinate information of the target position of the beam limiting device.
  • the training process of the second machine learning model is described later in detail.
  • the beam limiting device can be directly controlled to move to the corresponding target position based on the target position information, as detailed in step 360 .
  • the determined target position may also be judged; if the target position is greater than a preset threshold range, a prompt message is issued to inform that the current beam limiting device cannot meet the shooting requirements of the target position; see steps 340 and 350 for details.
  • the judgment and reminder scheme based on the preset threshold range can prevent the situation in which the beam limiting device irradiates the target part even though it cannot cover the entire target part, which would make shooting impossible.
  • in some cases, the beam limiting device can only obtain a partial medical image of the target part in a single shot. The shooting of the target part then needs to be divided into at least two separate shots, and the medical images obtained from these shots are stitched together to obtain a complete medical image of the target part. A minimal stitching sketch is given below.
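  • A minimal stitching sketch, assuming the two sub-images are already registered and share a known number of overlapping rows; real systems would perform registration first, which is omitted here.

```python
import numpy as np

def stitch_vertical(top: np.ndarray, bottom: np.ndarray,
                    overlap: int) -> np.ndarray:
    """Stitch two already-registered grayscale sub-images that share
    `overlap` rows, blending linearly across the overlap to hide the seam."""
    w = np.linspace(1.0, 0.0, overlap)[:, None]          # blend weights
    seam = top[-overlap:] * w + bottom[:overlap] * (1 - w)
    return np.vstack([top[:-overlap], seam, bottom[overlap:]])

# Dummy sub-images of a long target part (e.g. the spine): two 400-row
# shots with 50 overlapping rows yield a 750-row stitched image.
full = stitch_vertical(np.random.rand(400, 256), np.random.rand(400, 256), 50)
```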
  • whether the target part needs to be spliced and divided into several segments for splicing may be determined by a processing device, or may be determined according to protocol information.
  • when the target part needs to be divided into at least two parts for irradiation, the position information to be irradiated includes at least two sub-areas to be irradiated; correspondingly, the target position information of the beam limiting device also includes sub-target position information corresponding to the sub-areas to be irradiated.
  • for example, the target part may need to be divided into two parts for shooting, where the first shot covers the upper half of the target part and the second shot covers the lower half of the target part.
  • Each to-be-irradiated area can be regarded as the sub-to-be-irradiated area.
  • the two sets of target position information of the beam limiting device determined respectively based on the two sub-areas to be irradiated may be regarded as sub-target position information.
  • the protocol information may include target part information and two sub-target parts corresponding to the target part.
  • the sub-areas to be irradiated may be determined according to the sub-target site in the protocol information.
  • the processing device may process the optical image of the target object based on the sub-target part information of the target part in the protocol information to obtain the sub-areas to be irradiated corresponding to the sub-target parts. For the specific process, please refer to the relevant description of step 332 in this specification.
  • whether it is necessary to divide the shooting of the target part into multiple shots, and into several shots can be automatically determined by the processing device.
  • the processing device can automatically plan several sub-regions to be irradiated corresponding to the target part based on the image information and the target part information.
  • target position information corresponding to each sub-area to be irradiated may be determined separately based on the position information of that sub-area, and these several pieces of target position information may be regarded as sub-target position information. A detailed description of determining the target position of the beam limiting device based on the area to be irradiated can be found elsewhere in this specification. A sketch of splitting a region into sub-areas follows.
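  • The sketch below illustrates one simple way to plan sub-areas, assuming a maximum field height the beam limiting device can cover in one shot; the bounding-box format and the vertical-split strategy are illustrative assumptions.

```python
import math

def split_into_subareas(box, max_field_h):
    """Split a to-be-irradiated bounding box (x0, y0, x1, y1) into the
    minimum number of equal-height vertical sub-areas, each no taller than
    the largest field coverable in a single shot."""
    x0, y0, x1, y1 = box
    n = math.ceil((y1 - y0) / max_field_h)        # number of shots needed
    step = (y1 - y0) / n
    return [(x0, round(y0 + i * step), x1, round(y0 + (i + 1) * step))
            for i in range(n)]

# A 900-unit-tall region with a 500-unit maximum field needs two shots.
print(split_into_subareas((100, 0, 400, 900), max_field_h=500))
```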
  • Step 340 Determine whether the target location information is greater than a preset threshold range.
  • the preset threshold range refers to the range of the target site that can be covered by the radiation beam emitted by the beam limiting device.
  • the preset threshold range may be stored in the storage device 150 .
  • the preset threshold range may be obtained according to the doctor's past experience.
  • the preset threshold range may be set according to the physical information of the target object. For example, for a target object whose height or body width is within a certain range, the corresponding threshold range may be the first group. For target objects whose height (or body width, etc.) is in other ranges, the corresponding threshold range may be the second group.
  • a threshold range corresponding to the physical information of the target object may be used as the preset threshold range by searching the target part database of the beam limiting device according to the physical information of the target object.
  • if the target position information is less than or equal to the preset threshold range, the determination system 200 may perform step 360.
  • if the target position information is greater than the preset threshold range, the determination system 200 may perform step 350. This branch is sketched below.
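  • The branch in steps 340-360 can be summarized in a short control-flow sketch; the scalar comparison below is a simplification of the range check described above, and the return strings stand in for the real control and prompt actions.

```python
def dispatch(target_position: float, threshold: float) -> str:
    """Mirror steps 340/350/360: move the collimator when the computed
    target position is within the reachable range, otherwise prompt."""
    if target_position <= threshold:
        # Step 360: drive the beam limiting device to the target position.
        return "move_collimator"
    # Step 350: alert the operator, including by how much the request
    # exceeds the reachable range (useful when adjusting the device later).
    return f"prompt: exceeds threshold by {target_position - threshold:.1f}"
```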
  • Step 350 sending a prompt message.
  • the processing device can send out prompt information to inform the medical staff that the current beam limiting device cannot reach the target position determined by the system calculation.
  • the medical staff can suspend the shooting, and adjust the beam limiting device according to the recorded information in the prompt information.
  • the prompt information may include whether the preset threshold range is exceeded, how much it exceeds, and the specific content of the target position information, so that reference can be made when adjusting the beam limiting device in the future.
  • the prompt information may include one or more of text prompts, voice prompts, video prompts, light prompts, and the like.
  • the doctor can quickly find the problem, stop the follow-up shooting operation in time, and adjust the beam limiting device according to the recorded information in the prompt information, so as to improve the work efficiency.
  • Step 360 Control the movement of the beam limiting device according to the target position information of the beam limiting device.
  • the determination system 200 may control the movement of the beam limiting device based on the target position information. For example, the determination system 200 may control the beam limiting device to move as a whole from the initial position according to the coordinate values of the target position of the beam limiting device. For another example, after the beam-limiting device moves to the target position, the determination system 200 can control the opening positions of the blades on the end face of the beam-limiting device, so that the area of the target object irradiated by rays passing through the beam limiting device at the target position matches the area to be irradiated as closely as possible.
  • the determination system 200 may obtain the first machine learning model by: the determination system 200 obtains the initial machine learning model. In some embodiments, the determination system 200 may obtain the initial machine learning model from the storage device 150 via the network 120 .
  • the initial machine learning model may include one or any combination of a DNN model, a CNN model, an RNN model, an LSTM network model, and the like.
  • the determination system 200 obtains initial sample training data. In some embodiments, determination system 200 may obtain initial sample training data from storage device 150 via network 120 .
  • the initial sample training data may include historical optical images of historical target objects, and historical medical images of one or more target parts on the historical target objects. Historical optical images refer to visible-light images of historical target objects that have already been captured.
  • the historical medical image refers to a medical image corresponding to one or more target organs of a historical target object captured by a medical imaging device.
  • Medical imaging devices may include, but are not limited to, DR equipment, CT equipment, X-ray machines, linear accelerators, and C-arm machines.
  • historical medical images may be images obtained by a CT device taking a target site.
  • the determination system 200 determines the marker information of the historical optical image according to the fusion result information of the historical optical image and the historical medical image.
  • the fusion result information refers to the positional correspondence of the target part between the historical optical image and the historical medical image.
  • for example, if the historical medical image is an X-ray image of the lung and the historical optical image is a whole-body visible-light image of the target object, the fusion result information may be the position on the historical optical image that corresponds to the target part in the historical medical image.
  • the marker information may include location information of the target site in the historical optical image.
  • the determination system 200 uses the historical optical images and historical medical images as input data and the marking information as output data, and inputs them into the initial machine learning model for training to obtain the trained first machine learning model. A toy training sketch is given below.
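  • A toy end-to-end training sketch under strong simplifying assumptions: random tensors stand in for the historical optical images, medical-image-derived part features, and marked boxes, and the tiny two-branch network is illustrative rather than the architecture of this application.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-ins for the historical training triples described above.
opt_imgs = torch.rand(64, 3, 128, 128)   # historical optical images
part_feats = torch.rand(64, 16)          # features from historical medical images
boxes = torch.rand(64, 4)                # marked to-be-irradiated positions

class FirstModel(nn.Module):
    """Tiny two-branch network: a CNN for the optical image and a fusion
    head that regresses a 4-value bounding box."""
    def __init__(self):
        super().__init__()
        self.cnn = nn.Sequential(nn.Conv2d(3, 8, 3, stride=2), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(8 + 16, 4)

    def forward(self, img, feat):
        return self.head(torch.cat([self.cnn(img), feat], dim=1))

model, loss_fn = FirstModel(), nn.SmoothL1Loss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loader = DataLoader(TensorDataset(opt_imgs, part_feats, boxes), batch_size=16)
for img, feat, box in loader:            # one pass over the toy data
    opt.zero_grad()
    loss = loss_fn(model(img, feat), box)
    loss.backward()
    opt.step()
```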
  • the determination system 200 may obtain the second machine learning model by: the determination system 200 obtains the initial machine learning model. In some embodiments, the determination system 200 may obtain the initial machine learning model from the storage device 150 via the network 120 .
  • the initial machine learning model may include one or any combination of a DNN model, a CNN model, an RNN model, an LSTM network model, and the like.
  • the determination system 200 obtains initial sample training data. In some embodiments, determination system 200 may obtain initial sample training data from storage device 150 via network 120 . In some embodiments, the initial sample training data may include historical optical images of historical target objects and historical medical images of one or more target sites on the historical target objects.
  • the determination system 200 determines the historical target position information of the corresponding beam limiting device according to the fusion result information of the historical optical image and the historical medical image.
  • the historical target position refers to the target position of the beam limiting device corresponding to the historical to-be-irradiated area.
  • the historical to-be-irradiated area may be determined according to the fusion result of historical optical images and historical medical images.
  • the determination system 200 can obtain the target part information from the historical medical image, mark the target part information at the corresponding position on the historical optical image to obtain the historical to-be-irradiated area on the historical target object, and then calculate the historical target position information of the beam limiting device based on the historical to-be-irradiated area.
  • the determination system 200 takes the historical optical images and historical medical images as input data and the corresponding historical target position information of the beam limiting device as output data, and inputs them into the initial machine learning model for training to obtain the trained second machine learning model.
  • the second machine learning model may also consider the initial position of the beam limiting device during the training process. For example, after the historical to-be-irradiated area is determined, the historical target position information of the beam-limiting device may be determined based on the initial position of the beam-limiting device. Correspondingly, the historical optical image, the initial position of the beam limiting device and the historical medical image can also be input as input data for training.
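  • Continuing the previous toy sketch, conditioning the second model on the collimator's initial position can be as simple as concatenating it into the feature vector; the 3-value position format is an assumption made only for illustration.

```python
import torch

part_feats = torch.rand(64, 16)            # as in the previous sketch
init_pos = torch.rand(64, 3)               # hypothetical blade (x, y, z)
features = torch.cat([part_feats, init_pos], dim=1)   # 19-dim conditioning
collimator_targets = torch.rand(64, 3)     # historical target positions

# Training then proceeds exactly as before, but regressing
# `collimator_targets` from the image plus the 19-dim feature vector.
```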
  • the determination system 200 may also obtain the first machine learning model or the second machine learning model in the following manner: the determination system 200 obtains the initial machine learning model.
  • the determination system 200 may obtain the initial machine learning model from the storage device 150 via the network 120 .
  • the initial machine learning model may include one or any combination of a DNN model, a CNN model, an RNN model, an LSTM network model, and the like.
  • the determination system 200 obtains initial sample training data.
  • determination system 200 may obtain initial sample training data from storage device 150 via network 120 .
  • the initial sample training data may include historical optical images of the historical target object, age information of the historical target object, and historical medical images of one or more target sites on the historical target object.
  • the determination system 200 determines the marker information of the historical optical image according to the historical optical image, the age information of the target object, and the fusion result information of the historical medical image.
  • the marker information may include position information of the target site in historical optical images or historical target position information of the beam limiting device.
  • the target location information determination system 200 uses historical optical images, age information of the target object, and historical medical images as input data, and label information as output data or reference standards, and inputs the initial machine learning model for training.
  • the marking information may be marking information on the historical optical image, in which the target part corresponding to the historical medical image and the age information of the target object are marked.
  • the age information of the target object is introduced into the training data of the model, which reflects the influence of age on the target position information of the beam limiting device, and can better protect children from radiation damage.
  • FIG. 4 is an exemplary flow diagram of a target orientation marking system according to some embodiments of the present specification.
  • process 400 may be performed by an orientation marking system of a target site, and process 400 includes:
  • Step 410 acquiring image information of the target object.
  • step 410 may be performed by an image information acquisition module.
  • the target object includes an object to be imaged medically, eg, a patient.
  • the image information of the target object includes image information of the target portion of the target object.
  • the image information refers to the image of the target object (for example, the human body and various parts or organs of the body) acquired by the camera device.
  • the images may include still images or video images of the target object.
  • still images may include photographs, pictures, and other static images.
  • video images refer to dynamic images, which may include but are not limited to videos, animations, and the like.
  • a video stream can be derived from the video images, and the video stream can include multiple frames of still images.
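  • As a concrete (and purely illustrative) example of deriving such a stream, the OpenCV snippet below reads a recorded positioning video frame by frame; the file name is hypothetical.

```python
import cv2

cap = cv2.VideoCapture("positioning.mp4")   # hypothetical recorded video
frames = []
while True:
    ok, frame = cap.read()                  # one still image per iteration
    if not ok:                              # end of stream
        break
    frames.append(frame)
cap.release()
print(f"derived {len(frames)} still frames from the video stream")
```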
  • the image may be an optical image or a non-optical image.
  • the image information may include optical image information (including visible light images and non-visible light images), and may also include non-optical image information.
  • the camera device may be an optical device, such as a camera, or other image sensor, or the like.
  • the camera device 160 may also be a non-optical device that obtains, from collected distance data, a heat map reflecting the shape, size, and other characteristics of the target object.
  • camera device 160 may include any device capable of two-dimensional or three-dimensional image capture.
  • the image information includes at least position information of the target part in the target object relative to the medical imaging device 110 , and the processor may determine the target part based on the position information.
  • the placement information includes whether there is an object to be detected in the radiation irradiation area of the medical imaging device, or whether there is an object to be detected on a placement table (eg, a hospital bed) of the medical imaging device.
  • the placement table or the radiation irradiation area can be regarded as a positionable area.
  • the object to be detected that is detected within the positionable area can be regarded as the target part of the target object to be examined.
  • the medical imaging device 110 may also be adjusted based on the image information of the target part, so that the target part is in the ray path of the ray source.
  • the operation of adjusting the medical imaging device 110 may be performed manually, or may be performed automatically by a machine.
  • adjusting the medical imaging device may include adjusting the radiation source of the medical imaging device 110, adjusting the detector of the medical imaging device 110, or adjusting both the detector and the radiation source, as long as the adjustment places the target part in the ray path of the ray source.
  • in the process of adjusting the posture and/or position of the target object so that the target part is in the ray path of the ray source of the medical imaging device, the target part in the positionable area can be determined.
  • the target part may be determined in the process of manually or automatically adjusting the radiation source of the medical imaging device 110 to be aligned with the target part. For example, if the patient has entered the to-be-photographed area of the medical imaging device 110 for positioning and the target part is located to the left of the radiation source of the medical imaging device 110, the target part can be moved to the right so that it is in the ray path of the radiation source, or the radiation source of the medical imaging device 110 can be moved to the left so that the target part is in the ray path of the ray source.
  • during this process, the processor can determine the target part in the positionable area based on the collected image information (for example, the video information corresponding to the process).
  • the image information acquisition module can acquire the image information captured by the camera device 160 by wire or wirelessly, and further identify the target object in the image information.
  • the system may derive a video stream from input video images and perform frame-by-frame processing.
  • the processing may include filtering and denoising of the image, normalization of the gray scale of the image, horizontal rotation of the image, correction of the scale of the image, and the like.
  • the processing may also include identifying or segmenting the target object or target portion in the image.
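  • A minimal per-frame preprocessing sketch covering the steps just listed (denoising, gray-scale normalization, horizontal flip, scale correction); the kernel size, output resolution, and the reading of "horizontal rotation" as a flip are all assumptions.

```python
import cv2
import numpy as np

def preprocess(frame: np.ndarray) -> np.ndarray:
    """Denoise, normalize the gray scale, flip horizontally, and rescale a
    single BGR frame before identification/segmentation."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    denoised = cv2.GaussianBlur(gray, (5, 5), 0)           # filtering/denoising
    norm = cv2.normalize(denoised, None, 0, 255, cv2.NORM_MINMAX)
    flipped = cv2.flip(norm, 1)                            # horizontal flip
    return cv2.resize(flipped, (512, 512))                 # scale correction
```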
  • Step 420 Process the image information to determine the orientation information of the target part.
  • step 420 may be performed by a position information determination module.
  • the target site may include all or a portion of a tissue or organ of the target subject.
  • the target site may include the left ankle joint, the chest, and the like.
  • the orientation information of the target part includes at least one of a left-right orientation, an up-down orientation, and a front-to-back orientation of the target part relative to the target object.
  • the orientation information of the target part relative to the target object includes left and right orientation information, for example, the left knee joint.
  • the orientation information of the target part relative to the target object includes up-down information, eg, the upper spine.
  • the orientation information of the target part relative to the target object includes front-to-back orientation information, eg, the back.
  • the orientation information of the target part relative to the target object may combine left-right and up-down information; for example, the target part may be the upper left hip joint.
  • the orientation information of the target part may further include the ray incident orientation in the medical imaging device, and the like.
  • the ray incident orientation in the medical imaging device may include the positional relationship between the initial incident direction of the ray and the target object or target part.
  • the orientation information may include the hand located on the left side of the body, and may also include whether the back of the hand is facing the direction of initial incidence of rays, or the palm is facing the direction of initial incidence of rays.
  • the orientation information can include the thigh on the left side of the body, whether the face of the target object faces the direction of initial incidence of the rays, or whether the target object has its back to the direction of initial incidence of the rays, that is, whether the patient is lying on the scanning bed facing toward or away from the direction of initial incidence of the rays.
  • the orientation information determination module receives, through the network, image information containing the target part of the target object, identifies the image of the target part according to a preset algorithm, processes the image information, and determines the orientation information of the target part. For example, in the continuous process of capturing video images, the camera device captures the entire process from patient positioning to patient exposure, and during this process the radiation source and/or the stage and/or the camera may be configured to be movable; the orientation information determination module can automatically identify the orientation information of the target part.
  • for example, if the camera device captures the medical imaging equipment moving the ray source above the left knee joint, the orientation information determination module can analyze and identify the target part in real time as the left knee joint.
  • the preset algorithm may include an algorithm for processing and analyzing the image. Specifically, the preset algorithm first performs image segmentation and other processing on the image information of the target object obtained by the camera device, confirms the position of the target part in the image information according to the positional relationship between the target part in the image information and the medical imaging device, and then analyzes and determines the orientation information of the target part relative to the target object.
  • the preset algorithm may include an image matching algorithm. Specifically, based on the image matching algorithm, the degree of matching between the image information of the target object obtained by the camera and the image information in an associated database is calculated, the image information with the highest matching degree is selected, and further analysis is performed to determine the orientation information of the target part relative to the target object.
  • the method of image matching includes grayscale-based image matching and feature-based image matching.
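  • A small sketch of the feature-based variant using ORB descriptors and brute-force Hamming matching; the similarity score is a crude ratio chosen for illustration, not a method prescribed by this application.

```python
import cv2

def match_score(query, reference) -> float:
    """Feature-based image matching: detect ORB keypoints in two grayscale
    images and count cross-checked descriptor matches."""
    orb = cv2.ORB_create()
    _, d1 = orb.detectAndCompute(query, None)
    _, d2 = orb.detectAndCompute(reference, None)
    if d1 is None or d2 is None:
        return 0.0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(d1, d2)
    return len(matches) / max(len(d1), len(d2))

# The database image with the highest score would be selected as the match.
```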
  • the preset algorithm may also include a machine learning model. Specifically, the image information of the target object obtained by the camera is input into the trained machine learning model, and the orientation information of the target part is determined according to the output data of the machine learning model.
  • the output data of the machine learning model may include the name of the target part and its corresponding orientation information, eg, the left knee joint.
  • the image information acquired by the camera device may be preprocessed to screen out higher-quality images, for example, images with higher definition, or images that contain the entire target object with the target part in the placement position. The filtered images are then input into the machine learning model, which can automatically output the orientation information of the target part relative to the target object according to the input data.
  • the machine learning model may include a deep neural network (DNN), such as a convolutional neural network (CNN), a deep belief network (DBN), a random Boolean network (RBN), etc.
  • Deep learning models can include multi-layer neural network structures. Taking a convolutional neural network as an example, a convolutional neural network may include an input layer, a convolutional layer, a dimensionality reduction layer, a hidden layer, an output layer, and the like.
  • a convolutional neural network includes one or more convolution kernels for convolution operations.
  • an initial machine learning model may be trained using training sample data to obtain a trained machine learning model.
  • the training sample data may include historical images of several target objects, and the historical images need to include images of target parts. The target part and its orientation information are marked in the historical images; for example, the marking information of the target part may include "left knee joint". The historical image information is then used as input data and the marking information of the orientation information as the corresponding output data or reference standard, and both are input into the initial machine learning model for training to obtain the trained model. A toy orientation-classifier sketch follows.
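  • A toy classifier matching the layer types listed above (input, convolution, dimensionality reduction, hidden, output); the label set size and all layer widths are illustrative assumptions.

```python
import torch
from torch import nn

N_LABELS = 8   # hypothetical orientation labels, e.g. "left knee joint", ...
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),   # convolutional layer
    nn.MaxPool2d(4),                             # dimensionality reduction
    nn.Flatten(),
    nn.Linear(16 * 32 * 32, 64), nn.ReLU(),      # hidden layer
    nn.Linear(64, N_LABELS),                     # output layer
)
logits = model(torch.rand(1, 3, 128, 128))       # per-label scores, shape (1, 8)
```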
  • Step 430 marking the medical image of the target object based on the orientation information.
  • the system obtains the medical image of the target object through the medical imaging device, and the orientation information marking module marks the corresponding orientation information on the obtained medical image.
  • the medical image of the target object may include a medical image corresponding to a target part on the target object.
  • the medical image of the target object may further include medical images corresponding to non-target parts on the target object.
  • the non-target part can be understood as a part that has a certain relationship with the target part. For example, if the target part is the palm, the non-target part may be the arm corresponding to the palm, and the orientation information of the palm of the target object can be marked on the medical image of the target object's arm.
  • a medical image may be understood as an image obtained by a medical imaging device.
  • the medical imaging device may include a DR imaging device, a CT scanner, an MRI scanner, a B-mode ultrasound scanner, a TTM scanning device, a SPECT device, a PET scanner, or the like.
  • the medical image includes at least one of an MRI, XR, PET, SPECT, CT, or ultrasound image.
  • the medical image information may also include a fusion image of one or more of the above medical images.
  • the image information and the medical image may be obtained simultaneously or sequentially.
  • each medical image may include one or more markers.
  • R may be marked on the photographed medical image of the right knee joint.
  • each medical image may further include one or more parts-related marking information.
  • manual adjustments may be made to markers, which may include adding one or more marker points, deleting one or more markers, changing the position of one or more markers, and the like.
  • the orientation information marking module may directly mark in the medical image based on the orientation information of the target part.
  • the orientation information marking module may select a scanning protocol to obtain a medical image based on orientation information of the target part and further mark it.
  • for the protocol-based marking, please refer to steps 431b and 432b below.
  • Step 431a acquiring a medical image of the target object.
  • a medical image may be understood as an image acquired by a medical imaging device.
  • the medical image may include MRI images, CT images, cone-beam CT images, PET images, functional MRI images, X-ray images, fluoroscopic images, ultrasound images, SPECT images, etc., or any combination thereof.
  • Medical images can reflect information about a certain part of a patient's tissues, organs and/or bones.
  • the medical image may be one two-dimensional image or a set of two-dimensional images, for example, black-and-white X-ray films or CT two-dimensional scan images.
  • the medical image may be a 3D image, for example, a 3D image of an organ reconstructed from CT scan images of different slices, or a 3D space image output by a device with 3D imaging capability.
  • the medical image may also be a moving image over a period of time. For example, a video that reflects the changes of the heart and its surrounding tissues during a cardiac cycle, etc.
  • the medical image may come from a medical imaging device, a storage module, or an input from a user through an interactive device.
  • the system uses the medical imaging device to acquire a medical image of the target part according to the obtained orientation information, and marks the orientation information in the acquired medical image.
  • Step 432a marking the orientation information in the medical image.
  • the orientation information may be marked at a certain orientation in the medical image, eg, the orientation information may be marked at the upper left corner of the image.
  • marking the orientation information into the medical image can be understood as marking directly in the medical image in a displayable manner, for example, covering a certain local area of the medical image, or adding a description to the medical image so that the orientation information of the target part can be displayed in the medical image.
  • the position of the mark is generally set at the peripheral position of the medical image.
  • the marked content may only include orientation information of the target part, and the doctor may determine the name of the target part through the corresponding medical image.
  • the marked content can be "right side", or it can be represented by the English letter R, as shown in FIG. 6.
  • the marked content may include the name of the target part and its orientation information, for example, the marked content may be the right ankle, which may also be represented by English letters RIGHT ANKLE, as shown in FIG. 7 .
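  • A minimal sketch of burning such a peripheral label into a grayscale medical image with OpenCV; the font, position, and label text are illustrative choices.

```python
import cv2
import numpy as np

def mark_orientation(medical_image: np.ndarray, text: str) -> np.ndarray:
    """Write an orientation label (e.g. "R" or "RIGHT ANKLE") near the
    upper-left corner of the image, keeping it at the periphery."""
    marked = medical_image.copy()
    cv2.putText(marked, text, (10, 30), cv2.FONT_HERSHEY_SIMPLEX,
                1.0, 255, 2)                     # white text, thickness 2
    return marked

marked = mark_orientation(np.zeros((256, 256), dtype=np.uint8), "R")
```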
  • Step 431b determining the corresponding protocol information based on the orientation information.
  • the system selects a corresponding protocol according to the orientation information of the target part, and then checks the subject's target part according to the protocol to obtain a medical image of the target part captured by the medical imaging device.
  • a protocol refers to a combination of shooting parameters of the medical imaging device, and a corresponding protocol is selected for the target part of the patient being photographed. For example, if the left knee or chest is photographed with DR, the protocol for the left knee or chest is selected when scanning.
  • Step 432b marking the medical image based on the protocol information.
  • the system further labels the medical image according to the selected protocol. In some embodiments, the system detects the protocol used and further marks the orientation information contained in that protocol onto the medical image or its displayed content.
  • the labeling of the labelled medical images may also be adjusted.
  • the adjustment may include manual adjustment or automatic adjustment by machine. For example, if a doctor finds that at least one of the marking content, marking position, and marking method of the marking information on the medical image is inappropriate, the doctor can adjust it manually.
  • the machine can also automatically check the marking information on the medical image, and automatically adjust the inappropriateness of the marking information, so as to ensure the accuracy of the marking information.
  • the exemplary flow of the orientation marking of the target part may further include: acquiring optical image information of the target object; determining the target part information of the target object, where the target part information includes orientation information of the target part; and then determining the marking information of the medical image based on the orientation information of the target part. In some embodiments, the flow may also include marking the medical image of the target object based on the orientation information in the target part information.
  • FIG. 5 is an exemplary flowchart of marking a target position according to some embodiments of the present application.
  • the camera device includes a camera.
  • the image acquired by the camera is optical image information.
  • the process can be performed by a system for marking the orientation of the target part, the system uses a medical imaging device to acquire a medical image of the target part, and marks the generated orientation information of the target part in the medical image.
  • the system includes: a camera device for acquiring optical image information of a target object; a medical imaging device for acquiring a medical image of a target part on the target object; an information processing device for analyzing the optical image information based on a preset algorithm Processing is performed to determine the orientation information of the target part; and the orientation information is marked in the medical image.
  • the target object is positioned first, the camera starts to collect optical image information, and the image information acquisition module analyzes the collected images to determine whether a patient is detected in them. In some embodiments, whether the patient is detected indicates whether the acquired image contains the patient and whether the target part of the patient is within the positionable area.
  • when the positioning is completed, the patient can be detected.
  • taking a CT scan as an example, the target object is placed in the medical imaging device first: the patient is placed on the scanning bed of the CT scanner, and the posture and/or position of the patient on the scanning bed as well as the position of the scanning bed are adjusted so that part or all of the radiation beam of the CT scanner passes through the target part of the target object when, for example, a positioning slice is scanned. Before and/or during patient setup and/or after positioning is completed, the camera device simultaneously acquires optical image information, and the image information acquisition module analyzes the acquired image information; if the analysis shows that the target part is within the positionable area, it means that the patient's positioning is complete.
  • the image information acquisition module can analyze the optical image information to confirm that it includes the patient and that the target object is in the positionable area, that is, the positioning is completed. For example, taking a mammography machine, when the patient stands in front of the machine, the breast is compressed between the detector housing and the compression plate so that part or all of the beam can pass through the breast, and the camera device captures the optical image of this process.
  • the image information acquisition module analyzes the obtained optical image information, and if the image information including the patient can be obtained by analysis, the positioning is completed at this time.
  • the information processing device analyzes the data to obtain orientation information, and the system automatically completes the marking on the captured medical image according to the analysis result.
  • the camera device may be fixedly or movably disposed on the medical imaging device; in some embodiments, the camera device may also be disposed independently outside the medical imaging device, in which case the camera device can be fixed or movable during image capture. In some embodiments, the camera device may be located on a movable part of the medical imaging device, or on a movable part integrated on the medical imaging device. For example, the camera unit may be located on the C-arm of the mammography machine or on the gantry. For another example, rails can be fixed on the gantry, and the camera device can move on the rails.
  • the orientation information determination module analyzes the image information according to a preset algorithm (for example, a machine learning model) to obtain the target part, and further analyzes and generates orientation information of the target part.
  • the preset algorithm is, for example, a machine learning model.
  • the camera device and the medical imaging device may exchange data through a wired or wireless connection.
  • the camera device is a camera.
  • the possible beneficial effects of the embodiments of the present application include but are not limited to: (1) the method can determine reasonable target position information for the beam-limiting device from the optical image information and the target part information of the target object, so that the region of the target object irradiated by rays passing through the beam-limiting device's target position matches the region to be irradiated as closely as possible, ensuring imaging/treatment quality while avoiding unnecessary radiation dose to the target object; (2) based on the determined target position information, the beam-limiting device can be controlled to move quickly to a specific position, improving work efficiency; (3) the position to be irradiated that corresponds to the target part can be determined from the optical image information and the target part information; (4) processing and analyzing the optical image information with the preset algorithm to obtain the orientation information of the target part improves the accuracy of orientation recognition; (5) the machine automatically recognizes the orientation information of the target part and marks the image based on the recognized orientation, improving the accuracy of the marking operation; (6) the machine automatically identifies and marks the target part in the medical image, realizing automation and intelligence and improving operational efficiency. It should be noted that different embodiments may have different beneficial effects; in different embodiments, the possible beneficial effects may be any one or a combination of the above, or any other beneficial effect that may be obtained.
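As referenced in the bullet on the orientation information determination module above, the sketch below illustrates what such a preset-algorithm inference step could look like in practice. It is a minimal PyTorch sketch under stated assumptions: the label set, the tiny network, and the random input frame are illustrative stand-ins, not the disclosed implementation, whose weights would come from training on labeled historical images.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical label set: each class pairs a body part with its laterality.
LABELS = ["left knee", "right knee", "left ankle", "right ankle", "chest"]

class OrientationNet(nn.Module):
    """Small CNN mapping a camera frame to a body-part/orientation class."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

@torch.no_grad()
def classify_frame(model: OrientationNet, frame: torch.Tensor) -> str:
    """frame: a (3, H, W) float tensor in [0, 1] from the camera device."""
    probs = F.softmax(model(frame.unsqueeze(0)), dim=1)
    return LABELS[int(probs.argmax(dim=1))]

model = OrientationNet(len(LABELS)).eval()  # trained weights would be loaded here
print(classify_frame(model, torch.rand(3, 224, 224)))
```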

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Pathology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Optics & Photonics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Animal Behavior & Ethology (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Radiology & Medical Imaging (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

A method for determining parameters related to a medical operation, the method comprising: acquiring optical image information of a target object (310); determining target part information of the target object; and determining the parameters related to the medical operation based at least on the optical image information and the target part information.

Description

Method and system for determining parameters related to a medical operation
Priority Information
This application claims priority to Chinese Application No. 202010751784.7, filed on July 30, 2020, and to Chinese Application No. 202010786489.5, filed on August 7, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to methods for determining related parameters, and in particular to a method and system for determining parameters related to a medical operation.
Background
Radiation devices (such as DR devices, CT devices, X-ray machines, linear accelerators, C-arm machines, etc.) image and/or treat patients by emitting radiation (such as X-rays, β-rays, γ-rays, etc.). During irradiation, a beam limiter forms a corresponding opening through which the rays irradiate the human body. If the region irradiated through the beam-limiter opening does not match the region of the body that is to be irradiated, the body receives unnecessary radiation, and this unnecessary radiation may be harmful. It is therefore necessary to provide a method for determining target position information of a beam-limiting device that improves the match between the beam-limiter opening and the region to be irradiated on the body.
发明内容
本申请提供一种医疗操作的相关参数的确定方法,所述方法包括:获取目标对象的光学图像信息;确定目标对象的目标部位信息;至少基于所述光学图像信息和所述目标部位信息,确定医疗操作的相关参数。
在一些实施例中,所述确定目标对象的目标部位信息包括:获取所述目标对象的目标部位信息。
在一些实施例中,所述确定目标对象的目标部位信息包括:对所述光学图像信息进行处理,确定所述目标对象中的目标部位信息。
在一些实施例中,所述医疗操作的相关参数包括目标对象上的待照射位置信息和/或所述限束装置的目标位置信息;所述至少基于所述光学图像信息和所述目标部位信息,确定医疗操作的相关参数,包括:至少根据所述光学图像信息、所述目标部位信息,确定所述目标对象上的待照射位置信息和/或所述限束装置的目标位置信息。
在一些实施例中,所述获取所述目标对象的目标部位信息包括:获取所述目标对象中目标部位的医学图像;所述至少根据所述光学图像信息、所述目标部位信息,确定目标对象上的待照射位置信息和/或所述限束装置的目标位置信息包括:至少根据所 述光学图像信息、所述医学图像,确定目标对象上的待照射位置信息和/或所述限束装置的目标位置信息。
在一些实施例中,所述获取所述目标对象的目标部位信息还包括:获取与所述目标对象相关的协议信息,所述协议信息至少包括所述目标对象的目标部位信息;所述至少根据所述光学图像信息、所述目标部位信息,确定目标对象上的待照射位置信息和/或所述限束装置的目标位置信息包括:至少根据所述光学图像信息、所述协议信息,确定目标对象上的待照射位置信息和/或所述限束装置的目标位置信息。
在一些实施例中,所述获取所述目标对象的目标部位信息还包括:获取所述目标对象对应的医学图像的标记信息。
在一些实施例中,所述方法还包括:获取所述限束装置的初始位置信息;当所述根据所述光学图像信息、所述目标部位信息,确定目标对象上的待照射位置信息时,所述方法还包括:根据所述待照射位置信息以及所述初始位置信息,确定所述限束装置的目标位置信息。
在一些实施例中,所述根据所述光学图像信息、所述目标部位信息,确定目标对象上的待照射位置信息包括:将所述光学图像信息以及所述目标部位信息输入第一机器学习模型,以确定所述待照射位置信息。
在一些实施例中,所述第一机器学习模型通过以下方式获得:获取初始机器学习模型;获取初始样本训练数据,所述初始样本训练数据包括历史目标对象的历史光学图像,以及所述历史目标对象上的一个或多个目标部位的历史医学影像图像;根据所述历史光学图像以及所述历史医学影像图像的融合结果信息,确定所述历史光学图像的标记信息;所述标记信息包括所述历史光学图像中目标部位的位置信息;将所述历史光学图像以及所述历史医学影像图像作为输入数据,所述标记信息作为输出数据,输入所述初始机器学习模型进行训练。
在一些实施例中,所述根据所述光学图像信息、所述目标部位信息,确定所述限束装置的目标位置信息包括:将所述光学图像信息以及所述目标部位信息输入第二机器学习模型,以确定所述限束装置的目标位置信息。
在一些实施例中,所述第二机器学习模型通过以下方式获得:获取初始机器学习模型;获取初始样本训练数据,所述初始样本训练数据包括历史目标对象的历史光学图像以及所述历史目标对象上的一个或多个目标部位的历史医学图像;根据所述历史光学图像与所述历史医学图像的融合结果信息,确定对应的限束装置的历史目标位置信息; 将所述历史光学图像以及所述历史医学图像作为输入数据,所述对应的限束装置的历史目标位置信息作为输出数据,输入所述初始机器学习模型进行训练。
在一些实施例中,根据所述限束装置的目标位置信息控制所述限束装置运动。
在一些实施例中,如果所述目标位置信息大于预设的阈值范围,则发出提示信息。
在一些实施例中,所述待照射位置信息包括至少两个子待照射区域;所述限束装置的目标位置信息包括与所述两个子待照射区域对应的子目标位置信息。
在一些实施例中，与所述目标对象相关的协议信息中包括至少两个子目标部位；对应的，所述子待照射区域可以根据所述协议信息中的子目标部位确定。
在一些实施例中,所述子待照射区域由所述预设算法确定;对应的,限束装置的至少两个子目标位置信息根据所述子待照射区域确定。
在一些实施例中,所述限束装置包括多叶光栅准直器。
在一些实施例中,所述医疗操作的相关参数包括目标对象对应的医学图像的标记信息;所述目标部位信息包括目标部位的方位信息;所述至少所述光学图像信息和基于所述目标部位信息,确定医疗操作的相关参数,包括:基于所述目标部位的方位信息,确定医学图像的标记信息。
在一些实施例中,所述方法还包括:基于所述目标部位信息的方位信息对所述目标对象的医学图像进行标记。
在一些实施例中,所述方法包括:获取所述目标对象的医学图像;将所述标记信息标记在所述医学图像中。
在一些实施例中,所述基于所述方位信息对所述目标对象的医学图像进行标记包括:基于所述方位信息确定对应的协议信息;基于所述协议信息对所述目标对象的医学图像进行标记。
在一些实施例中,所述方位信息包括所述目标部位相对于所述目标对象的左右方位、前后方位以及上下方位中至少一种。
在一些实施例中,所述目标对象的光学图像信息包括静态图像或视频图像。
在一些实施例中,当所述目标部位的方位信息是通过对所述光学图像信息进行处理得到时,所述对所述光学图像信息进行处理是通过预设算法的,其中,所述预设算法包括机器学习模型;相应地,所述对所述光学图像信息进行处理确定目标对象中目标部位的方位信息包括:将所述光学图像信息输入机器学习模型;根据所述机器学习模型 的输出数据确定目标部位的方位信息。
在一些实施例中,所述光学图像信息是由摄像头获得的,所述医学图像为MRI、XR、PET、SPECT、CT、超声的一种图像或两种以上的融合图像。
在一些实施例中,所述方法还包括:基于所述目标部位的光学图像信息,自动调整医学影像装置的射线源,以使所述目标部位处于所述射线源的射线路径中。
在一些实施例中,基于所述方位信息对所述目标对象的医学图像进行标记包括进行颜色或文字或者图形的标记。
在一些实施例中,所述方法还包括对已标记的医学图像的标记手动调整。
本申请还提供一种医疗操作的相关参数的确定系统,所述系统包括光学图像信息获取模块、目标部位信息确定模块以及医疗操作相关参数确定模块;所述光学图像信息获取模块用于获取目标对象的光学图像信息;所述目标部位信息确定模块用于确定目标对象的目标部位信息;医疗操作相关参数确定模块用于至少基于所述光学图像信息和所述目标部位信息,确定医疗操作的相关参数。
在一些实施例中,所述目标部位信息确定模块还用于获取所述目标对象的目标部位信息。
在一些实施例中,所述目标部位信息确定模块还用于对所述光学图像信息进行处理,确定所述目标对象中的目标部位信息。
在一些实施例中,所述医疗操作的相关参数包括目标对象上的待照射位置信息和/或所述限束装置的目标位置信息;所述医疗操作相关参数确定模块还用于:至少根据所述光学图像信息、所述目标部位信息,确定所述目标对象上的待照射位置信息和/或所述限束装置的目标位置信息。
在一些实施例中,所述获取所述目标对象的目标部位信息包括:获取所述目标对象中目标部位的医学图像;所述至少根据所述光学图像信息、所述目标部位信息,确定目标对象上的待照射位置信息和/或所述限束装置的目标位置信息包括:至少根据所述光学图像信息、所述医学图像,确定目标对象上的待照射位置信息和/或所述限束装置的目标位置信息。
在一些实施例中,所述获取所述目标对象的目标部位信息还包括:获取与所述目标对象相关的协议信息,所述协议信息至少包括所述目标对象的目标部位信息;所述至少根据所述光学图像信息、所述目标部位信息,确定目标对象上的待照射位置信息和/或所述限束装置的目标位置信息包括:至少根据所述光学图像信息、所述协议信息,确 定目标对象上的待照射位置信息和/或所述限束装置的目标位置信息。
在一些实施例中,所述获取所述目标对象的目标部位信息还包括:获取所述目标对象对应的医学图像的标记信息。
在一些实施例中,所述系统还包括:获取所述限束装置的初始位置信息;当所述根据所述光学图像信息、所述目标部位信息,确定目标对象上的待照射位置信息时,所述方法还包括:根据所述待照射位置信息以及所述初始位置信息,确定所述限束装置的目标位置信息。
在一些实施例中,所述根据所述光学图像信息、所述目标部位信息,确定目标对象上的待照射位置信息包括:将所述光学图像信息以及所述目标部位信息输入第一机器学习模型,以确定所述待照射位置信息。
在一些实施例中,所述第一机器学习模型通过以下方式获得:获取初始机器学习模型;获取初始样本训练数据,所述初始样本训练数据包括历史目标对象的历史光学图像,以及所述历史目标对象上的一个或多个目标部位的历史医学影像图像;根据所述历史光学图像以及所述历史医学影像图像的融合结果信息,确定所述历史光学图像的标记信息;所述标记信息包括所述历史光学图像中目标部位的位置信息;将所述历史光学图像以及所述历史医学影像图像作为输入数据,所述标记信息作为输出数据,输入所述初始机器学习模型进行训练。
在一些实施例中,所述根据所述光学图像信息、所述目标部位信息,确定所述限束装置的目标位置信息包括:将所述光学图像信息以及所述目标部位信息输入第二机器学习模型,以确定所述限束装置的目标位置信息。
在一些实施例中,所述第二机器学习模型通过以下方式获得:获取初始机器学习模型;获取初始样本训练数据,所述初始样本训练数据包括历史目标对象的历史光学图像以及所述历史目标对象上的一个或多个目标部位的历史医学图像;根据所述历史光学图像与所述历史医学图像的融合结果信息,确定对应的限束装置的历史目标位置信息;将所述历史光学图像以及所述历史医学图像作为输入数据,所述对应的限束装置的历史目标位置信息作为输出数据,输入所述初始机器学习模型进行训练。
在一些实施例中,根据所述限束装置的目标位置信息控制所述限束装置运动。
在一些实施例中,如果所述目标位置信息大于预设的阈值范围,则发出提示信息。
在一些实施例中,所述待照射位置信息包括至少两个子待照射区域;所述限束 装置的目标位置信息包括与所述两个子待照射区域对应的子目标位置信息。
在一些实施例中，与所述目标对象相关的协议信息中包括至少两个子目标部位；对应的，所述子待照射区域可以根据所述协议信息中的子目标部位确定。
在一些实施例中,所述子待照射区域由所述预设算法确定;对应的,限束装置的至少两个子目标位置信息根据所述子待照射区域确定。
在一些实施例中,所述限束装置包括多叶光栅准直器。
在一些实施例中,所述医疗操作的相关参数包括目标对象对应的医学图像的标记信息;所述目标部位信息包括目标部位的方位信息;所述医疗操作相关参数确定模块还用于:基于所述目标部位的方位信息,确定医学图像的标记信息。
在一些实施例中,所述系统还包括:基于所述目标部位信息的方位信息对所述目标对象的医学图像进行标记。
在一些实施例中,获取所述目标对象的医学图像;将所述标记信息标记在所述医学图像中。
在一些实施例中,所述基于所述方位信息对所述目标对象的医学图像进行标记包括:基于所述方位信息确定对应的协议信息;基于所述协议信息对所述目标对象的医学图像进行标记。
在一些实施例中,所述方位信息包括所述目标部位相对于所述目标对象的左右方位、前后方位以及上下方位中至少一种。
在一些实施例中,所述目标对象的光学图像信息包括静态图像或视频图像。
在一些实施例中,当所述目标部位的方位信息是通过对所述光学图像信息进行处理得到时,所述对所述光学图像信息进行处理是通过预设算法的,其中,所述预设算法包括机器学习模型;相应地,所述对所述光学图像信息进行处理确定目标对象中目标部位的方位信息包括:将所述光学图像信息输入机器学习模型;根据所述机器学习模型的输出数据确定目标部位的方位信息。
在一些实施例中,所述光学图像信息是由摄像头获得的,所述医学图像为MRI、XR、PET、SPECT、CT、超声的一种图像或两种以上的融合图像。
在一些实施例中,所示系统还包括:基于所述目标部位的光学图像信息,自动调整医学影像装置的射线源,以使所述目标部位处于所述射线源的射线路径中。
在一些实施例中,基于所述方位信息对所述目标对象的医学图像进行标记包括进行颜色或文字或者图形的标记。
在一些实施例中,所述系统还包括对已标记的医学图像的标记手动调整。
本申请提供一种限束装置的目标位置信息的确定系统,包括光学图像信息获取模块、目标部位信息获取模块以及确定模块;所述光学图像信息获取模块用于获取目标对象的光学图像信息;所述目标部位信息获取模块用于获取所述目标对象的目标部位信息;所述确定模块用于至少根据所述光学图像信息、所述目标部位信息,确定目标对象上的待照射位置信息和/或所述限束装置的目标位置信息。
本申请提供一种用于确定操作位置的装置,包括处理器,所述处理器用于执行任一限束装置的目标位置信息的确定方法。
本申请提供一种计算机可读存储介质,所述存储介质存储计算机指令,当计算机读取存储介质中的计算机指令后,计算机执行任一限束装置的目标位置信息的确定方法。
本申请提供一种目标部位的方位标记系统,所述系统包括:图像信息获取模块,用于获取目标对象的目标部位的图像信息;方位信息确定模块,用于对所述图像信息进行处理,确定目标对象中目标部位的方位信息;方位信息标记模块,用于基于所述方位信息对所述目标对象的医学图像进行标记。
在一些实施例中,所述系统还包括摄像头,其用于获取所述图像信息,所述医学图像为MRI、XR、PET、SPECT、CT、超声的一种图像或两种以上的融合图像。
在一些实施例中,一种用于标记目标部位方位的装置,包括处理器,其特征在于,所述处理器用于执行计算机指令,以任一目标部位的方位标记方法。
本申请提供一种用于标记目标部位方位的系统,其特征在于,所述系统包括:摄像装置,用于获取所述目标对象的目标部位的图像信息;医学影像装置,用于获取目标对象的医学图像;信息处理装置,对所述图像信息进行处理,确定所述目标对象中目标部位的方位信息;并将所述方位信息在所述医学图像中标记。
在一些实施例中,所述摄像装置相对固定或活动地设置于所述医学影像装置上。
在一些实施例中,所述摄像装置为摄像头。
Brief Description of the Drawings
The present application is further illustrated by way of exemplary embodiments, which are described in detail with reference to the accompanying drawings. These embodiments are not limiting; in these embodiments, the same reference numerals denote the same structures, wherein:
FIG. 1 is a schematic diagram of an application scenario of a system for determining target position information of a beam-limiting device according to some embodiments of the present application;
FIG. 2 is a block diagram of a system for determining target position information of a beam-limiting device according to some embodiments of the present application;
FIG. 3 is a flowchart of an exemplary method for determining target position information of a beam-limiting device according to some embodiments of the present application;
FIG. 4 is an exemplary flowchart of a system for marking a target orientation according to some embodiments of the present specification;
FIG. 5 is an exemplary flowchart of marking a target orientation according to some embodiments of the present application;
FIG. 6 is a schematic diagram of a medical image according to some embodiments of the present application;
FIG. 7 is a schematic diagram of a medical image according to other embodiments of the present application.
具体实施方式
为了更清楚地说明本申请的实施例的技术方案,下面将对实施例描述中所需要使用的附图作简单的介绍。显而易见地,下面描述中的附图仅仅是本申请的一些示例或实施例,对于本领域的普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图将本申请应用于其他类似情景。应当理解,给出这些示例性实施例仅仅是为了使相关领域的技术人员能够更好地理解进而实现本发明,而并非以任何方式限制本发明的范围。除非从语言环境中显而易见或另做说明,图中相同标号代表相同结构或操作。
应当理解,本文使用的“系统”、“装置”、“单元”和/或“模块”是用于区分不同级别的不同组件、元件、部件、部分或装配的一种方法。然而,如果其他词语可实现相同的目的,则可通过其他表达来替换所述词语。
如本申请和权利要求书中所示,除非上下文明确提示例外情形,“一”、“一个”、“一种”和/或“该”等词并非特指单数,也可包括复数。一般说来,术语“包括”与“包含”仅提示包括已明确标识的步骤和元素,而这些步骤和元素不构成一个排它性的罗列,方法或者设备也可能包含其它的步骤或元素。
在很多医疗操作场景,有很多医疗操作需要根据不同患者的不同特征来对应调整医疗操作的相关参数。比如,在放射性治疗的医疗操作场景中,需要根据不同患者的体征信息(例如,身高、身体宽度或身体厚度)来确定对应的待照射部位所在的位置,然后再调整限束装置的开口位置,使得限束装置的开口位置与患者的待照射部位能够尽可能地匹配。又比如,在患者拍摄医学图像之后,需要对医学图像进行标记,以告知医师或患者该医学图像对应的拍摄部位。为了确保标记信息与拍摄部位的准确,需要根据不同患者当前拍摄的部位以及部位所在的方位来对医学图像进行标记。
为了确保相关医疗操作过程中的准确以及高效,本申请一个或多个实施例中,通过摄像装置进行视觉识别,根据识别结果,或者再结合其他信息,生成医疗操作的相关参数。在一些实施例中,可以先通过摄像装置获取目标对象的光学图像信息,然后确定目标对象的目标部位信息,最后至少基于所述光学图像信息和目标部位信息确定医疗操作的相关参数。
在一些实施例中,医疗操作的相关参数可以包括目标对象上的待照射位置信息和/或所述限束装置的目标位置信息。
在一些实施例中,医疗操作的相关参数可以包括目标对象对应的医学图像的标记信息。
本申请一个或多个实施例涉及一种医疗操作的相关参数的确定方法和系统。当医疗操作的相关参数包括目标对象上的待照射位置信息和/或所述限束装置的目标位置信息时,本申请一个或多个实施例中的医疗操作的相关参数的确定方法和系统还可以称之为限束装置的目标位置信息的确定方法和系统。该限束装置的目标位置信息的确定方法可以应用于放射线设备(如DR设备、CT设备等)上的限束器。在每次进行放射线设备的照射之前,该方法可以至少根据自动获取的目标对象(如人体或其他实验体)的光学图像信息、目标部位信息(如医疗任务中的照射器官),自动确定限束装置的目标位置信息,使得射线透过限束装置的目标位置照射到目标对象的区域能够与待照射区域尽可能地匹配,在保证成像/治疗质量的同时避免不必要辐射剂量对目标对象的伤害。该方法特别适合对儿童进行辐射照射时使用,以实现对儿童的辐射伤害防护。
当医疗操作的相关参数包括目标对象对应的医学图像的标记信息时,本申请一个或多个实施例中的医疗操作的相关参数的确定方法和系统还可以称之为目标部位的方位标记方法和系统。在一些实施例中,需要人工识别医学图像中的拍摄部位并将拍摄部位人工标记在医学图像中。在一些实施例中,拍摄医生在实际拍摄的过程中也可以根据已知的拍摄部位选择对应的拍摄协议,医学影像设备可以根据选择的协议进行方位信息的标记。在此过程中,有可能出现判断错误、标定错误或者选择协议,进而对诊断结果和后续治疗造成一定的影响。本申请一些实施例提供的目标部位的标记方法可以通过获取目标对象的图像信息,并基于预设算法对所述图像信息进行处理,确定目标对象中目标部位的方位信息,再基于所述方位信息对所述目标部位的医学图像进行标记。
下面结合附图对本申请实施例所提供的医疗操作的相关参数的确定系统进行详细说明。
图1所示为根据本申请一些实施例所示的医疗操作的相关参数的确定系统的应用场景示意图。
如图1所示,医疗操作的相关参数的确定系统100可以包括医学影像装置110、网络120、处理设备140、存储设备150和摄像装置160。在一些实施例中,所述系统100还可以包括至少一个终端130。该系统100中的各个组件之间可以通过网络120互相连接。例如,摄像装置160和处理设备140可以通过网络120连接或通信。
在一些实施例中,医学影像装置110可以对目标对象进行数据采集,以得到目标对象或者目标对象的目标部位的医学图像。在一些实施例中,医学影像装置可以包括数字化X射线(DR,Digital Radiography)成像设备、计算机断层成像(computed tomography,CT)扫描机、磁共振成像(magnetic resonance imaging,MRI)扫描仪、B成像(b-scan ultrasonography)扫描仪、热断层扫描成像(Thermal texture maps,TTM)扫描设备或者正电子发射断层成像(positron emission tomography,PET)扫描仪等。示例性地,以CT扫描机为例对医学影像装置110进行说明。例如,系统根据摄像装置160得到的图像信息分析得到目标部位是左膝盖,目标对象可以平躺面部朝上于扫描床上,移动扫描床使左膝盖位于扫描区域内进行扫描得到左膝盖的医学图像。在一些实施例中,当医学影像装置110为CT设备时,该设备包括限束装置(如限束器),用于限制CT设备放射线的透光区域。在一些实施例中,医学影像装置110还可以为其它任意的放射线设备。放射线设备可以通过发出放射线(如X射线、β射线、γ射线等)对目标对象进行拍摄和/或治疗。例如,放射线设备可以包括但不限于DR设备、X射线机、直线加速器和C臂机等。在一些实施例中实施例中,所述限束装置可以包括多叶光栅准直器,可以对不同形状或任意不规则形状的待照射区域进行适配,从而可以提高适配的精准度,减少不必要辐射剂量对人体的伤害。
在一些实施例中,摄像装置160可以对目标对象进行数据采集,以得到该目标对象或者目标对象的目标部位的光学图像信息。在一些实施例中,摄像装置160可以设置在医学影像装置110上,也可以独立于医学影像装置110进行单独设置。在一些实施例中,摄像装置160可以是光学设备,例如摄像头或其他图像传感器等。在一些实施例中,摄像装置160也可以是非光学设备,该设备基于采集得到的若干距离数据,得到能够体现目标对象的形状、尺寸等特征的热图。在一些实施例中,摄像装置160采集的光学图像信息可以是静态图像,也是视频图像。在一些实施例中,摄像装置160包括摄像头。
网络120可以包括能够促进系统100的信息和/或数据交换的任何合适的网络。在一些实施例中,所述系统100的至少一个组件(例如,摄像装置160、处理设备140、存储设备150、医学影像装置110、至少一个终端130)可以通过网络120与系统100中至少一个其他组件交换信息和/或数据。例如,处理设备140可以通过网络120从摄像装置160获得目标对象或者目标对象的目标部位的光学图像信息。又例如,处理设备140可以通过网络120从至少一个终端130获得用户(如医生)指令。网络120可以或包括公共网络(例如,互联网)、专用网络(例如,局部区域网络(LAN))、有线网络、无线网络(例如,802.11网络、Wi-Fi网络)、帧中继网络、虚拟专用网络(VPN)、卫星网络、电话网络、路由器、集线器、交换机、服务器计算机和/或其任意组合。例如,网络120可以包括有线网络、有线网络、光纤网络、电信网络、内联网、无线局部区域网络(WLAN)、城域网(MAN)、公共电话交换网络(PSTN)、蓝牙TM网络、ZigBeeTM网络、近场通信(NFC)网络等或其任意组合。在一些实施例中,网络120可以包括至少一个网络接入点。例如,网络120可以包括有线和/或无线网络接入点,例如基站和/或互联网交换点,限束装置的目标位置信息确定系统100的至少一个组件可以通过接入点连接到网络120以交换数据和/或信息。
在一些实施例中,至少一个终端130可以与摄像装置160、医学影像装置110、处理设备140以及存储设备150中的至少一个进行通信连接。例如,至少一个终端130还可以从处理设备140获得目标对象上的待照射位置信息和/或所述限束装置的目标位置信息并进行显示输出。又例如,至少一个终端130可以获取用户的操作指令,然后将该操作指令发送至摄像装置160和/或医学影像装置110以对其进行控制(如调整图像采集视角、设置限束装置工作参数等)。还例如,至少一个终端130可以从处理设备140获得目标部位的方位分析结果,或从摄像装置160获取采集的图像信息。还例如,至少一个终端130可以获取用户的操作指令,然后将该操作指令发送至医学影像装置110或摄像装置160以对其进行控制(如调整图像采集视角、设置医学影像装置的工作参数等)。
在一些实施例中,至少一个终端130可以包括移动设备131、平板计算机132、膝上型计算机133等或其任意组合。例如,移动设备131可以包括移动电话、个人数字助理(PDA)、医用设备等或其任意组合。在一些实施例中,至少一个终端130可以包括输入设备、输出设备等。输入设备可以包括字母数字和其他按键,用于输入控制指令对摄像装置160和/或医学影像装置110进行控制。输入设备可以选用键盘输入、触摸 屏(例如,具有触觉或触觉反馈)输入、语音输入、手势输入或任何其他类似的输入机制。通过输入设备接收的输入信息可以通过如总线传输到处理设备140,以进行进一步处理。其他类型的输入设备可以包括光标控制装置,例如鼠标、轨迹球或光标方向键等。输出设备可以包括显示器、扬声器、打印机等或其任意组合,用于输出摄像装置160采集的目标对象光学图像信息和/或医学影像装置110检测得到的医学图像。在一些实施例中,至少一个终端130可以是处理设备140的一部分。
处理设备140可以处理从摄像装置160、存储设备150、至少一个终端130或系统100的其他组件获得数据和/或指令。例如,处理设备140可以从摄像装置160获的目标对象的光学图像信息,对其进行处理以得出该目标对象的体态信息。体态信息可以包括但不限于目标对象的身高、体宽信息和骨骼关节点信息等。例如,处理设备140可以从摄像装置160获得目标对象的光学图像信息,对其进行处理以得出目标对象的目标部位的方位信息。又例如,处理设备140可以从存储设备150获取预先存储的指令,并执行该指令以实现如下所述的限束装置的目标位置信息的确定方法。
在一些实施例中,处理设备140可以是单一服务器或服务器组。服务器组可以是集中式的或分布式的。在一些实施例中,处理设备140可以是本地或远程的。例如,处理设备140可以通过网络120从摄像装置160、存储设备150和/或至少一个终端130访问信息和/或数据。又例如,处理设备140可以直接连接到摄像装置160、至少一个终端130和/或存储设备150以访问信息和/或数据。在一些实施例中,处理设备140可以在云平台上实现。例如,云平台可以包括私有云、公共云、混合云、社区云、分布式云、云间云、多云等或其任意组合。
在一些实施例中,医学影像装置110可以基于处理设备140得出的目标对象上的待照射位置信息和/或所述限束装置的目标位置信息进行工作。例如,医学影像装置110可以根据处理设备140处理得出的限束装置的目标位置信息(如:限束装置的位置、限束装置的开口大小等),或设定限束装置的位置以及开口大小;在一些实施例中,医学影像装置110还可以根据处理设备140得出的目标对象上的待照射位置信息以及限束装置的初始位置信息确定限束装置的目标位置信息。
在一些实施例中,医学影像装置110可以基于处理设备140确定的目标对象上目标部位的方位信息进行扫描。例如,医学影像装置110可以根据处理设备140处理得到的目标对象的目标部位的方位信息(例如,左部膝盖)来扫描目标部位(例如,左膝盖),进而得到目标部位的医学图像。
存储设备150可以存储数据、指令和/或任何其他信息。在一些实施例中,存储设备150可以存储摄像装置160采集的光学图像信息以及医学影像装置110采集的医学图像。在一些实施例中,存储设备150可以存储从摄像装置160、至少一个终端130和/或处理设备140获得的数据。在一些实施例中,存储设备150可以存储目标对象的历史图像信息库,该历史图像信息库中的每一张历史图像对应一个目标对象的光学图像。在一些实施例中,存储设备150还可以存储与目标对象相关的协议信息,协议信息至少包括所述目标对象的目标部位信息,处理设备140可以基于该协议信息获取目标对象的目标部位信息。在一些实施例中,存储设备150还可以存储限束装置的目标位置信息,医学影像装置110可以从存储设备150获取预先存储的限束装置的目标位置信息,并根据该限束装置的目标位置信息控制限束装置运动。在一些实施例中,存储设备150还可以存储预设的阈值范围以及提示信息,处理设备140可以基于该存储的提示信息、预设的阈值范围和目标位置信息进行判断,如果所述目标位置信息大于预设的阈值范围,则发出提示信息。
在一些实施例中,存储设备150可以存储医学影像装置110采集的医学图像。在一些实施例中,存储设备150可以存储从医学影像装置110、至少一个终端130和/或处理设备140获得的数据。在一些实施例中,存储设备150还可以存储目标部位与方位信息之间的对应关系,处理设备140可以基于该对应关系以及处理得到的目标部位得出目标部位的方位信息。
在一些实施例中,存储设备150可以存储处理设备140用来执行或使用来完成本申请中描述的示例性方法的数据和/或指令。在一些实施例中,存储设备150可以包括大容量存储器、可移动存储器、易失性读写存储器、只读存储器(ROM)等或其任意组合。示例性的大容量存储器可以包括磁盘、光盘、固态磁盘等。示例性可移动存储器可以包括闪存驱动器、软盘、光盘、存储卡、压缩盘、磁带等。示例性易失性读写存储器可以包括随机存取存储器(RAM)。在一些实施例中,存储设备150可以在云平台上实现。
在一些实施例中,存储设备150可以连接到网络120以与系统100中的至少一个其他组件(例如,处理设备140、至少一个终端130)通信。系统100中的至少一个组件可以通过网络120访问存储设备150中存储的数据或指令。在一些实施例中,存储设备150可以是处理设备140的一部分。
应该注意的是,上述描述仅出于说明性目的而提供,并不旨在限制本申请的范 围。对于本领域普通技术人员而言,在本申请内容的指导下,可做出多种变化和修改。可以以各种方式组合本申请描述的示例性实施例的特征、结构、方法和其他特征,以获得另外的和/或替代的示例性实施例。例如,存储设备150可以是包括云计算平台的数据存储设备,例如公共云、私有云、社区和混合云等。然而,这些变化与修改不会背离本申请的范围。
在一些实施例中,该医疗操作的相关参数的确定系统还可以包括光学图像信息获取模块、目标部位信息确定模块以及医疗操作相关参数确定模块。其中,光学图像信息获取模块用于获取目标对象的光学图像信息。目标部位信息确定模块用于确定目标对象的目标部位信息。医疗操作相关参数确定模块用于至少基于所述光学图像信息和所述目标部位信息,确定医疗操作的相关参数。
在一些实施例中,目标部位信息确定模块还用于获取目标对象的目标部位信息。在一些实施例中,目标部位信息确定模块还用于对所述光学图像信息进行处理,确定所述目标对象中的目标部位信息。在一些实施例中,当医疗操作的相关参数包括目标对象上的待照射位置信息和/或所述限束装置的目标位置信息时,医疗操作相关参数确定模块还用于:至少根据所述光学图像信息、所述目标部位信息,确定所述目标对象上的待照射位置信息和/或所述限束装置的目标位置信息。在一些实施例中,当医疗操作的相关参数包括目标对象对应的医学图像的标记信息,目标部位信息包括目标部位的方位信息时,医疗操作相关参数确定模块还用于:基于所述目标部位的方位信息,确定医学图像的标记信息。
下面结合不同场景对医疗操作的相关参数的确定系统和方法进行示例性说明。图2-3是关于限束装置的目标位置信息的确定系统以及方法的更多示例性描述。图4-7是关于目标部位的方位标记系统以及方法的更多示例性描述。
图2是根据本申请一些实施例所示的限束装置的目标位置信息的确定系统200的模块图。如图2所示,该目标位置信息的确定系统200可以包括光学图像信息获取模块210、目标部位信息获取模块220和目标位置信息确定模块230。其中,目标部位信息获取模块包含于目标部位信息确定模块。目标位置信息确定模块包含与医疗操作相关参数确定模块。
光学图像信息获取模块210可以用于获取目标对象的光学图像信息。
目标部位信息获取模块220可以用于获取所述目标对象的目标部位信息。在一些实施例中,目标部位信息获取模块220还可以用于获取与目标对象相关的协议信息。 协议信息至少包括所述目标对象的目标部位信息。在一些实施例中,目标部位信息获取模块220还可以用于获取所述目标对象中目标部位的医学图像。
目标位置信息确定模块230可以用于至少根据所述光学图像信息、所述目标部位信息,确定目标对象上的待照射位置信息和/或所述限束装置的目标位置信息。在一些实施例中,目标位置信息确定模块230还可以用于至少根据所述光学图像信息、所述医学图像,确定目标对象上的待照射位置信息和/或所述限束装置的目标位置信息。在一些实施例中,目标位置信息确定模块230还可以用于至少根据所述光学图像信息以及所述协议信息,确定目标对象上的待照射位置信息和/或所述限束装置的目标位置信息。在一些实施例中,目标位置信息确定模块230可以用于将光学图像信息以及目标部位信息输入第二机器学习模型,以确定限束装置的目标位置信息。在一些实施例中,目标位置信息确定模块230可以用于根据所述待照射位置信息以及所述初始位置信息,确定所述限束装置的目标位置信息。在一些实施例中,目标位置信息确定模块230可以用于将光学图像信息以及目标部位信息输入第一机器学习模型,以确定待照射位置信息。
在一些实施例中,目标位置信息的确定系统200还包括控制模块。控制模块可以用于判断目标位置信息是否大于预设的阈值范围。若目标位置信息小于等于预设的阈值范围,控制模块可以用于根据所述限束装置的目标位置信息控制限束装置运动。若目标位置信息大于预设的阈值范围,控制模块可以用于发出提示信息。
在一些实施例中,目标位置信息的确定系统200还包括获取初始位置获取模块,用于获取所述限束装置的初始位置信息。
在一些实施例中,目标位置信息的确定系统200还包括训练模块,用于通过以下方法获取第一机器学习模型:获取初始机器学习模型;获取初始样本训练数据,所述初始样本训练数据包括历史目标对象的历史光学图像,以及所述历史目标对象上的一个或多个目标部位的历史医学图像;根据所述历史光学图像以及所述历史医学图像的融合结果信息,确定所述历史光学图像的标记信息;所述标记信息包括所述历史光学图像中目标部位的位置信息;将所述历史光学图像以及所述历史医学图像作为输入数据,所述标记信息作为输出数据或参考标准,输入所述初始机器学习模型进行训练。
在一些实施例中,所述训练模块还可以用于通过以下方法获取第二机器学习模型:获取初始机器学习模型;获取初始样本训练数据,所述初始样本训练数据包括历史目标对象的历史光学图像以及所述历史目标对象上的一个或多个目标部位的历史医学图像;根据所述历史光学图像与所述历史医学图像的融合结果信息,确定对应的限束装 置的历史目标位置信息;将所述历史光学图像以及所述历史医学图像作为输入数据,所述对应的限束装置的历史目标位置信息作为输出数据或参考标准,输入所述初始机器学习模型进行训练。
应当理解,图2所示的系统及其模块可以利用各种方式来实现。例如,在一些实施例中,系统及其模块可以通过硬件、软件或者软件和硬件的结合来实现。其中,硬件部分可以利用专用逻辑来实现;软件部分则可以存储在存储器中,由适当的指令执行系统,例如微处理器或者专用设计硬件来执行。本领域技术人员可以理解上述的方法和系统可以使用计算机可执行指令和/或包含在处理器控制代码中来实现,例如在诸如磁盘、CD或DVD-ROM的载体介质、诸如只读存储器(固件)的可编程的存储器或者诸如光学或电子信号载体的数据载体上提供了这样的代码。本申请的系统及其模块不仅可以有诸如超大规模集成电路或门阵列、诸如逻辑芯片、晶体管等的半导体、或者诸如现场可编程门阵列、可编程逻辑设备等的可编程硬件设备的硬件电路实现,也可以用例如由各种类型的处理器所执行的软件实现,还可以由上述硬件电路和软件的结合(例如,固件)来实现。
需要注意的是,以上对于用于确定操作位置的系统及其模块的描述,仅为描述方便,并不能把本申请限制在所举实施例范围之内。可以理解,对于本领域的技术人员来说,在了解该系统的原理后,可以在不背离这一原理的情况下,对各个模块进行任意组合,或者构成子系统与其他模块连接。例如,在一些实施例中,图2中披露的光学图像信息获取模块210、目标部位信息获取模块220和目标位置信息确定模块230可以是一个系统中的不同模块,也可以是一个模块实现上述的两个或两个以上模块的功能。又例如,各个模块可以共用一个存储模块,各个模块也可以分别具有各自的存储模块。诸如此类的变形,均在本申请的保护范围之内。
本申请一些实施例中的目标部位的方位标记系统可以包括光学图像信息获取模块和方位信息确定模块。在一些实施例中,目标部位的方位标记系统还可以包括方位信息标记模块。
其中,光学图像信息获取模块,用于获取目标对象的目标部位的光学图像信息。
方位信息确定模块,用于对所述光学图像信息进行处理,确定目标对象中目标部位的方位信息。
方位信息标记模块,用于基于所述方位信息对所述目标对象的医学图像进行标记。在一些实施例中,所述方位信息标记模块还用于:获取所述目标部位的医学图像; 将所述方位信息标记在所述医学图像中。在一些实施例中,所述方位信息标记模块还用于:基于所述方位信息确定对应的协议信息;基于所述协议信息对所述目标部位的医学图像进行标记。
图3为根据本申请一些实施例所示的示例性限束装置的目标位置信息的确定方法流程图。具体的,确定方法300可以由目标位置信息的确定系统200执行。如图1所示,该目标位置信息的确定方法300可以包括:
步骤310,获取目标对象的光学图像信息。具体的,该步骤310可以由光学图像信息获取模块210执行。
在一些实施例中,目标对象可以理解为待照射的对象,可以包括人体或其它实验体。所述其他实验体可以包括其他动物活体,或非活体的实验模型。光学图像信息可以为目标对象的可见光图像信息。例如,光学图像信息可以为人体或其它实验体的可见光全身图像、能够反映人体或其它实验体的全身图像的视频。在一些实施例中,光学图像信息获取模块210可以通过摄像装置获取目标对象的光学图像信息。在一些实施例中,所述摄像装置可以固定地设置在医学影像装置上,也可以设置在医学影像装置外一个固定的位置。本说明书不对摄像装置的固定位置做具体限定,只要摄像装置能够通过一张或多张图片来获取目标对象的全身图像即可。
步骤320,获取目标对象的目标部位信息。具体的,该步骤320可以由目标部位信息获取模块220执行。
目标部位是指医疗任务中目标对象上的待照射器官。目标部位信息是指能够反映所述待照射器官的信息。例如,目标部位信息可以为待照射器官的名称。又例如,目标部位信息可以为待照射器官的具体位置信息。在一些实施例中,目标部位信息获取模块220可以获取与目标对象相关的协议信息,根据该协议信息获取目标对象的目标部位信息,其中,协议信息可以包括目标对象的目标部位信息。
在一些实施例中,目标部位信息获取模块220可以获取目标对象中目标部位的医学图像,医生根据医学图像获取目标对象的目标部位。在一些实施例中,目标部位信息获取模块220可以获取目标对象对应的医学图像的标记信息,然后根据标记信息确定目标对象的目标部位。其中,目标对象对应的医学图像中包括目标对象上的目标拍摄部位,标记信息用于反映目标拍摄部位的名称以及目标拍摄部位相对目标对象的方位。关于医学图像的标记信息的更多描述可参见本说明书其他部分,例如,图4-7中的至少部分内容。在一些实施例中,目标部位信息获取模块220可以采用其它任意方式获取目标 对象的目标部位信息。例如,由目标对象告知医生目标部位信息。在一些实施例中,所述医学图像可以理解为利用医学影像装置采集的医学图像。医学影像装置110可以包括但不限于DR设备、CT设备、X射线机、直线加速器以及C臂机等。
在一些实施例中,步骤320中获取目标对象的目标部位信息还可以通过对目标对象的光学图像信息进行处理,以确定目标对象的目标部位信息。关于对光学图像信息进行处理得到目标部位信息的更多描述可参见本说明书其他部分,例如,图4-7中的至少部分内容。
步骤330,确定限束装置的目标位置信息。具体的,该步骤330可以由目标位置信息确定模块230执行。
在一些实施例中,在执行步骤330,确定限束装置的目标位置信息时,处理设备可以对光学图像信息、目标部位信息进行处理,直接确定限束装置的目标位置信息,详见步骤336;处理设备也可以对光学图像信息、目标部位信息进行处理,确定待照射位置信息;然后基于待照射位置信息以及限束装置的初始位置信息,确定限束装置的目标位置信息,详见步骤332以及步骤334。
步骤332,根据光学图像信息以及目标部位信息,确定待照射位置信息。
在一些实施例中,待照射位置可以理解为需要照射到目标对象上的待照射区域的位置,也可以叫待照射区域的位置。待照射位置信息是指能够反映所述待照射区域的位置的信息。具体的,在一些实施例中,待照射位置信息可以是光学图像上确定的目标对象的待照射器官位置信息。例如,待照射位置信息可以包括待照射器官反映在光学图像上的位置、待照射器官反映在光学图像上的面积大小等中的一种或多种。
在一些实施例中,处理设备可以基于目标部位信息对光学图像进行处理,然后输出与目标部位对应的待照射位置信息。在一些实施例中,当目标部位信息包括目标部位对应的医疗图像时,处理设备可以对光学图像以及医疗图像进行图像融合处理,在光学图像上确定出目标部位反映在目标对象表面的位置。例如,可以直接在光学图像上显示出目标部位的轮廓。在一些实施例中,当目标部位信息从对应的协议信息中获取时,处理设备可以对光学图像进行处理分析,该处理分析用于确定光学图像中的目标对象的大概器官位置,并基于协议信息中的目标部位确定出反映在目标对象表面的目标部位的位置信息。例如,可以直接在光学图像中显示出与协议信息中的目标部位对应的器官的轮廓或区域。
在一些实施例中,处理设备可以使用预设算法对上述一个或多个步骤进行处理。 所述预设算法可以包括但不限于机器学习模型等。例如,处理设备可以利用机器学习模型基于光学图像信息以及目标部位信息,直接确定出目标部位反映在目标对象表面的位置,即待照射位置。
在一些实施例中,预设算法可以为第一机器学习模型。在一些实施例中,当目标部位信息包括医学图像时,可以将目标对象的光学图像以及目标部位的医学图像输入第一机器学习模型,第一机器学习模型可以直接输出待照射位置信息。在一些实施例中,第一机器学习模型输出的待照射位置信息可以包括带有位置标记的光学图像。在一些实施例中,第一机器学习模型输出的待照射位置信息可以包括待照射位置的坐标信息。在一些实施例中,当目标部位信息从对应的协议信息中获取时,可以将协议信息进行处理,提取协议信息中的目标部位信息,然后将目标部位信息进行特征处理,并将处理后的与目标部位信息对应的特征信息以及目标对象的光学图像输入第一机器学习模型,对应地,第一机器学习模型可以直接输出带有位置标记的光学图像或者直接输出待照射位置的坐标信息。具体的,第一机器学习模型的训练过程详见后文描述。
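The paragraph above (step 332) describes inputting the optical image, together with feature-encoded target part information extracted from the protocol, into the first machine learning model, which then outputs a position-marked optical image or the coordinates of the position to be irradiated. Below is a minimal, non-authoritative sketch of such an inference call; the network layout, the part-ID encoding, and the normalized bounding-box output convention are assumptions made for illustration, not the disclosed implementation.

```python
import torch
import torch.nn as nn

# Sketch only: the first model is assumed to regress a normalized bounding box
# (x0, y0, x1, y1) for the region to be irradiated on the optical image.
class FirstModel(nn.Module):
    def __init__(self, num_parts: int):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=4, padding=1), nn.ReLU(), nn.Flatten())
        self.part_embed = nn.Embedding(num_parts, 16)  # encoded target part info
        self.head = nn.LazyLinear(4)  # outputs normalized (x0, y0, x1, y1)

    def forward(self, image, part_id):
        feat = torch.cat([self.backbone(image), self.part_embed(part_id)], dim=1)
        return torch.sigmoid(self.head(feat))

model = FirstModel(num_parts=32).eval()
image = torch.rand(1, 3, 128, 128)   # optical image of the target object
part_id = torch.tensor([7])          # e.g. an encoded "left knee" from the protocol
with torch.no_grad():
    box = model(image, part_id)[0]   # region to be irradiated, normalized coords
print(box.tolist())
```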
步骤334,根据待照射位置信息以及限束装置的初始位置信息,确定限束装置的目标位置信息。
限束装置的初始位置是指还未开始照射时,限束装置运动前的位置。限束装置的初始位置信息是指能够反映限束装置的初始位置信息。具体的,限束装置的初始位置信息可以理解为限束装置运动前限束装置到待照射的目标对象之间的距离。
限束装置的目标位置是指限束装置运动后需要到达的位置,且该位置与所述待照射位置信息相对应。限束装置的目标位置信息是指能够反映限束装置的目标位置的信息。在一些实施例中,限束装置的目标位置信息可以包括限束装置达到目标位置后叶片的位置(如限束装置达到目标位置后叶片的空间坐标位置)或限束装置达到目标位置后叶片在限束装置端面内的开口大小(如限束装置达到目标位置后叶片在限束装置端面的位置)等。
在一些实施例中,限束装置的初始位置信息可以通过目标对象相关的协议信息获取,所述协议信息可以包括限束装置的初始位置信息。在一些实施例中,限束装置的初始位置信息还可以通过其它方式获取。其它方式可以包括自动获取方式和人工获取方式。自动获取方式可以包括系统从距离检测传感器、激光检测装置以及红外检测装置等处直接获取对应的测量数据。人工获取方式可以包括但不限于医生借助额外的激光检测装置人工测量限束装置上叶片的位置、医生借助额外的红外检测器人工测量限束装置上 叶片的位置等。例如,可以通过医生将激光检测装置放置在合适的位置后,向限束装置发射激光,再由激光检测装置上的激光接收器接受激光信号,从而由激光检测装置确定限束装置上叶片的位置,然后医生通过外部输入设备手动输入所述叶片的位置到确定模块230。外部输入设备可以包括但不限于鼠标、键盘等。在一些实施例中,所述限束装置的初始位置信息还可以在算法中预先设定。
在一些实施例中,确定模块230可以根据待照射位置信息以及限束装置的初始位置信息,确定限速装装置的目标位置信息。具体地,确定模块230通过限束装置的初始位置确定限束装置距离目标对象的距离,并基于目标对象上待照射区域的位置信息以及限束装置距离目标对象的距离,计算确定限束装置的目标位置信息,使得射线透过限束装置的目标位置照射到目标对象的区域能够与待照射区域尽可能地匹配。
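The computation above, where the determination module uses the beam-limiting device's distance from the target object together with the position of the region to be irradiated to derive the device's target position, reduces in the simplest case to a similar-triangles projection from the patient surface back to the collimator plane. The sketch below assumes a point source and a flat irradiated region; a real system would also account for couch height, magnification, and leaf geometry.

```python
from dataclasses import dataclass

@dataclass
class Geometry:
    source_to_collimator: float  # focal spot to collimator plane (mm)
    source_to_object: float      # focal spot to patient surface (mm)

def collimator_opening(region_mm: tuple, geom: Geometry) -> tuple:
    """Scale a (width, height) region on the patient surface back to the
    collimator plane, assuming a point source and similar triangles."""
    scale = geom.source_to_collimator / geom.source_to_object
    w, h = region_mm
    return (w * scale, h * scale)

# Example: a 200 mm x 150 mm region at 1000 mm, collimator 200 mm from the source.
print(collimator_opening((200.0, 150.0), Geometry(200.0, 1000.0)))
# -> (40.0, 30.0): the leaves should open to 40 mm x 30 mm
```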
处理设备根据光学图像信息、目标部位信息可以准确地确定待照射位置信息,然后再根据待照射位置信息和限束装置的初始位置信息,准确地确定限束装置的目标位置信息。该实施例适应于限束装置的初始位置会经常发生变化的情形,在该实施例中,先确定出待照射区域位置,然后基于限束装置的当前位置计算出对应的限束装置的目标位置,可以适应更多的限束装置初始位置不同的场景,具有更多的灵活性。
步骤336,根据光学图像信息以及目标部位信息,确定限束装置的目标位置信息。
限束装置的目标位置与步骤334中的目标位置相同,此处不再赘述,具体详见步骤334中对应部分描述。在一些实施例中,处理设备可以使用预设算法对上述一个或多个步骤进行处理。其中,预设算法可以包括能够确定限束装置的目标位置信息的任意算法。任意算法可以理解为能够体现光学图像信息和目标部位信息与限束装置的目标位置信息之间的对应关系的预设指令。在一些实施例中,确定模块230可以将光学图像信息以及目标部位信息输入预设算法,然后预设算法直接输出对应的限束装置的目标位置信息。在该实施例中,需要预先考虑限束装置的初始位置,如果初始位置发生变化,需要在算法中进行相应地调整。
在一些实施例中,预设算法可以包括但不限于机器学习模型等。在一些实施例中,预设算法可以为第二机器学习模型,将将所述光学图像信息以及所述目标部位信息输入第二机器学习模型,以确定所述限束装置的目标位置信息。在一些实施例中,限束装置在实际照射时的初始位置与训练时的初始位置保持一致的情况下,确定模块230可以将光学图像信息以及目标部位信息输入第二机器学习模型,第二机器学习模型输出限 束装置的目标位置坐标值,从而直接确定限束装置的目标位置信息。
在一些实施例中,当目标部位信息包括医学图像时,可以将目标对象的光学图像以及目标部位的医学图像输入第二机器学习模型,第二机器学习模型可以直接输出限束装置的目标位置信息。例如,第二机器学习模型直接输出限束装置的目标位置坐标。在一些实施例中,当目标部位信息从对应的协议信息中获取时,可以将协议信息进行处理,提取协议信息中的目标部位信息,然后将目标部位信息进行特征处理,并将处理后的与目标部位信息对应的特征信息以及目标对象的光学图像输入第二机器学习模型,对应地,第二机器学习模型可以直接输出限束装置的目标位置信息,例如,限束装置的目标位置的坐标信息。另外,第二机器学习模型的训练过程详见后文描述。
在一些实施例中,确定限束装置的目标位置信息后,可以直接基于所述目标位置信息控制所述限束装置运动至对应的目标位置,详见步骤360。在一些实施例中,也可以对确定的目标位置进行判断,如果所述目标位置大于预设的阈值范围时,则发出提示信息,告知当前的限束装置无法满足所述目标位置的拍摄要求,详见步骤340、350。在该实施例中,基于预设的阈值范围的进行的判断以及提醒方案可以避免限束装置在无法照射整个目标部位的情况下,还对目标部位进行照射,而导致无法进行拍摄的情况发生。
在一些实施例中,当待照射位置信息中显示目标部位在光学图像上的面积大于限束装置发出辐射束所能覆盖的最大面积时,则限束装置一次拍摄只能得到目标部位的局部医学图像。为了得到整个目标部位完整的医学图像,需要将该目标部位的拍摄分为至少两次拍摄分别进行,然后将至少两拍摄获得的医学图像拼接在一起,获得目标部位完整的医学图像。在一些实施例中,目标部位是否需要进行拼接以及分成几段拼接可以通过处理设备来确定,也可以根据协议信息确定。
在一些实施例中,当需要将目标部位分成至少两部分进行照射时,所述待照射位置信息包括至少两个子待照射区域;对应的,所述限束装置的目标位置信息也包括与分别所述两个子待照射区域对应的子目标位置信息。例如,目标部位需要分成两部分进行拍摄,第一次拍摄的部位是目标部位的上半部分,第二次拍摄的部位是目标部位的下半部分。那么需要基于目标部位的上半部分确定在目标对象上对应的上半部分待照射区域,以及基于目标部位的下半部分确定出在目标对象上的对应的下半部分的待照射区域,这两个待照射区域可视为所述子待照射区域。基于所述两个子待照射区域分别确定的限束装置的两组目标位置信息可视为子目标位置信息。
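The paragraph above splits a target part that exceeds one exposure into sub-regions to be irradiated, each with its own sub-target position for the beam-limiting device. A toy helper for an even vertical split is sketched below; the normalized-coordinate convention and the purely vertical split are assumptions for illustration.

```python
def split_region(box, n: int):
    """Split a normalized region (x0, y0, x1, y1) into n vertical sub-regions,
    e.g. the upper/lower halves of a long target part needing two exposures."""
    x0, y0, x1, y1 = box
    step = (y1 - y0) / n
    return [(x0, y0 + i * step, x1, y0 + (i + 1) * step) for i in range(n)]

for sub in split_region((0.2, 0.1, 0.8, 0.9), 2):
    print(sub)  # each sub-region gets its own collimator sub-target position
```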
在一些实施例中,是否需要将目标部位的拍摄分成多次拍摄,以及分成几次拍摄,可以通过协议信息确定。例如,协议信息中可以包括目标部位信息,以及与所述目标部位对应的两个子目标部位。对应的,子待照射区域可以根据协议信息中的子目标部位确定。在一些实施例中,处理设备可以基于协议信息中的目标部位的子目标部位信息对目标对象的光学图像进行处理,得到与所述子目标部位对应的子待照射区域。具体过程可参考本说明书步骤332部分相关描述。
在一些实施例中,是否需要将目标部位的拍摄分成多次拍摄,以及分成几次拍摄,可以通过处理设备自动地确定。例如,处理设备可以基于图像信息以及目标部位信息自动地规划出与所述目标部位对应的几个待照射子区域。
在一些实施例中,所述待照射子区域确定后,可以基于限束装置的输出位置信息,确定每个待照射子区域对应的几个目标位置信息,所述几个目标位置信息可视为目标子位置信息。基于待照射区域确定限束装置的目标位置的详细描述可参见本说明书其他部分。
步骤340,判断目标位置信息是否大于预设的阈值范围。
预设的阈值范围是指限束装置发出的辐射束所能覆盖的目标部位的范围。在一些实施例中,预设的阈值范围可以存储在存储设备150中。在一些实施例中,预设的阈值范围可以根据医生的以往经验所得。在一些实施例中,预设的阈值范围可以根据目标对象的体征信息进行设定。比如,身高或体宽等在一定范围内的目标对象,其对应的阈值范围可以为第一组。身高(或体宽等)在其他范围内的目标对象,其对应的阈值范围可以为第二组。在一些实施例中,可以根据目标对象的体征信息通过搜索限束装置的目标部位数据库,将与该目标对象的体征信息对应的阈值范围作为该预设的阈值范围。
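Steps 340 and 350 check the computed target position against a preset threshold range grouped by patient physique, and issue a prompt when the range is exceeded. The sketch below shows one way such a lookup-and-check could be wired together; the height groupings and limit values are invented placeholders, not values from the disclosure.

```python
# Hypothetical threshold table keyed by patient height range (cm): each entry
# holds the maximum collimator opening (w, h) in mm the device can provide.
THRESHOLDS = [((0, 130), (250.0, 250.0)), ((130, 999), (430.0, 430.0))]

def lookup_threshold(height_cm: float):
    for (lo, hi), limit in THRESHOLDS:
        if lo <= height_cm < hi:
            return limit
    raise ValueError("no threshold group for this height")

def check_target_position(opening_mm, height_cm: float) -> bool:
    max_w, max_h = lookup_threshold(height_cm)
    if opening_mm[0] > max_w or opening_mm[1] > max_h:
        print("Prompt: requested opening exceeds the preset threshold range;"
              " pause the exposure and adjust the beam-limiting device.")
        return False
    return True

check_target_position((300.0, 200.0), height_cm=120)  # triggers the prompt
```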
在一些实施例中,当确定系统200判断目标位置信息小于等于预设的阈值范围时,确定系统200可以执行步骤360。当确定系统200的目标位置信息大于预设的阈值范围时,确定系统200可以执行步骤350。
步骤350,发出提示信息。
当目标位置信息超出预设的阈值范围时,处理设备可以发出提示信息,以告诉医护人员当前限束装置无法达到系统计算确定的目标位置。此时,医护人员可以暂停拍摄,并根据提示信息中的记录信息对限束装置进行调整。在一些实施例中,提示信息可以包括是否超出预设的阈值范围、超出多少以及目标位置信息的具体内容,以使得后续对限束装置的调整时做参考。具体的,提示信息可以包括文字提示、语音提示、视频提 示、灯光提示等中的一种或多种。例如,当确定系统200的目标位置信息大于预设的阈值范围时,确定系统200发出警报声。通过提示信息的设置,医生可以快速发现问题所在,及时停止后续拍摄操作,并根据提示信息中的记录信息对限束装置进行调整,提高工作效率。
步骤360,根据限束装置的目标位置信息控制限束装置运动。
在一些实施例中,确定系统200可以根据目标位置信息控制限束装置运动。例如,确定系统200可以根据限束装置整体的目标位置坐标值,控制限束装置从初始位置整体地移动到待照射位置。又例如,当限束装置移动到目标位置后,控制系统200可以控制叶片在限束装置端面的开口位置,以使射线透过限束装置的目标位置照射到目标对象的区域能够与待照射区域尽可能地匹配。
在一些实施例中,确定系统200可以通过以下方式获得第一机器学习模型:确定系统200获取初始机器学习模型。在一些实施例中,确定系统200可以经由网络120从存储设备150中获取初始机器学习模型。初始机器学习模型可以包括DNN模型、CNN模型、RNN模型、LSTM网络模型等中的一种或任意几种的组合。确定系统200获取初始样本训练数据。在一些实施例中,确定系统200可以经由网络120从存储设备150中获取初始样本训练数据。在一些实施例中,初始样本训练数据可以包括历史目标对象的历史光学图像,以及历史目标对象上的一个或多个目标部位的历史医学图像。历史光学图像是指历史目标对象已拍摄的可见光图像。历史医学图像是指由医学影像装置拍摄的历史目标对象的一处或多处目标器官对应的医学图像。医学影像装置可以包括但不限于DR设备、CT设备、X射线机、直线加速器以及C臂机等。例如,历史医学图像可以为CT设备拍摄目标部位获得的图像。
在一些实施例中,确定系统200根据历史光学图像以及历史医学图像的融合结果信息,确定历史光学图像的标记信息。在一些实施例中,融合结果信息是指历史光学图像与历史医学图像之间的目标部位位置对应关系。例如,历史医学图像为肺部的X射线图像,历史光学图像为目标对象的全身可见光图像,融合结果信息可以为历史光学图像所对应目标部位在历史光学图像上的位置。标记信息可以包括历史光学图像中目标部位的位置信息。确定系统200将历史光学图像以及历史医学图像作为输入数据,标记信息作为输出数据,输入初始机器学习模型进行训练,以获得训练好的第一机器学习模型。
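The training procedure above for the first machine learning model pairs historical optical images with marking information derived from fusing them with historical medical images. The toy loop below shows the shape of such training under heavy simplification: random tensors stand in for the historical data, a linear head stands in for the DNN/CNN/RNN/LSTM options listed above, and the fusion-derived position labels are random placeholders.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholders for historical training pairs; the (x0, y0, x1, y1) labels play
# the role of marking information derived from the optical/medical image fusion.
images = torch.rand(64, 3, 64, 64)
boxes = torch.rand(64, 4)

loader = DataLoader(TensorDataset(images, boxes), batch_size=8, shuffle=True)
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 4), nn.Sigmoid())
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.SmoothL1Loss()

for epoch in range(3):
    for img, target in loader:
        optim.zero_grad()
        loss = loss_fn(model(img), target)
        loss.backward()
        optim.step()
```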
在一些实施例中,确定系统200可以通过以下方式获得第二机器学习模型:确定系统200获取初始机器学习模型。在一些实施例中,确定系统200可以经由网络120 从存储设备150中获取初始机器学习模型。初始机器学习模型可以包括DNN模型、CNN模型、RNN模型、LSTM网络模型等中的一种或任意几种的组合。确定系统200获取初始样本训练数据。在一些实施例中,确定系统200可以经由网络120从存储设备150中获取初始样本训练数据。在一些实施例中,初始样本训练数据可以包括历史目标对象的历史光学图像以及历史目标对象上的一个或多个目标部位的历史医学图像。
在一些实施例中,确定系统200根据历史光学图像与历史医学图像的融合结果信息,确定对应的限束装置的历史目标位置信息。历史目标位置是指与历史待照射区域对应的限束装置的目标位置。其中,历史待照射区域可以根据历史光学图像以及历史医学图像的融合结果确定。具体的,确定系统200可以根据历史医学图像获取目标部位信息,再将目标部位信息标记在历史光学图像的对应位置上,从而得到历史目标对象上历史待照射区域,然后基于历史目标对象上历史待照射区域,计算确定限束装置的历史目标位置信息。
在一些实施例中,确定系统200将历史光学图像以及历史医学图像作为输入数据。确定系统200对应的限束装置的历史目标位置信息作为输出数据,输入初始机器学习模型进行训练,以获得训练好的第二机器学习模型。
进一步地,在一些实施例中,第二机器学习模型在训练过程中也可以考虑限束装置的初始位置。例如,待确定历史待照射区域后,可以基于限束装置的初始位置来确定限束装置的历史目标位置信息。对应的,也可以把历史光学图像、限束装置的初始位置以及历史医学图像一起作为输入数据输入进行训练。
在一些实施例中,确定系统200还可以通过以下方式获得第一机器学习模型或第二机器学习模型:确定系统200获取初始机器学习模型。在一些实施例中,确定系统200可以经由网络120从存储设备150中获取初始机器学习模型。初始机器学习模型可以包括DNN模型、CNN模型、RNN模型、LSTM网络模型等中的一种或任意几种的组合。确定系统200获取初始样本训练数据。在一些实施例中,确定系统200可以经由网络120从存储设备150中获取初始样本训练数据。在一些实施例中,初始样本训练数据可以包括历史目标对象的历史光学图像、历史目标对象的年龄信息以及历史目标对象上的一个或多个目标部位的历史医学图像。确定系统200根据历史光学图像、目标对象的年龄信息以及历史医学图像的融合结果信息,确定历史光学图像的标记信息。在一些实施例中,标记信息可以包括历史光学图像中目标部位的位置信息或限束装置的历史目标位置信息。目标位置信息的确定系统200将历史光学图像、目标对象的年龄信息以 及历史医学图像作为输入数据,标记信息作为输出数据或参考标准,输入初始机器学习模型进行训练。标记信息可以为历史光学图像的标记信息,在历史图像的标记信息上标记有历史医学图像对应目标部位以及目标对象的年龄信息 该模型的训练数据中引入了目标对象的年龄信息,体现出年龄对限束装置的目标位置信息的影响,能更好的实现对儿童辐射伤害防护。
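The variant above adds the historical target object's age to the training inputs so the model can reflect the influence of age (for example, pediatric patients) on the beam-limiting device's target position. One simple way to sketch that is to concatenate a normalized age scalar with the image features, as below; the normalization scheme and tensor shapes are illustrative assumptions only.

```python
import torch
import torch.nn as nn

class AgeAwareModel(nn.Module):
    """Toy model whose head sees both image features and a normalized age."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Flatten(), nn.Linear(3 * 64 * 64, 32), nn.ReLU())
        self.head = nn.Linear(32 + 1, 4)

    def forward(self, image, age_years):
        age = (age_years.float() / 100.0).unsqueeze(1)  # crude normalization
        feat = torch.cat([self.backbone(image), age], dim=1)
        return torch.sigmoid(self.head(feat))

model = AgeAwareModel()
out = model(torch.rand(2, 3, 64, 64), torch.tensor([6, 42]))
print(out.shape)  # torch.Size([2, 4])
```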
应当注意的是,上述有关流程300的描述仅仅是为了示例和说明,而不限定本申请的适用范围。对于本领域技术人员来说,在本申请的指导下可以对流程300进行各种修正和改变。然而,这些修正和改变仍在本申请的范围之内。
图4是根据本说明书的一些实施例所示的目标方位的标记系统的示例性流程图。
在一些实施例中,流程400可以由目标部位的方位标记系统执行,流程400其包括:
步骤410,获取目标对象的图像信息。在一些实施例中,步骤410可以由图像信息获取模块执行。
目标对象包括待拍摄医学图像的对象,例如,患者。目标对象的图像信息包括目标对象的目标部位的图像信息。
图像信息是指采用摄像装置获取的目标对象(例如,人体及身体各部位或器官)的图像。在一些实施例中,图像可以包括目标对象的静态影像或视频影像。在一些实施例中,静态图像可以包括照片、图片等静止存在的图像。在一些实施例中,视频图像是指动态图像,可以包括但不限于视频、动画等。在一些实施例中,可以从视频图像中引出视频流,视频流可以包括多帧静态图像。在一些实施例中,图像可以是光学图像,也可以是非光学图像。对应地,图像信息可以包括光学图像信息(包括可见光图像和非可见光图像),也可以包括非光学图像信息。在一些实施例中,摄像装置可以是光学设备,例如摄像头、或其他图像传感器等。在一些实施例中,摄像装置160也可以是非光学设备,该设备基于采集得到的若干距离数据,得到能够体现目标对象的形状、尺寸等特征的热图。
在一些实施例中,摄像装置160可以包括任何具有二维或者三维图像捕捉功能的装置。在一些实施例中,图像信息至少包括目标对象中目标部位相对医学影像装置110的摆位信息,处理器可以基于摆位信息判断出目标部位。在一些实施例中,摆位信息包括在医学影像装置的射线照射区域是否存在待检测对象,或者在医学影像装置的摆放台(例如,病床)上的是否存在待检测对象。其中,摆放台或射线照射区域可视为可 摆位区域,检测到处于可摆位区域的待检测对象可视为目标对象的待检测的目标部位。
在一些实施例中,还可以基于目标部位的图像信息,来调整医学影像装置110,以使目标部位处于射线源的射线路径中。在一些实施例中,调整医学影像装置110的操作可以手动执行,也可以由机器自动地执行。在一些实施例中,调整医学影像装置可以包括调整医学影像装置110的射线源,也可以包括调整医学影像装置110的探测器,或者包括调整探测器和射线源,只要调整之后能够使目标部位处于射线源的射线路径中即可。
在一些实施例中,在目标对象对目标部位调整姿态和/或位置以使其处于医学影像装置射线源的射线路径的过程中,从而可以判断出处于可摆位区域内的目标部位。
在一些实施例中,在手动或者自动调节医学影像装置110的射线源移动以对准目标部位的过程中可以判断出目标部位。例如,患者已进入医学影像装置110的待拍摄区域进行摆位,现目标部位位于医学影像装置110的射线源的左侧,则可以调节目标部位向右侧移动使其处于医学影像装置110射线源的射线路径中,也可以调节医学影像装置110的射线源向左侧移动使目标部位处于医学影像装置110的射线源的射线路径中,在此过程中,处理器可以基于采集到的图像信息(例如,对应该过程的视频信息)判断出处于可摆位区域的目标部位。
图像信息获取模块可以通过有线或者无线获取摄像装置160捕捉到的图像信息,并进一步识别图像信息中的目标对象。在一些实施例中,系统可以对于输入的视频图像导出视频流,进行逐帧处理。在一些实施例中,该处理可以包括对图像的滤波去噪、图像灰度的归一化、图像的水平旋转、图像尺度大小的校正等。在一些实施例中,该处理还可以包括对图像中的目标对象或者目标部位进行识别或者分割。
步骤420,对图像信息进行处理,确定目标部位的方位信息。在一些实施例中,步骤420可以由方位信息确定模块执行。
在一些实施例中,目标部位可以包括目标对象的整体或一部分组织或者器官。例如,目标部位可以包括左踝关节、胸部等。
在一些实施例中,目标部位的方位信息包括目标部位相对于目标对象的左右方位、上下方位以及前后方位中至少一种。在一些实施例中,目标部位相对目标对象的方位信息包括左右方位信息,例如,左膝关节。在一些实施例中,目标部位相对目标对象的方位信息包括上下信息,例如,上脊椎。在一些实施例中,目标部位相对目标对象的方位信息包括前后方位信息,例如,背部。在一些实施例中,目标部位相对目标对象的 方位信息是左右上下信息,例如,目标部位是左上髋关节。
在一些实施例中,目标部位的方位信息还可以包括医学影像装置中射线入射方位等。医学影像装置中射线入射方位可以包括射线初始入射的方向与目标对象或目标部位的位置关系。例如,需要拍摄医学影像的目标部位是左手,那么方位信息可以包括位于身体左侧的手部,还可以包括是手背朝向射线初始入射的方向,还是手掌朝向射线初始入射的方向。再例如,需要在DR扫描机上拍摄医学影像的目标部位是左大腿,那么方位信息可以包括位于身体的左侧的大腿部,也可以包括是目标对象脸面向射线初始入射的方向,还是目标对象背对着射线初始入射的方向,即患者是面朝射线初始入射方向平躺在扫描床上,还是背朝射线初始入射方向平卧在扫描床上。
在一些实施例中,方位信息确定模块通过网络接收包含有目标对象的目标部位的图像信息,并根据预设的算法可以对目标部位的图像进行识别,对图像信息进行处理,确定目标部位的方位信息。例如,在拍摄视频图像的这个连续的过程中,摄像装置拍摄到了包括患者摆位的过程到对患者曝光的全部影像,在该过程中,射线源和/或摆放台和/或摄像头可被配置为可移动的,方位信息确定模块可以自动识别出目标部位的方位信息。例如,在通过DR拍摄左膝关节的X光片的过程中,摄像装置拍摄到医疗影像设备将射线源移动到左膝关节的上方,则方位信息确定模块可以实时分析并识别出目标部位是左膝关节。
在一些实施例中,预设的算法可以包括对图像进行处理分析的算法。具体的,预设的算法首先对摄像装置获得的目标对象的图像信息进行图像分割等处理,根据图像信息中目标部位与医学影像装置的位置关系,从而确认图像信息中处于摆放位置的目标部位,进而分析判断出目标部位相对目标对象的方位信息。
在一些实施例中,预设的算法可以包括图像匹配算法。具体的,基于图像匹配算法计算摄像装置获得的目标对象的图像信息与关联数据库中的图像信息的匹配度,选择匹配度最高的图像信息作为获得的图像信息,进一步分析判断出目标部位相对目标对象的方位信息。在一些实施例中,图像匹配的方法包括基于灰度的图像匹配和基于特征的图像匹配。
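For the grayscale-based image matching mentioned above, OpenCV's template matching gives a concrete, if simplified, example: score the acquired frame against reference images from the associated database and keep the best match. The random arrays here stand in for real frames.

```python
import cv2
import numpy as np

# Placeholder frame from the camera device and a reference template from the
# associated database (both grayscale).
frame = np.random.randint(0, 255, (480, 640), dtype=np.uint8)
template = np.random.randint(0, 255, (120, 160), dtype=np.uint8)

result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)
print(f"best score {max_val:.3f} at {max_loc}")  # compare across templates
```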
在一些实施例中,预设算法还可以包括机器学习模型。具体的,将摄像装置获得的目标对象的图像信息输入经过训练的机器学习模型中,并根据机器学习模型的输出数据确定目标部位的方位信息。在一些实施例中,机器学习模型的输出数据可以包括目标部位的名称以及其对应的方位信息,例如,左膝关节。在一些实施例中,可以把摄像 装置获取的图像信息进行预处理,筛选出质量较高的图像,该图像可以是清晰度较高的图像,也可以是包含有全部目标对象,且目标部位处于摆放位置的图像。然后将筛选好的图像输入机器学习模型,机器学习模型可以根据输入数据自动地输出目标部位相对目标对象的方位信息。
在一些实施例中,机器学习模型可以包括深度神经网络(deep neural network,DNN),如卷积神经网络(convolutional neural network,CNN)、深度信念网络(deep belief network,DBN)、随机布尔神经网络(random Boolean network,RBN)等。深度学习模型可以包括多层神经网络结构。以卷积神经网络为例,卷积神经网络可以包括输入层、卷积层、降维层、隐藏层、输出层等。卷积神经网络中包括一个或多个卷积核,用于卷积运算。
在一些实施例中,可以利用训练样本数据对初始机器学习模型进行训练,以获取训练好的机器学习模型。训练样本数据可以包括若干目标对象的历史图像,历史图像需要包括目标部位的图像。对历史图像中的目标部位及其方位信息进行标记,例如,目标部位的标记信息可以包括左膝关节。然后将历史图像信息当作输入数据,将方位信息的标记信息当作对应的输出数据或判据标准,并将输入数据和输出数据输入初始机器学习模型进行训练,得到训练好的模型。
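The training recipe above (historical images as inputs, marked part-plus-orientation labels such as "left knee" as outputs) is, in machine-learning terms, ordinary supervised classification. A compressed sketch follows; as before, random tensors replace the historical images and a linear model replaces the deep networks named earlier.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

LABELS = ["left knee", "right knee", "left ankle", "right ankle"]  # illustrative

# Placeholders for labeled historical images and their orientation labels.
images = torch.rand(64, 3, 64, 64)
targets = torch.randint(0, len(LABELS), (64,))

loader = DataLoader(TensorDataset(images, targets), batch_size=8, shuffle=True)
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, len(LABELS)))
optim = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):
    for img, tgt in loader:
        optim.zero_grad()
        loss = loss_fn(model(img), tgt)
        loss.backward()
        optim.step()
```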
步骤430,基于方位信息对目标对象的医学图像进行标记。
系统通过医学影像装置获得目标对象的医学图像,方位信息标记模块在所获得的医学图像上标记相应的方位信息。在一些实施例中,目标对象的医学图像可以包括目标对象上的目标部位对应的医学图像。在一些实施例中,目标对象的医学图像还可以包括目标对象上非目标部位对应的医学图像。其中,非目标部位可以理解为与目标部位具有一定关联的部位。例如,如果目标部位是手掌,非目标部位可以是手掌对应的手臂,可以将目标对象的手掌的方位信息在目标对象手臂的医学图像上进行标记。
在一些实施例中,医学图像可以理解为通过医学影像装置获得的图像。在一些实施例中,医学影像装置可以包括DR成像设备、CT扫描机、MRI扫描仪、B成像扫描仪、TTM扫描设备、SPECT设备或者PET扫描仪等。对应地,在一些实施例中,医学图像包括MRI、XR、PET、SPECT、CT、超声的至少一种图像。在一些实施例中,医学图像信息也可以包括上述一种或两种以上医学图像的融合图像。在一些实施例中,图像信息和医学图像可以同时获得,也可以先后获得。
在一些实施例中,标记可以是颜色或者文字或者图形,更具体地,比如,汉字 标记、英文标记、图形标记中的一个或多个组合。在一些实施例中,每张医学图像都可以包括一个或者多个标记。例如,可以对拍摄得到的右膝关节的医学图像上标记R。可选择地,每张医学图像还可以包括一个或者多个与部位相关的标记信息。
在一些实施例中,可以对标记进行手动调整,在一些实施例中,手动调整可以包括添加一个或多个标记点、删除一个或多个标记、改变一个或多个标记的位置等。
在一些实施例中,方位信息标记模块可以基于目标部位的方位信息来直接标记在医学图像中,详细描述可参见下文步骤431a和步骤432a。在另一些实施例中,方位信息标记模块可以基于目标部位的方位信息来选择扫描协议获取医学图像并进一步标记,详细描述可参见下文步骤431b和步骤432b。
步骤431a,获取目标对象的医学图像。
在一些实施例中,医学图像可以理解为由医学影像装置获取的图像,在一些实施例中,医学图像可以包括MRI图像、CT图像、锥形束CT图像、PET图像、功能MRI图像、X射线图像、荧光透视图像、超声图像、SPECT图像等或其任意组合。医学图像可以反映患者某一部分组织、器官和/或骨骼的信息。在一些实施例中,医学图像是一个或一组二维影像。例如,黑白的X光胶片,如,CT二维扫描图像等。在一些实施例中,医学图像可以是一个三维影像,例如,根据不同断层的CT扫描影像重建的器官三维图像,或者由具有三维造影能力的设备输出的三维空间图像。在一些实施例中,医学图像还可以是一段时间内的动态影像。例如,一段反映心脏及其周围组织在一个心动周期内的变化情况的视频等。在一些实施例中,医学图像可以来自于医学影像装置,可以来自于存储模块,也可以来自于用户通过交互设备的输入。
在一些实施例中,医学影像装置根据获得的方位信息使用医学影像装置获取目标部位的医学图像,并将方位信息标记在所得的医学图像中。
步骤432a,将方位信息标记在医学图像中。
在一些实施例中,方位信息可以标记在医学图像中的某一方位,例如,可以将方位信息标记在图像的左上角处。其中,将方位信息标记到医学图像中可以理解为直接在医学图像中以可显示的方式进行标记,例如,覆盖医学图像的某一局部区域;又例如,在医学图像上增加描述,以能够显示该医学图像中目标部位的方位信息。为了不影响医生对目标部位的观察,标记的位置一般设置在医学图像的周边位置。在一些实施例中,标记的内容可以仅包括目标部位的方位信息,医生可以通过对应的医学图像判断出目标部位的名称。例如,标记的内容可以为:右侧,也可以用英文字母R表示,例如图6所 述。在一些实施例中,标记的内容可以包括目标部位的名称及其方位信息,例如,标记的内容可以为右脚踝,也可以用英文字母RIGHT ANKLE表示,例如图7所示。
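Burning a corner label such as "R" or "RIGHT ANKLE" into the image, placed at the periphery so it does not obscure the anatomy, can be sketched with OpenCV as follows; the font, position, and placeholder image are illustrative choices rather than the disclosed marking scheme.

```python
import cv2
import numpy as np

def mark_orientation(image: np.ndarray, text: str = "R") -> np.ndarray:
    """Draw an orientation label near a corner of the medical image, away from
    the anatomy (top-left here, matching the corner placement described above)."""
    marked = image.copy()
    cv2.putText(marked, text, (10, 40), cv2.FONT_HERSHEY_SIMPLEX,
                1.2, 255, 2, cv2.LINE_AA)
    return marked

medical_image = np.zeros((256, 256), dtype=np.uint8)  # placeholder X-ray
print(mark_orientation(medical_image, "RIGHT ANKLE").max())  # 255 where drawn
```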
步骤431b,基于方位信息确定对应的协议信息。
在一些实施例中,系统根据目标部位的方位信息选择对应的协议,然后依据协议对受检者的目标部位进行检查,获得由医学影像装置拍摄的目标部位的医学图像。在一些实施例中,协议是指医学影像装置拍摄参数的组合,针对一个患者拍摄的目标部位选择的相应的协议。例如,以DR拍摄左膝关节或胸部,则扫描时选择左膝关节或胸部的协议。
步骤432b,基于协议信息对医学图像进行标记。
在一些实施例中,系统进一步根据选择的协议去对医学图像进行标记。在一些实施例中,系统检测到所用的协议,并进一步将所用的协议中的方位信息标记到医学图像上或其视角内容中。
在一些实施例中,还可以对已标记的医学图像的标记进行调整。其中,调整可以包括手动调整,也可以包括机器自动调整。例如,医生发现医学图像上的标记信息的标记内容、标记位置、以及标记方式等中的至少一个有不妥之处,医生可以手动进行调整。又例如,也可以是机器对医学图像上的标记信息进行自动排查,自动调整标记信息中的不妥之处,以确保标记信息的准确性。
需要注意的是,以上对于流程图的描述,仅为描述方便,并不能把本申请限制在所举实施例范围之内。可以理解,对于本领域的技术人员来说,在了解本申请后,可能在不背离这一原理的情况下,对实施上述方法和系统的应用领域形式和细节上进行各种修正和改变。然而,这些修正和改变仍在以上描述的范围内。例如,系统可以直接识别目标部位而不经过识别目标对象。例如,在一些实施例中,目标部位的方位标记系统的示例性流程图还可以包括获取目标对象的光学图像信息;确定目标对象的目标部位信息,其中,目标部位信息包括目标部位的方位信息,然后基于所述目标部位的方位信息,确定医学图像的标记信息。在一些实施例中,还可以包括基于目标部位信息的方位信息对所述目标对象的医学图像进行标记。
图5是根据本申请一些实施例所示的对目标方位进行标记的示例性流程图。摄像装置包括摄像头。摄像头获取的图像为光学图像信息。
流程可以由用于标记目标部位方位的系统执行,系统使用医学影像装置获取目标部位的医学图像,并将生成的目标部位的方位信息标记在医学图像中。在一些实施例 中,系统包括:摄像装置,用于获取目标对象的光学图像信息;医学影像装置,用于获取目标对象上目标部位的医学图像;信息处理装置,基于预设算法对光学图像信息进行处理,确定目标部位的方位信息;并将方位信息在医学图像中标记。
在一些实施例中,目标对象首先进行摆位,摄像装置开始采集光学图像信息,图像信息获取模块分析采集到的图像,并判断在摄像装置采集到的图像中是否检测到患者。在一些实施例中,是否检测到患者表示采集的图像中是否包含患者,且患者的目标部位是否处于可摆位区域内。
当摄像装置能够清楚拍摄到患者,且目标部位处于可摆位区域内时,则摆位完成,即可以检测到患者。例如,这里以CT扫描机为例,目标对象(例如,患者)先进入医学影像装置中摆位,将患者置于CT扫描机的扫描床上,调整患者在扫描床上的姿态和/或位置、扫描床位置,以使在诸如扫描定位片时CT扫描机的射线束中部分或全部穿过目标对象的目标部位,在患者摆位过程中和/或定位完成后到进床之前和/或进床过程中和/或扫描定位片时,摄像装置同时获取光学图像信息,图像信息获取模块对所获得的图像信息进行分析,如果能分析得到包含患者的光学图像信息,且目标部位处于可摆位区域时,则表示患者的摆位完成。
如果不能在摄像装置160采集到的光学图像信息中检测到患者,或者检测到的患者的目标部位没有处于可摆位区域内时,需要再一次进行调整患者的姿势或位置和/或扫描床位置并获取新的光学图像信息,直到图像信息获取模块能够分析得到包含患者的光学图像信息,且目标对象处于可摆位区域,即摆位完成。例如,以乳腺机为例,当患者站立在乳腺机前,将乳房压迫于探测器壳体和压迫器中间,使射线束中部分或全部能穿过乳房,同时摄像装置获取此过程的光学图像信息,图像信息获取模块对所获得的光学图像信息进行分析,如果能分析得到包含患者的图像信息,则此时摆位完成。
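The loop described above (keep adjusting the patient, couch, or scan position until the acquired optical image contains the patient inside the positionable area) amounts to a detect-and-check cycle. The sketch below uses OpenCV's stock HOG pedestrian detector for the "patient detected?" test and an assumed rectangle for the positionable area; a production system would use a stronger detector and calibrated geometry.

```python
import cv2
import numpy as np

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)  # placeholder
rects, weights = hog.detectMultiScale(frame, winStride=(8, 8))

POSITIONABLE = (100, 50, 540, 430)  # assumed (x0, y0, x1, y1) of the scan area

def inside(rect, area) -> bool:
    x, y, w, h = rect
    return area[0] <= x and area[1] <= y and x + w <= area[2] and y + h <= area[3]

detected = any(inside(r, POSITIONABLE) for r in rects)
print("positioning complete" if detected else "adjust patient/couch and re-check")
```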
在一些实施例中,当摆位完成后,信息处理装置对数据进行分析得到方位信息,系统根据分析结果自动在拍摄的医学图像上完成标记。
在一些实施例中,摄像装置可以相对固定或活动地设置于医学影像装置上;在一些实施例中,摄像装置也可以独立地设置在医学影像装置之外,此时,在采集图像的过程中,摄像装置可以是固定的,也可以是活动的。在一些实施例中,摄像装置可以位于医学影像装置上可以移动的部分,也可以位于集成在医学影像装置上的可移动部分。例如,摄像装置可以位于乳腺机的C臂上或机架上等。再例如,可以在机架上固定轨道,摄像装置可以在轨道上移动。当患者的摆位完成后,方位信息确定模块根据预设算 法(例如,机器学习模型)对图像信息进行分析得到目标部位,进一步分析生成目标部位的方位信息。
在一些实施例中,摄像装置与医学影像装置可以通过有线或者无线的连接方式进行数据连接。在一些实施例中,摄像装置为摄像头。
本申请实施例可能带来的有益效果包括但不限于:(1)该方法能够根据目标对象的光学图像信息、目标部位信息确定合理的限束装置的目标位置信息,使得射线透过限束装置的目标位置照射到目标对象的区域能够与待照射区域尽可能地匹配,在保证成像/治疗质量的同时避免不必要辐射剂量对目标对象的伤害,非常适合在儿童进行辐射照射时使用,以实现儿童辐射伤害防护;(2)根据确定出的限束装置的目标位置信息能够控制限束装置快速运动到特定位置,提高工作效率;(3)本申请能够根据目标对象的光学图像信息、目标部位信息,然后确定出与目标部位对应的待照射位置;(4)基于预设算法对光学图像信息处理分析来目标部位的方位信息,提高了方位信息识别的准确性;(5)利用机器自动识别目标位置的方位信息,并基于识别出来的方位信息进行标记,提高了标记操作的准确性;(6)利用机器对医学图像中的目标部位进行自动识别并标记,实现了自动化和智能化,提高了操作效率。需要说明的是,不同实施例可能产生的有益效果不同,在不同的实施例里,可能产生的有益效果可以是以上任意一种或几种的组合,也可以是其他任何可能获得的有益效果。
上文已对基本概念做了描述,显然,对于本领域技术人员来说,上述详细披露仅仅作为示例,而并不构成对本申请的限定。虽然此处并没有明确说明,本领域技术人员可能会对本申请进行各种修改、改进和修正。该类修改、改进和修正在本申请中被建议,所以该类修改、改进、修正仍属于本申请示范实施例的精神和范围。
同时,本申请使用了特定词语来描述本申请的实施例。如“一个实施例”、“一实施例”、和/或“一些实施例”意指与本申请至少一个实施例相关的某一特征、结构或特点。因此,应强调并注意的是,本说明书中在不同位置两次或多次提及的“一实施例”或“一个实施例”或“一个替代性实施例”并不一定是指同一实施例。此外,本申请的一个或多个实施例中的某些特征、结构或特点可以进行适当的组合。
同理,应当注意的是,为了简化本申请披露的表述,从而帮助对一个或多个发明实施例的理解,前文对本申请实施例的描述中,有时会将多种特征归并至一个实施例、附图或对其的描述中。但是,这种披露方法并不意味着本申请对象所需要的特征比权利要求中提及的特征多。实际上,实施例的特征要少于上述披露的单个实施例的全部特征。
一些实施例中使用了描述成分、属性数量的数字,应当理解的是,此类用于实施例描述的数字,在一些示例中使用了修饰词“大约”、“近似”或“大体上”来修饰。除非另外说明,“大约”、“近似”或“大体上”表明所述数字允许有±20%的变化。相应地,在一些实施例中,说明书和权利要求中使用的数值参数均为近似值,该近似值根据个别实施例所需特点可以发生改变。在一些实施例中,数值参数应考虑规定的有效数位并采用一般位数保留的方法。尽管本申请一些实施例中用于确认其范围广度的数值域和参数为近似值,在具体实施例中,此类数值的设定在可行范围内尽可能精确。
针对本申请引用的每个专利、专利申请、专利申请公开物和其他材料,如文章、书籍、说明书、出版物、文档等,特此将其全部内容并入本申请作为参考。与本申请内容不一致或产生冲突的申请历史文件除外,对本申请权利要求最广范围有限制的文件(当前或之后附加于本申请中的)也除外。需要说明的是,如果本申请附属材料中的描述、定义、和/或术语的使用与本申请所述内容有不一致或冲突的地方,以本申请的描述、定义和/或术语的使用为准。
最后,应当理解的是,本申请中所述实施例仅用以说明本申请实施例的原则。其他的变形也可能属于本申请的范围。因此,作为示例而非限制,本申请实施例的替代配置可视为与本申请的教导一致。相应地,本申请的实施例不仅限于本申请明确介绍和描述的实施例。

Claims (26)

  1. A method for determining parameters related to a medical operation, comprising: acquiring optical image information of a target object; determining target part information of the target object; and determining the parameters related to the medical operation based at least on the optical image information and the target part information.
  2. The method of claim 1, wherein the determining target part information of the target object comprises: acquiring the target part information of the target object.
  3. The method of claim 1, wherein the determining target part information of the target object comprises: processing the optical image information to determine the target part information of the target object.
  4. The method of any one of claims 1-3, wherein the parameters related to the medical operation include to-be-irradiated position information on the target object and/or target position information of a beam-limiting device; and the determining the parameters related to the medical operation based at least on the optical image information and the target part information comprises: determining, based at least on the optical image information and the target part information, the to-be-irradiated position information on the target object and/or the target position information of the beam-limiting device.
  5. The method of claim 4, wherein the acquiring the target part information of the target object further comprises: acquiring protocol information related to the target object, the protocol information including at least the target part information of the target object; and the determining the to-be-irradiated position information on the target object and/or the target position information of the beam-limiting device based at least on the optical image information and the target part information comprises: determining, based at least on the optical image information and the protocol information, the to-be-irradiated position information on the target object and/or the target position information of the beam-limiting device.
  6. The method of claim 4, further comprising: acquiring initial position information of the beam-limiting device; wherein, when the to-be-irradiated position information on the target object is determined based on the optical image information and the target part information, the method further comprises: determining the target position information of the beam-limiting device based on the to-be-irradiated position information and the initial position information.
  7. The method of claim 4, wherein the determining the to-be-irradiated position information on the target object based on the optical image information and the target part information comprises: inputting the optical image information and the target part information into a first machine learning model to determine the to-be-irradiated position information.
  8. The method of claim 4, wherein the determining the target position information of the beam-limiting device based on the optical image information and the target part information comprises: inputting the optical image information and the target part information into a second machine learning model to determine the target position information of the beam-limiting device.
  9. The method of claim 1, further comprising: issuing prompt information if the target position information exceeds a preset threshold range.
  10. The method of claim 4, wherein the to-be-irradiated position information includes at least two sub-regions to be irradiated, and the target position information of the beam-limiting device includes sub-target position information corresponding to the two sub-regions to be irradiated.
  11. The method of claim 10, wherein the protocol information related to the target object includes at least two sub-target parts; and correspondingly, the sub-regions to be irradiated can be determined based on the sub-target parts in the protocol information.
  12. The method of claim 10, wherein the sub-regions to be irradiated are determined by the preset algorithm; and correspondingly, at least two pieces of sub-target position information of the beam-limiting device are determined based on the sub-regions to be irradiated.
  13. The method of any one of claims 1-3, wherein the parameters related to the medical operation include marking information of a medical image corresponding to the target object; the target part information includes orientation information of a target part; and the determining the parameters related to the medical operation based at least on the optical image information and the target part information comprises: determining the marking information of the medical image based on the orientation information of the target part.
  14. The method of claim 13, further comprising: marking the medical image of the target object based on the orientation information of the target part information.
  15. The method of claim 14, wherein the marking the medical image of the target object based on the orientation information comprises: determining corresponding protocol information based on the orientation information; and marking the medical image of the target object based on the protocol information.
  16. The method of claim 13, wherein the orientation information includes at least one of a left-right orientation, an anterior-posterior orientation, and a superior-inferior orientation of the target part relative to the target object.
  17. The method of claim 1, wherein the optical image information of the target object includes a static image or a video image.
  18. The method of claim 13, wherein, when the orientation information of the target part is obtained by processing the optical image information, the processing is performed by a preset algorithm, the preset algorithm including a machine learning model; and correspondingly, the processing the optical image information to determine the orientation information of the target part of the target object comprises: inputting the optical image information into the machine learning model; and determining the orientation information of the target part based on output data of the machine learning model.
  19. The method of claim 13, wherein the optical image information is obtained by a camera, and the medical image is one of an MRI, XR, PET, SPECT, CT, or ultrasound image, or a fusion image of two or more thereof.
  20. The method of claim 13, further comprising: automatically adjusting a radiation source of a medical imaging device based on the optical image information of the target part, so that the target part is in a ray path of the radiation source.
  21. The method of claim 14, wherein marking the medical image of the target object based on the orientation information includes marking with a color, text, or graphics.
  22. A system for determining parameters related to a medical operation, comprising an optical image information acquisition module, a target part information determination module, and a medical operation parameter determination module; the optical image information acquisition module being configured to acquire optical image information of a target object; the target part information determination module being configured to determine target part information of the target object; and the medical operation parameter determination module being configured to determine the parameters related to the medical operation based at least on the optical image information and the target part information.
  23. The system of claim 22, wherein the target part information determination module is further configured to acquire the target part information of the target object.
  24. The system of claim 22, wherein the target part information determination module is further configured to process the optical image information to determine the target part information of the target object.
  25. The system of any one of claims 22-24, wherein the parameters related to the medical operation include to-be-irradiated position information on the target object and/or target position information of a beam-limiting device; and the medical operation parameter determination module is further configured to: determine, based at least on the optical image information and the target part information, the to-be-irradiated position information on the target object and/or the target position information of the beam-limiting device.
  26. The system of any one of claims 22-24, wherein the parameters related to the medical operation include marking information of a medical image corresponding to the target object; the target part information includes orientation information of a target part; and the medical operation parameter determination module is further configured to: determine the marking information of the medical image based on the orientation information of the target part.
PCT/CN2021/109902 2020-07-30 2021-07-30 一种医学操作的相关参数的确定方法以及系统 WO2022022723A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21849927.5A EP4169450A4 (en) 2020-07-30 2021-07-30 METHOD AND SYSTEM FOR DETERMINING A PARAMETER RELATED TO MEDICAL OPERATION
US18/157,796 US20230148986A1 (en) 2020-07-30 2023-01-20 Methods and systems for determining parameters related to medical operations

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202010751784.7 2020-07-30
CN202010751784.7A CN111870268A (zh) 2020-07-30 2020-07-30 一种限束装置的目标位置信息的确定方法和系统
CN202010786489.5A CN114067994A (zh) 2020-08-07 2020-08-07 一种目标部位的方位标记方法及系统
CN202010786489.5 2020-08-07

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/157,796 Continuation US20230148986A1 (en) 2020-07-30 2023-01-20 Methods and systems for determining parameters related to medical operations

Publications (1)

Publication Number Publication Date
WO2022022723A1 true WO2022022723A1 (zh) 2022-02-03

Family

ID=80037606

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/109902 WO2022022723A1 (zh) 2020-07-30 2021-07-30 一种医学操作的相关参数的确定方法以及系统

Country Status (3)

Country Link
US (1) US20230148986A1 (zh)
EP (1) EP4169450A4 (zh)
WO (1) WO2022022723A1 (zh)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102961154A (zh) * 2011-08-31 2013-03-13 Ge医疗系统环球技术有限公司 调节x射线系统的曝光视场的方法及装置和x射线系统
CN109730704A (zh) * 2018-12-29 2019-05-10 上海联影智能医疗科技有限公司 一种控制医用诊疗设备曝光的方法及系统
US20190261931A1 (en) * 2018-02-27 2019-08-29 Steven Aaron Ross Video patient tracking for medical imaging guidance
US20200138395A1 (en) * 2018-10-24 2020-05-07 Canon Medical Systems Corporation Medical image diagnostic device, medical image diagnostic method, and storage medium
CN111870268A (zh) * 2020-07-30 2020-11-03 上海联影医疗科技有限公司 一种限束装置的目标位置信息的确定方法和系统
CN112053346A (zh) * 2020-09-02 2020-12-08 上海联影医疗科技股份有限公司 一种操作引导信息的确定方法和系统
CN112716509A (zh) * 2020-12-24 2021-04-30 上海联影医疗科技股份有限公司 一种医学设备的运动控制方法及系统

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170112460A1 (en) * 2014-06-30 2017-04-27 Agfa Healthcare Nv Method and system for configuring an x-ray imaging system
EP3387997B1 (en) * 2017-04-13 2020-02-26 Siemens Healthcare GmbH Medical imaging device and method controlling one or more parameters of a medical imaging device

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102961154A (zh) * 2011-08-31 2013-03-13 Ge医疗系统环球技术有限公司 调节x射线系统的曝光视场的方法及装置和x射线系统
US20190261931A1 (en) * 2018-02-27 2019-08-29 Steven Aaron Ross Video patient tracking for medical imaging guidance
US20200138395A1 (en) * 2018-10-24 2020-05-07 Canon Medical Systems Corporation Medical image diagnostic device, medical image diagnostic method, and storage medium
CN109730704A (zh) * 2018-12-29 2019-05-10 上海联影智能医疗科技有限公司 一种控制医用诊疗设备曝光的方法及系统
CN111870268A (zh) * 2020-07-30 2020-11-03 上海联影医疗科技有限公司 一种限束装置的目标位置信息的确定方法和系统
CN112053346A (zh) * 2020-09-02 2020-12-08 上海联影医疗科技股份有限公司 一种操作引导信息的确定方法和系统
CN112716509A (zh) * 2020-12-24 2021-04-30 上海联影医疗科技股份有限公司 一种医学设备的运动控制方法及系统

Also Published As

Publication number Publication date
US20230148986A1 (en) 2023-05-18
EP4169450A1 (en) 2023-04-26
EP4169450A4 (en) 2023-08-23

Similar Documents

Publication Publication Date Title
JP7099459B2 (ja) 放射線治療用追跡装置、位置検出装置および動体追跡方法
JP7098485B2 (ja) 撮像で使用する仮想位置合わせ画像
JP4484462B2 (ja) 医療用診断または治療装置における患者の位置決め方法および装置
US9782134B2 (en) Lesion imaging optimization using a tomosynthesis/biopsy system
WO2018097935A1 (en) Methods and systems for patient scan setup
JP5693388B2 (ja) 画像照合装置、患者位置決め装置及び画像照合方法
JP2015083068A (ja) 放射線治療装置およびシステムおよび方法
CN104602608A (zh) 基于光学3d场景检测与解释的患者特异性且自动的x射线系统调节
KR102579039B1 (ko) 의료용 화상 처리 장치, 치료 시스템, 및 의료용 화상 처리 프로그램
CN111870268A (zh) 一种限束装置的目标位置信息的确定方法和系统
CN111712198A (zh) 用于移动x射线成像的系统和方法
CN113647967A (zh) 一种医学扫描设备的控制方法、装置及系统
CN111528879A (zh) 一种医学图像采集的方法和系统
CN113397578A (zh) 一种成像系统和方法
US20220054862A1 (en) Medical image processing device, storage medium, medical device, and treatment system
JP2014212820A (ja) 放射線治療システム
EP3892200A1 (en) Methods and systems for user and/or patient experience improvement in mammography
WO2021049478A1 (ja) 検像装置、コンソール及び放射線撮影システム
JP7242640B2 (ja) 放射線量を決定するための方法、システム、および装置
WO2022022723A1 (zh) 一种医学操作的相关参数的确定方法以及系统
US20220353409A1 (en) Imaging systems and methods
JP2018153277A (ja) X線透視装置
CN112716509B (zh) 一种医学设备的运动控制方法及系统
JP6824641B2 (ja) X線ct装置
CN114067994A (zh) 一种目标部位的方位标记方法及系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21849927

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021849927

Country of ref document: EP

Effective date: 20230120

NENP Non-entry into the national phase

Ref country code: DE