WO2023216947A1 - Medical image processing system and method for interventional surgery - Google Patents

Medical image processing system and method for interventional surgery

Info

Publication number
WO2023216947A1
Authority
WO
WIPO (PCT)
Prior art keywords
medical image
image
interventional
structure set
surgery
Prior art date
Application number
PCT/CN2023/091895
Other languages
English (en)
French (fr)
Inventor
何少文
廖明哲
张璟
方伟
Original Assignee
武汉联影智融医疗科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN202210493274.3A (external priority publication CN117045318A)
Priority claimed from CN202210764281.2A (external priority publication CN116912098A)
Application filed by 武汉联影智融医疗科技有限公司
Publication of WO2023216947A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/10Computer-aided planning, simulation or modelling of surgical operations
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B17/00Surgical instruments, devices or methods, e.g. tourniquets
    • A61B17/34Trocars; Puncturing needles

Definitions

  • This specification relates to the field of image processing technology, and in particular to a medical image processing method, system, device and computer storage medium for interventional surgery.
  • CT: computed tomography
  • With master-slave control, the doctor operates a robot to perform the puncture, which greatly improves puncture efficiency and accuracy and reduces the patient's radiation dose. How to assist doctors in better controlling robots to perform guided percutaneous interventional surgery is an issue that urgently needs to be solved.
  • this specification provides a medical image processing system and method for interventional surgery to improve the efficiency of guiding percutaneous interventional surgery.
  • One embodiment of this specification provides a medical image processing system for interventional surgery.
  • the system includes a control system, which includes one or more processors and a memory; the memory stores operation instructions that, when executed, cause the one or more processors to perform the following steps: acquiring a first medical image of the target object before the interventional surgery and a second medical image of the target object during the interventional surgery; registering the second medical image with the first medical image to obtain a registration result; determining interventional surgery planning information of the target object at least based on the registration result; and performing an interventional surgery risk assessment based on the interventional surgery planning information to obtain a risk assessment result corresponding to the interventional surgery planning information.
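The acquire-register-plan-assess loop above can be sketched in simplified form. The centroid-based translation "registration" and the path-length risk proxy below are illustrative assumptions only; the patent does not specify these algorithms.

```python
import numpy as np

def register(first_image, second_image):
    """Toy registration: estimate the translation aligning two images
    by comparing their intensity centroids."""
    def centroid(img):
        idx = np.indices(img.shape).reshape(img.ndim, -1)
        w = img.ravel().astype(float)
        return (idx * w).sum(axis=1) / w.sum()
    return centroid(first_image) - centroid(second_image)

def plan_and_assess(registration_result, entry_point, target_point):
    """Shift a pre-operative plan into intra-operative space and return
    a crude risk proxy (here simply the puncture path length)."""
    shift = np.asarray(registration_result, dtype=float)
    plan = {
        "entry": np.asarray(entry_point, dtype=float) + shift,
        "target": np.asarray(target_point, dtype=float) + shift,
    }
    risk = float(np.linalg.norm(plan["target"] - plan["entry"]))
    return plan, risk
```

In practice the registration would be deformable (non-rigid) and the risk assessment would weigh the organs crossed by the path, as the later embodiments describe.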
  • One embodiment of this specification provides a medical image processing method for interventional surgery.
  • the method includes: acquiring a first medical image of a target object before the interventional surgery and a second medical image of the target object during the interventional surgery; registering the second medical image with the first medical image to obtain a registration result; determining interventional surgery planning information of the target object at least based on the registration result; and performing an interventional surgery risk assessment based on the interventional surgery planning information to obtain a risk assessment result corresponding to the interventional surgery planning information.
  • One embodiment of the present specification provides a medical image processing method for interventional surgery.
  • the method includes: acquiring a mode for planning an interventional path; acquiring a pre-operative enhanced image; segmenting a first target structure set of the pre-operative enhanced image to obtain a first medical image of the first target structure set; acquiring an intra-operative scanned image; and segmenting a second target structure set of the intra-operative scanned image to obtain a second medical image of the second target structure set.
  • the first target structure set intersects with the second target structure set; the first medical image and the second medical image are registered to determine the spatial position of a third target structure set during the operation,
  • where the elements of the third target structure set are selected based on the mode of planning the interventional path; the interventional path is planned based on the spatial position of the third target structure set during the operation, and a risk assessment is performed based on the interventional path; at least one element in the third target structure set is included in the first target structure set, and at least one element in the third target structure set is not included in the second target structure set.
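The set relations stated above can be checked mechanically. The structure names below are hypothetical examples (the patent only states the set relations, e.g. intra-organ vessels visible in the contrast-enhanced pre-operative image but not in the intra-operative plain scan):

```python
def validate_structure_sets(first, second, third):
    """Check the constraints on the three target structure sets:
    first and second intersect; the third contains at least one element
    of the first, and at least one element absent from the second."""
    first, second, third = set(first), set(second), set(third)
    return bool(first & second) and bool(third & first) and bool(third - second)

first = {"liver", "intra_organ_vessels", "lesion"}   # pre-op enhanced segmentation
second = {"liver", "kidneys"}                        # intra-op plain-scan segmentation
third = {"intra_organ_vessels", "lesion"}            # structures needed for planning
print(validate_structure_sets(first, second, third))  # True
```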
  • One embodiment of this specification provides an interventional surgery guidance system.
  • the system includes a control system, which includes one or more processors and a memory; the memory stores operation instructions that cause the one or more processors to perform the following steps: collecting a first medical image, a second medical image and a third medical image of the target object at different times; registering the first medical image and the second medical image to obtain a fourth medical image, where the fourth medical image includes registered interventional surgery planning information; and mapping the fourth medical image to the third medical image to guide the interventional surgery.
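The mapping step above amounts to composing the 1→2 transform with a 2→3 transform so that planning annotations land in real-time image space. Pure translations are assumed below purely for illustration; real systems would compose deformation fields.

```python
import numpy as np

def map_plan_to_realtime(points, shift_1_to_2, shift_2_to_3):
    """Carry planning coordinates from image-1 space into image-3 space
    by composing the two (toy, translational) transforms."""
    pts = np.asarray(points, dtype=float)
    return pts + np.asarray(shift_1_to_2, dtype=float) + np.asarray(shift_2_to_3, dtype=float)

needle_path = [(0.0, 0.0, 0.0), (10.0, 5.0, 2.0)]   # planned entry and target, mm
guided = map_plan_to_realtime(needle_path, (1, 0, 0), (0, 2, 0))
# the two points are shifted to (1, 2, 0) and (11, 7, 2)
```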
  • One embodiment of this specification provides a medical image processing device for interventional surgery, including a processor, and the processor is configured to execute the medical image processing method for interventional surgery described in any embodiment.
  • One embodiment of the present specification provides a computer-readable storage medium that stores computer instructions. After a computer reads the computer instructions in the storage medium, the computer executes the medical image processing method for interventional surgery described in any embodiment.
  • Figure 1 is an exemplary schematic diagram of an application scenario of a medical image processing system for interventional surgery according to some embodiments of this specification;
  • Figure 2 is an exemplary flow chart of a medical image processing method for interventional surgery according to some embodiments of this specification
  • Figure 3 is an exemplary flow chart of guided interventional procedures according to some embodiments of the present specification.
  • Figure 4 is an exemplary flow chart of a medical image processing method for interventional surgery according to some embodiments of this specification
  • Figure 5 is an exemplary flow chart of the segmentation process involved in the interventional surgical medical image processing method provided according to some embodiments of this specification;
  • Figure 6 is an exemplary flowchart of a process of determining positioning information of an element mask according to some embodiments of this specification
  • Figure 7 is an exemplary flow chart of a soft connected domain analysis process according to the element mask shown in some embodiments of this specification.
  • Figure 8 is a comparison diagram of exemplary effects of coarse segmentation using soft connected domain analysis on element masks according to some embodiments of this specification.
  • Figure 9 is an exemplary flowchart of a process of accurately segmenting elements according to some embodiments of this specification.
  • Figure 10 is an exemplary schematic diagram of positioning information determination of element masks according to some embodiments of this specification.
  • Figure 11 is an exemplary schematic diagram of positioning information determination of element masks according to some embodiments of this specification.
  • Figure 12A is an exemplary schematic diagram of determining the sliding direction based on the positioning information of the element mask according to some embodiments of this specification;
  • Figures 12B-12E are exemplary schematic diagrams of accurate segmentation after sliding windows according to some embodiments of this specification.
  • Figure 13 is an exemplary effect comparison diagram of segmentation results according to some embodiments of this specification.
  • Figure 14 is an exemplary flowchart of a registration process for a first medical image and a second medical image shown in some embodiments of this specification;
  • Figure 15 is an exemplary flowchart of a process for determining a registration deformation field shown in some embodiments of this specification
  • Figure 16 is an exemplary flowchart of a process for determining a registration deformation field shown in some embodiments of this specification
  • Figure 17 is an exemplary schematic diagram of the first medical image and the second medical image obtained through segmentation in some embodiments of this specification;
  • Figure 18 is an exemplary flow chart for determining intervention risk values of at least some elements in the third target structure set in the fast planning mode shown in some embodiments of this specification;
  • Figure 19 is an exemplary flow chart for determining intervention risk values of at least some elements in the third target structure set in the precise planning mode shown in some embodiments of this specification;
  • Figure 20 is an exemplary flowchart of an image anomaly detection process illustrated in some embodiments of this specification.
  • Figure 21 is an exemplary flow diagram of a postoperative assessment process illustrated in some embodiments of this specification.
  • Figure 22 is an exemplary flow diagram of a postoperative assessment process illustrated in some embodiments of the present specification.
  • Figure 23 is a flow chart of an exemplary interventional surgery guidance method according to some embodiments of the present specification.
  • Figure 24 is a schematic diagram of an exemplary interventional surgery guidance method according to some embodiments of the present specification.
  • Figure 25 is another schematic diagram of an exemplary interventional surgery guidance method according to other embodiments of this specification.
  • Figure 26 is an exemplary module diagram of a medical image processing system for interventional surgery according to some embodiments of this specification.
  • Figure 27 is a schematic diagram of an exemplary puncture procedure guidance user interface according to some embodiments of the present specification.
  • "System" herein is a means of distinguishing between different components, elements, parts, portions or assemblies at different levels; such words may be replaced by other expressions that serve the same purpose.
  • Interventional surgery is a minimally invasive treatment surgery performed using modern high-tech means. Specifically, it can use precision catheters, guide wires, etc. under the guidance of medical scanning equipment or medical imaging equipment.
  • herein, interventional surgery is also referred to as puncture or puncture surgery; these terms can be used interchangeably without causing confusion.
  • Preoperative planning, short for preoperative planning of interventional surgery, is a very important part of assisting interventional surgery. The accuracy of preoperative planning directly affects the accuracy of the interventional path during surgery and thereby the effectiveness of the interventional surgery.
  • the target object, which may also be referred to as the scan object, may include whole or parts of biological objects and/or non-biological objects involved in the scanning process.
  • the target object can be a living or inanimate organic and/or inorganic substance, such as the head, ears, nose, mouth, neck, chest, abdomen, liver, gallbladder, pancreas, spleen, kidneys, spine, etc.
  • With master-slave control, the doctor operates the robot to perform the puncture, which greatly improves puncture efficiency and accuracy and reduces the patient's radiation dose.
  • the range of real-time CT scanning is small, which affects the real-time puncture field of view. If the lesion is large or the needle insertion point is far away from the target point, or the user wants to completely observe the status of the entire target organ during real-time puncture, the scanning range needs to be expanded. If the real-time CT scanning range is expanded, the layer thickness of the CT image will be too large, and the detailed information inside the target organ cannot be identified. Especially when the lesion is small, the detailed information of the lesion may not be displayed in the real-time scan image.
  • related technologies suffer from insufficient preoperative planning accuracy, and the workflows implemented based on inaccurate planning are relatively simple and poor at avoiding risk, resulting in poor surgical outcomes.
  • when related technologies are used for intraoperative navigation, there is no puncture path guidance or target (lesion) display during the operation, and real-time simulation planning is too computationally complex and time-consuming to apply in clinical scenarios.
  • the embodiments of this specification propose some improvement methods to assist doctors in better performing interventional surgeries.
  • FIG. 1 is an exemplary schematic diagram of an application scenario of a medical image processing system for interventional surgery according to some embodiments of this specification.
  • interventional surgery/treatment may include cardiovascular interventional surgery, oncology interventional surgery, obstetrics and gynecology interventional surgery, musculoskeletal interventional surgery or any other feasible interventional surgery, such as neurointerventional surgery, etc.
  • interventional surgery/treatment may also include percutaneous biopsy, coronary angiography, thrombolytic therapy, stent implantation, or any other feasible interventional surgery, such as ablation surgery, etc.
  • the medical image processing system 100 may include a medical scanning device 110 , a network 120 , one or more terminals 130 , a processing device 140 and a storage device 150 . Connections between components in medical image processing system 100 may be variable.
  • medical scanning device 110 may be connected to processing device 140 through network 120.
  • medical scanning device 110 may be directly connected to processing device 140, as indicated by the dashed bidirectional arrow connecting medical scanning device 110 and processing device 140.
  • storage device 150 may be connected to processing device 140 directly or through network 120 .
  • terminal 130 may be connected directly to processing device 140 (as shown by the dashed arrow connecting terminal 130 and processing device 140), or may be connected to processing device 140 through network 120.
  • the medical scanning device 110 may be configured to scan the scanned object using high-energy rays (such as X-rays, gamma rays, etc.) to collect scan data related to the scanned object.
  • the scan data can be used to generate one or more images of the scanned object.
  • medical scanning device 110 may include an ultrasound imaging (US) device, a computed tomography (CT) scanner, a digital radiography (DR) scanner (eg, mobile digital radiography), digital subtraction angiography (DSA) scanner, dynamic space reconstruction (DSR) scanner, X-ray microscope scanner, multi-modal scanner, etc. or a combination thereof.
  • the multi-modality scanner may include a computed tomography-positron emission tomography (CT-PET) scanner, a computed tomography-magnetic resonance imaging (CT-MRI) scanner.
  • Scan objects can be living or non-living.
  • scan objects may include patients, artificial objects (eg, artificial phantoms), and the like.
  • scan objects may include specific parts, organs, and/or tissues of the patient.
  • the medical scanning device 110 may include a gantry 111, a detector 112, a detection area 113, a workbench 114 and a radiation source 115.
  • The gantry 111 may support the detector 112 and the radiation source 115.
  • Scan objects may be placed on the workbench 114 for scanning.
  • Radiation source 115 may emit radiation toward the scanned object.
  • Detector 112 may detect radiation (eg, X-rays) emitted from radiation source 115 .
  • detector 112 may include one or more detector units.
  • the detector unit may include a scintillation detector (eg, a cesium iodide detector), a gas detector, or the like.
  • the detector unit may include a single row of detectors and/or multiple rows of detectors.
  • Network 120 may include any suitable network capable of facilitating the exchange of information and/or data for medical image processing system 100 .
  • one or more components of the medical image processing system 100 (e.g., the medical scanning device 110, the terminal 130, the processing device 140, the storage device 150) may exchange information and/or data with each other via the network 120.
  • processing device 140 may obtain imaging data from medical scanning device 110 via network 120 .
  • processing device 140 may obtain user instructions from terminal 130 via network 120.
  • Network 120 may be and/or include a public network (e.g., the Internet), a private network (e.g., a local area network (LAN), a wide area network (WAN), etc.), a wired network (e.g., an Ethernet network), a wireless network (e.g., an 802.11 network, a Wi-Fi network, etc.), a cellular network (e.g., a Long Term Evolution (LTE) network), a frame relay network, a virtual private network (VPN), a satellite network, a telephone network, routers, hubs, switches, server computers, and/or any combination thereof.
  • network 120 may include one or more network access points.
  • network 120 may include wired and/or wireless network access points, such as base stations and/or Internet exchange points, through which one or more components of the medical image processing system 100 may be connected to network 120 to exchange data and/or information.
  • the terminal 130 may include a mobile device 131, a tablet computer 132, a laptop computer 133, etc., or any combination thereof.
  • mobile device 131 may include a smart home device, a wearable device, a mobile device, a virtual reality device, an augmented reality device, the like, or any combination thereof.
  • terminal 130 may be part of processing device 140.
  • the processing device 140 may process data and/or information obtained from the medical scanning device 110, the terminal 130, and/or the storage device 150.
  • the processing device 140 can acquire the data collected by the medical scanning device 110 and use it to perform imaging to generate medical images (such as pre-operative enhanced images and intra-operative scan images), segment the medical images, and generate segmentation result data (such as the first segmented image (first medical image), the second segmented image (second medical image), the intra-operative spatial positions of blood vessels and lesions, registration maps, etc.).
  • the processing device 140 may obtain medical images, planning mode data (such as precise planning mode data, fast planning mode data) and/or scanning protocols from the terminal 130 .
  • the processing device 140 can obtain data (such as segmentation and registration results, interventional risk values, preset weights, weighted risk values, cumulative risk values, image abnormality types, image abnormality degrees, etc.) and use these data to generate interventional paths and/or prompt information.
  • processing device 140 may be a single server or a group of servers. Server groups can be centralized or distributed. In some embodiments, processing device 140 may be local or remote. For example, processing device 140 may access information and/or data stored in medical scanning device 110, terminal 130, and/or storage device 150 via network 120. As another example, processing device 140 may be directly connected to medical scanning device 110, terminal 130, and/or storage device 150 to access stored information and/or data. In some embodiments, the processing device 140 may be implemented on a cloud platform.
  • Storage device 150 may store data, instructions, and/or any other information.
  • storage device 150 may store data obtained from medical scanning device 110, terminal 130, and/or processing device 140.
  • the storage device 150 may store medical image data (such as pre-operative enhanced images, intra-operative scan images, first medical images, second medical images, etc.) and/or positioning information data acquired from the medical scanning device 110 .
  • the storage device 150 may store medical images and/or scan protocols input from the terminal 130 .
  • the storage device 150 can store data generated by the processing device 140 (for example, medical imaging data, organ mask data, positioning information data, precise segmentation result data, intra-operative spatial positions of blood vessels and lesions, registration maps, etc.).
  • the storage device 150 may store data generated by the processing device 140 (for example, segmentation and registration results, interventional risk values, preset weights, weighted risk values, cumulative risk values, image abnormality types, image abnormality degrees, interventional paths, and/or prompt information, etc.).
  • storage device 150 may store data and/or instructions that processing device 140 may perform or be used to perform the example methods described in this specification.
  • the storage device 150 includes a mass storage device, a removable storage device, a volatile read-write memory, a read-only memory (ROM), etc., or any combination thereof.
  • Exemplary mass storage devices may include magnetic disks, optical disks, solid state drives, and the like.
  • the storage device 150 may be implemented on a cloud platform.
  • storage device 150 may be connected to network 120 to communicate with one or more other components in medical image processing system 100 (eg, processing device 140, terminal 130). One or more components in medical image processing system 100 may access data or instructions stored in storage device 150 via network 120 . In some embodiments, storage device 150 may be directly connected to or in communication with one or more other components in medical image processing system 100 (eg, processing device 140, terminal 130). In some embodiments, storage device 150 may be part of processing device 140.
  • the description of the medical image processing system 100 is intended to be illustrative and not to limit the scope of this description. Many alternatives, modifications and variations will be apparent to those of ordinary skill in the art. It can be understood that for those skilled in the art, after understanding the principle of the system, it is possible to arbitrarily combine various modules or form a subsystem to connect with other modules without departing from this principle.
  • Figure 2 is an exemplary flow chart of a medical image processing method for interventional surgery according to some embodiments of this specification.
  • process 200 may be implemented by the medical image processing system 100 for interventional surgery or by a processing device (e.g., the processing device 140).
  • the medical image processing method for interventional surgery can be implemented by a control system included in the medical image processing system.
  • the control system includes one or more processors and a memory, and the memory stores operation instructions that cause the one or more processors to execute process 200.
  • Step 210: Obtain the first medical image of the target object before the interventional surgery and the second medical image of the target object during the interventional surgery.
  • the first medical image refers to a medical image of the target object obtained before performing the interventional surgery.
  • the second medical image refers to the medical image of the target object obtained during the interventional surgery.
  • the first medical image and/or the second medical image may include CT images, PET-CT images, US images or MR images, etc.
  • the first medical image and the second medical image can be collected when the target object is at the same respiratory amplitude point, or at respiratory amplitude points close enough that puncture accuracy is not affected.
  • the first medical image may be collected when the target subject is at a first respiratory amplitude point before the interventional surgery
  • the second medical image may be collected when the target subject is at a second respiratory amplitude point during the interventional surgery and before puncture is performed.
  • the deviation between the second respiratory amplitude point and the first respiratory amplitude point is less than a preset value.
  • "during the interventional surgery and before puncture" may refer to the time period during surgical preparation when the puncture needle has not yet started puncturing; this period is a critical time point determining whether the puncture needle enters the body of the target object.
  • Breathing amplitude is a physical quantity that reflects the changes in air volume during breathing.
  • the respiratory amplitude point refers to a time point in a certain respiratory amplitude, for example, the end of inspiration, the end of expiration, an intermediate state of inspiration, an intermediate state of expiration, etc.
  • the respiratory amplitude point of the acquired image may be determined based on needs, experience, and/or user habits. For example, when performing a lung puncture, the inspiratory state exerts less pressure on the lesion, and images can be collected at the end of inhalation.
  • before the interventional surgery, before the puncture is performed during the surgery, and during the puncture, the target subject can adjust itself (or adjust under the guidance of a technician) to a certain respiratory amplitude point (for example, the end of inspiration),
  • the first medical image and the second medical image can be collected and obtained respectively at the respiratory amplitude point by a medical scanning device (for example, the medical scanning device 110).
  • the processing device may use a respiratory gating device to collect the first medical image and the second medical image when the target object is at the same or approximately the same respiratory amplitude point.
  • for example, when collecting the first medical image, the respiratory gating device can record the breathing amplitude point A of the target subject; during the interventional surgery and before puncture is performed, the respiratory gating device can monitor the breathing of the target subject and cause the medical scanning device to collect the second medical image when the target object is at breathing amplitude point A'.
  • the breathing amplitude of the target subject is monitored through the respiratory gating device, and when the target subject adjusts breathing to breathing amplitude point A'', the medical scanning equipment can be used to collect the third medical image (for a description of the third medical image, see the description of Figure 3 below).
  • the breathing amplitude of the target object is monitored; when the breathing amplitude deviates greatly from A, a prompt can also be issued to the user to assist the user in adjusting breathing.
  • Collecting the first medical image, the second medical image, and the third medical image at the same or approximately the same respiratory amplitude point can reduce the movement of organs and tissues between images caused by respiratory motion, which is beneficial to improving the accuracy of preoperative planning.
  • the preset value can be set according to needs and/or experience, for example, the preset value is set to 1%, 5%, 7%, 10%, etc.
  • the first medical image is collected at the first respiratory amplitude point A
  • the second medical image is collected at the second respiratory amplitude point A' whose deviation from the first respiratory amplitude point A is less than the preset value
  • the third medical image is collected at a third respiratory amplitude point A'' whose deviation from the first respiratory amplitude point A and/or the second respiratory amplitude point A' is less than the preset value.
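The amplitude-matching rule above reduces to a simple relative-deviation check. The function below is an illustrative sketch; only the example threshold values (1%, 5%, etc.) come from the text.

```python
def within_amplitude_window(reference, current, preset=0.05):
    """True if the relative deviation of the current breathing amplitude
    from the reference amplitude point is below the preset value."""
    return abs(current - reference) / abs(reference) < preset

A = 0.80  # reference amplitude recorded at first-image acquisition (hypothetical units)
print(within_amplitude_window(A, 0.82))  # True  (~2.5% deviation)
print(within_amplitude_window(A, 0.90))  # False (~12.5% deviation)
```

A respiratory gating device would evaluate such a predicate continuously and trigger the scan (or prompt the patient) only when it holds.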
  • the acquired first medical image and the second medical image may also be medical images that have undergone certain processing, for example, segmentation processing.
  • the processing device can acquire the pre-operative enhanced image and segment the first target structure set of the pre-operative enhanced image to obtain the first medical image of the first target structure set; and acquire the intra-operative scanned image and segment the second target structure set of the intra-operative scanned image to obtain the second medical image of the second target structure set.
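As a toy stand-in for the segmentation step just described, a structure can be extracted as a binary mask by intensity thresholding. Real systems would use trained segmentation models; the thresholding approach and the intensity values below are assumptions for illustration only.

```python
import numpy as np

def segment_structure(image, lo, hi):
    """Return a binary mask of pixels whose intensity lies in [lo, hi)."""
    return (image >= lo) & (image < hi)

# Tiny synthetic "image": contrast-enhanced vessel-like intensities around 120-140.
pre_op_enhanced = np.array([[0, 120, 200],
                            [130, 140, 30],
                            [0, 125, 210]])
vessel_mask = segment_structure(pre_op_enhanced, 100, 160)
print(int(vessel_mask.sum()))  # 4 pixels fall in the vessel intensity window
```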
  • the pre-operative enhanced image refers to the image obtained by scanning the target object (such as a patient) with medical scanning equipment (such as the medical scanning device 110) after a contrast agent has been injected before the operation.
  • the pre-operative enhanced images may include CT images, PET-CT images, US images or MR images, etc.
  • the processing device can obtain the pre-operative enhanced image of the target object from the medical scanning device 110, and can also read and obtain the pre-operative enhanced image of the scanned object from the terminal, database and storage device.
  • pre-surgery enhanced images may also be obtained through any other feasible method.
  • the pre-surgery images may be obtained from a cloud server and/or a medical system (such as a hospital's medical system center, etc.) via a network (eg, network 120).
  • The manner of obtaining pre-operative enhanced images is not limited in the embodiments of this specification.
  • Intra-operative scanning images refer to images obtained after the target object is scanned by medical scanning equipment during surgery.
  • the intra-operative scan images may include CT images, PET-CT images, US images or MR images, etc.
  • the intraoperative scan image may be a real-time scan image.
  • the intra-operative scan image may also be called a pre-operative plain scan image or an intra-operative plain scan image; it is a scan image taken during the surgical preparation process, before the operation is performed (that is, before the needle is actually inserted).
  • the first target structure set of the pre-operative enhanced image may include blood vessels within the target organ.
  • the first target structure set of the preoperative enhanced image may include target organs and lesions in addition to blood vessels within the target organ.
  • the target organ may include the brain, lungs, liver, spleen, kidneys or any other possible organ tissue, such as the thyroid gland, etc.
  • the first medical image is a medical image of the first target structure set (for example, the target organ, blood vessels within the target organ, and lesions in the pre-operative enhanced image) obtained by segmenting the pre-operative enhanced image.
  • the regions or organs included in the second target structure set of the intra-operative scan image may be determined based on the pattern of planning the interventional path.
  • the interventional path refers to the path along which the instruments used in interventional surgery are introduced into the target object's body.
  • the intervention path modes may include precise planning mode and rapid planning mode.
  • the precise planning mode or the fast planning mode may be a path planning method used for segmenting scanned images during surgery.
  • precision planning mode may include precision segmentation mode.
  • the fast planning mode may include a fast segmentation mode.
  • depending on the planning mode, the regions or organs included in the second target structure set may differ.
  • the second set of target structures may include inaccessible areas.
  • the second target structure set can include all important organs in the scanned images during surgery. Vital organs refer to the organs that need to be avoided during interventional surgery, such as the liver, kidneys, blood vessels outside the target organ, etc.
  • in addition to the non-interventionable areas/all important organs in the intra-operative scan image, the second target structure set may also include target organs and lesions.
  • the second medical image is a medical image of the second target structure set during the operation (for example, inaccessible areas/vital organs, target organs, lesions) obtained by segmenting the scanned images during the operation.
  • the first target structure set and the second target structure set intersect.
  • For example, the first target structure set may include blood vessels within the target organ and the target organ itself, and the second target structure set may include non-interventionable areas (or all important organs), the target organ, and lesions; the intersection of the two structure sets is then the target organ.
  • As another example, the first target structure set may include blood vessels within the target organ, the target organ, and lesions, and the second target structure set may include non-interventionable areas (or all important organs), the target organ, and lesions; the intersection of the first target structure set and the second target structure set is then the target organ and the lesions.
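The intersection relationship described above can be sketched with ordinary Python sets. The element names below are hypothetical placeholders, not terms mandated by the embodiments:

```python
# Hypothetical element names; the actual sets depend on the planning mode.
first_target_set = {"target_organ", "intra_organ_vessels", "lesion"}
second_target_set = {"non_interventionable_area", "target_organ", "lesion"}

shared = first_target_set & second_target_set   # intersection of the two sets
print(sorted(shared))  # ['lesion', 'target_organ']
```

Here the intersection is the target organ and the lesion, matching the second example above.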
  • Step 220 Register the second medical image and the first medical image to obtain a registration result.
  • Registration refers to the process of matching and superimposing different images acquired at different times or under different conditions.
  • registration may be an image processing operation that makes the corresponding points of the first medical image and the second medical image consistent with their spatial and anatomical positions through spatial transformation.
  • the registration result refers to an image obtained by registering the second medical image and the first medical image.
  • the registration result may also be called a fourth medical image.
  • the registration results may include the spatial location of the third set of target structures during surgery.
  • the third target structure set is a complete set of structures obtained after registering the first medical image and the second medical image.
  • elements in the third target structure set may include elements in the first target structure set and the second target structure set.
  • the third target structure set may include the target organ, blood vessels within the target organ, lesions, and other areas/organs (e.g., non-interventionable areas, all vital organs).
  • in the fast segmentation mode, other regions/organs may refer to non-interventionable areas; in the precise segmentation mode, other regions/organs may refer to all important organs.
  • At least one element in the third target structure set is included in the first target structure set, and at least one element in the third target structure set is not included in the second target structure set.
  • For example, if the first target structure set includes blood vessels within the target organ, the target organ, and lesions, and the second target structure set includes non-interventionable areas (or all important organs), the target organ, and lesions, then the blood vessels within the target organ are included in the first target structure set but not in the second target structure set.
  • the elements of the third target structure set may be determined based on the mode of planning the interventional path. Modes for planning the interventional path may include a precise planning mode and a rapid planning mode. For a description of the modes for planning the interventional path, refer to the description of step 410 in Figure 4 below.
  • the processing device can obtain the registration deformation field by matching the second medical image with the first medical image, and the registration deformation field can be used to reflect the spatial position changes of the first medical image and the second medical image;
  • the registration result can be obtained by superimposing the second medical image and the first medical image based on the registration deformation field.
  • the processing device may use a non-rigid body registration algorithm based on features, grayscale, etc., such as a non-rigid body registration algorithm based on Demons, to register the first medical image and the second medical image.
  • the processing device 140 can also use a non-rigid body registration algorithm based on deep learning to register the first medical image and the second medical image to improve the real-time performance of the registration.
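As a rough illustration of the Demons-style force on which such non-rigid registration algorithms are built, the following is a minimal single-iteration 2-D numpy sketch. It is not the embodiment's actual algorithm: a practical implementation would iterate, smooth the displacement field, and operate on 3-D volumes.

```python
import numpy as np

def demons_step(fixed, moving):
    """One Thirion-Demons update: a displacement field pushing `moving`
    toward `fixed`: u = (m - f) * grad(f) / (|grad f|^2 + (m - f)^2)."""
    diff = moving - fixed
    gy, gx = np.gradient(fixed)            # image gradients of the fixed image
    denom = gx ** 2 + gy ** 2 + diff ** 2
    denom[denom == 0] = 1.0                # avoid division by zero in flat regions
    return diff * gx / denom, diff * gy / denom  # (ux, uy) per-pixel displacements

# Toy example: a bright square shifted by one pixel between the two images.
fixed = np.zeros((8, 8)); fixed[2:5, 2:5] = 1.0
moving = np.zeros((8, 8)); moving[3:6, 3:6] = 1.0
ux, uy = demons_step(fixed, moving)
print(ux.shape, uy.shape)
```

The returned fields correspond conceptually to the per-direction deformation matrices discussed later in this section.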
  • the operation process of registering the first medical image and the second medical image may be as shown in the following embodiment.
  • the processing device can obtain an interventional surgery planning information image based on the first medical image.
  • the interventional surgery planning information image refers to an image containing interventional surgery planning information.
  • the interventional surgery planning information may include at least one of high-risk tissue information, planned puncture path, and lesion location information.
  • High-risk tissues refer to organs and/or tissues whose puncture may adversely affect the target object and/or the surgical process, such as large blood vessels, bones, etc.
  • different high-risk tissues can be set according to the individual condition of each target subject; for example, the liver of a patient with poor liver function may be treated as a dangerous area, as may other lesions in the target subject's body.
  • the planned puncture path refers to the planned route of the puncture instrument.
  • the planned path information of the puncture may include the needle entry point, target point, puncture angle, puncture depth, path length, tissues and/or organs passed by the path, etc.
  • the location information of the lesion may include the coordinates, depth, volume, edge, etc. of the lesion (or the center of the lesion) in the human body coordinate system.
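The geometric part of such planned path information (path length, puncture depth, puncture angle) can be derived from the needle entry point and the target point. The coordinates below are hypothetical values in millimetres:

```python
import numpy as np

# Hypothetical entry point and target (lesion) coordinates in mm.
entry = np.array([10.0, 25.0, 40.0])
target = np.array([34.0, 25.0, 72.0])

vec = target - entry
path_length = np.linalg.norm(vec)   # puncture depth along a straight path
# Angle between the needle direction and the z axis, in degrees.
angle_to_z = np.degrees(np.arccos(vec[2] / path_length))
print(round(path_length, 1), round(angle_to_z, 1))  # 40.0 36.9
```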
  • the processing equipment or relevant personnel may perform segmentation and other processing on the first medical image to obtain interventional surgery planning information.
  • various tissues or organs can be segmented, such as blood vessels, skin, bones, organs, tissues to be punctured, etc.
  • each segmented tissue or organ can be classified into lesion areas, penetrable areas, high-risk tissues, etc.
  • the planned puncture path can be determined based on the lesion area, penetrable area, high-risk tissue, etc.
  • the processing equipment or relevant personnel can mark the interventional surgery planning information on the first medical image to obtain the interventional surgery planning information image.
  • the processing device can register the first medical image and the second medical image for the first time to obtain the first deformation information.
  • the first deformation information refers to the morphological change information of an image element (e.g., a pixel or a voxel) in the second medical image relative to the corresponding image element in the first medical image, for example, geometric change information, projection change information, etc.
  • the first deformation information can be represented by a first deformation matrix.
  • the first deformation matrix can include a deformation matrix in the x direction, a deformation matrix in the y direction, and a deformation matrix in the z direction. Each element of a deformation matrix corresponds to a unit area of the second medical image (for example, 1 pixel, a 1 mm × 1 mm image area, 1 voxel, a 1 mm × 1 mm × 1 mm image region, etc.), and the value of the element is the deformation information of that unit area in the x-axis, y-axis, or z-axis direction.
  • the processing device may perform the first registration of the first medical image and the second medical image through a Demons-based non-rigid body registration algorithm, a geometric correction method, a deep-learning-based non-rigid body registration algorithm, etc., to obtain the first deformation information.
  • the processing device can apply the first deformation information to the interventional surgery planning information image to obtain a registration result.
  • the interventional surgery planning information in the registration result is the interventional surgery planning information after the first registration.
  • the processing device can apply the first deformation matrix to the interventional surgery planning information image, so that the image and the interventional surgery planning information it contains (high-risk tissue information, planned puncture path, lesion location information, etc.) undergo a morphological change corresponding to the first deformation information, to obtain the registration result.
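Applying a deformation matrix to the planning information image amounts to warping the image with a per-pixel displacement field. A minimal 2-D nearest-neighbour sketch follows; the mask and field are toy values, and the actual embodiment would interpolate in 3-D:

```python
import numpy as np

def warp_nearest(image, dx, dy):
    """Backward-warp `image` with per-pixel displacements:
    out[y, x] = image[y + dy[y, x], x + dx[y, x]] (nearest neighbour)."""
    h, w = image.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_y = np.clip(np.rint(yy + dy).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(xx + dx).astype(int), 0, w - 1)
    return image[src_y, src_x]

# Hypothetical planning-information mask (e.g., a labelled lesion region).
plan = np.zeros((6, 6))
plan[1:3, 1:3] = 1.0
# Constant deformation matrices in the x and y directions (one pixel each).
dx = np.ones((6, 6))
dy = np.ones((6, 6))
warped = warp_nearest(plan, dx, dy)
print(int(warped[0, 0]), int(warped[2, 2]))  # 1 0
```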
  • the interventional surgery planning information in the registration result is the interventional surgery planning information after the first registration.
  • the first medical image is collected before interventional surgery, and the time for acquisition and image processing is relatively sufficient.
  • the scanning range of the first medical image is relatively large and the slices are thick; for example, it includes a large number of slices covering all relevant tissues and/or organs. Planning the puncture path on the first medical image, which carries more comprehensive information, helps improve the accuracy of subsequent interventional surgical guidance.
  • the second medical image is collected during the interventional surgery and before the puncture is performed. The time for acquisition and image processing is relatively tight.
  • the scanning range of the second medical image is relatively small and the slices are thin; for example, it may only include 4 to 10 slices around the needle tip.
  • the registration result (the fourth medical image) obtained by registering the first medical image and the second medical image may include registered interventional surgery planning information.
  • the computational pressure after the puncture is started can be avoided or reduced, so that the actual interventional operation can be performed immediately, or within a shorter period of time, after the real-time image is acquired, reducing the duration of puncture execution.
  • Step 230 Determine interventional surgery planning information of the target object based at least on the registration result, perform interventional surgery risk assessment based on the interventional surgery planning information, and obtain risk assessment results corresponding to the interventional surgery planning information.
  • interventional surgery planning information may also be referred to as puncture planning information.
  • the registration result may include annotated interventional surgery planning information. Therefore, the processing device can directly determine the interventional surgery planning information of the target object based on the registration result.
  • Risk assessment refers to the process of analyzing and judging the risks that may arise during puncture.
  • the risk assessment conclusion may be a summary of the risk assessment.
  • the spatial positions of the elements in the registration result (for example, target organs, lesions, blood vessels within target organs, non-interventionable areas, all important organs) can comprehensively and accurately reflect the current status of the target object (for example, the patient).
  • Interventional surgery planning information enables surgical instruments (e.g., puncture needles) to effectively avoid blood vessels within the target organ, non-interventionable areas, and/or all important organs and reach the lesion smoothly, reducing surgical risk; performing a risk assessment on the interventional surgery planning information can further reduce the risk of interventional surgery.
  • performing risk assessment based on interventional surgery planning information may include determining interventional risk values for at least some elements in the registration result, and performing risk assessment based on the interventional risk values.
  • the processing device may determine the intervention risk value of at least some elements in the third target structure set, and perform a risk assessment based on the intervention risk value.
  • the intervention risk value can represent the degree of intervention risk of an element.
  • the greater the intervention risk value, the higher the degree of intervention risk, that is, the greater the intervention risk. For example, an element area with an intervention risk value of 8 points has a higher intervention risk than an area with an intervention risk value of 6 points.
  • the selection of elements of the third target structure set may be determined based on the mode of planning the interventional path.
  • the modes of planning the interventional path are different, the elements of the risk assessment used to determine the interventional path in the third target structure set may be different.
  • in the rapid planning mode, the elements of the third target structure set used for the risk assessment of the interventional path may include blood vessels within the target organ and non-interventionable areas.
  • in the precise planning mode, the elements of the third target structure set used for the risk assessment of the interventional path may include blood vessels within the target organ and all important organs.
  • the processing device may determine whether the interventional path in the interventional surgery planning information passes through a preset element in the third target structure set; when the determination result is yes, determine the intervention risk value of a preset risk object in the third target structure set.
  • the preset element in the third target structure set may refer to the target organ.
  • the preset risk object in the third target structure set may refer to blood vessels within the target organ. It can be understood that the preset risk object may be included in the at least some elements of the third target structure set used for risk assessment.
  • When the interventional path passes through the target organ: in the rapid planning mode, the blood vessels within the target organ and the non-interventionable areas pose a certain risk relative to the interventional path, so the intervention risk values of the blood vessels within the target organ and the non-interventionable areas relative to the interventional path need to be calculated; in the precise planning mode, the blood vessels within the target organ and the important external organs/tissues pose a certain risk relative to the interventional path, so the intervention risk values of the blood vessels within the target organ and the important external organs/tissues relative to the interventional path need to be calculated.
  • When the interventional path does not pass through the target organ, the blood vessels within the target organ pose no risk relative to the interventional path, and there is no need to consider their impact on the interventional path (the intervention risk of the blood vessels within the target organ can also be regarded as zero). Therefore, when the interventional path does not pass through the target organ of the third target structure set: in the rapid planning mode, only the intervention risk value of the non-interventionable areas relative to the interventional path needs to be calculated; in the precise planning mode, only the intervention risk values of the important organs/tissues outside the target organ relative to the interventional path need to be calculated. By determining, according to whether the interventional path passes through the target organ and the planning mode in use, which elements' intervention risk values need to be calculated, the risk assessment of the interventional path can be performed more rationally.
  • the method for determining whether the interventional path passes through the target organ may include obtaining the intersection between the target organ mask and the interventional path. If the intersection is not an empty set, the interventional path passes through the target organ; otherwise, it does not pass through the target organ.
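The mask-intersection test can be sketched with boolean numpy masks. The mask and path coordinates below are toy values:

```python
import numpy as np

def path_passes_through(organ_mask, path_voxels):
    """True if any voxel on the interventional path lies inside the organ
    mask, i.e., the intersection of mask and path is non-empty."""
    path_mask = np.zeros_like(organ_mask, dtype=bool)
    for z, y, x in path_voxels:
        path_mask[z, y, x] = True
    return bool(np.logical_and(organ_mask, path_mask).any())

# Hypothetical 3-D target-organ mask and a straight path of voxel coordinates.
organ = np.zeros((4, 4, 4), dtype=bool)
organ[2, 1:3, 1:3] = True
path = [(0, 0, 0), (1, 1, 1), (2, 2, 2)]
print(path_passes_through(organ, path))  # True: voxel (2, 2, 2) is inside the organ
```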
  • the processing device may guide the interventional surgery based on the interventional surgery planning information that satisfies the preset condition.
  • the preset condition may be that the intervention risk value is less than a preset threshold. For example, assuming the preset threshold is 7, an intervention risk value of 8 is considered not to meet the preset condition, while an intervention risk value of 6 is considered to meet it.
  • Guiding interventional surgery based on interventional surgery planning information that satisfies the preset conditions can assist in guiding the movement of surgical instruments within the target object, so that they avoid blood vessels, non-interventionable areas, and/or all vital organs and successfully reach the lesion, achieving treatment for the patient.
  • Figure 3 is an exemplary flowchart of a guided interventional procedure according to some embodiments of the present specification.
  • Step 310 Obtain the third medical image of the target object during the interventional surgery.
  • the third medical image may be a real-time image during interventional surgery.
  • the third medical image is acquired during performance of the interventional procedure.
  • the process of interventional surgery may include the process of inserting a needle from the skin, entering the target area along the interventional path, completing the operation in the target area, and removing the needle.
  • the third medical image may be acquired by a computed tomography (CT) device.
  • the third medical image may be acquired by a different imaging device than the first medical image and the second medical image.
  • the first medical image and the second medical image can be collected by the imaging equipment in the imaging room, and the third medical image can be collected by the imaging equipment in the operating room.
  • image parameters (e.g., image range, precision, contrast, grayscale, gradient, etc.) may differ among the first, second, and third medical images. For example, the scanning range of the first medical image may be larger than that of the second and third medical images, or the accuracy of the second and third medical images may be higher than that of the first medical image.
  • the third medical image may be collected at the same respiratory amplitude point as when the first medical image and the second medical image are collected by the target subject, or at a similar respiratory amplitude point that does not affect puncture accuracy.
  • the first medical image is collected at the first respiratory amplitude point A
  • the second medical image is collected at the second respiratory amplitude point A' whose deviation from the first respiratory amplitude point A is less than a preset value
  • the third medical image is collected at a third respiratory amplitude point A" whose deviation from the first respiratory amplitude point A and/or the second respiratory amplitude point A' is less than a preset value.
  • the target object can adjust itself (or adjust under the guidance of a technician) to a certain respiratory amplitude point (for example, the end of inhalation), and the third medical image can be collected at this respiratory amplitude point by the medical scanning device.
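The respiratory-amplitude gating described above reduces to a simple deviation check. The amplitude units and preset value below are hypothetical:

```python
def amplitude_ok(a_ref, a_new, preset=0.05):
    """Accept an acquisition only if the deviation of the new respiratory
    amplitude point from the reference point is less than the preset value."""
    return abs(a_new - a_ref) < preset

print(amplitude_ok(0.80, 0.83))  # True  (deviation 0.03 < 0.05)
print(amplitude_ok(0.80, 0.90))  # False (deviation 0.10 >= 0.05)
```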
  • Step 320 Map the registration result to the third medical image to guide interventional surgery.
  • the processing device may map the registration result (the fourth medical image) to the third medical image through methods such as homography transformation, affine transformation, or alpha channel transformation, to guide the interventional surgery.
  • the user can use the mapped third medical image as a guide to execute the mapped puncture path, avoid high-risk areas such as blood vessels, and gradually puncture the mapped lesion.
  • the second medical image and the third medical image may be collected when the target subject is at different respiratory amplitude points, and the organs and/or tissues in the images may move.
  • the processing device can perform a second registration of the second medical image and the third medical image.
  • the second deformation information can be obtained by registering the second medical image and the third medical image for a second time.
  • the second deformation information refers to the morphological change information of the image elements in the third medical image relative to the corresponding image elements in the second medical image.
  • the second deformation information can be represented by a second deformation matrix.
  • the second deformation matrix can include a deformation matrix in the x direction, a deformation matrix in the y direction, and a deformation matrix in the z direction. Each element of a deformation matrix corresponds to a unit area of the third medical image (for example, 1 pixel, a 1 mm × 1 mm image area, 1 voxel, a 1 mm × 1 mm × 1 mm image region, etc.), and the value of the element is the deformation information of that unit area in the x-axis, y-axis, or z-axis direction.
  • the processing device can perform the second registration of the second medical image and the third medical image through a Demons-based non-rigid body registration algorithm, a geometric correction method, a deep-learning-based non-rigid body registration algorithm, etc., to obtain the second deformation information.
  • the processing device may apply the second deformation information to the registration result (the fourth medical image) to obtain a fifth medical image.
  • the fifth medical image includes interventional surgery planning information after the second registration.
  • the processing device can apply the second deformation matrix to the registration result, so that the interventional surgery planning information contained in the registration result (high-risk tissue information, planned puncture path, lesion location information, etc.), which has already undergone the first registration, undergoes the morphological changes corresponding to the second deformation information, to obtain the fifth medical image.
  • the processing device may map the fifth medical image to the third medical image, for example, through methods such as homography transformation, affine transformation, and alpha channel transformation.
  • since the second medical image and the third medical image are both small-slice data, the calculation amount of this rapid registration is small, and registration can be completed within a short time after the third medical image is obtained during the operation, thereby reducing surgical risks.
  • the processing device may display image information of the registration result (the fourth medical image) or of the fifth medical image that lies outside the display range of the third medical image, for example, lesions or other tissues and/or organs in the registration result or the fifth medical image.
  • As shown in FIG. 27, lesions outside the display range of the third medical image are displayed at time T1.
  • the processing device may display the planned path information of the corresponding interventional surgery outside the display range of the third medical image. For example, as shown in Figure 27, the lesion is outside the scanning range, and the planned path from point C to the lesion is displayed.
  • the processing device can differentially display image information within and outside the display range of the third medical image. For example, different background colors are set for the display range and outside the display range.
  • For example, the area within the display range is displayed as an RGB image while the area outside the display range is displayed as a grayscale image; lines within the display range (for example, planned paths) are displayed as solid lines, while lines outside the display range are displayed as dotted lines (or solid lines of different colors), etc.
  • the scanning range of the third medical image is smaller, which limits the real-time puncture field of view. The puncture planning information mapped outside the third medical image's scanning range can therefore be supplemented into the real-time image of the interventional surgery process, enlarging the field of view of planning information that users can see during the puncture process and providing more useful information about the interventional surgery process.
  • the lesions displayed outside the scanning range can provide the doctor with a clear target for puncture, making the operation more likely to be successful.
  • the processing device may identify the puncture needle tip position based on the third medical image.
  • the processing device can extract the puncture needle in the third medical image through a semi-automatic threshold segmentation algorithm or a fully automatic deep learning algorithm, and then obtain the needle tip position. For example, as shown in Figure 27, the processing device can identify point B1, the position of the needle tip at time T1.
  • the processing device can move the target object, the bed board carrying the target object, or the detector used to collect the third medical image according to the needle tip position, so that the needle tip position is located in the center area of the display range of the third medical image.
  • the needle tip punctures toward the lesion along the planned path.
  • the processing device 140 can estimate the position of the needle tip at point B2 at time T2, and move the bed according to the change in the needle tip position during the T1–T2 period, so that the scanning range of the third medical image moves downward and the needle tip position is located in the center area of the display range of the third medical image.
  • by updating the scanning range in real time and keeping the needle tip position in the center area of the display range of the third medical image, the information around the needle tip can be highlighted and the needle tip's progress can be tracked more accurately, which helps improve surgical efficiency and reduce surgical risks.
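A threshold-based needle-tip extraction followed by the re-centering shift can be sketched as follows. The volume is a toy example; a real implementation would segment the needle more robustly and convert the voxel offset into a couch/detector motion command:

```python
import numpy as np

def needle_tip_and_shift(volume, threshold, center):
    """Threshold-segment the hyperdense needle in a CT volume, take the
    deepest needle voxel (largest index along axis 0) as the tip, and return
    the offset that would re-center the tip at `center`."""
    needle = np.argwhere(volume > threshold)   # voxels above the HU threshold
    tip = needle[needle[:, 0].argmax()]        # deepest voxel along axis 0
    shift = np.asarray(center) - tip           # move bed/detector by this offset
    return tip, shift

# Toy 3-D scan with a bright needle track advancing along axis 0.
vol = np.zeros((10, 8, 8))
vol[0:6, 4, 4] = 3000.0                        # metal needle: very high intensity
tip, shift = needle_tip_and_shift(vol, threshold=2000.0, center=(8, 4, 4))
print(tip.tolist(), shift.tolist())  # [5, 4, 4] [3, 0, 0]
```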
  • the operating instructions of the control system may also cause one or more processors to perform the following operations:
  • obtain an abnormal image recognition model; obtain real-time medical images of the target object during surgery; use the abnormal image recognition model to determine, based on the real-time medical images, the real-time probability that the target object is at risk, and remind the doctor in real time when the probability reaches a preset threshold.
  • the anomaly image recognition model may be a machine learning model used to identify whether there are anomalies in images. Abnormalities can include bleeding, needle penetration through high-risk tissue, etc.
  • the abnormal image recognition model may include a deep neural network model, a convolutional neural network model, etc.
  • the abnormal image recognition model can be obtained by obtaining historical real-time medical images during the execution of historical interventional surgeries as training samples, and manually annotating whether there are actual abnormal conditions, and using the annotation results as labels to train the initial abnormal image recognition model. Training methods can include various common model training methods, such as gradient descent methods, etc., which are not limited in this specification.
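As a stand-in for the deep models described above, the following trains a tiny logistic-regression "abnormality classifier" on synthetic feature vectors with plain gradient descent, only to illustrate the train-on-annotated-history idea; real embodiments would use neural networks on images, and every feature and label here is fabricated for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: one feature vector per historical intra-operative
# image (e.g., pooled intensity statistics); label 1 = abnormal, 0 = normal.
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)   # synthetic annotation rule

w = np.zeros(4)
b = 0.0
lr = 0.5
for _ in range(300):                              # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))        # predicted risk probability
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * float(np.mean(p - y))

acc = float(np.mean((p > 0.5) == (y > 0.5)))      # training accuracy
print(acc > 0.85)
```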
  • Real-time medical imaging refers to images acquired during the performance of an interventional procedure on a target subject.
  • Real-time medical images can be obtained through real-time scanning of medical scanning equipment, or through other methods, such as reading images during surgery from storage devices or databases.
  • the real-time medical images can be second medical images, third medical images, intra-operative scan images, etc.
  • the processing device can input real-time medical images to the abnormal image recognition model, and the abnormal image recognition model outputs the real-time probability that the target object is at risk. This real-time probability is also the probability that risks may occur if the current operation continues.
  • the preset threshold (real-time probability threshold) can be set in advance, for example, 0.8, 0.6, 0.5, etc.
  • the setting method can be manual setting or other methods, which is not limited in this specification.
  • the operating instructions of the control system may also cause one or more processors to perform the following steps:
  • Characteristic information of the target object refers to data that can reflect the personal characteristics of the target object, such as age, gender, height, weight, body fat rate, blood pressure, whether there are underlying diseases, categories of underlying diseases, etc.
  • the actual progress of the current interventional surgery may include surgery execution time, surgery completion degree (eg, 5%, 10%, 20%, etc.), puncture needle penetration depth, etc.
  • the actual puncture path may be the same as or different from the planned path of the interventional surgery; for example, during the interventional surgery, certain adjustments may be made to the planned path based on the actual situation. In some embodiments, the actual puncture path can be obtained through image recognition processing of the real-time medical images; for example, the actual puncture needle is segmented from the real-time medical images, and the actual puncture path is determined based on the segmentation results.
  • the processing device can input the target object's characteristic information, real-time medical images, the actual progress of the current interventional surgery, and the actual puncture path into the risk prediction model for processing, and the risk prediction model outputs the interventional surgery risk of the target object in the next time period.
  • the next time period can be 1 minute later, 2 minutes later, or 5 minutes later.
  • the next time period may include multiple time points, and the risk prediction model may simultaneously output the probability of possible surgical risks at each time point.
  • the risk prediction model may include a deep neural network model, a convolutional neural network model, etc., or other combined models.
• the risk prediction model can be obtained by collecting the characteristic information of historical target objects, historical real-time medical images, the historical actual progress of interventional surgeries, and historical actual puncture paths as training samples, manually labeling them based on whether an actual abnormal situation occurred, and using the labeling results as labels to train an initial risk prediction model.
  • Training methods can include various common model training methods, such as gradient descent methods, etc., which are not limited in this specification.
  • the operating instructions of the control system may also cause one or more processors to perform the following operations:
• the processing device can obtain actual interventional surgery information for multiple time periods; compare the actual interventional surgery information of each time period with the corresponding interventional surgery planning information to obtain interventional deviation information; and perform clustering based on the interventional deviation information and display the clustering results.
  • the actual interventional surgery information may include the actual interventional surgery path and actual lesion location information during the interventional surgery.
  • the actual lesion location information refers to the lesion location information obtained based on real-time medical images during the interventional surgery.
  • the actual interventional surgery path can be the path that the puncture needle takes from the start of the interventional surgery, when the puncture needle enters the target object's body, to the current moment. As time points pass, the actual interventional surgery path can be obtained in multiple time periods.
• Interventional deviation information refers to the differences between the actual interventional surgery path and the planned interventional surgery path, and between the actual lesion location information and the lesion location information in the interventional surgery planning information.
  • the processing device can obtain interventional deviation information through direct comparison, calculation of variance, and other methods.
  • the interventional deviation information can be represented by images or matrices.
• the actual interventional surgery path and the planned interventional surgery path can be displayed on the same image, with the differences between the two (such as the path deviation distance) marked; the actual lesion location and the lesion location in the interventional surgery planning information can also be displayed on one image at the same time, with the deviations between them marked.
  • Clustering can refer to the aggregation of intervention deviation information for multiple time periods.
  • the intervention deviation information corresponding to the same position in multiple time periods is aggregated.
• the deviation between the actual interventional surgery path and the planned interventional surgery path in the first time period is 0, the deviation in the second time period is 1, the deviation in the third time period is 0.5, etc.
• the deviation information that appears at different time points can be integrated and displayed for the doctor to view. This helps the doctor understand the key locations or key times at which deviations may occur during the interventional surgery, and then adjust subsequent interventional operations in a targeted manner.
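The aggregation of deviation information across time periods might be sketched as follows; the record layout (time period, path position, deviation value) and the max-per-position summary are assumptions chosen for illustration:

```python
from collections import defaultdict

# Hypothetical sketch of clustering interventional deviation information:
# deviations recorded at the same path position across several time
# periods are aggregated so the doctor can see where deviations peak.

def aggregate_deviations(records):
    """records: iterable of (time_period, position_id, deviation)."""
    by_position = defaultdict(list)
    for period, position, deviation in records:
        by_position[position].append((period, deviation))
    # Summarise each position, e.g., by its maximum observed deviation.
    return {pos: max(d for _, d in vals) for pos, vals in by_position.items()}

# Example from the text: deviation 0 in period 1, 1 in period 2, 0.5 in period 3.
summary = aggregate_deviations([(1, "p0", 0.0), (2, "p0", 1.0), (3, "p0", 0.5)])
assert summary["p0"] == 1.0
```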
  • FIG. 4 is an exemplary flowchart of a medical image processing method for interventional surgery according to some embodiments of this specification.
  • Step 410 Obtain the model of the planned intervention path.
  • the interventional path refers to the path along which the instruments used in the interventional surgery are introduced into the human body.
  • the intervention path modes may include precise planning mode and rapid planning mode.
  • the precise planning mode or the fast planning mode may be a path planning mode used for segmenting scanned images during surgery.
  • precision planning mode may include precision segmentation mode.
  • the fast planning mode may include a fast segmentation mode.
  • a pattern of planned intervention paths may be obtained.
  • patterns for planning the interventional path may be obtained from the medical scanning device 110 .
  • the pattern of planning the intervention path may be obtained from the terminal 130, the processing device 140, and the storage device 150.
  • Step 420 Obtain pre-operative enhanced images.
  • the processing device may acquire pre-surgery enhanced images of the scanned object from the medical scanning device 110, such as PET-CT images. In some embodiments, the processing device may acquire pre-operative enhanced images of the scanned object, such as US images, etc., from the terminal 130, the processing device 140 and the storage device 150.
  • Step 430 Segment the first target structure set of the pre-surgery enhanced image to obtain a first medical image of the first target structure set.
  • the first medical image may also be called a first segmented image.
  • Step 440 Obtain scanned images during surgery.
  • the processing device may acquire intra-operative scan images of the scanned object, such as PET-CT images, etc. from the medical scanning device 110 . In some embodiments, the processing device may acquire the intra-operative scan images of the scanned object, such as US images, etc. from the terminal 130, the processing device 140 and the storage device 150.
  • Step 450 Segment the second target structure set of the scanned image during surgery to obtain a second medical image of the second target structure set.
  • the second medical image may also be called a second segmented image.
  • the processing device's segmentation of the target organ of the scanned image during surgery may be implemented in the following manner: segmenting the second set of target structures of the scanned image during surgery according to the planning mode.
• the segmentation can be based on the fast segmentation mode and/or the precise segmentation mode, which are used to segment the fourth target structure set of the scanned images during surgery.
  • the fourth target structure set may be part of the second target structure set, for example, non-interventionable areas, all important organs outside the target organ. Under different planning modes, the fourth target structure set includes different regions/organs. In some embodiments, in fast segmentation mode, the fourth set of target structures may include inaccessible regions. In some embodiments, in the precise segmentation mode, the fourth target structure set may include preset important organs.
  • regional positioning calculations can be performed on scanned images during surgery, and inaccessible areas can be segmented and extracted.
  • non-interventionable area refers to the area that needs to be avoided during interventional surgery.
  • non-interventional areas may include non-penetrable areas, non-introducible or implantable areas, and non-injectable areas.
• post-processing can be performed on the area outside the inaccessible area and the target organ to ensure that there is no cavity area in the intermediate region between the inaccessible area and the target organ.
• the cavity area refers to a background area enclosed by a boundary of connected foreground pixels.
• the non-interventionable area can be obtained by subtracting the target organ and the interventionable area from the abdominal cavity (or chest cavity) area. After this subtraction, there may be a cavity area between the target organ and the non-interventionable area; this cavity area belongs to neither the target organ nor the non-interventionable area.
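A minimal 1-D sketch of the subtraction described above (real masks are 3-D volumes; the toy arrays here are illustrative only):

```python
import numpy as np

# non-interventionable = cavity - target organ - interventionable area,
# expressed as boolean mask arithmetic on toy 1-D "masks".
cavity           = np.array([1, 1, 1, 1, 1, 1], dtype=bool)
target_organ     = np.array([0, 0, 1, 1, 0, 0], dtype=bool)
interventionable = np.array([1, 0, 0, 0, 0, 0], dtype=bool)

non_interventionable = cavity & ~target_organ & ~interventionable
assert non_interventionable.tolist() == [False, True, False, False, True, True]
# A voxel that ends up in none of the three masks would be the "cavity
# area" that the post-processing (erosion/dilation) step is meant to close.
```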
• post-processing may include erosion operations and dilation operations.
• the erosion operation and the dilation operation may be performed based on convolving the intra-operative scan image with a filter.
• the erosion operation can be performed by convolving the filter with the intra-operative scan image and then taking a local minimum within the predetermined erosion range, so that the outline in the intra-operative scan image shrinks to the desired range and the highlighted area of the target in the image is reduced accordingly.
• the dilation operation can be performed by convolving the filter with the intra-operative scan image and then taking a local maximum within the predetermined dilation range, so that the outline in the intra-operative scan image expands to the desired range and the highlighted area of the target in the image is enlarged accordingly.
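The erosion (local minimum) and dilation (local maximum) behaviour described above can be sketched on a 1-D profile; a real pipeline would use 2-D/3-D structuring elements (e.g., scipy.ndimage), so the window-based filters below are a simplification:

```python
import numpy as np

# Minimal sliding-window erosion/dilation on a 1-D binary profile.
def erode(a, size=3):
    pad = size // 2
    p = np.pad(a, pad, mode="edge")
    return np.array([p[i:i + size].min() for i in range(len(a))])

def dilate(a, size=3):
    pad = size // 2
    p = np.pad(a, pad, mode="edge")
    return np.array([p[i:i + size].max() for i in range(len(a))])

mask = np.array([0, 1, 1, 1, 0], dtype=np.uint8)
assert erode(mask).tolist() == [0, 0, 1, 0, 0]   # highlighted region shrinks
assert dilate(mask).tolist() == [1, 1, 1, 1, 1]  # highlighted region grows
```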
• in the fast segmentation mode, regional positioning calculations can be performed on the scanned images during surgery, and the inaccessible areas can then be segmented and extracted.
  • the blood vessel mask inside the target organ can be determined based on the segmentation mask and blood vessel mask of the target organ in the scanned images during surgery. It should be noted that in the fast segmentation mode, only the blood vessels inside the target organ are segmented; in the precise segmentation mode, the blood vessels inside the target organ and other external blood vessels can be segmented.
• a mask (such as an organ mask) can be a pixel-level classification label.
• the mask represents the classification of each pixel in the medical image; for example, pixels can be classified as background, liver, spleen, kidney, etc., and the aggregated region of a specific category is represented by the corresponding label value. For example, all pixels classified as liver are aggregated, and that region is represented by the label value corresponding to the liver.
  • the label value here can be set according to the specific rough segmentation task.
  • the segmentation mask refers to the corresponding mask obtained after the segmentation operation.
  • the masks may include organ masks (eg, organ masks of target organs) and blood vessel masks.
• the fast segmentation mode is described taking only the thoracic cavity or abdominal cavity region as an example.
• the regional positioning of the thoracic cavity or abdominal cavity region within the scanning range of the scanned image during surgery is calculated. Specifically, for the abdominal cavity, the top of the liver is selected.
• the non-interventionable area can be extracted by removing the segmentation mask of the target organ and the penetrable-area mask from the abdominal cavity segmentation mask.
• the interventionable area may include a fat portion, such as a fat-containing gap between two organs. Taking the liver as an example, part of the area between the subcutaneous region and the liver may be covered by fat. Because processing in the fast segmentation mode is fast, planning is quicker, takes less time, and image processing efficiency is improved.
  • all organs in the scanned images during surgery can be segmented.
  • all organs scanned in images during surgery may include basic organs and important organs scanned in images during surgery.
• the basic organs of the intra-operative scan image may include the target organ of the intra-operative scan image.
  • preset important organs of scanned images during surgery can be segmented. The preset important organs can be determined based on the importance of each organ in the scanned images during surgery. For example, all vital organs in scanned images during surgery can be used as preset vital organs.
• the ratio of the preset total volume of important organs in the precise segmentation mode to the preset total volume of important organs in the fast segmentation mode may be greater than a preset efficiency factor m.
  • the preset efficiency factor m can represent the difference in segmentation efficiency (or segmentation detail) of segmentation based on different segmentation modes.
  • the preset efficiency factor m may be equal to or greater than 1.
  • the setting of the efficiency factor m is related to the type of interventional surgery. Interventional surgery types may include but are not limited to urological surgery, thoracoabdominal surgery, cardiovascular surgery, obstetrics and gynecology interventional surgery, musculoskeletal surgery, etc.
  • the preset efficiency factor m in urological surgery can be set larger; the preset efficiency factor m in thoracoabdominal surgery can be set smaller.
  • segmentation masks of all organs of the scanned images during surgery are obtained through segmentation.
  • segmentation masks and blood vessel masks of all organs in the scanned images during surgery are obtained through segmentation.
• the masks of the blood vessels inside all organs are determined based on the segmentation masks and blood vessel masks of all organs in the scanned images during surgery. It can be seen that in the precise segmentation mode, the segmented image content is more detailed, providing more choices for the planned path, and the robustness of image processing is also enhanced.
  • FIG. 5 is an exemplary flowchart of a segmentation process involved in an interventional surgical medical image processing method according to some embodiments of this specification. As shown in Figure 5, process 500 may include the following steps:
  • Step 510 Perform coarse segmentation on at least one element in the target structure set in the medical image.
  • medical images may include pre-operative enhanced images and intra-operative scan images.
  • the target structure set may include any one or more of the first target structure set, the second target structure set, and the fourth target structure set.
  • the processing device may use a threshold segmentation method, a region growing method, or a level set method to perform a coarse segmentation operation on at least one element in the target structure set in the medical image.
• Elements may include the target organ in the medical image, blood vessels within the target organ, lesions, non-interventionable areas, all important organs, etc.
• coarse segmentation based on the threshold segmentation method can be implemented as follows: multiple different pixel threshold ranges can be set, each pixel in the medical image is classified according to its pixel value, and pixels whose values fall within the same threshold range are assigned to the same region.
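The multi-threshold classification just described can be sketched as below; the threshold ranges are illustrative, not taken from this specification:

```python
import numpy as np

# Assign each pixel the label whose threshold range contains its value;
# label 0 is background (pixels matching no range).
def threshold_segment(image, ranges):
    """ranges: list of (low, high) half-open intervals, one per label."""
    labels = np.zeros(image.shape, dtype=np.int32)
    for label, (low, high) in enumerate(ranges, start=1):
        labels[(image >= low) & (image < high)] = label
    return labels

img = np.array([[10, 120], [200, 45]])
seg = threshold_segment(img, [(0, 50), (50, 150), (150, 256)])
assert seg.tolist() == [[1, 2], [3, 1]]
```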
• coarse segmentation based on the region growing method can be implemented as follows: starting from known pixels, or a predetermined region composed of pixels, on the medical image, a similarity discrimination condition is preset as needed; based on this condition, a pixel is compared with its surrounding pixels, or a predetermined region with its surrounding regions, and pixels or regions with high similarity are merged; merging stops when the process can no longer be repeated, completing the coarse segmentation.
• the preset similarity discrimination condition can be determined based on preset image features, such as grayscale, texture, and other image features.
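The region-growing procedure above can be sketched with a grayscale-only similarity condition; the tolerance value and the 4-connected neighbourhood are simplifying assumptions:

```python
import numpy as np
from collections import deque

# Grow a region from a seed pixel, merging 4-connected neighbours whose
# grey value differs from the seed by less than a preset tolerance.
def region_grow(image, seed, tol=10):
    h, w = image.shape
    grown = np.zeros((h, w), dtype=bool)
    ref = image[seed]
    queue = deque([seed])
    grown[seed] = True
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not grown[ny, nx] \
               and abs(int(image[ny, nx]) - int(ref)) < tol:
                grown[ny, nx] = True
                queue.append((ny, nx))
    return grown

img = np.array([[100, 102, 200],
                [101, 103, 201]])
mask = region_grow(img, (0, 0), tol=10)
assert mask.tolist() == [[True, True, False], [True, True, False]]
```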
• coarse segmentation based on the level set method can be implemented as follows: the target contour of the medical image is set as the zero level set of a high-dimensional function; the function is differentiated, and the zero level set is extracted from the output to obtain the contour of the target; the pixel region within the contour is then segmented.
  • the processing device may use a method based on a deep learning convolutional network to perform a coarse segmentation operation on at least one element of the target structure set in the medical image.
  • methods based on deep learning convolutional networks may include segmentation methods based on fully convolutional networks.
  • the convolutional network can adopt a network framework based on a U-shaped structure, such as UNet, etc.
• the network framework of the convolutional network may consist of an encoder, a decoder, and residual (skip) connections, where the encoder and decoder are composed of convolutional layers, or convolutional layers combined with an attention mechanism: the convolutional layers extract features, the attention module applies more attention to key areas, and the residual connections pass features of different dimensions extracted by the encoder to the decoder, which finally outputs the segmentation result.
• a coarse segmentation method based on a deep learning convolutional network can be implemented as follows: the encoder of the convolutional neural network extracts features from the medical image through convolution, and the decoder of the convolutional neural network restores the features into a pixel-level segmentation probability map.
  • the segmentation probability map represents the probability that each pixel in the image belongs to a specific category. Finally, the segmentation probability map is output as a segmentation mask, thereby completing rough segmentation.
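The final step above, turning a per-pixel probability map into a segmentation mask, amounts to taking the most probable class per pixel; the class list and values below are illustrative:

```python
import numpy as np

# Probability map of shape (classes, H, W): background, liver, spleen.
prob = np.array([
    [[0.7, 0.1], [0.2, 0.3]],   # background
    [[0.2, 0.8], [0.1, 0.3]],   # liver
    [[0.1, 0.1], [0.7, 0.4]],   # spleen
])
# The mask assigns each pixel the label of its highest-probability class.
mask = prob.argmax(axis=0)
assert mask.tolist() == [[0, 1], [2, 2]]
```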
  • Step 520 Obtain a mask of at least one element.
  • the mask of an element may refer to information used to occlude elements in the target structure set.
• the result of the coarse segmentation (e.g., the segmentation mask) may be used as the mask of the element.
  • Step 530 Determine the positioning information of the mask.
  • FIG. 6 is an exemplary flowchart of a process of determining positioning information of an element mask according to some embodiments of this specification.
  • Figure 7 is an exemplary flow chart of a soft connected domain analysis process according to the element mask shown in some embodiments of this specification.
  • Figure 8 is a comparison diagram of exemplary effects of coarse segmentation using soft connected domain analysis on element masks according to some embodiments of this specification.
  • determining the positioning information of the element mask can be implemented in the following manner: performing soft connected domain analysis on the element mask.
• A connected domain, i.e., a connected region, generally refers to an image region composed of adjacent foreground pixels with the same pixel value.
  • step 530 performs soft connected domain analysis on the element mask, which may include the following sub-steps:
• Sub-step 531: Determine the number of connected domains;
• Sub-step 532: When the number of connected domains ≥ 2, determine the connected domains whose areas meet the preset conditions;
• Sub-step 533: When the ratio of the area of the largest connected domain among the multiple connected domains to the total area of the connected domains is greater than the first threshold M, determine that the largest connected domain meets the preset conditions;
• Sub-step 534: Determine that the retained connected domains at least include the largest connected domain;
• Sub-step 535: Determine the positioning information of the element mask based on the retained connected domains.
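One plausible reading of sub-steps 531–535 is sketched below, operating on precomputed connected-domain areas (the labelling step itself, e.g., via scipy.ndimage.label, is omitted, as is the second-threshold-N criterion the text also describes):

```python
# Retain the largest domain; if its share of the total area does not yet
# exceed the first threshold M, keep admitting next-largest domains
# until the retained share exceeds M or the preset order n is reached.
def soft_connected_domains(areas, M=0.9, n=3):
    if not areas:
        return []                      # empty mask: nothing to process
    areas = sorted(areas, reverse=True)
    if len(areas) == 1:
        return areas                   # a single domain is always retained
    total = sum(areas)
    retained = []
    for area in areas[:n]:
        retained.append(area)
        if sum(retained) / total > M:  # remaining domains: false positives
            break
    return retained

assert soft_connected_domains([95, 5]) == [95]        # S(A)/S(T) > M
assert soft_connected_domains([60, 35, 5]) == [60, 35]
```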
• the preset conditions refer to the conditions that a connected domain needs to meet in order to be retained.
  • the preset condition may be a limiting condition on the area of the connected domain.
• the medical image may include multiple connected domains of different areas. These connected domains can be sorted by area, for example from large to small, and recorded as the first connected domain, the second connected domain, ..., and the k-th connected domain. Among them, the first connected domain is the connected domain with the largest area, also called the maximum connected domain. In this case, the preset conditions for determining whether connected domains of different area orders are retained can differ.
  • the connected domains that meet the preset conditions may include: connected domains whose areas are ordered from largest to smallest and are within the preset order n.
• when the preset order n is 3, it is possible to determine, in order of area and according to the corresponding preset conditions, whether each connected domain is a retained connected domain: first determine whether the first connected domain is retained, then whether the second connected domain is retained, and so on.
  • the preset order n may be set based on the category of the target structure, for example, chest target structure, abdominal target structure.
  • the first threshold M may range from 0.8 to 0.95, within which the expected accuracy of soft connected domain analysis can be ensured. In some embodiments, the first threshold M may range from 0.9 to 0.95, further improving the accuracy of soft connected domain analysis. In some embodiments, the first threshold M may be set based on the category of the target structure, for example, chest target structure, abdominal target structure. In some embodiments, the preset order n/first threshold M can also be reasonably set based on machine learning and/or big data, and is not further limited here.
  • step 530 performs soft connected domain analysis on the element mask, which can be performed in the following manner:
• when the number of connected domains is 0, the corresponding mask is empty, that is, the mask acquisition or coarse segmentation failed, or the segmentation target does not exist; no processing is performed.
• for the spleen in the abdominal cavity, for example, the spleen may have been removed, in which case the spleen's mask is empty.
• when the number of connected domains is 1, there is only one connected domain; there are no false positives, separations, or disconnections, and the connected domain is retained. It can be understood that when the number of connected domains is 0 or 1, there is no need to use a preset condition to determine whether a connected domain is retained.
• when the number of connected domains is 2, connected domains A and B are obtained according to area (S), where the area of connected domain A is larger than that of connected domain B, that is, S(A) > S(B).
  • connected domain A can also be called the first connected domain or the maximum connected domain; connected domain B can be called the second connected domain.
• the preset condition that a connected domain needs to satisfy to be retained may be the relationship between the ratio of the largest connected domain's area to the total connected domain area and a threshold.
• when the ratio of the area of A to the total area of A and B is greater than the first threshold M, that is, S(A)/S(A+B) > M, connected domain B can be determined to be a false positive area and only connected domain A is retained (that is, connected domain A is determined to be a retained connected domain); when the ratio of the area of A to the total area of A and B is less than or equal to the first threshold M, both A and B can be determined to be part of the element mask, and connected domains A and B are both retained (that is, both are determined to be retained connected domains).
• the preset condition that the maximum connected domain (i.e., connected domain A) needs to meet to be retained may be the ratio of the maximum connected domain's area to the total connected domain area; it may also be the relationship between the ratio of the second connected domain's area to the maximum connected domain's area and a threshold (for example, the second threshold N).
• when the ratio of the area of connected domain A to the total area S(T) is greater than the first threshold M, that is, S(A)/S(T) > M, or the ratio of the area of connected domain B to the area of connected domain A is less than the second threshold N, that is, S(B)/S(A) < N, connected domain A is determined to be part of the element mask and retained (that is, connected domain A is a retained connected domain), and the remaining connected domains are all determined to be false positive areas; otherwise, the calculation continues, that is, it continues to determine whether the second connected domain (i.e., connected domain B) is a retained connected domain.
• the preset condition that connected domain B needs to satisfy to be retained may be the relationship between the ratio of the sum of the areas of the first and second connected domains to the total connected domain area and the first threshold M. In some embodiments, it may also be the relationship between the ratio of the third connected domain's area to the sum of the areas of the first and second connected domains and a threshold (for example, the second threshold N).
  • the judgment method of connected domain C is similar to the judgment method of connected domain B.
• the preset condition that connected domain C needs to satisfy to be retained may be the relationship between the ratio of the sum of the areas of the first, second, and third connected domains to the total connected domain area and the first threshold M, or the relationship between the ratio of the fourth connected domain's area to the sum of the areas of the first, second, and third connected domains and a threshold (for example, the second threshold N).
• FIG. 6 only shows the judgment of whether three connected domains are retained. It can also be understood that the preset order n in Figure 6 is set to 4; therefore, only the connected domains of order 1, 2, and 3, that is, connected domains A, B, and C, are judged as to whether they are retained connected domains.
• the second threshold N may range from 0.05 to 0.2; within this range, the expected accuracy of soft connected domain analysis can be ensured. In some embodiments, the second threshold N may be set to 0.05, which yields relatively excellent soft connected domain analysis accuracy.
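The second-threshold criterion above, under which a smaller domain is discarded when it is negligible relative to the largest one, can be sketched as a one-line decision rule (the function name is illustrative):

```python
# Domain B is treated as a false positive when S(B)/S(A) < N,
# with N typically in the 0.05–0.2 range described above.
def is_false_positive(area_b, area_a, N=0.05):
    return area_b / area_a < N

assert is_false_positive(4, 100, N=0.05)       # 0.04 < 0.05: discard B
assert not is_false_positive(10, 100, N=0.05)  # 0.10 >= 0.05: keep judging B
```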
• the upper and lower left images are, respectively, the cross-sectional and three-dimensional medical images of the coarse segmentation result obtained without soft connected domain analysis, and the upper and lower right images are, respectively, the cross-sectional and three-dimensional medical images of the coarse segmentation result obtained with soft connected domain analysis.
• the results of coarse segmentation of the element mask based on soft connected domain analysis show that the false positive areas outlined by the boxes in the left images are removed; the accuracy and reliability of excluding false positive areas are higher, which directly contributes to the subsequent reasonable extraction of bounding boxes for the element mask positioning information and improves segmentation efficiency.
  • the positioning information of the element mask may be the position information of the enclosing rectangle of the element mask, for example, the coordinate information of the border line of the enclosing rectangle.
  • the bounding rectangle of the element mask covers the positioning area of the element.
  • the bounding rectangle may be displayed in the medical image in the form of a bounding rectangular frame.
• the bounding rectangle may be constructed based on the outermost edges of the element's connected region in each direction, for example, the edges in the up, down, left, and right directions, to form a circumscribing rectangular frame for the element mask.
  • the bounding rectangle of the element mask may be a rectangular box or a combination of multiple rectangular boxes.
• that is, it can be a single rectangular frame with a larger area, or a larger rectangular frame formed by combining multiple rectangular frames with smaller areas.
  • the bounding rectangle of the element mask may be a bounding rectangle in which only one rectangle exists.
  • a larger circumscribed rectangle is constructed based on the bottom edges of the connected area in each direction.
• the above-mentioned large-area circumscribed rectangle can be applied to organs whose mask forms a single connected region.
  • the bounding rectangle of the element mask may be a circumscribing rectangular frame composed of multiple rectangular frames.
• multiple rectangular boxes corresponding to the multiple connected regions are combined into one rectangular box based on their outermost edges. It can be understood that if the outermost edges of three rectangular boxes corresponding to three connected regions are used to construct one overall circumscribed rectangular box, the calculation is processed on that single box, which ensures the expected accuracy while reducing the amount of calculation.
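Combining several per-component rectangles into one overall circumscribed rectangle, as described above, reduces to taking coordinate extremes; the (y0, x0, y1, x1) box convention is an assumption for illustration:

```python
# Union of axis-aligned boxes: the overall circumscribed rectangle spans
# the minimum top-left and maximum bottom-right corners of all boxes.
def union_box(boxes):
    ys0, xs0, ys1, xs1 = zip(*boxes)
    return (min(ys0), min(xs0), max(ys1), max(xs1))

boxes = [(2, 3, 5, 6), (4, 1, 9, 4), (0, 5, 3, 8)]
assert union_box(boxes) == (0, 1, 9, 8)
```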
  • the location information of the multiple connected domains can be determined first, and then the positioning information of the element mask is obtained based on the location information of the multiple connected domains. For example, you can first determine the connected domain among multiple connected domains that meets the preset conditions, that is, retain the location information of the connected domain, and then obtain the positioning information of the element mask based on the retained location information of the connected domain.
  • determining the positioning information of the element mask may also include the following operations: positioning the element mask based on the preset positioning coordinates of the element.
  • this operation may be performed if positioning of the element mask's bounding rectangle fails. It is understandable that when the coordinates of the enclosing rectangle of the element mask do not exist, it is judged that the positioning of the corresponding element fails.
  • the preset element can be an element with relatively stable positioning (for example, an organ with relatively stable positioning); the probability of positioning failure when positioning such an element is low, thereby achieving accurate positioning of the element mask.
  • since the probability of positioning failure is low for the liver, stomach, spleen, and kidneys in the abdominal cavity and for the lungs in the thoracic cavity, the positioning of these organs is relatively stable. Therefore, the liver, stomach, spleen, and kidneys can be used as preset organs in the abdominal cavity; that is, the preset elements can include the liver, stomach, spleen, kidneys, lungs, or any other possible organ tissue.
  • the organ mask in the abdominal cavity can be repositioned based on the positioning coordinates of the liver, stomach, spleen, and kidneys. In some embodiments, the organ mask in the chest range may be positioned based on the positioning coordinates of the lungs.
  • the element mask can be repositioned using the preset positioning coordinates of the element as the reference coordinates.
  • the positioning coordinates of the liver, stomach, spleen, and kidney are used as the coordinates for repositioning, and the failed element in the abdominal cavity is repositioned accordingly.
  • the positioning coordinates of the lungs are used as the coordinates for repositioning, and the element that fails to be positioned in the chest is repositioned accordingly.
  • the positioning coordinates of the top of the liver, the bottom of the kidneys, the left edge of the spleen, and the right edge of the liver can be used as the repositioning coordinates in the transverse direction (upper and lower sides) and the coronal direction (left and right sides), and the most anterior and most posterior ends of the coordinates of these four organs can be taken as the repositioning coordinates in the sagittal direction (anterior, posterior); based on these, the elements that failed to be positioned in the abdominal cavity are repositioned.
  • the circumscribed rectangular frame formed by the lung positioning coordinates is expanded outward by a certain number of pixels, and the element that failed to be positioned in the chest is positioned again accordingly.
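The fallback just described can be sketched as follows, assuming positioning failure is signaled by a missing bounding box (no coordinates exist); the helper name and the use of a single pixel `margin` for the chest case are illustrative assumptions.

```python
def position_with_fallback(box, preset_box, margin=0):
    """Return the element's own bounding box if positioning succeeded;
    otherwise reposition it from the preset organ's coordinates (e.g.
    liver/stomach/spleen/kidneys for the abdomen, lungs for the chest),
    expanded outward by `margin` pixels. Boxes are ((lo, hi), ...)."""
    if box is not None:
        return box  # coordinates exist: positioning did not fail
    return tuple((lo - margin, hi + margin) for lo, hi in preset_box)
```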
  • Step 540 Accurately segment at least one element based on the positioning information of the mask.
  • Figure 9 is an exemplary flowchart of a process of accurately segmenting elements according to some embodiments of this specification.
  • accurately segmenting at least one element based on the positioning information of the mask may include the following sub-steps:
  • Sub-step 541 Perform preliminary precise segmentation on at least one element.
  • the preliminary precise segmentation may be a precise segmentation based on the positioning information of the rough segmented element mask.
  • preliminary precise segmentation may be performed on the element based on the input data and the bounding rectangle positioned by the rough segmentation. A precisely segmented element mask can be generated through the preliminary accurate segmentation.
  • Sub-step 542 determine whether the positioning information of the element mask is accurate. Through step 542, it can be determined whether the positioning information of the element mask obtained by rough segmentation is accurate, and further whether the rough segmentation is accurate.
  • the element mask of the preliminary precise segmentation can be calculated to obtain its positioning information, and the positioning information of the rough segmentation can be compared with the positioning information of the precise segmentation.
  • the bounding rectangular frame of the roughly segmented element mask can be compared with the bounding rectangular frame of the precisely segmented element mask to determine the difference between the two.
  • the circumscribed rectangular frame of the roughly segmented element mask can be compared with the circumscribed rectangular frame of the precisely segmented element mask in the six directions of three-dimensional space (that is, the circumscribed rectangular frame is a cuboid in three-dimensional space) to determine the difference between the two.
  • whether the positioning information of the roughly segmented element mask is accurate can be determined based on the positioning information of the initially precisely segmented element mask. In some embodiments, whether the judgment result is accurate can be determined based on the difference between the coarse segmentation positioning information and the precise segmentation positioning information.
  • the positioning information may be a circumscribed rectangle of the element mask; whether the bounding rectangle of the coarsely segmented element mask is accurate is determined based on the circumscribed rectangle of the roughly segmented element mask and the circumscribed rectangle of the precisely segmented element mask.
  • the difference between the coarse segmentation positioning information and the precise segmentation positioning information may refer to the distance between the closest border lines in the coarse segmentation enclosing rectangular frame and the precise segmentation enclosing rectangular frame.
  • when the positioning information of coarse segmentation is close to the positioning information of precise segmentation (that is, the distance between the closest border lines of the rough segmentation circumscribed rectangular frame and the precise segmentation circumscribed rectangular frame is relatively small), the positioning information of the rough segmentation is judged to be inaccurate in the corresponding direction.
  • the rough segmentation bounding rectangle is obtained by expanding the border lines of the original rough segmentation close to the element outward by a number of voxels (for example, 15-20 voxels).
  • whether the positioning information of coarse segmentation is accurate can be determined by the relationship between a preset threshold and the distance between the nearest border lines of the roughly segmented circumscribed rectangular frame and the precisely segmented circumscribed rectangular frame; for example, when the distance is less than the preset threshold, it is determined to be inaccurate, and when the distance is greater than the preset threshold, it is determined to be accurate.
  • the value range of the preset threshold may be less than or equal to 5 voxels.
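The per-direction accuracy judgment above can be sketched as follows, under the convention stated in the text: the coarse box was expanded outward, so a precise border lying within the threshold distance of the coarse border flags that direction as inaccurate. The function name and box representation are illustrative assumptions.

```python
def inaccurate_directions(coarse_box, fine_box, threshold=5):
    """For boxes given as ((lo, hi), (lo, hi), (lo, hi)) in voxels,
    return one flag per face (6 faces for 3-D, i.e. 2 per axis):
    True where the precise border is closer than `threshold` voxels
    to the expanded coarse border, meaning that direction is inaccurate."""
    flags = []
    for (c_lo, c_hi), (f_lo, f_hi) in zip(coarse_box, fine_box):
        flags.append(abs(f_lo - c_lo) < threshold)  # low-side face
        flags.append(abs(c_hi - f_hi) < threshold)  # high-side face
    return flags
```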
  • FIG. 10 to 11 are exemplary schematic diagrams of positioning information determination of element masks according to some embodiments of this specification.
  • FIG. 12A is an exemplary schematic diagram of determining the sliding direction based on the positioning information of the element mask according to some embodiments of this specification.
  • Figures 10 and 11 show the element mask A obtained by rough segmentation and the circumscribed rectangular frame B of element mask A (that is, the positioning information of element mask A), as well as the circumscribed rectangular frame C obtained by the preliminary accurate segmentation based on the roughly segmented circumscribed rectangular frame.
  • Figure 12A also shows the sliding window B1 obtained after sliding the roughly divided circumscribed rectangular frame B.
  • (a) is a schematic diagram before the sliding operation
  • (b) is a schematic diagram after the sliding operation.
  • a planar rectangular frame within a plane of a three-dimensional circumscribed rectangular frame is used as an example.
  • the difference between the right border line of the precisely segmented circumscribed rectangular frame C and the corresponding border line of the coarsely segmented circumscribed rectangular frame B is small; from this, it can be judged that the direction corresponding to the right border line of the roughly segmented circumscribed rectangular frame B is inaccurate, and the right border line needs to be adjusted.
  • the upper, lower, and left border lines of C differ considerably from the corresponding border lines of B; from this, it can be judged that the directions corresponding to the upper, lower, and left border lines of the roughly segmented circumscribed rectangular frame B are accurate.
  • when the border lines of all four sides of the precisely segmented circumscribed rectangular frame C differ greatly from the corresponding border lines of the coarsely segmented circumscribed rectangular frame B, it can be judged that the border lines of all four sides of the roughly segmented circumscribed rectangular frame B are accurate. It should be noted that element mask A has 6 directions; in Figures 10 and 11, only 4 border lines are used for illustration, while in actual situations there are 12 border lines in 6 directions in element mask A for judgment.
  • Sub-step 543a when the judgment result is inaccurate, obtain accurate positioning information based on the adaptive sliding window.
  • when the coarse segmentation result is inaccurate, the elements obtained by precise segmentation are likely to be inaccurate as well. A corresponding adaptive sliding window calculation can be performed to obtain accurate positioning information before continuing with precise segmentation.
  • obtaining accurate positioning information based on adaptive sliding windows can be implemented in the following manner: determining at least one direction in which the positioning information is inaccurate; and performing adaptive sliding window calculations in the direction according to the overlap rate parameter.
  • at least one direction in which the circumscribed rectangular frame is inaccurate can be determined; after the roughly segmented circumscribed rectangular frame is judged inaccurate, it is slid in the corresponding direction according to the input preset overlap rate parameter (that is, a sliding window operation), and the sliding window operation is repeated until all circumscribed rectangular boxes are completely accurate.
  • the overlap rate parameter refers to the ratio of the overlap area between the initial circumscribed rectangular frame and the sliding circumscribed rectangular frame to the area of the initial circumscribed rectangular frame.
  • the larger the overlap rate parameter, the shorter the sliding step length of the sliding window operation.
  • if you want the sliding window calculation process to be more concise (that is, fewer sliding window operations), you can set the overlap rate parameter smaller; if you want the results of the sliding window calculation to be more accurate, you can set the overlap rate parameter larger.
  • the sliding step size for sliding window operation may be calculated according to the current overlap rate parameter. According to the judgment method in Figure 10, it can be seen that the directions corresponding to the right and lower border lines of the roughly segmented circumscribed rectangular frame B in Figure 12A are inaccurate.
  • the direction corresponding to the right border line of the circumscribed rectangular frame B is recorded as the first direction (the first direction is perpendicular to the right border line of B), and the direction corresponding to the lower border line is recorded as the second direction (the second direction is perpendicular to the lower border line of B).
  • for example, if the length of the circumscribed rectangular frame B is a and the overlap rate parameter is 60%, it can be determined that the corresponding step length is a*(1-60%), and the right border line of the circumscribed rectangular frame B can slide a*(1-60%) along the first direction.
  • the lower border line of the circumscribed rectangular frame B can slide along the second direction by the corresponding step. The sliding window operations on the right border line and the lower border line of the circumscribed rectangular frame B are repeated until the circumscribed rectangular frame B is completely accurate, as shown by the sliding window B1 in Figure 12A(b).
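The step-length rule and repeated sliding can be sketched as follows; the function names are illustrative, and the sketch assumes an overlap rate strictly below 1 (otherwise the window would never advance).

```python
def sliding_step(length, overlap_rate):
    """Step length of the sliding window: with overlap rate r, the
    window advances by length * (1 - r), so a larger overlap rate
    gives a shorter step (more windows, higher expected accuracy)."""
    assert 0 <= overlap_rate < 1
    return length * (1.0 - overlap_rate)

def window_positions(start, length, overlap_rate, target_hi):
    """Start coordinates of successive windows sliding along one
    inaccurate direction until the window's far edge reaches target_hi."""
    step = sliding_step(length, overlap_rate)
    positions = [float(start)]
    while positions[-1] + length < target_hi:
        positions.append(positions[-1] + step)
    return positions
```

With length a = 10 and an overlap rate of 60%, the step is 10*(1-60%) = 4, matching the a*(1-60%) example in the text.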
  • the coordinate values of the border lines in the six directions of the finely segmented circumscribed rectangular frame are compared one by one with the coordinate values of the border lines in the six directions of the coarsely segmented circumscribed rectangular frame.
  • a coordinate difference threshold can be set (for example, the coordinate difference threshold is 5pt); the coordinate difference threshold can be set according to the actual situation and is not limited here.
  • the pixel coordinates in the four directions corresponding to the four border lines in the image of the finely segmented circumscribed rectangular frame C are compared one by one with the pixel coordinates in the four directions corresponding to the four border lines in the image of the coarsely segmented circumscribed rectangular frame B. When the difference of the pixel coordinates in a direction is less than the coordinate difference threshold of 8pt, it can be determined that the roughly segmented circumscribed rectangular frame in Figure 10 is inaccurate in that direction.
  • for example, the top difference is 20pt, the bottom difference is 30pt, the right difference is 1pt, and the left difference is 50pt; only the right difference is less than the 8pt threshold, so only the right direction is judged to be inaccurate.
  • B1 is a circumscribed rectangular frame (also called a sliding window) obtained by sliding the roughly segmented circumscribed rectangular frame B.
  • since the sliding window is a roughly segmented circumscribed rectangular frame that meets the expected accuracy standard, it is necessary to slide the border lines (for example, the right border line and the bottom border line) of the roughly segmented circumscribed rectangular frame B along the corresponding directions (for example, the first direction and the second direction) by the corresponding steps to reach the sliding window B1. The direction corresponding to each border line that does not meet the standard is moved in sequence.
  • the distance each side slides depends on the overlap rate between B1 and B, where the overlap rate can be the ratio of the current overlapping area of the roughly segmented circumscribed rectangular frame B and the sliding window B1 to the total area; for example, the current overlap rate is 40%, and so on.
  • the sliding order of the border lines of the roughly divided circumscribed rectangular frame B may be from left to right, from top to bottom, or other feasible order, which is not further limited here.
  • Figures 12B-12E are exemplary schematic diagrams of precise segmentation after sliding windows according to some embodiments of this specification.
  • in Figures 12B-12E, based on the original coarse segmented circumscribed rectangular frame (the original sliding window), an accurate coarse segmented circumscribed rectangular frame is obtained after the adaptive sliding window, from which the accurate coordinate values of the circumscribed rectangular frame can be obtained. Based on the coordinate values and the overlap rate parameter, each new sliding window is accurately segmented, and the accurate segmentation results are superimposed with the preliminary accurate segmentation result to obtain the final accurate segmentation result.
  • the sliding window operation can be performed on the original sliding window B to obtain the sliding window B1 (the maximum range of the circumscribed rectangular frame after the sliding window operation).
  • B slides the corresponding step along the first direction to obtain the sliding window B1-1, and then the entire range of the sliding window B1-1 is accurately segmented to obtain the accurate segmentation result of the sliding window B1-1.
  • B can slide the corresponding step along the second direction to obtain the sliding window B1-2, and then accurately segment the entire range of the sliding window B1-2 to obtain an accurate segmentation result of the sliding window B1-2.
  • B can obtain the sliding window B1-3 by sliding (for example, B can obtain the sliding window B1-2 by sliding as shown in Figure 12C, and then slide the sliding window B1-2 to obtain the sliding window B1-3) , and then accurately segment the entire range of the sliding window B1-3, and obtain the accurate segmentation result of the sliding window B1-3.
  • the accurate segmentation results of the sliding window B1-1, the sliding window B1-2 and the sliding window B1-3 are superimposed with the preliminary precise segmentation result to obtain the final precise segmentation result.
  • the sizes of the sliding window B1-1, the sliding window B1-2 and the sliding window B1-3 are the same as the size of B.
  • Sliding window B1 is the final sliding window result obtained by performing continuous sliding window operations on original sliding window B, namely sliding window B1-1, sliding window B1-2 and sliding window B1-3.
  • the precise segmentation results of the sliding window B1-1, the sliding window B1-2 and the sliding window B1-3 are superimposed with the preliminary precise segmentation results, there may be repeated overlapping parts.
  • for example, there may be an intersection between the sliding window B1-1 and the sliding window B1-2, and the intersection may be superimposed repeatedly.
  • the following method can be used to deal with it: for a certain part of the element mask A, if the segmentation result of one sliding window is accurate for this part and the segmentation result of the other sliding window is inaccurate, the accurate sliding window's segmentation result is used as the segmentation result of this part; if the segmentation results of both sliding windows are accurate, the segmentation result of the right sliding window is used as the segmentation result of this part; if neither sliding window's segmentation result is accurate, the segmentation result of the right sliding window is used as the segmentation result of this part, and precise segmentation continues until the segmentation result is accurate.
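The overlap-resolution rule can be sketched as a small selector; the tuple layout (window right-edge coordinate, segmentation result, accuracy flag) is an illustrative assumption, with the right-edge coordinate used to identify the "right" sliding window.

```python
def resolve_overlap(results):
    """results: list of (window_right_edge, segmentation, accurate) for one
    overlapping part of element mask A. Returns the chosen segmentation and
    whether further precise segmentation is still needed for this part."""
    accurate = [r for r in results if r[2]]
    if accurate:
        # at least one accurate result: prefer it; if several are
        # accurate, take the rightmost window's result
        chosen = max(accurate, key=lambda r: r[0])
        return chosen[1], False
    # no accurate result: take the rightmost window's result and
    # continue precise segmentation until it becomes accurate
    chosen = max(results, key=lambda r: r[0])
    return chosen[1], True
```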
  • obtaining accurate positioning information based on the adaptive sliding window is a cyclic process. Specifically, after comparing the precise segmentation border line and the coarse segmentation border line, the updated coordinate value of the precise segmentation external rectangular frame can be obtained through the adaptive sliding window.
  • the precisely segmented circumscribed rectangular frame is expanded outward by a certain number of pixels and set as the new roughly segmented circumscribed rectangular frame for the next round of the cycle; the new circumscribed rectangular frame is then accurately segmented again to obtain a new precisely segmented circumscribed rectangular frame, and whether it meets the accuracy requirements is calculated. If the requirements are met, the cycle ends; otherwise, the cycle continues.
  • a deep convolutional neural network model may be used to perform precise segmentation on at least one element in the coarse segmentation.
  • the historical medical images initially acquired before rough segmentation can be used as training data, and the historical accurate segmentation result data can be used as training labels, to train the deep convolutional neural network model.
  • the historical scanned medical images of the scanned object and the historical accurate segmentation result data may be obtained from the medical scanning device, or may be obtained from the terminal 130, the processing device 140, or the storage device 150.
  • Sub-step 543b when the judgment result (that is, the coarse segmentation result) is accurate, the preliminary accurate segmentation result is output as the segmentation result. The result data of at least one accurately segmented element can be output.
  • image post-processing operations may be performed before outputting the segmentation results.
  • Image post-processing operations may include edge smoothing and/or image denoising, etc.
  • edge smoothing processing may include smoothing processing or blurring processing to reduce noise or distortion of medical images.
  • smoothing processing or blurring processing may adopt the following methods: mean filtering, median filtering, Gaussian filtering, and bilateral filtering.
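Three of the four filters named above can be sketched with `scipy.ndimage`; the dispatcher name and kernel sizes are illustrative, and bilateral filtering is omitted because it is not available in `scipy.ndimage` (it would need, e.g., OpenCV or scikit-image).

```python
import numpy as np
from scipy import ndimage

def postprocess(image, method="gaussian"):
    """Edge smoothing / denoising step after segmentation output.
    Kernel size 3 and sigma 1.0 are illustrative choices."""
    if method == "mean":
        return ndimage.uniform_filter(image, size=3)    # mean filtering
    if method == "median":
        return ndimage.median_filter(image, size=3)     # median filtering
    if method == "gaussian":
        return ndimage.gaussian_filter(image, sigma=1.0)  # Gaussian filtering
    raise ValueError(f"unknown method: {method}")
```

Median filtering in particular removes isolated noisy voxels on a mask edge without shifting the edge, which is why it is a common choice for this post-processing step.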
  • Figure 13 is an exemplary effect comparison diagram of segmentation results according to some embodiments of this specification.
  • in Figure 13, the upper and lower images on the left are respectively the cross-sectional medical image and the three-dimensional medical image obtained with the rough segmentation results of traditional technology, and the upper and lower images on the right are respectively the cross-sectional medical image and the three-dimensional medical image obtained with the organ segmentation method provided by the embodiments of the present application.
  • Step 460 Register the first medical image and the second medical image to determine the spatial position of the third target structure set during the operation.
  • for more description about registering and determining the spatial position of the third target structure set during the operation, please refer to the description in step 420.
  • the fourth target structure set can also be regarded as part of the third target structure set, for example, the inaccessible area and all important organs outside the target organ.
  • the first medical image (i.e., the segmented image of the pre-operative first target structure set obtained by segmenting the pre-operative enhanced image) may include the first target structure set (for example, the blood vessels in the pre-operative target organ, the precise structural characteristics of the pre-operative target organ, and the pre-operative lesions); the second medical image (i.e., the segmented image of the intra-operative second target structure set obtained by segmenting the intra-operative scanned image) may include the second target structure set (for example, the precise structural characteristics of the intra-operative target organ, the intra-operative lesions, and the intra-operative inaccessible areas/all important organs).
  • the first medical image and the second medical image may be subjected to separation processing of the appearance features of the target structure set and the background.
  • the separation of appearance features and background can be based on artificial neural networks (linear decision functions, etc.), threshold-based segmentation methods, edge-based segmentation methods, image segmentation methods based on cluster analysis (such as K-means), or any other feasible algorithm, such as segmentation methods based on wavelet transform.
  • taking as an example the case where the first medical image includes the blood vessels in the pre-operative target organ and the structural characteristics of the pre-operative target organ (that is, the first target structure set includes the blood vessels in the target organ and the target organ), and the second medical image includes the structural characteristics of the intra-operative target organ, the intra-operative lesions, and the intra-operative non-interventional areas/all important organs (that is, the second target structure set includes the target organ, intra-operative lesions, and non-interventional areas/all important organs), the registration process is described illustratively below.
  • the structural features of the lesion are not limited to being included in the second medical image. In other embodiments, the structural features of the lesion may also be included in the first medical image, or may be included in both the first medical image and the second medical image.
  • Figure 14 is an exemplary flowchart of a process of registering a first medical image and a second medical image shown in some embodiments of this specification.
  • Step 461 Register the first medical image and the second medical image to determine the registration deformation field.
  • Figure 15-16 are exemplary flowcharts of the process of determining a registration deformation field shown in some embodiments of this specification.
  • Figure 17 is an exemplary schematic diagram of the first medical image and the second medical image obtained through segmentation in some embodiments of this specification.
  • in step 461, the process of registering the first segmented image and the second segmented image and determining the registration deformation field may include the following sub-steps:
  • Sub-step 4611 determine the first preliminary deformation field based on the registration between elements.
  • the elements may be element outlines of the first medical image and the second medical image (eg, organ outlines, blood vessel outlines, lesion outlines).
  • the registration between elements may refer to the registration between image areas covered by the element outlines (mask).
  • the pre-surgery enhanced image in Figures 16 and 17 is segmented to obtain the image area covered by the organ outline A of the target organ (the area with the same or basically the same grayscale within the dotted line area in the lower left figure), and the intra-operative scanned image is segmented to obtain the image area covered by the organ outline B of the target organ (the area with the same or basically the same grayscale within the dotted line area in the lower right figure).
  • the first preliminary deformation field (deformation field 1 in Figure 16) is obtained through area registration between the image area covered by organ outline A and the image area covered by organ outline B.
  • the first preliminary deformation field may be a local deformation field.
  • for example, the local deformation field of the liver contour is obtained from the liver's pre-operative contour A and intra-operative contour B.
  • Sub-step 4612 Determine the second preliminary deformation field of the entire image based on the first preliminary deformation field between elements.
  • the full image can be a region-wide image containing elements.
  • the full image can be an image of the entire abdominal cavity.
  • the full image can be an image of the entire chest cavity.
  • the second preliminary deformation field of the entire image may be determined through interpolation based on the first preliminary deformation field.
  • the second preliminary deformation field may be a global deformation field.
  • for example, deformation field 2 of the full image size is determined by interpolating deformation field 1.
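The interpolation from the local deformation field (deformation field 1, defined over the organ region) to the full-image deformation field (deformation field 2) can be sketched with `scipy.ndimage.zoom`. The text only says the full-image field is determined "through interpolation", so linear interpolation and the field layout used here are assumptions.

```python
import numpy as np
from scipy import ndimage

def expand_deformation_field(local_field, full_shape):
    """Interpolate a local deformation field, shaped (*region_shape, ndim)
    with one displacement vector per voxel, up to `full_shape`. Each
    displacement component is interpolated separately with order=1
    (linear), an assumed scheme."""
    ndim = local_field.shape[-1]
    factors = [full / local
               for full, local in zip(full_shape, local_field.shape[:-1])]
    return np.stack(
        [ndimage.zoom(local_field[..., d], factors, order=1)
         for d in range(ndim)],
        axis=-1,
    )
```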
  • Sub-step 4613 Deform the floating image based on the second preliminary deformation field of the entire image, and determine the registration map of the floating image.
  • the floating image can be an image to be registered, for example, an enhanced image before surgery or a scanned image during surgery.
  • the floating image when registering the intra-operative scan image to the pre-operative scan image, the floating image is the intra-operative scan image.
  • the intra-operative scanned image can be deformed through the registration deformation field so that its spatial position is consistent with that of the pre-operative scanned image.
  • the pre-operative enhanced image when registered to the intra-operative scanned image, the floating image is the pre-operative enhanced image.
  • the pre-operative enhanced image can be deformed through the registration deformation field so that its spatial position is consistent with that of the intra-operative scanned image.
  • the registration map of the floating image may be the image of the intermediate registration result obtained during the registration process.
  • the registration map of the floating image can be the intermediate intra-operative scanned image obtained during the registration process.
  • the embodiments of this specification take the registration of pre-operative enhanced images to intra-operative scanned images as an example to describe the registration process in detail.
  • the floating image (that is, the pre-surgery enhanced image) is deformed to determine the registration map of the pre-surgery enhanced image, that is, the intermediate registration result. For example, as shown in Figure 16, based on the obtained deformation field of the abdominal cavity range where the liver is located, the pre-operative enhanced image (the abdominal cavity enhanced image) is deformed to obtain its registration map.
  • Sub-step 4614 Register the registration map of the floating image with the area of the first grayscale difference range in the reference image to obtain the third preliminary deformation field.
  • the reference image refers to the target image before registration, which may also be called the target image without registration.
  • the reference image refers to the intra-operative scanned image without registration action.
  • the third preliminary deformation field may be a local deformation field.
  • sub-step 4614 can be implemented in the following manner: perform pixel grayscale calculations on corresponding areas of the registration map of the floating image and of the reference image to obtain the corresponding grayscale values; calculate the difference between the grayscale value of each area of the registration map of the floating image and the grayscale value of the corresponding area of the reference image; and register the areas whose difference value is within the first grayscale difference range.
  • when the difference value is within the first grayscale difference range, it may mean that the difference between an area in the registration map of the floating image and the corresponding area in the reference image is relatively small.
  • for example, if the first grayscale difference range is 0 to 150, the grayscale difference between area Q1 in the registration map of the floating image and the same area in the reference image is 60, and the grayscale difference of area Q2 is 180, then the difference of the two images (i.e., the registration map of the floating image and the reference image) in area Q1 is not large, while the difference in area Q2 is large; only area Q1 in the two images is registered.
  • elastic registration is performed on the registration map of the floating image and the area in the reference image that conforms to the first grayscale difference range (the area where the difference is not large) to obtain deformation field 3 (that is, the third preliminary deformation field mentioned above).
  • Sub-step 4615 Determine the fourth preliminary deformation field of the entire image based on the third preliminary deformation field.
  • interpolation is performed to obtain a fourth preliminary deformation field of the entire image.
  • the fourth preliminary deformation field may be a global deformation field.
  • this step can be used to extend the local third preliminary deformation field into the global fourth preliminary deformation field.
  • the deformation field 4 of the full image size is determined by interpolation of the deformation field 3.
  • Sub-step 4616 Based on the fourth preliminary deformation field, register the area in the second grayscale difference range to obtain a final registration map.
  • the area of the second grayscale difference range may be an area where the grayscale value difference between the registration map grayscale value of the floating image and the reference image grayscale value is larger.
  • a grayscale difference threshold can be set (for example, the grayscale difference threshold is 150).
  • areas where the difference between the grayscale value of the registration map of the floating image and the grayscale value of the reference image is less than the grayscale difference threshold belong to the first grayscale difference range; areas where the difference is greater than the grayscale difference threshold belong to the second grayscale difference range
  • the final registration map can be obtained by performing multiple deformations on the floating image (for example, the pre-surgery enhanced image) based on at least one deformation field, producing an image whose spatial position and anatomical position match the final intra-operative scan
  • the areas in the second grayscale difference range are registered to obtain the final registration map. For example, an area outside the spleen where the grayscale value difference is relatively large is deformed through deformation field 4 to obtain the final registration map
  • elements that are segmented in the floating image but not segmented in the reference image can be extracted from the floating image and mapped to the reference image
  • the floating image is the pre-operative enhanced image and the reference image is the intra-operative scanned image.
  • the blood vessels in the target organ are segmented in the pre-operative enhanced image and are not segmented in the intra-operative scanned image.
  • through registration, the blood vessels within the target organ can be mapped to the intra-operative scan
  • the registration method of Figures 15-16 can also be used to register the inaccessible areas in the fast segmentation mode and all important organs in the precise segmentation mode, or a similar effect can be achieved through the corresponding segmentation method alone
  • Step 462 Determine the spatial position of the corresponding element during the operation based on the registered deformation field and the spatial position of at least some elements in the first target structure set in the pre-surgery enhanced image.
  • the spatial position of the blood vessels in the target organ during surgery (hereinafter referred to as blood vessels) may be determined based on the registration deformation field and the blood vessels in the target organ in the pre-surgery enhanced image.
  • the spatial position of the blood vessel during surgery can be determined from the registration deformation field and the blood vessel in the pre-surgery enhanced image according to formula (1):
  • (x', y', z') = (x, y, z) + u(x, y, z)  (1)
  • where I_Q represents the pre-operative enhanced image, (x, y, z) represents the three-dimensional spatial coordinates of the blood vessel in I_Q, u(x, y, z) represents the registration deformation field from the pre-operative enhanced image to the intra-operative scanned image, and (x', y', z') indicates the spatial position of the blood vessel in the scanned image during surgery
  • u(x, y, z) can also be understood as the offset of the three-dimensional coordinates of elements in the floating image (for example, blood vessels in the target organ) to the three-dimensional coordinates in the final registered registration map.
  • the blood vessels in the pre-operative enhanced image can thus be deformed so that the generated vessel positions coincide with their spatial positions during surgery
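Applying the registration deformation field to pre-operative vessel coordinates, as in formula (1), might look like the following sketch (the (N, 3) point layout, the (D, H, W, 3) field layout, and the names are assumptions):

```python
import numpy as np

def map_to_intraop(points, deformation_field):
    """Map pre-operative vessel voxel coordinates into the intra-operative
    image space by adding the registration displacement u(x, y, z).
    points: (N, 3) integer voxel coordinates in the pre-op enhanced image;
    deformation_field: (D, H, W, 3) displacement vectors."""
    u = deformation_field[points[:, 0], points[:, 1], points[:, 2]]
    return points + u  # formula (1): (x', y', z') = (x, y, z) + u(x, y, z)
```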
  • the processing device can calculate the center point of the lesion based on the spatial positions of the blood vessels and the lesion determined for the operation (including those in the second segmented image of the intra-operative scan), and generate a safe area around the lesion and a potential needle insertion area
  • the safe area around the lesion and the potential needle insertion area can be determined based on the determined interventional area and non-interventionable area.
  • a reference path from the percutaneous needle insertion point to the center point of the lesion can be planned based on the potential needle insertion area and basic obstacle avoidance constraints.
  • the basic obstacle avoidance constraints may include but are not limited to the needle insertion angle of the path, the needle insertion depth of the path, the path not intersecting with blood vessels and important organs, etc.
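The basic obstacle-avoidance constraints listed above (needle insertion depth, insertion angle, and the path not intersecting vessels or important organs) could be checked for one candidate straight path roughly as follows; the geometry is simplified to a point-to-segment clearance test, and all names and limits are illustrative assumptions:

```python
import numpy as np

def path_is_feasible(entry, target, obstacles, max_depth, max_angle_deg,
                     min_clearance):
    """Check basic obstacle-avoidance constraints for a straight path from
    a skin entry point to the lesion center: needle depth, insertion angle
    with respect to the vertical (z) axis, and clearance from obstacle
    points (blood vessels / important organs)."""
    entry, target = np.asarray(entry, float), np.asarray(target, float)
    direction = target - entry
    depth = np.linalg.norm(direction)
    if depth == 0 or depth > max_depth:
        return False
    # angle between the needle and the vertical axis
    cos_angle = abs(direction[2]) / depth
    if np.degrees(np.arccos(np.clip(cos_angle, -1, 1))) > max_angle_deg:
        return False
    # shortest distance from each obstacle point to the path segment
    for p in np.asarray(obstacles, float):
        t = np.clip(np.dot(p - entry, direction) / depth ** 2, 0, 1)
        if np.linalg.norm(p - (entry + t * direction)) < min_clearance:
            return False
    return True
```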
  • Step 470 Plan the intervention path based on the spatial position of the third target structure set during the operation, and perform risk assessment based on the intervention path.
  • the spatial locations of the elements in the third target structure set can reflect the current condition of the scanned object (for example, a patient) more comprehensively and accurately
  • the intervention path can be planned based on the spatial position of the third target structure set, so that the surgical instrument (for example, a puncture needle) can effectively avoid blood vessels in the target organ, inaccessible areas, and/or all important organs while successfully reaching the lesion, reducing surgical risks
  • the selection of elements from the third target structure set may depend on the mode in which the intervention path is planned
  • in different modes, the elements of the third target structure set used for the risk assessment of the interventional path may differ
  • in the fast planning mode, the elements of the third target structure set used for the risk assessment of the interventional path may include blood vessels and non-interventional areas within the target organ
  • in the precise planning mode, the elements of the third target structure set used for the risk assessment of the interventional path may include blood vessels in the target organ and all important organs
  • Figure 18 is an exemplary flow chart for determining intervention risk values of at least some elements in the third target structure set in the fast planning mode shown in some embodiments of this specification.
  • the process 700 of determining the intervention risk values of at least some elements in the third target structure set in the fast planning mode may include the following steps:
  • Step 710 Determine the risk level of the element based on the shortest distance between the element and the intervention path;
  • Step 720 Determine the intervention risk value of the element according to the risk level.
  • elements in process 700 may include blood vessels and non-accessible areas within the target organ.
  • when the interventional path passes through the target organ, the elements in the process 700 may be the blood vessels and non-interventional areas within the target organ
  • blood vessels and inaccessible areas within the target organ have different risk levels for the interventional path, and the corresponding risk values are also different.
  • when the interventional path does not pass through the target organ, the elements in the process 700 may be the non-interventional areas
  • the risk level of the corresponding element can be determined based on the shortest distance between the blood vessel in the target organ and the interventional path, and the shortest distance between the non-interventional area and the interventional path, thereby determining the corresponding interventional risk value of the element.
  • the risk level and interventional risk value of the blood vessels and inaccessible areas within the target organ need to be calculated.
  • the interventional risk value of blood vessels in the target organ is calculated as follows: let the shortest straight-line distance between the path and the blood vessel be L1
  • when 0 < L1 ≤ M1, the risk level is the highest (recorded as the first risk level), and the corresponding interventional risk value is the first interventional risk value
  • when M1 < L1 ≤ N1, the risk level is the second risk level, and the corresponding interventional risk value is the second interventional risk value; when N1 < L1 ≤ P1, the risk level is the third risk level, and the corresponding interventional risk value is the third interventional risk value; when L1 > P1, the risk level and interventional risk value are not considered. Among them, the first risk level is higher than the second risk level, and the second risk level is higher than the third risk level
  • the interventional risk value of the inaccessible area is calculated as follows: let the shortest straight-line distance between the path and the inaccessible area be L2. When 0 < L2 ≤ A1, the risk level is the highest (recorded as the first risk level), and the corresponding interventional risk value is the first interventional risk value
  • when A1 < L2 ≤ B1, the risk level is the second risk level, and the corresponding interventional risk value is the second interventional risk value; when B1 < L2 ≤ C1, the risk level is the third risk level, and the corresponding interventional risk value is the third interventional risk value; when L2 > C1, the risk level and interventional risk value are not considered
  • M1 > A1, N1 > B1, P1 > C1. This is because the blood vessels in the target organ are usually close to the interventional path when they participate in the distance calculation, so the control of the distance between the blood vessels and the interventional path is relatively strict; the inaccessible area is farther from the interventional path, so the control of the distance between the inaccessible area and the interventional path is relatively loose
  • for example, when the distance between the blood vessels in the target organ and the interventional path and the distance between the non-interventional area and the interventional path are both 5 mm
  • a distance of 5 mm is more acceptable for the non-interventional area, so its interventional risk value is 3 points
  • for the blood vessels in the target organ, the interventional risk at 5 mm is greater, and the interventional risk value is 5 points
  • the risk level of the non-interventional area and the calculation method of the interventional risk value are the same as the calculation method of the non-interventional area when passing through the target organ, and will not be described again here.
  • the total risk value of at least one interventional path can be calculated; the interventional path with the smallest total risk value can be determined as the optimal path, because the smaller the total risk value, the smaller the risk.
  • the intervention risk values of multiple intervention paths can be accumulated to obtain a total risk value.
  • the intervention path is planned using the optimal path with the smallest total risk value.
  • since the fast segmentation mode does not need to segment all the organs and tissues in the scene, it only needs to segment the inaccessible areas and, through registration, extract the positions of the blood vessels (and lesions) in the target organ that are not obvious in the intra-operative scan images. When planning the interventional path, it is only necessary to bypass the non-interventional areas and let the interventional path reach the lesion directly, which helps to improve the efficiency of interventional planning and interventional surgery
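The fast-mode piecewise risk scoring above can be sketched as follows; the threshold triple (M, N, P) and the point values passed in the example are hypothetical, chosen only to reproduce the document's 5 mm example (vessel thresholds stricter than no-go-area thresholds, 5 points vs 3 points), and the boundary handling (≤ vs <) is assumed since the original is ambiguous:

```python
def fast_mode_risk(distance, thresholds, risk_values):
    """Piecewise interventional risk scoring in fast planning mode.
    thresholds = (m, n, p) with m < n < p; risk_values = (r1, r2, r3) for
    the first (highest), second, and third risk levels. Beyond p the
    element's risk is not considered (score 0)."""
    m, n, p = thresholds
    r1, r2, r3 = risk_values
    if distance <= 0:
        raise ValueError("path intersects the element")
    if distance <= m:
        return r1
    if distance <= n:
        return r2
    if distance <= p:
        return r3
    return 0
```

With hypothetical vessel thresholds (7, 12, 20) mm and no-go thresholds (3, 8, 15) mm, a 5 mm clearance scores 5 points against a vessel but only 3 points against a non-interventional area, matching the example above.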
  • Figure 19 is an exemplary flow chart for determining intervention risk values of at least some elements in the third target structure set in the precise planning mode shown in some embodiments of this specification.
  • the process 800 of determining intervention risk values of at least some elements in the third target structure set in precise planning mode may include the following steps:
  • Step 810 Determine the risk level of the element based on the shortest distance between the element and the intervention path;
  • Step 820 Determine the intervention risk value of the element according to the risk level
  • Step 830 Determine different priorities based on the preset rules of the elements, and set corresponding preset weights of intervention risk values.
  • elements in process 800 may include blood vessels within the target organ and all vital organs. Specifically, when the interventional path passes through the target organ in the third target structure set, the elements in the process 800 may be the blood vessels in the target organ and all important organs. When the interventional path does not pass through the target organ in the third target structure set, the elements in the process 800 may be all important organs. Preset rules can be used to characterize the non-interventional importance of different elements for planning intervention paths. For example, in a certain planned interventional path, the blood vessels in the target organ and each of the important organs have different non-interventional importance for the planned path. Different elements have different priorities under the preset rules. In some embodiments, the priority of each element may be determined according to the element's preset rules.
  • step 830, in which different priorities are determined based on the preset rules of the elements and corresponding preset weights of the intervention risk values are set, can be implemented in the following manner: based on the different priorities of the elements, set the corresponding preset weight for each intervention risk value
  • different priorities of the segmented regions can be determined according to the non-intervention importance of the segmented regions (ie, elements). For example, segmented regions that must not be involved, such as blood vessels and important organs, are set to higher priorities.
  • elements of different priorities may be assigned different preset weights. In some embodiments, the higher the priority, the larger the corresponding preset weight, and the lower the priority, the smaller the corresponding preset weight.
  • the preset weight can be represented by W, with W ∈ {1, 0.8, 0.6}
  • for elements of the highest priority, a larger preset weight can be set (for example, W is 1)
  • for elements of intermediate priority, an intermediate preset weight can be set (for example, W is 0.8)
  • for elements of lower priority, a smaller preset weight can be set (for example, W is 0.6)
  • the risk level and interventional risk value of the blood vessels in the target organ and all important organs need to be calculated.
  • the calculation method of the risk level of blood vessels in the target organ and the interventional risk value is the same as that in the fast mode, and will not be described again here.
  • the risk level of important organs and the calculation method of the interventional risk value can be as follows: determine the risk level and interventional risk value of the segmented organ area according to the shortest distance between the interventional path and the adjacent important organ (that is, the distance between the needle track and the point of the adjacent organ closest to the needle track)
  • the risk level and interventional risk value of the segmented organ region may be determined based on the shortest distance between the needle track and its adjacent non-penetrable organ and whether it is within a set threshold range.
  • the set threshold range can be determined by setting multiple constant thresholds, such as X, Y and Z, where X < Y < Z; the shortest distance between the interventional path and its adjacent non-penetrable organ can be expressed as L3, and the interventional risk value can be expressed as R
  • when L3 = 0, the planned interventional path immediately fails, and there is no need to evaluate the interventional risk value; when 0 < L3 ≤ X, the risk level is higher, and the interventional risk value R is set as a; when X < L3 ≤ Y, R is set as b; when Y < L3 ≤ Z, R is set as c; when L3 > Z, this risk can be ignored and the interventional risk value is set to 0, where a > b > c
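A sketch of the important-organ risk rule just described (X, Y, Z and a > b > c are the document's own symbols; the concrete values used in the example are hypothetical, and the reconstructed middle brackets for b and c are an assumption since the original passage is garbled):

```python
def organ_risk(l3, x, y, z, a, b, c):
    """Risk value R for an adjacent impenetrable organ in precise planning
    mode, given the shortest needle-track distance l3 and thresholds
    x < y < z with risk values a > b > c. A path that touches the organ
    (l3 == 0) fails outright, signalled here by returning None."""
    if l3 <= 0:
        return None        # planned path immediately fails
    if l3 <= x:
        return a
    if l3 <= y:
        return b
    if l3 <= z:
        return c
    return 0               # far enough away: risk ignored
```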
  • the corresponding priority can be determined based on the preset rules of the blood vessels in the target organ and all important organs, and different weights can be given to the interventional risk values of blood vessels and important organs of different priorities
  • when the interventional path does not pass through the target organ, only the risk level and interventional risk value of the important organs need to be calculated
  • the risk level of the vital organ and the interventional risk value are calculated in the same way as when it passes through the target organ, and will not be described again here.
  • corresponding priorities can be determined based on the preset rules for all important organs, and different weights can be given to the interventional risk values of important organs of different priorities
  • planning the intervention path based on the intervention risk value can be implemented in the following manner: calculating the weighted risk value of at least one intervention path; determining the intervention path with the smallest weighted risk value as the optimal path.
  • the weighted risk value can be obtained by weighted calculation of the interventional risk values of multiple interventional paths.
  • the interventional path is planned using the optimal path with the smallest weighted risk value.
  • when the weighted risk value F is smaller, it means that the needle path is farther away from important organs and blood vessels, and the risk is smaller
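The weighted risk value F and the optimal-path selection might be computed as in this sketch (the per-path dictionary layout is an assumption for illustration):

```python
def weighted_risk(risk_values, weights):
    """Weighted total risk F for one candidate path: each element's
    interventional risk value is scaled by its priority weight W."""
    return sum(w * r for w, r in zip(weights, risk_values))

def best_path(paths):
    """Select the candidate path whose weighted risk F is smallest.
    Each path is a dict with 'risks' and matching 'weights' lists."""
    return min(paths, key=lambda p: weighted_risk(p["risks"], p["weights"]))
```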
  • the outlines of the target organ and other tissues in the scene can be used to set the priority of intervention operations (such as punctures, including the choice of needle insertion points) according to the interventional risk value
  • the planned path to the lesion area can thereby avoid high-priority non-interventional areas (such as blood vessels, important organs, etc.) and obtain a potential needle insertion space, improving the efficiency of interventional planning and interventional surgery
  • Figure 20 is an exemplary flow diagram of an image anomaly detection process 900 illustrated in some embodiments of this specification.
  • image anomaly detection process 900 may include the following steps:
  • Step 910 Obtain scanned images during surgery
  • Step 920 detect image abnormalities in the scanned images during surgery
  • Step 930 Based on the detected image anomaly, determine the corresponding image anomaly type
  • Step 940 Determine whether to perform quantitative calculation based on the image abnormality type
  • Step 950 Determine the degree of image abnormality based on the judgment result of whether to perform quantitative calculation.
  • image anomalies may include non-compliant portions of the image data that indicate complications
  • complications may include bleeding, pneumothorax, effusion, etc.
  • a generative adversarial network built with deep learning can be used to detect image anomalies; the anomalies are detected through comparison with the normal data used during modeling
  • at least one of threshold processing, image segmentation, and other methods may be used to detect image anomalies.
  • threshold processing can be implemented in the following manner: since different complications produce different feedback in the image, the distribution ranges of pixel values for pneumothorax, hemorrhage, effusion, etc. differ, so a pixel threshold can be set to distinguish which complication the pixel values of the abnormal area belong to
  • image segmentation can be implemented in the following manner: after an image abnormality is acquired, deep learning is used to segment the abnormality and classify the pixels in the abnormal area to determine which complication, if any, it belongs to. If there is no complication, the surgical process can continue; otherwise, complications such as bleeding, effusion, and pneumothorax can be quickly identified and judged
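The pixel-threshold idea for distinguishing complications could be sketched as follows; the per-complication value ranges are purely illustrative placeholders, not clinical thresholds from the patent:

```python
import numpy as np

def classify_abnormal_region(pixels, ranges):
    """Assign a detected abnormal region to a complication by comparing
    its mean pixel value against per-complication value intervals
    (e.g. air-like values for pneumothorax vs. blood-like values)."""
    mean_value = float(np.mean(pixels))
    for name, (low, high) in ranges.items():
        if low <= mean_value <= high:
            return name
    return "unknown"
```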
  • the surgical procedures are different when the types of image abnormalities are different.
  • the image abnormality type is pneumothorax
  • an alarm can be issued to the operator and the surgical process is ended
  • the image abnormality type is bleeding or effusion
  • the amount of bleeding or effusion can be quantitatively calculated, and based on the results of the quantitative calculation, it can be determined whether the surgical process should be continued or ended.
  • the corresponding bleeding volume or fluid collection volume of the bleeding or fluid collection area can be calculated based on the image area ratio.
  • it can be determined whether the amount of bleeding or fluid accumulation exceeds a preset threshold (for example, a preset blood volume threshold or a preset fluid volume threshold)
  • the degree of image abnormality is determined from the result of the quantitative calculation. For example, when the amount of bleeding or fluid accumulation exceeds the preset threshold, the degree of image abnormality can be determined to be high; when it does not exceed the preset threshold, the degree can be determined to be low
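Quantifying the bleeding/effusion volume from the segmented abnormal region and comparing it to the preset threshold might look like this; estimating volume as voxel count times voxel volume is an assumption about how the "image area ratio" calculation could be realized:

```python
import numpy as np

def bleeding_volume_ml(abnormal_mask, voxel_volume_mm3):
    """Estimate bleeding / effusion volume from a boolean segmentation mask:
    voxel count times per-voxel volume, converted from mm^3 to millilitres."""
    return float(abnormal_mask.sum()) * voxel_volume_mm3 / 1000.0

def abnormality_degree(volume_ml, threshold_ml):
    """High degree of image abnormality when the quantified volume exceeds
    the preset threshold; low otherwise."""
    return "high" if volume_ml > threshold_ml else "low"
```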
  • corresponding alarm prompts for the image abnormality degree may be provided. For example, when the image abnormality type is pneumothorax, the operator can be prompted to stop intervention. In some embodiments, when the image abnormality type is hemorrhage or effusion, different alarm prompts may be provided according to the degree of image abnormality of the hemorrhage or effusion. For example, when the degree of image abnormality is high, the operator can be prompted to stop intervention. For another example, when the degree of image abnormality is low, the operator can be prompted to intervene and continue to observe the information.
  • Figure 21 is an exemplary flow diagram of a postoperative assessment process illustrated in some embodiments of the present specification.
  • the post-operative evaluation process flow 1000 may include the following steps:
  • Step 1010 register the planned intervention path and the actual intervention path to the intra-operative scan image
  • Step 1020 determine the deviation between the actual intervention path and the planned intervention path
  • Step 1030 Determine whether the deviation intersects with a specific element in the third target structure set of the scanned image during surgery
  • Step 1040 Determine corresponding postoperative feedback information based on the judgment result.
  • the planned intervention path may be obtained based on pre-operative enhanced images and intra-operative scan images; the actual intervention path may be obtained based on post-operative scan images.
  • the post-operative scan image refers to an image of a scanning object (such as a patient, etc.) obtained by scanning with a medical scanning device after an interventional surgery.
  • the acquisition method of post-operative scan images please refer to the relevant descriptions in the previous section (for example, Figure 2), and will not be described again here.
  • Specific elements of the third target structure set of the intra-operative scan image may be the fourth target structure set. That is, in the fast planning mode, the specific elements refer to the inaccessible area; in the precise planning mode, the specific elements refer to all important external organs/tissues.
  • the planned intervention path and the actual intervention path can be registered to the intra-operative scan image, and registration calculations can be performed to obtain the registration deformation field.
  • the registered interventional path can be displayed, and the difference between the registered actual interventional path and the planned interventional path can be calculated. If there is a deviation between the actual interventional path and the planned interventional path, the deviating part is extracted and it is determined whether the deviation intersects the inaccessible areas or all important organs/tissues of the intra-operative scan images. If the intersection is not empty, the actual interventional path may have passed through inaccessible areas or all important organs/tissues, which may affect parenchymal organs
  • in that case, the corresponding postoperative feedback information is a reminder message to the clinician; if the intersection is empty, it is determined that the corresponding postoperative feedback is no reminder; if there is no difference between the actual interventional path and the planned interventional path, it is likewise determined that the corresponding postoperative feedback is no reminder
  • Figure 22 is an exemplary flow diagram of a postoperative assessment process illustrated in some embodiments of the present specification.
  • the lesion (the original lesion) and its surrounding organ areas are segmented, and the region of interest is extracted and registered, so that the positions of the preoperative lesion and the original lesion area in the postoperative scan image are matched and displayed together, facilitating the doctor's analysis of the surgical results
  • the region is extracted based on the segmentation results. On the one hand, the change in the lesion area after surgery can be calculated to evaluate the efficacy of the surgery; on the other hand, the area of the lesion can be calculated
  • image abnormality detection (that is, postoperative complication detection and identification) can also be performed
  • methods such as threshold segmentation and deep learning can be used to extract the path taken during the actual intervention, register it with the planned interventional path (i.e., needle track comparison), and determine whether there are changes, so as to assess the impact of any changes and achieve an accurate assessment
  • the deviation between the actual interventional path and the planned interventional path is determined, together with the intersection between the deviation and the specific elements in the third target structure set of the intra-operative scan image (i.e., the fourth target structure set); if the intersection is an empty set, no reminder is given; if the intersection is not an empty set, the path passes through the fourth target structure set, and a reminder message can be sent to the clinician
  • Figure 23 is a flowchart of an exemplary interventional procedure guidance method according to some embodiments of the present specification.
  • process 2300 may be performed by processing device 140.
  • the process 2300 may be stored in a storage device (eg, the storage device 150, a storage unit of the processing device 140) in the form of a program or instructions, and when the processor executes the program or instructions, the process 2300 may be implemented.
  • process 2300 may utilize one or more additional operations not described below, and/or be completed without one or more operations discussed below.
  • Step 2310 Collect the first medical image, the second medical image, and the third medical image of the target object at different times.
  • the processing device may collect the first medical image, the second medical image, and the third medical image of the target object at different times through the medical scanning device. In some embodiments, the processing device may obtain the first medical image, the second medical image, and the third medical image of the target object from the medical scanning device 110, the storage device 150, the storage unit of the processing device 140, and the like.
  • the first medical image, the second medical image, and the third medical image may be acquired by a computed tomography (CT) device.
  • the first medical image may be a preoperative enhanced image or a preoperative plain scan image.
  • the first medical image may be acquired prior to the interventional procedure.
  • the period before the interventional surgery can be a certain period of time before the interventional surgery, for example, one hour before, two hours before, five hours before, one day before, two days before, one week before, etc.
  • the first medical image may be collected when the target subject is first diagnosed, during a routine physical examination, or after the previous interventional surgery.
  • the second medical image may be an intraoperative real-time image.
  • the second medical image is acquired during the interventional procedure and before the puncture is performed, i.e., during the preparation time before inserting the needle
  • the second medical image can be collected at positioning time, disinfection time, local anesthesia time, etc.
  • the second medical image may be the first frame of real-time image during surgery.
  • the third medical image may be a real-time image during interventional surgery.
  • the third medical image is acquired during performance of the puncture.
  • the puncture execution process refers to the process of inserting a needle from the skin, entering the target area according to the puncture path, completing the operation in the target area, and removing the needle.
  • the first medical image, the second medical image, and the third medical image may be collected by different imaging devices.
  • the first medical image can be collected through the imaging equipment in the imaging room
  • the second medical image and the third medical image can be collected through the imaging equipment in the operating room.
  • image parameters (e.g., image range, precision, contrast, grayscale, gradient, etc.) may differ between the images
  • the scanning range of the first medical image may be larger than the scanning range of the second medical image and the third medical image, or the accuracy of the second medical image and the third medical image may be higher than that of the first medical image.
  • Step 2320 Register the first medical image and the second medical image to obtain a fourth medical image.
  • the fourth medical image may include registered interventional surgery planning information.
  • the first medical image is collected before the interventional surgery, and there is relatively sufficient time for acquisition and image processing.
  • the first medical image has a larger scan range and thicker slices, e.g., a large number of slices covering all relevant tissues and/or organs. Planning the puncture path on the first medical image, which contains more comprehensive information, helps improve the accuracy of subsequent interventional surgical guidance
  • the second medical image is collected during the interventional surgery and before the puncture is performed.
  • the time for acquisition and image processing is relatively tight.
  • the second medical image has a small scan range and thin slices, for example covering only 4 to 10 slices around the needle tip. It can be understood that the fourth medical image obtained by registering the first medical image and the second medical image includes the registered puncture planning information
  • Step 2330 Map the fourth medical image to the third medical image to guide the interventional surgery.
  • step 2330 please refer to the relevant description in Figure 2 and will not be described again here.
  • Figure 24 is a schematic diagram of an exemplary interventional procedure guidance method according to some embodiments of the present specification.
  • The processing device monitors the target object's respiration via a respiratory gating device.
  • The respiratory gating device can obtain the breathing amplitude point A at which the target object is located.
  • The respiratory gating device can monitor the breathing of the target object and enable the medical scanning device to collect the second medical image when the target object is at breathing amplitude point A'.
  • The processing device processes the first medical image to obtain the puncture planning information image, and obtains the first deformation information through the first registration.
  • The processing device applies the first deformation information to the puncture planning information image to obtain a fourth medical image.
  • The fourth medical image includes the puncture planning information after the first registration.
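Applying the first deformation information to the puncture planning information image, as described above, can be sketched as a per-pixel displacement warp. The following is a minimal illustrative sketch in Python; the function name, the nearest-neighbor pull scheme, and the toy 2-D field are assumptions for illustration, not the actual implementation in this specification.

```python
# Hypothetical sketch: warp a planning-information image by a per-pixel
# displacement field (the "first deformation information"). A real pipeline
# would operate on 3-D CT volumes with interpolation.

def apply_deformation(image, dx, dy, fill=0):
    """Warp `image` by the displacement field (dx, dy).

    image, dx, dy: 2-D lists of equal shape. Each output pixel (r, c) pulls
    its value from (r + dy[r][c], c + dx[r][c]) in the source, rounded to
    the nearest pixel; out-of-bounds pixels receive `fill`.
    """
    rows, cols = len(image), len(image[0])
    out = [[fill] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            sr = round(r + dy[r][c])
            sc = round(c + dx[r][c])
            if 0 <= sr < rows and 0 <= sc < cols:
                out[r][c] = image[sr][sc]
    return out

# Example: a planning label image whose annotation shifts one pixel left.
plan = [[0, 0, 0],
        [0, 7, 0],   # 7 marks a planned-path voxel
        [0, 0, 0]]
dx = [[1, 1, 1] for _ in range(3)]  # pull from one column right: content moves left
dy = [[0, 0, 0] for _ in range(3)]
warped = apply_deformation(plan, dx, dy)
```

With a zero field the image is unchanged; the warped planning annotations are what the fourth medical image would carry.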
  • When collecting the third medical image, the target object can control its breathing (for example, by holding its breath) to the same or a similar breathing amplitude.
  • The processing device may monitor the target object's breathing amplitude through a respiratory gating device.
  • The medical scanning device collects the third medical image, and the processing device maps the fourth medical image to the third medical image to guide the interventional surgery. If the respiratory gating device detects an obvious deviation in breathing amplitude, the processing device can issue a prompt and/or interrupt the puncture; when the target object adjusts back to the same or a similar breathing amplitude, the puncture process continues.
  • By using the respiratory gating device to collect the first, second, and third medical images at the same or approximately the same breathing amplitude point, the movement of organs and tissues between images caused by respiratory motion can be reduced, which helps improve the accuracy of preoperative planning.
  • The real-time images display the status of the patient's puncture site in real time, and the planning results and detailed information from before the interventional surgery are also introduced, to avoid high-risk areas and reduce surgical risks.
  • The intraoperative real-time puncture process keeps the position of the puncture needle tip in the center of the field of view.
  • The puncture needle punctures toward the lesion along the planned puncture path.
  • The CT bed or the detector moves to update the scanning range and obtain real-time scan images, providing guidance for the puncture process, improving surgical efficiency and reducing surgical risks.
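The respiratory gate described above can be sketched as a simple amplitude check: acquisition (and puncture) proceeds only while the current amplitude stays within a preset deviation of the reference amplitude point, otherwise the system prompts and pauses. The relative-deviation formula and the 10% default below are illustrative assumptions, not values fixed by this specification.

```python
# Hypothetical sketch of the breathing-amplitude gate used when collecting
# the first/second/third medical images at (approximately) the same point.

def within_gate(current, reference, max_dev=0.10):
    """True if `current` deviates from `reference` by at most `max_dev`
    (expressed as a fraction of the reference amplitude)."""
    if reference == 0:
        return current == 0
    return abs(current - reference) / abs(reference) <= max_dev

def gate_action(current, reference, max_dev=0.10):
    """Decide what the guidance loop should do at this breathing sample."""
    if within_gate(current, reference, max_dev):
        return "proceed"
    return "pause_and_prompt"   # prompt the patient / interrupt the puncture
```

In use, `reference` would be the amplitude point A recorded with the first image, and `current` the live amplitude reported by the gating device.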
  • Figure 25 is another schematic diagram of an exemplary interventional surgery guidance method according to other embodiments of the present specification.
  • The processing device may monitor the target object's respiration without the aid of a respiratory gating device. As shown in Figure 25, the processing device collects the first, second, and third medical images of the target object at different times. The processing device processes the first medical image to obtain the puncture planning information image and obtains the first deformation information through the first registration. The processing device applies the first deformation information to the puncture planning information image to obtain a fourth medical image.
  • The fourth medical image includes the puncture planning information after the first registration.
  • The processing device performs a second registration of the second and third medical images to obtain second deformation information, and applies the second deformation information to the fourth medical image to obtain a fifth image.
  • The fifth image contains the puncture planning information after the second registration.
  • The processing device maps the fifth image to the third medical image to guide the interventional procedure.
  • Since the second and third medical images are both few-slice data, the computational load of the second registration is small, and the registration can be completed within a short time after the third medical image is acquired during the operation, reducing surgical risks.
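The two-stage flow above can be sketched end to end: the expensive first registration runs preoperatively (planning image to fourth image), and only the cheap second registration over few-slice data runs intraoperatively (fourth image to fifth image). In this sketch, `register` is a stand-in returning a constant one-pixel shift rather than a real Demons registration; all names and shapes are assumptions for illustration.

```python
# Hypothetical sketch of the first/second registration pipeline.

def register(moving, fixed):
    # Placeholder: a real implementation would estimate a dense deformation
    # field (e.g., a Demons-based non-rigid registration); here we pretend
    # the images differ by a one-column shift.
    return {"dx": 1, "dy": 0}

def warp(image, field):
    """Pull-warp a 2-D list by an integer shift field."""
    rows, cols = len(image), len(image[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            sr, sc = r + field["dy"], c + field["dx"]
            if 0 <= sr < rows and 0 <= sc < cols:
                out[r][c] = image[sr][sc]
    return out

def guide(plan_image, image1, image2, image3):
    field1 = register(image1, image2)   # first registration (preoperative)
    image4 = warp(plan_image, field1)   # fourth image: registered planning info
    field2 = register(image2, image3)   # second registration (fast, few slices)
    image5 = warp(image4, field2)       # fifth image: re-registered planning info
    return image5
```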
  • Embodiments of the present specification also provide a surgical robot, including: a robotic arm to perform interventional surgery; and a control system, the control system including one or more processors and a memory, the memory including operation instructions adapted to cause the one or more processors to execute the following steps: collecting the first medical image, the second medical image, and the third medical image of the target object at different times; registering the first medical image with the second medical image to obtain a fourth medical image, which contains the registered puncture planning information; and mapping the fourth medical image to the third medical image to guide the interventional surgery.
  • Embodiments of the present specification also provide a surgical robot, including: a robotic arm to perform interventional surgery; and a control system, the control system including one or more processors and a memory, the memory including operation instructions adapted to cause the one or more processors to execute the following steps: collecting the first medical image, the second medical image, and the third medical image of the target object at different times; performing a first registration of the first medical image with the second medical image to obtain first deformation information and a fourth medical image, the fourth medical image including the registered puncture planning information; performing a second registration of the second medical image with the third medical image to obtain second deformation information; applying the second deformation information to the fourth medical image to obtain a fifth image, which contains the puncture planning information after the second registration; and mapping the fifth image to the third medical image to guide the interventional surgery.
  • Figure 26 is an exemplary module diagram of a medical image processing system for interventional surgery according to some embodiments of this specification.
  • The system 2600 may include an acquisition module 2610, a registration module 2620, and a risk assessment module 2630.
  • The acquisition module 2610 is used to acquire the first medical image of the target object before the interventional surgery and the second medical image of the target object during the interventional surgery.
  • The registration module 2620 is used to register the second medical image with the first medical image to obtain a registration result.
  • The risk assessment module 2630 is configured to determine the interventional surgery planning information of the target object based at least on the registration result, perform an interventional surgery risk assessment based on the interventional surgery planning information, and obtain a risk assessment result corresponding to the interventional surgery planning information.
  • The medical image processing system 2600 for interventional surgery may include one or more other modules.
  • The medical image processing system 2600 for interventional surgery may include a storage module to store data generated by its modules.
  • The acquisition module 2610, the registration module 2620, and the risk assessment module 2630 in Figure 26 can be different modules in one system, or one module can implement the functions of two or more of the above modules.
  • Each module can share a storage module, or each module can have its own storage module.
  • The features, structures, methods, and other characteristics of the exemplary embodiments described in this specification may be combined in various ways to obtain additional and/or alternative exemplary embodiments.
  • For example, the processing device 140 and the medical scanning device 110 may be integrated into a single device. Such variations are within the scope of this specification.
  • Some embodiments of this specification also provide a medical image processing device for interventional surgery, including a processor.
  • The processor is configured to execute the medical image processing method for interventional surgery described in any of the embodiments. For details, see the descriptions of Figures 1 to 25, which will not be repeated here.
  • Some embodiments of this specification also provide a computer-readable storage medium storing computer instructions.
  • When a computer reads the computer instructions, the computer executes the medical image processing method for interventional surgery described in any of the above embodiments. For details, refer to the relevant descriptions of Figures 1 to 25, which will not be repeated here.
  • Adaptive sliding-window calculation and the corresponding sliding-window operation can be used to complete the missing parts of the positioning area, and a reasonable sliding-window operation can be automatically planned and executed. This reduces the dependence of the fine segmentation stage on the coarse positioning results and improves segmentation accuracy without significantly increasing segmentation time or computing resources.
  • The element mask can be accurately positioned based on the preset positioning coordinates of the element, which not only improves segmentation accuracy but also reduces segmentation time and the amount of segmentation computation, further improving segmentation efficiency.
  • Two fully automatic modes are provided for interventional surgery planning. In interventional (e.g., puncture) path planning, because a reasonable path can be planned, high-priority blood vessels and important organs can be avoided and potential needle-insertion space can be obtained, improving the efficiency of interventional planning and interventional surgery. In the rapid planning mode, the rapid segmentation only needs to bypass the non-interventional areas so that the interventional path goes directly to the lesion, which also helps improve the efficiency of interventional planning and interventional surgery.
  • The workflow can automatically plan the optimal interventional path efficiently and accurately, analyze the risks of the interventional path, provide good preoperative planning guidance for interventional surgery, and provide real-time intraoperative detection and identification of complications, further improving the safety of the interventional process. The workflow also implements a postoperative evaluation function, which can assist the operator in accurately evaluating the surgical process and results, improving surgical efficiency and safety.
  • Numbers are used to describe quantities of components and properties. It should be understood that such numbers used to describe the embodiments are, in some examples, qualified by the modifiers "about", "approximately", or "substantially". Unless otherwise stated, "about", "approximately", or "substantially" means that the stated number is allowed to vary by ±20%. Accordingly, in some embodiments, the numerical parameters used in the specification and claims are approximations that may vary depending on the desired features of the individual embodiment. In some embodiments, numerical parameters should account for the specified number of significant digits and use general digit-preservation methods. Although the numerical ranges and parameters used to define the breadth of ranges in some embodiments of this specification are approximations, in specific embodiments such numerical values are set as accurately as is feasible.


Abstract

Embodiments of this specification provide a medical image processing system and method for interventional surgery. The system includes a control system comprising one or more processors and a memory, the memory containing operation instructions that cause the one or more processors to perform the following steps: acquiring a first medical image of a target object before an interventional surgery and a second medical image of the target object during the interventional surgery; registering the second medical image with the first medical image to obtain a registration result; determining interventional surgery planning information of the target object based at least on the registration result, and performing an interventional surgery risk assessment based on the interventional surgery planning information to obtain a risk assessment result corresponding to the interventional surgery planning information.

Description

A medical image processing system and method for interventional surgery

Cross-Reference

This specification claims priority to Chinese Application No. 202210493274.3, filed on May 7, 2022, and Chinese Application No. 202210764281.2, filed on June 30, 2022, the entire contents of which are incorporated herein by reference.

Technical Field

This specification relates to the field of image processing technology, and in particular to a medical image processing method, system, and device for interventional surgery, and a computer storage medium.

Background

CT (Computed Tomography)-guided percutaneous interventional surgery is currently the most common clinical method for cancer diagnosis and treatment. Under real-time CT scanning, the doctor controls a robot in master-slave mode to perform the puncture, which greatly improves puncture efficiency and accuracy and reduces the patient's radiation dose. How to assist doctors in better controlling robots to perform guided percutaneous interventional surgery is a problem that urgently needs to be solved.

Therefore, this specification provides a medical image processing system and method for interventional surgery to improve the efficiency of guided percutaneous interventional surgery.
Summary

One embodiment of this specification provides a medical image processing system for interventional surgery. The system includes a control system comprising one or more processors and a memory, the memory containing operation instructions that cause the one or more processors to perform the following steps: acquiring a first medical image of a target object before an interventional surgery and a second medical image of the target object during the interventional surgery; registering the second medical image with the first medical image to obtain a registration result; determining interventional surgery planning information of the target object based at least on the registration result, and performing an interventional surgery risk assessment based on the interventional surgery planning information to obtain a risk assessment result corresponding to the interventional surgery planning information.

One embodiment of this specification provides a medical image processing method for interventional surgery. The method includes: acquiring a first medical image of a target object before an interventional surgery and a second medical image of the target object during the interventional surgery; registering the second medical image with the first medical image to obtain a registration result; determining interventional surgery planning information of the target object based at least on the registration result, and performing an interventional surgery risk assessment based on the interventional surgery planning information to obtain a risk assessment result corresponding to the interventional surgery planning information.

One embodiment of this specification provides a medical image processing method for interventional surgery. The method includes: acquiring a mode for planning an interventional path; acquiring a pre-surgery enhanced image; segmenting a first target structure set of the pre-surgery enhanced image to obtain a first medical image of the first target structure set; acquiring an intra-surgery scan image; segmenting a second target structure set of the intra-surgery scan image to obtain a second medical image of the second target structure set, the first target structure set and the second target structure set having an intersection; registering the first medical image with the second medical image to determine the intra-surgery spatial positions of a third target structure set, the selection of elements of the third target structure set being based on the mode for planning the interventional path; and planning an interventional path based on the intra-surgery spatial positions of the third target structure set and performing a risk assessment based on the interventional path; wherein at least one element of the third target structure set is included in the first target structure set, and at least one element of the third target structure set is not included in the second target structure set.

One embodiment of this specification provides an interventional surgery guidance system. The system includes a control system comprising one or more processors and a memory, the memory containing operation instructions that cause the one or more processors to perform the following steps: acquiring a first medical image, a second medical image, and a third medical image of a target object at different times; registering the first medical image with the second medical image to obtain a fourth medical image, the fourth medical image containing registered interventional surgery planning information; and mapping the fourth medical image to the third medical image to guide the interventional surgery.

One embodiment of this specification provides a medical image processing device for interventional surgery, including a processor configured to execute the medical image processing method for interventional surgery described in any of the embodiments.

One embodiment of this specification provides a computer-readable storage medium storing computer instructions. When a computer reads the computer instructions in the storage medium, the computer executes the medical image processing method for interventional surgery described in any of the embodiments.
Brief Description of the Drawings

This specification is further described by way of exemplary embodiments, which are described in detail with reference to the accompanying drawings. These embodiments are not limiting; in these embodiments, the same reference numerals denote the same structures, wherein:

Figure 1 is an exemplary schematic diagram of an application scenario of a medical image processing system for interventional surgery according to some embodiments of this specification;

Figure 2 is an exemplary flowchart of a medical image processing method for interventional surgery according to some embodiments of this specification;

Figure 3 is an exemplary flowchart of guiding an interventional surgery according to some embodiments of this specification;

Figure 4 is an exemplary flowchart of a medical image processing method for interventional surgery according to some embodiments of this specification;

Figure 5 is an exemplary flowchart of the segmentation process involved in the medical image processing method for interventional surgery according to some embodiments of this specification;

Figure 6 is an exemplary flowchart of the process of determining the positioning information of an element mask according to some embodiments of this specification;

Figure 7 is an exemplary flowchart of the soft connected-component analysis process for an element mask according to some embodiments of this specification;

Figure 8 is an exemplary comparison of coarse segmentation results with soft connected-component analysis of an element mask according to some embodiments of this specification;

Figure 9 is an exemplary flowchart of the precise segmentation process for an element according to some embodiments of this specification;

Figure 10 is an exemplary schematic diagram of judging the positioning information of an element mask according to some embodiments of this specification;

Figure 11 is an exemplary schematic diagram of judging the positioning information of an element mask according to some embodiments of this specification;

Figure 12A is an exemplary schematic diagram of determining the sliding direction based on the positioning information of an element mask according to some embodiments of this specification;

Figures 12B-12E are exemplary schematic diagrams of precise segmentation after window sliding according to some embodiments of this specification;

Figure 13 is an exemplary comparison of segmentation results according to some embodiments of this specification;

Figure 14 is an exemplary flowchart of the process of registering the first medical image with the second medical image according to some embodiments of this specification;

Figure 15 is an exemplary flowchart of the process of determining a registration deformation field according to some embodiments of this specification;

Figure 16 is an exemplary flowchart of the process of determining a registration deformation field according to some embodiments of this specification;

Figure 17 is an exemplary schematic diagram of the first medical image and the second medical image obtained by segmentation according to some embodiments of this specification;

Figure 18 is an exemplary flowchart of determining the interventional risk values of at least some elements of the third target structure set in the rapid planning mode according to some embodiments of this specification;

Figure 19 is an exemplary flowchart of determining the interventional risk values of at least some elements of the third target structure set in the precise planning mode according to some embodiments of this specification;

Figure 20 is an exemplary flowchart of the image anomaly detection process according to some embodiments of this specification;

Figure 21 is an exemplary flowchart of the postoperative evaluation process according to some embodiments of this specification;

Figure 22 is an exemplary flowchart of the postoperative evaluation process according to some embodiments of this specification;

Figure 23 is a flowchart of an exemplary interventional surgery guidance method according to some embodiments of this specification;

Figure 24 is a schematic diagram of an exemplary interventional surgery guidance method according to some embodiments of this specification;

Figure 25 is another schematic diagram of an exemplary interventional surgery guidance method according to other embodiments of this specification;

Figure 26 is an exemplary module diagram of a medical image processing system for interventional surgery according to some embodiments of this specification;

Figure 27 is a schematic diagram of an exemplary puncture surgery guidance user interface according to some embodiments of this specification.
Detailed Description

To more clearly illustrate the technical solutions of the embodiments of this specification, the accompanying drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some examples or embodiments of this specification, and those of ordinary skill in the art can apply this specification to other similar scenarios based on these drawings without creative effort. Unless obvious from the context or otherwise stated, the same reference numerals in the figures denote the same structures or operations.

It should be understood that the terms "system", "device", "unit" and/or "module" used herein are a way of distinguishing different components, elements, parts, portions, or assemblies at different levels. However, these words may be replaced by other expressions if they serve the same purpose.

As shown in this specification and the claims, unless the context clearly indicates otherwise, the words "a", "an", "one" and/or "the" do not specifically refer to the singular and may also include the plural. In general, the terms "comprise" and "include" only indicate that the explicitly identified steps and elements are included, and these steps and elements do not constitute an exclusive list; a method or device may also include other steps or elements.

Flowcharts are used in this specification to illustrate the operations performed by the system according to the embodiments of this specification. It should be understood that the preceding or following operations are not necessarily performed exactly in order. Instead, the steps may be processed in reverse order or simultaneously. Meanwhile, other operations may be added to these processes, or one or more operations may be removed from them.

First, some terms or concepts involved in the embodiments of this specification are explained.

An interventional surgery, also called interventional therapy, is a minimally invasive treatment performed with modern high-tech means. Specifically, it is a medical technique in which specially made precision instruments such as catheters and guide wires are introduced into the human body under the guidance of a medical scanning device or medical imaging device to diagnose and locally treat pathological conditions in the body. In some embodiments of this specification, interventional surgery is also called puncture or puncture surgery, and the terms may be used interchangeably where no confusion arises.

Preoperative planning, short for pre-surgery planning of an interventional surgery, is a very important part of assisting interventional surgery. The accuracy of preoperative planning directly affects the accuracy of the interventional path during the surgery, and thus the quality of the surgical outcome.

The target object, which may also be called the scan object, may include the whole or a part of a biological object and/or a non-biological object involved in the scanning process. For example, the target object (scan object) may be living or non-living organic and/or inorganic matter, such as the head, ears and nose, oral cavity, neck, chest, abdomen, liver-gallbladder-pancreas-spleen, kidneys, spine, etc.

The implementation of interventional surgery is often very complex. To perform the surgery more smoothly, in the related art, CT (Computed Tomography)-guided percutaneous interventional surgery is used clinically for cancer diagnosis and treatment. Under real-time CT scanning, the doctor controls a robot in master-slave mode to perform the puncture, which greatly improves puncture efficiency and accuracy and reduces the patient's radiation dose. However, limited by radiation dose, imaging time, and other factors, the range of real-time CT scanning is small, which affects the real-time puncture field of view. If the lesion is large, or the needle entry point is far from the target point, or the user wishes to fully observe the state of the entire target organ during real-time puncture, the scanning range needs to be enlarged. Enlarging the real-time CT scanning range leads to excessively thick CT slices, making it impossible to identify detailed information inside the target organ; in particular, when the lesion is small, the real-time scan image may fail to show detailed information of the lesion.

In addition, in the related art, the accuracy of preoperative planning is insufficient, and the workflow implemented on the basis of inaccurate planning is relatively monotonous with poor risk avoidance, resulting in poor surgical outcomes. On the other hand, during intraoperative navigation in the related art, there is no puncture path guidance or display of the target point (lesion), and the computation of real-time simulation planning is overly complex and time-consuming, making it difficult to apply to clinical scenarios.

In view of this, the embodiments of this specification propose some improved methods to assist doctors in better performing interventional surgery.
Figure 1 is an exemplary schematic diagram of an application scenario of a medical image processing system for interventional surgery according to some embodiments of this specification.

In some embodiments, the medical image processing system 100 may be applied to various interventional surgeries/therapies. In some embodiments, the interventional surgery/therapy may include cardiovascular interventional surgery, tumor interventional surgery, obstetric and gynecological interventional surgery, musculoskeletal interventional surgery, or any other feasible interventional surgery, such as neuro-interventional surgery. In some embodiments, the interventional surgery/therapy may also include percutaneous puncture biopsy, coronary angiography, thrombolytic therapy, stent placement surgery, or any other feasible interventional surgery, such as ablation surgery.

As shown in Figure 1, the medical image processing system 100 may include a medical scanning device 110, a network 120, one or more terminals 130, a processing device 140, and a storage device 150. The connections between components in the medical image processing system 100 may be variable. For example, the medical scanning device 110 may be connected to the processing device 140 through the network 120. As another example, the medical scanning device 110 may be directly connected to the processing device 140, as indicated by the dashed bidirectional arrow connecting the two. As a further example, the storage device 150 may be connected to the processing device 140 directly or through the network 120. As an example, the terminal 130 may be directly connected to the processing device 140 (as shown by the dashed arrow connecting them), or connected to the processing device 140 through the network 120.

The medical scanning device 110 may be configured to scan the scan object using high-energy rays (such as X-rays, γ-rays, etc.) to collect scan data related to the scan object. The scan data may be used to generate one or more images of the scan object. In some embodiments, the medical scanning device 110 may include an ultrasound imaging (US) device, a computed tomography (CT) scanner, a digital radiography (DR) scanner (e.g., mobile digital radiography), a digital subtraction angiography (DSA) scanner, a dynamic spatial reconstruction (DSR) scanner, an X-ray microscopy scanner, a multi-modality scanner, etc., or a combination thereof. In some embodiments, the multi-modality scanner may include a computed tomography-positron emission tomography (CT-PET) scanner or a computed tomography-magnetic resonance imaging (CT-MRI) scanner. The scan object may be biological or non-biological. Merely by way of example, the scan object may include a patient, an artificial object (e.g., a phantom), etc. As another example, the scan object may include a specific part, organ, and/or tissue of a patient.

In some embodiments, the medical scanning device 110 may include a gantry 111, a detector 112, a detection region 113, a table 114, and a radiation source 115. The gantry 111 may support the detector 112 and the radiation source 115. The scan object may be placed on the table 114 for scanning. The radiation source 115 may emit radiation toward the scan object. The detector 112 may detect the radiation (e.g., X-rays) emitted from the radiation source 115. In some embodiments, the detector 112 may include one or more detector units. The detector units may include scintillation detectors (e.g., cesium iodide detectors), gas detectors, etc. The detector units may include single-row detectors and/or multi-row detectors.

The network 120 may include any suitable network capable of facilitating the exchange of information and/or data for the medical image processing system 100. In some embodiments, one or more components of the medical image processing system 100 (e.g., the medical scanning device 110, the terminal 130, the processing device 140, the storage device 150) may exchange information and/or data with each other through the network 120. For example, the processing device 140 may obtain image data from the medical scanning device 110 via the network 120. As another example, the processing device 140 may obtain user instructions from the terminal 130 via the network 120.

The network 120 may be and/or include a public network (e.g., the Internet), a private network (e.g., a local area network (LAN), a wide area network (WAN), etc.), a wired network (e.g., an Ethernet network), a wireless network (e.g., an 802.11 network, a Wi-Fi network, etc.), a cellular network (e.g., a Long Term Evolution (LTE) network), a frame relay network, a virtual private network ("VPN"), a satellite network, a telephone network, routers, hubs, switches, server computers, and/or any combination thereof. In some embodiments, the network 120 may include one or more network access points. For example, the network 120 may include wired and/or wireless network access points, such as base stations and/or Internet exchange points, through which one or more components of the medical image processing system 100 may connect to the network 120 to exchange data and/or information.

The terminal 130 may include a mobile device 131, a tablet computer 132, a laptop computer 133, etc., or any combination thereof. In some embodiments, the mobile device 131 may include a smart home device, a wearable device, a mobile device, a virtual reality device, an augmented reality device, etc., or any combination thereof. In some embodiments, the terminal 130 may be part of the processing device 140.

The processing device 140 may process data and/or information obtained from the medical scanning device 110, the terminal 130, and/or the storage device 150. For example, the processing device 140 may obtain data acquired by the medical scanning device 110, use the data for imaging to generate medical images (e.g., pre-surgery enhanced images, intra-surgery scan images), segment the medical images, and generate segmentation result data, such as a first segmented image (first medical image), a second segmented image (second medical image), the intra-surgery spatial positions of blood vessels and lesions, registration maps, etc. As another example, the processing device 140 may obtain medical images, planning mode data (e.g., precise planning mode data, rapid planning mode data), and/or scan protocols from the terminal 130. As a further example, the processing device 140 may obtain data acquired by the medical scanning device 110 (e.g., segmentation and registration results, interventional risk values, preset weights, weighted risk values, cumulative risk values, image anomaly types, image anomaly degrees, etc.), and process these data to generate interventional paths and/or prompt information.

In some embodiments, the processing device 140 may be a single server or a server group. The server group may be centralized or distributed. In some embodiments, the processing device 140 may be local or remote. For example, the processing device 140 may access information and/or data stored in the medical scanning device 110, the terminal 130, and/or the storage device 150 via the network 120. As another example, the processing device 140 may be directly connected to the medical scanning device 110, the terminal 130, and/or the storage device 150 to access stored information and/or data. In some embodiments, the processing device 140 may be implemented on a cloud platform.

The storage device 150 may store data, instructions, and/or any other information. In some embodiments, the storage device 150 may store data obtained from the medical scanning device 110, the terminal 130, and/or the processing device 140. For example, the storage device 150 may store medical image data obtained from the medical scanning device 110 (e.g., pre-surgery enhanced images, intra-surgery scan images, the first medical image, the second medical image, etc.) and/or positioning information data. As another example, the storage device 150 may store medical images and/or scan protocols input from the terminal 130. As a further example, the storage device 150 may store data generated by the processing device 140 (e.g., medical image data, organ mask data, positioning information data, precise segmentation result data, the intra-surgery spatial positions of blood vessels and lesions, registration maps, etc.). As yet another example, the storage device 150 may store data generated by the processing device 140 (e.g., segmentation and registration results, interventional risk values, preset weights, weighted risk values, cumulative risk values, image anomaly types, image anomaly degrees, interventional paths, and/or prompt information, etc.).

In some embodiments, the storage device 150 may store data and/or instructions that the processing device 140 may execute or use to perform the exemplary methods described in this specification. In some embodiments, the storage device 150 includes a mass storage device, a removable storage device, volatile read-write memory, read-only memory (ROM), etc., or any combination thereof. Exemplary mass storage devices may include magnetic disks, optical disks, solid-state drives, etc. In some embodiments, the storage device 150 may be implemented on a cloud platform.

In some embodiments, the storage device 150 may be connected to the network 120 to communicate with one or more other components of the medical image processing system 100 (e.g., the processing device 140, the terminal 130). One or more components of the medical image processing system 100 may access data or instructions stored in the storage device 150 via the network 120. In some embodiments, the storage device 150 may be directly connected to or communicate with one or more other components of the medical image processing system 100 (e.g., the processing device 140, the terminal 130). In some embodiments, the storage device 150 may be part of the processing device 140.

The description of the medical image processing system 100 is intended to be illustrative and not to limit the scope of this specification. Many alternatives, modifications, and variations will be apparent to those of ordinary skill in the art. It can be understood that, after understanding the principle of the system, those skilled in the art may arbitrarily combine the various modules or connect constituent subsystems with other modules without departing from this principle.
Figure 2 is an exemplary flowchart of a medical image processing method for interventional surgery according to some embodiments of this specification. In some embodiments, the process 200 may be implemented by the medical image processing system 100 for interventional surgery or a processing device (e.g., the processing device 140). In some embodiments, the medical image processing method for interventional surgery may be implemented by a control system included in the medical image processing system, the control system including one or more processors and a memory, the memory containing operation instructions that cause the one or more processors to execute the process 200.

Step 210: Acquire a first medical image of the target object before the interventional surgery and a second medical image of the target object during the interventional surgery.

The first medical image refers to a medical image of the target object obtained before the interventional surgery is performed. The second medical image refers to a medical image of the target object obtained while the interventional surgery is in progress.

In some embodiments, the first medical image and/or the second medical image may include a CT image, a PET-CT image, a US image, an MR image, etc.

In some embodiments, the first and second medical images may be acquired while the target object is at the same breathing amplitude point, or at similar breathing amplitude points that do not affect puncture accuracy. For example, the first medical image may be acquired before the interventional surgery while the target object is at a first breathing amplitude point, and the second medical image may be acquired during the interventional surgery and before the puncture is performed, while the target object is at a second breathing amplitude point, where the deviation between the second and first breathing amplitude points is smaller than a preset value. "During the interventional surgery and before the puncture is performed" may refer to the period during preparation for the interventional surgery in which the puncture needle has not yet started the puncture; this period may take whether the puncture needle has entered the target object's body as the critical time point.

Breathing amplitude is a physical quantity reflecting the change of air volume during respiration. A breathing amplitude point refers to a time point at a certain breathing amplitude, for example, end-inspiration, end-expiration, an intermediate state of inhalation, or an intermediate state of exhalation. In some embodiments, the breathing amplitude point at which images (e.g., the first and second medical images) are acquired may be determined according to needs, experience, and/or user habits. For example, when performing a lung puncture, the inhalation state compresses the lesion less, so images may be acquired at end-inspiration.

In some embodiments, before the interventional surgery, during the interventional surgery before the puncture is performed, and during the execution of the puncture, the target object may adjust by itself (or under the guidance of a technician) to a certain breathing amplitude point (e.g., end-inspiration), and the first and second medical images may be acquired at this breathing amplitude point by a medical scanning device (e.g., the medical scanning device 110).

In some embodiments, the processing device may use a respiratory gating device so that the first and second medical images are acquired while the target object is at the same or approximately the same breathing amplitude point. For example, as shown in Figure 24, when acquiring the first medical image, the respiratory gating device may obtain the breathing amplitude point A at which the target object is located; during the interventional surgery and before the puncture is performed, the respiratory gating device may monitor the breathing of the target object and cause the medical scanning device to acquire the second medical image when the target object is at breathing amplitude point A'. In some embodiments, during the execution of the interventional surgery, the breathing amplitude of the target object is monitored by the respiratory gating device, and when the target object adjusts its breathing to breathing amplitude point A'', the medical scanning device may be used to acquire a third medical image (for a description of the third medical image, see the description of Figure 3 below). In some embodiments, while the target object holds its breath (i.e., keeps the breathing amplitude near point A''), the breathing amplitude of the target object is monitored, and when the breathing amplitude deviates significantly from A'', a prompt may be issued to the user to assist the user in adjusting breathing.

Acquiring the first, second, and third medical images at the same or approximately the same breathing amplitude point makes the movement of organs and tissues between images caused by respiratory motion smaller, which helps improve the accuracy of preoperative planning.

In some embodiments, the preset value may be set according to needs and/or experience, for example, 1%, 5%, 7%, 10%, etc. As shown in Figure 24, the first medical image is acquired at the first breathing amplitude point A, the second medical image at the second breathing amplitude point A' whose deviation from A is smaller than the preset value, and the third medical image at the third breathing amplitude point A'' whose deviation from A and/or A' is smaller than the preset value.

In some embodiments, the acquired first and second medical images may also be medical images that have undergone certain processing, for example, segmentation.
In some embodiments, the processing device may acquire a pre-surgery enhanced image and segment a first target structure set of the pre-surgery enhanced image to obtain a first medical image of the first target structure set; and acquire an intra-surgery scan image and segment a second target structure set of the intra-surgery scan image to obtain a second medical image of the second target structure set.

A pre-surgery enhanced image, or preoperative enhanced image for short, refers to an image obtained by scanning the target object (e.g., a patient) with a medical scanning device (e.g., the medical scanning device 110) after a contrast agent is injected before surgery. In some embodiments, the pre-surgery enhanced image may include a CT image, a PET-CT image, a US image, an MR image, etc. In some embodiments, the processing device may obtain the pre-surgery enhanced image of the target object from the medical scanning device 110, or read it from a terminal, database, or storage device. In some embodiments, the pre-surgery enhanced image may also be obtained in any other feasible way; for example, it may be obtained via a network (e.g., the network 120) from a cloud server and/or a medical system (e.g., a hospital's medical system center), which is not limited in the embodiments of this specification.

An intra-surgery scan image refers to an image obtained by scanning the target object with a medical scanning device during surgery. In some embodiments, the intra-surgery scan image may include a CT image, a PET-CT image, a US image, an MR image, etc. In some embodiments, the intra-surgery scan image may be a real-time scan image. In some embodiments, the intra-surgery scan image may also be called a preoperative or intraoperative plain scan image, i.e., a scan image taken during surgical preparation and before surgery execution (before actual needle insertion).

In some embodiments, the first target structure set of the pre-surgery enhanced image may include the blood vessels within the target organ. In some embodiments, in addition to the blood vessels within the target organ, the first target structure set may also include the target organ and the lesion. In some embodiments, the target organ may include the brain, lungs, liver, spleen, kidneys, or any other possible organ tissue, such as the thyroid. The first medical image is a medical image of the pre-surgery first target structure set (e.g., the target organ, the blood vessels within the target organ, and the lesion in the pre-surgery enhanced image) obtained by segmenting the pre-surgery enhanced image.

In some embodiments, the regions or organs included in the second target structure set of the intra-surgery scan image may be determined based on the mode for planning the interventional path. The interventional path refers to the path traversed by the instruments used in the interventional surgery when introduced into the target object's body. The modes of the interventional path may include a precise planning mode and a rapid planning mode. In some embodiments, the precise planning mode or the rapid planning mode may be a path planning mode used for segmenting the intra-surgery scan image. In some embodiments, the precise planning mode may include a precise segmentation mode, and the rapid planning mode may include a rapid segmentation mode. In some embodiments, when the modes for planning the interventional path differ, the regions or organs included in the second target structure set may differ. For example, in the rapid planning mode, the second target structure set may include the non-interventional regions. As another example, in the precise planning mode, the second target structure set may include all important organs in the intra-surgery scan image. Important organs refer to organs that the planned interventional path needs to avoid during interventional surgery, for example, the liver, the kidneys, blood vessels outside the target organ, etc. In some embodiments, in addition to the non-interventional regions/all important organs in the intra-surgery scan image, the second target structure set may also include the target organ and the lesion. The second medical image is a medical image of the intra-surgery second target structure set (e.g., the non-interventional regions/important organs, the target organ, the lesion) obtained by segmenting the intra-surgery scan image.

In some embodiments, the first and second target structure sets have an intersection. For example, when the first target structure set includes the blood vessels within the target organ and the target organ, and the second target structure set includes the non-interventional regions (or all important organs), the target organ, and the lesion, their intersection is the target organ. As another example, when the first target structure set includes the blood vessels within the target organ, the target organ, and the lesion, and the second target structure set includes the non-interventional regions (or all important organs), the target organ, and the lesion, their intersection is the target organ and the lesion.

For more description of the first, second, and third medical images, see the related descriptions of Figures 4 and 21 below, which will not be repeated here.
Step 220: Register the second medical image with the first medical image to obtain a registration result.

Registration refers to the process of matching and superimposing different images acquired at different times or under different conditions. For example, registration may be an image processing operation that makes corresponding points of the first and second medical images consistent in spatial position and anatomical position through spatial transformation. Registration can comprehensively reflect the multi-dimensional information contained in different images.

The registration result refers to the image obtained after registering the second medical image with the first medical image. In some embodiments, the registration result may also be called the fourth medical image.

In some embodiments, the registration result may include the intra-surgery spatial positions of a third target structure set. The third target structure set is the complete set of structures obtained after registering the first medical image with the second medical image; for example, its elements may include the elements of the first and second target structure sets. In some embodiments, the third target structure set may include the target organ, the blood vessels within the target organ, the lesion, and other regions/organs (e.g., the non-interventional regions, all important organs). In some embodiments, in the rapid segmentation mode, the other regions/organs may refer to the non-interventional regions; in the precise segmentation mode, the other regions/organs may refer to all important organs. In some embodiments, at least one element of the third target structure set is included in the first target structure set, and at least one element of the third target structure set is not included in the second target structure set. For example, when the first target structure set includes the blood vessels within the target organ, the target organ, and the lesion, and the second target structure set includes the non-interventional regions (or all important organs), the target organ, and the lesion, the blood vessels within the target organ are included in the first target structure set and not in the second. In some embodiments, the elements of the third target structure set may be determined based on the mode for planning the interventional path, which may include a precise planning mode and a rapid planning mode. For a description of the modes for planning the interventional path, see the description of step 410 in Figure 4 below.

In some embodiments, the processing device may obtain a registration deformation field by matching the second medical image with the first medical image; the registration deformation field may be used to reflect the spatial position changes between the first and second medical images. The registration result can be obtained by superimposing the second and first medical images based on the registration deformation field. After the spatial position transformation based on the registration deformation field, the transformed intra-surgery scan image is completely consistent with the pre-surgery enhanced image in spatial position and anatomical position.

In some embodiments, the processing device may register the first and second medical images using a non-rigid registration algorithm based on features, gray levels, etc., such as a Demons-based non-rigid registration algorithm. In some embodiments, the processing device 140 may also register the first and second medical images using a deep-learning-based non-rigid registration algorithm to improve the real-time performance of registration.

Exemplarily, the operation flow of registering the first and second medical images may be as shown in the following embodiments.

First, the processing device may obtain an interventional surgery planning information image based on the first medical image.

An interventional surgery planning information image refers to an image containing interventional surgery planning information. In some embodiments, the interventional surgery planning information may include at least one of high-risk tissue information, the planned puncture path, and lesion position information. High-risk tissues refer to organs and/or tissues whose puncture would adversely affect the target object and/or the surgical process, for example, large blood vessels, bones, etc. In some embodiments, different high-risk tissues may be set according to the individual conditions of different target objects; for example, the liver of a patient with impaired liver function may be set as a risk zone, or other lesions in the target object's body may be set as risk zones. The planned puncture path refers to the planned route along which the puncture instrument travels. The planned puncture path information may include the needle entry point, target point, puncture angle, puncture depth, path length, and the tissues and/or organs along the path. The lesion position information may include the coordinates of the lesion (or lesion center) in the human body coordinate system, its depth, volume, margins, etc.

In some embodiments, the processing device or relevant personnel (e.g., a doctor) may obtain the interventional surgery planning information by performing segmentation and other processing on the first medical image. For example, various tissues or organs may be segmented, such as blood vessels, skin, bones, viscera, and the tissue to be punctured. As another example, the segmented tissues or organs may be classified into lesion zones, puncturable zones, high-risk tissues, etc. As a further example, the planned puncture path may be determined according to the lesion zones, puncturable zones, and high-risk tissues. In some embodiments, the processing device or relevant personnel (e.g., a doctor) may annotate the interventional surgery planning information on the first medical image to obtain the interventional surgery planning information image.

Next, the processing device may perform a first registration of the first and second medical images to obtain first deformation information. The first deformation information refers to the morphological change information of image elements (e.g., pixels or voxels) in the second medical image relative to the corresponding image elements in the first medical image, for example, geometric change information, projection change information, etc. The first deformation information may be represented by a first deformation matrix. Exemplarily, the first deformation matrix may include deformation matrices in the x, y, and z directions; each element of the deformation matrices corresponds to a unit region of the second medical image (e.g., 1 pixel, a 1 mm × 1 mm image region, 1 voxel, a 1 mm × 1 mm × 1 mm image region, etc.), and the value of each element is the deformation information of that unit region in the x-axis, y-axis, or z-axis direction. In some embodiments, the processing device may perform the first registration of the first and second medical images through a Demons-based non-rigid registration algorithm, a geometric correction method, a deep-learning-based non-rigid registration algorithm, etc., to obtain the first deformation information.

Finally, the processing device may apply the first deformation information to the interventional surgery planning information image to obtain the registration result, wherein the interventional surgery planning information in the registration result is the interventional surgery planning information after the first registration. Exemplarily, the processing device may apply the first deformation matrix to the interventional surgery planning information image, i.e., cause the image and the interventional surgery planning information therein (high-risk tissue information, the planned puncture path, lesion position information, etc.) to undergo morphological changes corresponding to the first deformation information, thereby obtaining the registration result.

The first medical image is acquired before the interventional surgery, when time for acquisition and image processing is relatively ample; its scanning range is relatively large and its slices are relatively thick, e.g., it includes a large number of slices covering all relevant tissues and/or organs. Planning the puncture path on the first medical image, which contains more comprehensive information, helps improve the accuracy of subsequent interventional surgery guidance. The second medical image is acquired during the interventional surgery and before the puncture is performed, when time for acquisition and image processing is relatively tight; its scanning range is relatively small and its slices are relatively thin, e.g., it may include only 4 to 10 slices covering the area around the needle tip. The registration result (fourth medical image) obtained by registering the first and second medical images may contain the registered interventional surgery planning information. Since achieving high-precision registration generally takes a relatively long time, from several seconds to more than ten seconds, performing the high-precision registration of the first and second medical images in advance, during the interventional surgery and before the puncture is performed, can avoid or reduce the computational pressure after the puncture begins, allowing the actual interventional surgery to be performed immediately or within a short time after the real-time image is acquired, reducing the duration of the puncture.
Step 230: Determine the interventional surgery planning information of the target object based at least on the registration result, perform an interventional surgery risk assessment based on the interventional surgery planning information, and obtain a risk assessment result corresponding to the interventional surgery planning information.

In some embodiments, the interventional surgery planning information may also be called puncture planning information.

As described above, the registration result may include the annotated interventional surgery planning information; therefore, the processing device may determine the interventional surgery planning information of the target object directly based on the registration result.

Risk assessment refers to the process of analyzing and judging the risks that may arise during the execution of the puncture. The risk assessment conclusion may be a summary of the risk assessment. The spatial positions of the elements in the registration result (e.g., the target organ, the lesion, the blood vessels within the target organ, the non-interventional regions, all important organs) can comprehensively and accurately reflect the current condition of the target object (e.g., a patient). The interventional surgery planning information enables the surgical instruments (e.g., the puncture needle) to effectively avoid the blood vessels within the target organ, the non-interventional regions, and/or all important organs and reach the lesion smoothly while reducing surgical risk; performing a risk assessment on the interventional surgery planning information can further reduce the risk of the interventional surgery.

In some embodiments, performing the risk assessment based on the interventional surgery planning information may include determining the interventional risk values of at least some elements in the registration result and performing the risk assessment based on the interventional risk values. For example, the processing device may determine the interventional risk values of at least some elements of the third target structure set and perform the risk assessment based on these values.

The interventional risk value may represent the degree of interventional risk of an element. In some embodiments, the larger the interventional risk value, the higher the degree of interventional risk, i.e., the greater the interventional risk; for example, an element region with an interventional risk value of 8 carries a higher interventional risk than one with a value of 6.

In some embodiments, the selection of the elements of the third target structure set may be determined based on the mode for planning the interventional path. In some embodiments, when the modes for planning the interventional path differ, the elements of the third target structure set used to judge the risk assessment of the interventional path may differ. For example, in the rapid planning mode, these elements may include the blood vessels within the target organ and the non-interventional regions. As another example, in the precise planning mode, they may include the blood vessels within the target organ and all important organs.

In some embodiments, the processing device may judge whether the interventional path in the interventional surgery planning information passes through a preset element of the third target structure set; when the judgment result is yes, it determines the interventional risk value of a preset risk object in the third target structure set.

In some embodiments, the preset element in the third target structure set may refer to the target organ, and the preset risk object may refer to the blood vessels within the target organ. It can be understood that the preset risk object may be included in the at least some elements of the third target structure set used for the risk assessment.

In some embodiments, when the interventional path passes through the target organ of the third target structure set: in the rapid planning mode, the blood vessels within the target organ and the non-interventional regions pose certain risks relative to the interventional path, and their interventional risk values relative to the interventional path need to be calculated; in the precise planning mode, the blood vessels within the target organ and the external important organs/tissues pose certain risks relative to the interventional path, and their interventional risk values need to be calculated. In some embodiments, when the interventional path does not pass through the target organ of the third target structure set, the blood vessels within the target organ pose no risk relative to the interventional path, and their influence on the interventional path need not be considered (their interventional risk value may be regarded as zero). Therefore, when the interventional path does not pass through the target organ: in the rapid planning mode, only the interventional risk values of the non-interventional regions relative to the interventional path need to be calculated; in the precise planning mode, only the interventional risk values of the important organs/tissues outside the target organ need to be calculated. Determining which elements' interventional risk values need to be calculated according to whether the interventional path passes through the target organ, separately for the different planning modes, allows a more reasonable risk assessment of the interventional path.

In some embodiments, the method of judging whether the interventional path passes through the target organ may include obtaining the intersection of the target organ mask and the interventional path; if the intersection is not an empty set, the interventional path passes through the target organ; otherwise, it does not.
For more description of the interventional surgery risk assessment, see the related description of Figure 3.

In some embodiments, in response to the risk assessment result of the interventional surgery planning information satisfying a preset condition, the processing device may guide the interventional surgery based on the interventional surgery planning information satisfying the preset condition. The preset condition may be that the interventional risk value is smaller than a preset threshold; for example, assuming the preset threshold is 7, an interventional risk value of 8 is considered not to satisfy the preset condition, while an interventional risk value of 6 is considered to satisfy it. Guiding the interventional surgery based on the planning information satisfying the preset condition may mean assisting and guiding the movement of the surgical instruments inside the target object's body according to that planning information, so as to avoid the blood vessels within the target organ, the non-interventional regions, and/or all important organs, reach the lesion smoothly, and achieve treatment of the patient.
Figure 3 is an exemplary flowchart of guiding an interventional surgery according to some embodiments of this specification.

Step 310: Acquire a third medical image of the target object during the execution of the interventional surgery.

The third medical image may be a real-time image during the interventional surgery. In some embodiments, the third medical image is acquired during the execution of the interventional surgery. The execution of the interventional surgery may include inserting the needle through the skin, entering the target area along the interventional path, completing the operation in the target area, and withdrawing the needle.

In some embodiments, the third medical image may be acquired by a computed tomography (CT) device.

In some embodiments, the third medical image may be acquired by an imaging device different from the one used to acquire the first and second medical images. For example, the first and second medical images may be acquired by an imaging device in an imaging room, and the third medical image by an imaging device in the operating room. In some embodiments, the image parameters (e.g., image range, accuracy, contrast, gray level, gradient, etc.) of the first, second, and third medical images may be the same or different. For example, the scanning range of the first medical image may be larger than that of the second and third medical images, or the accuracy of the second and third medical images may be higher than that of the first medical image.

In some embodiments, the third medical image may be acquired at the same breathing amplitude point as when the first and second medical images were acquired, or at a similar breathing amplitude point that does not affect puncture accuracy. As shown in Figure 24, for example, the first medical image is acquired at the first breathing amplitude point A, the second medical image at the second breathing amplitude point A' whose deviation from A is smaller than the preset value, and the third medical image at the third breathing amplitude point A'' whose deviation from A and/or A' is smaller than the preset value.

During the execution of the interventional surgery, the target object may adjust by itself (or under the guidance of a technician) to a certain breathing amplitude point (e.g., end-inspiration), and the third medical image may be acquired at this breathing amplitude point by the medical scanning device.
Step 320: Map the registration result to the third medical image to guide the interventional surgery.

In some embodiments, the processing device may map the registration result (fourth medical image) to the third medical image through methods such as homography transformation, affine transformation, and alpha channel transformation, to guide the interventional surgery. For example, guided by the mapped third medical image, the user may follow the mapped puncture path, avoid high-risk regions such as blood vessels, and gradually puncture toward the mapped lesion.

In some embodiments, if the target object's breathing is not monitored by a respiratory gating device, the second and third medical images may be acquired while the target object is at different breathing amplitude points, and the organs and/or tissues in the images may move; the processing device may then perform a second registration of the second and third medical images. As shown in Figure 25, in some embodiments, second deformation information can be obtained by performing the second registration of the second and third medical images.

The second deformation information refers to the morphological change information of image elements in the third medical image relative to the corresponding image elements in the second medical image, for example, geometric change information, projection change information, etc. The second deformation information may be represented by a second deformation matrix. Exemplarily, the second deformation matrix may include deformation matrices in the x, y, and z directions; each element of the deformation matrices corresponds to a unit region of the third medical image (e.g., 1 pixel, a 1 mm × 1 mm image region, 1 voxel, a 1 mm × 1 mm × 1 mm image region, etc.), and the value of each element is the deformation information of that unit region in the x-axis, y-axis, or z-axis direction. In some embodiments, the processing device may perform the second registration of the second and third medical images through a Demons-based non-rigid registration algorithm, a geometric correction method, a deep-learning-based non-rigid registration algorithm, etc., to obtain the second deformation information.

In some embodiments, the processing device may apply the second deformation information to the registration result (fourth medical image) to obtain a fifth medical image, which contains the interventional surgery planning information after the second registration. For example, the processing device may apply the second deformation matrix to the registration result, causing the first-registered interventional surgery planning information it contains (high-risk tissue information, the planned puncture path, lesion position information, etc.) to undergo morphological changes corresponding to the second deformation information, thereby obtaining the fifth medical image.

In some embodiments, the processing device may map the fifth medical image to the third medical image, for example, through methods such as homography transformation, affine transformation, and alpha channel transformation.

Since the second and third medical images are both few-slice data, the computational load of the fast registration is small, and the registration can be completed within a short time after the third medical image is acquired during the operation, reducing surgical risk.
In some embodiments, during execution of the interventional surgery, the processing device may display, outside the display range of the third medical image, the image information in the registration result (fourth medical image) or the fifth medical image that lies outside the display range of the third medical image. For example, other tissues and/or organs in the registration result or the fifth medical image may be displayed. As another example, as shown in Figure 27, at time T1 the lesion outside the display range of the third medical image is displayed.

The processing device may display, outside the display range of the third medical image, the corresponding planned path information of the interventional surgery. For example, as shown in Figure 27, when the lesion is outside the scanning range, the planned path from point C to the lesion is displayed.

The processing device may display the image information inside and outside the display range of the third medical image differently. For example, different background colors may be set for the inside and outside of the display range; the inside may be displayed as an RGB image and the outside as a grayscale image; lines inside the display range (e.g., the planned path) may be displayed as solid lines, while lines outside (e.g., the planned path) may be displayed as dashed lines (or solid lines of a different color).

Limited by radiation dose, imaging time, and other factors, the scanning range of the third medical image is small, which affects the real-time puncture field of view. By presenting information outside the display range, the puncture planning information mapped outside the scanning range of the third medical image can be supplemented into the real-time image of the interventional surgery, enlarging the field of view of planning information visible to the user during the puncture and providing more useful information about the procedure. In particular, when the lesion is initially outside the scanning range of the real-time puncture image (e.g., the third medical image), displaying the lesion outside the scanning range can give the doctor a clear target for the puncture, making the surgery more likely to succeed.

In some embodiments, the processing device may identify the position of the puncture needle tip based on the third medical image. The processing device may extract the puncture needle in the third medical image through a semi-automatic threshold segmentation algorithm or a fully automatic deep learning algorithm, and thereby obtain the needle tip position. For example, as shown in Figure 27, the processing device may identify point B1, the position of the needle tip at time T1.

Based on the needle tip position, the processing device may move the target object, the table board carrying the target object, or the detector used to acquire the third medical image, so that the needle tip position lies in the central area of the display range of the third medical image. For example, as shown in Figure 27, at time T1 the needle tip punctures toward the lesion along the planned path; the processing device 140 may estimate point B2, the position of the needle tip at time T2, and move the table during the T1-T2 period according to the change in needle tip position, shifting the scanning range of the third medical image downward so that the needle tip position lies in the central area of the display range.

By moving the target object, the table board carrying the target object, or the detector used to acquire the third medical image, the scanning range is updated in real time and the needle tip position is kept in the central area of the display range of the third medical image. This highlights the information around the needle tip and tracks the needle tip's progress more precisely, which helps improve surgical efficiency and reduce surgical risk.
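The needle-tip recentering step above can be sketched as computing the table (or detector) shift that returns the detected tip to the center of the scan window, with a small dead band so the table does not chase every pixel of tip motion. The 2-D coordinates and the dead-band value are illustrative assumptions.

```python
# Hypothetical sketch of keeping the needle tip centered in the scan window.

def recenter_shift(tip, window_center, dead_band=5.0):
    """Return the (dz, dy) table shift that re-centers the tip, or (0, 0)
    if the tip is already within `dead_band` of the window center."""
    dz = tip[0] - window_center[0]
    dy = tip[1] - window_center[1]
    if (dz * dz + dy * dy) ** 0.5 <= dead_band:
        return (0.0, 0.0)
    return (dz, dy)
```

In a guidance loop, the tip position identified from each new third medical image (e.g., point B1, then the estimated B2) would be fed in, and the returned shift applied to the table between acquisitions.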
In some embodiments, to further reduce surgical risk, the operation instructions of the control system may also cause the one or more processors to perform the following operations:

acquiring an anomaly image recognition model; acquiring real-time medical images of the target object during surgery; and using the anomaly image recognition model to judge, based on the real-time medical images, the real-time probability that the target object is at risk, and alerting the doctor in real time when the probability reaches a preset threshold.

The anomaly image recognition model may be a machine learning model used to identify whether there is an abnormal condition in an image. Abnormal conditions may include bleeding, the puncture needle passing through high-risk tissue, etc. In some embodiments, the anomaly image recognition model may include a deep neural network model, a convolutional neural network model, etc. The anomaly image recognition model may be obtained by taking historical real-time medical images from past interventional surgeries as training samples, manually annotating them according to whether an abnormal condition actually occurred, using the annotation results as labels, and training an initial anomaly image recognition model. The training method may include various common model training methods, such as gradient descent, which is not limited in this specification.

A real-time medical image refers to an image acquired during the execution of the target object's interventional surgery. Real-time medical images may be acquired by real-time scanning with the medical scanning device, or in other ways, for example, by reading images from the storage device or a database during the surgery; such images may be the second medical image, the third medical image, the intra-surgery scan image, etc.

In some embodiments, the processing device may input the real-time medical image into the anomaly image recognition model, which outputs the real-time probability that the target object is at risk. This real-time probability is the probability that a risk may arise if the current surgery continues.

The preset threshold (real-time probability threshold) may be set in advance, for example, 0.8, 0.6, 0.5, etc.; it may be set manually or in other ways, which is not limited in this specification.
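The alerting loop described above can be sketched as thresholding the model's output probability per frame. The model below is a stub stand-in (it scores the fraction of bright pixels), not a trained network; the stub and the 0.6 threshold are assumptions for illustration.

```python
# Hypothetical sketch of the real-time anomaly alert: a model returns the
# probability that continuing the surgery is risky, and an alert fires when
# it reaches the preset threshold.

def stub_anomaly_model(image):
    # Placeholder for a trained anomaly image recognition network: here we
    # crudely score the fraction of "bright" pixels in a 2-D list.
    flat = [p for row in image for p in row]
    return sum(1 for p in flat if p > 200) / len(flat)

def check_frame(image, model=stub_anomaly_model, threshold=0.6):
    prob = model(image)
    return {"risk_probability": prob, "alert": prob >= threshold}
```

A real deployment would call `check_frame` for every acquired real-time image and route `alert=True` results to the doctor's console.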
In some embodiments, the operation instructions of the control system may also cause the one or more processors to perform the following steps:

acquiring feature information of the target object, and predicting, through a risk prediction model, the interventional surgery risk of the target object in the next time period based on the real-time medical images, the actual progress of the current interventional surgery, and the actual puncture path.

The feature information of the target object refers to data that can reflect the individual characteristics of the target object, for example, age, gender, height, weight, body fat percentage, blood pressure, whether there are underlying diseases, and the categories of underlying diseases. The actual progress of the current interventional surgery may include the surgery execution time, the degree of completion (e.g., 5%, 10%, 20%, etc.), the insertion depth of the puncture needle, etc. The actual puncture path may be the same as or different from the planned interventional path; for example, during the interventional surgery, certain adjustments may be made on the basis of the planned path according to the actual situation. In some embodiments, the actual puncture path may be obtained from the real-time medical images through image recognition processing, for example, by segmenting the actual needle insertion path of the puncture needle from the real-time medical images and determining the actual puncture path based on the segmentation result.
在一些实施例中,风险预测模型可以包括深度神经网络模型、卷积神经网络模型等,或其他组合模型。风险预测模型可以通过获取历史目标对象的特征信息、历史实时医学影像、历史当前介入手术的实际进度和历史实际穿刺路径作为训练样本,并人工基于实际是否有出现异常情况进行标注,将标注结果作为标签,对初始风险预测模型进行训练获得。训练方式可以包括各种常见的模型训练方法,例如,梯度下降法等,本说明书对此不作限定。
在一些实施例中,所述控制系统的操作指令还可以使一个或多个处理器执行以下操作:
处理设备可以获取多个时间段的实际介入手术信息;比较各时间段的实际介入手术信息和对应的介入手术规划信息,获取介入偏差信息;基于所述介入偏差信息进行聚类,显示聚类结果。
实际介入手术信息可以包括介入手术中的实际介入手术路径、实际病灶位置信息,实际病变位置信息是指基于介入手术中的实时医学影像获取的病灶位置信息。实际介入手术路径可以是从介入手术开始,穿刺针进入目标对象体内到当前时刻穿刺针所经过的路径,随着时间点的推移,可以获得多个时间段的实际介入手术路径。
介入偏差信息是指实际介入手术信息中的实际介入手术路径与介入手术规划路径,实际病灶位置信息与介入手术规划信息中的病灶位置信息的差异。在一些实施例中,处理设备可以通过直接比较、计算方差等多种方式获取介入偏差信息,介入偏差信息可以用图像或矩阵等方式进行表示,例如,可以将实际介入手术路径和介入手术规划路径在同一张图像上显示,并标出两者之间的差异(如路径偏差距离等);也可以在一张图像上同时显示实际病灶位置和介入手术规划信息中的病灶位置,并对两者之间的偏差进行标记。
聚类可以是指对多个时间段的介入偏差信息进行聚合。例如,将多个时间段中对应相同位置处的介入偏差信息进行聚合,比如,在某个位置处,第一时间段实际介入手术路径与接入手术规划路径的偏差为0,第二时间段偏差为1,第三时间段偏差为0.5等。通过聚类,可以将不同时间点出现的偏差信息整合到一起显示给医生观看,可以方便医生了解介入手术过程中可能出现偏差的重点部位或者出现偏差的重点时间,进而针对性地调整后续的介入手术。
Figure 4 is an exemplary flowchart of a medical image processing method for interventional surgery according to some embodiments of this specification.

Step 410: Acquire the mode for planning the interventional path.

In some embodiments, the interventional path refers to the path traversed by the instruments used in the interventional surgery when introduced into the human body. The modes of the interventional path may include a precise planning mode and a rapid planning mode. In some embodiments, the precise planning mode or the rapid planning mode may be a path planning mode used for segmenting the intra-surgery scan image. In some embodiments, the precise planning mode may include a precise segmentation mode, and the rapid planning mode may include a rapid segmentation mode.

In some embodiments, the mode for planning the interventional path may be acquired. In some embodiments, it may be acquired from the medical scanning device 110. In some embodiments, it may be acquired from the terminal 130, the processing device 140, or the storage device 150.

Step 420: Acquire a pre-surgery enhanced image.

In some embodiments, the processing device may acquire the pre-surgery enhanced image of the scan object, such as a PET-CT image, from the medical scanning device 110. In some embodiments, the processing device may acquire the pre-surgery enhanced image of the scan object, such as a US image, from the terminal 130, the processing device 140, or the storage device 150.

Step 430: Segment the first target structure set of the pre-surgery enhanced image to obtain a first medical image of the first target structure set.

In some embodiments, the first medical image may also be called the first segmented image.

Step 440: Acquire an intra-surgery scan image.

In some embodiments, the processing device may acquire the intra-surgery scan image of the scan object, such as a PET-CT image, from the medical scanning device 110. In some embodiments, the processing device may acquire the intra-surgery scan image of the scan object, such as a US image, from the terminal 130, the processing device 140, or the storage device 150.

Step 450: Segment the second target structure set of the intra-surgery scan image to obtain a second medical image of the second target structure set. In some embodiments, the second medical image may also be called the second segmented image.
在一些实施例中,处理设备对手术中扫描影像的目标器官进行分割可以按以下方式实施:根据规划模式,对手术中扫描影像的第二目标结构集进行分割。在一些实施例中,可以根据快速分割模式和/或精准分割模式,对手术中扫描影像的第四目标结构集进行分割。
在一些实施例中,第四目标结构集可以是第二目标结构集的一部分,例如,不可介入区域、靶器官外部所有重要器官。在不同规划模式下,第四目标结构集包括的区域/器官不同。在一些实施例中,在快速分割模式下,第四目标结构集可以包括不可介入区域。在一些实施例中,在精准分割模式下,第四目标结构集可以包括预设的重要器官。
在一些实施例中,在快速分割模式下,可以对手术中扫描影像进行区域定位计算,以及对不可介入区域进行分割提取。
不可介入区域是指在介入手术时介入规划路径需要避开的区域。在一些实施例中,不可介入区域可以包括不可穿刺区域、不可导入或置入区域以及不可注入区域。
在一些实施例中,可以对不可介入区域和目标器官(例如,靶器官)之外的区域进行后处理,以保障不可介入区域与目标器官的中间区域不存在空洞区域。空洞区域是指由前景像素相连接的边界包围所形成的背景区域。在一些实施例中,不可介入区域可以用腹腔(或是胸腔)区域减去目标器官和可介入区域得到。而用腹腔(或是胸腔)区域减去目标器官和可介入区域得到不可介入区域后,目标器官和不可介入区域的中间可能会存在空洞区域,该空洞区域既不属于目标器官,也不属于不可介入区域。此时,可以对空洞区域进行后处理操作以将空洞区域补全,也即是经过后处理操作的空洞区域可以视为不可介入区域。在一些实施例中,后处理可以包括腐蚀操作和膨胀操作。在一些实施例中,腐蚀操作和膨胀操作可以基于手术中扫描影像与滤波器进行卷积处理来实施。在一些实施例中,腐蚀操作可以是滤波器与手术中扫描影像卷积后,根据预定腐蚀范围求局部最小值,使得手术中扫描影像的轮廓缩小至期望范围,在手术中扫描影像显示初始影像中目标高亮区域缩小一定范围。在一些实施例中,膨胀操作可以是滤波器与手术中扫描影像卷积后,根据预定膨胀范围求局部最大值,使得手术中扫描影像的轮廓扩大至期望范围,在手术中扫描影像显示初始影像中目标高亮区域扩大一定范围。
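上述"先膨胀、再腐蚀"补全空洞区域的后处理(即形态学闭运算),可以用如下纯Python的二值掩膜示例说明(此处采用8邻域、边界外视为背景,均为示意性假设,实际实现可采用卷积方式):

```python
def dilate(mask):
    """二值膨胀:8邻域(含自身)内存在前景则该像素置为前景。"""
    h, w = len(mask), len(mask[0])
    return [[int(any(0 <= r + dr < h and 0 <= c + dc < w and mask[r + dr][c + dc]
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)))
             for c in range(w)] for r in range(h)]

def erode(mask):
    """二值腐蚀:8邻域(含自身,越界视为背景)全为前景才保留。"""
    h, w = len(mask), len(mask[0])
    return [[int(all(0 <= r + dr < h and 0 <= c + dc < w and mask[r + dr][c + dc]
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)))
             for c in range(w)] for r in range(h)]

def close_holes(mask):
    """闭运算:先膨胀再腐蚀,可补全掩膜中的小空洞区域。"""
    return erode(dilate(mask))

# 示例:3×3前景块中心有一个空洞像素
mask = [[0] * 5 for _ in range(5)]
for r in range(1, 4):
    for c in range(1, 4):
        mask[r][c] = 1
mask[2][2] = 0
closed = close_holes(mask)  # 空洞(2, 2)被补全,外部背景保持为背景
```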
在一些实施例中,在快速分割模式下,可以先对手术中扫描影像进行区域定位计算,再进行不可介入区域的分割提取。在一些实施例中,可以基于手术中扫描影像的目标器官的分割掩膜和血管掩膜,确定目标器官内部的血管掩膜。需要说明的是,在快速分割模式下,仅需分割目标器官内部的血管;在精准分割模式下,可以分割目标器官内部的血管以及外部其他血管。
掩膜(Mask),如器官掩膜,可以是像素级的分类标签,以腹腔医学影像为例进行说明,掩膜表示对医学影像中各个像素进行分类,例如,可以分成背景、肝脏、脾脏、肾脏等,特定类别的汇总区域用相应的标签值表示,例如,所有分类为肝脏的像素进行汇总,汇总区域用肝脏对应的标签值表示,这里的标签值可以根据具体粗分割任务进行设定。分割掩膜是指经过分割操作后得到的相应掩膜。在一些实施例中,掩膜可以包括器官掩膜(如目标器官的器官掩膜)和血管掩膜。
在一些实施例中,快速分割模式下,仅以胸腔区域或腹腔区域作为示例,首先对手术中扫描影像的扫描范围内胸腔或是腹腔区域进行区域定位计算,具体地,对于腹腔,选取肝顶直至直肠底部,作为腹腔的定位区域;如果是胸腔,则取食管顶至肺底(或肝顶),作为胸腔的定位区域;确定胸腔或是腹腔区域的区域定位信息后,再对腹腔或是胸腔进行分割,并在该分割区域内进行再次分割以提取可介入区域(与不可介入区域相对,如可穿区域,脂肪等);最后,用腹腔分割掩膜去掉目标器官的分割掩膜和可穿区域掩膜,即可提取到不可介入区域。在一些实施例中,可介入区域可以包括脂肪部分,如两个器官之间包含脂肪的缝隙等。以肝脏为例,皮下至肝脏之间的区域中的部分区域可以被脂肪覆盖。由于快速分割模式下处理速度快,进而使得规划速度更快,时间更短,提高了影像处理效率。
在一些实施例中,在精准分割模式下,可以对手术中扫描影像的所有器官进行分割。在一些实施例中,手术中扫描影像的所有器官可以包括手术中扫描影像的基本器官以及重要器官。在一些实施例中,手术中扫描影像的基本器官可以包括手术中扫描影像的目标器官(如靶器官)。在一些实施例中,在精准分割模式下,可以对手术中扫描影像的预设的重要器官进行分割。预设的重要器官可以根据手术中扫描影像的每个器官的重要程度来确定。例如,手术中扫描影像中的所有重要器官均可以作为预设的重要器官。在一些实施例中,快速分割模式下的预设的重要器官的总体积与精准分割模式下的预设的重要器官总体积的比值可以大于预设效率因子m。预设效率因子m可以表征基于不同分割模式进行分割的分割效率(或分割细致程度)的差异情况。在一些实施例中,预设效率因子m可以等于或大于1。在一些实施例中,效率因子m的设定与介入手术类型有关。介入手术类型可以包括但不限于泌尿手术、胸腹手术、心血管手术、妇产科介入手术、骨骼肌肉手术等。仅作为示例性说明,泌尿手术中的预设效率因子m可以设置的较大;胸腹手术中的预设效率因子m可以设置的较小。
在一些实施例中,在精准分割模式下,通过分割获取手术中扫描影像的所有器官的分割掩膜。在一些实施例中,在精准分割模式下,通过分割获取手术中扫描影像的所有器官的分割掩膜和血管掩膜。在一些实施例中,在精准分割模式下,基于手术中扫描影像的所有器官的分割掩膜和血管掩膜,确定所有器官内部的血管掩膜。由此可知,精准分割模式下,分割的影像内容更细致,使得规划路径的选择性更多,影像处理的鲁棒性也得到了加强。
关于以上各步骤的更多说明可以参见图2的步骤220中的相关描述,此处不再赘述。
图5是根据本说明书一些实施例所示的介入手术医学影像处理方法中涉及的分割过程的示例性流程图。如图5所示,流程500可以包括以下步骤:
步骤510,对医学影像中的目标结构集中的至少一个元素进行粗分割。
在一些实施例中,医学影像可以包括手术前增强影像和手术中扫描影像。目标结构集可以包括第一目标结构集、第二目标结构集和第四目标结构集中的任意一个或多个。
在一些实施例中,步骤510中,处理设备可以利用阈值分割方法、区域生长方法或水平集方法,对医学影像中的目标结构集中的至少一个元素进行粗分割的操作。元素可以包括医学影像中的目标器官(例如,靶器官)、靶器官内的血管、病灶、不可介入区域、所有重要器官等。在一些实施例中,基于阈值分割方法进行粗分割,可以按以下方式实施:可以通过设定多个不同的像素阈值范围,根据输入医学影像的像素值,对医学影像中的各个像素进行分类,将像素值在同一像素阈值范围内的像素点分割为同一区域。在一些实施例中,基于区域生长方法进行粗分割,可以按以下方式实施:基于医学影像上已知像素点或由像素点组成的预定区域,根据需求预设相似度判别条件,并基于该预设相似度判别条件,将像素点与其周边像素点比较,或者将预定区域与其周边区域进行比较,合并相似度高的像素点或区域,直到上述过程无法重复则停止合并,完成粗分割过程。在一些实施例中,预设相似度判别条件可以根据预设影像特征确定,示例性地,如灰度、纹理等影像特征。在一些实施例中,基于水平集方法进行粗分割,可以按以下方式实施:将医学影像的目标轮廓设为一个高维函数的零水平集,对该函数进行微分,从输出中提取零水平集来得到目标的轮廓,然后将轮廓范围内的像素区域分割出来。
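其中,基于阈值分割方法进行粗分割的思路,可以用如下示意性代码表示(像素阈值范围与标签取值均为说明用的假设):

```python
def threshold_segment(image, label_ranges):
    """按像素阈值范围对图像逐像素分类,返回标签图。

    image: 二维像素值列表
    label_ranges: {标签值: (下限, 上限)},闭区间;
    未落入任何阈值范围的像素记为0(背景)。
    """
    seg = []
    for row in image:
        seg_row = []
        for v in row:
            label = 0
            for lab, (lo, hi) in label_ranges.items():
                if lo <= v <= hi:
                    label = lab
                    break
            seg_row.append(label)
        seg.append(seg_row)
    return seg
```

同一阈值范围内的像素点由此被划分为同一区域,可作为后续确定掩膜的输入。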
在一些实施例中,处理设备可以利用基于深度学习卷积网络的方法,对医学影像中的目标结构集的至少一个元素进行粗分割的操作。在一些实施例中,基于深度学习卷积网络的方法可以包括基于全卷积网络的分割方法。在一些实施例中,卷积网络可以采用基于U形结构的网络框架,如UNet等。在一些实施例中,卷积网络的网络框架可以由编码器和解码器以及残差连接(skip connection)结构组成,其中编码器和解码器由卷积层或卷积层结合注意力机制组成,卷积层用于提取特征,注意力模块用于对重点区域施加更多注意力,残差连接结构用于将编码器提取的不同维度的特征结合到解码器部分,最后经由解码器输出分割结果。在一些实施例中,基于深度学习卷积网络的方法进行粗分割,可以按以下方式实施:由卷积神经网络的编码器通过卷积进行医学影像的特征提取,然后由卷积神经网络的解码器将特征恢复成像素级的分割概率图,分割概率图表示图中每个像素点属于特定类别的概率,最后将分割概率图输出为分割掩膜,由此完成粗分割。
步骤520,得到至少一个元素的掩膜。
元素的掩膜可以是指用于对目标结构集中的元素进行遮挡的信息。在一些实施例中,可以将粗分割的结果(例如,所述分割掩膜)作为元素的掩膜。
步骤530,确定掩膜的定位信息。
图6是根据本说明书一些实施例所示的确定元素掩膜的定位信息过程的示例性流程图。图7是根据本说明书一些实施例所示的元素掩膜进行软连通域分析过程的示例性流程图。图8是根据本说明书一些实施例所示的对元素掩膜进行软连通域分析的粗分割示例性效果对比图。
在一些实施例中,步骤530中,确定元素掩膜的定位信息,可以按以下方式实施:对元素掩膜进行软连通域分析。连通域,即连通区域,一般是指影像中具有相同像素值且位置相邻的前景像素点组成的影像区域。
在一些实施例中,步骤530对元素掩膜进行软连通域分析,可以包括以下几个子步骤:
子步骤531,确定连通域数量;
子步骤532,当连通域数量≥2时,确定符合预设条件的连通域面积;
子步骤533,当多个连通域中最大连通域的面积与连通域总面积的比值大于第一阈值M,确定最大连通域符合预设条件;
子步骤534,确定保留连通域至少包括最大连通域;
子步骤535,基于保留连通域确定元素掩膜的定位信息。
预设条件是指连通域作为保留连通域时需要满足的条件。例如,预设条件可以是对连通域面积的限定条件。在一些实施例中,医学影像中可能会包括多个连通域,多个连通域具有不同的面积。可以将具有不同面积的多个连通域按照面积大小,例如,从大到小进行排序,排序后的连通域可以记为第一连通域、第二连通域、第k连通域。其中,第一连通域可以是多个连通域中面积最大的连通域,也叫最大连通域。这种情况下,判断不同面积序位的连通域作为保留连通域的预设条件可以不同,具体参见图5的相关描述。 在一些实施例中,符合预设条件的连通域可以包括:连通域的面积按照从大到小排序在预设序位n以内的连通域。例如,预设序位n为3时,可以按照面积序位的顺序,并根据对应预设条件依次判断每个连通域是否为保留连通域。即,先判断第一连通域是否为保留连通域,再判断第二连通域是否为保留连通域。在一些实施例中,预设序位n可以基于目标结构的类别,例如,胸部目标结构、腹部目标结构进行设定。在一些实施例中,第一阈值M的取值范围可以为0.8至0.95,在取值范围内能够保障软连通域分析获得预期准确率。在一些实施例中,第一阈值M的取值范围可以为0.9至0.95,进一步提高了软连通域分析的准确率。在一些实施例中,第一阈值M可以基于目标结构的类别,例如,胸部目标结构、腹部目标结构进行设定。在一些实施例中,预设序位n/第一阈值M也可以根据机器学习和/或大数据进行合理设置,在此不做进一步限定。
在一些实施例中,步骤530对元素掩膜进行软连通域分析,可以按以下方式进行:
基于获取到的元素掩膜,对元素掩膜内连通域的个数及其对应面积进行分析和计算,过程如下:
当连通域个数为0时,表示对应掩膜为空,即掩膜获取或粗分割失败或分割对象不存在,不作处理。例如,对腹腔中的脾脏进行分割时,可能存在脾脏切除的情况,此时脾脏的掩膜为空。
当连通域个数为1时,表示仅此一个连通域,无假阳性,无分割断开等情况,保留该连通域;可以理解的是,连通域个数为0和1时,无需根据预设条件判断连通域是否为保留连通域。
当连通域个数为2时,按面积(S)的大小分别获取连通域A和B,其中,连通域A的面积大于连通域B的面积,即S(A)>S(B)。结合上文,连通域A也可以称为第一连通域或最大连通域;连通域B可以称为第二连通域。当连通域的个数为2时,连通域作为保留连通域需要满足的预设条件可以是最大连通域面积与连通域总面积的比值与阈值的大小关系。对连通域进行计算,当A面积占A、B总面积的比例大于第一阈值M时,即S(A)/S(A+B)>第一阈值M,可以将连通域B判定为假阳性区域,仅保留连通域A(即确定连通域A为保留连通域);当A面积占A、B总体面积的比例小于或等于第一阈值M时,可以将A和B均判定为元素掩膜的一部分,同时保留连通域A和B(即确定连通域A和B为保留连通域)。
当连通域个数大于或等于3时,按面积(S)的大小分别获取连通域A、B、C…P,其中,连通域A的面积大于连通域B的面积,连通域B的面积大于连通域C的面积,以此类推,即S(A)>S(B)>S(C)>…>S(P);然后计算连通域A、B、C…P的总面积S(T),对连通域进行计算,此时,可以按照面积序位的顺序,并根据对应预设条件依次判断每个连通域(或者面积序位在预设序位n以内的连通域)是否为保留连通域。在一些实施例中,当连通域的个数大于等于3时,最大连通域(即,连通域A)作为保留连通域需要满足的预设条件可以是最大连通域面积与连通域总面积的比值与阈值(例如,第一阈值M)的大小关系。在一些实施例中,当连通域的个数大于等于3时,最大连通域(即,连通域A)作为保留连通域需要满足的预设条件也可以是第二连通域面积与最大连通域面积的比值与阈值(例如,第二阈值N)的大小关系。具体地,当连通域A面积占总面积S(T)的比例大于第一阈值M时,即S(A)/S(T)>第一阈值M,或者,连通域B面积占连通域A面积的比例小于第二阈值N时,即S(B)/S(A)<第二阈值N,将连通域A判定为元素掩膜部分并保留(即连通域A为保留连通域),其余连通域均判定为假阳性区域;否则,继续进行计算,即继续判断第二连通域(即连通域B)是否为保留连通域。在一些实施例中,连通域B作为保留连通域需要满足的预设条件可以是第一连通域与第二连通域的面积之和与连通域总面积的比值与第一阈值M的大小关系。在一些实施例中,连通域B作为保留连通域需要满足的预设条件也可以是第三连通域面积占第一连通域面积与第二连通域面积之和的占比与阈值(例如,第二阈值N)的大小关系。具体地,当连通域A和连通域B的面积占总面积S(T)的比例大于第一阈值M时,即S(A+B)/S(T)>第一阈值M,或者,连通域C面积占连通域A和连通域B面积的比例小于第二阈值N时,即S(C)/S(A+B)<第二阈值N,将连通域A和B判定为元素掩膜部分并保留(即连通域A和连通域B为保留连通域),剩余部分均判定为假阳性区域;否则,继续进行计算,即继续判断第三连通域(即连通域C)是否为保留连通域。连通域C的判断方法与连通域B的判断方法类似,连通域C作为保留连通域需要满足的预设条件可以是第一连通域、第二连通域和第三连通域的面积之和与连通域总面积的比值与第一阈值M的大小关系,或者,第四连通域面积占第一连通域面积、第二连通域面积和第三连通域面积之和的占比与阈值(例如,第二阈值N)的大小关系。具体地,当连通域A、连通域B和连通域C的面积占总面积S(T)的比例大于第一阈值M时,即S(A+B+C)/S(T)>第一阈值M,或者,连通域D面积占连通域A、连通域B和连通域C面积的比例小于第二阈值N时,即S(D)/S(A+B+C)<第二阈值N,将连通域A、B和C均判定为元素掩膜部分并保留(即连通域A、连通域B和连通域C均为保留连通域)。参照上述判断方法,可以依次判断连通域A、B、C、D…P,或者是面积序位在预设序位n以内的部分连通域是否为保留连通域。需要说明的是,图6中仅示出了对三个连通域是否为保留连通域进行的判断。也可以理解为,图6中的预设序位n的值设定为4,因此,只需对序位为1、2、3的连通域,即连通域A、连通域B、连通域C是否为保留连通域进行判断。
最后输出保留的连通域。
在一些实施例中,第二阈值N的取值范围可以为0.05至0.2,在取值范围内能够保障软连通域分析获得预期准确率。在一些实施例中,第二阈值N的取值可以为0.05,此种设置情况下,能够获得较为优异的软连通域分析准确率效果。
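上述软连通域分析中基于第一阈值M、第二阈值N判断保留连通域的逻辑,可以概括为如下示意性函数(输入为按面积从大到小排序的连通域面积列表,阈值取值仅为示例):

```python
def soft_cc_keep(areas, M=0.9, N=0.05):
    """返回按面积降序排列的连通域中应保留的连通域个数。"""
    k = len(areas)
    if k <= 1:           # 连通域个数为0或1时,无需判断,全部保留
        return k
    total = sum(areas)
    if k == 2:           # 两个连通域时,仅用第一阈值M判断
        return 1 if areas[0] / total > M else 2
    cum = 0.0
    for i in range(k):   # 三个及以上时,按面积序位依次判断
        cum += areas[i]
        big_enough = cum / total > M           # 前i+1个面积占比足够大
        next_tiny = i + 1 < k and areas[i + 1] / cum < N  # 下一连通域相对很小
        if big_enough or next_tiny:
            return i + 1                       # 其余连通域判定为假阳性
    return k
```

例如,面积为[100, 5]的两个连通域,因100/105大于M=0.9,仅保留最大连通域。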
如图8所示,左边上下分别为未采用软连通域分析的粗分割结果的横断面医学影像和立体医学影像,右边分别为采用了软连通域分析的粗分割结果的横断面医学影像和立体医学影像。经过对比可知,基于软连通域分析对元素掩膜进行粗分割的结果显示,去除了左边影像中方框框出的假阳性区域,相比以往连通域分析方法,排除假阳性区域的准确性和可靠性更高,并且直接有助于后续合理提取元素掩膜定位信息的边界框,提高了分割效率。
在一些实施例中,元素掩膜的定位信息可以为元素掩膜的外接矩形的位置信息,例如,外接矩形的边框线的坐标信息。在一些实施例中,元素掩膜的外接矩形,覆盖元素的定位区域。在一些实施例中,外接矩形可以以外接矩形框的形式显示在医学影像中。在一些实施例中,外接矩形可以是基于元素中连通区域的各方位的底边缘,例如,连通区域上下左右方位上的底边缘,来构建相对于元素掩膜的外接矩形框。
在一些实施例中,元素掩膜的外接矩形可以是一个矩形框或多个矩形框的组合。例如,可以是一个较大面积的矩形框,或者多个较小面积矩形框组合拼成的较大面积的矩形框。
在一些实施例中,元素掩膜的外接矩形可以是仅存在一个矩形框的一个外接矩形框。例如,在元素中只存在一个连通区域时(例如,血管或腹腔中的器官),根据该连通区域各方位的底边缘,构建成一个较大面积的外接矩形。在一些实施例中,上述大面积的外接矩形可以应用于存在一个连通区域的器官。
在一些实施例中,元素掩膜的外接矩形可以是多个矩形框组合拼成的一个外接矩形框。例如,在元素存在多个连通区域时,多个连通区域对应的多个矩形框,根据这多个矩形框的底边缘构建成一个矩形框。可以理解的是,如三个连通区域对应的三个矩形框的底边缘构建成一个总的外接矩形框,计算时按照一个总的外接矩形框来处理,在保障实现预期精确度的同时,减少计算量。
在一些实施例中,医学影像中包括多个连通域时,可以先判断多个连通域的位置信息,再基于多个连通域的位置信息得到元素掩膜的定位信息。例如,可以先判断多个连通域中符合预设条件的连通域,即保留连通域的位置信息,进而根据保留连通域的位置信息得到元素掩膜的定位信息。
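多个连通区域按各自底边缘合并为一个总外接矩形框的计算,可以示意如下(二维像素坐标,坐标值均为假设):

```python
def bbox(points):
    """单个连通区域的外接矩形 (rmin, cmin, rmax, cmax)。"""
    rows = [p[0] for p in points]
    cols = [p[1] for p in points]
    return (min(rows), min(cols), max(rows), max(cols))

def union_bbox(regions):
    """将多个连通区域对应的多个矩形框,合并为一个总外接矩形框。"""
    boxes = [bbox(reg) for reg in regions]
    return (min(b[0] for b in boxes), min(b[1] for b in boxes),
            max(b[2] for b in boxes), max(b[3] for b in boxes))
```

按一个总外接矩形框处理,可在保障预期精确度的同时减少计算量。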
在一些实施例中,步骤530中,确定元素掩膜的定位信息,还可以包括以下操作:基于预设的元素的定位坐标,对元素掩膜进行定位。
在一些实施例中,该操作可以在元素掩膜外接矩形定位失败的情况下执行。可以理解的是,当元素掩膜外接矩形的坐标不存在时,判断对应元素定位失败。
在一些实施例中,预设的元素可以选取定位较为稳定的元素(例如,定位较为稳定的器官),在对该元素定位时出现定位失败的概率较低,由此实现对元素掩膜进行精确定位。在一些实施例中,由于在腹腔范围中肝部、胃部、脾部、肾部的定位失败的概率较低,胸腔范围中肺部的定位失败的概率较低,这些器官的定位较为稳定,因此肝部、胃部、脾部、肾部可以作为腹腔范围中的预设的器官,即预设的元素可以包括肝部、胃部、脾部、肾部、肺部,或者其他任何可能的器官组织。在一些实施例中,可以基于肝部、胃部、脾部、肾部的定位坐标对腹腔范围中的器官掩膜进行再次定位。在一些实施例中,可以基于肺部的定位坐标对胸腔范围中的器官掩膜进行定位。
在一些实施例中,可以以预设的元素的定位坐标为基准坐标,对元素掩膜进行再次定位。在一些实施例中,当定位失败的元素位于腹腔范围时,则以肝部、胃部、脾部、肾部的定位坐标作为再次定位的坐标,据此对腹腔中定位失败的元素进行再次定位。在一些实施例中,当定位失败的元素位于胸腔范围时,则以肺部的定位坐标作为再次定位的坐标,据此对胸腔中定位失败的元素进行再次定位。仅作为示例,当定位失败的元素位于腹腔范围时,可以以肝顶、肾底、脾左、肝右的定位坐标作为再次定位的横断面方向(上侧、下侧)、冠状面方向(左侧、右侧)的坐标,并取这四个器官坐标的最前端和最后端作为新定位的矢状面方向(前侧、后侧)的坐标,据此对腹腔中定位失败的元素进行再次定位。仅作为示例,当定位失败的元素位于胸腔范围时,以肺部定位坐标构成的外接矩形框扩张一定像素,据此对胸腔中定位失败的元素进行再次定位。
基于预设的元素的定位坐标,对元素掩膜进行精确定位,能够提高分割精确度,并降低分割时间,从而提高了分割效率,同时减少了分割计算量,节约了内存资源。
步骤540,基于掩膜的定位信息,对至少一个元素进行精准分割。
图9是根据本说明书一些实施例所示的对元素进行精准分割过程的示例性流程图。
在一些实施例中,步骤540中,基于掩膜的定位信息,对至少一个元素进行精准分割,可以包括以下子步骤:
子步骤541,对至少一个元素进行初步精准分割。初步精准分割可以是根据粗分割的元素掩膜的定位信息,进行的精准分割。在一些实施例中,可以根据输入数据和粗分割定位的外接矩形框,对元素进行初步精准分割。通过初步精准分割可以生成精准分割的元素掩膜。
子步骤542,判断元素掩膜的定位信息是否准确。通过步骤542,可以判断粗分割得到的元素掩膜的定位信息是否准确,进一步判断粗分割是否准确。
在一些实施例中,可以对初步精准分割的元素掩膜进行计算获得其定位信息,将粗分割的定位信息与精准分割的定位信息进行比较。在一些实施例中,可以对粗分割的元素掩膜的外接矩形框,与精准分割的元素掩膜的外接矩形框进行比较,确定两者的差别大小。在一些实施例中,可以在三维空间的6个方向上(即外接矩形框的整体在三维空间内为一个立方体),对粗分割的元素掩膜的外接矩形框,与精准分割的元素掩膜的外接矩形框进行比较,确定两者的差别大小。仅作为示例,可以计算粗分割的元素掩膜的外接矩形框每个边与精准分割的元素掩膜的外接矩形框每个边的重合度,或者计算粗分割的元素掩膜的外接矩形框6个顶点坐标与精准分割的元素掩膜的外接矩形框的差值。
在一些实施例中,可以根据初步精准分割的元素掩膜的定位信息,来判断粗分割的元素掩膜的定位信息是否准确。在一些实施例中,可以根据粗分割的定位信息与精准分割的定位信息的差别大小,来确定判断结果是否准确。在一些实施例中,定位信息可以是元素掩膜的外接矩形(如外接矩形框),根据粗分割的元素掩膜的外接矩形与精准分割的元素掩膜的外接矩形,判断粗分割元素掩膜的外接矩形是否准确。此时,粗分割的定位信息与精准分割的定位信息的差别大小可以是指,粗分割外接矩形框与精准分割外接矩形框中相距最近的边框线之间的距离大小。在一些实施例中,当粗分割的定位信息与精准分割的定位信息差别较大(即粗分割外接矩形框与精准分割外接矩形框中相距最近的边框线之间的距离较大),则判断粗分割的定位信息准确;当差别较小(即粗分割外接矩形框与精准分割外接矩形框中相距最近的边框线之间的距离较小)时,则判断粗分割的定位信息不准确。需要注意的是,粗分割外接矩形框是对原始粗分割贴近元素的边框线上进行了像素扩张(例如,扩张15-20个体素)得到的。在一些实施例中,可以基于粗分割的外接矩形框与精准分割的外接矩形框中相距最近的边框线之间的距离与预设阈值的大小关系,来确定粗分割的定位信息是否准确,例如,当距离小于预设阈值时确定为不准确,当距离大于预设阈值时确定为准确。在一些实施例中,为了保障判断准确度,预设阈值的取值范围可以是小于或等于5体素。
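上述按三维空间6个方向比较粗分割与精准分割外接矩形框边框线、并以预设阈值判断各方向是否准确的过程,可以示意如下(方向命名与阈值取值为说明用的假设,按正文约定,距离小于阈值的方向判为不准确):

```python
def inaccurate_directions(coarse_box, fine_box, thr=5):
    """返回粗分割外接矩形框中"不准确"的方向列表。

    coarse_box / fine_box: 6个方向上边框线的坐标,
    依次为 (x-, x+, y-, y+, z-, z+),单位为体素。
    对应边框线距离小于阈值thr时,该方向判为不准确,需滑窗调整。
    """
    labels = ("x-", "x+", "y-", "y+", "z-", "z+")
    return [lab for lab, c, f in zip(labels, coarse_box, fine_box)
            if abs(c - f) < thr]
```

例如,仅某一方向上两框边框线相距小于5体素时,只有该方向需要调整。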
图10至图11是根据本说明书一些实施例所示的元素掩膜的定位信息判断的示例性示意图。图12A是根据本说明书一些实施例所示的基于元素掩膜的定位信息判断滑动方向的示例性示意图。
其中,图10、图11中显示有粗分割得到的元素掩膜A以及元素掩膜A的外接矩形框B(即元素掩膜A的定位信息),以及根据粗分割的外接矩形框初次精准分割后的外接矩形框C,图12A中还示出了粗分割的外接矩形框B滑动后得到的滑窗B1,其中,(a)为滑动操作前的示意图,(b)为滑动操作后的示意图。另外,方便起见,以三维外接矩形框的一个平面内的平面矩形框进行示例说明,可以理解三维外接矩形框还存在其他5个平面矩形框,即在进行三维外接矩形框的具体计算时存在6个方向的边框线,这里仅以某一平面的4个边框线进行说明。
仅作为示例,如图10所示,精准分割外接矩形框C中的右边边框线与粗分割的外接矩形框B对应的边框线差别较小,由此可以判断粗分割外接矩形框B右边对应的方向上是不准确的,需要对右边边框线进行调整。但C中的上边、下边以及左边边框线与B中的上边、下边以及左边差别较大,由此可以判断粗分割外接矩形框B上边、下边以及左边对应的方向上是准确的。仅作为示例,如图11所示,精准分割外接矩形框C中4个边的边框线与粗分割的外接矩形框B对应边框线差别均较大,可以判断粗分割外接矩形框B中4个边的边框线均是准确的。需要注意的是,对于元素掩膜A共有6个方向,图10、图11中仅以4个边框线进行示意进行说明,实际情况中会对元素掩膜A中的6个方向的12个边框线进行判断。
子步骤543a,当判断结果为不准确,基于自适应滑窗获取准确的定位信息。在一些实施例中,当粗分割结果不准确时,对其精准分割获取到的元素大概率是不准确的,可以对其进行相应自适应滑窗计算,并获取准确的定位信息,以便继续进行精准分割。
在一些实施例中,基于自适应滑窗获取准确的定位信息,可以按以下方式实施:确定定位信息不准确的至少一个方向;根据重叠率参数,在所述方向上进行自适应滑窗计算。在一些实施例中,可以确定外接矩形框不准确的至少一个方向;确定粗分割外接矩形框不准确后,根据输入的预设重叠率参数,将粗分割外接矩形框按照相应方向滑动,即进行滑窗操作,并重复该滑窗操作直至所有外接矩形框完全准确。其中,重叠率参数指初始外接矩形框与滑动之后的外接矩形框之间重叠部分面积占初始外接矩形框面积的比例,当重叠率参数较高时,滑窗操作的滑动步长较短。在一些实施例中,若想保证滑窗计算的过程更加简洁(即滑窗操作的步骤较少),可以将重叠率参数设置的较小;若想保证滑窗计算的结果更加准确,可以将重叠率参数设置的较大。在一些实施例中,可以根据当前重叠率参数计算进行滑窗操作的滑动步长。根据图10的判断方法可知,图12A中粗分割的外接矩形框B的右边和下边边框线对应的方向上是不准确的。为方便描述,这里将外接矩形框B的右边边框线对应的方向记为第一方向(第一方向垂直于B的右边边框线),下边边框线对应的方向记为第二方向(第二方向垂直于B的下边边框线)。仅作为示例,如图 12A所示,假设外接矩形框B的长度为a,当重叠率参数为60%时,可以确定对应步长为a*(1-60%),如上述的,外接矩形框B的右边框线可以沿着第一方向滑动a*(1-60%)。同理,外接矩形框B的下边框线可以沿着第二方向滑动相应步长。外接矩形框B的右边边框线以及下边边框线分别重复相应滑窗操作,直至外接矩形框B完全准确,如图12A(b)中所示的滑窗B1。结合图10及图12A,当确定了粗分割的外接矩形框(即目标结构掩膜的定位信息)不准确时,对精分割外接矩形框上6个方向上边框线的坐标值与粗分割外接矩形框上6个方向上边框线的坐标值进行一一比对,当差距值小于坐标差值阈值(例如,坐标差值阈值为5pt)时(其中坐标差值阈值可以根据实际情况进行设定,在此不做限定),可以判断该外接矩形框的边框线为不准确的方向。
再例如,如图10所示,将精分割外接矩形框C影像中4条边对应的4个方向的像素点坐标,与粗分割外接矩形框B影像中4条边框线对应的4个方向的像素点坐标进行一一比对,其中,当一个方向的像素点坐标的差值小于坐标差值阈值8pt时,则可以判定图10中的粗分割外接矩形框该方向不准确。如,上边差值为20pt、下边差值为30pt、右边差值为1pt,左边为50pt,则右边对应的方向不准确,上边、下边、左边对应的方向准确。
再例如,结合图12A,其中B1为粗分割的外接矩形框B滑动后得到的外接矩形框(也称为滑窗),可以理解的是,滑窗为符合预期精确度标准的粗分割外接矩形框,需要将粗分割外接矩形框B的边框线(例如,右边边框线、下边边框线)分别沿着相应方向(例如,第一方向、第二方向)滑动对应的步长至滑窗B1的位置。其中,依次移动不符合标准的每条边框线对应的方向,例如,先滑动B的右边边框线,再滑动B的下边边框线至滑窗的指定位置,而B左边和上边对应的方向是标准的,则不需要进行滑动。可以理解的是,每一边滑动的步长取决于B1与B的重叠率,其中,重叠率可以是粗分割外接矩形框B与滑窗B1当前的重叠面积占总面积的比值,例如,当前的重叠率为40%等等。需要说明的是,粗分割外接矩形框B的边框线的滑动顺序可以是从左到右、从上到下的顺序,或者是其他可行的顺序,在此不做进一步限定。
图12B-图12E是根据本说明书一些实施例所示的滑窗后进行精准分割的示例性示意图。结合图12B-12E,在一些实施例中,基于原粗分割外接矩形框(原滑窗),自适应滑窗后获取准确的粗分割外接矩形框,可以获取准确的外接矩形框的坐标值,并基于坐标值和重叠率参数,对新滑窗进行精准分割,将精准分割结果与初步精准分割结果叠加,得到最终精准分割结果。具体地,参见图12B,可以对原滑窗B进行滑窗操作,得到滑窗B1(滑窗操作后的最大范围的外接矩形框),B沿第一方向滑动对应步长得到滑窗B1-1,然后对滑窗B1-1的全域范围进行精准分割,得到滑窗B1-1的精准分割结果。进一步地,参见图12C,B可以沿第二方向滑动对应步长得到滑窗B1-2,然后对滑窗B1-2的全域范围进行精准分割,得到滑窗B1-2的精准分割结果。再进一步地,参见图12D,B滑动可以得到滑窗B1-3(如B可以按照图12C所示滑动操作得到滑窗B1-2,再由滑窗B1-2滑动得到滑窗B1-3),然后对滑窗B1-3的全域范围进行精准分割,得到滑窗B1-3的精准分割结果。将滑窗B1-1、滑窗B1-2以及滑窗B1-3的精准分割结果与初步精准分割结果叠加,得到最终精准分割结果。需要说明的是,滑窗B1-1、滑窗B1-2以及滑窗B1-3的尺寸与B的尺寸相同。滑窗B1是原滑窗B进行连续滑窗操作,即滑窗B1-1、滑窗B1-2以及滑窗B1-3得到的最终滑窗结果。在一些实施例中,滑窗B1-1、滑窗B1-2以及滑窗B1-3的精准分割结果与初步精准分割结果进行叠加时,可能存在重复叠加部分,例如,图12E中,滑窗B1-1和滑窗B1-2之间可能存在交集部分,在进行分割结果叠加时,该交集部分可能被重复叠加。针对这种情况,可以采用下述方法进行处理:对于元素掩膜A的某一部分,若一个滑窗对该部分的分割结果准确,另一滑窗的分割结果不准确,则将分割结果准确的滑窗的分割结果作为该部分的分割结果;若两个滑窗的分割结果都准确,则将右侧滑窗的分割结果作为该部分的分割结果;若两个滑窗的分割结果都不准确,则将右侧滑窗的分割结果作为该部分的分割结果,并继续进行精准分割,直至分割结果准确。
在一些实施例中,如图9所示,当判断结果为不准确,基于自适应滑窗获取准确的定位信息是一个循环过程。具体地,在对比精准分割边框线和粗分割边框线后,通过自适应滑窗可以得到更新后的精准分割外接矩形框坐标值,该精准分割外接矩形框扩张一定的像素后设定为新一轮循环的粗分割外接矩形框,然后对新的外接矩形框再次进行精准分割,得到新的精准分割外接矩形框,并计算其是否满足准确的要求。满足准确要求,则结束循环,否则继续循环。在一些实施例中,可以利用深度卷积神经网络模型对粗分割中的至少一个元素进行精准分割。在一些实施例中,可以利用粗分割前初始获取的历史医学影像作为训练数据,以历史精准分割结果数据作为标签,训练得到深度卷积神经网络模型。在一些实施例中,可以从医学扫描设备获取扫描对象的历史扫描的医学影像及历史精准分割结果数据。在一些实施例中,可以从终端130、处理设备140和存储设备150获取扫描对象的历史扫描的医学影像及历史精准分割结果数据。
子步骤543b,当判断结果为准确,将初步精准分割结果作为分割结果输出。
在一些实施例中,当判断结果(即粗分割结果)准确时,可以确定通过该粗分割结果进行精准分割获取到的元素的定位信息是准确的,可以将初步精准分割结果输出。
在一些实施例中,可以输出上述进行精准分割的至少一个元素结果数据。在一些实施例中,为了进一步降低噪声及优化影像显示效果,可以在分割结果输出之前进行影像后处理操作。影像后处理操作可以包括对影像进行边缘光滑处理和/或影像去噪等。在一些实施例中,边缘光滑处理可以包括平滑处理或模糊处理(blurring),以便减少医学影像的噪声或者失真。在一些实施例中,平滑处理或模糊处理可以采用以下方式:均值滤波、中值滤波、高斯滤波以及双边滤波。
图13是根据本说明书一些实施例所示的分割结果的示例性效果对比图。
如图13所示,左边上下分别为采用传统技术的粗分割结果的横断面医学影像和立体医学影像,右边分别为采用本申请实施例提供的器官分割方法的横断面医学影像和立体医学影像。经过对比可知,右边分割结果影像显示的目标器官分割结果,相比左边分割结果影像显示的目标器官分割结果,获取的目标器官更完整,降低了分割器官缺失的风险,提高了分割精准率,最终提高了整体分割效率。
步骤460,对第一医学影像与第二医学影像进行配准,确定手术中第三目标结构集的空间位置。
关于配准确定手术中第三目标结构集的空间位置的更多描述可以参见图2的步骤220中的说明。
在一些实施例中,第四目标结构集也可以视为是第三目标结构集的一部分,例如,不可介入区域、靶器官外部所有重要器官。
在一些实施例中,第一医学影像(即对手术前增强影像分割得到的手术前第一目标结构集的分割影像),可以包括第一目标结构集(例如,术前目标器官内的血管、术前目标器官、术前病灶)的精确结构特征;第二医学影像(即手术中扫描影像分割得到的手术中第二目标结构集的分割影像),可以包括第二目标结构集(例如,术中目标器官、术中病灶、术中不可介入区域/所有重要器官)的精确结构特征。在一些实施例中,在配准之前,可以对第一医学影像、第二医学影像进行目标结构集外观特征与背景的分离处理。在一些实施例中,外观特征与背景的分离处理可以采用基于人工神经网络(线性决策函数等)、基于阈值的分割方法、基于边缘的分割方法、基于聚类分析的图像分割方法(如K均值等)或者其他任何可行的算法,如基于小波变换的分割方法等等。
下面以第一医学影像包括术前目标器官(如,靶器官)内的血管和术前目标器官的结构特征(即第一目标结构集包括目标器官内的血管和目标器官),第二医学影像包括术中目标器官、术中病灶、术中不可介入区域/所有重要器官的结构特征(即第二目标结构集包括目标器官、病灶、不可介入区域/所有重要器官)为例,对配准过程进行示例性描述。可以理解的是,病灶的结构特征并不限于包括在第二医学影像中,在其他实施例中,病灶的结构特征也可以包括在第一医学影像中,或者病灶的结构特征同时包括在第一医学影像和第二医学影像中。
图14是本说明书一些实施例中所示的对第一医学影像与第二医学影像进行配准过程的示例性流程图。
步骤461,对第一医学影像与第二医学影像进行配准,确定配准形变场。
关于配准以及配准过程的说明可以参见图2的相关描述,此处不再赘述。
图15至图16是本说明书一些实施例中所示的确定配准形变场过程的示例性流程图。图17是本说明书一些实施例中所示的经过分割得到第一医学影像、第二医学影像的示例性示意图。
在一些实施例中,步骤461中,对第一分割影像与第二分割影像进行配准,确定配准形变场的过程,可以包括以下几个子步骤:
子步骤4611,基于元素之间的配准,确定第一初步形变场。
在一些实施例中,元素可以是第一医学影像、第二医学影像的元素轮廓(例如,器官轮廓、血管轮廓、病灶轮廓)。元素之间的配准可以是指元素轮廓(掩膜)所覆盖的影像区域之间的配准。例如图16和图17中的手术前增强影像经过分割后得到目标器官(如靶器官)的器官轮廓A所覆盖的影像区域(左下图中虚线区域内灰度相同或基本相同的区域)、手术中扫描影像中经过分割得到目标器官(如靶器官)的器官轮廓B所覆盖的影像区域(右下图中虚线区域内灰度相同或基本相同的区域)。
在一些实施例中,通过器官轮廓A所覆盖的影像区域与器官轮廓B所覆盖的影像区域之间的区域配准,得到第一初步形变场(如图16中的形变场1)。在一些实施例中,第一初步形变场可以是局部形变场。例如,通过肝脏术前轮廓A与术中轮廓B得到关于肝脏轮廓的局部形变场。
子步骤4612,基于元素之间的第一初步形变场,确定全图的第二初步形变场。
全图可以是包含元素的区域范围影像,例如,目标器官为肝脏时,全图可以是整个腹腔范围的影像。又例如,目标器官为肺时,全图可以是整个胸腔范围的影像。
在一些实施例中,可以基于第一初步形变场,通过插值确定全图的第二初步形变场。在一些实施例中,第二初步形变场可以是全局形变场。例如,图16中的通过形变场1插值确定全图尺寸的形变场2。
浮动影像可以是待配准的图像,例如,手术前增强影像、手术中扫描影像。例如,将手术中扫描影像配准到手术前扫描影像时,浮动影像为手术中扫描影像。可以通过配准形变场对手术中扫描影像进行配准,以使与手术前扫描影像空间位置一致。又例如,将手术前增强影像配准到手术中扫描影像时,浮动影像为手术前增强影像。可以通过配准形变场对手术前扫描影像进行配准,以使与手术中扫描影像空间位置一致。浮动影像的配准图可以是配准过程中得到的中间配准结果的图像。以手术前增强影像配准到手术中扫描影像为例,浮动影像的配准图可以是配准过程中得到的中间手术中扫描影像。为了便于理解,本说明书实施例以手术前增强影像配准到手术中扫描影像为例,对配准过程进行详细说明。
在一些实施例中,如图16所示,基于获取到的全图的形变场2,对浮动影像,即手术前增强影像进行形变,确定手术前增强影像的配准图,即中间配准结果的手术中扫描影像。例如,如图16所示,基于获取到肝脏所处腹腔范围的形变场,对手术前增强影像(腹腔增强影像)进行形变,获取到其配准图。
子步骤4614,对浮动影像的配准图与参考图像中第一灰度差异范围的区域进行配准,得到第三初步形变场。
在一些实施例中,参考图像是指配准前的目标图像,也可以称为未进行配准的目标图像。例如,手术前增强影像配准到手术中扫描影像时,参考图像是指未进行配准动作的手术中扫描影像。在一些实施例中,第三初步形变场可以是局部形变场。在一些实施例中,子步骤4614可以按以下方式实施:对浮动影像的配准图和参考图像的不同区域分别进行像素灰度计算,获得相应灰度值;计算浮动图像的配准图的灰度值与参考图像的对应区域的灰度值之间的差值;所述差值在第一灰度差异范围时,分别将浮动影像的配准图与参考图像的对应区域进行弹性配准,获得第三初步形变场。在一些实施例中,所述差值在第一灰度差异范围时,可以表示浮动影像的配准图中的一个区域与参考图像中对应区域的差异不大或比较小。例如,第一灰度差异范围为0至150,浮动影像的配准图中区域Q1与参考图像中同一区域的灰度差值为60,浮动影像的配准图中区域Q2与参考图像中同一区域的灰度差值为180,则两个图像(即浮动影像的配准图和参考图像)的区域Q1的差异不大,而区域Q2的差异较大,仅对两个图像中的区域Q1进行配准。在一些实施例中,如图16所示,对浮动图像的配准图与参考图像中的符合第一灰度差异范围的区域(差异不太大的区域)进行弹性配准,得到形变场3(即上述的第三初步形变场)。
子步骤4615,基于第三初步形变场,确定全图的第四初步形变场。
在一些实施例中,基于第三初步形变场,进行插值获得全图的第四初步形变场。在一些实施例中,第四初步形变场可以是全局形变场。在一些实施例中,可以通过该步骤将局部的第三初步形变场获取到关于全局的第四初步形变场。例如,图16中的通过形变场3插值确定全图尺寸的形变场4。
子步骤4616,基于第四初步形变场,对第二灰度差异范围的区域进行配准,获得最终配准的配准图。
在一些实施例中,第二灰度差异范围的区域可以是浮动影像的配准图灰度值与参考图像灰度值相比,灰度值差值较大的区域。在一些实施例中,可以设置一个灰度差异阈值(如灰度差值阈值为150),浮动影像的配准图灰度值与参考图像灰度值的差值小于灰度差异阈值的区域为第一灰度差异范围的区域,大于灰度差异阈值的则属于第二灰度差异范围的区域。
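按灰度差异阈值将各区域划分为第一灰度差异范围与第二灰度差异范围的做法,可示意如下(阈值150与区域灰度差值沿用上文示例):

```python
def split_by_gray_diff(region_diffs, thr=150):
    """region_diffs: {区域名: 配准图与参考图在该区域的灰度差值}

    返回 (第一灰度差异范围的区域列表, 第二灰度差异范围的区域列表):
    差值不超过阈值的区域先行弹性配准,差值较大的区域留待后续形变场处理。
    """
    first = [k for k, d in region_diffs.items() if d <= thr]
    second = [k for k, d in region_diffs.items() if d > thr]
    return first, second
```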
在一些实施例中,最终配准的配准图可以是基于至少一个形变场对浮动影像(例如,手术前增强影像)进行多次形变,获得最终与手术中扫描影像空间位置、解剖位置相同的图像。在一些实施例中,如图16所示,基于第四初步形变场,对第二灰度差异范围(即灰度差异比较大)的区域进行配准,获得最终配准的配准图。例如,灰度值差异比较大的脾脏之外的区域,针对该区域通过形变场4进行形变,获得最终的配准图。
在一些实施例中,利用图15-图16中所描述的配准方法,可以将浮动影像中进行了分割,并且参考图像中没有分割的元素(例如,靶器官内的血管),从浮动影像中映射到参考图像中。以浮动影像是手术前增强影像,参考图像是手术中扫描影像为例,靶器官内的血管在手术前增强影像中进行了分割,并且在手术中扫描影像中没有分割,通过配准可以将靶器官内的血管映射到手术中扫描影像。可以理解的是,对于快速分割模式下的不可介入区域以及精准分割模式下的所有重要器官的配准也可以采用图15-图16的配准方法,或者仅通过对应分割方法也可以实现类似的效果。
步骤462,基于配准形变场和手术前增强影像中的第一目标结构集中的至少部分元素的空间位置,确定手术中相应元素的空间位置。在一些实施例中,可以基于配准形变场和手术前增强影像中的目标器官内的血管,确定手术中目标器官内的血管(以下简称为血管)的空间位置。
在一些实施例中,可以基于下述公式(1),基于配准形变场和手术前增强影像中的血管,确定手术中血管的空间位置:
IQ((x,y,z)+u(x,y,z))    (1)
其中,IQ表示手术前增强影像,(x,y,z)表示血管的三维空间坐标,u(x,y,z)表示由手术前增强影像到手术中扫描影像的配准形变场,IQ((x,y,z)+u(x,y,z))表示血管在手术中扫描影像中的空间位置。在一些实施例中,u(x,y,z)也可以理解为浮动图像中元素(例如,靶器官内的血管)的三维坐标至最终配准的配准图中的三维坐标的偏移。
由此,通过步骤461中确定的配准形变场,可以对手术前增强影像中的血管进行形变,生成与其空间位置相同的手术中血管的空间位置。
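公式(1)所描述的、将配准形变场u(x,y,z)作为坐标偏移作用于血管点坐标的计算,可以示意如下(形变场此处用一个简单的函数代替,仅为说明,实际形变场由配准获得):

```python
def warp_point(point, deform):
    """point: 血管上某点的三维坐标 (x, y, z)
    deform: 形变场函数,返回该点的偏移量 u(x, y, z)
    返回该点经形变后在手术中扫描影像中的对应坐标。
    """
    u = deform(point)
    return tuple(p + d for p, d in zip(point, u))

# 示意:一个恒定平移的形变场(假设)
constant_shift = lambda p: (2.0, -1.0, 0.5)
```

对手术前增强影像中分割出的全部血管点逐点执行该映射,即得到手术中血管的空间位置。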
在一些实施例中,处理设备可以基于确定的手术中血管和病灶(包括在手术中扫描影像的第二分割影像中)的空间位置,计算病灶中心点,并生成病灶周边安全区域和潜在进针区域。在一些实施例中,可以根据确定的可介入区域、不可介入区域,确定病灶周边安全区域和潜在进针区域。在一些实施例中,可以根据潜在进针区域及基本避障约束条件,规划由经皮进针点到病灶中心点之间的基准路径。在一些实施例中,基本避障约束条件可以包括但不限于路径的入针角度、路径的入针深度、路径与血管及重要脏器之间不相交等。
步骤470,基于手术中第三目标结构集的空间位置规划介入路径,基于介入路径进行风险评估。
在一些实施例中,第三目标结构集中的元素(例如,靶器官、病灶、靶器官内的血管、不可介入区域、所有重要器官)的空间位置能够更全面、准确的反映扫描对象(例如,患者)的当前状况。介入路径可以基于第三目标结构集的空间位置进行规划,以使手术器械(例如,穿刺针)能够有效避开靶器官内的血管、不可介入区域和/或所有重要器官顺利抵达病灶的同时,降低手术风险。
在一些实施例中,第三目标结构集的元素选择可以基于规划介入路径的模式。在一些实施例中,规划介入路径的模式不同时,第三目标结构集中用于判定介入路径的风险评估的元素可以不同。例如,在快速规划模式下,第三目标结构集中用于判定介入路径的风险评估的元素可以包括靶器官内的血管和不可介入区域。又例如,在精准规划模式下,第三目标结构集中用于判定介入路径的风险评估的元素可以包括靶器官内的血管和所有的重要器官。
关于风险评估的更多说明可以参见图2的相关描述,此处不再赘述。
图18是本说明书一些实施例中所示的快速规划模式下确定第三目标结构集中至少部分元素的介入风险值的示例性流程图。
在一些实施例中,快速规划模式下确定第三目标结构集中至少部分元素的介入风险值的流程700,可以包括以下步骤:
步骤710,根据元素与介入路径的最短距离,确定元素的风险等级;
步骤720,根据风险等级确定元素的介入风险值。
在一些实施例中,流程700中的元素可以包括靶器官内的血管和不可介入区域。具体地,介入路径穿过第三目标结构集中的靶器官时,流程700中的元素可以是靶器官内的血管以及不可介入区域。相同距离下,靶器官内的血管和不可介入区域对于介入路径的风险等级不同,对应的风险值也不同。介入路径不穿过第三目标结构集中的靶器官时,流程700中的元素可以是不可介入区域。因此,可以根据靶器官内的血管与介入路径的最短距离,以及不可介入区域与介入路径的最短距离,以分别确定对应元素的风险等级,从而确定元素的相应介入风险值。
由以上描述可知,介入路径穿过或者不穿过靶器官的两种情况下,需要计算风险等级以及风险值的元素不同,下面针对两种情况分别对元素的风险值的计算方式进行描述。
介入路径穿过靶器官时,需计算靶器官内的血管以及不可介入区域的风险等级以及介入风险值。靶器官内的血管的介入风险值计算方式为:路径距离血管的最近直线距离为L1,0<L1<M1时,风险等级最高(记为第一风险等级),对应的介入风险值为第一介入风险值;M1<L1<N1时,风险等级为第二风险等级,对应的介入风险值为第二介入风险值;N1<L1<P1时,风险等级为第三风险等级,对应的介入风险值为第三介入风险值;L1>P1时,不考虑风险等级以及介入风险值。其中,第一风险等级高于第二风险等级,第二风险等级高于第三风险等级。例如,取值M1=5mm,N1=10mm,P1=15mm时,0<L1<=5时,第一风险等级对应的第一介入风险值可以为5分;5<L1<=10时,第二风险等级对应的第二介入风险值为3分;10<L1<=15时,第三风险等级对应的第三介入风险值为1分;L1>15时,不考虑风险等级以及介入风险值(也可以理解为介入风险值为0分)。
不可介入区域的介入风险值计算方式为:路径距离不可介入区域的最近直线距离为L2时,0<L2<A1时,风险等级最高(记为第一风险等级),对应的介入风险值为第一介入风险值;A1<L2<B1时,风险等级为第二风险等级,对应的介入风险值为第二介入风险值;B1<L2<C1时,风险等级为第三风险等级,对应的介入风险值为第三介入风险值;L2>C1时,不考虑风险等级以及介入风险值。例如,取值A1=3mm,B1=6mm,C1=10mm时,0<L2<=3时,第一风险等级对应的第一介入风险值为5分;3<L2<=6时,第二风险等级对应的第二介入风险值为3分;6<L2<=10时,第三风险等级对应的第三介入风险值为1分;L2>10时,不考虑风险等级以及介入风险值(也可以理解为介入风险值为0分)。
其中,M1>A1,N1>B1,P1>C1,这是由于靶器官内的血管参与距离计算时,通常情况下靶器官内的血管距离介入路径较近,因此靶器官内的血管与介入路径之间的距离的控制较为严格;而不可介入区域距离介入路径较远,因此不可介入区域与介入路径之间的距离的控制较为宽松。例如,靶器官内的血管与介入路径之间的距离,以及不可介入区域与介入路径之间的距离均为5mm时,5mm的距离对于不可介入区域而言更能接受,介入风险值为3分,但是对于靶器官内的血管而言则风险较大,介入风险值为5分。
介入路径不穿过靶器官时,只需计算不可介入区域的风险等级以及介入风险值。这种情况下,不可介入区域的风险等级以及介入风险值的计算方式与穿过靶器官时不可介入区域的计算方式相同,在此不再赘述。
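上述按最短距离分级并赋介入风险值的规则,可以概括为如下示意性函数(距离阈值与分值沿用上文示例:靶器官内的血管取M1=5、N1=10、P1=15,不可介入区域取A1=3、B1=6、C1=10,风险值依次为5/3/1分):

```python
def interv_risk(dist, bounds=(5, 10, 15), scores=(5, 3, 1)):
    """dist: 介入路径与元素(血管或不可介入区域)的最短直线距离(mm)
    bounds: 各风险等级的距离上限(升序);scores: 对应的介入风险值。
    距离超过最大上限时,不考虑风险,风险值记为0分。
    """
    for b, s in zip(bounds, scores):
        if dist <= b:
            return s
    return 0
```

例如,同为5mm的距离,对血管(bounds=(5, 10, 15))得5分,对不可介入区域(bounds=(3, 6, 10))仅得3分,体现了对血管距离更严格的控制。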
在一些实施例中,可以通过计算至少一条介入路径的总风险值;确定总风险值最小的介入路径为最优路径,因为总风险值越小,风险越小。在一些实施例中,可以将多条介入路径的介入风险值分别累加得到总风险值。在一些实施例中,利用总风险值最小的最优路径来规划介入路径。
在快速规划模式下,由于快速分割模式无需分割场景内的所有器官及组织,仅需分割不可介入区域,并通过配准提取术中扫描影像中显影不明显的靶器官内的血管(和病灶)位置,在进行介入路径规划时,仅需绕过不可介入区域,使介入路径直达病灶即可,有助于介入规划及其介入手术的工作效率提升。
图19是本说明书一些实施例中所示的精准规划模式下确定第三目标结构集中至少部分元素的介入风险值的示例性流程图。
在一些实施例中,精准规划模式下确定第三目标结构集中至少部分元素的介入风险值的流程800可以包括以下步骤:
步骤810,根据元素与介入路径的最短距离,确定元素的风险等级;
步骤820,根据风险等级确定元素的介入风险值;
步骤830,基于元素的预设规则确定不同优先级,设定介入风险值的相应预设权重。
在一些实施例中,流程800中的元素可以包括靶器官内的血管和所有重要器官。具体地,介入路径穿过第三目标结构集中的靶器官时,流程800中的元素可以是靶器官内的血管以及所有重要器官。介入路径不穿过第三目标结构集中的靶器官时,流程800中的元素可以是所有重要器官。预设规则可以用于表征不同元素对于规划介入路径的不可介入重要程度。例如,在某一规划介入路径中,靶器官内的血管以及所有重要器官中的每个重要器官对于该规划介入路径的不可介入重要程度不同。不同元素在预设规则下具有不同的优先级。在一些实施例中,可以根据元素的预设规则确定每个元素的优先级。
在一些实施例中,在步骤830中,基于元素的预设规则确定不同优先级,设定介入风险值的相应预设权重,可以按以下方式实施:基于元素的不同优先级,设定介入风险值的相应预设权重。在一些实施例中,可以根据分割区域(即,元素)的不可介入重要程度确定分割区域的不同优先级,例如血管、重要脏器等一定不介入的分割区域,设定为较高优先级。在一些实施例中,可以为不同优先级的元素赋予不同预设权重。在一些实施例中,优先级越高,相应预设权重越大,优先级越低,相应预设权重越小。例如,预设权重可以用W表示,W∈{1,0.8,0.6},当优先级越高,则可以设定较大预设权重(如W为1),当优先级越低,可以设定较小预设权重(如W为0.6)。
介入路径穿过靶器官时,需计算靶器官内的血管以及所有重要器官的风险等级以及介入风险值。靶器官内的血管的风险等级以及介入风险值的计算方式与快速模式下的计算方式相同,在此不再赘述。重要器官的风险等级以及介入风险值的计算方式可以为:根据介入路径与其邻近的重要器官之间的最近距离(即针道与邻近器官最靠近针道的点之间的距离),确定分割器官区域的风险等级和介入风险值。在一些实施例中,可以根据针道与其邻近不可穿刺器官之间的最近距离,是否在设定阈值范围,确定分割器官区域的风险等级以及介入风险值。在一些实施例中,设定阈值范围可以利用设定的多个常数阈值来确定,如X、Y和Z来表示,其中X<Y<Z,介入路径与其邻近不可穿刺器官之间的最近距离可以表示为L3,介入风险值可以表示为R,当判断介入路径穿过了器官时,则该规划介入路径立即失效,无需再评估介入风险值;当0<L3<X时,风险等级较高,设定介入风险值R为a;当X<L3<Y时,风险等级次之,设定介入风险值为b;当Y<L3<Z时,风险等级较低,设定介入风险值R为c;当L3>Z时,可以忽略该风险,设定介入风险值为0,其中a>b>c。在一些实施例中,介入路径穿过靶器官时,可以基于靶器官内的血管以及所有重要器官的预设规则确定相应的优先级,并为不同优先级的血管以及所有重要器官的介入风险值赋予不同的权重。
介入路径不穿过靶器官时,只需计算重要器官的风险等级以及介入风险值。这种情况下,重要器官的风险等级以及介入风险值的计算方式与穿过靶器官时的计算方式相同,在此不再赘述。在一些实施例中,介入路径不穿过靶器官时,可以基于所有重要器官的预设规则确定相应的优先级,并为不同优先级的所有重要器官的介入风险值赋予不同的权重。
在一些实施例中,基于介入风险值,规划介入路径,可以按以下方式实施:计算至少一条介入路径的加权风险值;确定加权风险值最小的介入路径为最优路径。在一些实施例中,加权风险值可以由多条介入路径的介入风险值经过加权计算获得。在一些实施例中,利用加权风险值最小的最优路径来规划介入路径。在一些实施例中,加权风险值可以用F表示,加权风险值F可以通过以下计算公式(2)计算得到:
F=∑S*W    (2)
其中,S表示各元素的介入风险值,W表示该元素对应的预设权重。当加权风险值F越小,意味着针道越远离重要器官和血管,风险也就越小。
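公式(2)的加权风险值计算及最优路径选择,可以示意如下(风险值与权重均为示例数据,权重沿用上文 W∈{1, 0.8, 0.6} 的假设):

```python
def weighted_risk(scores, weights):
    """F = Σ S*W:各元素介入风险值S与其预设权重W的加权和。"""
    return sum(s * w for s, w in zip(scores, weights))

def best_path(path_risks):
    """path_risks: {路径名: 加权风险值};返回加权风险值最小的路径。"""
    return min(path_risks, key=path_risks.get)
```

例如,某条路径上三个元素的风险值为5、3、1分,权重为1、0.8、0.6,则F为8.0;在多条候选路径中选择F最小者作为最优路径。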
在精准规划模式下,在进行介入(如穿刺针等)路径规划时,由于可以根据介入风险值情况,对靶器官和场景内其他组织的轮廓设立介入(如穿刺等)优先级,由介入点(如入针点等)到病灶区域规划出合理的路径,能够避开优先级较高的不可介入区域(如血管、重要脏器等),得到潜在的进针空间,从而提高介入规划及其介入手术的工作效率。
需要说明的是,在上述两种模式(即快速规划模式、精准规划模式)下,不同风险等级对应的介入风险值的数值大小可以根据实际情况进行设定,本说明书对此不做进一步限定。
图20是本说明书一些实施例中所示的图像异常检测过程900的示例性流程图。
在一些实施例中,图像异常检测过程900可以包括以下步骤:
步骤910,获取手术中扫描影像;
步骤920,对手术中扫描影像检测图像异常;
步骤930,基于检测到的图像异常,确定相应图像异常类型;
步骤940,基于图像异常类型,确定是否进行定量计算;
步骤950,根据是否进行定量计算的判断结果确定图像异常程度。
在一些实施例中,有关手术中扫描影像的获取方式,可参见图2至图5相关内容描述,在此不再赘述。在一些实施例中,图像异常可以包括图像数据中存在并发症的不合规部分。在一些实施例中,并发症可以包括出血、气胸、积液等。
在一些实施例中,可以利用深度学习建模的生成对抗网络方式检测图像异常,经过建模时的正常数据对比即可检测出图像异常。在一些实施例中,可以采用阈值处理、图像分割等方法中的至少一种检测图像异常。在一些实施例中,阈值处理可以实施为以下方式:由于不同并发症在图像上的反馈是不一致的,气胸、出血、积液等在图像中的像素值分布范围各有差别,可以通过设定像素阈值来区分异常区域的像素值属于哪一种并发症。在一些实施例中,图像分割可以实施为以下方式:在获取图像异常后,采用深度学习的方式对异常进行分割,对像素异常所在区域像素进行分类,判断属于哪一种并发症,如果非并发症,则手术进程可以继续;反之,可以对出血、积液、气胸等并发症进行快速识别和判断。
在一些实施例中,图像异常类型不同时,手术进程不同。例如,图像异常类型为气胸时,可以向操作者发出报警提示,手术进程结束。又例如,图像异常类型为出血或积液时,可以对出血量或积液量进行定量计算,并根据定量计算的结果确定手术进程是继续或结束。在一些实施例中,可以通过图像面积占比来计算出血或积液面积的相应出血量或积液量。在一些实施例中,可以判断出血量或积液量是否超过预设阈值(例如,预设血量阈值、预设液量阈值),当未超过时,少量出血或积液并不会影响介入手术的进程,继续执行手术,并进行持续观察;当出血量或积液量超过预设阈值,则会产生安全问题,此时可以向医生发出提示信息。
在一些实施例中,定量计算的判断结果决定了图像异常程度。例如,出血量或积液量超过预设阈值时,可以确定图像异常程度较高;出血量或积液量未超出预设阈值时,可以确定图像异常程度较低。
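通过图像面积占比对出血量/积液量进行定量计算、并据此判断图像异常程度的过程,可以示意如下(占比阈值为说明用的假设值):

```python
def abnormal_fraction(mask):
    """mask: 二值异常区域掩膜(1表示出血/积液像素),返回面积占比。"""
    flat = [v for row in mask for v in row]
    return sum(flat) / len(flat)

def severity(mask, thr=0.1):
    """占比超过预设阈值时判为异常程度较高,否则较低。"""
    return "high" if abnormal_fraction(mask) > thr else "low"
```

异常程度较高时可以提示操作者停止介入,较低时可以继续手术并持续观察。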
在一些实施例中,可以基于图像异常类型和图像异常程度,进行图像异常程度的相应报警提示。例如,图像异常类型为气胸时,可以提示操作者停止介入的提示信息。在一些实施例中,图像异常类型为出血或积液时,可以根据出血或积液的图像异常程度进行不同的报警提示。例如,图像异常程度较高时,可以提示操作者停止介入的提示信息。又例如,图像异常程度较低时,可以提示操作者可以进行介入,并进行持续观察的提示信息。
通过图像异常检测,能够有效检测出术中和术后随时可能发生的并发症,避免危险并发症状况的发生,即便在出现并发症时,也能及时提醒医生,及时中止手术进程,以便进行有利处理,提高介入手术安全性。
图21是本说明书一些实施例中所示的术后评估过程的示例性流程图。
在一些实施例中,术后评估过程的流程1000可以包括以下步骤:
步骤1010,将规划介入路径和实际介入路径配准到手术中扫描影像;
步骤1020,判断实际介入路径与规划介入路径的偏差;
步骤1030,判断偏差与手术中扫描影像的第三目标结构集中的特定元素是否有交集;
步骤1040,根据判断结果确定相应术后反馈信息。
在一些实施例中,规划介入路径可以是基于手术前增强影像和手术中扫描影像得到的;实际介入路径可以是基于手术后扫描影像得到的。在一些实施例中,手术后扫描影像是指扫描对象(如患者等)在介入手术后,经由医学扫描设备扫描得到的影像。有关手术后扫描影像的获取方式,可参见前文(例如,图2)相关内容描述,在此不再赘述。手术中扫描影像的第三目标结构集中的特定元素可以是第四目标结构集。即,快速规划模式下,特定元素是指不可介入区域;精准规划模式下,特定元素是指外部所有的重要器官/组织。
在一些实施例中,可以将规划介入路径和实际介入路径配准到手术中扫描影像,并进行配准计算,获取配准形变场。在一些实施例中,配准后的介入路径可以进行显示,并对配准后的实际介入路径和规划介入路径进行差别计算。如果实际介入路径与规划介入路径存在偏差,提取偏差部分,并判断所述偏差与手术中扫描影像的不可介入区域或所有重要器官/组织是否存在交集。如果交集不为空值,证明实际介入路径可能经过不可介入区域或所有重要器官/组织,可能对实质脏器造成影响,此时,可以确定对应的术后反馈信息为向临床医生发出提醒信息;如果交集为空值,则确定对应的术后反馈信息为不予提醒;如果实际介入路径与规划介入路径不存在差别,则确定对应的术后反馈信息为不予提醒。
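上述"提取偏差部分并判断其与不可介入区域/重要器官是否有交集"的术后判断,可以用体素集合运算示意(体素坐标均为假设,实际体素来自配准后的影像):

```python
def postop_feedback(actual_path, planned_path, forbidden):
    """actual_path / planned_path / forbidden: 体素坐标集合。

    偏差部分定义为实际路径中超出规划路径的体素;
    偏差与不可介入区域/重要器官交集非空时,返回True(需向临床医生提醒)。
    """
    deviation = set(actual_path) - set(planned_path)   # 提取偏差部分
    return bool(deviation & set(forbidden))            # 交集非空则提醒
```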
图22是本说明书一些实施例中所示的术后评估过程的示例性流程图。
在一些实施例中,在获取术前增强影像和术后扫描影像后,对病灶(原病灶)及其周边器官区域进行分割,截取该区域的感兴趣区域进行配准,使得术前病灶和术后原病灶区域的位置对应起来,然后进行合并显示,以便于医生对手术结果进行分析。在一些实施例中,得到病灶和原病灶区域的分割结果后,根据分割结果截取该区域,一方面,可以计算病灶在术后变化的面积,用于评估手术的疗效;另一方面,可以对该区域的像素进行分析,以判断是否还有病灶存在,以及病灶的面积范围。在一些实施例中,基于术后扫描影像,可以执行图像异常检测,也即术后并发症检测识别操作,其具体方式可参见图20相关描述,在此不再赘述。在一些实施例中,基于术后扫描影像,可以采用过阈值分割和深度学习等方式对实际介入过程中的路径进行提取,将该路径与规划介入路径进行配准(即针道对比),并判断是否存在变动,以评估变动造成的影响,实现精准评估。具体地,当实际介入路径与规划介入路径之间存在偏差时,确定实际介入路径与规划介入路径的偏差,并判断该偏差与手术中扫描影像的第三目标结构集中的特定元素(即第四目标结构集)是否有交集,若交集为空集,则不予提醒;若交集不为空集,则表明路径穿过第四目标结构集,可以向临床医生发出提醒信息。当实际介入路径与规划介入路径之间不存在偏差时,则不予提醒。
图23是根据本说明书一些实施例所示的示例性介入手术引导方法的流程图。在一些实施例中,流程2300可以由处理设备140执行。例如,流程2300可以以程序或指令的形式存储在存储设备(例如,存储设备150、处理设备140的存储单元)中,当处理器执行程序或指令时,可以实现流程2300。在一些实施例中,流程2300可以利用以下未描述的一个或以上附加操作,和/或不通过以下所讨论的一个或以上操作完成。
步骤2310,在不同的时间分别采集目标对象的第一医学影像、第二医学影像、第三医学影像。
在一些实施例中,处理设备可以通过医学扫描设备在不同的时间采集目标对象的第一医学影像、第二医学影像、第三医学影像。在一些实施例中,处理设备可以从医学扫描设备110、存储设备150、处理设备140的存储单元等获取目标对象的第一医学影像、第二医学影像、第三医学影像。
在一些实施例中,第一医学影像、第二医学影像、第三医学影像可以由计算机断层扫描(CT)设备采集。
第一医学影像可以是术前增强图像或术前平扫图像。在一些实施例中,第一医学影像可以在介入手术前采集。介入手术前可以是介入手术进行前一定时间段内,例如,前1小时、前两小时、前5小时、前一天、前两天、前一周等。在一些实施例中,第一医学影像可以在目标对象首诊时、常规体检时、前次介入手术结束后采集。
第二医学影像可以是术中实时影像。在一些实施例中,第二医学影像在介入手术中且穿刺执行之前采集。介入手术中且穿刺执行之前可以是入针前的准备时间。例如,可以在定位时间、消毒时间、局部麻醉时间等采集第二医学影像。又例如,第二医学影像可以是手术中的第一帧实时图像。
第三医学影像可以是介入手术中的实时影像。在一些实施例中,第三医学影像在穿刺执行过程中采集。穿刺执行过程是指从皮肤进针,按照穿刺路径进入靶区,在靶区完成操作以及出针的过程。
在一些实施例中,第一医学影像、第二医学影像、第三医学影像可以通过不同的成像设备采集。例如,第一医学影像可以通过影像室的成像设备采集,第二医学影像、第三医学影像可以通过手术室的成像设备采集。在一些实施例中,第一医学影像、第二医学影像、第三医学影像的图像参数(例如,图像范围、精度、对比度、灰度、梯度等)可以相同或不同。例如,第一医学影像的扫描范围可以大于第二医学影像、第三医学影像的扫描范围,或者,第二医学影像、第三医学影像的精度可以高于第一医学影像。
关于获取第一医学影像、第二医学影像以及第三医学影像的更多说明,可以参见图2的步骤210的相关描述,此处不再赘述。
步骤2320,将第一医学影像、第二医学影像进行配准,得到第四医学影像。
在一些实施例中,第四医学影像中可以包含配准后的介入手术规划信息。在一些实施例中,第一医学影像在介入手术前采集,采集和图像处理的时间相对充裕,第一医学影像扫描范围较大、片层较薄,例如,包括涵盖所有相关组织和/或器官的大量片层。在信息更全面的第一医学影像上规划穿刺路径有利于提高后续介入手术引导的准确性。
在一些实施例中,第二医学影像在介入手术中、穿刺执行之前采集,采集和图像处理的时间相对紧迫,第二医学影像扫描范围较小、片层较薄,例如,仅包括涵盖针尖周围的4到10个片层。可以理解,将第一医学影像、第二医学影像进行配准得到的第四医学影像包含配准后的穿刺规划信息。
关于对第一医学影像和第二医学影像进行配准的说明可以参见图2的步骤220的相关描述,此处不再赘述。
步骤2330,将第四医学影像映射至第三医学影像,以引导介入手术。
关于步骤2330的说明可以参见图2中的相关描述,此处不再赘述。
图24是根据本说明书一些实施例所示的示例性介入手术引导方法的示意图。
在一些实施例中,处理设备通过呼吸门控装置监控目标对象的呼吸。如图24所示,采集第一医学影像时,呼吸门控装置可以获取目标对象所处的呼吸幅度点A。介入手术中、穿刺执行之前,呼吸门控装置可以监控目标对象的呼吸,并且使医学扫描设备在目标对象处于呼吸幅度点A’时采集第二医学影像。处理设备通过对第一医学影像进行处理,得到穿刺规划信息图像,通过第一次配准,得到第一形变信息。处理设备将第一形变信息作用于穿刺规划信息图像,得到第四医学影像,第四医学影像包含第一次配准后的穿刺规划信息。
穿刺执行过程中,目标对象可以自行控制(例如,屏气)呼吸幅度至相同或相近呼吸幅度。或者,处理设备可以通过呼吸门控装置监控目标对象的呼吸幅度。在目标对象调整呼吸至第三呼吸幅度点A”时,医学扫描设备采集第三医学影像,处理设备将第四医学影像映射至第三医学影像,以引导介入手术。如果呼吸门控装置检测到呼吸幅度出现明显的偏差,处理设备可以给出提示和/或中断穿刺;待目标对象调整至相同或相近呼吸幅度时,继续穿刺过程。
通过利用呼吸门控装置,在相同或近似相同的呼吸幅度点下采集第一医学影像、第二医学影像、第三医学影像,可以使呼吸运动造成的图像之间的器官组织的移动较小,有利于提高术前规划的准确度。
在本说明书一些实施例中,通过将介入手术前扫描的大视野薄层图像的规划结果与术中实时影像相结合,既利用了实时影像实时显示病人穿刺部位的状态,又引入了介入手术前详细规划结果及细节信息,规避高风险区域,降低手术风险。并将术中实时穿刺过程以穿刺针针尖位置在视野中央为依据,穿刺针沿着穿刺规划路径穿刺向病灶,CT移床或移动探测器更新扫描范围,获取实时扫描图像,为穿刺过程进行穿刺引导,提高了手术效率,降低手术风险。
图25是根据本说明书另一些实施例所示的示例性介入手术引导方法的另一示意图。
在一些实施例中,处理设备可以不借助呼吸门控装置监控目标对象的呼吸。如图25所示,处理设备在不同的时间分别采集目标对象的第一医学影像、第二医学影像、第三医学影像。处理设备通过对第一医学影像进行处理,得到穿刺规划信息图像,通过第一次配准,得到第一形变信息。处理设备将第一形变信息作用于穿刺规划信息图像,得到第四医学影像,第四医学影像包含第一次配准后的穿刺规划信息。
处理设备将第二医学影像和第三医学影像进行第二次配准,得到第二形变信息,并将第二形变信息作用于第四医学影像,得到第五图像,第五图像中包含经过第二次配准后的穿刺规划信息。处理设备将第五图像映射至第三医学影像以引导介入手术。
由于第二医学影像与第三医学影像都是少片层的数据,第二次配准的计算量小,可在执行手术过程中获取第三医学影像后较短的时间内实现配准,降低手术风险。
本说明书实施例还提供一种手术机器人,包括:机械臂,执行介入手术;以及控制系统,所述控制系统包括一个或多个处理器和存储器,所述存储器包括适于致使所述一个或多个处理器执行包括以下步骤的操作的操作指令:在不同的时间分别采集目标对象的第一医学影像、第二医学影像、第三医学影像;将所述第一医学影像、所述第二医学影像进行配准,得到第四医学影像,所述第四医学影像包含配准后的穿刺规划信息;将所述第四医学影像映射至所述第三医学影像,以引导介入手术。
本说明书实施例还提供一种手术机器人,包括:机械臂,执行介入手术;以及控制系统,所述控制系统包括一个或多个处理器和存储器,所述存储器包括适于致使所述一个或多个处理器执行包括以下步骤的操作的操作指令:在不同的时间分别采集目标对象的第一医学影像、第二医学影像、第三医学影像;将所述第一医学影像、所述第二医学影像进行第一次配准,得到第一形变信息和第四医学影像,所述第四医学影像包含配准后的穿刺规划信息;将所述第二医学影像、所述第三医学影像进行第二次配准,得到第二形变信息;将所述第二形变信息作用于所述第四医学影像,得到第五图像,所述第五图像中包含经过所述第二次配准后的穿刺规划信息;将所述第五图像映射至所述第三医学影像,以引导介入手术。
应当注意的是,上述有关各流程的描述仅仅是为了示例和说明,而不限定本说明书的适用范围。对于本领域技术人员来说,在本说明书的指导下可以对各流程进行各种修正和改变,例如,添加存储步骤等。
图26是根据本说明书一些实施例所示的用于介入手术的医学影像处理系统的示例性模块图。如图26所示,系统2600可以包括获取模块2610、配准模块2620和风险评估模块2630。
获取模块2610,用于获取介入手术前目标对象的第一医学影像以及介入手术中所述目标对象的第二医学影像。
配准模块2620,用于配准所述第二医学影像和所述第一医学影像,得到配准结果。
风险评估模块2630,用于至少基于所述配准结果确定所述目标对象的介入手术规划信息,基于所述介入手术规划信息进行介入手术风险评估,得到对应于所述介入手术规划信息的风险评估结果。
需要说明的是,有关获取模块2610、配准模块2620和风险评估模块2630,执行相应流程或功能实现介入手术影像辅助的更多技术细节,具体参见图1至图25所示的任一实施例描述的用于介入手术医学影像处理方法相关内容,在此不再赘述。
关于用于介入手术医学影像处理系统2600的以上描述仅用于说明目的,而无意限制本说明书的范围。对于本领域普通技术人员来说,在不背离本说明书原则的前提下,可以对上述方法及系统的应用进行各种形式和细节的改进和改变。然而,这些变化和修改不会背离本说明书的范围。在一些实施例中,用于介入手术医学影像处理系统2600可以包括一个或多个其他模块。例如,用于介入手术医学影像处理系统2600可以包括存储模块,以存储由用于介入手术医学影像处理系统2600的模块所生成的数据。在一些实施例中,图26的获取模块2610、配准模块2620和风险评估模块2630可以是一个系统中的不同模块,也可以是一个模块实现上述的两个或两个以上模块的功能。例如,各个模块可以共用一个存储模块,各个模块也可以分别具有各自的存储模块。本说明书描述的示例性实施方式的特征、结构、方法和其它特征可以以各种方式组合以获得另外的和/或替代的示例性实施例。例如,处理设备140和医学扫描设备110可以被集成到单个设备中。诸如此类的变形,均在本说明书的保护范围之内。
本说明书一些实施例还提供了一种用于介入手术医学影像处理装置,包括处理器,处理器用于执行任一实施例描述的用于介入手术医学影像处理方法,具体参见图1至图25相关描述,在此不再赘述。
本说明书一些实施例还提供了一种计算机可读存储介质,该存储介质存储计算机指令,当计算机读取计算机指令时,计算机执行如上述任一实施例的用于介入手术医学影像处理方法,具体参见图1至图25相关描述,在此不再赘述。
本说明书实施例提供的手术影像辅助方法、系统、装置及计算机可读存储介质,至少具有以下有益效果:
(1)首先,结合考虑手术前增强影像对血管、病灶等目标的良好显影效果特点,以及手术中扫描影像接近患者真实情况的优势,在分割过程中采用结合深度学习的由粗到细优化分割方法,通过精准器官定位为精准器官分割提供支持,提高了分割效率和图像处理鲁棒性;
(2)其次,通过在粗分割阶段采用软连通域分析方法,准确保留目标结构集区域的同时,有效排除了假阳性区域,提高了粗定位阶段对元素定位的准确率,并直接有助于后续合理提取元素掩膜定位信息的边界框,从而提升了分割效率;
(3)进一步的,针对粗分割阶段粗定位失准但未失效的不利情况,利用自适应的滑窗计算及相应滑窗操作,能够补全定位区域的缺失部分,并能自动规划及执行合理的滑窗操作,降低了精分割阶段对于粗定位结果的依赖性,在保持分割时间和计算资源无明显增加的前提下,提高了分割准确率;
(4)再者,即便当粗定位失效时,也能基于预设的元素的定位坐标,对元素掩膜进行精确定位,不仅提高了分割精确度,还降低了分割时间,减少了分割计算量,进一步提高了分割效率;
(5)然后,由于该目标结构集分割的整体工作流,充分考虑到降低目标结构分割准确率的多种不利情形,使得适用于不同种目标结构集分割任务的有效实施,具有较高的分割准确率和分割鲁棒性;
(6)并且,综合手术前增强影像、手术中扫描影像的各自优势,设置快速分割模式和精准分割模式(规划模式仅针对手术中扫描影像),根据选定规划模式的不同,确定不同的路径规划方案,快速分割模式下,规划速度快且时间短;精准模式下,规划路径的选择性更多,鲁棒性更强,在提供较强处理适用性的同时还能保障系统稳定性及介入安全性,使得术前规划能够达到较高的精准度,以便更好地辅助手术中准确地实施相应穿刺路径,能够获得更理想的手术效果;
(7)进一步的,为介入手术规划提供了两种全自动模式,精准规划模式下,在进行介入(如穿刺针等)路径规划时,由于可以规划出合理的路径,能够避开优先级较高的血管和重要脏器,得到潜在的进针空间,从而提高介入规划及其介入手术的工作效率;快速规划模式下,由于快速分割模式在进行介入路径规划时,仅需绕过不可介入区域,使介入路径直达病灶即可,同样有助于介入规划及其介入手术的工作效率提升;
(8)另外,能够高效、准确的自动规划出最佳介入路径,并对介入路径存在的风险进行分析,为介入手术提供良好的术前规划指导,且在手术过程中,提供了并发症实时检测和识别功能,进一步提升了介入过程的安全性,此外,工作流还实现了术后评估功能,可以协助操作者对手术过程及手术结果进行准确评估,提高了手术工作效率及手术安全性;
(9)在相同或相近的呼吸幅度点下采集各个图像,可以使呼吸运动造成的图像之间的器官组织的移动较小,有利于提高术前规划的准确度;
(10)在穿刺执行之前预先进行高精度配准,避免或减少穿刺执行开始之后的计算压力,减少穿刺执行的时长;
(11)同时避免病人屏气时间过长,提高病人体验;
(12)显示大视野引导图像,区别显示实时图像和规划图像,更明确的引导介入手术。
上文已对基本概念做了描述,显然,对于本领域技术人员来说,上述详细披露仅仅作为示例,而并不构成对本说明书的限定。虽然此处并没有明确说明,本领域技术人员可能会对本说明书进行各种修改、改进和修正。该类修改、改进和修正在本说明书中被建议,所以该类修改、改进、修正仍属于本说明书示范实施例的精神和范围。
同时,本说明书使用了特定词语来描述本说明书的实施例。如“一个实施例”、“一实施例”、和/或“一些实施例”意指与本说明书至少一个实施例相关的某一特征、结构或特点。因此,应强调并注意的是,本说明书中在不同位置两次或多次提及的“一实施例”或“一个实施例”或“一个替代性实施例”并不一定是指同一实施例。此外,本说明书的一个或多个实施例中的某些特征、结构或特点可以进行适当的组合。
此外,除非权利要求中明确说明,本说明书所述处理元素和序列的顺序、数字字母的使用、或其他名称的使用,并非用于限定本说明书流程和方法的顺序。尽管上述披露中通过各种示例讨论了一些目前认为有用的发明实施例,但应当理解的是,该类细节仅起到说明的目的,附加的权利要求并不仅限于披露的实施例,相反,权利要求旨在覆盖所有符合本说明书实施例实质和范围的修正和等价组合。例如,虽然以上所描述的系统组件可以通过硬件设备实现,但是也可以只通过软件的解决方案得以实现,如在现有的服务器或移动设备上安装所描述的系统。
同理,应当注意的是,为了简化本说明书披露的表述,从而帮助对一个或多个发明实施例的理解,前文对本说明书实施例的描述中,有时会将多种特征归并至一个实施例、附图或对其的描述中。但是,这种披露方法并不意味着本说明书对象所需要的特征比权利要求中提及的特征多。实际上,实施例的特征要少于上述披露的单个实施例的全部特征。
一些实施例中使用了描述成分、属性数量的数字,应当理解的是,此类用于实施例描述的数字,在一些示例中使用了修饰词“大约”、“近似”或“大体上”来修饰。除非另外说明,“大约”、“近似”或“大体上”表明所述数字允许有±20%的变化。相应地,在一些实施例中,说明书和权利要求中使用的数值参数均为近似值,该近似值根据个别实施例所需特点可以发生改变。在一些实施例中,数值参数应考虑规定的有效数位并采用一般位数保留的方法。尽管本说明书一些实施例中用于确认其范围广度的数值域和参数为近似值,在具体实施例中,此类数值的设定在可行范围内尽可能精确。
最后,应当理解的是,本说明书中所述实施例仅用以说明本说明书实施例的原则。其他的变形也可能属于本说明书的范围。因此,作为示例而非限制,本说明书实施例的替代配置可视为与本说明书的教导一致。相应地,本说明书的实施例不仅限于本说明书明确介绍和描述的实施例。

Claims (23)

  1. A medical image processing system for interventional surgery, the system comprising:
    a control system, the control system including one or more processors and a memory, the memory including operating instructions that cause the one or more processors to perform the following steps:
    acquiring a first medical image of a target object before an interventional surgery and a second medical image of the target object during the interventional surgery;
    registering the second medical image with the first medical image to obtain a registration result;
    determining interventional surgery planning information of the target object based at least on the registration result, and performing an interventional surgery risk assessment based on the interventional surgery planning information to obtain a risk assessment result corresponding to the interventional surgery planning information.
  2. The system of claim 1, wherein the acquiring a first medical image of a target object before an interventional surgery and a second medical image of the target object during the interventional surgery includes:
    acquiring a pre-operative enhanced image;
    segmenting a first target structure set of the pre-operative enhanced image to obtain the first medical image of the first target structure set;
    acquiring an intra-operative scan image;
    segmenting a second target structure set of the intra-operative scan image to obtain the second medical image of the second target structure set; wherein the first target structure set and the second target structure set have an intersection.
  3. The system of claim 2, wherein the registration result includes an intra-operative spatial position of a third target structure set, and elements of the third target structure set are determined based on a mode for planning an interventional path;
    wherein at least one element of the third target structure set is included in the first target structure set, and at least one element of the third target structure set is not included in the second target structure set.
  4. The system of claim 3, wherein the performing a risk assessment based on the interventional surgery planning information includes:
    determining an interventional risk value of at least some elements of the third target structure set;
    performing the risk assessment based on the interventional risk value.
  5. The system of claim 4, wherein the performing a risk assessment based on the interventional surgery planning information includes:
    determining whether an interventional path in the interventional surgery planning information passes through a preset element of the third target structure set;
    when a determination result is yes, determining the interventional risk value of a preset risk object in the third target structure set.
  6. The system of claim 4 or 5, wherein the determining an interventional risk value of at least some elements of the third target structure set includes:
    determining a risk level of an element according to a shortest distance between the element and the interventional path;
    determining the interventional risk value of the element according to the risk level.
  7. The system of claim 4 or 5, wherein the determining an interventional risk value of at least some elements of the third target structure set includes:
    determining a risk level of an element according to a shortest distance between the element and the interventional path;
    determining the interventional risk value of the element according to the risk level;
    determining different priorities based on a preset rule for the element, and setting a corresponding preset weight for the interventional risk value.
  8. The system of claim 4, wherein the performing the risk assessment based on the interventional risk value includes:
    calculating a total risk value of at least one interventional path;
    determining an interventional path with the smallest total risk value as an optimal path.
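Purely as an illustration of the risk evaluation recited in claims 6 to 8 (shortest distance determines a risk level, priority weights scale the per-structure risk value, and the path with the smallest total risk value is selected), the following sketch uses hypothetical thresholds, weights, and structure names, and is not the claimed method itself:

```python
# Illustrative sketch only: ranking candidate interventional paths by a
# weighted total risk value. All thresholds, level values, weights, and
# structure names are hypothetical.

def risk_level(distance_mm: float) -> int:
    """Map the shortest path-to-structure distance (mm) to a risk level."""
    if distance_mm < 2.0:
        return 3  # high risk: path nearly touches the structure
    if distance_mm < 5.0:
        return 2  # medium risk
    if distance_mm < 10.0:
        return 1  # low risk
    return 0      # negligible

# Hypothetical base risk value for each risk level.
LEVEL_VALUE = {0: 0.0, 1: 1.0, 2: 4.0, 3: 9.0}

def total_risk(path_distances: dict, weights: dict) -> float:
    """Sum priority-weighted risk values over all structures near one path."""
    return sum(weights[s] * LEVEL_VALUE[risk_level(d)]
               for s, d in path_distances.items())

def optimal_path(candidates: dict, weights: dict) -> str:
    """Return the candidate path with the smallest total risk value."""
    return min(candidates, key=lambda p: total_risk(candidates[p], weights))

if __name__ == "__main__":
    weights = {"aorta": 5.0, "portal_vein": 3.0, "rib": 1.0}  # hypothetical
    candidates = {
        "path_A": {"aorta": 12.0, "portal_vein": 4.0, "rib": 1.5},
        "path_B": {"aorta": 6.0, "portal_vein": 11.0, "rib": 8.0},
    }
    print(optimal_path(candidates, weights))
```

Note the design choice this sketch makes concrete: scaling per-structure risk by a priority weight lets a high-priority vessel at a moderate distance outweigh a low-priority structure at a small distance.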
  9. The system of claim 3, wherein the mode for planning the interventional path includes at least a fast planning mode and a precise planning mode, and a ratio of a total volume of elements of the third target structure set in the fast planning mode to a total volume of elements of the third target structure set in the precise planning mode is greater than a preset efficiency factor m.
  10. The system of claim 9, wherein the preset efficiency factor m is related to a type of the interventional surgery.
  11. The system of claim 1, wherein the operating instructions further include:
    acquiring an intra-operative scan image;
    detecting an image abnormality in the intra-operative scan image;
    determining a corresponding image abnormality type based on the detected image abnormality;
    determining, based on the image abnormality type, whether to perform a quantitative calculation;
    determining a degree of the image abnormality according to a result of the determination on whether to perform the quantitative calculation.
  12. The system of claim 10, wherein the operating instructions further include:
    issuing, according to the degree of the image abnormality, an alarm prompt corresponding to the degree of the image abnormality.
  13. The system of claim 10, wherein the operating instructions further include:
    registering, to the intra-operative scan image, a planned interventional path obtained based on the pre-operative enhanced image and the intra-operative scan image and an actual interventional path obtained based on a post-operative scan image;
    determining whether a deviation between the actual interventional path and the planned interventional path intersects a specific element of the third target structure set of the intra-operative scan image, and determining corresponding post-operative feedback information according to a determination result.
  14. The system of claim 1, wherein the operating instructions further include:
    in response to the risk assessment result of the interventional surgery planning information satisfying a preset condition, guiding the interventional surgery based on the interventional surgery planning information satisfying the preset condition.
  15. The system of claim 1, wherein the operating instructions further include:
    acquiring a third medical image of the target object during execution of the interventional surgery;
    mapping the registration result to the third medical image to guide the interventional surgery.
  16. The system of claim 15, wherein:
    the first medical image is acquired before the interventional surgery with the target object at a first respiratory amplitude point, the second medical image is acquired during the interventional surgery and before puncture execution with the target object at a second respiratory amplitude point, and the third medical image is acquired during puncture execution with the target object at a third respiratory amplitude point;
    a deviation between the second respiratory amplitude point and the first respiratory amplitude point is smaller than a preset value, and a deviation between the third respiratory amplitude point and the first respiratory amplitude point and/or the second respiratory amplitude point is smaller than a preset value.
  17. The system of claim 15, wherein the registering the second medical image with the first medical image to obtain a registration result includes:
    obtaining an interventional surgery planning information image based on the first medical image;
    performing a first registration of the first medical image with the second medical image to obtain first deformation information;
    applying the first deformation information to the interventional surgery planning information image to obtain the registration result, wherein the interventional surgery planning information in the registration result is the interventional surgery planning information after the first registration.
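As a non-limiting illustration of claim 17, the first deformation information can be modeled as a per-pixel displacement field that is applied to the planning-information image by resampling. The 2D nearest-neighbor warp below is a simplified sketch under that assumption, not the claimed registration method:

```python
import numpy as np

# Illustrative sketch (an assumption, not the patented registration):
# model the "first deformation information" as a per-pixel displacement
# field and apply it to the planning-information image (e.g., a label map
# of the planned path) by nearest-neighbor resampling.

def apply_deformation(plan_img: np.ndarray, disp: np.ndarray) -> np.ndarray:
    """Warp a 2D planning label image with a displacement field.

    disp has shape (2, H, W): per-pixel (row, col) offsets that map each
    output pixel back into the planning image (pull-back convention).
    """
    h, w = plan_img.shape
    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_r = np.clip(np.round(rows + disp[0]).astype(int), 0, h - 1)
    src_c = np.clip(np.round(cols + disp[1]).astype(int), 0, w - 1)
    return plan_img[src_r, src_c]

# Toy example: shift a planned-path label one pixel down.
plan = np.zeros((4, 4), dtype=np.uint8)
plan[1, :] = 1                # planned path occupies row 1
disp = np.zeros((2, 4, 4))
disp[0, :, :] = -1.0          # output row r samples planning row r-1
warped = apply_deformation(plan, disp)
print(warped[2].tolist())     # path label has moved to row 2
```

A practical implementation would use a proper 3D resampler with interpolation; the pull-back convention is chosen here because it guarantees every output pixel receives a value.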
  18. The system of claim 17, wherein the operating instructions further include at least one of the following:
    displaying, outside a display range of the third medical image, image information of the registration result that is located outside the display range of the third medical image;
    displaying, outside the display range of the third medical image, corresponding interventional surgery planning path information;
    displaying image information inside and outside the display range of the third medical image in a distinguishing manner.
  19. A medical image processing method for interventional surgery, the method comprising:
    acquiring a first medical image of a target object before an interventional surgery and a second medical image of the target object during the interventional surgery;
    registering the second medical image with the first medical image to obtain a registration result;
    determining interventional surgery planning information of the target object based at least on the registration result, and performing an interventional surgery risk assessment based on the interventional surgery planning information to obtain a risk assessment result corresponding to the interventional surgery planning information.
  20. A medical image processing method for interventional surgery, the method comprising:
    acquiring a mode for planning an interventional path;
    acquiring a pre-operative enhanced image;
    segmenting a first target structure set of the pre-operative enhanced image to obtain a first medical image of the first target structure set;
    acquiring an intra-operative scan image;
    segmenting a second target structure set of the intra-operative scan image to obtain a second medical image of the second target structure set; the first target structure set and the second target structure set have an intersection;
    registering the first medical image with the second medical image to determine an intra-operative spatial position of a third target structure set, wherein elements of the third target structure set are selected based on the mode for planning the interventional path;
    planning an interventional path based on the intra-operative spatial position of the third target structure set, and performing a risk assessment based on the interventional path;
    wherein at least one element of the third target structure set is included in the first target structure set, and at least one element of the third target structure set is not included in the second target structure set.
  21. An interventional surgery guidance system, the system comprising:
    a control system, the control system including one or more processors and a memory, the memory including operating instructions that cause the one or more processors to perform the following steps:
    acquiring a first medical image, a second medical image, and a third medical image of a target object at different times;
    registering the first medical image with the second medical image to obtain a fourth medical image, the fourth medical image containing registered interventional surgery planning information;
    mapping the fourth medical image to the third medical image to guide the interventional surgery.
  22. A medical image processing device for interventional surgery, comprising a processor configured to execute the operating instructions for the steps involved in the medical image processing system for interventional surgery of any one of claims 1 to 18.
  23. A computer-readable storage medium storing computer instructions, wherein when a computer reads the computer instructions in the storage medium, the computer executes the medical image processing method for interventional surgery of claim 19.
PCT/CN2023/091895 2022-05-07 2023-04-28 Medical image processing system and method for interventional surgery WO2023216947A1 (zh)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202210493274.3A CN117045318A (zh) 2022-05-07 2022-05-07 Puncture surgery guidance system and method, and surgical robot
CN202210493274.3 2022-05-07
CN202210764281.2A CN116912098A (zh) 2022-06-30 2022-06-30 Medical image processing method, system, apparatus, and storage medium for interventional surgery
CN202210764281.2 2022-06-30

Publications (1)

Publication Number Publication Date
WO2023216947A1 true WO2023216947A1 (zh) 2023-11-16

Family

ID=88729642

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/091895 WO2023216947A1 (zh) 2022-05-07 2023-04-28 Medical image processing system and method for interventional surgery

Country Status (1)

Country Link
WO (1) WO2023216947A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117315032A (zh) * 2023-11-28 2023-12-29 北京智愈医疗科技有限公司 Method for monitoring tissue displacement

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007159933A (ja) * 2005-12-15 2007-06-28 Hitachi Medical Corp Image display method, program, and apparatus
CN103479430A (zh) * 2013-09-22 2014-01-01 江苏美伦影像系统有限公司 Image-guided interventional surgery navigation system
WO2015135058A1 (en) * 2014-03-14 2015-09-17 Synaptive Medical (Barbados) Inc. Methods and systems for intraoperatively confirming location of tissue structures
US20150283379A1 (en) * 2014-04-03 2015-10-08 Pacesetter, Inc. Systems and method for deep brain stimulation therapy
CN112057165A (zh) * 2020-09-22 2020-12-11 上海联影医疗科技股份有限公司 Path planning method, apparatus, device, and medium
CN112163987A (zh) * 2020-07-06 2021-01-01 中国科学院苏州生物医学工程技术研究所 Puncture path planning system
CN113057734A (zh) * 2021-03-12 2021-07-02 上海微创医疗机器人(集团)股份有限公司 Surgical system
CN113349925A (zh) * 2021-06-01 2021-09-07 浙江工业大学 Automatic surgical path planning method for acoustic neuroma based on fully automatic three-dimensional imaging

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WEI ZHANG, HUANG YU-YU, LUAN SHENG, ZHANG JIA-LEI: "Implementation of intensity-based 2D/3D image registration in orthopaedic navigation system", CHINESE JOURNAL OF MEDICAL IMAGING TECHNOLOGY, vol. 23, no. 7, 20 July 2007 (2007-07-20), pages 1080 - 1084, XP093107247 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117315032A (zh) * 2023-11-28 2023-12-29 北京智愈医疗科技有限公司 Method for monitoring tissue displacement
CN117315032B (zh) * 2023-11-28 2024-03-08 北京智愈医疗科技有限公司 Method for monitoring tissue displacement

Similar Documents

Publication Publication Date Title
Alam et al. Medical image registration in image guided surgery: Issues, challenges and research opportunities
JP6671432B2 (ja) Registration and motion compensation of a patient-mounted needle guide
US8630467B2 (en) Diagnosis assisting system using three dimensional image data, computer readable recording medium having a related diagnosis assisting program recorded thereon, and related diagnosis assisting method
US20190021677A1 (en) Methods and systems for classification and assessment using machine learning
US8942455B2 (en) 2D/3D image registration method
US9020229B2 (en) Surgical assistance planning method using lung motion analysis
US8334878B2 (en) Medical image processing apparatus and medical image processing program
US10068049B2 (en) Automated fiducial marker planning method
US20080242971A1 (en) Image system for supporting the navigation of interventional tools
US11471217B2 (en) Systems, methods, and computer-readable media for improved predictive modeling and navigation
JP6532206B2 (ja) Medical image processing apparatus and medical image processing method
WO2023216947A1 (zh) Medical image processing system and method for interventional surgery
CN111415404A (zh) Method and apparatus for locating a preset intra-operative region, storage medium, and electronic device
CN108430376B (zh) Providing a projection data set
US20230316550A1 (en) Image processing device, method, and program
KR20230013042A (ko) Method for predicting lesion recurrence through image analysis
CN114283179A (zh) Ultrasound-image-based system for real-time acquisition and registration of the spatial poses of the proximal and distal fracture ends
WO2024002221A1 (zh) Interventional surgery image assistance method, system, apparatus, and storage medium
JP7015351B2 (ja) Medical image processing apparatus and medical image processing method
JP6748762B2 (ja) Medical image processing apparatus and medical image processing method
WO2022223042A1 (zh) Surgical path processing system, method, apparatus, device, and storage medium
EP4246427A1 (en) Providing normalised medical images
CN113940756B (zh) Surgical navigation system based on mobile DR images
CN116912098A (zh) Medical image processing method, system, apparatus, and storage medium for interventional surgery
WO2022054541A1 (ja) Image processing apparatus, method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23802719

Country of ref document: EP

Kind code of ref document: A1