WO2023169578A1 - Image processing method, system and device for interventional surgery - Google Patents

An image processing method, system and device for interventional surgery

Info

Publication number
WO2023169578A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
registration
segmented
segmented image
target
Application number
PCT/CN2023/080956
Other languages
English (en)
French (fr)
Inventor
占徐政
胡锦涛
杨飞
盛梦涵
蹇新龙
何少文
王莹珑
何雄一
陈科
Original Assignee
武汉联影智融医疗科技有限公司
Priority claimed from CN202210241912.2A (published as CN116763401A)
Priority claimed from CN202210764217.4A (published as CN117392143A)
Priority claimed from CN202210963424.2A (published as CN117670945A)
Application filed by 武汉联影智融医疗科技有限公司
Publication of WO2023169578A1


Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00 Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/20 Surgical navigation systems; Devices for tracking or guiding surgical instruments, e.g. for frameless stereotaxis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis

Definitions

  • This description relates to the field of image processing, and specifically to an image processing method, system and device for interventional surgery.
  • The current interventional surgery planning system has a relatively limited set of functions; for example, it may only support preoperative planning of a single target organ (for example, a chest target organ or an abdominal target organ) on plain scan images. This leads to excessive learning costs and poor risk avoidance, making interventional surgery less effective and greatly limiting the room for workflow optimization.
  • Embodiments of this specification provide an image processing method for interventional surgery, including a segmentation stage. The segmentation stage includes: acquiring a plurality of first medical images, wherein at least two of the first medical images correspond to the same scanned object at different time points; determining a pre-recommended enhanced image based on the plurality of first medical images; and obtaining an operation instruction, segmenting at least some elements in a first target structure set based on the operation instruction and the pre-recommended enhanced image, and generating a first segmented image.
  • Embodiments of this specification provide an image processing method for interventional surgery, including a registration stage. The registration stage includes: obtaining the registration error between a first segmented image and a second segmented image; and, if the registration error does not meet a preset condition, optimally registering the first segmented image and the second segmented image through a rigid body registration matrix determined by a registration matrix adjustment process.
  • The process of determining the rigid body registration matrix includes: determining the registration elements used in the registration matrix adjustment process; if the positions of the registration elements in the first segmented image and the second segmented image are obtained, obtaining the rigid body registration matrix based on those positions; and if the positions of the registration elements in the first segmented image and the second segmented image are not obtained, obtaining the rigid body registration matrix based on a translation operation or rotation operation performed on the first segmented image or the second segmented image.
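  • The text does not spell out how the rigid body registration matrix is computed from the registration-element positions. A common choice for estimating a rigid transform from corresponding point pairs is the Kabsch/SVD method; the sketch below (function name and array layout are illustrative assumptions, not the patented method itself) packs the result into a 4x4 homogeneous matrix:

```python
import numpy as np

def rigid_registration_matrix(p_first, p_second):
    """Estimate a rigid transform mapping registration-element positions in
    the first segmented image (p_first, shape (N, 3)) onto their counterparts
    in the second segmented image (p_second, shape (N, 3))."""
    p = np.asarray(p_first, dtype=float)
    q = np.asarray(p_second, dtype=float)
    cp, cq = p.mean(axis=0), q.mean(axis=0)
    # Kabsch: SVD of the cross-covariance of the centered point sets.
    h = (p - cp).T @ (q - cq)
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))  # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = cq - r @ cp
    m = np.eye(4)  # 4x4 homogeneous rigid body registration matrix
    m[:3, :3], m[:3, 3] = r, t
    return m
```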
  • Embodiments of this specification also provide an image processing method for interventional surgery.
  • the image processing method includes: an interventional path planning stage.
  • The interventional path planning stage includes: obtaining a medical image of the scanned object; determining a target point based on the medical image; determining a reference path based on the user's operation related to the target point, wherein the end point of the reference path is open; and determining the target path based on the reference path.
  • Embodiments of this specification also provide an image processing system for interventional surgery, including a segmentation module, a registration module and an interventional path planning module. The segmentation module is used to obtain multiple first medical images and a second medical image of the scanned object at different stages, wherein at least two of the first medical images correspond to the same scanned object at different time points, and to segment at least some elements in a first target structure set based on the plurality of first medical images to generate a first segmented image. The registration module is configured to segment at least some elements in a second target structure set based on the second medical image to generate a second segmented image, and to register the first segmented image and the second segmented image. The interventional path planning module is configured to determine, based on the registered second segmented image and/or the second medical image, an intervention path between a needle insertion point on the skin surface and the target area.
  • An embodiment of this specification also provides an image processing device for interventional surgery, including a processor, where the processor is configured to execute the image processing method described in any embodiment of this specification.
  • Embodiments of this specification also provide a computer-readable storage medium that stores computer instructions. After the computer reads the computer instructions in the storage medium, the computer executes the image processing method described in any embodiment of this specification.
  • Figure 1 is a schematic diagram of an application scenario of an image processing system for interventional surgery according to some embodiments of this specification
  • Figure 2 is an exemplary flow chart of an image processing method for interventional surgery according to some embodiments of this specification
  • Figure 3 is a schematic diagram of the segmentation settings of blood vessels in corresponding phases according to some embodiments of this specification
  • Figure 4 is an exemplary flowchart of determining pre-recommended enhanced images according to some embodiments of this specification
  • Figure 5 is an exemplary flow chart for determining an interventional surgical plan according to some embodiments of this specification.
  • Figure 6 is a schematic diagram of tissue segmentation and category setting in different segmentation modes according to some embodiments of this specification.
  • Figure 7 is an exemplary flow chart of a segmentation process involved in an image processing method for interventional surgery according to some embodiments of this specification
  • Figure 8 is an exemplary flowchart of a process of determining positioning information of an element mask according to some embodiments of this specification
  • Figure 9 is an exemplary flow chart of a soft connected domain analysis process for an element mask according to some embodiments of this specification.
  • Figure 10 is a comparison diagram of exemplary effects of coarse segmentation using soft connected domain analysis on element masks according to some embodiments of this specification.
  • Figure 11 is an exemplary flowchart of a process of accurately segmenting elements according to some embodiments of this specification.
  • Figures 12-13 are exemplary schematic diagrams of positioning information determination of element masks according to some embodiments of this specification.
  • Figure 14A is an exemplary diagram of determining the sliding direction based on the positioning information of the element mask according to some embodiments of this specification;
  • Figures 14B-14E are exemplary schematic diagrams of accurate segmentation after sliding windows according to some embodiments of this specification.
  • Figure 15 is an exemplary effect comparison diagram of segmentation results according to some embodiments of this specification.
  • Figure 16 is an exemplary result diagram of a priority refresh display according to some embodiments of this specification.
  • Figure 17 is an exemplary flow chart of the registration process of the first segmented image and the second segmented image shown in some embodiments of this specification;
  • Figures 18-19 are exemplary flowcharts of a process of determining a registration deformation field according to some embodiments of this specification.
  • Figure 20 is an exemplary demonstration diagram of obtaining a first segmented image and a second segmented image through segmentation according to some embodiments of this specification;
  • Figure 21 is an exemplary result diagram of fusion mapping of multi-phase phase-enhanced images and second medical images according to some embodiments of this specification;
  • Figure 22 is a schematic diagram of element outline extraction using a first outline tool according to some embodiments of this specification.
  • Figure 23 is a schematic diagram of element outline extraction using a second outline tool according to some embodiments of this specification.
  • Figure 24 is an exemplary module diagram of an image processing system for interventional surgery according to some embodiments of the present specification.
  • Figure 25 is a schematic flowchart of a registration optimization method for medical images according to some embodiments of this specification.
  • Figure 26 is a schematic diagram of the spatial alignment effect of key organs according to some embodiments of this specification.
  • Figure 27A is a schematic diagram of fused display using an upper and lower layer fusion method according to some embodiments of this specification.
  • Figure 27B is a schematic diagram of fusion display using vertical dividing line fusion according to some embodiments of this specification.
  • Figure 27C is a schematic diagram of fusion display using horizontal dividing line fusion according to some embodiments of this specification.
  • Figure 27D is a schematic diagram of fusion display in a checkerboard fusion manner according to some embodiments of this specification.
  • Figure 27E is a schematic diagram of an annular area according to some embodiments of this specification.
  • Figure 28 is a schematic flowchart of another registration optimization method for medical images according to some embodiments of this specification.
  • Figure 29 is a structural block diagram of a registration module according to some embodiments of this specification.
  • Figure 30 is a flow chart of an exemplary puncture path planning method according to some embodiments of this specification.
  • Figure 31 is a flow chart of another exemplary puncture path planning method according to some embodiments of this specification.
  • Figure 32 is a schematic diagram of an exemplary provision of a first prompt according to some embodiments of this specification.
  • Figure 33 is a schematic diagram of an exemplary drag operation according to some embodiments of this specification.
  • Figure 34 is a schematic diagram of exemplary determination of candidate needle entry points according to some embodiments of this specification.
  • Figure 35 is a schematic diagram of an exemplary provision of a second prompt according to some embodiments of this specification.
  • Figure 36 is a schematic diagram of an exemplary provision of a third prompt according to some embodiments of this specification.
  • Figure 37 is a structural block diagram of a path segmentation module according to some embodiments of this specification.
  • As used in this specification, words such as "system" are a means of distinguishing between different components, elements, parts, portions or assemblies at different levels.
  • Said words may be replaced by other expressions if they serve the same purpose.
  • Figure 1 is a schematic diagram of an application scenario of an image processing system for interventional surgery according to some embodiments of this specification.
  • interventional surgery/treatment may include cardiovascular interventional surgery, oncology interventional surgery, obstetrics and gynecology interventional surgery, musculoskeletal interventional surgery or any other feasible interventional surgery, such as neurointerventional surgery, etc.
  • interventional surgery/treatment may include percutaneous biopsy, coronary angiography, thrombolytic therapy, stent implantation, or any other feasible interventional surgery, such as ablation surgery, etc.
  • The workflow of interventional surgeries at different parts can be integrated into the image processing system 100 for interventional surgeries, so that users (e.g., doctors) do not need to switch applications when planning interventional surgeries at different parts; they only need to load the data of the corresponding part (for example, the first medical image, the pre-recommended enhanced image, the first segmented image, etc.).
  • Interventional surgeries at different locations can include chest-lung and abdomen-liver interventional surgeries.
  • the image processing system 100 for interventional surgery may include a medical scanning device 110, a network 120, one or more terminals 130, a processing device 140, a storage device 150, and a robotic arm 160.
  • the connections between components in the interventional image processing system 100 may be variable.
  • medical scanning device 110 may be connected to processing device 140 via network 120 .
  • medical scanning device 110 may be directly connected to processing device 140, as indicated by the dashed bidirectional arrow connecting medical scanning device 110 and processing device 140.
  • storage device 150 may be connected to processing device 140 directly or through network 120 .
  • terminal 130 may be connected directly to processing device 140 (as shown by the dashed arrow connecting terminal 130 and processing device 140), or may be connected to processing device 140 through network 120.
  • the medical scanning device 110 may be configured to scan the scanned object using high-energy rays (such as X-rays, gamma rays, etc.) to collect scan data related to the scanned object.
  • the scan data can be used to generate one or more images of the scanned object.
  • medical scanning device 110 may include an ultrasound imaging (US) device, a computed tomography (CT) scanner, a digital radiography (DR) scanner (e.g., mobile digital radiography), a digital subtraction angiography (DSA) scanner, a dynamic space reconstruction (DSR) scanner, an X-ray microscope scanner, a multi-modality scanner, etc., or a combination thereof.
  • the multi-modality scanner may include a computed tomography-positron emission tomography (CT-PET) scanner or a computed tomography-magnetic resonance imaging (CT-MRI) scanner.
  • Scan objects can be living or non-living.
  • scan objects may include patients, artificial objects (eg, artificial phantoms), and the like.
  • scan objects may include specific parts, organs, and/or tissues of the patient.
  • the medical scanning device 110 may include a gantry 111, a detector 112, a detection area 113, a table 114 and a radiation source 115.
  • The gantry 111 may support the detector 112 and the radiation source 115.
  • Scan objects may be placed on the table 114 for scanning.
  • The radiation source 115 may emit radiation toward the scanned object.
  • The detector 112 may detect radiation (e.g., X-rays) emitted from the radiation source 115.
  • detector 112 may include one or more detector units.
  • the detector unit may include a scintillation detector (eg, a cesium iodide detector), a gas detector, or the like.
  • the detector unit may include a single row of detectors and/or multiple rows of detectors.
  • Network 120 may include any suitable network capable of facilitating the exchange of information and/or data of image processing system 100 for interventional procedures.
  • One or more components of the interventional image processing system 100 (e.g., the medical scanning device 110, the terminal 130, the processing device 140, the storage device 150, the robotic arm 160) may exchange information and/or data with one another via the network 120.
  • processing device 140 may obtain imaging data from medical scanning device 110 via network 120 .
  • processing device 140 may obtain user instructions from terminal 130 via network 120.
  • Network 120 may be and/or include a public network (e.g., the Internet), a private network (e.g., a local area network (LAN), a wide area network (WAN), etc.), a wired network (e.g., an Ethernet network), a wireless network (e.g., an 802.11 network, a Wi-Fi network, etc.), a cellular network (e.g., a Long Term Evolution (LTE) network), a frame relay network, a virtual private network (VPN), a satellite network, a telephone network, routers, hubs, switches, server computers, and/or any combination thereof.
  • Network 120 may include a cable network, a wired network, a fiber optic network, a telecommunications network, an intranet, a wireless local area network (WLAN), a metropolitan area network (MAN), a public switched telephone network (PSTN), a Bluetooth™ network, a ZigBee™ network, a Near Field Communication (NFC) network, etc., or any combination thereof.
  • network 120 may include one or more network access points.
  • network 120 may include wired and/or wireless network access points, such as base stations and/or Internet exchange points, through which one or more components of interventional image processing system 100 may be connected to network 120 to exchange data and/or information.
  • Terminal 130 may include a mobile device 131, a tablet computer 132, a laptop computer 133, etc., or any combination thereof.
  • mobile device 131 may include a smart home device, a wearable device, a mobile device, a virtual reality device, an augmented reality device, the like, or any combination thereof.
  • smart home devices may include smart lighting devices, control devices for smart electrical devices, smart monitoring devices, smart TVs, smart cameras, intercoms, etc., or any combination thereof.
  • mobile devices may include cell phones, personal digital assistants (PDAs), gaming devices, navigation devices, point-of-sale (POS) devices, laptops, tablets, desktops, etc., or any combination thereof.
  • virtual reality devices and/or augmented reality devices include virtual reality helmets, virtual reality glasses, virtual reality goggles, augmented reality helmets, augmented reality glasses, augmented reality goggles, etc., or any combination thereof.
  • virtual reality devices and/or augmented reality devices may include Google Glass™, Oculus Rift™, HoloLens™, Gear VR™, etc.
  • terminal 130 may be part of processing device 140.
  • the processing device 140 may process data and/or information obtained from the medical scanning device 110, the terminal 130, and/or the storage device 150.
  • the processing device 140 can acquire the data acquired by the medical scanning device 110, and use the data to perform imaging to generate medical images (such as a first medical image, a second medical image), and segment the medical images to generate segmentation result data (such as first segmented image, second segmented image, registration map, etc.).
  • the processing device 140 may obtain medical images, segmentation mode data (such as fast segmentation mode data, precise segmentation mode data), target organ setting data, phase setting data and/or scanning protocols from the terminal 130 .
  • processing device 140 may be a single server or a group of servers. Server groups can be centralized or distributed. In some embodiments, processing device 140 may be local or remote. For example, processing device 140 may access information and/or data stored in medical scanning device 110, terminal 130, and/or storage device 150 via network 120. As another example, processing device 140 may be directly connected to medical scanning device 110, terminal 130, and/or storage device 150 to access stored information and/or data. In some embodiments, the processing device 140 may be implemented on a cloud platform.
  • Storage device 150 may store data, instructions, and/or any other information.
  • storage device 150 may store data obtained from medical scanning device 110, terminal 130, and/or processing device 140.
  • the storage device 150 may store medical image data (such as first medical images, second medical images, first segmented images, second segmented images, etc.) and/or positioning information data acquired from the medical scanning device 110 .
  • the storage device 150 may store medical images and/or scan protocols input from the terminal 130 .
  • the storage device 150 can store data generated by the processing device 140 (for example, medical image data, element mask data, positioning information data, accurate segmentation result data, spatial positions of blood vessels and lesions during surgery, registration maps, etc.).
  • storage device 150 may store data and/or instructions that processing device 140 may perform or be used to perform the example methods described in this specification.
  • the storage device 150 includes a mass storage device, a removable storage device, a volatile read-write memory, a read-only memory (ROM), etc., or any combination thereof.
  • Exemplary mass storage devices may include magnetic disks, optical disks, solid state drives, and the like.
  • Exemplary removable storage devices may include flash drives, floppy disks, optical disks, memory cards, compact disks, tapes, and the like.
  • Exemplary volatile read-write memory may include random access memory (RAM).
  • Exemplary RAM may include dynamic random access memory (DRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), static random access memory (SRAM), thyristor random access memory (T-RAM), zero-capacitor random access memory (Z-RAM), etc.
  • Exemplary ROM may include mask read-only memory (MROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), compact disc read-only memory (CD-ROM), digital versatile disc read-only memory, etc.
  • the storage device 150 may be implemented on a cloud platform.
  • storage device 150 may be connected to network 120 to communicate with one or more other components in interventional image processing system 100 (eg, processing device 140, terminal 130). One or more components in the interventional image processing system 100 may access data or instructions stored in the storage device 150 via the network 120 . In some embodiments, storage device 150 may be directly connected to or in communication with one or more other components in interventional image processing system 100 (eg, processing device 140, terminal 130). In some embodiments, storage device 150 may be part of processing device 140.
  • the robotic arm 160 may be a device capable of imitating the functions of a human arm and achieving automatic control.
  • the robotic arm 160 may include a multi-joint structure connected to each other and capable of moving in a plane or a three-dimensional space.
  • the robotic arm 160 may include a controller, a mechanical body, a sensor, etc.
  • The controller can set the action parameters of the mechanical body (for example, trajectory, direction, angle, speed, strength, etc.); the mechanical body can accurately execute the above action parameters; and the sensor can detect or sense external signals, physical conditions (for example, light, heat, humidity) or chemical composition (for example, smoke) and pass the detected information to the controller.
  • the robotic arm 160 may include a rigid manipulator, a flexible manipulator, a pneumatic-assisted manipulator, a cable-assisted manipulator, a linear manipulator, a horizontal multi-joint manipulator, a joint multi-axis manipulator, etc., or any combination thereof.
  • the above description of the robotic arm is for illustrative purposes only and is not intended to limit the scope of this specification.
  • In some embodiments, puncture instruments (e.g., fiber optic needles, intravenous indwelling needles, injection needles, puncture needles, biopsy needles, etc.) may be used in the interventional surgery.
  • In some embodiments, the puncture process is performed by the robotic arm 160.
  • processing device 140 and robotic arm 160 may be integrated. In some embodiments, the processing device 140 and the robotic arm 160 may be connected directly or indirectly, and work together to implement the methods and/or functions described in this specification. In some embodiments, the medical scanning device 110, the processing device 140, and the robotic arm 160 may be integrated into one body. In some embodiments, the medical scanning device 110, the processing device 140, and the robotic arm 160 may be connected directly or indirectly, and work together to implement the methods and/or functions described in this specification.
  • the description of the image processing system 100 for interventional procedures is intended to be illustrative and not to limit the scope of the present application. Many alternatives, modifications and variations will be apparent to those of ordinary skill in the art. It can be understood that for those skilled in the art, after understanding the principle of the system, it is possible to arbitrarily combine various modules or form a subsystem to connect with other modules without departing from this principle.
  • the segmentation module 2410, the registration module 2420, and the intervention path planning module 2430 in Figure 24 can be different modules in one system, or one module can implement the functions of two or more modules mentioned above. For example, each module can share a storage module, or each module can have its own storage module.
  • For example, the processing device 140 and the medical scanning device 110 may be integrated into a single device. Such variations are within the scope of this specification.
  • FIG. 2 is an exemplary flow chart of an image processing method for interventional surgery according to some embodiments of this specification.
  • the process 200 includes: a segmentation stage (steps 210 and 220), a registration stage (steps 230 and 240), and an interventional path planning stage (step 250).
  • In step 210, multiple first medical images and a second medical image of the scanned object at different stages are acquired, where at least two of the first medical images correspond to the same scanned object at different time points.
  • step 210 may be performed by segmentation module 2410.
  • the first medical image may refer to an image scanned by a medical scanning device after the scanning object (such as a patient, etc.) is injected with a contrast agent before surgery.
  • the first medical image may also be called a preoperative enhanced image.
  • the first medical image may include CT images, PET-CT images, US images or MR images.
  • the different stages may include a first stage.
  • the first phase may be a time phase for acquiring a first medical image of the scanned object.
  • the first phase may refer to the time period after the scan subject is injected with the contrast agent and before the surgery begins.
  • scan objects may include biometric scan objects or non-biological scan objects.
  • biological scanning objects may include patients, specific parts of the patient, organs and/or tissues, such as abdomen, heart or tumor tissue, etc.
  • non-biological scan objects may include artificial objects, such as artificial phantoms, and the like.
  • a plurality of first medical images of the scanned subject in a first stage before surgery may be acquired.
  • The first medical image can also be obtained through any other feasible method; for example, it can be obtained from a cloud server and/or a medical system (such as a hospital's medical system center, etc.) via the network 120.
  • the plurality of first medical images are not particularly limited in the embodiments of this specification.
  • At least two first medical images among the plurality of first medical images may correspond to different time points of the same scan object.
  • different time points may include at least two different phases.
  • the plurality of first medical images may include at least two first medical images with different phases.
  • the visualization effects of blood vessels (and other organs/tissues, lesions, etc.) in the first medical images of different phases are different.
  • Intravenous injection of contrast agent can enhance the blood flow signal of the human body, resulting in different imaging effects of blood vessels (and other organs/tissues, lesions, etc.) at different time points/segments after the injection of contrast agent, thereby obtaining multiple first medical images.
  • For example, for a liver scan, the patient enters the arterial phase at about 40 seconds after the contrast agent is injected, when the arterial blood vessels are more obvious; between 40 seconds and 120 seconds the patient is in the portal venous phase, when the portal vein and venous blood vessels are more obvious; and between 120 seconds and 360 seconds is the delayed phase, when the liver parenchyma and lesions (such as tumor nodules) are more obvious.
  • In the delayed phase, the liver parenchyma maintains a certain degree of enhancement and is more obvious; within 180s-360s, lesions such as vascular cancer and cholangiocarcinoma show the characteristic of delayed enhancement, appearing as high density compared with the normal liver parenchyma, so lesions in the liver are more obvious.
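  • As a minimal illustration of the timing just described (the windows below simply restate the example; real protocols vary by scanner, organ and patient), a scan could be assigned a phase from its time since contrast injection:

```python
# Illustrative liver phase windows in seconds after contrast injection,
# restating the example above; not prescriptive clinical values.
PHASE_WINDOWS = [
    ("arterial", 0, 40),         # arterial vessels most conspicuous
    ("portal_venous", 40, 120),  # portal vein and veins most conspicuous
    ("delayed", 120, 360),       # liver parenchyma and lesions most conspicuous
]

def classify_phase(seconds_after_injection: float) -> str:
    for name, start, end in PHASE_WINDOWS:
        if start <= seconds_after_injection < end:
            return name
    return "unknown"
```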
  • In this way, the first medical images of multiple phases can be obtained, and the development effects of the blood vessels (and other organs/tissues, lesions, etc.) in the first medical images corresponding to the multiple phases differ from phase to phase.
  • For example, first medical images of three phases can be obtained: arterial vessels are clearly displayed in the first medical image corresponding to the arterial phase; the portal vein and venous vessels are clearly visible in the first medical image corresponding to the portal venous phase; and the liver parenchyma and lesions are clearly visible in the first medical image corresponding to the delayed phase.
  • multiple scans can also be performed within the arterial phase (and/or portal venous phase, and/or delayed phase) time range to obtain first medical images corresponding to multiple phases.
  • For example, multiple scans (for example, 3) can be performed within the arterial phase time range to obtain first medical images of 3 phases, in which the arterial blood vessels can be better visualized.
  • A certain method (for example, image recognition) can then be used to determine in which first medical image the arterial visualization effect is best. It can be understood that when scanning other organs (for example, chest organs, lung organs, etc.), multiple first medical images with different phases can also be obtained by scanning the corresponding organs at different time points, and the development effects of blood vessels (and other organs/tissues, lesions, etc.) differ across the first medical images corresponding to different phases.
  • Step 220: segment at least some elements in the first target structure set based on the plurality of first medical images to generate a first segmented image.
  • step 220 may be performed by segmentation module 2410.
  • The first segmented image may be a segmented image, obtained preoperatively by segmenting the first medical image, of at least some elements of the first target structure set (e.g., the target organ, blood vessels within the target organ, and lesions).
  • the number of first segmented images may be multiple, wherein each first segmented image corresponds to a first medical image.
  • At least two first medical images corresponding to different periods may be fused to obtain a first fused image; and at least some elements in the first target structure set may be segmented based on the first fused image.
  • the first fused image may be an image obtained by merging multiple corresponding first medical images in different periods.
  • Different elements may be segmented from the different first medical images used for fusion. For example, suppose blood vessel a is set to be segmented in the first medical image corresponding to phase-1 and blood vessel b in the first medical image corresponding to phase-2; the two first medical images corresponding to phase-1 and phase-2 are fused to obtain a first fused image, in which both blood vessel a and blood vessel b can be segmented.
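  • The text does not fix a fusion operator for the first fused image. One simple possibility, assuming the phase images are already spatially aligned, is voxel-wise maximum fusion, so that structures enhanced in either phase stay conspicuous, with the per-phase vessel masks combined by union:

```python
import numpy as np

def fuse_phases(img_phase1, img_phase2):
    """Voxel-wise maximum fusion of two co-registered phase volumes, keeping
    blood vessel a (enhanced in phase-1) and blood vessel b (enhanced in
    phase-2) both conspicuous in the fused volume."""
    return np.maximum(img_phase1, img_phase2)

def combine_masks(mask_a, mask_b):
    # Union of the per-phase segmentation masks of blood vessels a and b.
    return np.logical_or(mask_a, mask_b)
```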
  • At least some elements of the first target structure set may also be segmented on the first medical image directly. At least some of the elements here may be the non-vascular elements in the first target structure set, such as lesions and target organs.
  • In some embodiments, segmenting at least part of the elements in the first target structure set based on a plurality of first medical images may include: determining a pre-recommended enhanced image based on the plurality of first medical images; and obtaining an operation instruction and segmenting at least some elements of the first target structure set based on the operation instruction and the pre-recommended enhanced image.
  • the above process may be performed by segmentation module 2410.
  • Operation instructions may include user instructions and automatic instructions.
  • the user instructions may include instructions input by a user (eg, a doctor), for example, a phase adjustment instruction input by the doctor through a terminal.
  • For example, based on the pre-recommended enhanced image determined in step 220, if the doctor believes that a certain element (e.g., a blood vessel) cannot be segmented, or is poorly segmented, on the first medical image corresponding to the current phase (i.e., the current pre-recommended enhanced image), the doctor can enter a phase adjustment instruction through the terminal to manually adjust the segmentation phase of that element, thereby improving its segmentation effect.
  • Automatic instructions may include automatic segmentation instructions. Automatic instructions can be issued automatically without user input, and the segmentation module 2410 then automatically performs the element segmentation operation. It can be understood that when an automatic instruction is executed, the segmentation effect of the element on its corresponding first medical image (the pre-recommended enhanced image) can be considered acceptable.
  • a pre-recommended enhanced image may include at least one element with the best automatic recognition effect.
  • If the pre-recommended enhanced image only includes one element with the best automatic recognition effect (that is, there is a one-to-one correspondence between the pre-recommended enhanced image and the element), that element can be segmented to obtain the first segmented image; if the pre-recommended enhanced image includes multiple elements with the best automatic recognition effect (that is, there is a one-to-many relationship between the pre-recommended enhanced image and the elements), one or more of the multiple elements can be segmented to obtain the first segmented image.
  • The pre-recommended enhanced image may be one or more images among the first medical images. Based on the description in step 210, the development effects of blood vessels (and other organs/tissues, lesions, etc.) in the first medical images corresponding to different phases are different. Therefore, the pre-recommended enhanced image can be the first medical image, selected from the multiple first medical images, in which a given blood vessel (or other organ/tissue, or lesion) has the best development effect. Because the development effects differ, the segmentation effects of blood vessels (and other organs/tissues, lesions, etc.) in the corresponding first medical images may also differ (for example, a good development effect yields a good segmentation effect, and a poor development effect yields a poor segmentation effect).
  • a pre-recommended enhanced image may correspond to a blood vessel (or other organ/tissue, or lesion), that is, there is a one-to-one correspondence between the blood vessel (or other organ/tissue, or lesion) and the pre-recommended enhanced image.
  • Alternatively, a pre-recommended enhanced image can correspond to multiple blood vessels (or other organs/tissues, lesions), that is, there is a many-to-one correspondence between blood vessels (or other organs/tissues, lesions) and the pre-recommended enhanced image.
  • For example, the first medical image obtained by scanning within the portal venous phase time range can be determined as the pre-recommended enhanced image of the hepatic portal vein and the hepatic veins.
  • the phases corresponding to the plurality of first medical images may be determined according to the generation time of the plurality of first medical images.
  • For example, the multiple imported first medical images can be sorted according to generation time and defined in sequence as phase-1, phase-2, ..., phase-R, and the blood vessels (and organs/tissues, lesions) can be set to be segmented on the first medical image corresponding to a certain phase.
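  • A minimal sketch of this generation-time ordering, assuming one DICOM series directory per phase and an AcquisitionTime tag on each series (the directory layout and helper name are assumptions):

```python
from pathlib import Path

import pydicom

def label_phases_by_time(series_dirs):
    """Sort one series per phase by acquisition time and label the results
    phase-1 ... phase-R, as described above."""
    def acq_time(d):
        first_file = next(Path(d).glob("*.dcm"))
        ds = pydicom.dcmread(first_file, stop_before_pixels=True)
        return str(ds.AcquisitionTime)  # HHMMSS.frac sorts lexicographically
    ordered = sorted(series_dirs, key=acq_time)
    return {f"phase-{i + 1}": d for i, d in enumerate(ordered)}
```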
  • Figure 3 is a schematic diagram of the segmentation settings of blood vessels in corresponding phases according to some embodiments of this specification. Among them, (a) represents the segmentation setting of blood vessels in the liver, and (b) represents the segmentation setting of blood vessels in the lungs.
  • The four first medical images are defined in sequence as phase-1, phase-2, phase-3 and phase-4 according to generation time, and it is set that the hepatic artery is segmented on the first medical image corresponding to phase-1, the hepatic portal vein on the first medical image corresponding to phase-2, the hepatic vein on the first medical image corresponding to phase-3, and the inferior vena cava on the first medical image corresponding to phase-4.
  • Correspondingly, the first medical image corresponding to phase-1 can be regarded as the pre-recommended enhanced image of the hepatic artery, the first medical image corresponding to phase-2 as the pre-recommended enhanced image of the hepatic portal vein, the first medical image corresponding to phase-3 as the pre-recommended enhanced image of the hepatic vein, and the first medical image corresponding to phase-4 as the pre-recommended enhanced image of the inferior vena cava. See Figure 3(b).
  • The four first medical images of the lungs are imported into the system and defined as phase-1, phase-2, phase-3 and phase-4 according to generation time; the intrapulmonary artery and extrapulmonary artery are set to be segmented on the first medical image corresponding to phase-3, and the intrapulmonary vein and extrapulmonary vein on the first medical image corresponding to phase-4.
  • Correspondingly, the first medical image corresponding to phase-3 can be regarded as the pre-recommended enhanced image of the intrapulmonary and extrapulmonary arteries, and the first medical image corresponding to phase-4 as the pre-recommended enhanced image of the intrapulmonary and extrapulmonary veins.
  • In some embodiments, when the plurality of first medical images are sorted according to generation time into phase-1, phase-2, ..., phase-R, the segmentation settings of the blood vessels (and organs/tissues, lesions) on the first medical image corresponding to a certain phase may be preset.
  • In some embodiments, multiple first medical images can be automatically identified to determine the phase corresponding to each first medical image, thereby ensuring that each blood vessel (or other organ/tissue, lesion) has the best segmentation effect in the first medical image of its corresponding phase (that is, the pre-recommended enhanced image corresponding to that blood vessel).
  • FIG. 4 is an exemplary flowchart of determining pre-recommended enhanced images according to some embodiments of this specification. As shown in Figure 4, based on multiple first medical images, determining the pre-recommended enhanced image may include the following sub-steps:
  • Sub-step 401: automatically identify the multiple first medical images.
  • Sub-step 402: determine the phases corresponding to the plurality of first medical images based on the automatic recognition results.
  • the automatic recognition of the first medical image may refer to image recognition of the first medical image.
  • For a given blood vessel, it can be determined which one of the multiple first medical images has the best segmentation effect for that vessel.
  • The first medical images with the best segmentation effect for each blood vessel can be assigned to different phases. For example, if automatic recognition determines that blood vessel a has the best segmentation effect on a certain first medical image, that image can be defined as phase-1; the first medical image with the best segmentation effect for blood vessel b can be defined as phase-2; and so on, with the first medical image with the best segmentation effect for blood vessel r defined as phase-R.
  • In this case, the first medical image corresponding to phase-1 is the pre-recommended enhanced image of blood vessel a, the first medical image corresponding to phase-2 is the pre-recommended enhanced image of blood vessel b, and so on, with the first medical image corresponding to phase-R being the pre-recommended enhanced image of blood vessel r.
  • In some embodiments, different blood vessels may also share a pre-recommended enhanced image. For example, if the first medical image with the best segmentation effect for blood vessel a and blood vessel b is the same image, this image can serve as the pre-recommended enhanced image of both blood vessel a and blood vessel b.
  • It should be noted that the phase labels defined here for the first medical images can be generated arbitrarily; a phase label simply tags a first medical image to indicate on which first medical image a given blood vessel has the best segmentation effect.
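  • Conceptually, assigning each blood vessel its pre-recommended enhanced image is an argmax over the candidate images of some segmentation-quality score. A hypothetical sketch, where the `quality` callback stands in for any of the supervised or unsupervised measures discussed later:

```python
def pre_recommended_images(first_images, vessels, quality):
    """For each vessel, pick the first medical image on which its coarse
    segmentation scores highest; that image becomes the vessel's
    pre-recommended enhanced image. Two vessels may share one image."""
    return {
        vessel: max(first_images, key=lambda img: quality(img, vessel))
        for vessel in vessels
    }
```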
  • In some embodiments, the phases corresponding to the multiple first medical images can also be determined first based on their generation times, after which the first medical image corresponding to each phase is automatically identified to determine which blood vessels are present in it; the recognition results for the same blood vessel across the first medical images of the different phases are then compared to determine the image in which that vessel is best segmented.
  • For example, multiple first medical images are defined as phase-1, phase-2, ..., phase-R according to generation time, after which the first medical images corresponding to the R phases are automatically identified to determine which blood vessels appear on each. Suppose blood vessel a is recognized in the two first medical images corresponding to phase-1 and phase-3. If the recognition effect of blood vessel a is better in the phase-1 image, the first medical image corresponding to phase-1 can be determined as the pre-recommended enhanced image of blood vessel a; otherwise, the first medical image corresponding to phase-3 can be.
  • For the blood vessels, it is also possible to set them to be segmented on the first medical image corresponding to a specific phase, where the specific phase is determined based on the automatic recognition results.
  • For example, referring to Figure 3(a), it can be set that the hepatic artery is segmented on the first medical image corresponding to phase-1, the hepatic portal vein on the first medical image corresponding to phase-2, the hepatic vein on the first medical image corresponding to phase-3, and the inferior vena cava on the first medical image corresponding to phase-4.
  • The difference is that phase-1, phase-2, phase-3 and phase-4 here are not determined based on generation time, but based on the automatic recognition results.
  • For example, the first medical images are automatically recognized, and if the recognition results determine that the hepatic portal vein is best segmented on one of the first medical images, that image is set as phase-2. As another example, if the recognition results determine that the inferior vena cava is best segmented on one of the first medical images, that image is set as phase-4.
  • automatic identification of multiple first medical images may be implemented in the following manner.
  • For example, rough segmentation can be performed on the multiple first medical images to obtain a rough segmentation mask for each blood vessel (or other organ/tissue, lesion) on each first medical image; then, for the same blood vessel (or other organ/tissue, lesion), the corresponding first medical images across the multiple phases are compared to obtain the first medical image with the best segmentation effect for that blood vessel (or other organ/tissue, lesion).
  • the segmentation effect of the coarse segmentation mask of blood vessels can be determined through unsupervised methods and/or supervised methods.
  • The supervised method may evaluate the gap between the rough segmentation mask and the mask of a reference image (for example, an ideal image) to determine segmentation quality, using measures such as segmentation error, recall, SSIM image similarity, etc.
  • Unsupervised methods do not require a reference image; instead, they evaluate the segmentation quality of a segmented image by how well it matches broad features expected of a desirable segmentation, using measures such as region nonuniformity, the fuzzy segmentation coefficient, fuzzy segmentation entropy, segmentation accuracy (SA), etc.
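  • Illustrative implementations of such quality measures (the formulas follow common definitions rather than text from the patent): recall and Dice as supervised overlap scores against a reference mask, and a simple region-nonuniformity score as an unsupervised measure:

```python
import numpy as np

def recall(pred, ref):
    # Supervised: fraction of reference-mask voxels recovered by the prediction.
    pred, ref = pred.astype(bool), ref.astype(bool)
    return np.logical_and(pred, ref).sum() / max(ref.sum(), 1)

def dice(pred, ref):
    # Supervised: overlap between a coarse mask and an ideal reference mask.
    pred, ref = pred.astype(bool), ref.astype(bool)
    return 2 * np.logical_and(pred, ref).sum() / max(pred.sum() + ref.sum(), 1)

def region_nonuniformity(image, mask):
    # Unsupervised: foreground-intensity variance relative to the whole image;
    # lower values suggest a more homogeneous (better delineated) region.
    fg = image[mask.astype(bool)]
    if fg.size == 0 or image.var() == 0:
        return 1.0
    return (fg.size * fg.var()) / (image.size * image.var())
```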
  • In some embodiments, image processing methods (such as image enhancement methods, threshold segmentation methods, region growing methods, feature point extraction methods, etc.) can be used to coarsely segment the blood vessels (or other organs/tissues, lesions) in the first medical image.
  • In some embodiments, a deep learning convolutional network method (such as UNet) can be used to perform the coarse segmentation operation on blood vessels (or other organs/tissues, lesions) in the first medical image. More descriptions of coarse segmentation and its methods can be found elsewhere in this specification, for example in Figures 7 to 10 and their related descriptions.
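  • As one minimal, assumption-laden stand-in for such methods, a threshold-plus-connected-component pass can serve as a coarse segmentation; the HU band below is illustrative, not a value from the patent:

```python
import numpy as np
from scipy import ndimage

def coarse_vessel_mask(volume_hu, low=150, high=600):
    """Threshold-based coarse segmentation sketch: contrast-enhanced vessels
    in CT fall in a bright HU band (150-600 here is an assumed range), and
    the largest connected component is kept as the coarse mask."""
    candidate = (volume_hu >= low) & (volume_hu <= high)
    labels, n = ndimage.label(candidate)
    if n == 0:
        return candidate
    sizes = ndimage.sum(candidate, labels, index=range(1, n + 1))
    return labels == (int(np.argmax(sizes)) + 1)
```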
  • In some embodiments, the pre-recommended enhanced image may be determined based on the pre-segmentation effects of preset elements of the first target structure set on the first medical images corresponding to different phases.
  • the first set of target structures may include blood vessels within the target organ.
  • the first set of target structures may include target organs and lesions in addition to blood vessels within the target organ.
  • the target organ may include the brain, lungs, liver, spleen, kidneys or any other possible organ tissue, such as the thyroid gland, etc.
  • the preset elements may be elements that are preset to be divided.
  • the preset elements may include one or more of blood vessels, lesions, and target organs within the target organ.
  • the pre-recommended enhanced image of the blood vessels in the target organ may be determined based only on the pre-segmentation effect of the blood vessels in the target organ on the first medical images corresponding to different phases.
  • The pre-recommended enhanced images of the blood vessels within the target organ, the lesions, and/or the target organ can also be determined respectively, based on the pre-segmentation effects of each on the first medical images corresponding to different phases.
  • the pre-segmentation effect may be the result of automatic recognition, for example, the segmentation result of coarse segmentation.
  • the segmentation results of the target organ and/or lesion in the first medical image and its segmentation results in the pre-recommended enhanced image may be fused to obtain a more accurate segmentation result.
  • the outline of at least some elements can be extracted to quickly locate the target area.
  • a contour drawing tool can be used to extract the contours of at least some elements (eg, lesions, blood vessels).
  • The outline correction tool can also be used to correct the outline of an element on the image, for example by adding to or subtracting from it, until it matches the clinical situation.
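  • The contour drawing tool itself is not specified; as a sketch, the outline of a binary element mask on one slice can be traced with scikit-image's find_contours (assumed here as the extraction primitive):

```python
from skimage import measure

def element_contours(mask_slice):
    """Extract the 2D outline of an element (e.g., a lesion) on one slice as
    a list of (row, col) polylines, tracing the 0.5 iso-level of the mask."""
    return measure.find_contours(mask_slice.astype(float), level=0.5)
```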
  • Step 230: segment at least some elements in the second target structure set based on the second medical image to generate a second segmented image.
  • step 230 may be performed by registration module 2420.
  • the different stages may also include a second stage.
  • the second phase may be a time phase for acquiring a second medical image of the scanned object.
  • the second phase may refer to a time period when the scanned subject is in the process of surgery but the surgery has not yet started. For example, the time period when the scanned subject is ready for surgery and is about to undergo a surgical procedure.
  • the time period of the second stage is closer to the surgery time of the scanned object. Therefore, compared with the first medical image obtained in the first stage, the second medical image obtained in the second stage can better reflect the clinical condition of the scanned object.
  • the second medical image refers to the image of the scanning object obtained by plain scanning with the medical scanning equipment during the operation.
  • the second medical image may include CT images, PET-CT images, US images or MR images.
  • the second medical image may be a real-time scan image.
  • the second medical image may also be called a plain scan image or an intraoperative plain scan image, which is a scan image taken during the preparation process of the operation and before the operation is performed (that is, before the needle is actually inserted).
  • a second medical image of the scanned subject may be acquired.
  • In some embodiments, a second medical image of the scanned object, such as a PET-CT image, may be acquired from the medical scanning device 110.
  • the second medical image of the scanned object can be obtained from the terminal 130, the processing device 140, and the storage device 150.
  • Since the second medical image reflects the clinical condition of the scanned object and its elements (for example, blood vessels, lesions, target organs, etc.) more realistically, the first medical image can be registered with the second medical image to obtain a registration map.
  • the registration map can not only more accurately reflect the clinical condition of the scanned object, but also better display the position and/or outline of different elements, thereby making subsequent surgeries safer.
  • at least some elements in the second target structure set may be segmented based on the second medical image to generate a second segmented image.
  • the interventional surgery plan may be generated based on the first medical image, the first segmented image, the pre-recommended enhanced image, the second medical image, and the second segmented image.
  • The interventional surgery plan may refer to the preliminary working plan for puncture path planning. Interventional surgery plans can be used to differentiate between interventionable and non-interventionable areas.
  • The interventionable area refers to the area through which the puncture path can pass (for example, fat, etc.).
  • the non-interventionable area refers to the area (for example, blood vessels, vital organs) that needs to be avoided when planning the surgical plan.
  • non-interventional areas may include non-penetrable areas, non-introducible or implantable areas, and non-injectable areas.
  • the interventional surgery plan may include the first medical image, the pre-recommended enhanced image, the second medical image, the selection of the target structure set, the registration of the first medical image and the second medical image, and so on.
  • the segmentation results of at least part of the elements in the first target structure set may be marked as bookmarks and saved.
  • the process of obtaining the segmentation results of at least some elements in the first target structure set can be recorded as a preoperative offline planning stage.
  • the bookmarks generated in the preoperative offline planning stage may be used to characterize the segmentation results of at least some elements of the first target structure set.
  • In some embodiments, marking the segmentation results of at least part of the elements in the first target structure set as bookmarks may include marking the segmentation results of all elements in the first target structure set as bookmarks, so that the bookmarks can characterize the segmentation results of all elements from the preoperative offline planning stage.
  • the bookmark may also include the first medical image used for element segmentation, such as a pre-recommended enhanced image.
  • Bookmarks can be used to generate interventional surgical plans.
  • an interventional surgery plan can be generated using the first medical image in the bookmark, the pre-recommended enhanced image, and the segmentation results of at least some elements in the first target structure set (such as the first segmented image). For details, see the description below.
  • the bookmark generation method may use DICOM file generation technology, saving the results of the offline planning stage in a designated tag of the DICOM file through serialization technology and compression technology (see the sketch below).
  • the serialization format can use ProtoBuf.
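  • The following is a minimal sketch of this bookmark save/restore idea, assuming pydicom and zlib are available; the private tag group (0x0099), the private creator string, and the `planning_blob` payload are illustrative assumptions rather than values defined in this description (an actual implementation would serialize a ProtoBuf message into the payload):

```python
# Hypothetical sketch: compress the serialized planning results and write
# them into a private DICOM tag, then read them back and decompress.
# The tag group, private creator, and payload names are illustrative.
import zlib
import pydicom

PRIVATE_CREATOR = "OfflinePlanning"       # assumed private creator string
BLOCK_GROUP, BLOCK_OFFSET = 0x0099, 0x01  # assumed private block location

def save_bookmark(ds: pydicom.Dataset, planning_blob: bytes, path: str) -> None:
    """Store the serialized planning results (e.g., a ProtoBuf message)
    compressed inside a private tag of the DICOM dataset."""
    block = ds.private_block(BLOCK_GROUP, PRIVATE_CREATOR, create=True)
    block.add_new(BLOCK_OFFSET, "OB", zlib.compress(planning_blob))
    ds.save_as(path)

def load_bookmark(path: str) -> bytes:
    """Read the private tag back and decompress it; the caller then
    deserializes the bytes (e.g., ProtoBuf ParseFromString)."""
    ds = pydicom.dcmread(path)
    block = ds.private_block(BLOCK_GROUP, PRIVATE_CREATOR)
    return zlib.decompress(block[BLOCK_OFFSET].value)
```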
  • In step 240, the first segmented image and the second segmented image are registered.
  • step 240 may be performed by registration module 2420.
  • the first segmented image and the second segmented image may be registered to determine the registration deformation field; based on the registration deformation field and the spatial position of at least some elements of the first target structure set in the pre-recommended enhanced image, the spatial position of the corresponding elements in the second medical image is determined.
  • In step 250, an intervention path from the needle insertion point on the skin surface to the target area is determined based on the registered second segmented image and/or the second medical image. In some embodiments, step 250 may be performed by the interventional path planning module 2430.
  • the intervention path may refer to the path of the components of the robotic arm 160 from the needle entry point on the skin surface to the target area.
  • the intervention path may be a puncture path of a puncture needle, or the like.
  • the intervention path can be selected in the interventional area to avoid the non-interventionable area, thereby ensuring the safety of the operation.
  • the target point may be determined based on the registered second medical image of the scanned object and the interventional surgical plan, and then the reference path may be determined based on the user's operation related to the target point, thereby determining the target path based on the reference path.
  • the components of the robotic arm can follow the target path from the needle entry point on the skin surface to the target area to perform corresponding surgical operations.
  • For details, please refer to Figures 30 to 36 and their related descriptions.
  • process 200 is only for example and explanation, and does not limit the scope of application of this specification. For those skilled in the art, various modifications and changes can be made to the process 200 under the guidance of this description.
  • Figure 5 is an exemplary flow chart for determining an interventional surgical plan according to some embodiments of this specification. As shown in Figure 5, process 500 may include the following sub-steps:
  • Sub-step 231 obtain bookmarks.
  • the results of the preoperative offline planning stage saved in the bookmarks can be loaded, for example, the first medical image, the pre-recommended enhanced image, and the segmentation results of at least some elements in the first target structure set; this loading process can also be called the bookmark restoration process.
  • loading the bookmarks can be understood as completely restoring the segmentation results in the preoperative offline planning stage. That is, loading the bookmark can restore the segmentation results of at least part of the elements in the first target structure set.
  • bookmark restoration technology can read the specified tag of the bookmark dicom, obtain the results of the preoperative offline planning stage through deserialization technology, and then reset the results to the original sequence.
  • bookmarks may be obtained from terminal 130, processing device 140, and storage device 150.
  • Sub-step 232 segment the second target structure set of the second medical image to obtain a second segmented image of the second target structure set.
  • the regions or organs included in the second target structure set of the second medical image may be determined based on a segmentation mode (eg, a fast segmentation mode and a precise segmentation mode). That is, when the segmentation modes are different, the regions or organs included in the second target structure set are different.
  • in the fast segmentation mode, the second target structure set may include non-interventionable regions.
  • in the precise segmentation mode, the second target structure set may include all important organs in the second medical image.
  • Vital organs refer to the organs that need to be avoided during interventional surgery, such as the liver, kidneys, blood vessels outside the target organ, etc.
  • in addition to the non-interventionable area/all important organs in the second medical image, the second target structure set may also include the target organ and lesions.
  • the second segmented image is a segmented image of the second target structure set during surgery (for example, inaccessible areas/vital organs, target organs, lesions) obtained by segmenting the second medical image.
  • the first target structure set and the second target structure set intersect.
  • for example, if the first target structure set includes blood vessels (for example, blood vessels within the target organ) and the target organ, and the second target structure set includes non-interventionable areas (or all important organs), the target organ, and lesions, then the intersection of the first target structure set and the second target structure set is the target organ; if both the first target structure set and the second target structure set include the lesion, the intersection of the two sets is the target organ and the lesion.
  • target organ setting may be performed before sub-step 232 is performed.
  • the image processing method 200 for interventional surgery can be used for preoperative planning of interventional surgeries in different parts of the body (eg, liver, lungs).
  • the target organ settings are also different.
  • the target organ can be set to the liver, and other organs/tissues (such as kidneys, pancreas, adrenal glands, etc.) are non-interventionable areas in the fast segmentation mode and important organs/tissues in the precise segmentation mode.
  • the user can set the target organ according to the operating site of the interventional surgery.
  • the segmentation mode may be obtained before performing sub-step 232, and the segmentation mode may include a fast segmentation mode and a precise segmentation mode.
  • the segmentation mode may be a segmentation mode used for segmenting the second medical image.
  • segmenting the second target structure set of the second medical image can be implemented in the following manner: segmenting the fourth target structure set of the second medical image according to the segmentation mode.
  • the fourth target structure set of the second medical image may be segmented according to a fast segmentation mode and/or a precise segmentation mode.
  • the fourth target structure set may be part of the second target structure set, for example, non-interventionable areas, all important organs outside the target organ. Under different segmentation modes, the fourth target structure set includes different regions/organs. In some embodiments, in fast segmentation mode, the fourth set of target structures may include inaccessible regions. In some embodiments, in the precise segmentation mode, the fourth target structure set may include preset important organs.
  • region positioning calculation can be performed on the second medical image, and the inaccessible regions can be segmented and extracted.
  • post-processing can be performed on the non-interventionable area and the target organ to ensure that there is no cavity area in the intermediate region between the non-interventionable area and the target organ.
  • the hole area refers to the background area formed by the boundaries connected by the foreground pixels.
  • the non-interventionable area can be obtained by subtracting the target organ and the interventionable area from the abdominal cavity (or chest) area. After this subtraction, there may be a cavity area between the target organ and the non-interventionable area; this cavity area belongs neither to the target organ, nor to the non-interventionable area, nor to the interventionable area.
  • post-processing may include corrosion operations and expansion operations.
  • the erosion operation and the expansion operation may be implemented based on convolution processing of the second medical image and the filter.
  • the erosion operation may convolve the filter with the second medical image and then find the local minimum within the predetermined erosion range, so that the outline of the second medical image shrinks to the desired range and the highlighted area of the target displayed on the second medical image is reduced to a certain range; the expansion operation may convolve the filter with the intraoperative scan image and then find the local maximum within the predetermined expansion range, so that the outline of the second medical image expands to the desired range and the highlighted area of the target displayed on the second medical image is enlarged to a certain range.
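  • A minimal sketch of this cavity-removal post-processing, assuming boolean numpy masks and scipy; the number of iterations is an illustrative choice:

```python
# Applying dilation (local maximum under the filter) followed by erosion
# (local minimum) — a morphological closing — fills small background holes
# between the non-interventionable area and the target organ.
import numpy as np
from scipy import ndimage

def fill_cavities(non_interventional: np.ndarray, target_organ: np.ndarray,
                  iterations: int = 3) -> np.ndarray:
    """Remove hole areas in the intermediate region between the
    non-interventionable area and the target organ."""
    combined = non_interventional | target_organ
    closed = ndimage.binary_dilation(combined, iterations=iterations)
    closed = ndimage.binary_erosion(closed, iterations=iterations)
    # Pixels gained by the closing that are not target organ are assigned
    # to the non-interventionable area, removing the cavity in between.
    return non_interventional | (closed & ~target_organ)
```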
  • the region positioning calculation can be performed on the second medical image first, and then the segmentation and extraction of the inaccessible region can be performed.
  • the blood vessels inside the target organ may be determined based on the segmentation mask of the target organ in the second medical image and the blood vessel mask (the blood vessel mask may be obtained based on the registration mapping between the second medical image and the first medical image). It should be noted that in the fast segmentation mode, only the blood vessels inside the target organ are segmented; in the precise segmentation mode, the blood vessels inside the target organ and other external blood vessels can be segmented.
  • a mask (such as an organ mask) can be a pixel-level classification label.
  • the mask represents the classification of each pixel in the medical image. For example, pixels can be classified as background, liver, spleen, kidney, etc., and the summary area of a specific category is represented by the corresponding label value. For example, all pixels classified as liver are summarized, and that summary area is represented by the label value corresponding to the liver.
  • the label value here can be set according to the specific rough segmentation task.
  • the segmentation mask refers to the corresponding mask obtained after the segmentation operation.
  • the following takes the fast segmentation mode, in which only the thoracic cavity or abdominal cavity region is located and segmented, as an example.
  • first, the regional positioning of the thoracic or abdominal cavity within the scanning range of the second medical image is calculated. Specifically, for the abdominal cavity, the region from the top of the liver to the bottom of the rectum is taken as the positioning area of the abdominal cavity; for the thoracic cavity, the region from the top of the esophagus to the bottom of the lungs (or the top of the liver) is taken as the positioning area of the thoracic cavity. After the regional positioning information of the chest or abdominal cavity is determined, the abdominal or thoracic cavity is segmented, and a further segmentation within the segmented area extracts the interventionable areas (as opposed to non-interventionable areas, for example, penetrable areas such as fat). Finally, the target organ segmentation mask and the interventionable area mask are removed from the abdominal or thoracic cavity segmentation mask to extract the non-interventionable area.
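  • The subtraction just described might be sketched as follows, assuming boolean numpy masks for the cavity, the target organ, and the interventionable area produced by the earlier segmentation steps:

```python
# Non-interventionable area = cavity minus target organ minus
# interventionable (e.g., fat) area, as simple boolean mask arithmetic.
import numpy as np

def non_interventional_mask(cavity: np.ndarray, target_organ: np.ndarray,
                            interventional: np.ndarray) -> np.ndarray:
    """Extract the non-interventionable area from the thoracic or
    abdominal cavity segmentation mask."""
    return cavity & ~target_organ & ~interventional
```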
  • the interventional area may include a fat portion, such as a fat-containing gap between two organs. Taking the liver as an example, part of the area between the subcutaneous area and the liver can be covered by fat. Due to the fast processing speed in the fast segmentation mode, the planning speed is faster, the time is shorter, and the image processing efficiency is improved.
  • all organs of the second medical image can be segmented.
  • all organs of the second medical image may include basic organs and important organs of the second medical image.
  • the basic organs of the second medical image may include the target organ of the second medical image.
  • preset important organs of the second medical image can be segmented. The preset important organs may be determined based on the importance of each organ in the second medical image. For example, all important organs in the second medical image can be used as preset important organs.
  • the ratio of the preset total volume of vital organs in the precise segmentation mode to the total volume of the inaccessible region in the fast segmentation mode may be less than the preset efficiency factor m.
  • the preset efficiency factor m can represent the difference in segmentation efficiency (or segmentation detail) of segmentation based on different segmentation modes.
  • the preset efficiency factor m may be equal to or less than 1.
  • the setting of the preset efficiency factor m is related to the type of interventional surgery. Interventional surgery types may include but are not limited to urological surgery, thoracoabdominal surgery, cardiovascular surgery, obstetrics and gynecology interventional surgery, musculoskeletal surgery, etc.
  • the preset efficiency factor m in urological surgery can be set smaller; the preset efficiency factor m in thoracoabdominal surgery can be set larger.
  • segmentation masks of all important organs of the second medical image are obtained through segmentation.
  • the segmented image content is more detailed, making the surgical planning scheme more selective and also enhancing the robustness of the image processing.
  • Figure 6 is a schematic diagram of tissue segmentation and category setting in different segmentation modes according to some embodiments of this specification.
  • Figure 6(a) shows the tissue segmentation and category settings in the precise segmentation mode
  • Figure 6(b) shows the tissue segmentation and category settings in the fast segmentation mode.
  • in the precise segmentation mode, a single organ/tissue can be segmented; the target organ, that is, the liver, is set as the area to be penetrated, and other important organs, such as the kidneys, pancreas, gallbladder, stomach, spleen, heart, lungs, adrenal glands, or other custom tissues, are set as danger zones.
  • in the fast segmentation mode, the non-interventionable area can be segmented; the target organ, that is, the liver, is set as the area to be penetrated, and the non-penetrable tissue (that is, the non-interventionable area) and the custom tissues are designated as danger zones.
  • when the organ/tissue category is the area to be penetrated, the planned needle path can pass through the organ/tissue; when the organ/tissue category is a danger zone, a safe distance between the organ/tissue and the planned needle path needs to be maintained to improve the safety and stability of needle path planning.
  • the safe distance between the organ/tissue and the planned path of the needle track may be preset.
  • the safe distance between each organ/tissue and the planned needle path can be initially set based on empirical values.
  • the doctor can also adapt the safety distance according to the actual situation.
  • the doctor can check and set the organ/tissue category according to actual needs.
  • the custom tissue can be reasonably set by a doctor based on the patient's actual condition.
  • an outline drawing tool may also be used to extract the outline of at least some elements (eg, organs).
  • the outline correction tool can also be used to correct the outline of the element on the image, such as adding or subtracting, until it meets the clinical situation.
  • FIG. 7 is an exemplary flowchart of a segmentation process involved in an image processing method for interventional surgery according to some embodiments of this specification.
  • the segmentation process 300 involved in the image processing method for interventional surgery may include the following steps:
  • Step 310 Perform rough segmentation on at least one element in the target structure set in the medical image
  • Step 320 Obtain a mask of at least one element
  • Step 330 determine the positioning information of the mask
  • Step 340 Accurately segment at least one element based on the positioning information of the mask.
  • the medical image may include a first medical image, a pre-recommended enhanced image, and a second medical image.
  • the target structure set may include any one or more of the first target structure set, the second target structure set, and the fourth target structure set.
  • a threshold segmentation method, a region growing method, or a level set method may be used to perform a coarse segmentation operation on at least one element in the target structure set in the medical image.
  • Elements may include the target organ in the medical image, blood vessels within the target organ, lesions, non-interventionable areas, all important organs, etc.
  • coarse segmentation based on the threshold segmentation method can be implemented in the following manner: multiple different pixel threshold ranges can be set to classify each pixel in the medical image according to the pixel value of the input medical image, Divide pixels whose pixel values are within the same pixel threshold range into the same area.
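  • As an illustration, a minimal multi-threshold classification might look as follows; the threshold values are illustrative assumptions, not values defined in this description:

```python
# Assign each pixel a class label according to which preset pixel value
# range it falls into (e.g., rough air / soft-tissue / bone ranges on CT).
import numpy as np

def threshold_segment(image: np.ndarray, bins=(-900, -100, 100, 300)) -> np.ndarray:
    """Label pixels by the threshold interval their value falls in."""
    return np.digitize(image, bins=np.asarray(bins))
```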
  • coarse segmentation based on the region growing method can be implemented in the following manner: based on known pixels on the medical image or a predetermined area composed of pixels, preset similarity discrimination conditions according to needs, and based on the predetermined Set the similarity discrimination condition, compare a pixel with its surrounding pixels, or compare a predetermined area with its surrounding areas, merge pixels or areas with high similarity, stop merging until the above process cannot be repeated, and complete the rough segmentation process.
  • the preset similarity discrimination condition can be determined based on preset image features, for example, such as grayscale, texture and other image features.
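  • A simple region-growing sketch under these assumptions (a 3D image, a single seed point, and grayscale similarity to the running region mean as the preset similarity condition; the tolerance and 6-connectivity are illustrative choices):

```python
# Grow a region outward from a seed, merging neighbors whose grayscale
# value is similar to the current region mean, until no merge is possible.
import numpy as np
from collections import deque

def region_grow(image: np.ndarray, seed: tuple, tol: float = 40.0) -> np.ndarray:
    mask = np.zeros(image.shape, dtype=bool)
    mask[seed] = True
    region_sum, region_n = float(image[seed]), 1
    queue = deque([seed])
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        p = queue.popleft()
        for off in offsets:
            q = tuple(np.add(p, off))
            if all(0 <= q[i] < image.shape[i] for i in range(3)) and not mask[q]:
                # merge the neighbor if it is similar to the region mean
                if abs(float(image[q]) - region_sum / region_n) < tol:
                    mask[q] = True
                    region_sum += float(image[q]); region_n += 1
                    queue.append(q)
    return mask
```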
  • coarse segmentation based on the level set method can be implemented in the following manner: the target contour of the medical image is set as the zero level set of a higher-dimensional function, the function is evolved, and the zero level set is extracted from the output to obtain the target contour; the pixel area within the contour range is then segmented.
  • a method based on a deep learning convolutional network can be used to perform a coarse segmentation operation on at least one element of the target structure set in the medical image.
  • methods based on deep learning convolutional networks may include segmentation methods based on fully convolutional networks.
  • the convolutional network can adopt a network framework based on a U-shaped structure, such as UNet, etc.
  • the network framework of the convolutional network may be composed of an encoder, a decoder, and a residual connection (skip connection) structure, where the encoder and the decoder are composed of convolutional layers or convolutional layers combined with an attention mechanism: the convolutional layers are used to extract features, the attention module is used to apply more attention to key areas, and the residual connection structure is used to pass the features of different dimensions extracted by the encoder to the decoder part; the segmentation result is finally output via the decoder.
  • a method based on deep learning convolutional networks for rough segmentation can be implemented in the following manner: the encoder of the convolutional neural network performs feature extraction on the medical image through convolution, and the decoder of the convolutional neural network then restores the features into a pixel-level segmentation probability map.
  • the segmentation probability map represents the probability that each pixel in the image belongs to a specific category. Finally, the segmentation probability map is output as a segmentation mask, thereby completing rough segmentation.
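  • A compact U-shaped network of this kind might be sketched as follows in PyTorch; the depth and channel sizes are illustrative and do not represent the specific network used here:

```python
# Minimal UNet-style sketch: a convolutional encoder/decoder with skip
# connections, outputting a per-pixel segmentation probability map.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.enc1, self.enc2 = conv_block(in_ch, 32), conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)           # 64 (skip) + 64 (upsampled)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)            # 32 (skip) + 32 (upsampled)
        self.head = nn.Conv2d(32, n_classes, 1)   # pixel-level class scores

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return torch.softmax(self.head(d1), dim=1)            # probability map
```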
  • FIG. 8 is an exemplary flowchart of a process of determining positioning information of an element mask according to some embodiments of this specification.
  • Figure 9 is an exemplary flow chart of a soft connected domain analysis process according to the element mask shown in some embodiments of this specification.
  • Figure 10 is a comparison diagram of exemplary effects of coarse segmentation using soft connected domain analysis on element masks according to some embodiments of this specification.
  • determining the positioning information of the element mask can be implemented in the following manner: performing soft connected domain analysis on the element mask.
  • Connected domain that is, connected area, generally refers to the image area composed of foreground pixels with the same pixel value and adjacent positions in the image.
  • step 330 performs soft connected domain analysis on the element mask, which may include the following sub-steps:
  • Sub-step 331, determine the number of connected domains
  • Sub-step 332 when the number of connected domains ≥ 2, determine the connected domains whose areas meet the preset conditions;
  • Sub-step 333 When the ratio of the area of the largest connected domain among the multiple connected domains to the total area of the connected domains is greater than the first threshold M, it is determined that the largest connected domain meets the preset conditions;
  • Sub-step 334 determine that the retained connected domain at least includes the maximum connected domain
  • Sub-step 335 Determine the positioning information of the element mask based on the preserved connected domain.
  • the preset conditions refer to the conditions that a connected domain needs to meet in order to be retained. In some embodiments, the preset condition can be a constraint on the area of the connected domain.
  • the medical image may include multiple connected domains, and the multiple connected domains have different areas. Multiple connected domains with different areas can be sorted according to area size, for example, from large to small, and the sorted connected domains can be recorded as the first connected domain, the second connected domain, and the kth connected domain. Among them, the first connected domain may be the connected domain with the largest area among multiple connected domains, also called the maximum connected domain.
  • the preset conditions for determining connected domains with different area orders as retained connected domains can be different. For details, see the relevant description in Figure 9 .
  • the connected domains that meet the preset conditions may include: connected domains whose areas are ordered from largest to smallest and are within the preset order n. For example, when the preset order n is 3, it is possible to determine whether each connected domain is a preserved connected domain in order according to the area order and according to the corresponding preset conditions. That is, first determine whether the first connected domain is a retained connected domain, and then determine whether the second connected domain is a retained connected domain.
  • the preset order n may be set based on the category of the element (or target structure), for example, chest target structure, abdominal target structure.
  • the first threshold M may range from 0.8 to 0.95, within which the expected accuracy of soft connected domain analysis can be ensured.
  • the first threshold M may range from 0.9 to 0.95, further improving the accuracy of soft connected domain analysis. In some embodiments, the first threshold M may be set based on the category of the target structure, for example, chest target structure, abdominal target structure. In some embodiments, the preset order n/first threshold M can also be reasonably set based on machine learning and/or big data, and is not further limited here.
  • step 330 performs soft connected domain analysis on the element mask, which can be performed in the following manner:
  • when the number of connected domains is 0, it means that the corresponding mask is empty, that is, the mask acquisition or rough segmentation failed or the segmentation object does not exist, and no processing is performed.
  • for example, for the spleen in the abdominal cavity, there may be a situation where the spleen has been removed and the mask of the spleen is empty.
  • when the number of connected domains is 1, it means that there is only one connected domain, with no false positives and no separation or disconnection, and the connected domain is retained. It can be understood that when the number of connected domains is 0 or 1, there is no need to use a preset condition to determine whether the connected domain is a retained connected domain.
  • when the number of connected domains is 2, connected domains A and B are obtained according to the size of the area (S), where the area of connected domain A is larger than the area of connected domain B, that is, S(A) > S(B).
  • connected domain A can also be called the first connected domain or the maximum connected domain; connected domain B can be called the second connected domain.
  • the preset condition that a connected domain needs to satisfy to be retained may be the relationship between the ratio of the maximum connected domain area to the total connected domain area and a threshold. When the ratio of the area of A to the total area of A and B is greater than the first threshold M, that is, S(A)/S(A+B) > M, connected domain B can be determined to be a false positive area and only connected domain A is retained (that is, connected domain A is determined to be a retained connected domain); when the ratio of the area of A to the total area of A and B is less than or equal to the first threshold M, both A and B can be determined to be parts of the element mask, and connected domains A and B are retained simultaneously (that is, connected domains A and B are determined to be retained connected domains).
  • when the number of connected domains is 3 or more, the preset condition that the maximum connected domain (i.e., connected domain A) needs to meet to be retained may be the relationship between the ratio of the maximum connected domain area to the total connected domain area and a threshold (for example, the first threshold M), or the relationship between the ratio of the second connected domain area to the maximum connected domain area and a threshold (for example, the second threshold N). When the ratio of the area of connected domain A to the total area S(T) is greater than the first threshold M, that is, S(A)/S(T) > M, or the ratio of the area of connected domain B to the area of connected domain A is less than the second threshold N, that is, S(B)/S(A) < N, connected domain A is determined as the element mask part and retained (that is, connected domain A is a retained connected domain), and the remaining connected domains are all determined as false positive areas; otherwise, the calculation continues, that is, it continues to be determined whether the second connected domain (i.e., connected domain B) is a retained connected domain.
  • the preset condition that connected domain B needs to satisfy as a retained connected domain may be the relationship between the ratio of the sum of the areas of the first connected domain and the second connected domain to the total area of the connected domains and the first threshold M. In some embodiments, the preset condition that connected domain B needs to satisfy as a retained connected domain may also be the relationship between the ratio of the area of the third connected domain to the sum of the areas of the first connected domain and the second connected domain and a threshold (for example, the second threshold N).
  • the judgment method of connected domain C is similar to the judgment method of connected domain B.
  • the preset condition that connected domain C needs to satisfy as a retained connected domain can be the relationship between the ratio of the sum of the areas of the first, second, and third connected domains to the total connected domain area and the first threshold M, or the relationship between the ratio of the fourth connected domain area to the sum of the areas of the first, second, and third connected domains and a threshold (for example, the second threshold N).
  • FIG. 8 only shows the judgment of whether three connected domains are retained connected domains. It can also be understood that the value of the preset order n in Figure 8 is set to 4; therefore, only the connected domains of order 1, 2, and 3, that is, connected domain A, connected domain B, and connected domain C, are judged as to whether they are retained connected domains.
  • the second threshold N may range from 0.05 to 0.2; within this value range, the soft connected domain analysis can achieve the expected accuracy. In some embodiments, the second threshold N may be set to 0.05, with which a relatively excellent soft connected domain analysis accuracy can be obtained.
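  • A sketch of this soft connected domain analysis, assuming a boolean numpy mask and scipy; the default values of M, N, and the preset order n follow the ranges mentioned above:

```python
# Label connected components, sort by area, and at each order test the
# cumulative-area ratio against M and the next component's relative area
# against N to decide how many components to keep; the rest are treated
# as false positive areas.
import numpy as np
from scipy import ndimage

def soft_connected_domain(mask: np.ndarray, M: float = 0.9, N: float = 0.05,
                          n_order: int = 4) -> np.ndarray:
    labels, count = ndimage.label(mask)
    if count <= 1:          # empty mask (no processing) or a single
        return mask         # component (retained as-is)
    areas = np.bincount(labels.ravel())[1:]     # area of each component
    order = np.argsort(areas)[::-1]             # sort by area, descending
    total = areas.sum()
    kept = 1                                    # the maximum component is kept first
    for i in range(1, min(count, n_order)):
        cum = areas[order[:i]].sum()            # area of components kept so far
        nxt = areas[order[i]]                   # area of the next component
        # stop and discard the rest when the kept components dominate the
        # total area, or the next component is negligibly small
        if cum / total > M or nxt / cum < N:
            break
        kept = i + 1
    keep_labels = order[:kept] + 1              # label ids are 1-based
    return np.isin(labels, keep_labels)
```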
  • In Figure 10, the upper and lower left are respectively the cross-sectional and three-dimensional medical images of the coarse segmentation results without soft connected domain analysis, and the upper and lower right are respectively the cross-sectional and three-dimensional medical images of the coarse segmentation results with soft connected domain analysis. After comparison, it can be seen that coarse segmentation of the element mask based on soft connected domain analysis removes the false positive areas outlined by the box in the left image. Compared with previous connected domain analysis methods, the accuracy and reliability of excluding false positive areas are higher, which directly contributes to the subsequent reasonable extraction of bounding boxes from the element mask positioning information and improves segmentation efficiency.
  • the positioning information of the element mask may be the position information of the enclosing rectangle of the element mask, for example, the coordinate information of the border line of the enclosing rectangle.
  • the bounding rectangle of the element mask covers the positioning area of the element.
  • the bounding rectangle may be displayed in the medical image in the form of a bounding rectangular frame.
  • the bounding rectangle may be constructed based on the outermost boundary of the connected area in each direction, for example, the outermost boundaries of the connected area in the up, down, left, and right directions, to construct a circumscribed rectangular frame relative to the element mask.
  • the bounding rectangle of the element mask may be a rectangular box or a combination of multiple rectangular boxes.
  • it can be a single rectangular frame with a larger area, or a rectangular frame with a larger area formed by combining multiple rectangular frames with smaller areas.
  • the bounding rectangle of the element mask may be a circumscribed rectangle consisting of only one rectangular frame. For example, a larger circumscribed rectangle is constructed based on the outermost boundaries of the connected area in each direction; this large-area circumscribed rectangle can be applied to elements whose mask forms a single connected area.
  • the bounding rectangle of the element mask may be a circumscribing rectangular frame composed of multiple rectangular frames.
  • multiple rectangular boxes corresponding to the multiple connected areas are combined into one rectangular box based on their outer edges. It is understandable that if the three rectangular boxes corresponding to three connected areas are combined into one total circumscribed rectangular box, the calculation is processed with the total circumscribed rectangular box, which can reduce the amount of calculation while ensuring the expected accuracy.
  • the location information of the multiple connected domains can be determined first, and then the positioning information of the element mask is obtained based on the location information of the multiple connected domains. For example, you can first determine the connected domain among multiple connected domains that meets the preset conditions, that is, retain the location information of the connected domain, and then obtain the positioning information of the element mask based on the retained location information of the connected domain.
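  • A sketch of extracting such positioning information from a mask with numpy; the box format (one (min, max) pair per axis) and helper names are illustrative:

```python
# The outermost foreground coordinates along each axis give the
# circumscribed box; several component boxes can be merged into one
# total box to reduce computation.
import numpy as np

def bounding_box(mask: np.ndarray):
    """Return (min, max) index pairs per axis, or None if positioning
    fails (empty mask)."""
    if not mask.any():
        return None
    coords = np.nonzero(mask)
    return [(int(c.min()), int(c.max())) for c in coords]

def merge_boxes(boxes):
    """Combine several boxes into one total circumscribed box."""
    return [(min(b[i][0] for b in boxes), max(b[i][1] for b in boxes))
            for i in range(len(boxes[0]))]
```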
  • determining the positioning information of the mask may also include the following operations: positioning the element mask based on the positioning coordinates of the reference element.
  • this operation may be performed if positioning of the element mask's bounding rectangle fails. It is understandable that when the coordinates of the enclosing rectangle of the element mask do not exist, it is judged that the positioning of the corresponding element fails.
  • the reference element can select an element with relatively stable positioning (for example, an organ with relatively stable positioning), and the probability of positioning failure when positioning the element is low, thereby achieving precise positioning of the element mask.
  • since the probability of positioning failure is low for the liver, stomach, spleen, and kidneys in the abdominal cavity, and for the lungs in the thoracic cavity, the positioning of these organs is relatively stable. Therefore, the liver, stomach, spleen, and kidneys can be used as reference organs in the abdominal cavity; that is, the reference elements can include the liver, stomach, spleen, kidneys, lungs, or any other suitable organ tissue.
  • the organ mask in the abdominal cavity can be repositioned based on the positioning coordinates of the liver, stomach, spleen, and kidneys. In some embodiments, the organ mask in the chest range may be positioned based on the positioning coordinates of the lungs.
  • the element mask can be repositioned using the positioning coordinates of the reference element as the reference coordinate.
  • for the abdominal cavity, the positioning coordinates of the liver, stomach, spleen, and kidneys are used as the repositioning coordinates, and elements that failed to be positioned in the abdominal cavity are repositioned accordingly; for the thoracic cavity, the positioning coordinates of the lungs are used as the repositioning coordinates, and elements that failed to be positioned in the chest are repositioned accordingly.
  • the positioning coordinates of the top of the liver, the bottom of the kidneys, the left edge of the spleen, and the right edge of the liver can be used as the repositioning coordinates in the cross-sectional direction (upper, lower) and the coronal direction (left, right), and the most anterior and posterior ends of the coordinates of these four organs can be taken as the repositioning coordinates in the sagittal direction (anterior, posterior); based on these, the elements that failed to be positioned in the abdominal cavity are repositioned.
  • the circumscribed rectangular frame formed by the lung positioning coordinates is expanded by a certain number of pixels, and the elements that failed to be positioned in the chest are repositioned accordingly.
  • Figure 11 is an exemplary flowchart of a process of accurately segmenting elements according to some embodiments of this specification.
  • accurately segmenting at least one element based on the positioning information of the mask may include the following sub-steps:
  • Sub-step 341 Perform preliminary precise segmentation on at least one element.
  • the preliminary precise segmentation may be a precise segmentation based on the positioning information of the rough segmented element mask.
  • a preliminary precise segmentation of the element may be performed based on the input data and the bounding rectangular frame positioned by rough segmentation. Precisely segmented element masks can be generated through preliminary accurate segmentation.
  • Sub-step 342 determine whether the positioning information of the element mask is accurate. Through step 342, it can be determined whether the positioning information of the element mask obtained by rough segmentation is accurate, and further whether the rough segmentation is accurate.
  • the element mask of the preliminary precise segmentation can be calculated to obtain its positioning information, and the positioning information of the rough segmentation can be compared with the positioning information of the precise segmentation.
  • the bounding rectangular frame of the roughly segmented element mask can be compared with the bounded rectangular frame of the precisely segmented element mask to determine the difference between the two.
  • the circumscribed rectangular frame of the roughly segmented element mask can be compared with the circumscribed rectangular frame of the precisely segmented element mask in the six directions of three-dimensional space (that is, the entire circumscribed frame is a cuboid in three-dimensional space) to determine the difference between the two.
  • whether the positioning information of the roughly segmented element mask is accurate can be determined based on the positioning information of the initially precisely segmented element mask. In some embodiments, whether the judgment result is accurate can be determined based on the difference between the coarse segmentation positioning information and the precise segmentation positioning information.
  • the positioning information may be the circumscribed rectangle (e.g., circumscribed rectangular frame) of the element mask; whether the circumscribed rectangle of the roughly segmented element mask is accurate is determined based on the circumscribed rectangle of the roughly segmented element mask and the circumscribed rectangle of the precisely segmented element mask.
  • the difference between the coarse segmentation positioning information and the precise segmentation positioning information may refer to the distance between the closest border lines in the coarse segmentation enclosing rectangular frame and the precise segmentation enclosing rectangular frame.
  • when the positioning information of the coarse segmentation differs significantly from the positioning information of the precise segmentation (that is, the precisely segmented circumscribed rectangular frame comes close to the roughly segmented circumscribed rectangular frame, so the distance between their closest border lines is relatively small), the positioning information of the coarse segmentation is inaccurate. Here, the roughly segmented circumscribed rectangular frame is obtained by expanding the border lines of the original rough segmentation close to the element by a number of pixels (for example, 15-20 voxels).
  • whether the positioning information of coarse segmentation is accurate can be determined based on the relationship between the distance between the nearest border lines in the roughly segmented circumscribed rectangular frame and the precisely segmented circumscribed rectangular frame and a preset threshold, for example , when the distance is less than the preset threshold, it is determined to be inaccurate, and when the distance is greater than the preset threshold, it is determined to be accurate.
  • the value range of the preset threshold may be less than or equal to 5 voxels.
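  • A sketch of this accuracy judgment, reusing the box format from the earlier sketch; the threshold default follows the range above:

```python
# For each face of the (already expanded) roughly segmented box, measure
# the distance to the corresponding face of the precisely segmented box;
# directions where the distance falls below the threshold are flagged
# as inaccurate.
def inaccurate_directions(coarse_box, precise_box, threshold: int = 5):
    """Return per-axis (low_side, high_side) flags; True = inaccurate."""
    flags = []
    for (c_lo, c_hi), (p_lo, p_hi) in zip(coarse_box, precise_box):
        flags.append((abs(p_lo - c_lo) < threshold,   # low-side border too close
                      abs(p_hi - c_hi) < threshold))  # high-side border too close
    return flags
```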
  • FIG. 12 to 13 are exemplary schematic diagrams of positioning information determination of element masks according to some embodiments of this specification.
  • FIG. 14A is an exemplary diagram of determining the sliding direction based on the positioning information of the element mask according to some embodiments of this specification.
  • Figures 12 and 13 show the element mask A obtained by rough segmentation and the circumscribed rectangular frame B of element mask A (that is, the positioning information of element mask A), as well as the circumscribed rectangular frame C of the element mask obtained by preliminary precise segmentation based on the roughly segmented circumscribed rectangular frame.
  • Figure 14A also shows the sliding window B1 obtained after sliding the roughly divided circumscribed rectangular frame B.
  • in Figure 14A, (a) is a schematic diagram before the sliding operation and (b) is a schematic diagram after the sliding operation.
  • a planar rectangular frame within a plane of a three-dimensional circumscribed rectangular frame is used as an example.
  • Sub-step 343a when the judgment result is inaccurate, obtain accurate positioning information based on the adaptive sliding window.
  • when the coarse segmentation result is inaccurate, the elements obtained by precise segmentation are likely to be inaccurate. Corresponding adaptive sliding window calculations can be performed to obtain accurate positioning information, and precise segmentation can then continue.
  • obtaining accurate positioning information based on adaptive sliding windows can be implemented in the following manner: determining at least one direction in which the positioning information is inaccurate; and performing adaptive sliding window calculations in the direction according to the overlap rate parameter.
  • at least one direction in which the circumscribed rectangular frame is inaccurate can be determined; after determining that the rough segmented circumscribed rectangular frame is inaccurate, the rough segmented circumscribed rectangular frame is slid in the corresponding direction according to the input preset overlap rate parameter, that is, Sliding window operation, and repeat the sliding window operation until all bounding rectangular boxes are completely accurate.
  • the overlap rate parameter refers to the ratio of the overlap area between the initial circumscribed rectangular frame and the sliding circumscribed rectangular frame to the area of the initial circumscribed rectangular frame.
  • the larger the overlap rate parameter, the shorter the sliding step length of the sliding window operation.
  • if you want the sliding window calculation process to be more concise (that is, fewer sliding window steps), the overlap rate parameter can be set smaller; if you want the sliding window calculation results to be more accurate, the overlap rate parameter can be set larger.
  • the sliding step size for sliding window operation may be calculated according to the current overlap rate parameter. According to the judgment method in Figure 12, it can be seen that the directions corresponding to the right and lower border lines of the roughly segmented circumscribed rectangular frame B in Figure 14A are inaccurate.
  • the direction corresponding to the right border line of the circumscribed rectangular frame B is recorded as the first direction (the first direction is perpendicular to the right border line of B), and the direction corresponding to the lower border line is recorded as the second direction (the second direction is perpendicular to the lower border line of B).
  • for example, if the length of circumscribed rectangular frame B is a and the overlap rate parameter is 60%, the corresponding step size can be determined to be a*(1-60%). The right border line of circumscribed rectangular frame B can then slide a*(1-60%) along the first direction, and the lower border line of circumscribed rectangular frame B can slide the corresponding step along the second direction.
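  • One sliding-window step under these rules might be sketched as follows (box format as in the earlier sketches; the direction encoding is an illustrative choice):

```python
# The step size follows step = side_length * (1 - overlap_rate), and the
# box slides along the flagged axis; this is repeated until the box is
# judged accurate in all directions.
def slide_box(box, axis: int, toward_high: bool, overlap_rate: float = 0.6):
    lo, hi = box[axis]
    step = int(round((hi - lo) * (1.0 - overlap_rate)))  # e.g., a * (1 - 60%)
    shift = step if toward_high else -step
    new_box = list(box)
    new_box[axis] = (lo + shift, hi + shift)
    return new_box
```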
  • the pixel point coordinates in the four directions corresponding to the four border lines of the finely segmented circumscribed rectangular frame C are compared one by one with the pixel coordinates in the four directions corresponding to the four border lines of the coarsely segmented circumscribed rectangular frame B.
  • when the difference in pixel coordinates in a direction is less than the coordinate difference threshold of 8pt, it can be determined that the roughly segmented circumscribed rectangular frame in Figure 12 is inaccurate in that direction.
  • for example, if the top difference is 20pt, the bottom difference is 30pt, the right difference is 1pt, and the left difference is 50pt, only the right direction, whose difference is less than 8pt, is determined to be inaccurate.
  • B1 is a circumscribed rectangular frame (also called a sliding window) obtained by sliding the roughly segmented circumscribed rectangular frame B.
  • the sliding window is a roughly segmented circumscribed rectangular frame that meets the expected accuracy standard.
  • the direction corresponding to each border line that does not meet the standard is moved in sequence.
  • each side's sliding depends on the overlap rate of B1 and B, where the overlap rate can be the ratio of the current overlapping area of the roughly segmented circumscribed rectangular frame B and the sliding window B1 to the total area; for example, the current overlap rate may be 40%, and so on.
  • the sliding order of the border lines of the roughly divided circumscribed rectangular frame B may be from left to right, from top to bottom, or other feasible order, which is not further limited here.
  • Figures 14B-14E are exemplary schematic diagrams of accurate segmentation after sliding windows according to some embodiments of this specification.
  • after the adaptive sliding window, an accurate roughly segmented circumscribed rectangular frame and its accurate coordinate values are obtained; the new sliding window is then accurately segmented, and the accurate segmentation result is superimposed with the preliminary accurate segmentation result to obtain the final accurate segmentation result.
  • the sliding window operation can be performed on the original sliding window B to obtain the sliding window B1 (the maximum range of the circumscribed rectangular frame after the sliding window operation).
  • B slides the corresponding step along the first direction to obtain the sliding window B1-1, and then the entire range of the sliding window B1-1 is accurately segmented to obtain the accurate segmentation result of the sliding window B1-1.
  • B can slide the corresponding step along the second direction to obtain the sliding window B1-2, and then accurately segment the entire range of the sliding window B1-2 to obtain an accurate segmentation result of the sliding window B1-2.
  • B can obtain the sliding window B1-3 by sliding (for example, B can obtain the sliding window B1-2 by sliding as shown in Figure 14C, and then slide the sliding window B1-2 to obtain the sliding window B1-3) , and then accurately segment the entire range of the sliding window B1-3, and obtain the accurate segmentation result of the sliding window B1-3.
  • the precise segmentation results of sliding window B1-1, sliding window B1-2, and sliding window B1-3 are superimposed with the preliminary precise segmentation result to obtain the final precise segmentation result.
  • Sliding window B1 is the final sliding window result obtained by performing continuous sliding window operations on original sliding window B, namely sliding window B1-1, sliding window B1-2 and sliding window B1-3.
  • among the precise segmentation results of sliding window B1-1, sliding window B1-2, and sliding window B1-3, there may be repeated overlapping parts. For example, there may be an intersection between sliding window B1-1 and sliding window B1-2, and the intersection may be repeatedly superimposed.
  • the following method can be used to handle this: for a certain part of element mask A, if the segmentation result of one sliding window is accurate for this part and the segmentation result of the other sliding window is inaccurate, the accurate sliding window's segmentation result is used as the segmentation result of this part; if the segmentation results of both sliding windows are accurate, the segmentation result of the right sliding window is used as the segmentation result of this part; if neither sliding window's segmentation result is accurate, the segmentation result of the right sliding window is used as the segmentation result of this part, and precise segmentation continues until the segmentation result is accurate.
  • obtaining accurate positioning information based on the adaptive sliding window is a cyclic process. Specifically, after the precisely segmented border lines are compared with the coarsely segmented border lines, the updated coordinate values of the precisely segmented circumscribed rectangular frame can be obtained through the adaptive sliding window; the precisely segmented circumscribed rectangular frame is then expanded by a certain number of pixels and set as the roughly segmented circumscribed rectangular frame of a new round of the cycle, the new circumscribed rectangular frame is precisely segmented again to obtain a new precisely segmented circumscribed rectangular frame, and it is calculated whether the accuracy requirements are met. If the accuracy requirements are met, the cycle ends; otherwise, the cycle continues.
  • a deep convolutional neural network model may be used to perform precise segmentation on at least one element in the coarse segmentation.
  • the historical medical images initially acquired before rough segmentation can be used as training data, and the historical accurate segmentation result data can be used as labels to train the deep convolutional neural network model.
  • the historical medical images and the historical accurate segmentation result data may be obtained from the medical scanning device 110.
  • historical scanned medical images of the scanned object and historical accurate segmentation result data can be obtained from the terminal 130, the processing device 140, and the storage device 150.
  • Sub-step 343b when the judgment result is accurate, the preliminary accurate segmentation result is output as the segmentation result.
  • that is, when the judgment result of the coarse segmentation is accurate, the accurate segmentation result data of at least one element can be output.
  • image post-processing operations may be performed before outputting the segmentation results.
  • Image post-processing operations may include edge smoothing and/or image denoising, etc.
  • edge smoothing processing may include smoothing processing or blurring processing to reduce noise or distortion of medical images.
  • smoothing processing or blurring processing may adopt the following methods: mean filtering, median filtering, Gaussian filtering, and bilateral filtering.
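  • These filters might be applied as follows, using standard scipy filters (bilateral filtering, also mentioned above, typically comes from OpenCV and is omitted from this sketch); the kernel sizes are illustrative:

```python
# Edge smoothing / denoising options applied to the segmentation output.
import numpy as np
from scipy import ndimage

def postprocess(image: np.ndarray, method: str = "median") -> np.ndarray:
    if method == "mean":
        return ndimage.uniform_filter(image, size=3)
    if method == "median":
        return ndimage.median_filter(image, size=3)
    if method == "gaussian":
        return ndimage.gaussian_filter(image, sigma=1.0)
    raise ValueError(f"unknown method: {method}")
```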
  • Figure 15 is an exemplary effect comparison diagram of segmentation results according to some embodiments of this specification.
  • In Figure 15, the upper and lower left are respectively the cross-sectional and three-dimensional medical images of rough segmentation results using traditional technology, and the upper and lower right are respectively the cross-sectional and three-dimensional medical images obtained using the organ segmentation method provided by the embodiments of the present application.
  • when segmenting at least some elements in the target structure set (for example, the first target structure set, the second target structure set, etc.), over-segmentation may occur, resulting in overlapping outlines of different elements at the same location in the medical image (that is, the outlines of different elements overlap).
  • at least some elements in the target structure set can be marked with priorities; and the outlines of the elements can be refreshed and displayed according to the order of priorities.
  • the outline of an element with a higher priority level overrides the outline of an element with a lower priority level.
  • the priority order of different elements from high to low may be dangerous area, target area, and penetrable area.
  • the outlines of the elements can be refreshed and displayed according to the order of priority, so that the element outline of the target area covers the element outline of the penetrable area, and the element outline of the danger area covers the element outline of the target area.
  • elements classified as dangerous areas have the highest priority level, which can ensure that the outlines of elements classified as dangerous areas will not be covered by element outlines of other categories, thus improving the safety and stability of subsequent interventional surgeries.
  • all elements in the target structure set may be marked with priority.
  • prioritizing at least some elements in the target structure set may include: marking important elements in the target structure set (for example, blood vessels in the target organ in the first target structure set, important organs/tissues in the second target structure set in the precise segmentation mode, etc.) with priorities, while the priorities of other organs/tissues can adopt default levels (that is, other organs/tissues can have preset default priorities).
  • prioritizing at least some elements in the target structure set may include: prioritizing all elements in the target structure set. All elements here can be all organs/tissues that are segmented.
  • the target organ, blood vessels, and lesions within the target organ can be marked with priority.
  • the priority of an element in the target structure set may be related to the element category (e.g., area to be penetrated, danger area).
  • An element whose element type is a danger zone may have a higher priority than an element whose element type is a zone to be penetrated.
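  • A sketch of the priority refresh idea: masks are painted in ascending priority so that higher-priority outlines cover lower-priority ones; the priority values and label values are illustrative:

```python
# Paint element masks onto a label canvas in ascending priority order;
# a higher-priority element (e.g., a danger-area blood vessel) is painted
# later and therefore covers lower-priority outlines (lesion, target organ).
import numpy as np

def refresh_display(masks_with_priority, shape):
    """masks_with_priority: list of (mask, priority, label_value) tuples,
    where a larger priority value means a higher priority."""
    canvas = np.zeros(shape, dtype=np.int32)
    for mask, _, label in sorted(masks_with_priority, key=lambda m: m[1]):
        canvas[mask] = label   # painted later = covering lower priorities
    return canvas
```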
  • Figure 16 is an exemplary result diagram of a priority refresh display according to some embodiments of this specification.
  • Figure 16(a) shows the segmented liver outline
  • Figure 16(b) shows the segmented lesion outline
  • Figure 16(c) shows the segmented blood vessel outline. From the above description, it can be seen that blood vessels have the highest priority, lesions have the second priority, and liver has the lowest priority. This is because the organ where the lesion is located is the target organ, that is, the liver is used as the target organ, and the target organ can be regarded as a penetrable area by default. Comparing Figure 16(a) and (b), the lesion outline covers the liver outline. Comparing Figure 16(b) and (c), the blood vessel outline covers the lesion outline.
  • sub-step 233 is to register the first segmented image and the second segmented image to determine the spatial position of the third target structure set.
  • the third target structure set is a complete set of structures obtained after registering the first segmented image and the second segmented image.
  • the third set of target structures may include the target organ (eg, target organ), blood vessels within the target organ, lesions, and other areas/organs (eg, non-interventional areas, all vital organs).
• in the fast segmentation mode, other regions/organs may refer to non-interventionable areas; in the precise segmentation mode, other regions/organs may refer to all important organs.
  • at least one element in the third target structure set is included in the first target structure set, and at least one element in the third target structure set is not included in the second target structure set.
  • the first target structure set includes blood vessels, target organs, and lesions within the target organ
  • the second target structure set includes inaccessible areas (or all important organs), target organs, and lesions
• the blood vessels within the target organ are included in the first target structure set and are not included in the second target structure set.
  • the fourth target structure set can also be regarded as part of the third target structure set, for example, the inaccessible area and all important organs outside the target organ.
• the first segmented image (such as the segmented image of the preoperative first target structure set obtained by segmenting the first medical image and/or the pre-recommended enhanced image) may include the precise structural features of the first target structure set (e.g., the blood vessels in the preoperative target organ, the preoperative target organ, and preoperative lesions); the second segmented image (i.e., the segmented image of the intraoperative second target structure set obtained by segmenting the second medical image) may include the precise structural features of the second target structure set (e.g., the intraoperative target organ, intraoperative lesions, and intraoperative inaccessible areas/all vital organs).
  • the first segmented image and the second segmented image may be subjected to separation processing of the appearance features of the target structure set and the background.
• the separation processing of appearance features and background can be based on artificial neural networks (linear decision functions, etc.), threshold-based segmentation methods, edge-based segmentation methods, image segmentation methods based on cluster analysis (such as K-means), or any other feasible algorithm, such as segmentation methods based on wavelet transform.
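• As one of the separation options listed above, a minimal two-cluster K-means on pixel grayscale values might look like the following sketch (illustrative only; a real system would typically operate on 3D volumes or use trained models):

```python
import numpy as np

def kmeans_separate(image, n_iter=20):
    """Separate appearance features from background with a 2-cluster
    K-means on pixel grayscale values. Returns a boolean foreground mask."""
    pixels = image.astype(float).ravel()
    # Initialize the two cluster centers at the min and max grayscale.
    centers = np.array([pixels.min(), pixels.max()])
    for _ in range(n_iter):
        # Assign each pixel to its nearest center.
        labels = np.abs(pixels[:, None] - centers[None, :]).argmin(axis=1)
        for k in range(2):
            if np.any(labels == k):
                centers[k] = pixels[labels == k].mean()
    # Treat the brighter cluster as the structure (foreground).
    return labels.reshape(image.shape) == centers.argmax()
```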
• the registration process is illustratively described below, taking as an example that the first segmented image includes the blood vessels in the preoperative target organ and the structural features of the preoperative target organ (i.e., the first target structure set includes the blood vessels in the target organ and the target organ), and the second segmented image includes the structural features of the intraoperative target organ, intraoperative lesions, and intraoperative non-interventional areas/all important organs (i.e., the second target structure set includes the target organ, intraoperative lesions, and non-interventional areas/all important organs).
• the structural features of the lesion are not limited to being included in the second segmented image; in other embodiments, the structural features of the lesion may also be included in the first segmented image, or included in both the first segmented image and the second segmented image.
• Figure 17 is an exemplary flowchart of a process of registering a first segmented image and a second segmented image shown in some embodiments of this specification. As shown in Figure 17, step 233 may include the following sub-steps:
  • Step 2331 Register the first segmented image and the second segmented image to determine the registration deformation field.
  • Registration may be an image processing operation that uses spatial transformation to make the corresponding points of the first segmented image and the second segmented image consistent in spatial position and anatomical position.
  • the registration deformation field can be used to reflect the spatial position changes of the first segmented image and the second segmented image.
• the second medical image can undergo spatial position transformation based on the registration deformation field, so that the transformed second medical image is consistent with the first medical image and/or the pre-recommended enhanced image in spatial position and anatomical position.
  • Figure 20 is an exemplary demonstration diagram of obtaining the first segmented image and the second segmented image after segmentation as shown in some embodiments of this specification.
  • the process of registering the first segmented image and the second segmented image and determining the registration deformation field may include the following sub-steps:
• Sub-step 23311: Determine the first preliminary deformation field based on the registration between elements.
  • the elements may be element outlines of the first segmented image and the second segmented image (eg, organ outlines, blood vessel outlines, lesion outlines).
  • the registration between elements may refer to the registration between image areas covered by the element outlines (mask).
• the pre-recommended enhanced image in Figure 20 is segmented to obtain the image area covered by the organ outline A of the target organ (the area with the same or basically the same grayscale in the dotted-line area of the lower-left image), and the second medical image is segmented to obtain the image area covered by the organ outline B of the target organ (the area with the same or basically the same grayscale in the dotted-line area of the lower-right image). The first preliminary deformation field (deformation field 1 in Figure 19) is obtained through regional registration between the image area covered by organ outline A and the image area covered by organ outline B.
  • the first preliminary deformation field may be a local deformation field.
  • the local deformation field about the liver contour is obtained through the liver preoperative contour A and the intraoperative contour B.
  • Sub-step 23312 Determine the second preliminary deformation field of the entire image based on the first preliminary deformation field between elements.
  • the full image can be a region-wide image containing elements.
  • the full image can be an image of the entire abdominal cavity.
  • the full image can be an image of the entire chest cavity.
  • the second preliminary deformation field of the entire image may be determined through interpolation based on the first preliminary deformation field.
  • the second preliminary deformation field may be a global deformation field.
  • the deformation field 2 of the full image size is determined by interpolation of the deformation field 1.
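• A possible way to extend a local deformation field (deformation field 1) to a full-image field (deformation field 2) by interpolation is sketched below with SciPy; the 2D simplification and function name are assumptions:

```python
import numpy as np
from scipy.interpolate import griddata

def extend_deformation_field(local_field, organ_mask):
    """Interpolate a local deformation field (valid only inside the organ
    outline) to a full-image deformation field, as in sub-step 23312.

    local_field: (H, W, 2) displacement vectors, valid where organ_mask is True.
    """
    h, w = organ_mask.shape
    yy, xx = np.mgrid[0:h, 0:w]
    known = np.column_stack([yy[organ_mask], xx[organ_mask]])
    targets = np.column_stack([yy.ravel(), xx.ravel()])
    full = np.zeros((h, w, 2))
    for c in range(2):  # interpolate y- and x-displacements separately
        full[..., c] = griddata(known, local_field[..., c][organ_mask],
                                targets, method="linear",
                                fill_value=0.0).reshape(h, w)
    return full
```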
  • Sub-step 23313 Deform the floating image based on the second preliminary deformation field of the entire image, and determine the registration map of the floating image.
  • the floating image may be an image to be registered, for example, a first medical image (such as a pre-recommended enhanced image) and a second medical image.
  • the floating image is the second medical image.
• the second medical image can be transformed through the registration deformation field so as to be consistent with the spatial position of the pre-recommended enhanced image.
• alternatively, the pre-recommended enhanced image is registered to the second medical image: the pre-recommended enhanced image can be transformed through the registration deformation field so that the spatial position of the pre-recommended enhanced image is consistent with the second medical image.
  • the registration map of the floating image may be the image of the intermediate registration result obtained during the registration process.
  • the registration map of the floating image may be the intermediate second medical image obtained during the registration process.
  • the embodiment of this description takes the registration of a first medical image (such as a pre-recommended enhanced image) to a second medical image as an example to describe the registration process in detail.
• the floating image (that is, the pre-recommended enhanced image) is deformed to determine the registration map of the first medical image, that is, the intermediate registration result toward the second medical image.
  • the pre-recommended enhanced image is deformed and its registration map is obtained.
  • Sub-step 23314 Register the registration map of the floating image with the area in the first grayscale difference range in the reference image to obtain the third preliminary deformation field.
  • the reference image refers to the target image before registration, which may also be called the target image without registration.
  • the reference image refers to the second medical image without registration operation.
  • the third preliminary deformation field may be a local deformation field.
• sub-step 23314 can be implemented in the following manner: perform pixel grayscale calculations on different areas of the registration map of the floating image and the reference image to obtain corresponding grayscale values, and calculate the grayscale difference between the registration map of the floating image and the reference image; when the difference value is within the first grayscale difference range, it may mean that the difference between an area in the registration map of the floating image and the corresponding area in the reference image is relatively small.
  • the first grayscale difference range is 0 to 150
  • the grayscale difference between area Q1 in the registration map of the floating image and the same area in the reference image is 60
• the grayscale difference between area Q2 in the registration map of the floating image and the same area in the reference image is 180; then the difference in area Q1 of the two images (i.e., the registration map of the floating image and the reference image) is small, while the difference in area Q2 is large, and only area Q1 in the two images is registered.
• elastic registration is performed on the registration map of the floating image and the area in the reference image that conforms to the first grayscale difference range (the area where the difference is not too large) to obtain the deformation field 3 (that is, the third preliminary deformation field mentioned above).
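• The grayscale-difference gating of sub-steps 23314 and 23316 could be sketched as follows, assuming a simple block-wise partition of the images (the patent does not specify how areas are delimited, so block size and threshold are illustrative):

```python
import numpy as np

def split_regions_by_gray_difference(moving_reg, reference, block=32, threshold=150):
    """Classify image blocks into the first grayscale difference range
    (difference below the threshold, registered in sub-step 23314) and the
    second range (difference above the threshold, handled in sub-step 23316)."""
    h, w = reference.shape
    first, second = [], []
    for y in range(0, h, block):
        for x in range(0, w, block):
            a = moving_reg[y:y + block, x:x + block].astype(float)
            b = reference[y:y + block, x:x + block].astype(float)
            diff = abs(a.mean() - b.mean())  # per-region grayscale difference
            (first if diff <= threshold else second).append((y, x, diff))
    return first, second
```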
• Sub-step 23315: Based on the third preliminary deformation field, determine the fourth preliminary deformation field of the entire image.
  • interpolation is performed to obtain a fourth preliminary deformation field of the entire image.
  • the fourth preliminary deformation field may be a global deformation field.
• this step can be used to extend the local third preliminary deformation field into the global fourth preliminary deformation field.
  • the deformation field 4 of the full image size is determined by interpolation of the deformation field 3.
  • Sub-step 23316 Based on the fourth preliminary deformation field, register the area in the second grayscale difference range to obtain a final registration map.
  • the area of the second grayscale difference range may be an area where the grayscale value difference between the registration map grayscale value of the floating image and the reference image grayscale value is larger.
  • a grayscale difference threshold can be set (for example, the grayscale difference threshold is 150).
• the area where the difference between the grayscale value of the registration map of the floating image and the grayscale value of the reference image is less than the grayscale difference threshold belongs to the first grayscale difference range, and the area where the difference is greater than the grayscale difference threshold belongs to the second grayscale difference range.
• the final registration map may be obtained by deforming the floating image (for example, the pre-recommended enhanced image) multiple times based on at least one deformation field, so that it has the same spatial position and anatomical position as the second medical image.
• by registering the area in the second grayscale difference range (that is, the area where the grayscale difference is relatively large), elements that are segmented in the floating image but not segmented in the reference image can be mapped from the floating image to the reference image.
• taking the floating image as the pre-recommended enhanced image and the reference image as the second medical image as an example: the blood vessels in the target organ are segmented in the pre-recommended enhanced image but not in the second medical image; through registration, the segmented blood vessels within the target organ can be mapped to the second medical image.
• the registration method of Figures 18-19 can also be used for the registration of inaccessible areas in the fast segmentation mode and of all important organs in the precise segmentation mode, or a similar effect can be achieved only through the corresponding segmentation method.
  • Step 2332 Determine the spatial position of the corresponding element in the second medical image based on the registered deformation field and the spatial position of at least some elements in the first target structure set in the pre-recommended enhanced image.
  • the spatial position of the blood vessels in the target organ during surgery (hereinafter referred to as blood vessels) may be determined based on the registered deformation field and the blood vessels in the target organ in the pre-recommended enhanced image.
• the spatial position of the blood vessel during surgery can be determined based on the registered deformation field and the blood vessel in the pre-recommended enhanced image according to formula (1): (x′, y′, z′) = (x, y, z) + u(x, y, z)  (1), where (x′, y′, z′) represents the three-dimensional spatial coordinates of the blood vessel in the final registration map
  • I Q represents the pre-recommended enhanced image
  • (x, y, z) represents the three-dimensional spatial coordinates of the blood vessel
  • u(x, y, z) represents the registration deformation field from the pre-recommended enhanced image to the second medical image
  • u(x, y, z) can also be understood as the offset of the three-dimensional coordinates of elements in the floating image (for example, blood vessels in the target organ) to the three-dimensional coordinates in the final registered registration map.
• the blood vessels in the pre-recommended enhanced image can be deformed through the registration deformation field determined in step 2331, generating the spatial position of the blood vessels during the operation that corresponds to their preoperative spatial position.
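• A sketch of applying formula (1) to map a segmented vessel mask from the pre-recommended enhanced image into the registered (intraoperative) space, using SciPy resampling; the dense displacement-field representation is an assumption:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_with_deformation_field(volume, u):
    """Sample the floating image (e.g., the vessel mask from the
    pre-recommended enhanced image) at (x, y, z) + u(x, y, z) to obtain
    its counterpart in intraoperative space, per formula (1).

    volume: (D, H, W) array; u: (D, H, W, 3) displacement field."""
    zz, yy, xx = np.mgrid[0:volume.shape[0], 0:volume.shape[1], 0:volume.shape[2]]
    coords = np.array([zz + u[..., 0], yy + u[..., 1], xx + u[..., 2]])
    # order=0 keeps a binary mask binary; use order=1 for grayscale images.
    return map_coordinates(volume, coords, order=0)
```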
  • different elements may be segmented on the same or different pre-recommended enhanced images. Therefore, there may be multiple first segmented images generated by segmenting at least some elements in the first target structure set. Based on this, there are different implementations for registering the first segmented image and the second segmented image.
• multiple first segmented images respectively corresponding to multiple pre-recommended enhanced images can each be registered with the second segmented image to obtain multiple final registration maps, thereby determining the spatial position of corresponding elements during surgery.
• the first segmented images corresponding to the hepatic artery, hepatic portal vein, hepatic vein, and inferior vena cava can each be registered with the second segmented image to obtain four final registration maps, and the spatial positions of the hepatic artery, hepatic portal vein, hepatic vein, and inferior vena cava during surgery can be determined according to the four final registration maps.
  • the obtained multiple final registration maps can be fused to obtain a fused registration map, and then the spatial position of the corresponding element during the operation is determined based on the fused registration map.
• one of the plurality of first segmented images can be selected as the reference image, and the rest can be used as non-reference images. Further, each non-reference image is separately registered with the reference image, and then the reference image is registered with the second segmented image.
  • the first segmented image corresponding to the hepatic artery can be selected as the reference image
  • the first segmented images corresponding to the hepatic portal vein, hepatic vein, and inferior vena cava can be selected as the non-reference image.
• the reference image itself can determine the spatial position of the hepatic artery; furthermore, the reference image is registered with the second segmented image to determine the spatial positions of the hepatic artery, hepatic portal vein, hepatic vein, and inferior vena cava during surgery.
  • multiple first segmented images corresponding to multiple pre-recommended enhanced images can be fused to obtain a fused segmented image, and then the fused segmented image and the second segmented image can be registered to obtain the final registered image.
  • Figure 21 is an exemplary result diagram of fusion mapping of a multi-phase phase-enhanced image and a second medical image according to some embodiments of this specification.
  • (a) represents the segmentation result of the hepatic artery on the pre-recommended enhanced image of the corresponding phase
  • (b) represents the segmentation result of the hepatic portal vein on the pre-recommended enhanced image of the corresponding phase
  • (c) represents the segmentation result of the inferior vena cava on the pre-recommended enhanced image of the corresponding phase.
  • (d) represents the result mapped to the second medical image.
• the segmentation results of the elements in the pre-recommended enhanced images corresponding to multiple phases can be mapped to the second medical image, thereby determining the spatial position of the corresponding elements in the second medical image.
  • the contour correction tool can be used to modify the contours of the elements on the registration map.
  • the contour is modified, such as adding or subtracting, until it meets the clinical situation.
• it may also occur that the outlines of different elements exist at the same position in the registration map (that is, the outlines of different elements overlap).
  • priorities may be marked on at least part or all of the elements in the third target structure set, and the outlines of the elements may be refreshed and displayed according to the order of priorities.
• for element priority, please refer to other places in this specification (for example, Figure 16); details are not repeated here.
  • sub-step 234 is to generate an interventional surgery plan based on the spatial location of the third target structure set.
• the center point of the lesion can be calculated based on the determined spatial positions of the blood vessels and lesions (included in the second segmented image of the second medical image) during the operation, and a preoperative planning solution for the interventional surgery (e.g., puncture path planning) can be generated.
  • a safe area around the lesion and a potential needle insertion area can be generated.
  • the safe area around the lesion and the potential needle insertion area can be determined based on the determined interventional area and non-interventionable area.
  • a reference path from the percutaneous needle insertion point to the center point of the lesion can be planned based on the potential needle insertion area and basic obstacle avoidance constraints.
  • the basic obstacle avoidance constraints may include but are not limited to the needle insertion angle of the path, the needle insertion depth of the path, the path not intersecting with blood vessels and important organs, etc.
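• A minimal sketch of checking one candidate path against such basic obstacle-avoidance constraints; the thresholds, the angle convention (against the first/axial coordinate), and the helper name are illustrative, not from the original:

```python
import numpy as np

def path_satisfies_constraints(entry, target, forbidden_mask, spacing,
                               max_depth_mm=150.0, max_angle_deg=30.0):
    """Check a candidate path from a skin entry point to the lesion center:
    needle depth, needle angle, and no intersection with vessels or
    important organs (forbidden_mask). Coordinates are voxel indices;
    spacing converts them to millimetres."""
    entry, target = np.asarray(entry, float), np.asarray(target, float)
    vec = (target - entry) * np.asarray(spacing, float)  # physical direction (mm)
    depth = np.linalg.norm(vec)
    if depth > max_depth_mm:
        return False
    angle = np.degrees(np.arccos(abs(vec[0]) / (depth + 1e-9)))
    if angle > max_angle_deg:
        return False
    # Sample voxels along the path; reject if any lies in a forbidden area.
    for t in np.linspace(0.0, 1.0, 200):
        z, y, x = np.round(entry + t * (target - entry)).astype(int)
        if forbidden_mask[z, y, x]:
            return False
    return True
```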
• the process from loading bookmarks to generating an interventional surgical plan (i.e., the process of sub-steps 231 to 234) may also be referred to as the preoperative planning stage.
  • Figure 22 is a schematic diagram of element outline extraction using a first outline tool according to some embodiments of this specification.
  • Figure 23 is a schematic diagram of element outline extraction using a second outline tool according to some embodiments of this specification.
  • an outline drawing tool can be used to draw different types of lines on the corresponding segmented image to extract the element outline.
  • the segmented image may include any one or more of a first segmented image, a second segmented image, a fused segmented image, a registration map, etc.
  • the target structure set may include any one or more of the first target structure set, the second target structure set, the third target structure set, and the fourth target structure set.
  • the first contour drawing tool can be used to draw contour lines on the segmented image; based on the contour lines, at least part of the element contours in the target structure set are extracted.
  • the first contouring tool may be a manual contouring tool.
  • the first contour drawing tool can be used to quickly locate the lesion area. Specifically, the user can use the first contour drawing tool to draw contour lines on both sides (for example, above and below) of the lesion on the segmented image (as shown in Figure 22(a)).
• the algorithm can determine the area between the two contour lines as the lesion area and extract the lesion outline (as shown in Figure 22(b)).
• the first contour drawing tool can also be used to draw a larger contour line on the segmented image so that the lesion is included in the area surrounded by the contour line, and then the lesion outline is further extracted within the contour area. This method can make the lesion contour extraction more accurate and reduce the complexity of the algorithm.
• Figure 23(a) represents a lesion line segment drawn using the second contour drawing tool
• Figure 23(b) represents a lesion outline generated according to the algorithm result.
  • a second contour drawing tool can be used to draw a line segment on the segmented image; based on the line segment, at least part of the element contours in the target structure set are extracted.
  • the second contouring tool may be a semi-automatic contouring tool.
  • the second contour drawing tool can be used to quickly locate the lesion area.
  • the user can use the second contour drawing tool to draw a line segment on the segmented image, and the preoperative planning system passes the coordinates of the two endpoints of the line segment to the algorithm to generate the lesion outline in the image, and displays it in the image.
  • the line segment drawn by the second contour drawing tool can represent the long diameter of the region, and a circle can be drawn using the long diameter as the diameter to obtain a circular region, and further, the lesion outline can be extracted in the circular region.
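• The circular search region of the second contour drawing tool can be sketched as follows, treating the drawn line segment as the diameter (an illustrative helper assuming 2D pixel coordinates):

```python
import numpy as np

def circle_roi_from_segment(p1, p2, shape):
    """Build the circular region of the second contour drawing tool:
    the drawn line segment is the long diameter, and the circle with that
    diameter bounds the area in which the lesion outline is extracted."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    center = (p1 + p2) / 2.0
    radius = np.linalg.norm(p2 - p1) / 2.0
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    return (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
```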
• the contour drawing tools can also be used to extract the contours of elements in the target structure set other than the lesion, such as blood vessels, target organs, other important organs, and/or non-interventionable areas.
  • the contour drawing tool can include a blood vessel marking tool.
  • the blood vessel marking tool can be used to perform marking operations near blood vessels. After the marking is confirmed, the preoperative planning system can pass the marking coordinates to the algorithm to generate a blood vessel outline.
• the contour correction tool can be used to correct at least part of the element contours in the target structure set, for example, to enlarge and/or modify a contour until the element contour matches the clinical situation. For example, if the lesion outline generated by the algorithm is smaller than the clinical lesion outline (it can also be understood that the current lesion outline cannot cover the actual lesion area), the contour correction tool can be used to enlarge the lesion outline, that is, to expand the scope of the lesion outline so that the revised lesion outline better matches the actual lesion area.
  • Figure 24 is an exemplary block diagram of an image processing system for interventional surgery according to some embodiments of the present specification.
• the image processing system 2400 for interventional surgery may include a segmentation module 2410 for acquiring a plurality of first medical images and second medical images of the scanned object at different stages, wherein at least two of the first medical images correspond to different time points of the same scan object, and for segmenting at least some elements in the first target structure set based on the plurality of first medical images to generate a first segmented image.
  • the registration module 2420 segments at least some elements in the second target structure set based on the second medical image to generate a second segmented image; and registers the first segmented image and the second segmented image.
  • the intervention path planning module 2430 is configured to determine an intervention path from the needle insertion point on the skin surface to the target area based on the registered second medical image and/or the second segmented image.
  • different time points include at least two different phases.
• determining the pre-recommended enhanced image based on multiple first medical images may include: automatically identifying the multiple first medical images; and determining phases corresponding to the multiple first medical images based on the results of automatic identification.
  • FIG. 25 is a schematic flowchart of a registration optimization method for medical images according to some embodiments of this specification.
  • Step 2510 Obtain the registration error between the first segmented image and the second segmented image. In some embodiments, this step may be performed by registration error acquisition module 2901.
• a set of first segmented images and second segmented images with unknown spatial differences can be initially registered, for example, rigid automatic registration and elastic automatic registration are performed on the first segmented image and the second segmented image, and then the registration error between the first segmented image and the second segmented image regarding the initial registration is obtained.
  • Step 2520 If the registration error does not meet the preset conditions, the first segmented image and the second segmented image are optimally registered through the rigid body registration matrix determined by the registration matrix adjustment process. In some embodiments, this step may be performed by registration optimization module 2903.
• the rigid body registration matrix is determined through the registration matrix adjustment process, which specifically includes the following steps: determining the registration elements used in the registration matrix adjustment process; if the positions of the registration elements in the first segmented image and the second segmented image are obtained, obtaining a rigid body registration matrix based on the positions of the registration elements in the first segmented image and the second segmented image; if the positions of the registration elements in the first segmented image and the second segmented image are not obtained, obtaining a rigid body registration matrix based on the translation operation or rotation operation on the first segmented image or the second segmented image.
  • the above steps may be performed by the registration matrix determination module 2902.
• in the registration matrix adjustment process, it may be necessary to use the positions of certain organs in the image to determine the rigid body registration matrix; these organs can be called registration elements.
  • the organ used as the registration element can be the same as the organ used as the key segmentation organ.
• for example, the lung can be used as both the key segmentation organ and the registration element; of course, the organ used as the registration element can also be different from the organ used as the key segmentation organ.
• in the registration matrix adjustment process, the rigid body registration matrix can be determined through (1) manual adjustment or (2) automatic adjustment.
• the user can be prompted to perform manual adjustment, performing a translation operation or rotation operation on the first segmented image, or performing a translation operation or rotation operation on the second segmented image.
  • the system can provide 2D/3D manual registration tools and 2D/3D fusion display, and prompt the adjustment direction of the image in the 2D/3D manual registration tool.
• the user can adjust the first segmented image and/or the second segmented image in 2D/3D form according to the prompted adjustment direction, so that the first segmented image and the second segmented image are roughly aligned in 2D/3D form. Users can also adjust the first segmented image and/or the second segmented image in 2D/3D form in any direction to meet user requirements.
  • the rigid body registration matrix is obtained based on the translation operation or rotation operation performed by the user.
• for the automatic adjustment method: when the registration element is a preset human tissue or organ with obvious anatomical feature points, such as the liver, lung, or kidney, the automatic adjustment method can be recommended by default. Feature point pairs are automatically extracted by the algorithm; users can modify or confirm the extracted corresponding point pairs on the 2D/3D interface (2D original CT scan image, 3D reconstructed image with segmentation results); the rigid body registration matrix is then automatically calculated and the images automatically aligned, prompting the user for confirmation. After the user confirms, manual fine-tuning can also be performed; for manual fine-tuning methods, see (1) manual adjustment.
  • the first segmented image and the second segmented image will be optimally registered through the rigid body registration matrix determined by the registration matrix adjustment process. In this way, through the optimization of registration processing, the registration results can be improved.
• when determining the rigid body registration matrix through the registration matrix adjustment process: if the positions of the registration elements in the first segmented image and the second segmented image are obtained, the rigid body registration matrix is obtained based on the positions of the registration elements in the first segmented image and the second segmented image; if the positions of the registration elements in the first segmented image and the second segmented image are not obtained, the rigid body registration matrix is obtained according to the translation operation or rotation operation on the first segmented image or the second segmented image. In the above method, depending on whether the positions of the registration elements used in the registration matrix adjustment process are obtained in the images, different strategies can obtain rigid body registration matrices with higher applicability.
• the rigid body registration matrix can be obtained through the following steps: obtain the feature points of the registration elements in the first segmented image and the second segmented image to obtain feature point pairs; based on the position of the registration element in the first segmented image and the second segmented image, obtain the position of each feature point included in the feature point pairs in the respective image; based on the position of each feature point included in the feature point pairs in the respective image, obtain the rigid body registration matrix.
  • the rigid body registration matrix is automatically determined based on the position of each feature point included in the feature point pair in the respective image, thereby improving the efficiency of optimized registration.
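• One standard way to compute a rigid body registration matrix from matched feature point pairs is the least-squares/SVD (Kabsch) solution sketched below; the patent does not mandate this particular solver:

```python
import numpy as np

def rigid_matrix_from_point_pairs(src, dst):
    """Estimate a 4x4 rigid body registration matrix from matched feature
    point pairs. src, dst: (N, 3) corresponding points in the first and
    second segmented images."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)        # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T
```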
• in the registration matrix adjustment process, the rigid body registration matrix can also be determined through (3) semi-automatic adjustment.
• if the positions of the registration element in the first segmented image and the second segmented image are obtained, the first centroid position of the registration element in the first segmented image and the second centroid position in the second segmented image are determined; based on the relative position relationship between the first centroid position and the second centroid position, the image movement direction is prompted on the fusion display interface; and the rigid body registration matrix is obtained according to the translation operation or rotation operation on the first segmented image or the second segmented image.
  • the image movement direction can be calculated based on the centroid position of the registration element in the first segmented image and the second segmented image in 3D form.
• the calculation formula is: d(x, y, z) = m_r(x, y, z) - m_m(x, y, z), and arrows are used to prompt the user to adjust the direction on the first segmented image and the second segmented image; where m_r(x, y, z) represents the centroid coordinates of the registration element in the first segmented image, and m_m(x, y, z) represents the centroid coordinates of the registration element in the second segmented image.
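• A sketch of this centroid-based direction prompt, computing m_r and m_m from binary masks of the registration element (the helper name is hypothetical):

```python
import numpy as np

def movement_direction(first_mask, second_mask):
    """Compute the adjustment direction prompted to the user: the vector
    d = m_r - m_m from the registration element's centroid in the second
    segmented image to its centroid in the first segmented image."""
    m_r = np.array(np.nonzero(first_mask), float).mean(axis=1)
    m_m = np.array(np.nonzero(second_mask), float).mean(axis=1)
    return m_r - m_m  # displayed as an arrow on the fusion interface
```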
• after obtaining the positions of the registration element in the first segmented image and the second segmented image, the user does not need to be prompted for the movement direction; the user can directly perform a translation operation or rotation operation on the first segmented image, or directly perform a translation operation or rotation operation on the second segmented image, to obtain the rigid body registration matrix.
• if the positions of the registration element in the first segmented image and the second segmented image are obtained, the rigid body registration matrix is obtained according to the position of each feature point included in the feature point pairs in the respective image, together with the translation operation or rotation operation on the first segmented image or the second segmented image.
  • the semi-automatic adjustment method can be recommended by default.
• the user can select one or more corresponding feature point pairs of mutually corresponding anatomical structures on the first segmented image and the second segmented image in 2D/3D form.
• the rigid body registration matrix is then automatically calculated and the images automatically aligned, prompting the user for confirmation.
  • manual fine-tuning can also be performed. For manual fine-tuning methods, see (1) Manual adjustment.
• obtaining the feature points of the registration elements in the first segmented image and the second segmented image to obtain feature point pairs may include the following steps: use a preset feature extraction algorithm to extract feature points of the registration elements from the first segmented image and the second segmented image; if the preset feature extraction algorithm extracts the corresponding feature points, a feature point pair is formed; if the preset feature extraction algorithm does not extract the corresponding feature points, a feature point pair is formed according to the feature point selection operation on the first segmented image and the second segmented image.
• the first segmented image and the second segmented image are directly input into the preset feature extraction algorithm to extract feature points; if the preset feature extraction algorithm fails to extract the feature points, the user manually selects the feature points, and feature point pairs are formed based on the feature points selected by the user.
• obtaining the feature points of the registration elements in the first segmented image and the second segmented image to obtain feature point pairs may also include the following steps: if the registration element is an organ for which the feature extraction algorithm is applicable, use the feature extraction algorithm to extract the feature points of the registration elements from the first segmented image and the second segmented image; if the registration element is not an organ for which the feature extraction algorithm is applicable, form feature point pairs based on the feature point selection operation on the first segmented image and the second segmented image.
• that is, it is first determined whether the registration element is an organ for which the feature point extraction algorithm is applicable; if so, the first segmented image and the second segmented image are input into the feature point extraction algorithm; otherwise, the user manually selects the feature points.
• an initial registration may be performed on a set of first segmented images and second segmented images with unknown spatial differences, for example, rigid automatic registration and elastic automatic registration may be performed on the first segmented image and the second segmented image, and then the registration error between the first segmented image and the second segmented image regarding the initial registration is obtained.
• obtaining the registration error between the first segmented image and the second segmented image specifically includes the following steps: performing initial registration on the first segmented image and the second segmented image, and obtaining the areas where the key segmentation organs used in the segmentation process are located.
• the segmentation process is carried out before surgery and before registration (including initial registration). During the segmentation process, organs in the medical image are segmented; these segmented organs are called key segmentation organs. In some surgeries, the segmentation process always segments the target organ of the surgery: for example, in chest surgery, the lung is the target organ of the surgery, so the segmentation process always segments the lung. Therefore, in some scenarios, the key segmentation organs may include the target organ of the surgery.
• taking the lung as the key segmentation organ as an example: rigid automatic registration and elastic automatic registration are performed on the first segmented image and the second segmented image. After the initial registration is completed, the area where the lung is located in the first segmented image after initial registration and the area where it is located in the second segmented image after initial registration can be determined; then, the registration error is obtained based on the degree of coincidence of the lung between the areas of the two images after initial registration. The higher the degree of coincidence, the smaller the registration error.
• obtaining the registration error between the first segmented image and the second segmented image may include the following steps: take the area where the key segmentation organ is located in the first segmented image after initial registration as the first area; take the area where the key segmentation organ is located in the second segmented image after initial registration as the second area; determine the intersection area between the first area and the second area; take the sum of the number of pixels in the first area and the number of pixels in the second area as the total number of pixels; and obtain the degree of coincidence between the areas according to the proportion of the number of pixels in the intersection area to the total number of pixels.
• the registration error e can be expressed, for example, as e = 1 - 2·M_r∩m / (M_r + M_m); where M_r represents the number of pixels located in the first area, M_m represents the number of pixels located in the second area, and M_r∩m represents the number of pixels located in the intersection area.
• when the registration error e is greater than the preset threshold, it can be confirmed that the registration error does not meet the preset conditions; at this time, the registration process can be optimized. In addition, when the registration error is greater than the preset threshold, the user can also be prompted that "the current registration result has a large error, and it is recommended to optimize the registration to improve the registration effect".
  • the preset threshold can be selected in a preset interval.
  • the preset interval can be left open and right closed, for example (0, 0.10].
  • 0.01, 0.02, 0.03, 0.04, 0.05 can be set.
  • 0.06, 0.07, 0.08, 0.09, 0.10 these 10 values are available for users to choose.
  • the registration error is obtained by calculating the coincidence degree of the segmented key organs in the image area, and the registration effect is quantified in order to optimize the registration.
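• A sketch of this quantification, assuming the Dice-style expression e = 1 - 2·M_r∩m / (M_r + M_m) given above:

```python
import numpy as np

def registration_error(first_area, second_area):
    """Quantify the registration effect from the coincidence degree of the
    key segmentation organ in the two images after initial registration.
    first_area, second_area: boolean masks of the organ in each image."""
    m_r = int(first_area.sum())                      # pixels in the first area
    m_m = int(second_area.sum())                     # pixels in the second area
    m_inter = int((first_area & second_area).sum())  # intersection pixels
    coincidence = 2.0 * m_inter / (m_r + m_m)
    return 1.0 - coincidence

# e.g., optimize the registration while registration_error(...) > 0.05
# (a user-selectable threshold in the interval (0, 0.10]).
```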
• optimally registering the first segmented image and the second segmented image through the rigid body registration matrix determined by the registration matrix adjustment process may include the following steps: input the rigid body registration matrix determined through the registration matrix adjustment process into the elastic automatic registration algorithm, and use the resulting elastic automatic registration algorithm to optimize the registration of the first segmented image and the second segmented image; if the registration error obtained after the optimized registration does not meet the preset conditions, continue the optimized registration until the registration error after the optimized registration meets the preset conditions.
• the rigid body registration matrix is input into the elastic automatic registration algorithm, and the first segmented image and the second segmented image are optimally registered using the resulting elastic automatic registration algorithm; if the registration error obtained after the optimized registration does not meet the preset conditions, a new rigid body registration matrix can be obtained through the registration matrix adjustment process, and the optimized registration continues until the registration error after optimized registration satisfies the preset conditions.
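• The optimize-until-threshold loop described above might be organized as follows; all callables are placeholders for the system's own matrix-adjustment and elastic registration algorithms, not a definitive implementation:

```python
def optimize_registration(first_img, second_img, error_fn, elastic_register,
                          adjust_matrix, threshold=0.05, max_rounds=10):
    """Iteration sketch: obtain a rigid body registration matrix from the
    registration matrix adjustment process, feed it into the elastic
    automatic registration algorithm, and repeat until the registration
    error meets the preset condition (or a round limit is reached)."""
    result = second_img
    for _ in range(max_rounds):
        rigid = adjust_matrix(first_img, result)       # matrix adjustment process
        result = elastic_register(first_img, result, init=rigid)
        if error_fn(first_img, result) <= threshold:   # preset condition met
            break
    return result
```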
• the method provided by this application further includes the following steps: fusion-display the first segmented image and the second segmented image on a fusion display interface, and display at least one of an upward panning control, a downward panning control, a leftward panning control, and a rightward panning control on the fusion display interface; based on the selection operation of at least one of the panning controls, a translation operation on the first segmented image or the second segmented image is obtained.
  • fusion display can be performed according to the original spatial pose; the display methods include 2D fusion display and 3D fusion display.
• Figure 26 is a schematic diagram of the spatial alignment effect of key organs according to some embodiments of the present specification.
• the first segmented image and the second segmented image can be fused and displayed on the display screen using a preset fusion method; the preset fusion methods include at least one of the upper-and-lower-layer fusion method, the dividing-line fusion method, and the checkerboard fusion method.
• at least one of an upward panning control, a downward panning control, a leftward panning control, and a rightward panning control may be displayed; when the user selects the corresponding panning control, the first segmented image or the second segmented image can be translated according to the direction and distance corresponding to that panning control.
  • the method provided by the embodiments of this specification further includes the following steps: fusion displaying the first segmented image and the second segmented image on a fusion display interface, and displaying a counterclockwise rotation control on the fusion display interface and at least one of the clockwise rotation controls; based on the selection operation of at least one of the rotation controls, a rotation operation on the first segmented image or the second segmented image is obtained.
  • the method provided by the embodiments of this specification may further include the following steps: fusion displaying the first segmented image and the second segmented image on a fusion display interface, and displaying a ring area on the fusion display interface ; Based on the dragging and rotating operation on the ring area, a rotation operation on the first segmented image or the second segmented image is obtained.
  • the ring area refers to the gray ring area shown in Figure 27E.
• the setting of the rotation center specifically includes the following steps: display the rotation center icon at a preset position on the image, where the image is the first segmented image or the second segmented image; when the cursor moves to the preset position and the mouse is pressed and dragged, adjust the position of the rotation center icon to follow the cursor position; take the cursor position when the mouse is released as the adjusted position of the rotation center icon; and, taking the adjusted position of the rotation center icon as the rotation center, obtain a rotation operation on the first segmented image or the second segmented image based on the drag-rotation operation on the ring area.
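• For the ring-area rotation about a user-adjusted center, the 2D homogeneous transform can be built as below (a sketch assuming pixel coordinates and an (x, y) center):

```python
import numpy as np

def rotation_about_center(angle_deg, center):
    """Build the 2D homogeneous transform for the ring-area drag rotation:
    rotate by angle_deg around the adjusted rotation center icon."""
    a = np.radians(angle_deg)
    c, s = np.cos(a), np.sin(a)
    cx, cy = center
    # translate center to origin -> rotate -> translate back
    return (np.array([[1, 0, cx], [0, 1, cy], [0, 0, 1]]) @
            np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]]) @
            np.array([[1, 0, -cx], [0, 1, -cy], [0, 0, 1]]))
```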
  • the interventional surgery robot system generally performs preoperative CT enhanced scans and intraoperative plain CT scans.
  • Preoperative images obtained from preoperative enhanced CT scans are used to provide anatomical information related to organs, blood vessels, and tumors.
  • Intraoperative images obtained from plain intraoperative CT scans are used for path planning and surgical navigation during interventional surgeries. To ensure the accuracy and safety of intraoperative navigation, it is necessary to register the anatomical information obtained before surgery into intraoperative images.
• this registration faces several difficulties: 1. the preoperative image has been contrast-enhanced, so when it is registered with the intraoperative image there is information asymmetry; 2. preoperative images are used for doctor diagnosis and have a large scanning range, while intraoperative images focus on the puncture site and have a small scanning range, so there is a mismatch in the registration range; 3. due to changes in the environment and the patient's body, and the influence of soft tissue drift caused by respiratory movement, the spatial difference between preoperative images and intraoperative images is large.
  • the preoperative image is used as the first segmented image
  • the intraoperative image is used as the second segmented image
• a set of first segmented images and second segmented images with unknown spatial differences are rigidly automatically registered and elastically automatically registered to complete the initial registration; according to the areas where the key organs are located in the first segmented image after initial registration and in the second segmented image after initial registration, the registration error is obtained to quantify whether the initial registration result meets user requirements. If the initial registration result does not meet the user's requirements, three methods are provided to obtain the rigid body registration matrix for optimized registration. Simultaneous 2D image fusion display and 3D image fusion display allow users to view the effect of optimized registration in real time. After the user adjusts and confirms, the rigid body registration matrix is obtained and used as the input of the elastic automatic registration algorithm, and elastic automatic registration is performed again to improve the accuracy and robustness of medical image registration.
  • This application example mainly includes the following parts:
• Registration result display and evaluation: the first segmented image and the second segmented image after initial registration are fused and displayed, and the registration error is obtained based on the positions of the key segmentation organs in the first segmented image and the second segmented image to quantify the initial registration effect; the effect is automatically evaluated based on the registration error, or the user is prompted to conduct a subjective evaluation based on the registration error, to determine whether the current registration effect meets the user's requirements.
  • the manual adjustment method is to use the translation and rotation buttons provided by the registration tool to adjust the pose of the first segmented image or the second segmented image
• the semi-automatic adjustment method is to adjust the pose of the first segmented image or the second segmented image with the help of feature point pairs selected on the first segmented image and the second segmented image in 2D/3D form; the automatic adjustment method is to automatically extract feature point pairs of anatomical structures and adjust the pose of the first segmented image or the second segmented image.
  • Figure 28 is a schematic flowchart of another registration optimization method for medical images according to some embodiments of this specification. The above parts will be explained in detail below with reference to Figure 28.
• Step 2801, load images: load the first segmented image and the second segmented image, and obtain the areas where the key segmentation organs are located in the first segmented image and the second segmented image;
  • Step 2802 rigid automatic registration: perform rigid automatic registration based on the gray value registration algorithm on the first segmented image and the second segmented image to obtain a rigid registration result;
  • Step 2803 elastic automatic registration: perform elastic registration on the first segmented image and the second segmented image in the rigid registration result obtained in step 2802, and obtain the elastic registration result;
  • the initial registration can be regarded as completed.
  • Step 2804 display and evaluation of registration results:
• the first segmented image after initial registration and the second segmented image after initial registration are fused and displayed in different pseudo-colors; according to the contour deviation of the key segmentation organs in the first segmented image and the second segmented image, users can subjectively evaluate the effect of initial registration.
• the registration error e can also be calculated, for example, as e = 1 - 2·M_r∩m / (M_r + M_m); where M_r represents the number of pixels located in the first area, M_m represents the number of pixels located in the second area, and M_r∩m represents the number of pixels located in the intersection area.
• when the registration error e is greater than the preset threshold, it can be confirmed that the registration error does not meet the preset conditions; at this time, the registration process can be optimized. In addition, when the registration error is greater than the preset threshold, the user can also be prompted that "the current registration result has a large error, and it is recommended to optimize the registration to improve the registration effect".
• Step 2805: if the registration result does not meet the user's requirements (for example, the fully automatic registration process focuses on the effect of global registration, while the user may pay more attention to the registration results of the key segmentation organs, so the fully automatic registration algorithm cannot meet the user's requirements), provide a rigid manual registration tool, prompt the adjustment direction of the registration image, and enter the registration matrix adjustment process; see steps 2806 to 2808 for details.
  • Step 2806 fusion display: obtain the positions of the segmented key organs in the first segmented image and the second segmented image, and perform fusion display according to the original spatial pose.
  • Display modes include 2D fusion display and 3D fusion display.
  • 2D fusion can display the original superposition effect of the first segmented image and the second segmented image.
  • the second segmented image can be displayed in pseudo-color.
  • the 3D fusion display can visually display the spatial alignment effect of key organs.
  • Step 2807 Observe the alignment effect of the segmented key organs in the first segmented image and the second segmented image.
  • Step 2808, 2D and 3D manual registration tool prompts the image adjustment direction.
  • the image movement direction can be calculated based on the centroid position of the registration element in the first segmented image and the second segmented image in 3D form.
• the calculation formula is: d(x, y, z) = m_r(x, y, z) - m_m(x, y, z), and arrows are used to prompt the user to adjust the direction on the first segmented image and the second segmented image; where m_r(x, y, z) represents the centroid coordinates of the registration element in the first segmented image, and m_m(x, y, z) represents the centroid coordinates of the registration element in the second segmented image.
  • the methods for determining the rigid body registration matrix include (1) manual adjustment, (2) automatic adjustment, and (3) semi-automatic adjustment:
• the user can be prompted to perform manual adjustment, performing a translation operation or rotation operation on the first segmented image, or performing a translation operation or rotation operation on the second segmented image.
  • the computer equipment can provide a 2D/3D manual registration tool and a 2D/3D fusion display, and the 2D/3D manual registration tool prompts the adjustment direction of the image.
• the user can adjust the first segmented image and/or the second segmented image in 2D/3D form according to the prompted adjustment direction, so that the first segmented image and the second segmented image are substantially aligned in 2D/3D form.
  • Users can also adjust the first segmented image and/or the second segmented image in 2D/3D form in any direction to meet user requirements.
  • the rigid body registration matrix is obtained based on the translation operation or rotation operation performed by the user.
  • the automatic adjustment method can be recommended by default.
• Feature point pairs are automatically extracted by the algorithm; users can modify or confirm the extracted corresponding point pairs on the 2D/3D interface (2D original CT scan image, 3D reconstructed image with segmentation results); the rigid body registration matrix is then automatically calculated and the images automatically aligned, prompting the user for confirmation. After the user confirms, manual fine-tuning can also be performed; for manual fine-tuning methods, see (1) manual adjustment.
• when the registration elements are preset human tissues or organs with smooth boundaries and no obvious anatomical feature points, for example the prostate or gallbladder, the semi-automatic adjustment method can be recommended by default.
• the user can select one or more corresponding feature point pairs of mutually corresponding anatomical structures on the first segmented image and the second segmented image in 2D/3D form.
• the rigid body registration matrix is then automatically calculated and the images automatically aligned, prompting the user for confirmation.
  • manual fine-tuning can also be performed. For manual fine-tuning methods, see (1) Manual adjustment.
  • Step 2809 input the rigid registration matrix obtained in step 2808 into the elastic automatic registration algorithm of step 2803, and perform elastic automatic registration again;
  • Step 2810 Repeat steps 2803 to 2809 until the registration effect meets the user's requirements, save the elastic registration result, and the registration ends.
  • Figure 29 is a structural block diagram of a registration module according to some embodiments of this specification.
  • the registration module 2440 may include: a registration error acquisition module 2901, a registration matrix determination module 2902, and a registration optimization module 2903.
  • Registration error acquisition module 2901 is used to acquire the registration error between the first segmented image and the second segmented image.
  • the registration matrix determination module 2902 is used to determine the registration elements used in the registration matrix adjustment process; if the positions of the registration element in the first segmented image and the second segmented image are obtained, the rigid body registration matrix is obtained based on those positions; if the positions of the registration element in the first segmented image and the second segmented image are not obtained, the rigid body registration matrix is obtained according to a translation operation or rotation operation performed on the first segmented image or the second segmented image;
  • the registration optimization module 2903 is used to optimize the registration of the first segmented image and the second segmented image through the rigid body registration matrix determined by the registration matrix adjustment process if the registration error does not meet the preset conditions.
  • the registration matrix determination module 2902 is also used to obtain the feature points of the registration element in the first segmented image and the second segmented image to form feature point pairs; to obtain, based on the positions of the registration element's feature points in the first segmented image and the second segmented image, the position of each feature point included in the feature point pairs in its respective image; and to obtain the rigid body registration matrix based on the position of each feature point in its respective image.
  • the registration matrix determination module 2902 is also used to extract feature points of the registration elements from the first segmented image and the second segmented image using a preset feature extraction algorithm; if the preset feature extraction algorithm extracts the corresponding feature points, feature point pairs are formed; if the preset feature extraction algorithm does not extract the corresponding feature points, feature point pairs are formed based on feature point selection operations on the first segmented image and the second segmented image.
  • the registration matrix determination module 2902 is also used to perform feature point extraction for the registration elements on the first segmented image and the second segmented image using the feature extraction algorithm if the registration element is an organ for which the feature extraction algorithm is applicable; if the registration element is not an organ for which the feature extraction algorithm is applicable, feature point pairs are formed based on feature point selection operations on the first segmented image and the second segmented image.
  • the registration matrix determination module 2902 is also used to, if the positions of the registration element in the first segmented image and the second segmented image are obtained, determine the first centroid position of the registration element in the first segmented image and the second centroid position in the second segmented image, prompt the image movement direction on the fusion display interface according to the relative positional relationship between the first centroid position and the second centroid position, and obtain the rigid body registration matrix according to the translation operation or rotation operation performed on the first segmented image or the second segmented image;
  • the registration error acquisition module 2901 is also used to perform initial registration of the first segmented image and the second segmented image, obtain the areas where the key organs segmented during the segmentation stage are located in the first segmented image and in the second segmented image after the initial registration, and obtain the registration error between the first segmented image and the second segmented image based on the degree of coincidence between these areas.
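  • As one concrete and merely illustrative choice for the degree of coincidence, the Dice overlap of the segmented key organs can be turned into a registration error, e.g.:

```python
import numpy as np

def registration_error(organs_first: dict, organs_second: dict) -> float:
    """Registration error as 1 minus the mean Dice coefficient over the key organs.
    organs_first / organs_second: organ name -> 3D binary mask, both taken after
    the initial registration."""
    dice_scores = []
    for organ, m1 in organs_first.items():
        m2 = organs_second[organ]
        intersection = np.logical_and(m1, m2).sum()
        denom = m1.sum() + m2.sum()
        dice_scores.append(2.0 * intersection / denom if denom > 0 else 1.0)
    return 1.0 - float(np.mean(dice_scores))
```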
  • the registration optimization module 2903 is also used to input the rigid body registration matrix determined through the registration matrix adjustment process into the elastic automatic registration algorithm, and to use the resulting elastic automatic registration algorithm to perform optimized registration of the first segmented image and the second segmented image; if the registration error obtained after the optimized registration does not meet the preset conditions, the optimized registration continues until the registration error after the optimized registration meets the preset conditions.
  • Each module in the above-mentioned registration module can be implemented in whole or in part by software, hardware and combinations thereof.
  • Each of the above modules may be embedded in or independent of the processor of the computer device in the form of hardware, or may be stored in the memory of the computer device in the form of software, so that the processor can call and execute the operations corresponding to the above modules.
  • Figure 30 is a flowchart of an exemplary puncture path planning method according to some embodiments of this specification.
  • process 3000 may be performed by processing device 140.
  • the process 3000 may be stored in a storage device (eg, the storage device 150, a storage unit of the processing device 140) in the form of a program or instructions, and when the processor executes the program or instructions, the process 3000 may be implemented.
  • the puncture path planning method of process 3000 can be executed by an image processing system.
  • the image processing system can include: a display device for displaying the puncture path and displaying at least one option of a manual mode, an automatic mode, and a semi-automatic mode; and a control system including memory and one or more processors adapted to cause the various steps of process 3000 to be performed.
  • process 3000 may utilize one or more additional operations not described below, and/or be completed without one or more operations discussed below.
  • the order of operations shown in FIG. 30 is not limiting.
  • Step 3010 Determine the target point based on the spatial position of the corresponding element in the registered second medical image. In some embodiments, this step may be performed by target determination module 3710.
  • the target point refers to the position point related to the target area.
  • the target point may be a location in the body reached by a puncture instrument (eg, a biopsy needle) penetrating the skin, a location where minimally invasive surgery occurs, etc.
  • the target point may be a center point or a point close to the center of the lesion to be treated or the area to be detected.
  • the target point may be any point within the lesion to be treated or the area to be detected.
  • the target point may be a location point in the lesion to be treated or the area to be detected that satisfies preset conditions, for example, the point with the maximum gray value, the point with the minimum gray value, a point with a gray value greater than a gray threshold, a point with a gray value less than the gray threshold, etc.
  • the processing device 140 may automatically determine the target point based on the registered second medical image of the scanned object (hereinafter also referred to as the target image for ease of description). For example, the processing device 140 may automatically take the center of a lesion to be treated or an area to be detected as the target point. For another example, the processing device 140 can automatically determine, according to the preset conditions, the corresponding position point as the target point.
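  • The following sketch illustrates two of the automatic strategies above, assuming the target area is available as a binary mask over the target image (all names are illustrative):

```python
import numpy as np

def auto_target_point(target_image: np.ndarray, target_mask: np.ndarray,
                      mode: str = "center") -> tuple:
    """Pick a target point inside the target area: the voxel closest to the mask
    centroid ("center"), or the voxel with the maximum gray value ("max_gray")."""
    coords = np.argwhere(target_mask > 0)
    if mode == "center":
        c = coords.mean(axis=0)
        return tuple(coords[np.argmin(np.linalg.norm(coords - c, axis=1))])
    if mode == "max_gray":
        return tuple(coords[np.argmax(target_image[tuple(coords.T)])])
    raise ValueError(f"unknown mode: {mode}")
```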
  • the processing device 140 may determine the target point based on user operations.
  • user operations may include clicking (eg, mouse clicks, sensor pen clicks, touch screen touches, etc.) operations, sensing operations (eg, gesture sensing, sound sensing, etc.), user instructions (eg, code instructions, voice instructions, etc.), etc. or any combination thereof.
  • the processing device 140 may determine candidate target points based on user operations. For example, the user can click (eg, mouse click) on a position point in the target image through the display interface. Accordingly, based on the user's click operation, the processing device 140 may determine the location clicked by the user as a candidate target point.
  • the processing device 140 may further determine whether the candidate target point meets the first preset condition.
  • the first preset condition may include that the target point is within the target area, the target point is located at the center of the target area, the target point is not located within the blood vessel, etc., or any combination thereof.
  • if the candidate target point does not meet the first preset condition, the processing device 140 may provide a first prompt.
  • the first prompt can be provided in the form of text, symbols, pictures, audio, video, etc.
  • for example, as shown in Figure 32, the display interface prompts "T point is not within the target area!".
  • the processing device 140 may re-execute the above process until a target point that meets the first preset condition is determined.
  • if the candidate target point meets the first preset condition, the processing device 140 may determine the candidate target point as the final available target point.
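  • A minimal sketch of checking the first preset condition against the segmentation masks (the condition subset and names are illustrative):

```python
import numpy as np

def meets_first_preset_condition(point, target_mask: np.ndarray,
                                 vessel_mask: np.ndarray) -> bool:
    """The candidate target point must lie within the target area and must not lie
    within a blood vessel; if this returns False, a first prompt such as
    "T point is not within the target area!" can be shown and the selection
    re-executed."""
    p = tuple(int(v) for v in point)
    return bool(target_mask[p]) and not bool(vessel_mask[p])
```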
  • Step 3020 Determine the reference path based on the user's operations related to the target point. In some embodiments, this step may be performed by reference path determination module 3720.
  • the processing device 140 may determine the reference path in a semi-automatic manner.
  • the end point of the reference path may be open. "Open" can mean that there is no restriction on the end point, and it can be any point within the target image.
  • the user's operation related to the target point may include a drag operation starting from the target point.
  • the line segment or ray formed by the drag operation is the reference path.
  • the drag direction can be any direction.
  • the drag length may be any length within the target image.
  • the end point of the drag operation can be any point within the target image.
  • for example, in Figure 33, 3310, 3320, 3330, 3340 and 3350 respectively represent different reference paths formed by different drag directions or different drag lengths.
  • the processing device 140 may display the reference path synchronously during the drag operation.
  • the line segments or rays formed by the dragging operation can be displayed simultaneously on the display interface.
  • the "+" in the figure The number represents the current cursor position point (that is, the real-time end point of the user's drag operation).
  • the corresponding reference path 3350 is synchronously displayed on the display interface.
  • the processing device 140 may determine the reference path through other means, for example, a manual method. In some embodiments, the user can set the starting point and the end point of the reference path on the display interface respectively. Correspondingly, the processing device 140 may determine the line connecting the two as the reference path.
  • the processing device 140 can also determine the reference path and the target path in an automatic manner.
  • the target point is automatically determined based on the contour of the target area, for example, the coordinates of the target point are automatically calculated.
  • At least one candidate needle insertion point is automatically determined on the skin surface.
  • each candidate needle insertion point is connected to the target point to form a candidate path, and the distance between the candidate path and each dangerous area (e.g., non-intervention zone) is determined. That is, the candidate needle insertion points and the target point are connected to form N candidate paths, and the distance between each candidate path and each dangerous area is calculated in turn.
  • the target path is determined based on the depth of the candidate path, the angle of the candidate path, and the distance between the candidate path and the danger zone. For example, it is comprehensively judged whether the depth of the candidate path meets the target distance limit, whether the angle is within the preset range, and whether the distance between the candidate path and the danger zone is lower than the preset safety distance, etc., to determine the final target path.
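  • A sketch of this automatic judgment is given below; the value ranges are the illustrative ones from this description, the axis convention of the angle is an assumption of the sketch, and the danger-distance computation is sketched separately after the discussion of the third preset condition:

```python
import numpy as np

def candidate_paths(target, skin_points):
    """Connect each candidate needle insertion point on the skin surface to the
    target point, yielding N candidate paths as (entry, target) pairs."""
    return [(tuple(entry), tuple(target)) for entry in skin_points]

def depth_and_angle_ok(entry, target, depth_range=(20.0, 120.0),
                       angle_range=(10.0, 80.0)) -> bool:
    """Check the skin target distance (mm) and the angle (degrees) between the TE
    vector and the positive Y axis of the transverse plane; the axis order
    (x, y, z) is an assumption of this sketch."""
    te = np.asarray(target, float) - np.asarray(entry, float)
    depth = float(np.linalg.norm(te))
    if depth == 0.0:
        return False
    angle = float(np.degrees(np.arccos(np.clip(te[1] / depth, -1.0, 1.0))))
    return depth_range[0] <= depth <= depth_range[1] and \
           angle_range[0] <= angle <= angle_range[1]
```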
  • Step 3030 Determine the target path based on the reference path. In some embodiments, this step may be performed by target path determination module 3730.
  • the processing device 140 may automatically determine candidate needle entry points based on the reference path, with the candidate needle entry points located at the skin interface. In some embodiments, the processing device 140 may use the intersection point of the reference path (or its extension) and the skin as a candidate needle entry point. For example, as shown in Figure 34, the processing device 140 determines corresponding candidate needle entry points 3410, 3420, 3430, 3440 and 3450 based on the reference paths 3310, 3320, 3330, 3340 and 3350 respectively.
  • the reference path is a line segment or ray formed by the user's drag operation, and the reference path is displayed simultaneously during the drag process. Accordingly, automatically determining the candidate needle entry point based on the reference path can be understood as automatically determining the intersection point of the line segment or ray and the skin when the user's drag operation ends (for example, the moment the user releases the drag operation).
  • the processing device 140 may determine a candidate path based on the target point and the candidate needle entry point. For example, a line segment connecting the candidate needle entry point and the target point is used as the candidate path. Further, the processing device 140 may determine the target path based on the candidate paths.
  • the processing device 140 can determine the distance between the candidate needle insertion point and the target point (hereinafter referred to as the "skin target distance") and/or the angle of the candidate path, and determine whether the distance and/or angle satisfy the second preset condition.
  • the angle can refer to the angle between the TE vector formed by the target point T and the needle entry point E and the positive direction of the transverse Y-axis. In some embodiments, the angle may be a positive angle, a negative angle, an acute angle, an obtuse angle, etc.
  • the second preset condition may include a preset range of target distance, a preset range of angle, etc., or any combination thereof.
  • for example, the preset range of the skin target distance is 2 cm-12 cm, and the preset range of the angle is 10 degrees-80 degrees.
  • the second preset condition can be set based on experience or needs. For example, different types of target areas, different target objects, etc. may correspond to different second preset conditions.
  • the second preset condition may be set by the user. For example, the user inputs the length of the puncture instrument (eg, biopsy needle), the preset range of the skin target distance, etc.
  • if the distance and/or angle do not satisfy the second preset condition, the processing device 140 may provide a second prompt.
  • the second prompt can be provided in the form of text, symbols, pictures, audio, video, etc.
  • for example, as shown in Figure 35, the display interface displays "The length is greater than the maximum threshold and puncture cannot be performed". Further, the processing device 140 may re-execute the above process until a candidate path that satisfies the second preset condition is determined.
  • the processing device 140 may further determine whether the candidate path satisfies the third preset condition.
  • the third preset condition may include that the distance between the candidate path and the danger zone (hereinafter referred to as "danger distance") is greater than the distance threshold.
  • for example, the third preset condition may be that the danger distance is greater than 2 mm.
  • the third preset condition can be set based on experience or needs. For example, different types of danger zones, different target objects, different application scenarios, etc. can correspond to different third preset conditions. For example, for DBS surgery, the danger distance is 1mm; for SEEG surgery, the danger distance is 1.6mm; for needle biopsy system, the danger distance is 2mm.
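  • A sketch of the danger-distance check, modeling the candidate path as a line segment and the danger zones as point sets; the per-application thresholds are the ones quoted above, and all names are illustrative:

```python
import numpy as np

# Illustrative danger-distance thresholds (mm) per application, from the text.
DANGER_THRESHOLD_MM = {"DBS": 1.0, "SEEG": 1.6, "needle_biopsy": 2.0}

def min_danger_distance(danger_points, entry, target) -> float:
    """Smallest distance from any danger-zone point ((N, 3) array) to the segment
    from the needle entry point to the target point."""
    a = np.asarray(entry, float)
    b = np.asarray(target, float)
    pts = np.atleast_2d(np.asarray(danger_points, float))
    ab = b - a
    t = np.clip((pts - a) @ ab / (ab @ ab), 0.0, 1.0)  # projection onto the segment
    closest = a + t[:, None] * ab
    return float(np.linalg.norm(pts - closest, axis=1).min())

def meets_third_preset_condition(danger_points, entry, target,
                                 application="needle_biopsy") -> bool:
    return min_danger_distance(danger_points, entry, target) > \
        DANGER_THRESHOLD_MM[application]
```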
  • if the candidate path does not satisfy the third preset condition, the processing device 140 may provide a third prompt.
  • the third prompt can be provided in the form of text, symbols, pictures, audio, video, etc. For example, as shown in Figure 36, it prompts "Needle path 1 interferes with bone tissue, please adjust the path.” Further, the processing device 140 may re-execute the above process until a candidate path that satisfies the third preset condition is determined.
  • the processing device 140 may further determine the target path based on the candidate path.
  • the processing device 140 may randomly select one candidate path as the target path from the candidate paths that satisfy both the second preset condition and the third preset condition. In some embodiments, the processing device 140 may, based on filtering conditions, select a candidate path that satisfies the filtering conditions as the target path from the candidate paths that satisfy both the second preset condition and the third preset condition. In some embodiments, the filtering conditions may include that the safe distance is greater than the first threshold (or the safe distance is the largest), the target distance is less than the second threshold (or the target distance is the smallest), the angle is greater than the third threshold (or the angle is the largest), etc., or any combination thereof.
  • the processing device 140 may also adjust the target path based on user feedback. For example, a user (eg, a doctor) can fine-tune the angle of the target path, fine-tune the position of the needle entry point, etc. based on experience.
  • It should be noted that the above description of the process is only for example and explanation, and does not limit the scope of application of this specification. For those skilled in the art, various modifications and changes can be made to the process under the guidance of this description, and such modifications and changes remain within the scope of this specification.
  • the execution order of the judgment step based on the second preset condition and the judgment step based on the third preset condition is not restrictive; for example, they can be executed at the same time, the judgment step based on the second preset condition can be executed first and then the judgment step based on the third preset condition, or the judgment step based on the third preset condition can be executed first and then the judgment step based on the second preset condition.
  • the image processing system 100 for interventional surgery may provide options such as manual mode, automatic mode, semi-automatic mode, etc. on the display interface (for example, the display interface of the terminal device 130) for the user to select.
  • Figure 31 is a flowchart of another exemplary puncture path planning method according to some embodiments of this specification.
  • process 3100 may be performed by processing device 140.
  • the process 3100 may be stored in a storage device (eg, the storage device 150, a storage unit of the processing device 140) in the form of a program or instructions, and when the processor executes the program or instructions, the process 3100 may be implemented.
  • process 3100 may utilize one or more additional operations not described below, and/or be completed without one or more operations discussed below.
  • the order of operations shown in FIG. 31 is not limiting.
  • Step 3102 Obtain the reference image of the target object.
  • the target object may refer to a scan object.
  • the reference image may be the medical image obtained in the first stage.
  • the reference image may include the first medical image.
  • Step 3104 Perform image segmentation on the reference image of the target object. For example, based on the image segmentation algorithm, the target area, the penetrable area and/or the dangerous area are segmented and respectively identified.
  • Step 3106 Obtain the current image of the target object.
  • the current image may be a medical image obtained in the second stage.
  • the current image may include a second medical image.
  • Step 3108 Fusion of the current image and the reference image to determine the target image.
  • Step 3110 Determine the target point in the target image, and draw a reference path based on the user's operations related to the target point. For example, the user takes the target point as the starting point and performs a drag operation; during the drag operation, the reference path is displayed synchronously. In some embodiments, it can also be determined whether the initially determined candidate target point meets the target conditions (for example, the first preset condition described above) to determine the final target point.
  • Step 3112 Automatically determine candidate needle entry points. For example, the intersection point between the reference path formed by the drag operation and the skin is automatically used as a candidate needle entry point. Obtain the candidate path consisting of the candidate needle entry point and the target point. Steps 3114 and 3118 are executed. In some embodiments, steps 3114 and 3118 can be performed in parallel or serially, and this specification does not limit this.
  • Step 3114 Determine whether the candidate path satisfies the safe distance condition (for example, the third preset condition mentioned above).
  • Step 3116 If the candidate path does not meet the safe distance condition, the user may be prompted to adjust the target point or redraw the reference path.
  • Step 3118 Determine whether the candidate path satisfies the target distance condition and/or angle condition (for example, the second preset condition mentioned above).
  • If the candidate path satisfies the safe distance condition, and the candidate path satisfies the target distance condition and/or angle condition, the candidate path is used as the target path.
  • Step 3120 If the candidate path does not meet the second preset condition, the user may be prompted to adjust the target point or redraw the reference path.
  • the image segmentation methods, registration methods, etc. should not be limited to preoperative processing or intraoperative processing.
  • the segmentation method described herein can segment the first medical image and the second medical image to obtain corresponding segmented images.
  • the registration method described herein can register the first segmented image and the second segmented image; or it can also register the first medical image and the second medical image; or it can also register two first medical images.
  • This specification provides an image processing device for interventional surgery, including a processor, and the processor is used to execute the image processing method.
  • This specification provides a computer-readable storage medium.
  • the storage medium stores computer instructions; after a computer reads the computer instructions in the storage medium, the computer executes the image processing method.
  • the image processing method, system, device and computer-readable storage medium for interventional surgery provided by the embodiments of this specification have at least the following beneficial effects:
  • the preoperative preparations can be quickly completed and the surgical stage can be entered, which can effectively improve the efficiency and error tolerance rate of preoperative planning and reduce patient waiting time, thereby improving the safety of surgery;
  • the blood vessel (or lesion, organ) segmentation results can be made more refined and accurate, thereby improving the safety and stability of interventional surgery;
  • the target structure set area is accurately retained while false positive areas are effectively eliminated, which improves the accuracy of element positioning in the coarse positioning stage and directly contributes to the subsequent reasonable extraction of the bounding box from the element mask positioning information, thereby improving segmentation efficiency;
  • adaptive sliding window calculation and the corresponding sliding window operation can be used to complete the missing parts of the positioning area, and reasonable sliding window operations can be automatically planned and executed; this reduces the dependence of the fine segmentation stage on the coarse positioning results and improves segmentation accuracy without significantly increasing segmentation time and computing resources.
  • this application uses specific words to describe the embodiments of the application.
  • "one embodiment", "an embodiment", and/or "some embodiments" mean a certain feature, structure or characteristic related to at least one embodiment of the present application. Therefore, it should be emphasized and noted that "one embodiment" or "an embodiment" or "an alternative embodiment" mentioned twice or more in different places in this specification does not necessarily refer to the same embodiment.
  • certain features, structures or characteristics in one or more embodiments of the present application may be appropriately combined.
  • aspects of the present application may be illustrated and described in several patentable categories or circumstances, including any new and useful process, machine, product, or composition of matter, or any new and useful improvement thereof. Accordingly, various aspects of the present application may be executed entirely by hardware, entirely by software (including firmware, resident software, microcode, etc.), or by a combination of hardware and software.
  • the above hardware or software may be referred to as "data block”, “module”, “engine”, “unit”, “component” or “system”.
  • aspects of the present application may be embodied as a computer product including computer-readable program code located on one or more computer-readable media.
  • Computer storage media may contain a propagated data signal embodying computer program code, for example, in baseband or as part of a carrier wave.
  • the propagated signal may have multiple manifestations, including electromagnetic form, optical form, etc., or a suitable combination.
  • Computer storage media may be any computer-readable media other than computer-readable storage media that enables communication, propagation, or transfer of a program for use in connection with an instruction execution system, apparatus, or device.
  • Program code located on a computer storage medium may be transmitted via any suitable medium, including radio, electrical cable, fiber optic cable, RF, or similar media, or a combination of any of the foregoing.
  • the computer program coding required for the operation of each part of this application can be written in any one or more programming languages, including object-oriented programming languages such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, Python etc., conventional procedural programming languages such as C language, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, ABAP, dynamic programming languages such as Python, Ruby and Groovy, or other programming languages.
  • the program code may run entirely on the user's computer, as a stand-alone software package, partially on the user's computer and partially on a remote computer, or entirely on the remote computer or processing device.
  • the remote computer can be connected to the user's computer via any form of network, such as a local area network (LAN) or a wide area network (WAN), or to an external computer (e.g., via the Internet), or used in a cloud computing environment, or as software as a service (SaaS).
  • numbers are used to describe the quantities of components and properties. It should be understood that such numbers used to describe the embodiments are modified in some examples by the modifiers "about", "approximately" or "substantially". Unless otherwise stated, "about", "approximately" or "substantially" means that the stated number is allowed to vary by ±20%. Accordingly, in some embodiments, the numerical parameters used in the specification and claims are approximations that may vary depending on the desired features of the individual embodiment. In some embodiments, numerical parameters should account for the specified number of significant digits and use general digit preservation methods. Although the numerical ranges and parameters used to confirm the breadth of the ranges in some embodiments of the present application are approximations, in specific embodiments, such numerical values are set as accurately as feasible.


Abstract

An image processing method for interventional surgery, comprising a segmentation stage (210, 220). The segmentation stage (210, 220) includes: acquiring a plurality of first medical images, wherein at least two of the first medical images correspond to different time points of the same scan object; determining a pre-recommended enhanced image based on the plurality of first medical images; and acquiring an operation instruction, segmenting at least some elements in a first target structure set based on the operation instruction and the pre-recommended enhanced image, and generating a first segmented image.

Description

一种用于介入手术的图像处理方法、系统和装置
交叉引用
本申请要求于2022年03月11日提交的申请号为CN202210241912.2、2022年08月11日提交的申请号为CN202210963424.2、2022年06月30日提交的申请号为CN202210764217.4的中国申请的优先权,其全部内容通过引用并入本文。
技术领域
本说明书涉及图像处理领域,具体涉及一种用于介入手术的图像处理方法、系统和装置。
背景技术
在术前医疗图像处理中,需要在医学影像设备的辅助下,获取患者血管、病灶和器官等处的影像,实现术前规划。目前的介入手术规划系统的功能较为单一,例如仅支持平扫影像下单类靶器官(例如,胸部靶器官或腹部靶器官)的术前规划,这会导致过多的学习成本,风险规避性差,从而使得介入手术效果差,也大大限制了工作流的优化空间。
因此,如何提供一种更完善、更省时、更准确的介入手术的图像处理方法、系统和装置,以提高手术效率和手术安全性,就变得极为重要。
发明内容
本说明书实施例提供一种用于介入手术的图像处理方法,包括:分割阶段,所述分割阶段包括:获取多个第一医学影像,其中,至少有两个所述第一医学影像对应于同一扫描对象的不同时间点;基于多个所述第一医学影像,确定预推荐增强影像;获取操作指令,基于所述操作指令和所述预推荐增强影像对第一目标结构集中的至少部分元素进行分割,生成第一分割影像。
本说明书实施例提供一种用于介入手术的图像处理方法,包括:配准阶段,所述配准阶段包括:获取第一分割影像和第二分割影像之间的配准误差;若所述配准误差未满足预设条件,则通过配准矩阵调整流程确定的刚体配准矩阵,对所述第一分割影像和所述第二分割影像进行优化配准;其中,所述通过配准矩阵调整流程确定刚体配准矩阵,包括:确定所述配准矩阵调整流程所用的配准元素;若获取到所述配准元素在所述第一分割影像和所述第二分割影像中的位置,则基于所述配准元素在所述第一分割影像和所述第二分割影像中的位置,得到所述刚体配准矩阵;若未获取到所述配准元素在所述第一分割影像和所述第二分割影像中的位置,根据对所述第一分割影像或所述第二分割影像的平移操作或旋转操作,得到所述刚体配准矩阵。
本说明书实施例还提供一种用于介入手术的图像处理方法,所述图像处理方法包括:介入路径规划阶段,所述介入路径规划阶段包括:获取扫描对象的医学影像;基于所述医学影像确定靶点;基于用户与所述靶点相关的操作,确定参考路径,其中,所述参考路径的终点为开放式;基于所述参考路径确定目标路径。
本说明书实施例还提供一种用于介入手术的图像处理系统,包括:分割模块、配准模块和介入路径规划模块;所述分割模块用于:获取扫描对象在不同阶段的多个第一医学影像和第二医学影像;其中,至少有两个所述第一医学影像对应于同一扫描对象的不同时间点;基于所述多个第一医学影像对第一目标结构集中的至少部分元素进行分割,生成第一分割影像;所述配准模块用于:基于所述第二医学影像对第二目标结构集中的至少部分元素进行分割,生成第二分割影像;配准所述第一分割影像和所述第二分割影像;所述介入路径规划模块用于:基于配准后的所述第二分割影像和/或所述第二医学影像,确定由皮肤表面进针点到靶区之间的介入路径。
本说明书实施例还提供一种用于介入手术的图像处理装置,包括处理器,所述处理器用于执行本说明书任一实施例中所述的图像处理方法。
本说明书实施例还提供一种计算机可读存储介质,所述存储介质存储计算机指令,当计算机读取存储介质中的计算机指令后,计算机执行本说明书任一实施例中所述的图像处理方法。
附图说明
本说明书将以示例性实施例的方式进一步说明,这些示例性实施例将通过附图进行详细描 述。这些实施例并非限制性的,在这些实施例中,相同的编号表示相同的结构,其中:
图1是根据本说明书一些实施例所示的介入手术的图像处理系统的应用场景示意图;
图2是根据本说明书一些实施例所示的介入手术的图像处理方法的示例性流程图;
图3是根据本说明书一些实施例所示的血管在对应期相上的分割设置的示意图;
图4是根据本说明书一些实施例所示的确定预推荐增强影像的示例性流程图;
图5是根据本说明书一些实施例所示的确定介入手术方案的示例性流程图;
图6是根据本说明书一些实施例所示的不同分割模式下的组织分割及类别设定的示意图;
图7是根据本说明书一些实施例所示的介入手术的图像处理方法中涉及的分割过程的示例性流程图;
图8是根据本说明书一些实施例所示的确定元素掩膜的定位信息过程的示例性流程图;
图9是根据本说明书一些实施例所示的元素掩膜进行软连通域分析过程的示例性流程图;
图10是根据本说明书一些实施例所示的对元素掩膜进行软连通域分析的粗分割示例性效果对比图;
图11是根据本说明书一些实施例所示的对元素进行精准分割过程的示例性流程图;
图12-图13是根据本说明书一些实施例所示的元素掩膜的定位信息判断的示例性示意图;
图14A是根据本说明书一些实施例所示的基于元素掩膜的定位信息判断滑动方向的示例性示例图;
图14B-图14E是根据本说明书一些实施例所示的滑窗后进行精准分割的示例性示意图;
图15是根据本说明书一些实施例所示的分割结果的示例性效果对比图;
图16是根据本说明书一些实施例所示的优先级刷新显示的示例性结果图;
图17是本说明书一些实施例中所示的对第一分割影像与第二分割影像进行配准过程的示例性流程图;
图18-图19是根据本说明书一些实施例中所示的确定配准形变场过程的示例性流程图;
图20是根据本说明书一些实施例中所示的经过分割得到第一分割影像、第二分割影像的示例性演示图;
图21是根据本说明书一些实施例所示的多期相增强影像与第二医学影像融合映射的示例性结果图;
图22是根据本说明书一些实施例所示的利用第一轮廓工具进行元素轮廓提取的示意图;
图23是根据本说明书一些实施例所示的利用第二轮廓工具进行元素轮廓提取的示意图;
图24是根据本说明书一些实施例所示的用于介入手术的图像处理系统的示例性模块图;
图25是根据本说明书一些实施例所示的医学图像的配准优化方法的流程示意图;
图26是根据本说明书一些实施例所示的关键器官的空间对齐效果的示意图;
图27A是根据本说明书一些实施例所示的以上下层融合方式进行融合显示的示意图;
图27B是根据本说明书一些实施例所示的以竖直分割线融合方式进行融合显示的示意图;
图27C是根据本说明书一些实施例所示的以水平分割线融合方式进行融合显示的示意图;
图27D是根据本说明书一些实施例所示的以棋盘格融合方式进行融合显示的示意图;
图27E是根据本说明书一些实施例所示的圆环区域示意图;
图28是根据本说明书一些实施例所示的另一医学图像的配准优化方法的流程示意图;
图29是根据本说明书一些实施例所示的配准模块的结构框图;
图30是根据本说明书一些实施例所示的示例性穿刺路径规划方法的流程图;
图31是根据本说明书一些实施例所示的另一示例性穿刺路径规划方法的流程图;
图32是根据本说明书一些实施例所示的示例性提供第一提示的示意图;
图33是根据本说明书一些实施例所示的示例性拖拽操作的示意图;
图34是根据本说明书一些实施例所示的示例性确定候选进针点的示意图;
图35是根据本说明书一些实施例所示的示例性提供第二提示的示意图;
图36是根据本说明书一些实施例所示的示例性提供第三提示的示意图;
图37是根据本说明书一些实施例所示的路径分割模块的结构框图。
具体实施方式
为了更清楚地说明本说明书实施例的技术方案,下面将对实施例描述中所需要使用的附图作简单的介绍。显而易见地,下面描述中的附图仅仅是本说明书的一些示例或实施例,对于本领域 的普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图将本说明书应用于其它类似情景。除非从语言环境中显而易见或另做说明,图中相同标号代表相同结构或操作。
应当理解,本文使用的“系统”、“装置”、“单元”和/或“模块”是用于区分不同级别的不同组件、元件、部件、部分或装配的一种方法。然而,如果其他词语可实现相同的目的,则可通过其他表达来替换所述词语。
如本说明书和权利要求书中所示,除非上下文明确提示例外情形,“一”、“一个”、“一种”和/或“该”等词并非特指单数,也可包括复数。一般说来,术语“包括”与“包含”仅提示包括已明确标识的步骤和元素,而这些步骤和元素不构成一个排它性的罗列,方法或者设备也可能包含其它的步骤或元素。
本说明书中使用了流程图用来说明根据本说明书的实施例的系统所执行的操作。应当理解的是,前面或后面操作不一定按照顺序来精确地执行。相反,可以按照倒序或同时处理各个步骤。同时,也可以将其他操作添加到这些过程中,或从这些过程移除某一步或数步操作。
图1是根据本说明书一些实施例所示的介入手术的图像处理系统的应用场景示意图。
在一些实施例中,介入手术的图像处理系统100可以应用于多种介入手术/介入治疗。在一些实施例中,介入手术/介入治疗可以包括心血管介入手术、肿瘤介入手术、妇产科介入手术、骨骼肌肉介入手术或其他任何可行的介入手术,如神经介入手术等。在一些实施例中,介入手术/介入治疗可以包括经皮穿刺活检术、冠状动脉造影、溶栓治疗、支架置入手术或者其他任何可行的介入手术,如消融手术等。在一些实施例中,不同部位的介入手术的工作流可以集成到介入手术的图像处理系统100中,使得用户(例如,医生)在进行不同部位介入手术的规划时无需切换应用,只需加载对应部位的数据(例如,第一医学影像、预推荐增强影像、第一分割影像等)即可。不同部位的介入手术可以包括胸部-肺部、腹部-肝脏的介入手术。
介入手术的图像处理系统100可以包括医学扫描设备110、网络120、一个或多个终端130、处理设备140、存储设备150和机械臂160。介入手术的图像处理系统100中的组件之间的连接可以是可变的。如图1所示,医学扫描设备110可以通过网络120连接到处理设备140。又例如,医学扫描设备110可以直接连接到处理设备140,如连接医学扫描设备110和处理设备140的虚线双向箭头所指示的。再例如,存储设备150可以直接或通过网络120连接到处理设备140。作为示例,终端130可以直接连接到处理设备140(如连接终端130和处理设备140的虚线箭头所示),也可以通过网络120连接到处理设备140。
医学扫描设备110可以被配置为使用高能射线(如X射线、γ射线等)对扫描对象进行扫描以收集与扫描对象有关的扫描数据。扫描数据可用于生成扫描对象的一个或以上影像。在一些实施例中,医学扫描设备110可以包括超声成像(US)设备、计算机断层扫描(CT)扫描仪、数字放射线摄影(DR)扫描仪(例如,移动数字放射线摄影)、数字减影血管造影(DSA)扫描仪、动态空间重建(DSR)扫描仪、X射线显微镜扫描仪、多模态扫描仪等或其组合。在一些实施例中,多模态扫描仪可以包括计算机断层摄影-正电子发射断层扫描(CT-PET)扫描仪、计算机断层摄影-磁共振成像(CT-MRI)扫描仪。扫描对象可以是生物的或非生物的。仅作为示例,扫描对象可以包括患者、人造物体(例如人造模体)等。又例如,扫描对象可以包括患者的特定部位、器官和/或组织。
如图1所示,医学扫描设备110可以包括机架111、探测器112、检测区域113、工作台114和放射源115。机架111可以支撑探测器112和放射源115。可以在工作台114上放置扫描对象以进行扫描。放射源115可以向扫描对象发射放射线。探测器112可以检测从放射源115发出的放射线(例如,X射线)。在一些实施例中,探测器112可以包括一个或以上探测器单元。探测器单元可以包括闪烁探测器(例如,碘化铯探测器)、气体探测器等。探测器单元可以包括单行探测器和/或多行探测器。
网络120可以包括能够促进介入手术的图像处理系统100的信息和/或数据的交换的任何合适的网络。在一些实施例中,一个或多个介入手术的图像处理系统100的组件(例如,医学扫描设备110、终端130、处理设备140、存储设备150、机械臂160)可以通过网络120彼此交换信息和/或数据。例如,处理设备140可以经由网络120从医学扫描设备110获得影像数据。又例如,处理设备140可以经由网络120从终端130获得用户指令。
网络120可以是和/或包括公共网络(例如,因特网)、专用网络(例如,局部区域网络(LAN)、广域网(WAN)等)、有线网络(例如,以太网络、无线网络(例如802.11网络、Wi-Fi网络等)、蜂窝网络(例如长期演进(LTE)网络)、帧中继网络、虚拟专用网络(“VPN”)、卫星网络、电话网络、路由器、集线器、交换机、服务器计算机和/或其任何组合。仅作为示例,网 络120可以包括电缆网络、有线网络、光纤网络、电信网络、内联网、无线局部区域网络(WLAN)、城域网(MAN)、公用电话交换网络(PSTN)、蓝牙TM网络、ZigBee TM网络、近场通信(NFC)网络等或其任意组合。在一些实施例中,网络120可以包括一个或多个网络接入点。例如,网络120可以包括诸如基站和/或互联网交换点之类的有线和/或无线网络接入点,通过它们,介入手术的图像处理系统100的一个或多个组件可以连接到网络120以交换数据和/或信息。
终端130可以包括移动设备131、平板计算机132、膝上型计算机133等,或其任意组合。在一些实施例中,移动设备131可以包括智能家居设备、可穿戴设备、移动设备、虚拟现实设备、增强现实设备等或其任意组合。在一些实施例中,智能家居设备可以包括智能照明设备、智能电气设备的控制设备、智能监控设备、智能电视、智能摄像机、对讲机等或其任意组合。在一些实施例中,移动设备可能包括手机、个人数字助理(PDA)、游戏设备、导航设备、销售点(POS)设备、笔记本电脑、平板电脑、台式机等或其任何组合。在一些实施例中,虚拟现实设备和/或增强现实设备包括虚拟现实头盔、虚拟现实眼镜、虚拟现实眼罩、增强现实头盔、增强现实眼镜、增强现实眼罩等或其任意组合。例如,虚拟现实设备和/或增强现实设备可以包括Google GlassTM,Oculus RiftTM,HololensTM,Gear VRTM等。在一些实施例中,终端130可以是处理设备140的一部分。
处理设备140可以处理从医学扫描设备110、终端130和/或存储设备150获得的数据和/或信息。例如,处理设备140可以获取医学扫描设备110获取的数据,并利用这些数据进行成像生成医学影像(如第一医学影像、第二医学影像),并且对医学影像进行分割,生成分割结果数据(如第一分割影像、第二分割影像、配准图等)。再例如,处理设备140可以从终端130获取医学影像、分割模式数据(如快速分割模式数据、精准分割模式数据)、靶器官设置数据、期相设置数据和/或扫描协议。
在一些实施例中,处理设备140可以是单个服务器或服务器组。服务器组可以是集中式或分布式的。在一些实施例中,处理设备140可以是本地的或远程的。例如,处理设备140可以经由网络120访问存储在医学扫描设备110、终端130和/或存储设备150中的信息和/或数据。又例如,处理设备140可以直接连接到医学扫描设备110、终端130和/或存储设备150以访问存储的信息和/或数据。在一些实施例中,处理设备140可以在云平台上实现。
存储设备150可以存储数据、指令和/或任何其他信息。在一些实施例中,存储设备150可以存储从医学扫描设备110、终端130和/或处理设备140获得的数据。例如,存储设备150可以将从医学扫描设备110获取的医学影像数据(如第一医学影像、第二医学影像、第一分割影像、第二分割影像等等)和/或定位信息数据进行存储。再例如,存储设备150可以将从终端130输入的医学影像和/或扫描协议进行存储。再例如,存储设备150可以将处理设备140生成的数据(例如,医学影像数据、元素掩膜数据、定位信息数据、精准分割后的结果数据、手术中血管和病灶的空间位置、配准图等)进行存储。
在一些实施例中,存储设备150可以存储处理设备140可以执行或用于执行本说明书中描述的示例性方法的数据和/或指令。在一些实施例中,存储设备150包括大容量存储设备、可移动存储设备、易失性读写存储器、只读存储器(ROM)等或其任意组合。示例性大容量存储设备可以包括磁盘、光盘、固态驱动器等。示例性可移动存储设备可以包括闪存驱动器、软盘、光盘、内存卡、压缩盘、磁带等。示例性易失性读写存储器可以包括随机存取存储器(RAM)。示例性RAM可包括动态随机存取存储器(DRAM)、双倍数据速率同步动态访问存储器(DDR SDRAM)、静态随机存取存储器(SRAM)、晶闸管随机存取存储器(T-RAM)和零电容随机存取存储器(Z-RAM)等。示例性ROM可以包括掩膜式只读存储器(MROM)、可编程只读存储器(PROM)、可擦除可编程只读存储器(EPROM)、电可擦除可编程只读存储器(EEPROM)、光盘只读存储器(CD-ROM)和数字多功能磁盘重新分配存储器等。在一些实施例中,所述存储设备150可以在云平台上实现。
在一些实施例中,存储设备150可以连接到网络120以与介入手术的图像处理系统100中的一个或多个其他组件(例如,处理设备140、终端130)通信。介入手术的图像处理系统100中的一个或多个组件可以经由网络120访问存储在存储设备150中的数据或指令。在一些实施例中,存储设备150可以直接连接到介入手术的图像处理系统100中的一个或多个其他组件或与之通信(例如,处理设备140、终端130)。在一些实施例中,存储设备150可以是处理设备140的一部分。
机械臂160可以是能够模仿人类手臂的功能并实现自动控制的设备。在一些实施例中,机械臂160可以包括彼此连接的多关节结构,能够在平面或三维空间运动。在一些实施例中,机械臂160可以包括控制器、机械主体、感应器等。在一些实施例中,控制器可以设置机械主体的动作参 数(例如,轨迹、方向、角度、速度、力度等);机械主体可以精确地执行上述动作参数;感应器可以探测或感受外界的信号、物理条件(例如,光、热、湿度)或化学组成(例如,烟雾),并将探知的信息传递给控制器。在一些实施例中,机械臂160可以包括刚性机械臂、柔性机械臂、气动助力机械臂、软索助力机械臂、线性机械臂、水平多关节机械臂、关节多轴机械臂等或其任意组合。上述机械臂的相关描述仅用于说明目的,而无意限制本说明书的范围。
在一些实施例中,穿刺器械(例如,光纤针、静脉留置针、注射针、穿刺针、活检针等)可以安装在机械臂160上,并通过机械臂160执行穿刺过程。
在一些实施例中,处理设备140和机械臂160可以集成为一体。在一些实施例中,处理设备140和机械臂160可以直接或间接相连接,联合作用实现本说明书所述的方法和/或功能。在一些实施例中,医学扫描设备110、处理设备140和机械臂160可以集成为一体。在一些实施例中,医学扫描设备110、处理设备140和机械臂160可以直接或间接相连接,联合作用实现本说明书所述的方法和/或功能。
关于介入手术的图像处理系统100的描述旨在是说明性的,而不是限制本申请的范围。许多替代、修改和变化对本领域普通技术人员将是显而易见的。可以理解,对于本领域的技术人员来说,在了解该系统的原理后,可能在不背离这一原理的情况下,对各个模块进行任意组合,或者构成子系统与其他模块连接。在一些实施例中,图24的分割模块2410、配准模块2420以及介入路径规划模块2430可以是一个系统中的不同模块,也可以是一个模块实现上述的两个或两个以上模块的功能。例如,各个模块可以共用一个存储模块,各个模块也可以分别具有各自的存储模块。本申请描述的示例性实施方式的特征、结构、方法和其它特征可以以各种方式组合以获得另外的和/或替代的示例性实施例。例如,处理设备140和医学扫描设备110可以被集成到单个设备中。诸如此类的变形,均在本说明书的保护范围之内。
图2是根据本说明书一些实施例所示的介入手术的图像处理方法的示例性流程图。其中,流程200包括:分割阶段(步骤210和步骤220)、配准阶段(步骤230和步骤240)和介入路径规划阶段(步骤250)。
在分割阶段,步骤210,获取扫描对象在不同阶段的多个第一医学影像和第二医学影像,其中,至少有两个第一医学影像对应于同一扫描对象的不同时间点。在一些实施例中,步骤210可以由分割模块2410执行。
第一医学影像可以指扫描对象(如患者等)在手术前注入造影剂后,经由医学扫描设备扫描得到的影像。第一医学影像也可以称为术前增强影像。在一些实施例中,第一医学影像可以包括CT影像、PET-CT影像、US影像或MR影像。
在一些实施例中,不同阶段可以包括第一阶段。第一阶段可以是用于获取扫描对象的第一医学影像的时间阶段。在一些实施例中,第一阶段可以是指扫描对象注入造影剂后且在手术开始进行之前的时间段。
在一些实施例中,扫描对象可以包括生物扫描对象或非生物扫描对象。在一些实施例中,生物扫描对象可以包括患者、患者的特定部位、器官和/或组织,例如腹部、心脏或肿瘤组织等。在一些实施例中,非生物扫描对象可以包括人造物体,例如人造模体等。
在一些实施例中,可以获取扫描对象手术前第一阶段的多个第一医学影像。在一些实施例中,可以从医学扫描设备110获取扫描对象的多个第一医学影像,如PET-CT影像等。在一些实施例中,可以从终端130、处理设备140和存储设备150获取扫描对象的多个第一医学影像,如US影像等。
需要说明的是,在另一些实施例中,还可以通过任何其他可行的方式获取第一医学影像,例如,可以经由网络120从云端服务器和/或医疗系统(如医院的医疗系统中心等)获取多个第一医学影像,本说明书实施例不做特别限定。
在一些实施例中,多个第一医学影像中可以至少有两个第一医学影像对应于同一扫描对象的不同时间点。在一些实施例中,可以在注入造影剂后的多个时间点或时间间隔对同一扫描对象(例如,某一器官/组织)进行扫描,以得到对应不同时间点的多个第一医学影像。在一些实施例中,不同时间点可以至少包括两个不同期相。此时,多个第一医学影像中可以至少包括两个具有不同期相的第一医学影像。在一些实施例中,不同期相的第一医学影像中血管(以及其他器官/组织、病灶等)的显影效果不同。通过静脉注射造影剂可以增强人体的血流信号,使得注入造影剂后的不同时间点/段内血管(以及其他器官/组织、病灶等)的显影效果不同,从而获得多个第一医学影像。以对肝脏进行扫描为例,患者注入造影剂后至40s进入动脉期,此时动脉血管显影较为明显;40s-120s的时 间内为门脉期,此时门静脉和静脉血管显影较为明显;120s-360s的时间内为延迟期,此时肝脏实质和病灶(例如,肿瘤结节)显影较为明显。其中,在延迟期阶段,在注入造影剂后的120s-180s的时间内,肝脏实质保持一定程度强化,肝脏实质显影较为明显;180s-360s的时间内,病灶,例如血管癌、胆管细胞癌等则表现出延迟强化的特点,相对于正常肝脏实质为高密度,肝脏中的病灶显影较为明显。
由以上可知,通过在不同时间点对肝脏进行扫描,可以获得多个期相的第一医学影像,并且多个期相分别对应的第一医学影像中血管(以及其他器官/组织、病灶等)的显影效果不同。例如,分别在动脉期、门脉期和延迟期各扫描一次,可以得到3个期相的第一医学影像,动脉血管在动脉期对应的第一医学影像中显影明显;门静脉和静脉血管在门脉期对应的第一医学影像中显影明显;肝脏实质和病灶在延迟期对应的第一医学影像中显影明显。在一些实施例中,也可以在动脉期(和/或门脉期、和/或延迟期)时间范围内进行多次扫描,得到多个期相对应的第一医学影像。例如,可以在动脉期时间范围内进行多次,例如,3次扫描,可以得到3个期相的第一医学影像,3个期相的第一医学影像中动脉血管的显影效果都可以较好,利用一定方法(例如,图像识别)可以确定动脉血管在哪一个中的显影效果最好。可以理解的是,对其他器官(例如,胸部器官、肺部器官等)进行扫描时,也可以通过在不同时间点扫描对应器官以获得具有不同期相的多个第一医学影像,且不同期相对应的第一医学影像中血管(以及其他器官/组织、病灶等)的显影效果不同。
步骤220,基于多个第一医学影像对第一目标结构集中的至少部分元素进行分割,生成第一分割影像。在一些实施例中,步骤220可以由分割模块2410执行。
在一些实施例中,第一分割影像可以是对第一医学影像分割得到的手术前第一目标结构集的至少部分元素(例如,靶器官、靶器官内的血管、病灶)的分割影像。在一些实施例中,第一分割影像的数量可以是多个,其中,每一个第一分割影像对应于一个第一医学影像。
在一些实施例中,可以对至少两个不同期相对应的第一医学影像进行融合,获得第一融合影像;基于第一融合影像对第一目标结构集中的至少部分元素进行分割。第一融合影像可以是多个不同期相对应的第一医学影像融合后得到的图像。第一融合影像可以对用于融合的多个第一医学影像中的不同元素进行分割。例如,设定血管a在phase-1对应的第一医学影像中进行分割,血管b在phase-2对应的第一医学影像中进行分割,将phase-1和phase-2对应的两个第一医学影像进行融合得到第一融合影像,第一融合影像中可以对血管a和血管b进行分割。
在一些实施例中,第一目标结构集中的至少部分元素也可以在第一医学影像上进行分割。这里的至少部分元素可以是第一目标结构集中的非血管元素,例如病灶、靶器官。
关于对第一目标结构集中的元素的分割方法,具体可以参见本说明书图7-图15,及其相关描述。
在一些实施例中,基于多个第一医学影像对第一目标结构集中的至少部分元素进行分割可以包括:基于多个第一医学影像确定预推荐增强影像;获取操作指令,基于操作指令和预推荐增强影像对第一目标结构集中的至少部分元素进行分割。在一些实施例中,上述过程可以由分割模块2410执行。
操作指令可以包括用户指令和自动指令。用户指令可以包括用户(例如,医生)输入的指令,例如,医生通过终端输入的期相调整指令。在一些实施例中,基于步骤220中确定的预推荐增强影像,如果医生认为某一元素(例如,血管)在当前期相对应的第一医学影像(即,当前预推荐增强影像)上无法进行分割或分割效果较差,则可以通过终端输入期相调整指令,以手动调整该元素的分割期相,从而提高元素的分割效果。自动指令可以包括自动分割指令。自动指令可以自动发出,而不需要用户输入,分割模块2410自动执行元素分割操作。可以理解的是,执行自动指令时,可以认为元素在当前期相对应的第一医学影像(即,当前预推荐增强影像)的分割效果符合预期标准。
在一些实施例中,一个预推荐增强影像中可以至少包括一个自动识别效果最好的元素。当预推荐增强影像中只包括一个自动识别效果最好的元素(即预推荐增强影像与元素之间是一一对应关系)时,可以对该元素进行分割,得到第一分割影像;当预推荐增强影像中包括多个自动识别效果最好的元素(即预推荐增强影像与元素之间是一对多关系)时,可以对多个元素中的一个或多个进行分割,得到第一分割影像。
预推荐增强影像可以是第一医学影像中的一个或以上影像。基于步骤210中的描述可知,不同期相对应的第一医学影像中的血管(以及其他器官/组织、病灶等)的显影效果不同,预推荐增强影像可以是基于血管(以及其他器官/组织、病灶等)的显影效果,从多个第一医学影像中筛选出 的显影效果最好的第一医学影像。第一医学影像中的血管(以及其他器官/组织、病灶等)的显影效果不同,血管(以及其他器官/组织、病灶等)在对应第一医学影像中的分割效果也可能不同(如显影效果好,分割效果好;显影效果差,分割效果差)。在一些实施例中,预推荐增强影像与血管(以及其他器官/组织、病灶等)之间可以具有对应关系。血管(以及其他器官/组织、病灶等)在与其对应的预推荐增强影像上的显影效果(也可以是分割效果)最好。在一些实施例中,一个预推荐增强影像可以对应于一个血管(或其他器官/组织、病灶),即血管(或其他器官/组织、病灶)与预推荐增强影像之间是一一对应关系。例如,对于肝脏的动脉血管而言,可以确定在动脉期时间范围内扫描(扫描一次)得到的第一医学影像为动脉血管的预推荐增强影像。在一些实施例中,同一个预推荐增强影像可以对应于多个血管(或其他器官/组织、病灶),即血管(或其他器官/组织、病灶)与预推荐增强影像之间是多对一的对应关系。例如,对于肝脏的肝门静脉血管和肝静脉血管,二者都是在门脉期时间范围内扫描(扫描一次)得到的第一医学影像中的显影效果最好,因此,可以确定门脉期时间范围内扫描得到的第一医学影像为肝门静脉血管和肝静脉血管的预推荐增强影像。
在一些实施例中,可以根据多个第一医学影像的生成时间来确定多个第一医学影像对应的期相。在一些实施例中,将多个第一医学影像导入至系统时,可以将导入的多个第一医学影像按照生成时间进行排序,依次定义为phase-1、phase-2,…,phase-R,并设定血管(以及器官/组织、病灶)在某个期相对应的第一医学影像上进行分割。图3是根据本说明书一些实施例所示的血管在对应期相上的分割设置的示意图。其中,(a)表示肝脏中血管的分割设置,(b)表示肺部的血管的分割设置。参见图3(a),以将4个第一医学影像导入至介入手术的规划系统为例,4个第一医学影像根据生成时间依次定义为phase-1、phase-2、phase-3和phase-4,并设定肝动脉在phase-1对应的第一医学影像上进行分割、肝门静脉在phase-2对应的第一医学影像上进行分割、肝静脉在phase-3对应的第一医学影像上进行分割,以及下腔静脉在phase-4对应的第一医学影像上进行分割。此时,phase-1对应的第一医学影像即可视为肝动脉的预推荐增强影像,phase-2对应的第一医学影像视为肝门静脉的预推荐增强影像,phase-3对应的第一医学影像视为肝静脉的预推荐增强影像,phase-4对应的第一医学影像视为下腔静脉的预推荐增强影像。参见图3(b),同理,肺部的4个第一医学影像导入系统,根据生成时间依次定义为phase-1、phase-2、phase-3和phase-4,并设定肺内动脉和肺外动脉在phase-3对应的第一医学影像上进行分割,肺内静脉和肺外静脉在phase-4对应的第一医学影像上进行分割。此时,phase-3对应的第一医学影像可视为肺内动脉和肺外动脉的预推荐增强影像,phase-4对应的第一医学影像可视为肺内静脉和肺外静脉的预推荐增强影像。
在一些实施例中,多个第一医学影像按照生成时间排序成phase-1、phase-2,…,phase-R的期相,血管(以及器官/组织、病灶)在某个期相对应的第一医学影像上的分割设置可以是预先设定好的。
在一些实施例中,可以对多个第一医学影像进行自动识别,以确定第一医学影像对应的期相,从而保证血管(以及其他器官/组织、病灶)在对应期相的第一医学影像(也即是与该血管对应的预推荐增强影像)上的分割效果最佳。
图4是根据本说明书一些实施例所示的确定预推荐增强影像的示例性流程图。如图4所示,基于多个第一医学影像,确定预推荐增强影像可以包括以下几个子步骤:
子步骤401,对多个第一医学影像进行自动识别;
子步骤402,基于自动识别结果,确定多个第一医学影像对应的期相。
在一些实施例中,第一医学影像的自动识别可以是指第一医学影像的图像识别。通过对多个第一医学影像进行自动识别,可以确定血管在多个第一医学影像中哪一个上的分割效果最佳。在一些实施例中,可以将每个血管对应的分割效果最佳的多个第一医学影像分别赋予不同的期相。例如,通过自动识别确定出血管a在某个第一医学影像上的分割效果最佳,则可以将该第一医学影像定义为phase-1,血管b分割效果最佳的第一医学影像定义为phase-2,依次类推,血管r对应的分割效果最佳的第一医学影像定义为phase-R。此时,可以确定phase-1对应的第一医学影像为血管a的预推荐增强影像,phase-2对应的第一医学影像为血管b的预推荐增强影像,…,phase-R对应的第一医学影像为血管r的预推荐增强影像。在一些实施例中,也可能出现不同血管,例如,血管a和血管b分割效果最佳的第一医学影像是同一影像,则该影像可以同时作为血管a和血管b的预推荐增强影像。需要说明的是,这里将第一医学影像定义的期相可以是随机生成的,该期相相当于是给第一医学影像打上标签,以区分出血管在哪个第一医学影像上的分割效果最佳。
在一些实施例中,也可以先根据多个第一医学影像的生成时间来确定多个第一医学影像对应的期相,再对每个期相对应的第一医学影像进行自动识别,以识别出每个期相对应的第一医学影 像上有哪些血管,通过对比每个期相对应的第一医学影像上对同一血管的识别结果,从而确定该血管的最佳分割影像。例如,多个第一医学影像按照生成时间依次定义为phase-1、phase-2,…,phase-R后,分别对R个期相对应的第一医学影像进行自动识别,可以得到每个期相对应的第一医学影像上有哪些血管,如phase-1和phase-3分别对应的两个第一医学影像中都识别到了血管a,对比phase-1和phase-3对应的两个第一医学影像上对血管a的识别结果,若血管a在phase-1对应的第一医学影像上的分割效果更佳,则phase-1对应的第一医学影像可以确定为血管a的预推荐增强影像;反之,phase-3对应的第一医学影像可以确定为血管a的预推荐增强影像。在一些实施例中,也可能出现某一期相对应的第一医学影像上多个血管的分割效果最佳的情况,则该期相对应的第一医学影像中的多个血管分割设置都保留。
在一些实施例中,还可以设定血管在特定期相对应的第一医学影像上进行分割。这里的特定期相可以是根据自动识别结果来确定的。结合图3(a)所示,可以设定肝动脉在phase-1对应的第一医学影像上进行分割,肝门静脉在phase-2对应的第一医学影像上进行分割,肝静脉在phase-3对应的第一医学影像上进行分割,下腔静脉在phase-4对应的第一医学影像上进行分割。不同的是,这里的phase-1、phase-2、phase-3和phase-4不是根据生成时间确定的期相,而是根据自动识别结果确定的。例如,对4个第一医学影像进行自动识别,根据识别结果确定了其中一个第一医学影像上肝门静脉的分割效果最佳,则将该第一医学影像设定为phase-2。又例如,根据识别结果确定了其中一个第一医学影像上下腔静脉的分割效果最佳,则将该第一医学影像设定为phase-4。
在一些实施例中,多个第一医学影像的自动识别可以通过以下方式来实现。在一些实施例中,可以对多个第一医学影像进行粗分割,获得各个第一医学影像上对于每种血管(或其他器官/组织、病灶)的粗分割掩膜,然后在获得各个血管(或其他器官/组织、病灶)的粗分割掩膜后,针对同一血管(或其他器官/组织、病灶)在多个期相对应的第一医学影像上进行同向对比,从而得到该血管(或其他器官/组织、病灶)分割效果最好的第一医学影像。在一些实施例中,可以通过无监督方法和/或有监督方法来判断血管(或其他器官/组织、病灶)粗分割掩膜的分割效果。有监督方法可以是将粗分割掩膜与参考图像(例如,理想图像)的掩膜进行差距评估,判断分割质量,如无分割错误、Recall、SSIM图像相似度等。无监督方法可以是不需要参考图像,而是根据分割图像与期望分割图像的广泛特征的匹配程度来评估分割图像的分割质量,如Region Nonuniformity(区域不均匀性)、模糊分割系数、模糊分割熵、分割准确度(Segmentation Accuracy,SA)等。
在一些实施例中,可以利用图像处理方法,例如图像增强方法、阈值分割方法、区域生长方法或特征点提取方法等,对第一医学影像中的血管(或其他器官/组织、病灶)进行粗分割的操作。在一些实施例中,可以利用深度学习卷积网络的方法,如UNet,对第一医学影像中的血管(或其他器官/组织、病灶)进行粗分割的操作。关于粗分割及其方法的更多描述可以参见本说明书其他地方,例如图7-图10,及其相关描述。
在一些实施例中,基于第一医学影像确定预推荐增强影像,可以是基于第一目标结构集中的预设元素在不同期相对应的第一医学影像上的预分割效果,确定预推荐增强影像。在一些实施例中,第一目标结构集可以包括目标器官(例如,靶器官)内的血管。在一些实施例中,第一目标结构集除了靶器官内的血管外,还可以包括靶器官和病灶。在一些实施例中,靶器官可以包括大脑、肺、肝脏、脾脏、肾脏或其他任何可能的器官组织,如甲状腺等。在一些实施例中,预设元素可以是预先设定需要进行分割的元素。预设元素可以包括靶器官内的血管、病灶、靶器官中的一种或多种。在一些实施例中,可以仅基于靶器官内的血管在不同期相对应的第一医学影像上的预分割效果,确定靶器官内的血管的预推荐增强影像。在一些实施例中,也可以基于靶器官内的血管以及病灶(和/或靶器官)在不同期相对应的第一医学影像上的预分割效果,分别确定靶器官内的血管以及病灶(和/或靶器官)的预推荐增强影像。在一些实施例中,预分割效果可以是自动识别的结果,例如,粗分割的分割结果。
在一些实施例中,靶器官和/或病灶在第一医学影像中的分割结果与其在预推荐增强影像中的分割结果可以进行融合,以得到更为准确的分割结果。
在一些实施例中,第一目标结构集中的至少部分元素分割完成后,可以对至少部分元素的轮廓进行提取,以快速定位靶区。例如,可以利用轮廓绘制工具对至少部分元素(例如,病灶、血管)的轮廓进行提取。在一些实施例中,当医生根据临床经验判断当前元素的轮廓与实际情况存在差异时,还可以利用轮廓修正工具对影像图上的元素的轮廓进行修正,如增加、修减,直至符合临床情况。关于轮廓绘制工具和轮廓修正工具的更多内容可以参见本说明书图22和图23及其相关描述。
步骤230,基于第二医学影像对第二目标结构集中的至少部分元素进行分割,生成第二分割影像。在一些实施例中,步骤230可以由配准模块2420执行。
在一些实施例中,不同阶段还可以包括第二阶段。第二阶段可以是用于获取扫描对象的第二医学影像的时间阶段。在一些实施例中第二阶段可以是指扫描对象在手术过程中但手术还未开始的时间段。例如,扫描对象已做好手术准备即将进行手术操作的时间段。相比于第一阶段,第二阶段的时间段更加接近扫描对象的手术时间。因此,与第一阶段中获取的第一医学影像相比,第二阶段中获取的第二医学影像更加能够反映扫描对象的临床情况。
第二医学影像是指扫描对象在手术中经由医学扫描设备平扫得到的影像。在一些实施例中,第二医学影像可以包括CT影像、PET-CT影像、US影像或MR影像。在一些实施例中,第二医学影像可以是实时扫描影像。在一些实施例中,第二医学影像也可以称为平扫影像或术中平扫影像,是手术准备过程中且手术执行前(即实际进针前)的扫描影像。在一些实施例中,可以获取扫描对象的第二医学影像。在一些实施例中,可以从医学扫描设备110获取扫描对象的第二医学影像,如PET-CT影像等。在一些实施例中,可以从终端130、处理设备140和存储设备150获取扫描对象的第二医学影像,如US影像等。在一些实施例中,第一医学影像中的元素(例如,血管、病灶、靶器官等)的显影效果更佳,而第二医学影像反映的扫描对象的临床情况更加真实,可以将第一医学影像与第二医学影像进行配准,从而得到配准图。配准图中既能更加准确反映扫描对象的临床情况,还能较好的显示不同元素的位置和/或轮廓,从而使得后续的手术更加安全。例如,可以基于第二医学影像对第二目标结构集中的至少部分元素进行分割,生成第二分割影像。
在一些实施例中,可以基于第一医学影像、第一分割影像、预推荐增强影像、第二医学影像和第二分割影像生成介入手术方案。介入手术方案可以指穿刺路径规划的前置工作方案。介入手术方案可以用于区分可介入区域和不可介入区域。可介入区域是指穿刺路径可以经过的区域(例如,脂肪等)。不可介入区域是指在规划手术方案时介入规划路径需要避开的区域(例如,血管、重要器官)。在一些实施例中,不可介入区域可以包括不可穿刺区域、不可导入或置入区域以及不可注入区域。在一些实施例中,介入手术方案可以包括第一医学影像、预推荐增强影像、第二医学影像、目标结构集的选择以及第一医学影像和第二医学影像的配准等。
在一些实施例中,可以将第一目标结构集中的至少部分元素的分割结果标注为书签并进行保存。得到第一目标结构集中的至少部分元素的分割结果的流程过程可以记为术前离线规划阶段。术前离线规划阶段中生成的书签可以用于表征第一目标结构集中的至少部分元素的分割结果。在一些实施例中,将第一目标结构集中的至少部分元素的分割结果标注为书签可以包括:将第一目标结构集中的全部元素的分割结果标注为书签,以使该书签能够表征术前离线规划阶段的全部元素的分割结果。在一些实施例中,书签中也可以包括用于元素分割的第一医学影像,例如预推荐增强影像。书签可以用于生成介入手术方案。例如,利用书签中的第一医学影像、预推荐增强影像、第一目标结构集中的至少部分元素的分割结果(如第一分割影像),可以生成介入手术方案,具体可以参见后文描述。
在一些实施例中,书签的生成方法可以是采用dicom文件的生成技术,并将离线规划阶段的结果通过序列化技术和压缩技术保存在dicom的指定Tag中。其中,序列化格式可以采用ProtoBuf。
在配准阶段,步骤240,配准第一分割影像和第二分割影像。在一些实施例中,步骤240可以由配准模块2420执行。在一些实施例中,可以对第一分割影像和第二分割影像进行配准,确定配准形变场;基于配准形变场和预推荐增强影像中的第一目标结构集中的至少部分元素的空间位置,确定第二医学影像中相应元素的空间位置。关于第一分割影像与第二分割影像的配准的具体内容可以参见本说明书的其他地方,例如图25-图29及其相关描述。
在介入路径规划阶段,步骤250,基于配准后的第二分割影像和/或第二医学影像,确定由皮肤表面进针点到靶区之间的介入路径。在一些实施例中,步骤250可以由介入路径规划模块2430执行。
介入路径可以指机械臂160的组件从皮肤表面进针点到靶区的路径。例如,介入路径可以是穿刺针的穿刺路径等。在一些实施例中,可以在可介入区域中选择介入路径,以避开不可介入区域,从而保证手术的安全性。在一些实施例中,可以基于配准后的扫描对象的第二医学影像以及介入手术方案确定靶点,然后基于用户与靶点相关的操作确定参考路径,从而基于参考路径确定目标路径。机械臂的组件可以从皮肤表面进针点按照目标路径到达靶区,从而进行相应的手术操作。关于路径规划的具体说明,可以参见图30-图36及其相关描述。
应当注意的是,上述有关流程200的描述仅仅是为了示例和说明,而不限定本说明书的适 用范围。对于本领域技术人员来说,在本说明书的指导下可以对流程200进行各种修正和改变。
图5是根据本说明书一些实施例所示的确定介入手术方案的示例性流程图。如图5所示,流程500可以包括以下几个子步骤:
子步骤231,获取书签。
在一些实施例中,书签导入到介入手术的图像处理系统中时,可以加载书签中对应保存的术前离线规划阶段的结果,例如,第一医学影像、预推荐增强影像以及第一目标结构集中的至少部分元素的分割结果。书签的加载过程也可以称为书签的还原过程。在一些实施例中,将第一目标结构集中的全部元素的分割结果标注为书签时,加载书签可以理解为是完全还原术前离线规划阶段的分割结果。即,加载书签可以还原第一目标结构集中的至少部分元素的分割结果。书签还原技术可以是读取书签dicom的指定Tag,并通过反序列化技术获得术前离线规划阶段的结果,随后将该结果重新设置在原序列上。在一些实施例中,可以从终端130、处理设备140和存储设备150获取书签。
子步骤232,对第二医学影像的第二目标结构集进行分割,获得第二目标结构集的第二分割影像。
在一些实施例中,第二医学影像的第二目标结构集所包括的区域或器官可以基于分割模式(例如,快速分割模式和精准分割模式)确定。也即是,分割模式不同时,第二目标结构集包括的区域或器官不同。例如,分割模式在快速分割模式下,第二目标结构集可以包括不可介入区域。又例如,分割模式在精准分割模式下,第二目标结构集可以包括第二医学影像中所有重要器官。重要器官是指介入手术时介入规划路径需要避开的器官,例如,肝脏、肾脏、靶器官外部血管等。在一些实施例中,第二目标结构集除不可介入区域/第二医学影像中所有重要器官外,也可以包括靶器官和病灶。第二分割影像是对第二医学影像分割得到的手术中第二目标结构集(例如,不可介入区域/重要器官、靶器官、病灶)的分割影像。
在一些实施例中,第一目标结构集和第二目标结构集有交集。例如,第一目标结构集包括血管(例如,靶器官内的血管)和靶器官,第二目标结构集包括不可介入区域(或所有重要器官)、靶器官和病灶时,第一目标结构集和第二目标结构集的交集为靶器官。又例如,第一目标结构集包括靶器官内的血管、靶器官和病灶,第二目标结构集包括不可介入区域(或所有重要器官)、靶器官和病灶时,第一目标结构集和第二目标结构集的交集为靶器官和病灶。
在一些实施例中,可以在执行子步骤232之前,进行靶器官设置。在一些实施例中,介入手术的图像处理方法200可以用于不同部位(例如,肝脏、肺部)的介入手术的术前规划。进行介入手术操作的人体部位不同时,靶器官设置也不同。例如,进行肝脏手术的术前规划时,靶器官可以设置为肝脏,其他器官/组织(例如,肾脏、胰腺、肾上腺等)在快速分割模式下为不可介入区域,在精准分割模式下为重要的器官/组织。在一些实施例中,用户可以根据介入手术的操作部位,进行靶器官设置。
在一些实施例中,可以在执行子步骤232之前,获取分割模式,分割模式可以包括快速分割模式和精准分割模式。在一些实施例中,分割模式可以是用于对第二医学影像分割的分割模式。子步骤232中,对第二医学影像的第二目标结构集进行分割,可以按以下方式实施:根据分割模式,对第二医学影像的第四目标结构集进行分割。在一些实施例中,可以根据快速分割模式和/或精准分割模式,对第二医学影像的第四目标结构集进行分割。
在一些实施例中,第四目标结构集可以是第二目标结构集的一部分,例如,不可介入区域、靶器官外部所有重要器官。在不同分割模式下,第四目标结构集包括的区域/器官不同。在一些实施例中,在快速分割模式下,第四目标结构集可以包括不可介入区域。在一些实施例中,在精准分割模式下,第四目标结构集可以包括预设的重要器官。
在一些实施例中,在快速分割模式下,可以对第二医学影像进行区域定位计算,以及对不可介入区域进行分割提取。
在一些实施例中,可以对不可介入区域和目标器官(如靶器官)之外的区域进行后处理,以保障不可介入区域与目标器官的中间区域不存在空洞区域。空洞区域是指由前景像素相连接的边界包围所形成的背景区域。在一些实施例中,不可介入区域可以用腹腔(或是胸腔)区域减去目标器官和可介入区域得到。而用腹腔(或是胸腔)区域减去目标器官和可介入区域得到不可介入区域后,目标器官和不可介入区域的中间可能会存在空洞区域,该空洞区域既不属于目标器官,也不属于不可介入区域。此时,可以对空洞区域进行后处理操作以将空洞区域补全,也即是经过后处理操作的空洞区域可以视为不可介入区域。在一些实施例中,后处理可以包括腐蚀操作和膨胀操作。在 一些实施例中,腐蚀操作和膨胀操作可以基于第二医学影像与滤波器进行卷积处理来实施。在一些实施例中,腐蚀操作可以是滤波器与第二医学影像卷积后,根据预定腐蚀范围求局部最小值,使得第二医学影像的轮廓缩小至期望范围,在第二医学影像显示初始影像中目标高亮区域缩小一定范围。在一些实施例中,膨胀操作可以是滤波器与术中扫描影像卷积后,根据预定腐蚀范围求局部最大值,使得第二医学影像的轮廓扩大至期望范围,在第二医学影像显示初始影像中目标高亮区域缩小一定范围。
在一些实施例中,在快速分割模式下,可以在对第二医学影像先进行区域定位计算,再进行不可介入区域的分割提取。在一些实施例中,可以基于第二医学影像的目标器官的分割掩膜和血管掩膜(血管掩膜可以是基于第二医学影像和第一医学影像配准映射得到的),确定目标器官内部的血管掩膜。需要说明的是,在快速分割模式下,仅需分割目标器官内部的血管;在精准分割模式下,可以分割目标器官内部的血管以及外部其他血管。
掩膜(Mask),如器官掩膜,可以是像素级的分类标签,以腹腔医学影像为例进行说明,掩膜表示对医学影像中各个像素进行分类,例如,可以分成背景、肝脏、脾脏、肾脏等,特定类别的汇总区域用相应的标签值表示,例如,所有分类为肝脏的像素进行汇总,汇总区域用肝脏对应的标签值表示,这里的标签值根据可以具体粗分割任务进行设定。分割掩膜是指经过分割操作后得到的相应掩膜。
在一些实施例中,快速分割模式下,仅以胸腔区域或腹腔区域作为示例,首先对第二医学影像的扫描范围内胸腔或是腹腔区域的区域定位计算,具体地,对于腹腔,选取肝顶直至直肠底部,作为腹腔的定位区域;如果是胸腔,则取食管顶至肺底(或肝顶),作为胸腹腔的定位区域;确定胸腔或是腹腔区域的区域定位信息后,再对腹腔或是胸腔进行分割,并在该分割区域内进行再次分割以提取可介入区域(与不可介入区域相对,如可穿区域,脂肪等);最后,用腹腔或是胸腔分割掩膜去掉目标器官的分割掩膜和可介入区域掩膜,即可提取到不可介入区域。在一些实施例中,可介入区域可以包括脂肪部分,如两个器官之间包含脂肪的缝隙等。以肝脏为例,皮下至肝脏之间的区域中的部分区域可以被脂肪覆盖。由于快速分割模式下处理速度快,进而使得规划速度更快,时间更短,提高了影像处理效率。
在一些实施例中,在精准分割模式下,可以对第二医学影像的所有器官进行分割。在一些实施例中,第二医学影像的所有器官可以包括第二医学影像的基本器官以及重要器官。在一些实施例中,第二医学影像的基本器官可以包括第二医学影像的目标器官(如靶器官)。在一些实施例中,在精准分割模式下,可以对第二医学影像的预设的重要器官进行分割。预设的重要器官可以根据第二医学影像的每个器官的重要程度来确定。例如,第二医学影像中的所有重要器官均可以作为预设的重要器官。在一些实施例中,精准分割模式下的预设的重要器官总体积与快速分割模式下的不可介入区域总体积的比值可以小于预设效率因子m。预设效率因子m可以表征基于不同分割模式进行分割的分割效率(或分割细致程度)的差异情况。在一些实施例中,预设效率因子m可以等于或小于1。在一些实施例中,预设效率因子m的设定与介入手术类型有关。介入手术类型可以包括但不限于泌尿手术、胸腹手术、心血管手术、妇产科介入手术、骨骼肌肉手术等。仅作为示例性说明,泌尿手术中的预设效率因子m可以设置的较小;胸腹手术中的预设效率因子m可以设置的较大。
在一些实施例中,在精准分割模式下,通过分割获取第二医学影像的所有重要器官的分割掩膜,分割的影像内容更细致,使得手术规划方案的选择性更多,影像处理的鲁棒性也得到了加强。
图6是根据本说明书一些实施例所示的不同分割模式下的组织分割及类别设定的示意图。以肝脏为靶器官为例,图6(a)表示精准分割模式下的组织分割及类别设定,图6(b)表示快速分割模式下的组织分割及类别设定。参见图6(a),在精准分割模式下,可以将单个器官/组织分割出来,并将靶器官,即肝脏设定为待穿区,其他重要器官,例如,肾脏、胰腺、胆囊、胃部、脾脏、心脏、肺部、肾上腺或其他自定义组织设定为危险区。参见图6(b),在快速分割模式下,可以将不可介入区域分割出来,并将靶器官,即肝脏设定为待穿区,不可穿组织(也就是不可介入区域)和自定义组织设定为危险区。在一些实施例中,器官/组织类别为待穿区时,针道规划路径可以穿过该器官/组织;器官/组织类别为危险区时,则需要确定该器官/组织与针道规划路径的安全距离,以提高针道规划的安全性和稳定性。在一些实施例中,器官/组织与针道规划路径的安全距离可以是预先设定好的。例如,可以根据经验值对每个器官/组织与针道规划路径的安全距离进行初步设定。在一些实施例中,医生也可以根据实际情况对安全距离进行适应性调整。在一些实施例中,医生可以根据实际需要进行勾选并设定器官/组织类别。在一些实施例中,自定义组织可以由医生根据患者实际情况进行合理设定。
在一些实施例中,对第二目标结构集中的元素分割后,也可以利用轮廓绘制工具对至少部分元素(例如,器官)的轮廓进行提取。在一些实施例中,当医生根据临床经验判断当前元素的轮廓与实际情况存在差异时,还可以利用轮廓修正工具对影像图上的元素的轮廓进行修正,如增加、修减,直至符合临床情况。关于轮廓绘制工具和轮廓修正工具的更多内容可以参见本说明书图22和图23及其相关描述。
图7是根据本说明书一些实施例所示的介入手术的图像处理方法中涉及的分割过程的示例性流程图。
在一些实施例中,介入手术的图像处理方法中涉及的分割过程300可以包括以下步骤:
步骤310,对医学影像中的目标结构集中的至少一个元素进行粗分割;
步骤320,得到至少一个元素的掩膜;
步骤330,确定掩膜的定位信息;
步骤340,基于掩膜的定位信息,对至少一个元素进行精准分割。
在一些实施例中,医学影像可以包括第一医学影像、预推荐增强影像和第二医学影像。目标结构集可以包括第一目标结构集、第二目标结构集和第四目标结构集中的任意一个或多个。
在一些实施例中,步骤310中,可以利用阈值分割方法、区域生长方法或水平集方法,对医学影像中的目标结构集中的至少一个元素进行粗分割的操作。元素可以包括医学影像中的目标器官(例如,靶器官)、靶器官内的血管、病灶、不可介入区域、所有重要器官等。在一些实施例中,基于阈值分割方法进行粗分割,可以按以下方式实施:可以通过设定多个不同的像素阈值范围,根据输入医学影像的像素值,对医学影像中的各个像素进行分类,将像素值在同一像素阈值范围内的像素点分割为同一区域。在一些实施例中,基于区域生长方法进行粗分割,可以按以下方式实施:基于医学影像上已知像素点或由像素点组成的预定区域,根据需求预设相似度判别条件,并基于该预设相似度判别条件,将像素点与其周边像素点比较,或者将预定区域与其周边区域进行比较,合并相似度高的像素点或区域,直到上述过程无法重复则停止合并,完成粗分割过程。在一些实施例中,预设相似度判别条件可以根据预设影像特征确定,示例性地,如灰度、纹理等影像特征。在一些实施例中,基于水平集方法进行粗分割,可以按以下方式实施:将医学影像的目标轮廓设为一个高维函数的零水平集,对该函数进行微分,从输出中提取零水平集来得到目标的轮廓,然后将轮廓范围内的像素区域分割出来。
在一些实施例中,可以利用基于深度学习卷积网络的方法,对医学影像中的目标结构集的至少一个元素进行粗分割的操作。在一些实施例中,基于深度学习卷积网络的方法可以包括基于全卷积网络的分割方法。在一些实施例中,卷积网络可以采用基于U形结构的网络框架,如UNet等。在一些实施例中,卷积网络的网络框架可以由编码器和解码器以及残差连接(skip connection)结构组成,其中编码器和解码器由卷积层或卷积层结合注意力机制组成,卷积层用于提取特征,注意力模块用于对重点区域施加更多注意力,残差连接结构用于将编码器提取的不同维度的特征结合到解码器部分,最后经由解码器输出分割结果。在一些实施例中,基于深度学习卷积网络的方法进行粗分割,可以按以下方式实施:由卷积神经网络的编码器通过卷积进行医学影像的特征提取,然后由卷积神经网络的解码器将特征恢复成像素级的分割概率图,分割概率图表示图中每个像素点属于特定类别的概率,最后将分割概率图输出为分割掩膜,由此完成粗分割。
图8是根据本说明书一些实施例所示的确定元素掩膜的定位信息过程的示例性流程图。图9是根据本说明书一些实施例所示的元素掩膜进行软连通域分析过程的示例性流程图。图10是根据本说明书一些实施例所示的对元素掩膜进行软连通域分析的粗分割示例性效果对比图。
在一些实施例中,步骤330中,确定元素掩膜的定位信息,可以按以下方式实施:对元素掩膜进行软连通域分析。连通域,即连通区域,一般是指影像中具有相同像素值且位置相邻的前景像素点组成的影像区域。
在一些实施例中,步骤330对元素掩膜进行软连通域分析,可以包括以下几个子步骤:
子步骤331,确定连通域数量;
子步骤332,当连通域数量≥2时,确定符合预设条件的连通域面积;
子步骤333,当多个连通域中最大连通域的面积与连通域总面积的比值大于第一阈值M,确定最大连通域符合预设条件;
子步骤334,确定保留连通域至少包括最大连通域;
子步骤335,基于保留连通域确定元素掩膜的定位信息。
预设条件是指连通域作为保留连通域时需要满足的条件。例如,预设条件可以是对连通域 面积的限定条件。在一些实施例中,医学影像中可能会包括多个连通域,多个连通域具有不同的面积。可以将具有不同面积的多个连通域按照面积大小,例如,从大到小进行排序,排序后的连通域可以记为第一连通域、第二连通域、第k连通域。其中,第一连通域可以是多个连通域中面积最大的连通域,也叫最大连通域。这种情况下,判断不同面积序位的连通域作为保留连通域的预设条件可以不同,具体参见图9的相关描述。在一些实施例中,符合预设条件的连通域可以包括:连通域的面积按照从大到小排序在预设序位n以内的连通域。例如,预设序位n为3时,可以按照面积序位的顺序,并根据对应预设条件依次判断每个连通域是否为保留连通域。即,先判断第一连通域是否为保留连通域,再判断第二连通域是否为保留连通域。在一些实施例中,预设序位n可以基于元素(或目标结构)的类别,例如,胸部目标结构、腹部目标结构进行设定。在一些实施例中,第一阈值M的取值范围可以为0.8至0.95,在取值范围内能够保障软连通域分析获得预期准确率。在一些实施例中,第一阈值M的取值范围可以为0.9至0.95,进一步提高了软连通域分析的准确率。在一些实施例中,第一阈值M可以基于目标结构的类别,例如,胸部目标结构、腹部目标结构进行设定。在一些实施例中,预设序位n/第一阈值M也可以根据机器学习和/或大数据进行合理设置,在此不做进一步限定。
在一些实施例中,步骤330对元素掩膜进行软连通域分析,可以按以下方式进行:
基于获取到的元素掩膜,对元素掩膜内连通域的个数及其对应面积进行分析和计算,过程如下:
当连通域个数为0时,表示对应掩膜为空,即掩膜获取或粗分割失败或分割对象不存在,不作处理。例如,对腹腔中的脾脏进行分割时,可能存在脾脏切除的情况,此时脾脏的掩膜为空。
当连通域个数为1时,表示仅此一个连通域,无假阳性,无分割断开等情况,保留该连通域;可以理解的是,连通域个数为0和1时,无需根据预设条件判断连通域是否为保留连通域。
当连通域个数为2时,按面积(S)的大小分别获取连通域A和B,其中,连通域A的面积大于连通域B的面积,即S(A)>S(B)。结合上文,连通域A也可以称为第一连通域或最大连通域;连通域B可以称为第二连通域。当连通域的个数为2时,连通域作为保留连通域需要满足的预设条件可以是最大连通域面积与连通域总面积的比值与阈值的大小关系。对连通域进行计算,当A面积占A、B总面积的比例大于第一阈值M时,即S(A)/S(A+B)>第一阈值M,可以将连通域B判定为假阳性区域,仅保留连通域A(即确定连通域A为保留连通域);当A面积占A、B总体面积的比例小于或等于第一阈值M时,可以将A和B均判定为元素掩膜的一部分,同时保留连通域A和B(即确定连通域A和B为保留连通域)。
当连通域个数大于或等于3时,按面积(S)的大小分别获取连通域A、B、C…P,其中,连通域A的面积大于连通域B的面积,连通域B的面积大于连通域C的面积,以此类推,即S(A)>S(B)>S(C)>…>S(P);然后计算连通域A、B、C…P的总面积S(T),对连通域进行计算,此时,可以按照面积序位的顺序,并根据对应预设条件依次判断每个连通域(或者面积序位在预设序位n以内的连通域)是否为保留连通域。在一些实施例中,当连通域的个数大于等于3时,最大连通域(即,连通域A)作为保留连通域需要满足的预设条件可以是最大连通域面积与连通域总面积的比值与阈值(例如,第一阈值M)的大小关系。在一些实施例中,当连通域的个数大于等于3时,最大连通域(即,连通域A)作为保留连通域需要满足的预设条件也可以是第二连通域面积与最大连通域面积的比值与阈值(例如,第二阈值N)的大小关系。具体地,当连通域A面积占总面积S(T)的比例大于第一阈值M时,即S(A)/S(T)>第一阈值M,或者,连通域B面积占连通域A面积的比例小于第二阈值N时,即S(B)/S(A)<第二阈值N,将连通域A判定为元素掩膜部分并保留(即连通域A为保留连通域),其余连通域均判定为假阳性区域;否则,继续进行计算,即继续判断第二连通域(即连通域B)是否为保留连通域。在一些实施例中,连通域B作为保留连通域需要满足的预设条件可以是第一连通域与第二连通域的面积之和与连通域总面积的比值与第一阈值M的大小关系。在一些实施例中,连通域B作为保留连通域需要满足的预设条件也可以是第三连通域面积占第一连通域面积与第二连通域面积之和的占比与阈值(例如,第二阈值N)的大小关系。具体地,当连通域A和连通域B的面积占总面积S(T)的比例大于第一阈值M时,即S(A+B)/S(T)>第一阈值M,或者,连通域C面积占连通域A和连通域B面积的比例小于第二阈值N时,即S(C)/S(A+B)<第二阈值N,将连通域A和B判定为元素掩膜部分并保留(即连通域A和连通域B为保留连通域),剩余部分均判定为假阳性区域;否则,继续进行计算,即继续判断第三连通域(即连通域C)是否为保留连通域。连通域C的判断方法与连通域B的判断方法类似,连通域C作为保留连通域需要满足的预设条件可以是第一连通域、第二连通域和第三连通域的面积之和与连通域总面积的比 值与第一阈值M的大小关系,或者,第四连通域面积占第一连通域面积、第二连通域面积和第三连通域面积之和的占比与阈值(例如,第二阈值N)的大小关系。具体地,当连通域A、连通域B和连通域C的面积占总面积S(T)的比例大于第一阈值M时,即S(A+B+C)/S(T)>第一阈值M,或者,连通域D面积占连通域A、连通域B和连通域C面积的比例小于第二阈值N时,即S(D)/S(A+B+C)<第二阈值N,将连通域A、B和C均判定为元素掩膜部分并保留(即连通域A、连通域B和连通域C均为保留连通域)。参照上述判断方法,可以依次判断连通域A、B、C、D…P,或者是面积序位在预设序位n以内的部分连通域是否为保留连通域。需要说明的是,图8中仅示出了对三个连通域是否为保留连通域进行的判断。也可以理解为,图8中的预设序位n的值设定为4,因此,只需对序位为1、2、3的连通域,即连通域A、连通域B、连通域C是否为保留连通域进行判断。
最后输出保留的连通域。
在一些实施例中,第二阈值N的取值范围可以为0.05至0.2,在取值范围内能够保障软连通域分析获得预期准确率。在一些实施例中,第二阈值N可以取0.05,此种设置情况下,能够获得较为优异的软连通域分析准确率效果。
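结合子步骤331至335,下面给出软连通域分析的一个示意性实现(假设使用Python与scipy进行连通域标记;判断逻辑按上文"第一阈值M或第二阈值N"的思路做了简化,阈值取值仅为示例):

```python
import numpy as np
from scipy import ndimage

def soft_cc_analysis(mask: np.ndarray, M: float = 0.9, N: float = 0.05) -> np.ndarray:
    """对元素掩膜做软连通域分析,返回仅包含保留连通域的掩膜。"""
    labeled, num = ndimage.label(mask > 0)
    if num <= 1:                 # 0个:掩膜为空,不作处理;1个:无假阳性,直接保留
        return mask
    # 按面积从大到小排序各连通域
    areas = ndimage.sum(np.ones_like(labeled), labeled, index=range(1, num + 1))
    order = np.argsort(areas)[::-1]
    areas_sorted = areas[order]
    total = areas_sorted.sum()
    keep, acc = [], 0.0
    for i, a in enumerate(areas_sorted):
        keep.append(order[i] + 1)    # 先保留当前面积序位的连通域
        acc += a
        next_a = areas_sorted[i + 1] if i + 1 < len(areas_sorted) else 0.0
        # 已保留面积占比大于M,或下一连通域相对已保留面积的占比小于N,
        # 则其余连通域判定为假阳性区域,停止保留
        if acc / total > M or next_a / acc < N:
            break
    return np.isin(labeled, keep).astype(mask.dtype)
```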
如图10所示,左边上下分别为未采用软连通域分析的粗分割结果的横断面医学影像和立体医学影像,右边分别为采用了软连通域分析的粗分割结果的横断面医学影像和立体医学影像。经过对比可知,基于软连通域分析对元素掩膜进行粗分割的结果显示,去除了左边影像中方框框出的假阳性区域,相比以往连通域分析方法,排除假阳性区域的准确性和可靠性更高,并且直接有助于后续合理提取元素掩膜定位信息的边界框,提高了分割效率。
在一些实施例中,元素掩膜的定位信息可以为元素掩膜的外接矩形的位置信息,例如,外接矩形的边框线的坐标信息。在一些实施例中,元素掩膜的外接矩形,覆盖元素的定位区域。在一些实施例中,外接矩形可以以外接矩形框的形式显示在医学影像中。在一些实施例中,外接矩形可以是基于元素中连通区域的各方位的底边缘,例如,连通区域上下左右方位上的底边缘,来构建相对于元素掩膜的外接矩形框。
在一些实施例中,元素掩膜的外接矩形可以是一个矩形框或多个矩形框的组合。例如,可以是一个较大面积的矩形框,或者多个较小面积矩形框组合拼成的较大面积的矩形框。
在一些实施例中,元素掩膜的外接矩形可以是仅存在一个矩形框的一个外接矩形框。例如,在元素中只存在一个连通区域时(例如,血管或腹腔中的器官),根据该连通区域各方位的底边缘,构建成一个较大面积的外接矩形。在一些实施例中,上述大面积的外接矩形可以应用于存在一个连通区域的器官。
在一些实施例中,元素掩膜的外接矩形可以是多个矩形框组合拼成的一个外接矩形框。例如,在元素存在多个连通区域时,多个连通区域对应的多个矩形框,根据这多个矩形框的底边缘构建成一个矩形框。可以理解的,如三个连通区域对应的三个矩形框的底边缘构建成一个总的外接矩形框,计算时按照一个总的外接矩形框来处理,在保障实现预期精确度的同时,减少计算量。
在一些实施例中,医学影像中包括多个连通域时,可以先判断多个连通域的位置信息,再基于多个连通域的位置信息得到元素掩膜的定位信息。例如,可以先判断多个连通域中符合预设条件的连通域,即保留连通域的位置信息,进而根据保留连通域的位置信息得到元素掩膜的定位信息。
在一些实施例中,步骤330中,确定掩膜的定位信息,还可以包括以下操作:基于参考元素的定位坐标,对元素掩膜进行定位。
在一些实施例中,该操作可以在元素掩膜外接矩形定位失败的情况下执行。可以理解的,当元素掩膜外接矩形的坐标不存在时,判断对应元素定位失败。
在一些实施例中,参考元素可以选取定位较为稳定的元素(例如,定位较为稳定的器官),在对该元素定位时出现定位失败的概率较低,由此实现对元素掩膜进行精确定位。在一些实施例中,由于在腹腔范围中肝部、胃部、脾部、肾部的定位失败的概率较低,胸腔范围中肺部的定位失败的概率较低,这些器官的定位较为稳定,因此肝部、胃部、脾部、肾部可以作为腹腔范围中的参考器官,即参考元素可以包括肝部、胃部、脾部、肾部、肺部,或者其他任何可能的器官组织。在一些实施例中,可以基于肝部、胃部、脾部、肾部的定位坐标对腹腔范围中的器官掩膜进行再次定位。在一些实施例中,可以基于肺部的定位坐标对胸腔范围中的器官掩膜进行定位。
在一些实施例中,可以以参考元素的定位坐标为基准坐标,对元素掩膜进行再次定位。在一些实施例中,当定位失败的元素位于腹腔范围时,则以肝部、胃部、脾部、肾部的定位坐标作为再次定位的坐标,据此对腹腔中定位失败的元素进行再次定位。在一些实施例中,当定位失败的元素位于胸腔范围时,则以肺部的定位坐标作为再次定位的坐标,据此对胸腔中定位失败的元素进行再次定位。仅作为示例,当定位失败的元素位于腹腔范围时,可以以肝顶、肾底、脾左、肝右的定位坐标作为再次定位的横断面方向(上侧、下侧)、冠状面方向(左侧、右侧)的坐标,并取这四个器官坐标的最前端和最后端作为新定位的矢状面方向(前侧、后侧)的坐标,据此对腹腔中定位失败的元素进行再次定位。仅作为示例,当定位失败的元素位于胸腔范围时,以肺部定位坐标构成的外接矩形框扩张一定像素,据此对胸腔中定位失败的元素进行再次定位。
基于参考元素的定位坐标,对元素掩膜进行精确定位,能够提高分割精确度,并降低分割时间,从而提高了分割效率,同时减少了分割计算量,节约了内存资源。
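以腹腔范围为例,下面的草图演示了如何以参考器官(肝部、胃部、脾部、肾部)的定位坐标重建定位失败元素的再次定位范围(参考器官外接框坐标的组织方式为本文假设):

```python
def relocate_with_references(ref_boxes: dict) -> tuple:
    """ref_boxes: {器官名: (zmin, zmax, ymin, ymax, xmin, xmax)},
    为肝部、胃部、脾部、肾部等参考器官的外接矩形框坐标(组织方式为假设)。
    返回再次定位所用的横断面、矢状面、冠状面方向坐标。"""
    z_top = ref_boxes["liver"][0]        # 肝顶 → 横断面方向上侧
    z_bottom = ref_boxes["kidney"][1]    # 肾底 → 横断面方向下侧
    x_left = ref_boxes["spleen"][4]      # 脾左 → 冠状面方向左侧
    x_right = ref_boxes["liver"][5]      # 肝右 → 冠状面方向右侧
    # 取参考器官坐标的最前端和最后端作为矢状面方向(前侧、后侧)坐标
    y_front = min(b[2] for b in ref_boxes.values())
    y_back = max(b[3] for b in ref_boxes.values())
    return z_top, z_bottom, y_front, y_back, x_left, x_right
```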
图11是根据本说明书一些实施例所示的对元素进行精准分割过程的示例性流程图。
在一些实施例中,步骤340中,基于掩膜的定位信息,对至少一个元素进行精准分割,可以包括以下子步骤:
子步骤341,对至少一个元素进行初步精准分割。初步精准分割可以是根据粗分割的元素掩膜的定位信息,进行的精准分割。在一些实施例中,可以根据输入数据和粗分割定位的外接矩形框,对元素进行初步精准分割。通过初步精准分割可以生成精准分割的元素掩膜。
子步骤342,判断元素掩膜的定位信息是否准确。通过步骤342,可以判断粗分割得到的元素掩膜的定位信息是否准确,进一步判断粗分割是否准确。
在一些实施例中,可以对初步精准分割的元素掩膜进行计算获得其定位信息,将粗分割的定位信息与精准分割的定位信息进行比较。在一些实施例中,可以对粗分割的元素掩膜的外接矩形框,与精准分割的元素掩膜的外接矩形框进行比较,确定两者的差别大小。在一些实施例中,可以在三维空间的6个方向上(即外接矩形框的整体在三维空间内为一个立方体),对粗分割的元素掩膜的外接矩形框,与精准分割的元素掩膜的外接矩形框进行比较,确定两者的差别大小。仅作为示例,可以计算粗分割的元素掩膜的外接矩形框每个边与精准分割的元素掩膜的外接矩形框每个边的重合度,或者计算粗分割的元素掩膜的外接矩形框在6个方向上的边界坐标与精准分割的元素掩膜的外接矩形框对应边界坐标的差值。
在一些实施例中,可以根据初步精准分割的元素掩膜的定位信息,来判断粗分割的元素掩膜的定位信息是否准确。在一些实施例中,可以根据粗分割的定位信息与精准分割的定位信息的差别大小,来确定判断结果是否准确。在一些实施例中,定位信息可以是元素掩膜的外接矩形(如外接矩形框),根据粗分割的元素掩膜的外接矩形与精准分割的元素掩膜的外接矩形,判断粗分割元素掩膜的外接矩形是否准确。此时,粗分割的定位信息与精准分割的定位信息的差别大小可以是指,粗分割外接矩形框与精准分割外接矩形框中相距最近的边框线之间的距离大小。在一些实施例中,当粗分割的定位信息与精准分割的定位信息差别较大(即粗分割外接矩形框与精准分割外接矩形框中相距最近的边框线之间的距离较大),则判断粗分割的定位信息准确;当差别较小(即粗分割外接矩形框与精准分割外接矩形框中相距最近的边框线之间的距离较小)时,则判断粗分割的定位信息不准确。需要注意的是,粗分割外接矩形框是对原始粗分割贴近元素的边框线上进行了像素扩张(例如,扩张15-20个体素)得到的。在一些实施例中,可以基于粗分割的外接矩形框与精准分割的外接矩形框中相距最近的边框线之间的距离与预设阈值的大小关系,来确定粗分割的定位信息是否准确,例如,当距离小于预设阈值时确定为不准确,当距离大于预设阈值时确定为准确。在一些实施例中,为了保障判断准确度,预设阈值的取值范围可以是小于或等于5体素。
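上述判断可以概括为:在三维外接矩形框的6个方向上,逐一比较(已扩张的)粗分割框与精准分割框对应边框线之间的距离。以下为一个示意性草图(预设阈值取5体素,仅为示例):

```python
import numpy as np

def judge_directions(coarse_box, fine_box, thr=5):
    """coarse_box/fine_box: (zmin, zmax, ymin, ymax, xmin, xmax)。
    返回6个方向的判定结果,True表示该方向定位准确。
    注意:coarse_box已在原始粗分割结果上扩张了15-20体素,因此精准分割
    边框与粗分割边框距离较小时,说明元素可能超出粗定位范围,判定为不准确。"""
    coarse = np.asarray(coarse_box, dtype=float)
    fine = np.asarray(fine_box, dtype=float)
    dist = np.abs(coarse - fine)   # 对应边框线之间的距离
    return dist > thr              # 距离大于预设阈值 → 该方向准确
```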
图12至图13是根据本说明书一些实施例所示的元素掩膜的定位信息判断的示例性示意图。图14A是根据本说明书一些实施例所示的基于元素掩膜的定位信息判断滑动方向的示例性示例图。
其中,图12、图13中显示有粗分割得到的元素掩膜A以及元素掩膜A的外接矩形框B(即元素掩膜A的定位信息),以及根据粗分割的外接矩形框初次精准分割后的外接矩形框C,图14A中还示出了粗分割的外接矩形框B滑动后得到的滑窗B1,其中,(a)为滑动操作前的示意图,(b)为滑动操作后的示意图。另外,方便起见,以三维外接矩形框的一个平面内的平面矩形框进行示例说明,可以理解三维外接矩形框还存在其他5个平面矩形框,即在进行三维外接矩形框的具体计算时存在6个方向的边框线,这里仅以某一平面的4个边框线进行说明。
仅作为示例,如图12所示,精准分割外接矩形框C中的右边边框线与粗分割的外接矩形框B对应的边框线差别较小,由此可以判断粗分割外接矩形框B右边对应的方向上是不准确的,需要对右边边框线进行调整。但C中的上边、下边以及左边边框线与B中的上边、下边以及左边差别较大,由此可以判断粗分割外接矩形框B上边、下边以及左边对应的方向上是准确的。仅作为示例,如图13所示,精准分割外接矩形框C中4个边的边框线与粗分割的外接矩形框B对应边框线差别均较大,可以判断粗分割外接矩形框B中4个边的边框线均是准确的。需要注意的是,对于元素掩膜A共有6个方向,图12、图13中仅以4个边框线进行示意说明,实际情况中会对元素掩膜A中的6个方向的12个边框线进行判断。
子步骤343a,当判断结果为不准确,基于自适应滑窗获取准确的定位信息。在一些实施例中,当粗分割结果不准确时,对其精准分割获取到的元素大概率是不准确的,可以对其进行相应自适应滑窗计算,并获取准确的定位信息,以便继续进行精准分割。
在一些实施例中,基于自适应滑窗获取准确的定位信息,可以按以下方式实施:确定定位信息不准确的至少一个方向;根据重叠率参数,在所述方向上进行自适应滑窗计算。在一些实施例中,可以确定外接矩形框不准确的至少一个方向;确定粗分割外接矩形框不准确后,根据输入的预设重叠率参数,将粗分割外接矩形框按照相应方向滑动,即进行滑窗操作,并重复该滑窗操作直至所有外接矩形框完全准确。其中,重叠率参数指初始外接矩形框与滑动之后的外接矩形框之间重叠部分面积占初始外接矩形框面积的比例,当重叠率参数较高时,滑窗操作的滑动步长较短。在一些实施例中,若想保证滑窗计算的过程更加简洁(即滑窗操作的步骤较少),可以将重叠率参数设置的较小;若想保证滑窗计算的结果更加准确,可以将重叠率参数设置的较大。在一些实施例中,可以根据当前重叠率参数计算进行滑窗操作的滑动步长。根据图12的判断方法可知,图14A中粗分割的外接矩形框B的右边和下边边框线对应的方向上是不准确的。为方便描述,这里将外接矩形框B的右边边框线对应的方向记为第一方向(第一方向垂直于B的右边边框线),下边边框线对应的方向记为第二方向(第二方向垂直于B的下边边框线)。仅作为示例,如图14A所示,假设外接矩形框B的长度为a,当重叠率参数为60%时,可以确定对应步长为a*(1-60%),如上述的,外接矩形框B的右边框线可以沿着第一方向滑动a*(1-60%)。同理,外接矩形框B的下边框线可以沿着第二方向滑动相应步长。外接矩形框B的右边边框线以及下边边框线分别重复相应滑窗操作,直至外接矩形框B完全准确,如图14A(b)中所示的滑窗B1。结合图12及图14A,当确定了粗分割的外接矩形框(即目标结构掩膜的定位信息)不准确时,对精分割外接矩形框上6个方向上边框线的坐标值与粗分割外接矩形框上6个方向上边框线的坐标值进行一一比对,当差距值小于坐标差值阈值(例如,坐标差值阈值为5pt)时(其中坐标差值阈值可以根据实际情况进行设定,在此不做限定),可以判断该外接矩形框的边框线为不准确的方向。
再例如,如图12所示,将精分割外接矩形框C影像中4条边对应的4个方向的像素点坐标,与粗分割外接矩形框B影像中4条边框线对应的4个方向的像素点坐标进行一一比对,其中,当一个方向的像素点坐标的差值小于坐标差值阈值8pt时,则可以判定图12中的粗分割外接矩形框该方向不准确。如,上边差值为20pt、下边差值为30pt、右边差值为1pt,左边为50pt,则右边对应的方向不准确,上边、下边、左边对应的方向准确。
再例如,结合图14A,其中B1为粗分割的外接矩形框B滑动后得到的外接矩形框(也称为滑窗),可以理解的,滑窗为符合预期精确度标准的粗分割外接矩形框,需要将粗分割外接矩形框B的边框线(例如,右边边框线、下边边框线)分别沿着相应方向(例如,第一方向、第二方向)滑动对应的步长至滑窗B1的位置。其中,依次移动不符合标准的每条边框线对应的方向,例如,先滑动B的右边边框线,再滑动B的下边边框线至滑窗的指定位置,而B左边和上边对应的方向是标准的,则不需要进行滑动。可以理解的,每一边滑动的步长取决于B1与B的重叠率,其中,重叠率可以是粗分割外接矩形框B与滑窗B1当前的重叠面积占总面积的比值,例如,当前的重叠率为40%等等。需要说明的是,粗分割外接矩形框B的边框线的滑动顺序可以是从左到右、从上到下的顺序,或者是其他可行的顺序,在此不做进一步限定。
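滑动步长与重叠率参数的关系可以用如下草图说明(函数名与取值均为示例):

```python
def sliding_step(box_length: float, overlap_ratio: float) -> float:
    """根据重叠率参数计算滑窗操作的滑动步长:步长 = 边长 × (1 - 重叠率)。
    重叠率设置得越大,步长越短、滑窗次数越多,结果越准确;
    重叠率设置得越小,滑窗步骤越少、计算过程越简洁。"""
    return box_length * (1.0 - overlap_ratio)

# 例如边长 a=100、重叠率参数为60% 时,步长约为 100 * (1 - 0.6) = 40
print(sliding_step(100.0, 0.6))
```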
图14B-图14E是根据本说明书一些实施例所示的滑窗后进行精准分割的示例性示意图。结合图14B-14E,在一些实施例中,基于原粗分割外接矩形框(原滑窗),自适应滑窗后获取准确的粗分割外接矩形框,可以获取准确的外接矩形框的坐标值,并基于坐标值和重叠率参数,对新滑窗进行精准分割,将精准分割结果与初步精准分割结果叠加,得到最终精准分割结果。具体地,参见图14B,可以对原滑窗B进行滑窗操作,得到滑窗B1(滑窗操作后的最大范围的外接矩形框),B沿第一方向滑动对应步长得到滑窗B1-1,然后对滑窗B1-1的全域范围进行精准分割,得到滑窗B1-1的精准分割结果。进一步地,参见图14C,B可以沿第二方向滑动对应步长得到滑窗B1-2,然后对滑窗B1-2的全域范围进行精准分割,得到滑窗B1-2的精准分割结果。再进一步地,参见图14D,B滑动可以得到滑窗B1-3(如B可以按照图14C所示滑动操作得到滑窗B1-2,再由滑窗B1-2滑动得到滑窗B1-3),然后对滑窗B1-3的全域范围进行精准分割,得到滑窗B1-3的精准分割结果。将滑窗B1-1、滑窗B1-2以及滑窗B1-3的精准分割结果与初步精准分割结果叠加,得到最终精准分割 结果。需要说明的是,滑窗B1-1、滑窗B1-2以及滑窗B1-3的尺寸与B的尺寸相同。滑窗B1是原滑窗B进行连续滑窗操作,即滑窗B1-1、滑窗B1-2以及滑窗B1-3得到的最终滑窗结果。在一些实施例中,滑窗B1-1、滑窗B1-2以及滑窗B1-3的精准分割结果与初步精准分割结果进行叠加时,可能存在重复叠加部分,例如,图14E中,滑窗B1-1和滑窗B1-2之间可能存在交集部分,在进行分割结果叠加时,该交集部分可能被重复叠加。针对这种情况,可以采用下述方法进行处理:对于元素掩膜A的某一部分,若一个滑窗对该部分的分割结果准确,另一滑窗的分割结果不准确,则将分割结果准确的滑窗的分割结果作为该部分的分割结果;若两个滑窗的分割结果都准确,则将右侧滑窗的分割结果作为该部分的分割结果;若两个滑窗的分割结果都不准确,则将右侧滑窗的分割结果作为该部分的分割结果,并继续进行精准分割,直至分割结果准确。
在一些实施例中,如图11所示,当判断结果为不准确,基于自适应滑窗获取准确的定位信息是一个循环过程。具体地,在对比精准分割边框线和粗分割边框线后,通过自适应滑窗可以得到更新后的精准分割外接矩形框坐标值,该精准分割外接矩形框扩张一定的像素后设定为新一轮循环的粗分割外接矩形框,然后对新的外接矩形框再次进行精准分割,得到新的精准分割外接矩形框,并计算其是否满足准确的要求。满足准确要求,则结束循环,否则继续循环。在一些实施例中,可以利用深度卷积神经网络模型对粗分割中的至少一个元素进行精准分割。在一些实施例中,可以利用粗分割前初始获取的历史医学影像作为训练数据,以历史精准分割结果数据作为标签,训练得到深度卷积神经网络模型。在一些实施例中,历史医学影像及历史精准分割结果数据可以从医学扫描设备110获取,即获取扫描对象的历史扫描的医学影像及历史精准分割结果数据。在一些实施例中,也可以从终端130、处理设备140和存储设备150获取扫描对象的历史扫描的医学影像及历史精准分割结果数据。
子步骤343b,当判断结果为准确,将初步精准分割结果作为分割结果输出。
在一些实施例中,当判断结果(即粗分割结果)准确时,可以确定通过该粗分割结果进行精准分割获取到的元素的定位信息是准确的,可以将初步精准分割结果输出。
在一些实施例中,可以输出上述进行精准分割的至少一个元素结果数据。在一些实施例中,为了进一步降低噪声及优化影像显示效果,可以在分割结果输出之前进行影像后处理操作。影像后处理操作可以包括对影像进行边缘光滑处理和/或影像去噪等。在一些实施例中,边缘光滑处理可以包括平滑处理或模糊处理(blurring),以便减少医学影像的噪声或者失真。在一些实施例中,平滑处理或模糊处理可以采用以下方式:均值滤波、中值滤波、高斯滤波以及双边滤波。
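影像后处理中的去噪与边缘光滑可以基于现成滤波器实现,例如(假设使用scipy,滤波器参数仅为示例):

```python
import numpy as np
from scipy import ndimage

def postprocess(seg: np.ndarray) -> np.ndarray:
    """对分割结果做简单后处理:中值滤波去噪 + 高斯滤波平滑边缘。"""
    denoised = ndimage.median_filter(seg.astype(float), size=3)  # 中值滤波去噪
    smoothed = ndimage.gaussian_filter(denoised, sigma=1.0)      # 高斯滤波平滑
    return (smoothed > 0.5).astype(np.uint8)                     # 二值化回掩膜
```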
图15是根据本说明书一些实施例所示的分割结果的示例性效果对比图。
如图15所示,左边上下分别为采用传统技术的粗分割结果的横断面医学影像和立体医学影像,右边分别为采用本申请实施例提供的器官分割方法的横断面医学影像和立体医学影像。经过对比可知,右边分割结果影像显示的目标器官分割结果,相比左边分割结果影像显示的目标器官分割结果,获取的目标器官更完整,降低了分割器官缺失的风险,提高了分割精准率,最终提高了整体分割效率。
在一些实施例中,对目标结构集(例如,第一目标结构集、第二目标结构集等)中的至少部分元素进行分割时,可能会出现过分割情况,导致医学影像的同一位置存在不同元素的轮廓(也就是不同元素的轮廓有重叠)。这种情况下,可以对目标结构集中的至少部分元素标注优先级;根据优先级的顺序,对元素的轮廓进行刷新显示。优先级级别高的元素轮廓覆盖优先级级别低的元素轮廓。在一些实施例中,结合图6,不同元素的优先级顺序由高到低可以依次为危险区、靶区、可穿区。仅作为示例性描述,当医学影像的同一位置同时存在类别为危险区、靶区和可穿区的元素的轮廓时,可以根据优先级顺序对元素的轮廓进行刷新显示,使得靶区的元素轮廓覆盖可穿区的元素轮廓,并且,危险区的元素轮廓覆盖靶区的元素轮廓。这种优先级设置方式下,类别为危险区的元素的优先级级别最高,可以保证类别为危险区的元素轮廓不会被其他类别的元素轮廓覆盖,从而提高后续介入手术的安全稳定性。在一些实施例中,为了进一步保证优先级级别高的元素轮廓不会被优先级级别低的元素轮廓覆盖,可以对目标结构集中的所有元素标注优先级。
在一些实施例中,对目标结构集中的至少部分元素标注优先级可以包括:对目标结构集中重要元素(例如,第一目标结构集中靶器官内的血管、精准分割模式下第二目标结构集中的重要器官/组织等)标注优先级,而其他器官/组织的优先级则可以采用默认级别(即,其他器官/组织可以具有预先设定的默认优先级)。在一些实施例中,对目标结构集中的至少部分元素标注优先级可以包括:对目标结构集中所有元素标注优先级。这里的所有元素可以是所有进行分割的器官/组织。例如,对第一目标结构集中的靶器官、靶器官内的血管、病灶进行分割时,可以对靶器官、靶器官内的血管、病灶均标注优先级。在一些实施例中,结合图6,目标结构集中的元素的优先级可以与元 素类别(例如,待穿区、危险区)有关。元素类别为危险区的元素的优先级可以高于元素类别为待穿区的元素的优先级。
图16是根据本说明书一些实施例所示的优先级刷新显示的示例性结果图。其中,图16(a)表示分割的肝脏轮廓,图16(b)表示分割的病灶轮廓,图16(c)表示分割的血管轮廓。由以上描述可知,血管的优先级最高,病灶的优先级次之,肝脏优先级最低。这是由于病灶所在的器官为靶器官,即肝脏作为靶器官,靶器官可以默认为是可穿区。对比图16(a)和(b),病灶轮廓覆盖肝脏轮廓,对比图16(b)和(c),血管轮廓覆盖病灶轮廓。
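优先级刷新显示可以理解为按优先级从低到高依次绘制各元素的掩膜/轮廓,使高优先级元素覆盖重叠处的低优先级元素。以下为一个示意性草图(优先级数值为假设):

```python
import numpy as np

def render_by_priority(shape, masks: dict, priorities: dict) -> np.ndarray:
    """masks: {元素名: 二值掩膜}; priorities: {元素名: 优先级,数值越大级别越高}。
    按优先级从低到高依次写入标签,后绘制(优先级更高)的元素覆盖先绘制的元素。"""
    canvas = np.zeros(shape, dtype=np.int32)
    for i, name in enumerate(sorted(masks, key=lambda n: priorities[n]), start=1):
        canvas[masks[name] > 0] = i
    return canvas

# 例如:危险区 > 靶区 > 可穿区
# labels = render_by_priority(img.shape, masks, {"危险区": 3, "靶区": 2, "可穿区": 1})
```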
继续参见图5,子步骤233,对第一分割影像与第二分割影像进行配准,确定第三目标结构集的空间位置。
第三目标结构集是第一分割影像与第二分割影像配准后得到的结构全集。在一些实施例中,第三目标结构集可以包括目标器官(例如,靶器官)、靶器官内的血管、病灶、以及其他区域/器官(例如,不可介入区域、所有重要器官)。在一些实施例中,在快速分割模式下,其他区域/器官可以是指不可介入区域;在精准分割模式下,其他区域/器官可以是指所有重要器官。在一些实施例中,第三目标结构集中至少有一个元素包括在第一目标结构集中,第三目标结构集中至少有一个元素不包括在第二目标结构集中。例如,第一目标结构集包括靶器官内的血管、靶器官和病灶,第二目标结构集包括不可介入区域(或所有重要器官)、靶器官和病灶时,靶器官内的血管包括在第一目标结构集中且不包括在第二目标结构集中。在一些实施例中,第四目标结构集也可以视为是第三目标结构集的一部分,例如,不可介入区域、靶器官外部所有重要器官。
在一些实施例中,第一分割影像(如对第一医学影像和/或预推荐增强影像分割得到的手术前第一目标结构集的分割影像),可以包括第一目标结构集(例如,术前目标器官内的血管、术前目标器官、术前病灶)的精确结构特征;第二分割影像(即第二医学影像分割得到的手术中第二目标结构集的分割影像),可以包括第二目标结构集(例如,术中目标器官、术中病灶、术中不可介入区域/所有重要器官)的精确结构特征。在一些实施例中,在配准之前,可以对第一分割影像、第二分割影像进行目标结构集外观特征与背景的分离处理。在一些实施例中,外观特征与背景的分离处理可以采用基于人工神经网络(线性决策函数等)、基于阈值的分割方法、基于边缘的分割方法、基于聚类分析的图像分割方法(如K均值等)或者其他任何可行的算法,如基于小波变换的分割方法等等。
下面以第一分割影像包括术前目标器官(如,靶器官)内的血管和术前目标器官的结构特征(即第一目标结构集包括目标器官内的血管和目标器官),第二分割影像包括术中目标器官、术中病灶、术中不可介入区域/所有重要器官的结构特征(即第二目标结构集包括目标器官、病灶、不可介入区域/所有重要器官)为例,对配准过程进行示例性描述。可以理解的是,病灶的结构特征并不限于包括在第二分割影像中,在其他实施例中,病灶的结构特征也可以包括在第一分割影像中,或者病灶的结构特征同时包括在第一分割影像和第二分割影像中。
图17是本说明书一些实施例中所示的对第一分割影像与第二分割影像进行配准过程的示例性流程图。如图17所示,步骤233可以包括以下几个子步骤:
步骤2331,对第一分割影像与第二分割影像进行配准,确定配准形变场。
配准可以是通过空间变换使第一分割影像与第二分割影像的对应点达到空间位置和解剖位置一致的图像处理操作。配准形变场可以用于反映第一分割影像和第二分割影像的空间位置变化。在一些实施例中,经过配准后,第二医学影像可以基于配准形变场进行空间位置的变换,以使变换后的第二医学影像与第一医学影像和/或预推荐增强影像在空间位置和解剖位置上完全一致。
图18至图19是本说明书一些实施例中所示的确定配准形变场过程的示例性流程图。图20是本说明书一些实施例中所示的经过分割得到第一分割影像、第二分割影像的示例性演示图。
在一些实施例中,步骤2331中,对第一分割影像与第二分割影像进行配准,确定配准形变场的过程,可以包括以下几个子步骤:
子步骤23311,基于元素之间的配准,确定第一初步形变场。
在一些实施例中,元素可以是第一分割影像、第二分割影像的元素轮廓(例如,器官轮廓、血管轮廓、病灶轮廓)。元素之间的配准可以是指元素轮廓(掩膜)所覆盖的影像区域之间的配准。例如图20中的预推荐增强影像经过分割后得到目标器官(如靶器官)的器官轮廓A所覆盖的影像区域(左下图中虚线区域内灰度相同或基本相同的区域)、第二医学影像中经过分割得到目标器官(如靶器官)的器官轮廓B所覆盖的影像区域(右下图中虚线区域内灰度相同或基本相同的区域)。
在一些实施例中,通过器官轮廓A所覆盖的影像区域与器官轮廓B所覆盖的影像区域之 间的区域配准,得到第一初步形变场(如图19中的形变场1)。在一些实施例中,第一初步形变场可以是局部形变场。例如,通过肝脏术前轮廓A与术中轮廓B得到关于肝脏轮廓的局部形变场。
子步骤23312,基于元素之间的第一初步形变场,确定全图的第二初步形变场。
全图可以是包含元素的区域范围影像,例如,目标器官为肝脏时,全图可以是整个腹腔范围的影像。又例如,目标器官为肺时,全图可以是整个胸腔范围的影像。
在一些实施例中,可以基于第一初步形变场,通过插值确定全图的第二初步形变场。在一些实施例中,第二初步形变场可以是全局形变场。例如,图19中通过形变场1插值确定全图尺寸的形变场2。
子步骤23313,基于全图的第二初步形变场,对浮动影像进行形变,确定浮动影像的配准图。
浮动影像可以是待配准的图像,例如,第一医学影像(如预推荐增强影像)、第二医学影像。例如,将第二医学影像配准到预推荐增强影像时,浮动影像为第二医学影像。可以通过配准形变场对第二医学影像进行配准,以使与预推荐增强影像空间位置一致。又例如,将预推荐增强影像配准到第二医学影像时,浮动影像为预推荐增强影像。可以通过配准形变场对预推荐增强影像进行配准,以使与第二医学影像空间位置一致。浮动影像的配准图可以是配准过程中得到的中间配准结果的图像。以预推荐增强影像配准到第二医学影像为例,浮动影像的配准图可以是配准过程中得到的中间第二医学影像。为了便于理解,本说明书实施例以第一医学影像(如预推荐增强影像)配准到第二医学影像为例,对配准过程进行详细说明。
在一些实施例中,如图19所示,基于获取到的全图的形变场2,对浮动影像,即预推荐增强影像进行形变,确定第一医学影像的配准图,即中间配准结果的第二医学影像。例如,如图19所示,基于获取到肝脏所处腹腔范围的形变场,对预推荐增强影像(腹腔增强影像)进行形变,获取到其配准图。
子步骤23314,对浮动影像的配准图与参考图像中第一灰度差异范围的区域进行配准,得到第三初步形变场。
在一些实施例中,参考图像是指配准前的目标图像,也可以称为未进行配准的目标图像。例如,第一医学影像配准到第二医学影像时,参考图像是指未进行配准动作的第二医学影像。在一些实施例中,第三初步形变场可以是局部形变场。在一些实施例中,子步骤23314可以按以下方式实施:对浮动影像的配准图和参考图像的不同区域分别进行像素灰度计算,获得相应灰度值;计算浮动图像的配准图的灰度值与参考图像的对应区域的灰度值之间的差值;所述差值在第一灰度差异范围时,分别将浮动影像的配准图与参考图像的对应区域进行弹性配准,获得第三初步形变场。在一些实施例中,所述差值在第一灰度差异范围时,可以表示浮动影像的配准图中的一个区域与参考图像中对应区域的差异不大或比较小。例如,第一灰度差异范围为0至150,浮动影像的配准图中区域Q1与参考图像中同一区域的灰度差值为60,浮动影像的配准图中区域Q2与参考图像中同一区域的灰度差值为180,则两个图像(即浮动影像的配准图和参考图像)的区域Q1的差异不大,而区域Q2的差异较大,仅对两个图像中的区域Q1进行配准。在一些实施例中,如图19所示,对浮动图像的配准图与参考图像中的符合第一灰度差异范围的区域(差异不太大的区域)进行弹性配准,得到形变场3(即上述的第三初步形变场)。
子步骤23315,基于第三初步形变场,确定全图的第四初步形变场。
在一些实施例中,基于第三初步形变场,进行插值获得全图的第四初步形变场。在一些实施例中,第四初步形变场可以是全局形变场。在一些实施例中,可以通过该步骤将局部的第三初步形变场获取到关于全局的第四初步形变场。例如,图19中的通过形变场3插值确定全图尺寸的形变场4。
子步骤23316,基于第四初步形变场,对第二灰度差异范围的区域进行配准,获得最终配准的配准图。
在一些实施例中,第二灰度差异范围的区域可以是浮动影像的配准图灰度值与参考图像灰度值相比,灰度值差值较大的区域。在一些实施例中,可以设置一个灰度差异阈值(如灰度差值阈值为150),浮动影像的配准图灰度值与参考图像灰度值的差值小于灰度差异阈值的区域为第一灰度差异范围的区域,大于灰度差异阈值的则属于第二灰度差异范围的区域。
在一些实施例中,最终配准的配准图可以是基于至少一个形变场对浮动影像(例如,预推荐增强影像)进行多次形变,获得最终与第二医学影像空间位置、解剖位置相同的图像。在一些实施例中,如图19所示,基于第四初步形变场,对第二灰度差异范围(即灰度差异比较大)的区域进行配准,获得最终配准的配准图。例如,灰度值差异比较大的脾脏之外的区域,针对该区域通过形变场4进行形变,获得最终的配准图。
在一些实施例中,利用图18-19中所描述的配准方法,可以将浮动影像中进行了分割,并且参考图像中没有分割的元素(例如,靶器官内的血管),从浮动影像中映射到参考图像中。以浮动影像是预推荐增强影像,参考图像是第二医学影像为例,靶器官内的血管在预推荐增强影像中进行了分割,并且在第二医学影像中没有分割,通过配准可以将靶器官内的血管映射到第二医学影像。可以理解的是,对于快速分割模式下的不可介入区域以及精准分割模式下的所有重要器官的配准也可以采用图18-图19的配准方法,或者仅通过对应分割方法也可以实现类似的效果。
步骤2332,基于配准形变场和预推荐增强影像中的第一目标结构集中的至少部分元素的空间位置,确定第二医学影像中相应元素的空间位置。在一些实施例中,可以基于配准形变场和预推荐增强影像中的目标器官内的血管,确定手术中目标器官内的血管(以下简称为血管)的空间位置。
在一些实施例中,可以基于下述公式(1),基于配准形变场和预推荐增强影像中的血管,确定手术中血管的空间位置:

$I_Q\big((x,y,z)+u(x,y,z)\big)$    (1)

其中,$I_Q$表示预推荐增强影像,$(x,y,z)$表示血管的三维空间坐标,$u(x,y,z)$表示由预推荐增强影像到第二医学影像的配准形变场,$I_Q\big((x,y,z)+u(x,y,z)\big)$表示血管在第二医学影像中的空间位置。在一些实施例中,$u(x,y,z)$也可以理解为浮动图像中元素(例如,靶器官内的血管)的三维坐标至最终配准的配准图中的三维坐标的偏移。
由此,可以通过步骤2332中确定的配准形变场,对预推荐增强影像中的血管进行形变,生成与其空间位置相同的手术中血管的空间位置。
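公式(1)的形变操作可以示意性地实现为按配准形变场对血管体素坐标进行偏移(假设使用numpy;形变场与坐标的组织方式为本文假设):

```python
import numpy as np

def warp_points(points: np.ndarray, u: np.ndarray) -> np.ndarray:
    """points: (N, 3) 血管体素坐标 (x, y, z);
    u: 配准形变场,形状为 (X, Y, Z, 3),u[x, y, z] 给出该点的三维偏移。
    返回映射到第二医学影像后的坐标,即 (x, y, z) + u(x, y, z)。"""
    idx = points.astype(int)
    offsets = u[idx[:, 0], idx[:, 1], idx[:, 2]]  # 每个血管点的偏移量
    return points + offsets
```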
在一些实施例中,不同元素可能在同一或不同的预推荐增强影像上进行分割,因此,对第一目标结构集中的至少部分元素进行分割生成的第一分割影像可以有多个。基于此,第一分割影像与第二分割影像进行配准可以有不同的实施方案。
在一些实施例中,可以对多个预推荐增强影像分别对应的多个第一分割影像与第二分割影像进行配准,得到多个最终配准的配准图,从而确定手术中相应元素的空间位置。例如,可以将肝动脉、肝门静脉、肝静脉以及下腔静脉分别对应的第一分割影像与第二分割影像进行配准,得到4个最终配准的配准图,并分别根据4个最终配准的配准图确定手术中肝动脉、肝门静脉、肝静脉以及下腔静脉的空间位置。在一些实施例中,可以将得到的多个最终配准的配准图进行融合,得到融合配准图,进而根据融合配准图确定手术中相应元素的空间位置。
在一些实施例中,可以从多个第一分割影像中选取其中一个作为基准图,其余的作为非基准图,进一步地,将每个非基准图分别与基准图进行配准,然后对基准图与第二分割影像进行配准。仅作为示例,可以选取肝动脉对应的第一分割影像作为基准图,肝门静脉、肝静脉以及下腔静脉分别对应的第一分割影像作为非基准图。分别对3个非基准图与基准图进行配准,从而确定基准图中肝门静脉、肝静脉以及下腔静脉的空间位置(基准图本身可以确定肝动脉的空间位置),进一步,将基准图与第二分割影像进行配准,从而确定手术中肝动脉、肝门静脉、肝静脉以及下腔静脉的空间位置。
在一些实施例中,可以对多个预推荐增强影像分别对应的多个第一分割影像进行融合,得到融合分割影像,然后对融合分割影像与第二分割影像进行配准,得到最终配准的配准图,从而确定手术中相应元素的空间位置。例如,可以将肝动脉、肝门静脉、肝静脉以及下腔静脉分别对应的第一分割影像进行融合,得到融合分割影像,融合分割影像中包括4个血管的分割结果。然后,对融合分割影像与第二分割影像进行配准,得到最终配准的配准图。
图21是根据本说明书一些实施例所示的多期相增强影像与第二医学影像融合映射的示例性结果图。其中,(a)表示肝动脉在对应期相的预推荐增强影像上的分割结果,(b)表示肝门静脉在对应期相的预推荐增强影像上的分割结果,(c)表示下腔静脉在对应期相的预推荐增强影像上的分割结果,(d)表示映射到第二医学影像的结果。可知,根据本说明书实施例提供的配准方法,可以将多个期相对应的预推荐增强影像中的元素的分割结果映射到第二医学影像上,从而确定第二医学影像中对应元素的空间位置。
在一些实施例中,得到最终配准的配准图后,当医生根据临床经验判断配准图中当前元素的轮廓与实际情况存在差异时,可以利用轮廓修正工具对配准图上的元素的轮廓进行修正,如增加、修减,直至符合临床情况。关于轮廓修正工具的更多内容可以参见本说明书图22和图23及其相关描述。
在一些实施例中,在配准图中可能也会出现同一位置存在不同元素的轮廓的情况(也就是不同元素的轮廓有重叠)。此时,可以对第三目标结构集中的至少部分元素或全部元素标注优先级,并根据优先级的顺序,对元素的轮廓进行刷新显示。关于元素优先级的具体内容可以参见本说明书其他地方(例如,图16),在此不再赘述。
继续参见图5,子步骤234,基于第三目标结构集的空间位置,生成介入手术方案。
在一些实施例中,可以基于确定的手术中血管和病灶(包括在第二医学影像的第二分割影像中)的空间位置,计算病灶中心点,并生成介入手术的术前规划方案(例如,穿刺路径规划)。例如,可以生成病灶周边安全区域和潜在进针区域。在一些实施例中,可以根据确定的可介入区域、不可介入区域,确定病灶周边安全区域和潜在进针区域。在一些实施例中,可以根据潜在进针区域及基本避障约束条件,规划由经皮进针点到病灶中心点之间的基准路径。在一些实施例中,基本避障约束条件可以包括但不限于路径的入针角度、路径的入针深度、路径与血管及重要脏器之间不相交等。在一些实施例中,从加载书签至生成介入手术方案的过程(即子步骤231至子步骤234的过程)也可以称为术前规划阶段。
图22是根据本说明书一些实施例所示的利用第一轮廓工具进行元素轮廓提取的示意图。图23是根据本说明书一些实施例所示的利用第二轮廓工具进行元素轮廓提取的示意图。
在一些实施例中,对目标结构集中的元素进行分割得到各个元素的轮廓后,可以利用轮廓绘制工具在对应的分割影像上绘制不同类型的线条,从而对元素轮廓进行提取。分割影像可以包括第一分割影像、第二分割影像、融合分割影像、配准图等中的任意一个或多个。目标结构集可以包括第一目标结构集、第二目标结构集、第三目标结构集、第四目标结构集中的任意一个或多个。
参见图22,图22中(a)表示利用第一轮廓绘制工具绘制的病灶轮廓线,图22中(b)表示根据算法结果生成的病灶轮廓。在一些实施例中,可以利用第一轮廓绘制工具,在分割影像上绘制轮廓线;基于轮廓线,对目标结构集中的至少部分元素轮廓进行提取。第一轮廓绘制工具可以是手动轮廓绘制工具。在一些实施例中,可以利用第一轮廓绘制工具快速定位病灶区域。具体地,用户可以利用第一轮廓绘制工具在分割影像上病灶的两侧(例如,上方和下方)绘制轮廓线(如图22(a)所示),轮廓线确认后,算法可以将两条轮廓线中间的区域可以确定为病灶区域,提取病灶轮廓(如图22(b)所示)。在一些实施例中,也可以利用第一轮廓绘制工具在分割影像上绘制一个范围较大的轮廓线,使得病灶包括该轮廓线包围的区域中,然后进一步的在该轮廓线区域中提取病灶轮廓。这种方式可以使得病灶轮廓提取更为准确,同时还能降低算法复杂度。
参见图23,图23中(a)表示利用第二轮廓绘制工具绘制的病灶线段,图23中(b)表示根据算法结果生成的病灶轮廓。在一些实施例中,可以利用第二轮廓绘制工具,在分割影像上绘制线段;基于线段,对目标结构集中的至少部分元素轮廓进行提取。第二轮廓绘制工具可以是半自动轮廓绘制工具。在一些实施例中,可以利用第二轮廓绘制工具快速定位病灶区域。具体地,用户可以利用第二轮廓绘制工具在分割影像上绘制线段,术前规划系统将线段的两个端点坐标传递给算法后生成该图像中的病灶轮廓,并显示在影像中。在一些实施例中,可以将第二轮廓绘制工具绘制的线段表示区域长径,以长径为直径画圆得到圆形区域,进一步地,在圆形区域中提取病灶轮廓。
在一些实施例中,利用轮廓绘制工具也可以对目标结构集中的病灶以外的其他元素进行轮廓提取,例如,血管、靶器官、其他重要器官和/或不可介入区域。以血管轮廓提取为例,轮廓绘制工具可以包括血管打点工具,利用血管打点工具可以在血管附近进行打点操作,打点确认后,术前规划系统可以将打点坐标传递给算法,从而生成血管轮廓。
在一些实施例中,医生根据经验判断当前器官/组织的轮廓与实际情况存在差异时,可以使用轮廓修正工具对目标结构集中的至少部分元素轮廓进行修正,例如,增加轮廓和/或修减轮廓,直至元素轮廓符合临床情况为止。例如,相比于临床情况中的病灶轮廓,算法生成的病灶轮廓较小(也可以理解为当前病灶轮廓无法覆盖实际的病灶区域),此时,可以利用轮廓修正工具对病灶轮廓进行增加,即扩大病灶轮廓的范围,以使修正后的病灶轮廓与实际病灶区域能够更加匹配。
图24是根据本说明书一些实施例所示的用于介入手术的图像处理系统的示例性模块图。如图24所示,用于介入手术的图像处理系统2400可以包括分割模块2410,用于获取扫描对象在不同阶段的多个第一医学影像和第二医学影像;其中,至少有两个第一医学影像对应于同一扫描对象的不同时间点;基于多个第一医学影像对第一目标结构集中的至少部分元素进行分割,生成第一分割影像。
配准模块2420,用于:基于第二医学影像对第二目标结构集中的至少部分元素进行分割,生成第二分割影像;配准第一分割影像和第二分割影像。
介入路径规划模块2430,用于基于配准后的第二医学影像和/或第二分割影像,确定由皮肤表面进针点到靶区之间的介入路径。在一些实施例中,不同时间点至少包括两个不同期相。在一些实施例中,基于多个第一医学影像,确定预推荐增强影像可以包括:对多个第一医学影像进行自动识别;基于自动识别的结果,确定多个第一医学影像对应的期相。
需要说明的是,有关上述模块执行相应流程或功能实现用于介入手术的图像处理方法的更多技术细节,具体参见图1至图23所示的任一实施例描述的用于介入手术的图像处理方法相关内容,在此不再赘述。
在一些实施例中,处理设备140对两个医学图像(如第一分割影像与第二分割影像)进行配准时,可能会出现配准误差。当出现配准误差时则会降低配准结果的准确性,从而影响后续手术过程。因此,需要对两个医学图像配准中出现的配准误差进行优化,从而提高配准的准确度。图25是根据本说明书一些实施例所示的医学图像的配准优化方法的流程示意图。
步骤2510,获取第一分割影像和第二分割影像之间的配准误差。在一些实施例中,该步骤可以由配准误差获取模块2901执行。
在一些实施例中,可以将一组未知空间差异的第一分割影像和第二分割影像进行初始配准,例如,对第一分割影像和第二分割影像进行刚性自动配准和弹性自动配准,进而得到第一分割影像和第二分割影像之间关于初始配准的配准误差。
步骤2520,若配准误差未满足预设条件,则通过配准矩阵调整流程确定的刚体配准矩阵,对第一分割影像和第二分割影像进行优化配准。在一些实施例中,该步骤可以由配准优化模块2903执行。
其中,通过配准矩阵调整流程确定刚体配准矩阵,具体包括如下步骤:确定配准矩阵调整流程所用的配准元素;若获取到所述配准元素在所述第一分割影像和第二分割影像中的位置,则基于所述配准元素在所述第一分割影像和第二分割影像中的位置,得到刚体配准矩阵;若未获取到所述配准元素在所述第一分割影像和第二分割影像中的位置,根据对所述第一分割影像或所述第二分割影像的平移操作或旋转操作,得到刚体配准矩阵。在一些实施例中,上述步骤可以由配准矩阵确定模块2902执行。
其中,在进行优化配准时,需确定相应的刚体配准矩阵,确定刚体配准矩阵的流程称为配准矩阵调整流程,在该流程中,可能需要借助某些器官在图像中的位置,来确定刚体配准矩阵,这些器官可以称为配准元素。
一些场景中,被作为配准元素的器官,与被作为分割关键器官的器官可以是相同的,例如,在胸部术式中,可以将肺作为分割关键器官和配准元素;当然,被作为配准元素的器官,与被作为分割关键器官的器官也可以是不同的。
在一些实施例中,可以通过(1)手动调整和(2)自动调整两种方式,经配准矩阵调整流程确定刚体配准矩阵。
(1)手动调整:
如果未获取到配准元素在第一分割影像和第二分割影像中的位置,则可以提示用户进行手动调整,对第一分割影像进行平移操作或旋转操作,或者,对第二分割影像进行平移操作或旋转操作。
此时,系统可以提供2D/3D手动配准工具及2D/3D融合显示,并在2D/3D手动配准工具提示图像的调整方向,用户可根据提示的调整方向调整2D/3D形式的第一分割影像和/或第二分割影像,使2D/3D形式的第一分割影像和第二分割影像大致对齐。用户也可按任意方向调整2D/3D形式的第一分割影像和/或第二分割影像,以达到用户要求。
在用户手动调整完之后,基于用户进行的平移操作或旋转操作,得到刚体配准矩阵。
(2)自动调整:
如果可以获取到配准元素在第一分割影像和第二分割影像中的位置,则根据该位置进行自动调整,得到刚体配准矩阵。
上述自动调整方式中,当配准元素组织为预先设置的,有明显解剖特征点的人体组织或器官,比如肝脏、肺、肾脏等,可以默认推荐自动调整方式。通过算法自动提取特征点对,用户可以在2D/3D界面(2D原始的CT扫描图像,3D重建后有分割结果的图像)上修改或确认已经提取的对应点对,然后自动计算刚体配准矩阵并自动对齐,提示用户确认。用户确认后,还可以进行手动的微调,手动微调的方式参见(1)手动调整。
本说明书实施例中,在得到第一分割影像和第二分割影像之间的配准误差后,如果配准误差未满足预设条件,则通过配准矩阵调整流程确定的刚体配准矩阵,对第一分割影像和第二分割影像进行优化配准,这样,通过优化配准的处理,可以提高配准结果的准确性;其中,通过配准矩阵调整流程确定刚体配准矩阵时,若获取到所述配准元素在所述第一分割影像和第二分割影像中的位置,则基于所述配准元素在所述第一分割影像和第二分割影像中的位置,得到刚体配准矩阵;若未获取到所述配准元素在所述第一分割影像和第二分割影像中的位置,根据对所述第一分割影像或所述第二分割影像的平移操作或旋转操作,得到刚体配准矩阵;上述方式中,根据是否获取到配准矩阵调整流程所用的配准元素在图像中的位置,采取不同的策略,得到刚体配准矩阵,适用性更高。
上述(2)自动调整方式中,具体可以通过如下步骤,得到刚体配准矩阵:获取所述配准元素在第一分割影像和第二分割影像的特征点,得到特征点对;基于所述配准元素在所述第一分割影像和第二分割影像中的位置,得到所述特征点对包括的各特征点在各自图像的位置;基于所述特征点对包括的各特征点在各自图像的位置,得到刚体配准矩阵。
上述处理方式中,通过特征点对包括的各特征点在各自图像中的位置,自动确定刚体配准矩阵,提高优化配准的效率。
在一些实施例中,还可以通过(3)半自动调整的方式,经配准矩阵调整流程确定刚体配准矩阵。
(3)半自动调整:
若获取到所述配准元素在所述第一分割影像和第二分割影像中的位置,则确定所述配准元素在所述第一分割影像的第一质心位置、以及在所述第二分割影像的第二质心位置,根据所述第一质心位置和所述第二质心位置之间的相对位置关系,在融合显示界面上提示图像移动方向,根据对所述第一分割影像或所述第二分割影像的平移操作或旋转操作,得到刚体配准矩阵。
具体来说,可以根据配准元素在3D形式的第一分割影像和第二分割影像的质心位置,计算图像移动方向,计算公式为 $d(x,y,z) = m_r(x,y,z) - m_m(x,y,z)$,并通过箭头提示用户在第一分割影像和第二分割影像上调整方向;其中,$m_r(x,y,z)$表示配准元素在第一分割影像的质心坐标;$m_m(x,y,z)$表示配准元素在第二分割影像的质心坐标。
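质心位置与图像移动方向的计算可示意如下(假设使用scipy计算掩膜质心):

```python
import numpy as np
from scipy import ndimage

def move_direction(mask_ref: np.ndarray, mask_mov: np.ndarray) -> np.ndarray:
    """分别计算配准元素在第一、第二分割影像中的质心 m_r、m_m,
    返回方向向量 d = m_r - m_m,用于在界面上以箭头提示图像调整方向。"""
    m_r = np.array(ndimage.center_of_mass(mask_ref))  # 第一分割影像质心
    m_m = np.array(ndimage.center_of_mass(mask_mov))  # 第二分割影像质心
    return m_r - m_m
```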
或者,若获取到所述配准元素在所述第一分割影像和第二分割影像中的位置,则根据对所述第一分割影像或所述第二分割影像的平移操作或旋转操作,得到刚体配准矩阵。
也就是说,在得到配准元素在第一分割影像和第二分割影像中的位置后,可以不提示用户移动方向,用户可以直接对第一分割影像进行平移操作或旋转操作,也可以直接对第二分割影像进行平移操作或旋转操作,进而得到刚体配准矩阵。
或者,若获取到所述配准元素在所述第一分割影像和第二分割影像中的位置,则根据特征点对包括的各特征点在各自图像的位置,以及对所述第一分割影像或所述第二分割影像的平移操作或旋转操作,得到刚体配准矩阵。
上述半自动调整方式中,如果配准元素为预先设置的,边界比较光滑且没有明显解剖特征点的人体组织或器官,比如前列腺、胆囊等,可以默认推荐半自动调整方式。此时,用户可以在2D/3D形式的第一分割影像和第二分割影像上选择一对或多对相互对应的解剖结构的对应特征点对,根据特征点对,自动计算刚体配准矩阵并自动对齐,提示用户确认。用户确认后,还可以进行手动的微调,手动微调的方式参见(1)手动调整。
在一些实施例中,获取所述配准元素在第一分割影像和第二分割影像的特征点,得到特征点对时,可以包括如下步骤:利用预设的特征提取算法,对第一分割影像和第二分割影像进行配准元素的特征点提取;若预设的特征提取算法提取到相应的特征点,则形成特征点对;若预设的特征提取算法未提取到相应的特征点,则根据对第一分割影像和所述第二分割影像的特征点选取操作,形成特征点对。
本实施例中,直接将第一分割影像和第二分割影像输入预设的特征提取算法,进行特征点的提取,如果预设的特征提取算法未能提取出特征点,则由用户手动选取特征点,根据用户选取的特征点,形成特征点对。
在一些实施例中,获取所述配准元素在第一分割影像和第二分割影像的特征点,得到特征点对时,还可以包括如下步骤:若配准元素为适用特征提取算法的器官,则利用特征提取算法,对第一分割影像和第二分割影像进行配准元素的特征点提取;若配准元素不为适用特征提取算法的器官,则根据对第一分割影像和所述第二分割影像的特征点选取操作,形成特征点对。
本实施例中,在将第一分割影像和第二分割影像输入预设的特征点提取算法前,先判断配准元素是否为适用特征点提取算法的器官,如果是,则将第一分割影像和第二分割影像输入特征点提取算法中,否则,由用户手动选取特征点。
在执行步骤2510之前,可以对一组未知空间差异的第一分割影像和第二分割影像进行初始配准,例如,对第一分割影像和第二分割影像进行刚性自动配准和弹性自动配准,进而得到第一分割影像和第二分割影像之间关于初始配准的配准误差。
在此情况下,获取第一分割影像和第二分割影像之间的配准误差,具体包括如下步骤:对第一分割影像和第二分割影像进行初始配准,获取分割流程所用的分割关键器官在初始配准后的第一分割影像所处的区域、以及在初始配准后的第二分割影像所处的区域;基于区域之间的重合度,得到第一分割影像和第二分割影像之间的配准误差。
术前和配准(包括初始配准)前会进行分割流程,分割流程中,会对医学图像进行器官的分割,这些被分割的器官被称为分割关键器官。一些术式中,分割流程会固定分割术式的目标器官,例如在胸部术式中,肺是该术式的目标器官,因此,分割流程会固定分割肺。由此,一些场景中,分割关键器官可以包括手术的目标器官。
示例性地,在胸部术式中,分割关键器官包括肺为例介绍:对第一分割影像和第二分割影像进行刚性自动配准和弹性自动配准,完成初始配准之后,可以确定肺在初始配准后的第一分割影像所处的区域、以及在初始配准后的第二分割影像所处的区域,接着,根据肺在初始配准后的这两张图像的区域之间的重合度,得到配准误差。其中,重合度越高,配准误差越小。
更具体地,基于区域之间的重合度,得到第一分割影像和第二分割影像之间的配准误差,可以包括如下步骤:将分割关键器官在初始配准后的第一分割影像所处的区域,作为第一区域;将分割关键器官在初始配准后的第二分割影像所处的区域,作为第二区域;确定第一区域和第二区域的相交区域,并根据位于所述第一区域的像素数量与位于第二区域的像素数量之间的和值,得到总像素数量;根据相交区域的像素数量在总像素数量中的占比,得到区域之间的重合度。
记配准误差为e,得到配准误差的公式为:$e = 1 - \frac{2M_{r\cap m}}{M_r + M_m}$,其中,$M_r$表示位于第一区域的像素数量;$M_m$表示位于第二区域的像素数量;$M_{r\cap m}$表示位于相交区域的像素数量。
当配准误差e大于预设阈值时,可以确认配准误差未满足预设条件,此时,可以进行优化配准流程。另外,当配准误差大于预设阈值时,还可以提示用户“当前的配准结果误差较大,建议优化配准提升配准效果”。
其中,预设阈值可以在预设区间里选择,该预设区间可以是左开右闭的,例如(0,0.10],在该预设区间里,可以设置0.01、0.02、0.03、0.04、0.05、0.06、0.07、0.08、0.09、0.10这10个值,供用户选择。
上述处理中,通过计算分割关键器官在图像所处区域的重合度,得到配准误差,量化配准效果,以便进行优化配准。
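与上文重建的配准误差公式相对应,其量化计算可示意如下(公式形式由上下文推断,阈值取值仅为示例):

```python
import numpy as np

def registration_error(region_r: np.ndarray, region_m: np.ndarray) -> float:
    """基于分割关键器官所处区域的重合度计算配准误差:
    e = 1 - 2*M_{r∩m} / (M_r + M_m),重合度越高误差越小。"""
    r = region_r.astype(bool)
    m = region_m.astype(bool)
    m_r = r.sum()                 # 第一区域像素数量
    m_m = m.sum()                 # 第二区域像素数量
    m_inter = (r & m).sum()       # 相交区域像素数量
    return 1.0 - 2.0 * m_inter / (m_r + m_m)

# 当 e 大于预设阈值(例如 0.05)时,可提示用户进入优化配准流程
```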
在一些实施例中,通过配准矩阵调整流程确定的刚体配准矩阵,对第一分割影像和第二分割影像进行优化配准,具体可以包括如下步骤:将通过配准矩阵调整流程确定的刚体配准矩阵,输入弹性自动配准算法中,利用输入后得到的弹性自动配准算法,对第一分割影像和第二分割影像进行优化配准;若优化配准后得到的配准误差未满足预设条件,则继续进行优化配准,直至优化配准后的配准误差满足预设条件。
本实施例中,在得到通过刚体配准矩阵后,将该刚体配准矩阵输入弹性自动配准算法中,利用输入后得到的弹性自动配准算法,对第一分割影像和第二分割影像进行优化配准;若优化配准后得到的配准误差未满足预设条件,则可以再通过配准矩阵调整流程得到新的刚体配准矩阵,并继续进行优化配准,直至优化配准后的配准误差满足预设条件。
在一些实施例中,本申请提供的方法还包括如下步骤:在融合显示界面上融合显示所述第一分割影像和所述第二分割影像,在所述融合显示界面显示向上平移控件、向下平移控件、向左平移控件和向右平移控件中的至少一个;基于对至少一个所述平移控件的选中操作,得到对所述第一分割影像或所述第二分割影像的平移操作。
在得到第一分割影像和第二分割影像之后,可以按照原始空间位姿进行融合显示;显示方式包括2D融合显示和3D融合显示。
2D融合可以显示第一分割影像和第二分割影像的原始叠加效果,为区分第一分割影像和第二分割影像,第二分割影像可采用伪彩形式显示。3D融合显示可直观显示分割关键器官的空间对齐效果,如图26所示。图26是根据本说明书一些实施例所示的关键器官的空间对齐效果的示意图。
其中,在融合显示时,可以利用预设的融合方式,将第一分割影像和第二分割影像融合显示在显示屏上;预设的融合方式包括:上下层融合方式、分割线融合方式和棋盘格融合方式中的至少一种。
其中,各种融合方式的功能描述可以参照表1。
表1
在融合显示界面上融合显示第一分割影像和第二分割影像之后,可以显示向上平移控件、向下平移控件、向左平移控件和向右平移控件中的至少一个;当用户选中相应的平移控件时,可以按该平移控件对应的方向以及距离,平移第一分割影像或第二分割影像。
在一些实施例中,本说明书实施例提供的方法还包括如下步骤:在融合显示界面上融合显示所述第一分割影像和所述第二分割影像,在所述融合显示界面显示逆时针旋转控件和顺时针旋转控件中的至少一个;基于对至少一个所述旋转控件的选中操作,得到对所述第一分割影像或所述第二分割影像的旋转操作。
在一些实施例中,本说明书实施例提供的方法还可以包括如下步骤:在融合显示界面上融合显示所述第一分割影像和所述第二分割影像,在所述融合显示界面显示圆环区域;基于对所述圆环区域的拖动旋转操作,得到对所述第一分割影像或所述第二分割影像的旋转操作。
其中,圆环区域参照图27E所示的灰色圆环区域。
无论是基于旋转控件还是基于圆环区域,都是根据旋转中心进行的旋转,旋转中心的设置具体包括如下步骤:在图像上的预设位置显示旋转中心图标;所述图像为第一分割影像或第二分割影像;当光标移动至所述预设位置处,且鼠标被按住拖动时,调整光标位置,以调整所述旋转中心图标的位置;将鼠标被松开时的光标所处位置,作为所述旋转中心图标的调整后位置;以所述旋转中心图标的调整后位置为旋转中心,基于对所述圆环区域的拖动旋转操作,得到对所述第一分割影像或所述第二分割影像的旋转操作。
关于各种平移控件、旋转控件和圆环区域等的描述,可以参照表2。

表2
为了更好地理解上述方法,以下以胸部介入手术为例,详细阐述一个本申请医学图像的配准优化方法的应用实例。
介入手术机器人系统在胸部介入手术过程中,一般会进行术前CT增强扫描和术中CT平扫。术前CT增强扫描得到的术前图像用于提供器官、血管及肿瘤等相关的解剖信息,术中CT平扫得到的术中图像用于介入手术过程中路径规划和手术导航。为保证术中导航的精确性和安全性,需要将术前获得的解剖信息配准到术中图像中。
但是,如下3个因素导致基于配准技术的配准效果达不到路径规划和术中导航的要求:1.术前图像做过造影增强,当与术中图像进行配准时,存在配准的信息不对称;2.术前图像用于医生诊断,扫描范围较大,而术中图像聚焦于穿刺部位,扫描范围较小,存在配准的范围不匹配;3.环境和病人身体变化以及呼吸运动带来的软组织漂移等影响,术前图像和术中图像的空间差异较大。
基于此,本应用实施例中,将术前图像作为第一分割影像,术中图像作为第二分割影像,将一组未知空间差异的第一分割影像和第二分割影像进行刚性自动配准和弹性自动配准,完成初始配准;根据分割关键器官在初始配准后的第一分割影像和初始配准后的第二分割影像所处的区域,得到配准误差,量化初始配准结果是否达到用户要求。若初始配准结果不满足用户要求时,提供3种方式得到刚体配准矩阵,以进行优化配准。同时进行2D图像融合显示和3D图像融合显示便于用户实时查看优化配准的效果。用户调整确认后,得到刚体配准矩阵,作为弹性自动配准算法的输入,再次进行弹性自动配准,提高医学图像配准的精确性和鲁棒性。
本应用实施例主要包括如下部分:
1.医学图像的自动配准:获取第一分割影像和第二分割影像,将第一分割影像和第二分割影像进行刚性自动配准和弹性自动配准,完成初始的自动化配准,根据配准结果对第一分割影像进行处理得到配准后的图像。
2.配准结果显示及评估:对初始配准后的第一分割影像和第二分割影像进行融合显示,根据分割关键器官在第一分割影像和第二分割影像的位置,得到配准误差,量化初始配准效果,根据配准误差进行自动评估,或者,提示用户根据配准误差进行主观评估,确定当前配准效果是否满足用户要求。
3.智能推荐图像调整方向:当初始配准效果不满足用户要求时,用户可以通过2D/3D手动配准工具进行手动配准。根据分割关键器官在图像的质心位置,计算图像调整方向并以箭头的方式给出提示,同时在2D/3D融合显示窗口显示手动配准的对齐效果。其中3D手动配准的好处是相比于2D,在3D上进行手动配准的调整,可以给用户更好的空间感,能够更加准确的判断整体调整的方向,同时能够在3D空间上更好的显示手动配准的效果。
4.提供手动调整、半自动调整、自动调整3种方式,确定刚体配准矩阵。其中,手动调整方式为使用配准工具提供的平移和旋转按键,调整第一分割影像或第二分割影像的位姿;半自动调整方式为在2D/3D形式的第一分割影像和第二分割影像上,通过选择一对或多对相互对应的解剖结构的对应点对,调整第一分割影像或第二分割影像的位姿;自动调整方式为自动提取解剖结构的特征点对,调整第一分割影像或第二分割影像的位姿。
图28是根据本说明书一些实施例所示的另一医学图像的配准优化方法的流程示意图。以下结合图28,详细阐述上述各部分。
步骤2801,加载图像:加载第一分割影像和第二分割影像,并获取分割关键器官在第一分割影像和第二分割影像所处的区域;
步骤2802,刚性自动配准:对第一分割影像和第二分割影像进行基于灰度值配准算法的刚性自动配准,得到刚性配准结果;
步骤2803,弹性自动配准:对步骤2802得到的刚性配准结果中的第一分割影像和第二分割影像进行弹性配准,得到弹性配准结果;
在完成步骤2802和2803后,可以视为完成初始配准。
步骤2804,配准结果显示及评估:
4.1主观评估:对初始配准后的第一分割影像和初始配准后的第二分割影像分别以不同伪彩进行融合显示,根据分割关键器官在第一分割影像和第二分割影像的轮廓偏差,用户可主观评估初始配准的效果。
4.2量化评估:通过计算分割关键器官在图像所处区域的重合度,得到配准误差,量化配准效果,提示用户当前配准效果是否满足用户预设的阈值。
记配准误差为e,得到配准误差的公式为:$e = 1 - \frac{2M_{r\cap m}}{M_r + M_m}$,其中,$M_r$表示位于第一区域的像素数量;$M_m$表示位于第二区域的像素数量;$M_{r\cap m}$表示位于相交区域的像素数量。
当配准误差e大于预设阈值时,可以确认配准误差未满足预设条件,此时,可以进行优化配准流程。另外,当配准误差大于预设阈值时,还可以提示用户“当前的配准结果误差较大,建议优化配准提升配准效果”。
步骤2805,如果配准结果不满足用户要求,比如全自动的配准过程关注全局配准的效果,而用户可能更加关注分割关键器官的配准结果,导致全自动的配准算法不能满足用户的需求,提供刚性手动配准工具并提示配准图像调整方向,进入配准矩阵调整流程,详见步骤2806至2808。
步骤2806,融合显示:获取分割关键器官在第一分割影像和第二分割影像中的位置,按照原始空间位姿进行融合显示。显示方式包括2D融合显示和3D融合显示。2D融合可以显示第一分割影像和第二分割影像的原始叠加效果,为区分第一分割影像和第二分割影像,第二分割影像可采用伪彩形式显示。3D融合显示可直观显示关键器官的空间对齐效果。
步骤2807,观察分割关键器官在第一分割影像和第二分割影像的对齐效果。
步骤2808,2D和3D手动配准工具:提示图像调整方向,可以根据配准元素在3D形式的第一分割影像和第二分割影像的质心位置,计算图像移动方向,计算公式为 $d(x,y,z) = m_r(x,y,z) - m_m(x,y,z)$,并通过箭头提示用户在第一分割影像和第二分割影像上调整方向;其中,$m_r(x,y,z)$表示配准元素在第一分割影像的质心坐标;$m_m(x,y,z)$表示配准元素在第二分割影像的质心坐标。
通过配准矩阵调整流程确定刚体配准矩阵的方式包括(1)手动调整方式、(2)自动调整方式和(3)半自动调整方式:
(1)手动调整:
如果未获取到配准元素在第一分割影像和第二分割影像中的位置,则可以提示用户进行手动调整,对第一分割影像进行平移操作或旋转操作,或者,对第二分割影像进行平移操作或旋转操作。
此时,计算机设备可以提供2D/3D手动配准工具及2D/3D融合显示,并在2D/3D手动配准工具提示图像的调整方向,用户可根据提示的调整方向调整2D/3D形式的第一分割影像和/或第二分割影像,使2D/3D形式的第一分割影像和第二分割影像大致对齐。用户也可按任意方向调整2D/3D形式的第一分割影像和/或第二分割影像,以达到用户要求。在用户手动调整完之后,基于用户进行的平移操作或旋转操作,得到刚体配准矩阵。
(2)自动调整:
当配准元素组织为预先设置的,有明显解剖特征点的人体组织或器官,比如肝脏、肺、肾脏等,可以默认推荐自动调整方式。通过算法自动提取特征点对,用户可以在2D/3D界面(2D原始的CT扫描图像,3D重建后有分割结果的图像)上修改或确认已经提取的对应点对,然后自动计算刚体配准矩阵并自动对齐,提示用户确认。用户确认后,还可以进行手动的微调,手动微调的方式参见(1)手动调整。
(3)半自动调整:
如果配准元素为预先设置的,边界比较光滑且没有明显解剖特征点的人体组织或器官,比如前列腺、胆囊等,可以默认推荐半自动调整方式。此时,用户可以在2D/3D形式的第一分割影像和第二分割影像上选择一对或多对相互对应的解剖结构的对应特征点对,根据特征点对,自动计算刚体配准矩阵并自动对齐,提示用户确认。用户确认后,还可以进行手动的微调,手动微调的方式参见(1)手动调整。
步骤2809,将步骤2808得到的刚性配准矩阵输入至步骤2803的弹性自动配准算法,再次进行弹性自动配准;
步骤2810,重复步骤2803至步骤2809,直到配准效果满足用户要求,保存弹性配准结果,配准结束。
应该理解的是,虽然如上所述的各实施例所涉及的流程图中的各个步骤按照箭头的指示依次显示,但是这些步骤并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明,这些步骤的执行并没有严格的顺序限制,这些步骤可以以其它的顺序执行。而且,如上所述的各实施例所涉及的流程图中的至少一部分步骤可以包括多个步骤或者多个阶段,这些步骤或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,这些步骤或者阶段的执行顺序也不必然是依次进行,而是可以与其它步骤或者其它步骤中的步骤或者阶段的至少一部分轮流或者交替地执行。
图29是根据本说明书一些实施例所示的配准模块的结构框图,如图29所示,配准模块2420可以包括:配准误差获取模块2901、配准矩阵确定模块2902以及配准优化模块2903。
配准误差获取模块2901,用于获取第一分割影像和第二分割影像之间的配准误差;
配准矩阵确定模块2902,用于确定配准矩阵调整流程所用的配准元素;若获取到所述配准元素在所述第一分割影像和第二分割影像中的位置,则基于所述配准元素在所述第一分割影像和第二分割影像中的位置,得到刚体配准矩阵;若未获取到所述配准元素在所述第一分割影像和第二分割影像中的位置,根据对所述第一分割影像或所述第二分割影像的平移操作或旋转操作,得到刚体配准矩阵;
配准优化模块2903,用于若配准误差未满足预设条件,则通过配准矩阵调整流程确定的刚体配准矩阵,对第一分割影像和第二分割影像进行优化配准。
在一些实施例中,配准矩阵确定模块2902,还用于获取所述配准元素在第一分割影像和第二分割影像的特征点,得到特征点对;基于所述配准元素在所述第一分割影像和第二分割影像中的位置,得到所述特征点对包括的各特征点在各自图像的位置;基于所述特征点对包括的各特征点在各自图像的位置,得到刚体配准矩阵。
在一些实施例中,配准矩阵确定模块2902,还用于利用预设的特征提取算法,对第一分割影像和第二分割影像进行配准元素的特征点提取;若预设的特征提取算法提取到相应的特征点,则形成特征点对;若预设的特征提取算法未提取到相应的特征点,则根据对第一分割影像和所述第二分割影像的特征点选取操作,形成特征点对。
在一些实施例中,配准矩阵确定模块2902,还用于若配准元素为适用特征提取算法的器官,则利用特征提取算法,对第一分割影像和第二分割影像进行配准元素的特征点提取;若配准元素不为适用特征提取算法的器官,则根据对第一分割影像和所述第二分割影像的特征点选取操作,形成特征点对。
在一些实施例中,配准矩阵确定模块2902,还用于若获取到配准元素在第一分割影像和第二分割影像中的位置,则确定配准元素在第一分割影像的第一质心位置、以及在第二分割影像的第二质心位置,根据第一质心位置和第二质心位置之间的相对位置关系,在融合显示界面上提示图像移动方向,根据对第一分割影像或第二分割影像的平移操作或旋转操作,得到刚体配准矩阵;
或者,若获取到配准元素在第一分割影像和第二分割影像中的位置,则根据对第一分割影像或第二分割影像的平移操作或旋转操作,得到刚体配准矩阵;
或者,若获取到配准元素在第一分割影像和第二分割影像中的位置,则根据特征点对包括的各特征点在各自图像的位置,以及对第一分割影像或第二分割影像的平移操作或旋转操作,得到刚体配准矩阵。
在一些实施例中,配准误差获取模块2901,还用于对第一分割影像和第二分割影像进行初始配准,获取分割流程所用的分割关键器官在初始配准后的第一分割影像所处的区域、以及在初始配准后的第二分割影像所处的区域;基于区域之间的重合度,得到第一分割影像和第二分割影像之间的配准误差。
在一些实施例中,配准优化模块2903,还用于将通过配准矩阵调整流程确定的刚体配准矩阵,输入弹性自动配准算法中,利用输入后得到的弹性自动配准算法,对第一分割影像和第二分割影像进行优化配准;若优化配准后得到的配准误差未满足预设条件,则继续进行优化配准,直至优化配准后的配准误差满足预设条件。
关于配准模块的具体限定可以参见上文中对于医学图像的配准优化方法的限定,在此不再赘述。上述配准模块中的各个模块可全部或部分通过软件、硬件及其组合来实现。上述各模块可以硬件形式内嵌于或独立于计算机设备中的处理器中,也可以以软件形式存储于计算机设备中的存储器中,以便于处理器调用执行以上各个模块对应的操作。
图30是根据本说明书一些实施例所示的示例性穿刺路径规划方法的流程图。在一些实施例中,流程3000可以由处理设备140执行。例如,流程3000可以以程序或指令的形式存储在存储设备(例如,存储设备150、处理设备140的存储单元)中,当处理器执行程序或指令时,可以实现流程3000。在一些实施例中,流程3000的穿刺路径规划方法可以由图像处理系统执行,图像处理系统可以包括:显示设备,用于显示穿刺路径,显示手动方式、自动方式、半自动方式中的至少一个选项;以及控制系统,控制系统包括一个或多个处理器和存储器,存储器可以包括适于致使一个或多个处理器执行流程3000的各个步骤的指令。在一些实施例中,流程3000可以利用以下未描述的一个或以上附加操作,和/或不通过以下所讨论的一个或以上操作完成。另外,如图30所示的操作的顺序并非限制性的。
步骤3010,基于配准后的第二医学影像中相应元素的空间位置确定靶点。在一些实施例中,该步骤可以由靶点确定模块3710执行。
靶点(T点)指与靶区相关的位置点。在一些实施例中,靶点可以是穿刺器械(例如,活检针)穿透皮肤到达的体内位置、微创手术发生的位置等。在一些实施例中,靶点可以是待处理病灶或待检测区域的中心点或接近中心位置的点。在一些实施例中,靶点可以是待处理病灶或待检测区域内的任意位置点。在一些实施例中,靶点可以是待处理病灶或待检测区域内满足预设条件的位置点,例如,灰度值最大的点、灰度值最小的点、灰度值大于灰度阈值的点、灰度值小于灰度阈值的点等。
在一些实施例中,处理设备140可以基于配准后的扫描对象的第二医学影像(为便于描述,后文也称为目标图像)自动确定靶点。例如,处理设备140可以自动将可以将待处理病灶或待检测区域的中心作为靶点。又例如,处理设备140可以自动判断预设条件并确定相应的位置点作为靶点。
在一些实施例中,处理设备140可以基于用户操作确定靶点。在一些实施例中,用户操作可以包括点击(例如,鼠标点击、感应笔点击、触摸屏点触等)操作、感应操作(例如,手势感应、声音感应等)、用户指令(例如,代码指令、语音指令等)等或其任意组合。
在一些实施例中,处理设备140可以基于用户操作,确定候选靶点。例如,用户可以通过显示界面点击(例如,鼠标点击)目标图像中的位置点。相应地,基于用户的点击操作,处理设备140可以将用户点击的位置确定为候选靶点。
在一些实施例中,处理设备140可以进一步判断候选靶点是否满足第一预设条件。在一些实施例中,第一预设条件可以包括靶点处于靶区内、靶点位于靶区中心点、靶点没有位于血管内等或其任意组合。
响应于候选靶点不满足第一预设条件,处理设备140可以提供第一提示。在一些实施例中,第一提示可以以文字、符号、图片、音频、视频等方式提供。例如,如图32所示,在显示界面提示“T点未处于靶区内!”。进一步地,处理设备140可以重新执行上述过程直到确定满足第一预设条件的靶点。
响应于候选靶点满足第一预设条件,处理设备140可以将候选靶点确定为最终可用靶点。
步骤3020,基于用户与靶点相关的操作,确定参考路径。在一些实施例中,该步骤可以由参考路径确定模块3720执行。
在一些实施例中,处理设备140可以以半自动方式,确定参考路径。在一些实施例中,参考路径的终点可以为开放式。“开放式”可以指对终点没有限制,可以是目标图像内的任意位置点。
在一些实施例中,用户与靶点相关的操作可以包括以靶点为起点的拖拽操作。在一些实施例中,拖拽操作形成的线段或射线即为参考路径。在一些实施例中,拖拽方向可以是任意方向。在一些实施例中,拖拽长度可以是目标图像内的任意长度。相应地,拖拽操作的终点可以是目标图像内的任意位置点。例如,如图33所示,3310、3320、3330、3340和3350分别表示不同拖拽方向或不同拖拽长度所形成的不同的参考路径。
在一些实施例中,处理设备140可以在拖拽操作过程中同步显示参考路径。例如,在用户拖拽过程中,显示界面上可以同步显示拖拽操作形成的线段或射线。例如,如图33所示,图中“+” 号表示当前光标位置点(即用户拖拽操作的实时终点),此时显示界面上同步显示了对应的参考路径3350。
在一些实施例中,处理设备140可以通过其他方式确定参考路径。例如,手动方式。在一些实施例中,用户可以分别在显示界面设置参考路径的起点和终点。相应地,处理设备140可以将二者间的连线确定为参考路径。
又例如,处理设备140还可以自动方式确定参考路径,以及目标路径。
首先,根据靶区的轮廓自动确定靶点,例如,自动计算靶点坐标。
随后,在皮肤表面自动确定至少一个候选进针点,对于至少一个候选进针点中的每一个,将所述候选进针点与所述靶点相连,构成候选路径,确定候选路径与危险区(例如,不可介入区域)的距离。即将各个候选进针点和靶点进行相连,形成N条候选路径;依次计算出每条候选路径与各个危险区的距离。
最后,基于候选路径的深度、候选路径的角度、候选路径与危险区的距离,确定目标路径。例如,综合判断候选路径的深度是否满足皮靶距的限制、角度是否在预设范围内、以及候选路径与危险区的距离是否低于预设的安全距离等,确定最终的目标路径。
步骤3030,基于参考路径确定目标路径。在一些实施例中,该步骤可以由目标路径确定模块3730执行。
在一些实施例中,处理设备140可以基于参考路径自动确定候选进针点,候选进针点位于皮肤交界。在一些实施例中,处理设备140可以将参考路径(或其延长线)与皮肤的交界点作为候选进针点。例如,如图34所示,处理设备140分别基于参考路径3310、3320、3330、3340和3350,确定相应的候选进针点3410、3420、3430、3440和3450。
结合上文所述,参考路径是用户的拖拽操作形成的线段或射线,且拖拽过程中同步显示参考路径。相应地,基于参考路径自动确定候选进针点可以理解为在用户的拖拽操作结束时(例如,用户拖拽操作松手的瞬间)自动确定线段或射线与皮肤的交界点。
在一些实施例中,处理设备140可以基于靶点和候选进针点,确定候选路径。例如,将候选进针点和靶点连接而成的线段作为候选路径。进一步地,处理设备140可以基于候选路径确定目标路径。
在一些实施例中,处理设备140可以确定候选进针点和靶点间的距离(后文简称“皮靶距离”)和/或角度,并判断距离和/或角度是否满足第二预设条件。角度可以指靶点T和进针点E形成的TE向量与横断位Y轴正方向的夹角。在一些实施例中,角度可以是正角度、负角度、锐角、钝角等情况。例如,一、四象限的TE向量角度为正值,二、三象限的针道角度为负值,一、二象限的针道角度为锐角,三、四象限的针道角度为钝角。第二预设条件可以包括皮靶距离的预设范围、角度的预设范围等或其任意组合。例如,皮靶距离的预设范围为2cm-12cm,角度的预设范围为10度-80度。在一些实施例中,第二预设条件可以根据经验或需求设定。例如,不同类型的靶区、不同的目标对象等可以对应不同的第二预设条件。在一些实施例中,第二预设条件可以由用户设定,例如,用户输入穿刺器械(例如,活检针)的长度、皮靶距离的预设范围等。
响应于距离或角度中的至少一个不满足第二预设条件,处理设备140可以提供第二提示。在一些实施例中,第二提示可以以文字、符号、图片、音频、视频等方式提供。例如,如图35所示,当皮靶距离过大,与穿刺器械长度不符,无法进行穿刺时,在显示界面显示“长度大于最大阈值,无法执行穿刺”。进一步地,处理设备140可以重新执行上述过程直到确定满足第二预设条件的候选路径。
响应于距离和角度均满足第二预设条件,在一些实施例中,处理设备140还可以进一步判断候选路径是否满足第三预设条件。在一些实施例中,第三预设条件可以包括候选路径与危险区的距离(后文简称“危险距离”)大于距离阈值。例如,危险距离大于2mm。在一些实施例中,第三预设条件可以根据经验或需求设定。例如,不同类型的危险区、不同的目标对象、不同的应用场景等可以对应不同的第三预设条件。示例的,针对DBS手术,危险距离为1mm;对于SEEG手术,危险距离为1.6mm;对于穿刺活检系统危险距离为2mm。
响应于候选路径不满足第三预设条件,处理设备140可以提供第三提示。在一些实施例中,第三提示可以以文字、符号、图片、音频、视频等方式提供。例如,如图36所示,提示“针道1与骨骼组织发生干涉,请调整路径”。进一步地,处理设备140可以重新执行上述过程直到确定满足第三预设条件的候选路径。
响应于候选路径满足第三预设条件,处理设备140可以进一步基于候选路径确定目标路径。
在一些实施例中,处理设备140可以从同时满足第二预设条件和第三预设条件的候选路径中随机选择其中一条确定目标路径。在一些实施例中,处理设备140可以基于筛选条件,从同时满足第二预设条件和第三预设条件的候选路径中选择满足筛选条件的候选路径作为目标路径。在一些实施例中,筛选条件可以包括安全距离大于第一阈值(或安全距离最大)、皮靶距小于第二阈值(或皮靶距最小)、角度大于第三阈值(或角度最大)等或其任意组合。
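候选路径的多维度校验与筛选可以示意性地实现如下(皮靶距范围、角度范围、安全距离阈值以及角度的计算方式均按上文描述做了简化假设):

```python
import numpy as np

def select_target_path(target, entries, danger_dist_fn,
                       depth_range=(20.0, 120.0), angle_range=(10.0, 80.0),
                       safe_dist=2.0):
    """target: 靶点坐标 (x, y, z);entries: 候选进针点列表;
    danger_dist_fn(entry, target): 返回该候选路径与危险区的最小距离。
    依次校验皮靶距、角度与危险距离,并在满足条件的路径中取危险距离最大者。"""
    best, best_margin = None, -np.inf
    for e in entries:
        te = np.asarray(e, float) - np.asarray(target, float)  # TE向量
        depth = np.linalg.norm(te)                             # 皮靶距
        # 角度:TE向量与横断位Y轴正方向的夹角(简化计算)
        cos_a = te[1] / depth
        angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
        d = danger_dist_fn(e, target)                          # 危险距离
        if (depth_range[0] <= depth <= depth_range[1]
                and angle_range[0] <= angle <= angle_range[1]
                and d > safe_dist and d > best_margin):
            best, best_margin = e, d
        # 不满足条件时,可在显示界面给出对应的第二/第三提示
    return best
```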
在一些实施例中,处理设备140还可以基于用户的反馈,调整目标路径。例如,用户(例如,医生)可以根据经验微调目标路径的角度、微调进针点的位置等。
应当注意的是,上述有关流程3000的描述仅仅是为了示例和说明,而不限定本说明书的适用范围。对于本领域技术人员来说,在本说明书的指导下可以对流程3000进行各种修正和改变。然而,这些修正和改变仍在本说明书的范围之内。例如,在流程3000中,获取目标对象的目标图像的同时,可以获取目标对象的病史等。又例如,基于第二预设条件的判断步骤和第三预设条件的判断步骤的执行顺序并非限制性的,例如,可以同时执行,可以先执行基于第二预设条件的判断步骤,再执行基于第三预设条件的判断步骤,或者先执行基于第三预设条件的判断步骤,再执行基于第二预设条件的判断步骤。再例如,介入手术的图像处理系统100可以在显示界面(例如,终端设备130的显示界面)提供手动方式、自动方式、半自动方式等选项供用户选择。
图31是根据本说明书一些实施例所示的另一示例性穿刺路径规划方法的流程图。在一些实施例中,流程3100可以由处理设备140执行。例如,流程3100可以以程序或指令的形式存储在存储设备(例如,存储设备150、处理设备140的存储单元)中,当处理器执行程序或指令时,可以实现流程3100。在一些实施例中,流程3100可以利用以下未描述的一个或以上附加操作,和/或不通过以下所讨论的一个或以上操作完成。另外,如图31所示的操作的顺序并非限制性的。
步骤3102,获取目标对象的参考图像。在一些实施例中,目标对象可以是指扫描对象。参考图像可以是第一阶段中获得的医学图像。例如,参考图像可以包括第一医学影像。
步骤3104,对目标对象的参考图像进行图像分割。例如,基于图像分割算法,分割并分别标识出靶区、可穿区和/或危险区。
步骤3106,获取目标对象的当前图像。在一些实施例中,当前图像可以是第二阶段中获得的医学图像。例如,当前图像可以包括第二医学影像。
步骤3108,融合当前图像和参考图像以确定目标图像。
步骤3110,在目标图像中确定靶点,并基于用户与靶点的相关操作,绘制参考路径。例如,用户以靶点为起点,执行拖拽操作,拖拽操作过程中,同步显示参考路径。在一些实施例中,还可以判断初始确定的候选靶点是否满足靶点条件(例如,前文所述的第一预设条件)以确定最终的靶点。
步骤3112,自动确定候选进针点。例如,自动将拖拽操作所形成的参考路径与皮肤的交界点作为候选进针点。获取候选进针点和靶点构成的候选路径。执行步骤3114和步骤3118,在一些实施例中,步骤3114和步骤3118可以并行或串行,本说明书对此不作限制。
步骤3114,判断该候选路径是否满足安全距离条件(例如,上文所述的第三预设条件)。
步骤3116,若该候选路径不满足安全距离条件,可以提示调整靶点或提示重新绘制参考路径。
步骤3118,判断该候选路径是否满足皮靶距条件和/或角度条件(例如,上文所述的第二预设条件)。
若候选路径满足安全距离条件,并且该候选路径满足皮靶距条件和/或角度条件,将该候选路径作为目标路径。
步骤3120,若该候选路径不满足第二预设条件,可以提示调整靶点或提示重新绘制参考路径。
需要说明的是,本说明书实施例中提供的影像分割方法、配准方法等不应局限于术前处理或术中处理。例如,本文所述的分割方法可以对第一医学影像以及第二医学影像进行分割,以得到对应的分割影像。又例如,本所文所述的配准方法可以对第一分割影像和第二分割影像进行配准;或者,也可以对第一医学影像和第二医学影像进行配准,再或者,还可以对两个第一医学影像进行配准。
本说明书提供一种用于介入手术的图像处理装置,包括处理器,处理器用于执行所述的图像处理方法。
本说明书提供一种计算机可读存储介质,存储介质存储计算机指令,当计算机读取存储介质中的计算机指令后,计算机执行所述的图像处理方法。
本说明书实施例提供的用于介入手术的图像处理方法、系统、装置及计算机可读存储介质,至少具有以下有益效果:
(1)通过将不同部位的介入手术的规划集成到同一系统中,并支持多期相的医学影像的分割和融合映射,可以使得医生在进行不同部位介入手术的术前规划时无需切换应用,只需加载对应部位的数据即可,降低了学习成本和规划风险;
(2)通过将术前离线规划阶段得到的结果保存为书签,并在术前规划阶段加载书签还原分割结果,在此基础上快速完成术前准备,进入手术阶段,可以有效提升术前规划的容错率,降低病人的等待时间,从而提升手术的安全性;
(3)提供了组织类别标定功能和优先级刷新功能,从而保证介入手术的安全稳定性;
(4)通过对多期相的医学影像(例如,第一医学影像)进行自动识别,根据自动识别结果来生成预推荐增强影像,可以使血管(或病灶、器官)分割结果更加精细和准确,从而提高介入手术的安全稳定性;
(5)在分割过程中采用结合深度学习的由粗到细优化分割方法,通过精准定位为精准分割提供支持,提高了分割效率和图像处理鲁棒性;
(6)通过在粗分割阶段采用软连通域分析方法,准确保留目标结构集区域的同时,有效排除了假阳性区域,不仅提高了粗定位阶段对元素定位的准确率,还直接有助于后续合理提取元素掩膜定位信息的边界框,从而提升了分割效率;
(7)针对粗分割阶段粗定位失准但未失效的不利情况,利用自适应的滑窗计算及相应滑窗操作,能够补全定位区域的缺失部分,并能自动规划及执行合理的滑窗操作,降低了精分割阶段对于粗定位结果的依赖性,在保持分割时间和计算资源无明显增加的前提下,提高了分割准确率;
(8)综合第一医学影像、第二医学影像的各自优势,设置快速分割模式和精准分割模式(精准分割模式仅针对手术中扫描影像),根据选定分割模式的不同,确定不同的术前规划方案,快速分割模式下,规划速度快且时间短;精准模式下,规划方案的选择性更多,鲁棒性更强,在提供较强处理适用性的同时还能保障系统稳定性及安全性,使得术前规划能够达到较高的精准度,以便更好地辅助手术中准确地实施相应穿刺路径,能够获得更理想的手术效果;
(9)通过配准矩阵调整流程对第一分割影像和第二分割影像优化配准的处理,可以提高配准结果的准确性;
(10)根据是否获取到配准矩阵调整流程所用的配准关键器官在图像中的位置,采取不同的策略(如手动调整、自动调整、半自动调整等),得到刚体配准矩阵,适用性更高;
(11)基于用户的拖拽操作绘制终点开放式的线段或射线,进而系统自动确定入针点,使用户操作更灵活,同时减少用户交互次数,提高效率;
(12)在路径规划过程中实时提供各种提示,提高规划效率;
(13)通过皮靶距、路径角度及路径与危险组织间的距离等多维度的监控和判断,提高路径规划的准确性和安全性。
上文已对基本概念做了描述,显然,对于本领域技术人员来说,上述详细披露仅仅作为示例,而并不构成对本申请的限定。虽然此处并没有明确说明,本领域技术人员可能会对本申请进行各种修改、改进和修正。该类修改、改进和修正在本申请中被建议,所以该类修改、改进、修正仍属于本申请示范实施例的精神和范围。
同时,本申请使用了特定词语来描述本申请的实施例。如“一个实施例”、“一实施例”、和/或“一些实施例”意指与本申请至少一个实施例相关的某一特征、结构或特点。因此,应强调并注意的是,本说明书中在不同位置两次或多次提及的“一实施例”或“一个实施例”或“一个替代性实施例”并不一定是指同一实施例。此外,本申请的一个或多个实施例中的某些特征、结构或特点可以进行适当的组合。
此外,本领域技术人员可以理解,本申请的各方面可以通过若干具有可专利性的种类或情况进行说明和描述,包括任何新的和有用的工序、机器、产品或物质的组合,或对他们的任何新的和有用的改进。相应地,本申请的各个方面可以完全由硬件执行、可以完全由软件(包括固件、常驻软件、微码等)执行、也可以由硬件和软件组合执行。以上硬件或软件均可被称为“数据块”、“模块”、“引擎”、“单元”、“组件”或“系统”。此外,本申请的各方面可能表现为位于一个或多个计算机可读介质中的计算机产品,该产品包括计算机可读程序编码。
计算机存储介质可能包含一个内含有计算机程序编码的传播数据信号,例如在基带上或作为载波的一部分。该传播信号可能有多种表现形式,包括电磁形式、光形式等,或合适的组合形式。计算机存储介质可以是除计算机可读存储介质之外的任何计算机可读介质,该介质可以通过连接至一个指令执行系统、装置或设备以实现通讯、传播或传输供使用的程序。位于计算机存储介质上的程序编码可以通过任何合适的介质进行传播,包括无线电、电缆、光纤电缆、RF、或类似介质,或任何上述介质的组合。
本申请各部分操作所需的计算机程序编码可以用任意一种或多种程序语言编写,包括面向对象编程语言如Java、Scala、Smalltalk、Eiffel、JADE、Emerald、C++、C#、VB.NET、Python等,常规程序化编程语言如C语言、Visual Basic、Fortran 2003、Perl、COBOL 2002、PHP、ABAP,动态编程语言如Python、Ruby和Groovy,或其他编程语言等。该程序编码可以完全在用户计算机上运行、或作为独立的软件包在用户计算机上运行、或部分在用户计算机上运行部分在远程计算机运行、或完全在远程计算机或处理设备上运行。在后种情况下,远程计算机可以通过任何网络形式与用户计算机连接,比如局域网(LAN)或广域网(WAN),或连接至外部计算机(例如通过因特网),或在云计算环境中,或作为服务使用如软件即服务(SaaS)。
此外,除非权利要求中明确说明,本申请所述处理元素和序列的顺序、数字字母的使用、或其他名称的使用,并非用于限定本申请流程和方法的顺序。尽管上述披露中通过各种示例讨论了一些目前认为有用的发明实施例,但应当理解的是,该类细节仅起到说明的目的,附加的权利要求并不仅限于披露的实施例,相反,权利要求旨在覆盖所有符合本申请实施例实质和范围的修正和等价组合。例如,虽然以上所描述的系统组件可以通过硬件设备实现,但是也可以只通过软件的解决方案得以实现,如在现有的处理设备或移动设备上安装所描述的系统。
同理,应当注意的是,为了简化本申请披露的表述,从而帮助对一个或多个发明实施例的理解,前文对本申请实施例的描述中,有时会将多种特征归并至一个实施例、附图或对其的描述中。但是,这种披露方法并不意味着本申请对象所需要的特征比权利要求中提及的特征多。实际上,实施例的特征要少于上述披露的单个实施例的全部特征。
一些实施例中使用了描述成分、属性数量的数字,应当理解的是,此类用于实施例描述的数字,在一些示例中使用了修饰词“大约”、“近似”或“大体上”来修饰。除非另外说明,“大约”、“近似”或“大体上”表明所述数字允许有±20%的变化。相应地,在一些实施例中,说明书和权利要求中使用的数值参数均为近似值,该近似值根据个别实施例所需特点可以发生改变。在一些实施例中,数值参数应考虑规定的有效数位并采用一般位数保留的方法。尽管本申请一些实施例中用于确认其范围广度的数值域和参数为近似值,在具体实施例中,此类数值的设定在可行范围内尽可能精确。
针对本申请引用的每个专利、专利申请、专利申请公开物和其他材料,如文章、书籍、说明书、出版物、文档等,特此将其全部内容并入本申请作为参考。与本申请内容不一致或产生冲突的申请历史文件除外,对本申请权利要求最广范围有限制的文件(当前或之后附加于本申请中的)也除外。需要说明的是,如果本申请附属材料中的描述、定义、和/或术语的使用与本申请所述内容有不一致或冲突的地方,以本申请的描述、定义和/或术语的使用为准。
最后,应当理解的是,本申请中所述实施例仅用以说明本申请实施例的原则。其他的变形也可能属于本申请的范围。因此,作为示例而非限制,本申请实施例的替代配置可视为与本申请的教导一致。相应地,本申请的实施例不仅限于本申请明确介绍和描述的实施例。

Claims (50)

  1. 一种用于介入手术的图像处理方法,包括:分割阶段;所述分割阶段包括:
    获取多个第一医学影像,其中,至少有两个所述第一医学影像对应于同一扫描对象的不同时间点;
    基于多个所述第一医学影像,确定预推荐增强影像;
    获取操作指令,基于所述操作指令和所述预推荐增强影像对第一目标结构集中的至少部分元素进行分割,生成第一分割影像。
  2. 根据权利要求1所述的图像处理方法,其中,所述不同时间点至少包括两个不同期相。
  3. 根据权利要求1所述的图像处理方法,其中,所述基于多个所述第一医学影像,确定预推荐增强影像,包括:
    对多个所述第一医学影像进行自动识别;
    基于所述自动识别的结果,确定多个所述第一医学影像对应的期相。
  4. 根据权利要求3所述的图像处理方法,其中,还包括:基于所述第一目标结构集中的预设元素在不同期相的所述第一医学影像上的预分割效果,确定所述预推荐增强影像。
  5. 根据权利要求2所述的图像处理方法,其中,所述基于所述操作指令和所述预推荐增强影像对第一目标结构集中的至少部分元素进行分割,生成第一分割影像,包括:
    对至少两个所述不同期相对应的所述预推荐增强影像进行融合,获得第一融合影像;
    基于所述第一融合影像对所述第一目标结构集中的至少部分元素进行分割。
  6. 根据权利要求2~5所述的图像处理方法,其中,包括:
    利用第一轮廓绘制工具,在分割影像上绘制轮廓线;
    基于所述轮廓线,对目标结构集中的至少部分元素轮廓进行提取。
  7. 根据权利要求2~5所述的图像处理方法,其中,包括:
    利用第二轮廓绘制工具,在分割影像上绘制线段;
    基于所述线段,对目标结构集中的至少部分元素轮廓进行提取。
  8. 根据权利要求6或7所述的图像处理方法,其中,包括:利用轮廓修正工具对所述目标结构集中的至少部分元素轮廓进行修正。
  9. 根据权利要求2~5所述的图像处理方法,其中,包括:
    对目标结构集中的至少部分所述元素标注优先级;
    根据所述优先级的顺序,对所述元素的轮廓进行刷新显示。
  10. 根据权利要求1所述的图像处理方法,其中,还包括:将所述第一目标结构集中的至少部分元素的分割结果保存为书签。
  11. 根据权利要求10所述的图像处理方法,其中,还包括:加载书签,还原所述第一目标结构集中的至少部分元素的分割结果。
  12. 根据权利要求1所述的图像处理方法,其中,还包括配准阶段,所述配准阶段包括:
    获取第二医学影像;
    对所述第二医学影像中的第二目标结构集进行分割,得到第二分割影像,所述第一目标结构集与所述第二目标结构集有交集;
    对所述第一分割影像和所述第二分割影像进行配准,确定第三目标结构集的空间位置;
    其中,所述第三目标结构集中至少有一个元素包括在所述第一目标结构集中,所述第三目标结构集中至少有一个元素不包括在所述第二目标结构集中。
  13. 根据权利要求12所述的图像处理方法,其中,所述对所述第二医学影像中的第二目标结构集进行分割,包括:
    获取分割模式,所述分割模式包括快速分割模式和精准分割模式;
    基于所述分割模式,对所述第二医学影像中的第二目标结构集进行分割。
  14. 根据权利要求1或12所述的图像处理方法,其中,还包括介入路径规划阶段,所述介入路径规划阶段包括:
    基于所述第二医学影像中相应元素的空间位置确定靶点;
    基于用户与所述靶点相关的操作,确定参考路径,其中,所述参考路径的终点为开放式;
    基于所述参考路径确定目标路径。
  15. 根据权利要求14所述的图像处理方法,其中,所述基于所述第二医学影像中相应元素的空间位置确定靶点,包括:
    基于用户操作,确定候选靶点;
    判断所述候选靶点是否满足第一预设条件;
    响应于所述候选靶点不满足所述第一预设条件,提供第一提示。
  16. 根据权利要求14所述的图像处理方法,其中,所述基于用户与所述靶点相关的操作包括以所述靶点为起点的拖拽操作。
  17. 根据权利要求14所述的图像处理方法,其中,所述基于所述参考路径确定目标路径,包括:
    基于所述参考路径自动确定候选进针点,所述候选进针点位于所述第二医学影像中的皮肤表面区域;
    基于所述靶点和所述候选进针点,确定候选路径;
    基于所述候选路径确定所述目标路径。
  18. 一种用于介入手术的图像处理方法,包括:配准阶段,所述配准阶段包括:
    获取第一分割影像和第二分割影像之间的配准误差;
    若所述配准误差未满足预设条件,则通过配准矩阵调整流程确定的刚体配准矩阵,对所述第一分割影像和所述第二分割影像进行优化配准;
    其中,所述通过配准矩阵调整流程确定刚体配准矩阵,包括:
    确定所述配准矩阵调整流程所用的配准元素;
    若获取到所述配准元素在所述第一分割影像和所述第二分割影像中的位置,则基于所述配准元素在所述第一分割影像和所述第二分割影像中的位置,得到所述刚体配准矩阵;
    若未获取到所述配准元素在所述第一分割影像和所述第二分割影像中的位置,根据对所述第一分割影像或所述第二分割影像的平移操作或旋转操作,得到所述刚体配准矩阵。
  19. 根据权利要求18所述的图像处理方法,其中,所述若获取到所述配准元素在所述第一分割影像和所述第二分割影像中的位置,则基于所述配准元素在所述第一分割影像和所述第二分割影像中的位置,得到所述刚体配准矩阵,包括:
    获取所述配准元素在所述第一分割影像和所述第二分割影像的特征点,得到特征点对;
    基于所述配准元素在所述第一分割影像和所述第二分割影像中的位置,得到所述特征点对包括的各特征点在各自图像的位置;
    基于所述特征点对包括的各特征点在各自图像的位置,得到所述刚体配准矩阵。
  20. 根据权利要求19所述的图像处理方法,其中,获取所述配准元素在所述第一分割影像和所述第二分割影像的特征点,得到特征点对,包括:
    利用预设的特征提取算法,对所述第一分割影像和所述第二分割影像进行配准元素的特征点提取;
    若所述预设的特征提取算法提取到相应的特征点,则形成所述特征点对;
    若所述预设的特征提取算法未提取到相应的特征点,则根据对所述第一分割影像和所述第二分割影像的特征点选取操作,形成所述特征点对。
  21. 根据权利要求19所述的图像处理方法,其中,获取所述配准元素在所述第一分割影像和所述第二分割影像的特征点,得到特征点对,包括:
    若所述配准元素为适用特征提取算法的器官,则利用所述特征提取算法,对所述第一分割影像和所述第二分割影像进行所述配准元素的特征点提取;
    若所述配准元素不为适用特征提取算法的器官,则根据对所述第一分割影像和所述第二分割影像的特征点选取操作,形成所述特征点对。
  22. 根据权利要求18所述的图像处理方法,其中,在确定配准矩阵调整流程所用的配准元素之后,还包括:
    若获取到所述配准元素在所述第一分割影像和所述第二分割影像中的位置,则确定所述配准元素在所述第一分割影像的第一质心位置、以及在所述第二分割影像的第二质心位置,根据所述第一质心位置和所述第二质心位置之间的相对位置关系,在融合显示界面上提示图像移动方向,根据对所述第一分割影像或所述第二分割影像的平移操作或旋转操作,得到所述刚体配准矩阵;
    或者,若获取到所述配准元素在所述第一分割影像和所述第二分割影像中的位置,则根据对所述第一分割影像或所述第二分割影像的平移操作或旋转操作,得到所述刚体配准矩阵;
    或者,若获取到所述配准元素在所述第一分割影像和所述第二分割影像中的位置,则根据特征点对包括的各特征点在各自图像的位置,以及对所述第一分割影像或所述第二分割影像的平移操作或旋转操作,得到所述刚体配准矩阵。
  23. 根据权利要求18~22所述的图像处理方法,其中,获取所述第一分割影像和所述第二分割影像之间的配准误差,包括:
    对所述第一分割影像和所述第二分割影像进行初始配准,获取分割流程所用的分割关键器官在初始配准后的所述第一分割影像所处的区域、以及在初始配准后的所述第二分割影像所处的区域;
    基于区域之间的重合度,得到所述第一分割影像和所述第二分割影像之间的配准误差。
  24. 根据权利要求18所述的图像处理方法,其中,所述通过配准矩阵调整流程确定的刚体配准矩阵,对所述第一分割影像和所述第二分割影像进行优化配准,包括:
    将通过配准矩阵调整流程确定的刚体配准矩阵,输入弹性自动配准算法中,利用输入后得到的弹性自动配准算法,对所述第一分割影像和所述第二分割影像进行优化配准;若优化配准后得到的配准误差未满足预设条件,则继续进行优化配准,直至优化配准后的配准误差满足预设条件。
  25. 根据权利要求18所述的图像处理方法,其中,还包括:
    在融合显示界面上融合显示所述第一分割影像和所述第二分割影像,在所述融合显示界面显示向上平移控件、向下平移控件、向左平移控件和向右平移控件中的至少一个;
    基于对至少一个所述平移控件的选中操作,得到对所述第一分割影像或所述第二分割影像的平移操作。
  26. 根据权利要求18所述的图像处理方法,其中,还包括:
    在融合显示界面上融合显示所述第一分割影像和所述第二分割影像,在所述融合显示界面显示逆时针旋转控件和顺时针旋转控件中的至少一个;
    基于对至少一个所述旋转控件的选中操作,得到对所述第一分割影像或所述第二分割影像的旋转操作;
    或者,
    在融合显示界面上融合显示所述第一分割影像和所述第二分割影像,在所述融合显示界面显示圆环区域;
    基于对所述圆环区域的拖动旋转操作,得到对所述第一分割影像或所述第二分割影像的旋转操作。
  27. 根据权利要求25或26所述的图像处理方法,其中,在融合显示界面上融合显示所述第一分割影像和所述第二分割影像的步骤,包括:
    利用预设的融合方式,将所述第一分割影像和所述第二分割影像融合显示在显示屏上;预设的融合方式包括:上下层融合方式、分割线融合方式和棋盘格融合方式中的至少一种。
  28. 根据权利要求18所述的图像处理方法,其中,还包括分割阶段,所述分割阶段包括:
    获取多个第一医学影像和第二医学影像,其中,至少有两个所述第一医学影像对应于同一扫描对象的不同时间点;
    基于多个所述第一医学影像,确定预推荐增强影像;
    获取操作指令,基于所述操作指令和所述预推荐增强影像对第一目标结构集中的至少部分元素进行分割,生成所述第一分割影像;
    对所述第二医学影像中的第二目标结构集进行分割,得到所述第二分割影像,所述第一目标结构集与所述第二目标结构集有交集。
  29. 根据权利要求28所述的图像处理方法,其中,所述基于多个所述第一医学影像,确定预推荐增强影像,包括:
    对多个所述第一医学影像进行自动识别;
    基于所述自动识别的结果,确定多个所述第一医学影像对应的期相。
  30. 根据权利要求28所述的图像处理方法,其中,所述基于所述操作指令和所述预推荐增强影像对第一目标结构集中的至少部分元素进行分割,生成第一分割影像,包括:
    对至少两个所述不同期相对应的所述预推荐增强影像进行融合,获得第一融合影像;
    基于所述第一融合影像对所述第一目标结构集中的至少部分元素进行分割。
  31. 根据权利要求18或28所述的图像处理方法,其中,还包括介入路径规划阶段,所述介入路径规划阶段包括:
    基于所述第二医学影像中相应元素的空间位置确定靶点;
    基于用户与所述靶点相关的操作,确定参考路径,其中,所述参考路径的终点为开放式;
    基于所述参考路径确定目标路径。
  32. 根据权利要求31所述的图像处理方法,其中,所述基于所述第二医学影像中相应元素的空间位置确定靶点,包括:
    基于用户操作,确定候选靶点;
    判断所述候选靶点是否满足第一预设条件;
    响应于所述候选靶点不满足所述第一预设条件,提供第一提示。
  33. 根据权利要求31所述的图像处理方法,其中,所述基于用户与所述靶点相关的操作包括以所述靶点为起点的拖拽操作。
  34. 根据权利要求31所述的图像处理方法,其中,所述基于所述参考路径确定目标路径,包括:
    基于所述参考路径自动确定候选进针点,所述候选进针点位于所述第二医学影像中的皮肤表面区域;
    基于所述靶点和所述候选进针点,确定候选路径;
    基于所述候选路径确定所述目标路径。
  35. 一种用于介入手术的图像处理方法,所述图像处理方法包括:介入路径规划阶段,所述介入路径规划阶段包括:
    获取扫描对象的医学影像;
    基于所述医学影像确定靶点;
    基于用户与所述靶点相关的操作,确定参考路径,其中,所述参考路径的终点为开放式;
    基于所述参考路径确定目标路径。
  36. 根据权利要求35所述的图像处理方法,其中,所述基于所述医学影像确定靶点,包括:
    基于用户操作,确定候选靶点;
    判断所述候选靶点是否满足第一预设条件;
    响应于所述候选靶点不满足所述第一预设条件,提供第一提示。
  37. 根据权利要求35所述的图像处理方法,其中,所述基于用户与所述靶点相关的操作包括以所述靶点为起点的拖拽操作。
  38. 根据权利要求37所述的图像处理方法,其中,所述拖拽操作过程中同步显示所述参考路径。
  39. 根据权利要求35所述的图像处理方法,其中,所述基于所述参考路径确定目标路径包括:
    基于所述参考路径自动确定候选进针点,所述候选进针点位于所述医学影像中的皮肤表面区域;
    基于所述靶点和所述候选进针点,确定候选路径;
    基于所述候选路径确定所述目标路径。
  40. 根据权利要求39所述的图像处理方法,其中,所述操作还包括:
    确定所述候选进针点和所述靶点间的距离和/或角度;
    判断所述距离和/或所述角度是否满足第二预设条件;
    响应于所述距离或所述角度中的至少一个不满足所述第二预设条件,提供第二提示。
  41. 根据权利要求39所述的图像处理方法,其中,所述操作还包括:
    判断所述候选路径是否满足第三预设条件,所述第三预设条件包括所述候选路径与危险区的距离大于阈值;
    响应于所述候选路径不满足所述第三预设条件,提供第三提示。
  42. 根据权利要求35所述的图像处理方法,其中,所述操作还包括:
    根据靶区的轮廓自动确定靶点;
    在皮肤表面自动确定至少一个候选进针点;
    对于所述至少一个候选进针点中的每一个,将所述候选进针点与所述靶点相连,构成所述候选路径;
    确定所述候选路径与危险区的距离;
    基于所述候选路径的深度、所述候选路径的角度、所述候选路径与危险区的距离,确定目标路径。
  43. 根据权利要求35所述的图像处理方法,其中,所述医学影像包括多个第一医学影像和第二医学影像,所述图像处理方法还包括配准阶段,所述配准阶段包括:
    对多个所述第一医学影像中的第一目标结构集进行分割得到第一分割影像,对所述第二医学影像中的第二目标结构集进行分割得到第二分割影像;
    对所述第一分割影像和所述第二分割影像进行配准,确定第三目标结构集的空间位置;
    其中,所述第一目标结构集与所述第二目标结构集有交集,所述第三目标结构集中至少有一个元素包括在所述第一目标结构集中,所述第三目标结构集中至少有一个元素不包括在所述第二目标结构集中。
  44. 根据权利要求43所述的图像处理方法,其中,所述对所述第二医学影像中的第二目标结构集进行分割,包括:
    获取分割模式,所述分割模式包括快速分割模式和精准分割模式;
    基于所述分割模式,对所述第二医学影像中的第二目标结构集进行分割。
  45. 根据权利要求43所述的图像处理方法,其中,至少有两个所述第一医学影像对应于同一扫描对象的不同时间点,所述图像处理方法还包括分割阶段,所述分割阶段包括:
    基于多个所述第一医学影像,确定预推荐增强影像;
    获取操作指令,基于所述操作指令和所述预推荐增强影像对第一目标结构集中的至少部分元素进行分割,生成第一分割影像。
  46. 根据权利要求45所述的图像处理方法,其中,所述基于多个所述第一医学影像,确定预推荐增强影像,包括:
    对多个所述第一医学影像进行自动识别;
    基于所述自动识别的结果,确定多个所述第一医学影像对应的期相。
  47. 根据权利要求45所述的图像处理方法,其中,所述基于所述操作指令和所述预推荐增强影像对第一目标结构集中的至少部分元素进行分割,生成第一分割影像,包括:
    对至少两个所述不同期相对应的所述预推荐增强影像进行融合,获得第一融合影像;
    基于所述第一融合影像对所述第一目标结构集中的至少部分元素进行分割。
  48. 一种用于介入手术的图像处理系统,包括:分割模块、配准模块和介入路径规划模块;
    所述分割模块用于:
    获取扫描对象在不同阶段的多个第一医学影像和第二医学影像;其中,至少有两个所述第一医学影像对应于同一扫描对象的不同时间点;
    基于所述多个第一医学影像对第一目标结构集中的至少部分元素进行分割,生成第一分割影像;
    所述配准模块用于:
    基于所述第二医学影像对第二目标结构集中的至少部分元素进行分割,生成第二分割影像;
    配准所述第一分割影像和所述第二分割影像;
    所述介入路径规划模块用于:
    基于配准后的所述第二分割影像和/或所述第二医学影像,确定由皮肤表面进针点到靶区之间的介入路径。
  49. 一种用于介入手术的图像处理装置,包括处理器,所述处理器用于执行权利要求1~47中任一项所述的图像处理方法。
  50. 一种计算机可读存储介质,所述存储介质存储计算机指令,当计算机读取存储介质中的计算机指令后,计算机执行权利要求1~47中任一项所述的图像处理方法。
PCT/CN2023/080956 2022-03-11 2023-03-11 一种用于介入手术的图像处理方法、系统和装置 WO2023169578A1 (zh)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
CN202210241912.2A CN116763401A (zh) 2022-03-11 2022-03-11 一种穿刺路径规划系统、方法及手术机器人
CN202210241912.2 2022-03-11
CN202210764217.4 2022-06-30
CN202210764217.4A CN117392143A (zh) 2022-06-30 2022-06-30 用于介入手术的术前规划方法、系统、装置及存储介质
CN202210963424.2A CN117670945A (zh) 2022-08-11 2022-08-11 医学图像的配准优化方法、系统和设备
CN202210963424.2 2022-08-11

Publications (1)

Publication Number Publication Date
WO2023169578A1 true WO2023169578A1 (zh) 2023-09-14

Family

ID=87936139

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/080956 WO2023169578A1 (zh) 2022-03-11 2023-03-11 一种用于介入手术的图像处理方法、系统和装置

Country Status (1)

Country Link
WO (1) WO2023169578A1 (zh)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104318567A (zh) * 2014-10-24 2015-01-28 东北大学 一种基于医学影像分割肾脏血管房室的方法
CN106236258A (zh) * 2016-08-17 2016-12-21 北京柏惠维康医疗机器人科技有限公司 腹腔微创手术穿刺路径的规划方法及装置
US20200219252A1 (en) * 2018-12-26 2020-07-09 Canon Medical Systems Corporation Medical image diagnostic system and method for generating trained model
CN110148192A (zh) * 2019-04-18 2019-08-20 上海联影智能医疗科技有限公司 医学影像成像方法、装置、计算机设备和存储介质
US10984530B1 (en) * 2019-12-11 2021-04-20 Ping An Technology (Shenzhen) Co., Ltd. Enhanced medical images processing method and computing device
CN113299385A (zh) * 2021-04-30 2021-08-24 北京深睿博联科技有限责任公司 一种基于深度学习的胰腺囊性病变临床决策方法和系统

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117274130A (zh) * 2023-11-22 2023-12-22 中南大学 一种口腔癌医疗影像自适应增强方法
CN117274130B (zh) * 2023-11-22 2024-02-09 中南大学 一种口腔癌医疗影像自适应增强方法


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23766155

Country of ref document: EP

Kind code of ref document: A1