WO2024002221A1 - Interventional surgery image assistance method, system, device and storage medium - Google Patents

Interventional surgery image assistance method, system, device and storage medium

Info

Publication number
WO2024002221A1
WO2024002221A1 (PCT/CN2023/103728)
Authority
WO
WIPO (PCT)
Prior art keywords
mask
image
fat
structure set
area
Prior art date
Application number
PCT/CN2023/103728
Other languages
English (en)
French (fr)
Inventor
廖明哲
张天
方伟
张璟
Original Assignee
武汉联影智融医疗科技有限公司
Priority date
Filing date
Publication date
Priority claimed from CN202210761324.1A external-priority patent/CN117392142A/zh
Priority claimed from CN202210907258.4A external-priority patent/CN117522886A/zh
Application filed by 武汉联影智融医疗科技有限公司
Publication of WO2024002221A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation

Definitions

  • This specification relates to the field of image processing technology, and in particular to an interventional surgery image assistance method, system, device and storage medium.
  • Preoperative planning for interventional surgery refers to obtaining, with the assistance of medical scanning equipment, images of the blood vessels, lesions, organs, etc. of the scanned object (such as a patient), combined with pathological and anatomical knowledge.
  • Preoperative planning determines the spatial locations of blood vessels and lesions in the medical scan images and reasonably plans the puncture path of the puncture needle; its accuracy directly affects whether organs and blood vessels can be effectively avoided during surgery so that the puncture needle can successfully reach the lesion. It is therefore particularly important to provide an image assistance solution for interventional surgery so that preoperative planning can achieve higher accuracy, better supporting accurate execution of the planned puncture path during surgery and achieving the desired surgical outcome.
  • One embodiment of the present specification provides an image assistance method for interventional surgery, which includes: acquiring a medical image; segmenting a target structure set in the medical image; and determining, based on the segmentation result of the target structure set, a result structure set used to reflect the non-interventionable area.
  • the medical images include pre-operative enhanced images and intra-operative scan images;
  • the target structure set includes a first target structure set of pre-operative enhanced images and a second target structure set of the intra-operative scan images.
  • the result structure set includes a third target structure set during surgery;
  • the segmentation of the target structure set in the medical image includes: segmenting the first target structure set of the pre-surgery enhanced image to obtain a first segmented image of the first target structure set; and segmenting the second target structure set of the intra-operative scan image to obtain a second segmented image of the second target structure set; the first target structure set and the second target structure set have an intersection; determining, based on the segmentation result of the target structure set, the result structure set used to reflect the non-interventionable area includes: registering the first segmented image and the second segmented image to determine the spatial position of the third target structure set during surgery; at least one element in the third target structure set is included in the first target structure set, and at least one element in the third target structure set is not included in the second target structure set.
  • the method further includes: obtaining a planning mode of the interventional surgery, the planning mode at least including a fast segmentation mode and a precise segmentation mode; and segmenting a fourth target structure set of the intra-operative scan image according to the planning mode.
  • segmenting the fourth target structure set of the intra-operative scan image according to the planning mode further includes: in the fast segmentation mode, the fourth target structure set includes the non-interventionable area.
  • segmenting the fourth target structure set of the intra-operative scan image according to the planning mode further includes: in the precise segmentation mode, the fourth target structure set includes preset vital organs.
  • the ratio of the preset total volume of vital organs in the fourth target structure set to the total volume of the inaccessible region is less than the preset efficiency factor m.
  • the setting of the efficiency factor m is related to the type of interventional surgery.
  • the segmentation includes: performing rough segmentation on at least one element of the target structure set in the medical image; obtaining a mask of the at least one element; determining positioning information of the mask; and accurately segmenting the at least one element based on the positioning information of the mask.
  • registering the first segmented image and the second segmented image includes: registering the first segmented image and the second segmented image and determining the registration deformation field; and, based on the registration deformation field and the spatial positions of at least some elements of the first target structure set in the pre-surgery enhanced image, determining the spatial positions of the corresponding elements during the operation.
  • determining the registration deformation field includes: determining a first preliminary deformation field based on the registration between the elements; determining a complete deformation field based on the first preliminary deformation field between the elements.
  • the result structure set includes a fat-removal mask
  • segmenting the target structure set in the medical image includes: segmenting the target structure set in the medical image to obtain the fat-removal mask; determining, based on the segmentation result of the target structure set, the result structure set used to reflect the non-interventionable area includes: determining a first intersection of the target organ mask and the fat-removal mask within a preset range, and adjusting the areas of the target organ mask and the fat-removal mask based on the first intersection to obtain the adjusted fat-removal mask.
  • the method further includes: performing connected domain processing on the adjusted fat-removal mask; and obtaining the processed fat-removal mask based on the connected-domain-processed fat-removal mask.
  • determining the first intersection of the target organ mask and the fat-removal mask within the preset range, and adjusting the areas of the target organ mask and the fat-removal mask based on the first intersection, includes: detecting the target organ mask; based on the detection result, determining the first intersection of the target organ mask and the fat-removal mask within a first preset range, wherein the first preset range is determined based on a first preset parameter; and performing a first adjustment on the areas of the target organ mask and the fat-removal mask based on the first intersection.
  • the method further includes: determining a second intersection of the first-adjusted target organ mask and the first-adjusted fat-removal mask within a second preset range, wherein the second preset range is determined according to a second preset parameter; and performing a second adjustment on the areas of the first-adjusted target organ mask and the first-adjusted fat-removal mask based on the second intersection.
  • the second preset parameter is less than or equal to the first preset parameter, and the first preset parameter and/or the second preset parameter are obtained based on big data or artificial intelligence.
  • performing connected domain processing on the adjusted fat-removal mask includes: determining whether the positioning information of the target organ mask overlaps with the positioning information of a pseudo fat-removal connected domain; when they do not overlap, identifying the pseudo fat-removal connected domain as belonging to the fat-removal mask; when they overlap, determining whether the pseudo fat-removal connected domain should belong to the fat-removal mask based on the relationship between the area of the pseudo fat-removal connected domain and a preset area threshold.
  • determining whether the pseudo fat-removal connected domain should belong to the fat-removal mask based on the relationship between its area and the preset area threshold includes: when the area of the pseudo fat-removal connected domain is greater than the preset area threshold, identifying it as belonging to the fat-removal mask; when the area of the pseudo fat-removal connected domain is less than or equal to the preset area threshold, identifying it as not belonging to the fat-removal mask.
  • it also includes retain identifiers and/or discard identifiers: a retain identifier indicates a pseudo fat-removal connected domain that belongs to the fat-removal mask, and a discard identifier indicates a pseudo fat-removal connected domain that does not belong to the fat-removal mask.
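  • A minimal, hypothetical sketch of this retain/discard decision using scipy; the function name, the bounding-box form of the positioning information, and the threshold value are illustrative assumptions, not the patent's implementation:

```python
import numpy as np
from scipy import ndimage

def classify_pseudo_domains(fat_removal_mask, organ_bbox, area_threshold):
    """Retain or discard each pseudo fat-removal connected domain based on
    overlap with the target organ positioning information (simplified here
    to a z/y/x bounding box) and a preset area threshold."""
    labeled, num = ndimage.label(fat_removal_mask)
    kept = np.zeros(fat_removal_mask.shape, dtype=bool)
    zmin, zmax, ymin, ymax, xmin, xmax = organ_bbox
    for idx in range(1, num + 1):
        domain = labeled == idx
        coords = np.argwhere(domain)
        overlaps = (
            (coords[:, 0] >= zmin) & (coords[:, 0] <= zmax) &
            (coords[:, 1] >= ymin) & (coords[:, 1] <= ymax) &
            (coords[:, 2] >= xmin) & (coords[:, 2] <= xmax)
        ).any()
        if not overlaps:
            kept |= domain              # retain identifier: no overlap
        elif domain.sum() > area_threshold:
            kept |= domain              # retain identifier: area above threshold
        # otherwise: discard identifier (small overlapping domain is dropped)
    return kept
```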
  • obtaining the processed fat-removal mask based on the connected-domain-processed fat-removal mask includes: detecting the adjusted target organ mask; based on the detection result, determining the adjacent boundary between the connected-domain-processed fat-removal mask and the target organ mask; and performing a third adjustment on the connected-domain-processed fat-removal mask based on the adjacent boundary to obtain the processed fat-removal mask.
  • the method further includes: obtaining an operation instruction; segmenting at least one target organ in the medical image according to the operation instruction to obtain at least one target organ mask; determining a third intersection of the at least one target organ mask and a fast segmentation result mask within the first preset range, and adjusting the areas of the at least one target organ mask and the fast segmentation result mask based on the third intersection, wherein the fast segmentation result mask at least includes the processed fat-removal mask; performing connected domain processing on the adjusted fast segmentation result mask; and obtaining the processed fast segmentation result mask based on the connected-domain-processed fast segmentation result mask.
  • One embodiment of this specification provides an interventional surgery image assistance system, including: an acquisition module for acquiring medical images; a segmentation module for segmenting target structure sets in the medical images; and a result determination module for A result structure set reflecting the inaccessible area is determined based on the segmentation result of the target structure set.
  • One embodiment of this specification provides a computer-readable storage medium.
  • the storage medium stores computer instructions. After a computer reads the computer instructions in the storage medium, the computer executes the interventional surgery image assistance method described in any embodiment of this specification.
  • One embodiment of this specification provides an interventional surgery image assistance device, including a processor, where the processor is configured to execute the interventional surgery image assistance method described in any embodiment of this specification.
  • the interventional surgery image assistance device further includes a display device; the display device displays a segmentation result based on the interventional surgery image assistance method executed by the processor, and the display device also displays trigger mode options, where the trigger mode options include a fast segmentation mode and a precise segmentation mode.
  • Figure 1 is a schematic diagram of an application scenario of an interventional surgery imaging assistance system according to some embodiments of this specification
  • Figure 2A is an exemplary flow chart of an image-assisted method for interventional surgery according to some embodiments of this specification
  • Figure 2B is an exemplary flow chart of an image-assisted method for interventional surgery provided according to some embodiments of this specification;
  • Figure 3 is an exemplary flow chart of the segmentation process involved in the image-assisted method for interventional surgery according to some embodiments of this specification;
  • Figure 4 is an exemplary flowchart of a process of determining positioning information of an element mask according to some embodiments of this specification
  • Figure 5 is an exemplary flow chart of a soft connected domain analysis process for element masks according to some embodiments of this specification;
  • Figure 6 is a comparison diagram of exemplary effects of coarse segmentation using soft connected domain analysis on element masks according to some embodiments of this specification
  • Figure 7 is an exemplary flow chart of a process of accurately segmenting elements according to some embodiments of this specification.
  • Figure 8 is an exemplary schematic diagram of positioning information determination of element masks according to some embodiments of this specification.
  • Figure 9 is an exemplary schematic diagram of positioning information determination of element masks according to some embodiments of this specification.
  • Figure 10A is an exemplary diagram of determining the sliding direction based on positioning information of an element mask according to some embodiments of this specification;
  • Figures 10B to 10E are exemplary schematic diagrams of accurate segmentation after sliding windows according to some embodiments of this specification.
  • Figure 11 is an exemplary effect comparison diagram of segmentation results according to some embodiments of this specification.
  • Figure 12 is an exemplary flow chart of the registration process of the first segmented image and the second segmented image shown in some embodiments of this specification;
  • Figures 13-14 are exemplary flow charts of the process of determining the registration deformation field shown in some embodiments of this specification.
  • Figure 15 is an exemplary diagram of the first segmented image and the second segmented image obtained after segmentation according to some embodiments of this specification;
  • Figure 16 is an exemplary flow chart of an image-assisted method for interventional surgery according to some embodiments of this specification.
  • Figure 17A is an exemplary flowchart of a mask fusion algorithm according to some embodiments of the present specification.
  • FIG. 17B is an exemplary flowchart of determining a first intersection and adjusting an element mask based on the first intersection according to some embodiments of this specification;
  • Figure 18 is an exemplary flowchart of connected domain processing on a fat removal mask according to some embodiments of this specification.
  • Figure 19 is an exemplary flowchart of obtaining a processed fat-removal mask based on a connected-domain-processed fat-removal mask according to some embodiments of the present specification;
  • Figure 20 is a comparison diagram of exemplary effects of fusion of a fat removal mask and a target organ mask according to some embodiments of this specification;
  • Figure 21 is an exemplary flow chart of combining precise segmentation and fast segmentation according to some embodiments of this specification.
  • Figure 22 is an exemplary effect comparison diagram of fusion of the target organ mask and the fast segmentation result mask according to some embodiments of this specification;
  • Figure 23 is an exemplary frame diagram of an interventional surgery imaging assistance system according to some embodiments of this specification.
  • The words "system", "device", "unit" and/or "module" used herein are a means of distinguishing between different components, elements, parts, portions or assemblies at different levels; these words may be replaced by other expressions if they serve the same purpose.
  • Figure 1 is a schematic diagram of an application scenario of an interventional surgery imaging assistance system according to some embodiments of this specification.
  • interventional surgery image assisting system 100 can be applied to a variety of interventional surgeries/interventional treatments.
  • interventional surgery/treatment may include cardiovascular interventional surgery, oncology interventional surgery, obstetrics and gynecology interventional surgery, musculoskeletal interventional surgery or any other feasible interventional surgery, such as neurointerventional surgery, etc.
  • interventional surgery/treatment may include percutaneous biopsy, coronary angiography, thrombolytic therapy, stent implantation, or any other feasible interventional surgery, such as ablation surgery, etc.
  • the interventional surgery imaging assistance system 100 may include a medical scanning device 110, a network 120, one or more terminals 130, a processing device 140, and a storage device 150.
  • the connections between components in the interventional imaging assistance system 100 may be variable.
  • medical scanning device 110 may be connected to processing device 140 via network 120 .
  • medical scanning device 110 may be directly connected to processing device 140, as indicated by the dashed bidirectional arrow connecting medical scanning device 110 and processing device 140.
  • storage device 150 may be connected to processing device 140 directly or through network 120 .
  • terminal 130 may be connected directly to processing device 140 (as shown by the dashed arrow connecting terminal 130 and processing device 140), or may be connected to processing device 140 through network 120.
  • the medical scanning device 110 may be configured to scan the scanned object using high-energy rays (such as X-rays, gamma rays, etc.) to collect scan data related to the scanned object.
  • the scan data can be used to generate one or more images of the scanned object.
  • medical scanning device 110 may include an ultrasound imaging (US) device, a computed tomography (CT) scanner, a digital radiography (DR) scanner (eg, mobile digital radiography), digital subtraction angiography (DSA) scanner, dynamic space reconstruction (DSR) scanner, X-ray microscope scanner, multi-modal scanner, etc. or a combination thereof.
  • the multi-modality scanner may include a computed tomography-positron emission tomography (CT-PET) scanner, a computed tomography-magnetic resonance imaging (CT-MRI) scanner.
  • Scan objects can be living or non-living.
  • scan objects may include patients, artificial objects (eg, artificial phantoms), and the like.
  • scan objects may include specific parts, organs, and/or tissues of the patient.
  • the medical scanning device 110 may include a frame 111 , a detector 112 , a detection area 113 , a workbench 114 and a radioactive source 115 .
  • Rack 111 may support detector 112 and radiation source 115 .
  • Scan objects may be placed on the workbench 114 for scanning.
  • Radiation source 115 may emit radiation toward the scanned object.
  • Detector 112 may detect radiation (eg, X-rays) emitted from radiation source 115 .
  • detector 112 may include one or more detector units.
  • the detector unit may include a scintillation detector (eg, a cesium iodide detector), a gas detector, or the like.
  • the detector unit may include a single row of detectors and/or multiple rows of detectors.
  • Network 120 may include any suitable network that may facilitate the exchange of information and/or data for interventional imaging assistance system 100 .
  • one or more components of the interventional surgery imaging assistance system 100 (e.g., medical scanning device 110, terminal 130, processing device 140, storage device 150) may exchange information and/or data with one another via the network 120.
  • processing device 140 may obtain imaging data from medical scanning device 110 via network 120 .
  • processing device 140 may obtain user instructions from terminal 130 via network 120.
  • Network 120 may be and/or include a public network (e.g., the Internet), a private network (e.g., a local area network (LAN), a wide area network (WAN), etc.), a wired network (e.g., an Ethernet network), a wireless network (e.g., an 802.11 network, a Wi-Fi network, etc.), a cellular network (e.g., a Long Term Evolution (LTE) network), a frame relay network, a virtual private network (VPN), a satellite network, a telephone network, routers, hubs, switches, server computers, and/or any combination thereof.
  • network 120 may include a cable network, a wired network, a fiber optic network, a telecommunications network, an intranet, a wireless local area network (WLAN), a metropolitan area network (MAN), a public switched telephone network (PSTN), a Bluetooth™ network, a ZigBee™ network, a Near Field Communication (NFC) network, etc., or any combination thereof.
  • network 120 may include one or more network access points.
  • network 120 may include wired and/or wireless network access points, such as base stations and/or Internet exchange points, through which one or more components of the interventional imaging assistance system 100 can connect to the network 120 to exchange data and/or information.
  • Terminal 130 may include a mobile device 131, a tablet computer 132, a laptop computer 133, etc., or any combination thereof.
  • mobile device 131 may include a smart home device, a wearable device, a mobile device, a virtual reality device, an augmented reality device, the like, or any combination thereof.
  • smart home devices may include smart lighting devices, control devices for smart electrical devices, smart monitoring devices, smart TVs, smart cameras, intercoms, etc., or any combination thereof.
  • mobile device 131 may include a cell phone, personal digital assistant (PDA), gaming device, navigation device, point of sale (POS) device, laptop, tablet, desktop, etc., or any combination thereof.
  • virtual reality devices and/or augmented reality devices include virtual reality helmets, virtual reality glasses, virtual reality goggles, augmented reality helmets, augmented reality glasses, augmented reality goggles, etc., or any combination thereof.
  • virtual reality devices and/or augmented reality devices may include Google Glass™, Oculus Rift™, HoloLens™, Gear VR™, etc.
  • terminal 130 may be part of processing device 140.
  • the processing device 140 may process data and/or information obtained from the medical scanning device 110, the terminal 130, and/or the storage device 150.
  • the processing device 140 can acquire the data collected by the medical scanning device 110, use the data to perform imaging to generate medical images (such as pre-operative enhanced images and intra-operative scan images), and segment the medical images to generate segmentation result data (such as the first segmented image, the second segmented image, the spatial positions of blood vessels and lesions during surgery, registration maps, fat-removal masks, etc.).
  • the processing device 140 may obtain medical images, planning mode data (such as fast segmentation mode data, precise segmentation mode data) and/or scanning protocols from the terminal 130 .
  • processing device 140 may be a single server or a group of servers. Server groups can be centralized or distributed. In some embodiments, processing device 140 may be local or remote. For example, processing device 140 may access information and/or data stored in medical scanning device 110, terminal 130, and/or storage device 150 via network 120. As another example, processing device 140 may be directly connected to medical scanning device 110, terminal 130, and/or storage device 150 to access stored information and/or data. In some embodiments, the processing device 140 may be implemented on a cloud platform.
  • Storage device 150 may store data, instructions, and/or any other information.
  • storage device 150 may store data obtained from medical scanning device 110, terminal 130, and/or processing device 140.
  • the storage device 150 may store medical image data (such as pre-operative enhanced images, intra-operative scan images, first segmented images, second segmented images, etc.) and/or positioning information data acquired from the medical scanning device 110 .
  • the storage device 150 may store medical images and/or scan protocols input from the terminal 130 .
  • the storage device 150 can store data generated by the processing device 140 (for example, medical imaging data, organ mask data, positioning information data, accurate segmentation result data, spatial positions of blood vessels and lesions during surgery, registration maps, fat-removal mask data, etc.).
  • storage device 150 may store data and/or instructions that processing device 140 may perform or be used to perform the example methods described in this specification.
  • the storage device 150 includes a mass storage device, a removable storage device, a volatile read-write memory, a read-only memory (ROM), etc., or any combination thereof.
  • Exemplary mass storage devices may include magnetic disks, optical disks, solid state drives, and the like.
  • Exemplary removable storage devices may include flash drives, floppy disks, optical disks, memory cards, compact disks, tapes, and the like.
  • Exemplary volatile read-write memory may include random access memory (RAM).
  • Exemplary RAM may include dynamic random access memory (DRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), static random access memory (SRAM), thyristor random access memory (T-RAM), zero-capacitor random access memory (Z-RAM), etc.
  • Exemplary ROM may include mask read-only memory (MROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), compact disc read-only memory (CD-ROM), digital versatile disc read-only memory (DVD-ROM), etc.
  • the storage device 150 may be implemented on a cloud platform.
  • storage device 150 may be connected to network 120 to communicate with one or more other components (eg, processing device 140, terminal 130) in interventional imaging assistance system 100.
  • One or more components in the interventional surgery imaging assistance system 100 may access data or instructions stored in the storage device 150 via the network 120 .
  • storage device 150 may be directly connected to or in communication with one or more other components in interventional imaging assistance system 100 (eg, processing device 140, terminal 130).
  • storage device 150 may be part of processing device 140.
  • the acquisition module 2310, the segmentation module 2320 and the result determination module 2330 of Figure 23 can be different modules in one system, or one module can implement the functions of two or more modules mentioned above.
  • each module can share a storage module, or each module can have its own storage module.
  • processing device 140 and medical scanning device 110 may be integrated into a single device. Such variations are within the scope of this specification.
  • FIG. 2A is an exemplary flow chart of an image-assisted method for interventional surgery according to some embodiments of this specification. As shown in Figure 2A, process 200A may include:
  • Step 210A Obtain medical images.
  • step 210A may be performed by the acquisition module 2310 or the medical scanning device 110.
  • Medical images refer to images generated based on various different imaging mechanisms.
  • the medical image may be a three-dimensional medical image.
  • the medical image may also be a two-dimensional medical image.
  • the medical images may include CT images, PET-CT images, US images, or MR images.
  • Medical images of the scanned object (such as CT images or MR images) may be acquired from the medical scanning device 110.
  • scan objects may include biological scan objects or non-biological scan objects.
  • biological scan objects may include patients, specific parts of the patient, organs and/or tissues, such as the abdomen, chest or tumor tissue, etc.
  • non-biological scan objects may include artificial objects, such as artificial phantoms, and the like.
  • medical images can also be obtained through any other feasible method.
  • medical images can be obtained from a cloud server and/or a medical system (such as a hospital's medical system center, etc.) via the network 120 , the embodiments of this specification are not particularly limited.
  • medical images may include pre-surgery enhanced images.
  • Pre-surgery enhanced images, also referred to as preoperative enhanced images, are images obtained by scanning with medical scanning equipment after the scanned object (such as a patient) has been injected with a contrast agent before surgery.
  • medical images may include intra-operative scan images. Intra-operative scanning images refer to images of the scanning object obtained through plain scanning with medical scanning equipment during surgery.
  • the intraoperative scan image may be a real-time scan image.
  • the intra-operative scan image may also be called a pre-operative plain scan image or an intra-operative plain scan image, which is a scan image taken during the surgical preparation process and before the operation is performed (that is, before the needle is actually inserted).
  • Step 220A Segment the target structure set in the medical image.
  • step 220A may be performed by segmentation module 2320.
  • the target structure set may be the parts, organs and/or tissues to be segmented in the medical image.
  • the set of target structures may include multiple elements, such as one or more of a target organ, blood vessels within the target organ, fat, thoracic/abdominal cavity, etc.
  • target organs may include lungs, liver, spleen, kidneys or any other possible organ tissue, such as thyroid gland, etc.
  • the target structure set may include a first target structure set of pre-surgery enhanced images.
  • the first target structure set of the pre-surgery enhanced image may include the target organ and blood vessels within the target organ.
  • the first target structure set of the preoperative enhanced image may include target organs and lesions in addition to blood vessels within the target organ.
  • the first target structure set of the pre-surgery enhanced image may be segmented to obtain a segmentation result (eg, the first segmented image).
  • the set of target structures may include a second set of target structures from the intraoperative scan image.
  • the regions or organs included in the second target structure set of the scanned images during surgery may be determined based on the planning mode of the interventional surgery (eg, fast segmentation mode and precise segmentation mode). That is, when the planning modes of the interventional surgery are different, the regions or organs included in the second target structure set are different.
  • the second target structure set may include inaccessible areas.
  • the second target structure set can include all important organs in the scanned images during surgery.
  • the second target structure set may also include the target organ and lesions, in addition to the non-interventionable area / all important organs in the intra-operative scan image.
  • the second target structure set of the scanned image during surgery can be segmented to obtain a segmentation result (eg, a second segmented image).
  • preprocessing may include image preprocessing operations.
  • image preprocessing operations may include normalization and/or background removal.
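  • A minimal sketch of one common normalization choice (min-max scaling to [0, 1]); the specific scheme is an assumption for illustration, this specification does not fix one:

```python
import numpy as np

def normalize(image: np.ndarray) -> np.ndarray:
    """Min-max normalize an image to the [0, 1] range."""
    lo, hi = float(image.min()), float(image.max())
    if hi <= lo:
        return np.zeros_like(image, dtype=float)  # constant image
    return (image - lo) / (hi - lo)
```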
  • the image segmentation method may include a threshold segmentation method, a region growing method, or a level set method.
  • methods based on deep learning convolutional networks can be used to segment target structure sets in medical images.
  • methods based on deep learning convolutional networks may include segmentation methods based on fully convolutional networks, such as UNet and so on. More information about the segmentation method can be found in Figures 3-11 and their related descriptions.
  • Step 230A Determine a result structure set reflecting the inaccessible area based on the segmentation result of the target structure set.
  • step 230A may be performed by result determination module 2330.
  • non-interventionable area refers to the area that needs to be avoided during interventional surgery.
  • non-interventionable areas may include areas that cannot be penetrated, areas where instruments cannot be introduced or implanted, and areas that cannot be injected.
  • non-interventionable areas may include, but are not limited to, organs, blood vessels, bones, etc.
  • the segmentation result of the target structure set can be obtained.
  • the segmentation results of the target structure set may include element masks.
  • the element mask (Mask) corresponding to each element in the target structure set can be obtained.
  • one or more of the target organ mask, the blood vessel mask within the target organ, the fat mask, the thoracic/abdominal cavity mask, etc. can be obtained.
  • the thoracic cavity/abdominal cavity may be collectively referred to as the thoracoabdominal region, and the corresponding thoracic cavity/abdominal cavity mask is referred to as a thoracoabdominal mask.
  • the element mask can be a pixel-level classification label. Taking an abdominal medical image as an example, the element mask represents the classification of each pixel in the medical image; for example, pixels can be classified as background, liver, spleen, kidney, etc., and the aggregated area of a specific category is represented by the corresponding label value. For example, all pixels classified as liver are aggregated, and the aggregated area is represented by the label value corresponding to the liver.
  • the label value here can be set according to the specific segmentation task.
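  • As a minimal illustration of such a label mask (the label values 0 = background, 1 = liver, 2 = spleen, 3 = kidney are arbitrary examples):

```python
import numpy as np

# A tiny 3x3 "medical image" mask; each pixel holds the label value of its class.
mask = np.array([[0, 1, 1],
                 [0, 1, 2],
                 [3, 0, 2]])
liver_region = (mask == 1)   # all pixels aggregated under the liver label value
print(np.unique(mask))       # label values present: [0 1 2 3]
```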
  • the resulting structure set may include a fat-removing mask.
  • By segmenting the target structure set in the medical image, a fat mask and a thoracoabdominal mask can be obtained; furthermore, a fat-removal mask can be obtained based on the fat mask and the thoracoabdominal mask. In some embodiments, the fat mask can be subtracted from the thoracoabdominal mask, thereby obtaining a fat-removal mask of the thorax and abdomen. In interventional surgery, the fat area in the thoracic/abdominal cavity can be considered an interventionable area, while the fat-removal mask area is a non-interventionable area. Detailed descriptions of the fat-removal mask can be found elsewhere in this specification, for example, Figures 16 to 22 and their related descriptions.
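  • A minimal sketch of this subtraction, assuming the thoracoabdominal mask and the fat mask are boolean numpy arrays of the same shape (names are illustrative):

```python
import numpy as np

def fat_removal_mask(thoracoabdominal_mask: np.ndarray,
                     fat_mask: np.ndarray) -> np.ndarray:
    """Remove the fat mask from the thoracoabdominal mask; the remainder
    is treated as the non-interventionable (fat-removal) area."""
    return thoracoabdominal_mask & ~fat_mask
```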
  • the segmentation results of the target structure set may include segmented images.
  • a first segmented image of the first target structure set can be obtained.
  • the first segmented image is a segmented image of the pre-operative first target structure set (for example, the target organ, blood vessels in the target organ, and lesions in the pre-operative enhanced image) obtained by segmenting the pre-operative enhanced image.
  • a second segmented image of the second target structure set can be obtained.
  • the second segmented image is a segmented image of the second target structure set during the operation (for example, inaccessible areas/vital organs, target organs, lesions) obtained by segmenting the scanned images during the operation.
  • the resulting set of structures may include a third intra-operative target set of structures.
  • the spatial location of the third target structure set during the operation may be determined based on the first segmented image and the second segmented image. For example, the first segmented image and the second segmented image are registered to determine the spatial position of the third target structure set during the operation.
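  • The registration itself is detailed in Figures 12 to 14 and is not reproduced here; the following is only a generic deformable-registration stand-in using SimpleITK's Demons filter, showing how a pre-operative segmented image could be mapped into the intra-operative space:

```python
import SimpleITK as sitk

def register_and_map(first_seg: sitk.Image, second_seg: sitk.Image) -> sitk.Image:
    """Register the pre-operative segmented image (moving) to the
    intra-operative segmented image (fixed), then resample its labels
    into the intra-operative space."""
    fixed = sitk.Cast(second_seg, sitk.sitkFloat32)
    moving = sitk.Cast(first_seg, sitk.sitkFloat32)
    demons = sitk.DemonsRegistrationFilter()
    demons.SetNumberOfIterations(50)
    demons.SetStandardDeviations(1.0)      # smoothing of the displacement field
    displacement = demons.Execute(fixed, moving)
    transform = sitk.DisplacementFieldTransform(displacement)
    # Nearest-neighbour interpolation keeps label values intact for masks.
    return sitk.Resample(first_seg, second_seg, transform,
                         sitk.sitkNearestNeighbor, 0, first_seg.GetPixelID())
```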
  • the third target structure set is a complete set of structures obtained after registering the first segmented image and the second segmented image.
  • the third target structure set can more comprehensively and accurately reflect the current condition of the scanned object (eg, patient).
  • the path of the interventional surgery can be planned based on the spatial position of the third target structure set, so that the puncture needle can effectively avoid the non-interventionable area and/or all important organs and successfully reach the lesion.
  • For more about the segmented images and the third target structure set, please refer to other places in this specification, for example, Figure 2B to Figure 15 and their related descriptions.
  • the result structure set (for example, fat removal mask) may be a non-interventionable area.
  • the result structure set may also be an interventionable area.
  • in that case, the difference between the total area and the result structure set may be used to reflect the non-interventionable area.
  • It should be noted that the above description of process 200A is only for example and illustration, and does not limit the scope of application of this specification.
  • Various modifications and changes can be made to process 200A under the guidance of this specification; such modifications and changes remain within the protection scope of this specification.
  • Figure 2B is an exemplary flow chart of an image-assisted method for interventional surgery provided according to some embodiments of this specification. As shown in Figure 2B, process 200B may include the following steps:
  • Step 210B Segment the first target structure set of the pre-surgery enhanced image to obtain a first segmented image of the first target structure set.
  • step 210B may be performed by split module 2320.
  • the first target structure set of the pre-operative enhanced image may include the target organ and blood vessels within the target organ.
  • the first target structure set of the preoperative enhanced image may include target organs and lesions in addition to blood vessels within the target organ.
  • the target organ may include the brain, lungs, liver, spleen, kidneys or any other possible organ tissue, such as the thyroid gland, etc.
  • the first segmented image is a segmented image of the pre-operative first target structure set (for example, the target organ, blood vessels in the target organ, and lesions in the pre-operative enhanced image) obtained by segmenting the pre-operative enhanced image.
  • Step 220B Segment the second target structure set of the scanned image during the operation to obtain a second segmented image of the second target structure set.
  • step 220B may be performed by segmentation module 2320.
  • the regions or organs included in the second target structure set of the scanned images during surgery may be determined based on the planning mode of the interventional surgery (eg, fast segmentation mode and precise segmentation mode). That is, when the planning modes of the interventional surgery are different, the regions or organs included in the second target structure set are different.
  • the second target structure set may include inaccessible areas.
  • the second target structure set can include all important organs in the scanned images during surgery. Vital organs refer to the organs that need to be avoided during interventional surgery, such as the liver, kidneys, blood vessels outside the target organ, etc.
  • the second target structure set may also include the target organ and lesions, in addition to the non-interventionable area / all important organs in the intra-operative scan image.
  • the second segmented image is a segmented image of the second target structure set during the operation (for example, inaccessible areas/vital organs, target organs, lesions) obtained by segmenting the scanned images during the operation.
  • the first target structure set and the second target structure set have an intersection.
  • for example, if the first target structure set includes the target organ and blood vessels within the target organ,
  • and the second target structure set includes the non-interventionable area (or all important organs), the target organ, and lesions,
  • then the intersection of the first target structure set and the second target structure set is the target organ.
  • as another example, if the first target structure set includes the target organ, blood vessels within the target organ, and lesions,
  • and the second target structure set includes the non-interventionable area (or all important organs), the target organ, and lesions,
  • then the intersection of the first target structure set and the second target structure set is the target organ and the lesion.
  • the planning mode of the interventional procedure may be obtained before performing step 220B.
  • Interventional surgery is minimally invasive treatment performed using modern high-tech means; specifically, special catheters, guide wires and other precision instruments are introduced into the human body under the guidance of medical scanning or imaging equipment to diagnose and locally treat pathological conditions in the body.
  • the interventional surgery may be an interventional surgery in the actual diagnosis and treatment phase (on a patient), or an interventional surgery in the animal testing or simulation stage; this is not particularly limited in the embodiments of this specification.
  • the planning mode may be a planning mode for segmenting intra-operative scan images.
  • planning modes may include fast segmentation mode and precise segmentation mode.
  • segmenting the second target structure set of the intra-operative scanned image can be implemented in the following manner: segmenting the fourth target structure set of the intra-operative scanned image according to the planning mode.
  • the fourth target structure set of the scanned image during surgery can be segmented according to the fast segmentation mode and/or the precise segmentation mode.
  • the fourth target structure set may be part of the second target structure set, for example, non-interventionable areas, all important organs outside the target organ. Under different planning modes, the fourth target structure set includes different regions/organs. In some embodiments, in fast segmentation mode, the fourth set of target structures may include inaccessible regions. In some embodiments, in the precise segmentation mode, the fourth target structure set may include preset vital organs.
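  • As a trivial illustration of this mode-dependent selection (the structure names are assumptions for the sketch):

```python
# Map each planning mode to the contents of the fourth target structure set.
FOURTH_TARGET_STRUCTURES = {
    "fast": ["non_interventionable_area"],
    "precise": ["preset_vital_organs"],
}

def fourth_target_structure_set(planning_mode: str) -> list:
    return FOURTH_TARGET_STRUCTURES[planning_mode]
```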
  • regional positioning calculations can be performed on scanned images during surgery, and inaccessible areas can be segmented and extracted.
  • post-processing can be performed on the areas other than the non-interventionable area and the target organ to ensure that there is no cavity area in the intermediate region between the non-interventionable area and the target organ.
  • the cavity area refers to a background area enclosed by a boundary of connected foreground pixels.
  • the non-interventionable area can be obtained by subtracting the target organ and the interventionable area from the abdominal cavity (or thoracic cavity) area (for example, the fat-removal mask can be obtained as the difference between the thoracoabdominal mask and the fat mask;
  • in this case, the fat-removal mask may include the target organ mask.
  • alternatively, the difference between the thoracoabdominal mask and the target organ mask can first be taken to obtain a target-removed thoracoabdominal mask, and the fat mask is then subtracted from it to obtain the fat-removal mask; in this case, the fat-removal mask does not include the target organ mask). After subtracting the target organ and the interventionable area from the abdominal (or thoracic) cavity area to obtain the non-interventionable area, there may be a cavity area between the target organ and the non-interventionable area; this cavity area belongs to neither the target organ nor the non-interventionable area.
  • post-processing may include corrosion operations and expansion operations.
  • the erosion operation and the expansion operation may be performed based on convolution of the intra-operative scanned image with a filter.
  • the erosion operation can be performed by convolving the filter with the intra-operative scan image and then taking a local minimum over the predetermined erosion range, so that the contour in the intra-operative scan image shrinks to the desired range, i.e., the highlighted target area displayed in the image is reduced by a certain extent.
  • the expansion operation can be performed by convolving the filter with the intra-operative scan image and then taking a local maximum over the predetermined expansion range, so that the contour in the intra-operative scan image expands to the desired range, i.e., the highlighted target area displayed in the image is enlarged by a certain extent.
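  • A minimal sketch of this post-processing with scipy: a binary closing (dilation followed by erosion) fills small cavity areas without substantially shifting large boundaries. The iteration count is illustrative:

```python
import numpy as np
from scipy import ndimage

def fill_cavities(mask: np.ndarray, iterations: int = 2) -> np.ndarray:
    """Dilation followed by erosion (closing) with the default structuring
    element; small background holes inside the mask are closed."""
    return ndimage.binary_closing(mask, iterations=iterations)
```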
  • In the fast segmentation mode, regional positioning calculations can be performed on the intra-operative scan images, and the non-interventionable areas can then be segmented and extracted.
  • the blood vessel mask inside the target organ can be determined based on the segmentation mask and blood vessel mask of the target organ in the scanned images during surgery. It should be noted that in the fast segmentation mode, only the blood vessels inside the target organ are segmented; in the precise segmentation mode, the blood vessels inside the target organ and other external blood vessels can be segmented.
  • A mask (such as an organ mask) can be a pixel-level classification label.
  • the mask represents the classification of each pixel in the medical image; for example, pixels can be classified as background, liver, spleen, kidney, etc., and the aggregated area of a specific category is represented by the corresponding label value. For example, all pixels classified as liver are aggregated, and the aggregated area is represented by the label value corresponding to the liver.
  • the label value here can be set according to the specific rough segmentation task.
  • the segmentation mask refers to the corresponding mask obtained after the segmentation operation.
  • the masks may include organ masks (eg, organ masks of target organs) and blood vessel masks.
  • Taking only the thoracic cavity or abdominal cavity region as an example in the fast segmentation mode:
  • the regional positioning of the thoracic or abdominal cavity within the scanning range of the intra-operative scan image is calculated. Specifically, for the abdominal cavity, the region from the top of the liver to the bottom of the rectum is taken as the positioning area of the abdominal cavity; for the thoracic cavity, the region from the top of the esophagus to the bottom of the lungs (or the top of the liver) is taken as the positioning area. After the regional positioning information of the thoracic or abdominal cavity is determined, the abdominal or thoracic cavity is segmented, and segmentation is performed again within the segmented area to extract interventionable areas (as opposed to non-interventionable areas, e.g., penetrable areas such as fat); finally, the target organ segmentation mask and the interventionable area mask are removed from the abdominal or thoracic cavity segmentation mask to extract the non-interventionable areas.
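  • A sketch of the axial positioning step, assuming anchor masks (e.g., liver and rectum for the abdominal cavity) are already available as boolean arrays indexed (z, y, x); the names and the slice-index orientation are assumptions:

```python
import numpy as np

def abdominal_positioning_range(liver_mask: np.ndarray,
                                rectum_mask: np.ndarray):
    """Return the axial slice range from the top of the liver to the
    bottom of the rectum, used as the abdominal positioning area."""
    liver_top = int(np.argwhere(liver_mask)[:, 0].min())
    rectum_bottom = int(np.argwhere(rectum_mask)[:, 0].max())
    return liver_top, rectum_bottom

# image[liver_top:rectum_bottom + 1] would then be segmented further.
```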
  • the interventional area may include a fat portion, such as a fat-containing gap between two organs.
  • For example, part of the region between the subcutaneous area and the liver may be covered by fat. Because processing in the fast segmentation mode is fast, planning is quicker, takes less time, and image processing efficiency is improved.
  • For the method and processing of segmenting and extracting non-interventionable areas in the fast segmentation mode, please refer to other places in this specification, for example, Figures 16 to 22 and their related descriptions.
  • all organs in the scanned images during surgery can be segmented.
  • all organs scanned in images during surgery may include basic organs and important organs scanned in images during surgery.
  • the basic organs of the intra-operative scan image may include the target organ of the intra-operative scan image.
  • preset important organs of the intra-operative scan images can be segmented. The preset important organs can be determined based on the importance of each organ in the intra-operative scan images; for example, all vital organs in the intra-operative scan images can be used as the preset important organs.
  • the ratio of the preset total volume of important organs in the precise segmentation mode to the total volume of the inaccessible region in the fast segmentation mode may be less than the preset efficiency factor m.
  • the preset efficiency factor m can represent differences in segmentation efficiency and/or segmentation details based on different segmentation modes. By reasonably setting the preset efficiency factor m, the segmentation efficiency and segmentation detail of interventional surgery can be easily controlled.
  • the preset efficiency factor m may be less than or equal to 1. The larger the value of the preset efficiency factor m, the more detailed the segmentation under the corresponding segmentation mode; the smaller the value of the preset efficiency factor m, the higher the segmentation efficiency.
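  • A minimal sketch of this volume-ratio check, assuming boolean masks with identical voxel spacing (so voxel counts can stand in for volumes); the value 0.6 is only a sample inside the 0.5 to 0.65 range mentioned below:

```python
import numpy as np

def within_efficiency_factor(vital_organs_mask: np.ndarray,
                             non_interventionable_mask: np.ndarray,
                             m: float = 0.6) -> bool:
    """True if the preset vital-organ volume is less than m times the
    volume of the non-interventionable region of the fast segmentation mode."""
    ratio = (np.count_nonzero(vital_organs_mask)
             / np.count_nonzero(non_interventionable_mask))
    return ratio < m
```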
  • the setting of the efficiency factor m is related to the type of interventional surgery.
  • Interventional surgery types may include but are not limited to urological surgery, thoracoabdominal surgery, cardiovascular surgery, obstetrics and gynecology interventional surgery, musculoskeletal surgery, etc.
  • the preset efficiency factor m in urological surgery can be set smaller; the preset efficiency factor m in thoracoabdominal surgery can be set larger.
  • the value of the preset efficiency factor m may not only affect the segmentation efficiency and/or segmentation detail, but also affect the time spent in the segmentation process.
  • a larger preset efficiency factor m means that the preset total volume of important organs in the precise segmentation mode is relatively larger, which leads to a relatively longer precise segmentation time. Therefore, the efficiency factor can be determined by comprehensively considering the segmentation requirements and the segmentation time. In other words, the efficiency factor can be comprehensively determined based on the type of interventional surgery, the requirements for segmentation efficiency, the requirements for segmentation detail, and the requirements for segmentation time.
  • the efficiency factor m can be reasonably set based on big data and/or historical data.
  • the segmentation efficiency and segmentation time corresponding to different efficiency factors m for a given type of interventional surgery can be collected from big data and/or historical data, and an efficiency factor m whose segmentation efficiency and segmentation time meet the needs of that type of interventional surgery can be determined accordingly.
  • the range of the efficiency factor m can also be optimized and updated based on the doctor's feedback, so that the updated range of the efficiency factor m can better meet the needs of interventional surgery and segmentation time.
  • when determining the efficiency factor m of a certain interventional surgery (recorded as the target surgery), one can first search big data and/or historical data to determine multiple operations with conditions similar to the target surgery (such as a similar surgery type, recorded as comparison operations), then calculate the similarity between the target surgery and each comparison operation (for example, based on segmentation detail, segmentation time, etc.) to select the comparison operations most similar to the target surgery, and thereby determine the efficiency factor m of the target surgery based on the efficiency factors m of the selected comparison operations.
  • the efficiency factor m may range from 0.5 to 0.65.
  • the efficiency factor m may be determined based on big data and/or historical data using a machine learning model.
  • the inputs to the machine learning model may be parameters of an interventional procedure.
  • the parameters of the interventional surgery may include, but are not limited to, one or more of the types of interventional surgery, segmentation efficiency, segmentation time, etc.
  • the output of the machine learning model can be a preset efficiency factor m.
  • Machine learning models can be obtained through training based on big data and/or historical data. For example, big data and/or historical data are used as training samples to train an initial machine learning model to obtain the machine learning model.
  • machine learning models may include, but are not limited to, linear classification models (LR), support vector machine models (SVM), naive Bayes models (NB), K-nearest neighbor models (KNN), decision tree models (DT), ensemble models (RF/GBDT, etc.), and so on.
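  • As an illustrative sketch of fitting such a model with scikit-learn (the feature encoding and the sample data are assumptions, not values from this specification):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical historical samples: [surgery-type code, required segmentation
# efficiency, allowed segmentation time in seconds] -> efficiency factor m.
X = np.array([[0, 0.8, 120.0],
              [1, 0.6, 300.0],
              [1, 0.7, 240.0],
              [2, 0.9, 90.0]])
y = np.array([0.52, 0.63, 0.60, 0.50])

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
m = float(model.predict([[1, 0.65, 260.0]])[0])  # predicted efficiency factor
```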
  • segmentation masks of all important organs of the scanned images during surgery can be obtained through segmentation.
  • segmentation masks and blood vessel masks of all important organs in the scanned images during surgery can be obtained through segmentation.
  • the blood vessel masks inside all important organs are determined based on the segmentation masks and blood vessel masks of all important organs in the intra-operative scan images. It can be seen that in the precise segmentation mode the segmented image content is more detailed, which makes the planned path more selective and also enhances the robustness of image processing.
  • Figure 3 is an exemplary flowchart of a segmentation process involved in an image-assisted method for interventional surgery according to some embodiments of this specification.
  • the segmentation process 300 involved in the interventional surgery image-assisted method may include the following steps:
  • Step 310 Perform rough segmentation on at least one element in the target structure set in the medical image
  • Step 320 Obtain a mask of at least one element
  • Step 330 determine the positioning information of the mask
  • Step 340 Accurately segment at least one element based on the positioning information of the mask.
  • medical images may include pre-operative enhanced images and intra-operative scan images.
  • the target structure set may include any one or more of the first target structure set, the second target structure set, and the fourth target structure set.
  • a threshold segmentation method, a region growing method, or a level set method may be used to perform a coarse segmentation operation on at least one element in the target structure set in the medical image.
• Elements may include the target organ in the medical image, blood vessels within the target organ, lesions, non-interventionable areas, all important organs, etc.
• coarse segmentation based on the threshold segmentation method can be implemented in the following manner: multiple different pixel threshold ranges are set, each pixel in the medical image is classified according to its pixel value, and pixels whose values fall within the same threshold range are divided into the same region.
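• A minimal sketch of such threshold-based coarse segmentation (the intensity ranges and array shape are illustrative assumptions):

```python
# Illustrative sketch only: assign each voxel to the threshold range containing it.
import numpy as np

def threshold_coarse_segmentation(image, threshold_ranges):
    """image: intensity ndarray; threshold_ranges: list of (low, high) pairs."""
    labels = np.zeros(image.shape, dtype=np.uint8)
    for idx, (low, high) in enumerate(threshold_ranges, start=1):
        labels[(image >= low) & (image < high)] = idx   # same range -> same region
    return labels

regions = threshold_coarse_segmentation(
    np.random.randint(-200, 400, (4, 64, 64)),          # toy volume
    [(-200, 0), (0, 100), (100, 400)])                  # illustrative ranges
```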
• coarse segmentation based on the region growing method can be implemented in the following manner: starting from known seed pixels in the medical image, or from a predetermined region composed of pixels, a similarity discrimination condition is preset as needed; based on this condition, each pixel is compared with its surrounding pixels (or the predetermined region is compared with its surrounding regions), pixels or regions with high similarity are merged, and merging stops when the process can no longer be repeated, completing the coarse segmentation.
  • the preset similarity discrimination condition can be determined based on preset image features, for example, such as grayscale, texture and other image features.
• coarse segmentation based on the level set method can be implemented in the following manner: the target contour of the medical image is set as the zero level set of a higher-dimensional function, the function is evolved, the zero level set is extracted from the output to obtain the target contour, and the pixel region within the contour is then segmented.
  • a method based on a deep learning convolutional network can be used to perform a coarse segmentation operation on at least one element of the target structure set in the medical image.
  • methods based on deep learning convolutional networks may include segmentation methods based on fully convolutional networks.
  • the convolutional network can adopt a network framework based on a U-shaped structure, such as UNet, etc.
• the network framework of the convolutional network may be composed of an encoder, a decoder, and residual (skip) connection structures, where the encoder and decoder consist of convolutional layers or convolutional layers combined with an attention mechanism: the convolutional layers are used to extract features, the attention module is used to apply more attention to key areas, and the residual connection structure is used to pass features of different dimensions extracted by the encoder to the decoder part, with the segmentation result finally output via the decoder.
• coarse segmentation based on a deep learning convolutional network can be implemented in the following manner: the encoder of the convolutional neural network extracts features from the medical image through convolution, and the decoder of the convolutional neural network then restores the features into a pixel-level segmentation probability map.
• the segmentation probability map represents the probability that each pixel in the image belongs to a specific category; finally, the segmentation probability map is converted into a segmentation mask, thereby completing the coarse segmentation.
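• A minimal sketch of converting a segmentation probability map into a mask (the argmax/threshold rule is an assumption; the disclosure does not fix it):

```python
# Illustrative sketch only: per-voxel probabilities -> segmentation mask.
import numpy as np

def probability_map_to_mask(prob_map, threshold=0.5):
    """prob_map: (classes, D, H, W) softmax output, or a (D, H, W) foreground map."""
    if prob_map.ndim == 4:
        return np.argmax(prob_map, axis=0).astype(np.uint8)    # multi-class
    return (prob_map >= threshold).astype(np.uint8)            # binary foreground
```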
  • FIG. 4 is an exemplary flowchart of a process of determining positioning information of an element mask according to some embodiments of this specification.
• Figure 5 is an exemplary flowchart of a soft connected domain analysis process for an element mask according to some embodiments of this specification.
  • Figure 6 is a comparison diagram of exemplary effects of coarse segmentation using soft connected domain analysis on element masks according to some embodiments of this specification.
  • determining the positioning information of the element mask can be implemented in the following manner: performing soft connected domain analysis on the element mask.
• A connected domain, i.e., a connected area, generally refers to an image region composed of foreground pixels that have the same pixel value and are adjacent in position.
  • step 330 performs soft connected domain analysis on the element mask, which may include the following sub-steps:
  • Sub-step 331, determine the number of connected domains
• Sub-step 332 when the number of connected domains ≥ 2, determine the area of the connected domain that meets the preset conditions;
  • Sub-step 333 When the ratio of the area of the largest connected domain among the multiple connected domains to the total area of the connected domains is greater than the first threshold M, it is determined that the largest connected domain meets the preset conditions;
  • Sub-step 334 determine that the retained connected domain at least includes the maximum connected domain
  • Sub-step 335 Determine the positioning information of the element mask based on the preserved connected domain.
  • the preset conditions refer to the conditions that need to be met when the connected domain is retained as a connected domain.
  • the preset condition may be a limiting condition on the area of the connected domain.
• the medical image may include multiple connected domains with different areas. The connected domains can be sorted by area, for example from large to small, and recorded as the first connected domain, the second connected domain, ..., the k-th connected domain, where the first connected domain may be the connected domain with the largest area among the multiple connected domains, also called the maximum connected domain. In this case, the preset conditions for determining connected domains of different area orders as retained connected domains can differ.
  • the connected domains that meet the preset conditions may include: connected domains whose areas are ordered from largest to smallest and are within the preset order n.
• for example, when the preset order n is 3, whether each connected domain is a retained connected domain can be determined in order of area according to the corresponding preset conditions: first determine whether the first connected domain is a retained connected domain, then determine whether the second connected domain is a retained connected domain, and so on.
  • the preset order n may be set based on the category of the target structure, for example, chest target structure, abdominal target structure.
  • the first threshold M may range from 0.8 to 0.95, within which the expected accuracy of soft connected domain analysis can be ensured. In some embodiments, the first threshold M may range from 0.9 to 0.95, further improving the accuracy of soft connected domain analysis. In some embodiments, the first threshold M may be set based on the category of the target structure, for example, chest target structure, abdominal target structure. In some embodiments, the preset order n/first threshold M can also be reasonably set based on machine learning and/or big data, and is not further limited here.
  • step 330 performs soft connected domain analysis on the element mask, which can be performed in the following manner:
• when the number of connected domains is 0, the corresponding mask is empty, that is, mask acquisition or coarse segmentation failed or the segmentation object does not exist, and no processing is performed.
• for example, for the spleen in the abdominal cavity, the spleen may have been removed, in which case the spleen mask is empty.
• when the number of connected domains is 1, there is only one connected domain, with no false positives and no separation or disconnection, and the connected domain is retained. It can be understood that when the number of connected domains is 0 or 1, there is no need to use a preset condition to determine whether the connected domain is a retained connected domain.
• when the number of connected domains is 2, connected domains A and B are obtained according to their areas S, where the area of connected domain A is larger than the area of connected domain B, that is, S(A) > S(B).
• connected domain A can also be called the first connected domain or the maximum connected domain; connected domain B can be called the second connected domain.
• the preset condition that a connected domain needs to satisfy to be a retained connected domain can be the relationship between the ratio of the maximum connected domain area to the total connected domain area and a threshold.
• when the ratio of the area of A to the total area of A and B is greater than the first threshold M, that is, S(A)/S(A+B) > M, connected domain B can be determined to be a false positive area and only connected domain A is retained (that is, connected domain A is determined to be a retained connected domain); when the ratio of the area of A to the total area of A and B is less than or equal to the first threshold M, both A and B can be determined to be parts of the element mask and both are retained (that is, connected domains A and B are determined to be retained connected domains).
• when the number of connected domains is 3 or more, the preset condition for retaining the maximum connected domain (i.e., connected domain A) may be the relationship between the ratio of the maximum connected domain area to the total connected domain area and a threshold, or the relationship between the ratio of the second connected domain area to the maximum connected domain area and a threshold (for example, a second threshold N).
• if the ratio of the area of connected domain A to the total area S(T) is greater than the first threshold M, that is, S(A)/S(T) > M, or the ratio of the area of connected domain B to the area of connected domain A is less than the second threshold N, that is, S(B)/S(A) < N, connected domain A is determined to be the element mask part and retained (that is, connected domain A is a retained connected domain), and the remaining connected domains are all determined to be false positive areas; otherwise, the calculation continues, that is, it is further determined whether the second connected domain (i.e., connected domain B) is a retained connected domain.
• the preset condition that connected domain B needs to satisfy as a retained connected domain may be the relationship between the ratio of the sum of the areas of the first and second connected domains to the total connected domain area and the first threshold M, or the relationship between the ratio of the third connected domain area to the sum of the first and second connected domain areas and a threshold (for example, the second threshold N).
  • the judgment method of connected domain C is similar to the judgment method of connected domain B.
• the preset condition that connected domain C needs to satisfy as a retained connected domain can be the relationship between the ratio of the sum of the areas of the first, second, and third connected domains to the total connected domain area and the first threshold M, or the relationship between the ratio of the fourth connected domain area to the sum of the first, second, and third connected domain areas and a threshold (for example, the second threshold N).
  • the second threshold N may range from 0.03 to 0.3, within which the expected accuracy of soft connected domain analysis can be ensured.
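• A minimal sketch of the soft connected domain analysis described above, using scipy labeling; the retention loop is an assumption consistent with the text, with M, N, and the preset order n taken from the disclosed ranges:

```python
# Illustrative sketch only: soft connected domain analysis of an element mask.
import numpy as np
from scipy import ndimage

def soft_connected_domain_analysis(mask, M=0.9, N=0.1, n=3):
    labeled, num = ndimage.label(mask)
    if num <= 1:                          # 0: empty mask; 1: keep the single domain
        return mask.astype(bool)
    areas = ndimage.sum(np.ones_like(labeled), labeled, np.arange(1, num + 1))
    order = np.argsort(areas)[::-1]       # domain indices sorted by area, descending
    total, kept, cum = areas.sum(), [], 0.0
    for rank, idx in enumerate(order[:n]):
        kept.append(idx + 1)              # labels are 1-based
        cum += areas[idx]
        nxt = order[rank + 1] if rank + 1 < num else None
        # Retain what we have when the kept domains dominate (ratio > M)
        # or the next domain is negligible relative to them (ratio < N).
        if cum / total > M or (nxt is not None and areas[nxt] / cum < N):
            break
    return np.isin(labeled, kept)         # remaining domains are false positives
```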
• in Figure 6, the upper left and lower left are respectively the cross-sectional medical image and the three-dimensional medical image of the coarse segmentation results obtained without soft connected domain analysis;
• the upper right and lower right are respectively the cross-sectional medical image and the three-dimensional medical image of the coarse segmentation results obtained with soft connected domain analysis.
• after comparison, it can be seen that in the coarse segmentation results based on soft connected domain analysis, the false positive areas outlined by the boxes in the left images are removed; compared with previous connected domain analysis methods, the accuracy and reliability of excluding false positive areas are higher, which directly contributes to the subsequent reasonable extraction of bounding boxes from element mask positioning information and improves segmentation efficiency.
  • the positioning information of the element mask may be the position information of the enclosing rectangle of the element mask, for example, the coordinate information of the border line of the enclosing rectangle.
  • the bounding rectangle of the element mask covers the positioning area of the element.
  • the bounding rectangle may be displayed in the medical image in the form of a bounding rectangular frame.
• the bounding rectangle may be constructed based on the outermost edges of the element's connected area in each direction, for example, the edges of the connected area in the up, down, left, and right directions, to construct a circumscribed rectangular frame for the element mask.
  • the bounding rectangle of the element mask may be a rectangular box or a combination of multiple rectangular boxes.
• for example, it can be a single rectangular frame with a larger area, or a larger-area frame formed by combining multiple rectangular frames with smaller areas.
• the bounding rectangle of the element mask may be a circumscribed rectangle consisting of only one rectangle; for example, a larger circumscribed rectangle is constructed based on the outermost edges of the connected area in all directions.
• the above large-area circumscribed rectangle can be applied to elements with a single connected area.
• the bounding rectangle of the element mask may also be a circumscribed rectangular frame composed of multiple rectangular frames; when there are multiple connected areas, the rectangular frames corresponding to them are combined into one overall circumscribed frame based on their outermost edges. It is understandable that if the three rectangular frames corresponding to three connected areas are combined into one total circumscribed rectangular frame, subsequent computation treats them as a single frame, which can reduce the amount of calculation while ensuring the expected accuracy.
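• A minimal sketch of extracting the positioning information (a circumscribed bounding box) from the retained voxels of an element mask:

```python
# Illustrative sketch only: bounding box of the nonzero voxels of a mask.
import numpy as np

def bounding_box(mask):
    """Return ((zmin, zmax), (ymin, ymax), (xmin, xmax)), or None if empty."""
    coords = np.nonzero(mask)
    if coords[0].size == 0:
        return None                     # empty mask: positioning failed
    return tuple((int(c.min()), int(c.max())) for c in coords)
```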
• the location information of the multiple connected domains can be determined first, and the positioning information of the element mask is then obtained based on that location information. For example, one can first determine the connected domains among the multiple connected domains that meet the preset conditions (that is, the retained connected domains), and then obtain the positioning information of the element mask based on the location information of the retained connected domains.
• determining the positioning information of the element mask may also include the following operation: positioning the element mask based on the positioning coordinates of a preset element.
  • this operation may be performed if positioning of the element mask's bounding rectangle fails. It is understandable that when the coordinates of the enclosing rectangle of the element mask do not exist, it is judged that the positioning of the corresponding element fails.
• the preset element can be an element with relatively stable positioning (for example, an organ with relatively stable positioning), for which the probability of positioning failure is low, thereby enabling accurate positioning of the element mask.
• because the probability of positioning failure of the liver, stomach, spleen, and kidneys in the abdominal cavity is low, and the probability of positioning failure of the lungs in the thoracic cavity is low, the positioning of these organs is relatively stable. Therefore, the liver, stomach, spleen, and kidneys can be used as preset organs in the abdominal cavity; that is, the preset elements can include the liver, stomach, spleen, kidneys, lungs, or any other feasible organ tissue.
  • the organ mask in the abdominal cavity can be repositioned based on the positioning coordinates of the liver, stomach, spleen, and kidneys. In some embodiments, the organ mask in the chest range may be positioned based on the positioning coordinates of the lungs.
• the element mask can be repositioned using the positioning coordinates of the preset element as the reference coordinates.
  • the positioning coordinates of the liver, stomach, spleen, and kidney are used as the coordinates for repositioning, and the failed element in the abdominal cavity is repositioned accordingly.
  • the positioning coordinates of the lungs are used as the coordinates for repositioning, and the element that fails to be positioned in the chest is repositioned accordingly.
• for example, the positioning coordinates of the top of the liver, the bottom of the kidneys, the left side of the spleen, and the right side of the liver can be used as the repositioning coordinates in the transverse direction (upper side, lower side) and the coronal direction (left side, right side), and the most anterior and most posterior ends of the coordinates of these four organs can be taken as the repositioning coordinates in the sagittal direction (anterior, posterior); based on these, the elements that failed to be positioned in the abdominal cavity are repositioned.
• when the element that failed to be positioned is located in the thoracic cavity, the circumscribed rectangular frame formed by the lung positioning coordinates is expanded by a certain number of pixels, and the element that failed to be positioned in the chest is repositioned accordingly.
  • Figure 7 is an exemplary flowchart of a process of accurately segmenting elements according to some embodiments of this specification.
  • accurately segmenting at least one element based on the positioning information of the mask may include the following sub-steps:
  • Sub-step 341 Perform preliminary precise segmentation on at least one element.
  • the preliminary precise segmentation may be a precise segmentation based on the positioning information of the rough segmented element mask.
  • a preliminary precise segmentation of the element may be performed based on the input data and the bounding rectangular frame positioned by rough segmentation. Precisely segmented element masks can be generated through preliminary accurate segmentation.
• Sub-step 342 determine whether the positioning information of the element mask is accurate. Through sub-step 342, it can be determined whether the positioning information of the element mask obtained by coarse segmentation is accurate, and further whether the coarse segmentation is accurate.
  • the element mask of the preliminary precise segmentation can be calculated to obtain its positioning information, and the positioning information of the rough segmentation can be compared with the positioning information of the precise segmentation.
• for example, the circumscribed rectangular frame of the coarsely segmented element mask can be compared with the circumscribed rectangular frame of the precisely segmented element mask to determine the difference between the two.
• in some embodiments, the comparison can be made in the six directions of three-dimensional space (that is, the entire circumscribed frame is a cuboid in three-dimensional space) to determine the difference between the two frames.
  • whether the positioning information of the roughly segmented element mask is accurate can be determined based on the positioning information of the initially precisely segmented element mask. In some embodiments, whether the judgment result is accurate can be determined based on the difference between the coarse segmentation positioning information and the precise segmentation positioning information.
• the positioning information may be a circumscribed rectangle of the element mask; whether the circumscribed rectangle of the coarsely segmented element mask is accurate is determined based on the circumscribed rectangle of the coarsely segmented element mask and the circumscribed rectangle of the precisely segmented element mask.
  • the difference between the coarse segmentation positioning information and the precise segmentation positioning information may refer to the distance between the closest border lines in the coarse segmentation enclosing rectangular frame and the precise segmentation enclosing rectangular frame.
• when the positioning information of the coarse segmentation is close to the positioning information of the precise segmentation in some direction (that is, the distance between the closest border lines of the coarse and precise circumscribed rectangular frames is relatively small), it can be determined that the positioning information of the coarse segmentation is inaccurate in that direction.
• the coarse segmentation bounding rectangle is obtained by expanding the border lines of the original coarse segmentation that are close to the element outward by a certain number of voxels (for example, 15-20 voxels).
  • whether the positioning information of coarse segmentation is accurate can be determined based on the relationship between the distance between the nearest border lines in the roughly segmented circumscribed rectangular frame and the precisely segmented circumscribed rectangular frame and a preset threshold, for example , when the distance is less than the preset threshold, it is determined to be inaccurate, and when the distance is greater than the preset threshold, it is determined to be accurate.
  • the value range of the preset threshold may be less than or equal to 5 voxels.
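• A minimal sketch of the per-direction accuracy judgment described above (a face-to-face distance below the threshold suggests the element may extend beyond the coarse box, hence that direction is inaccurate); the box representation is an assumption:

```python
# Illustrative sketch only: flag inaccurate directions of a coarse bounding box.
def inaccurate_directions(coarse_box, precise_box, threshold=5):
    """Boxes as ((zmin, zmax), (ymin, ymax), (xmin, xmax)); returns per-face flags."""
    flags = {}
    for axis, ((c_lo, c_hi), (p_lo, p_hi)) in enumerate(zip(coarse_box, precise_box)):
        flags[(axis, "low")] = (p_lo - c_lo) < threshold    # precise too close to low face
        flags[(axis, "high")] = (c_hi - p_hi) < threshold   # precise too close to high face
    return flags
```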
  • FIG. 8 to 9 are exemplary schematic diagrams of positioning information determination of element masks according to some embodiments of this specification.
  • FIG. 10A is an exemplary diagram of determining the sliding direction based on the positioning information of the element mask according to some embodiments of this specification.
• Figures 8 and 9 show the element mask A obtained by coarse segmentation, the circumscribed rectangular frame B of element mask A (that is, the positioning information of element mask A), and the circumscribed rectangular frame C obtained by the first precise segmentation based on the coarse circumscribed rectangular frame.
  • Figure 10A also shows the sliding window B1 obtained after sliding the roughly divided circumscribed rectangular frame B.
  • (a) is a schematic diagram before the sliding operation
  • (b) is a schematic diagram after the sliding operation.
  • a planar rectangular frame within a plane of a three-dimensional circumscribed rectangular frame is used as an example.
• the difference between the right border line of the precisely segmented circumscribed rectangular frame C and the corresponding border line of the coarsely segmented circumscribed rectangular frame B is small; from this it can be judged that frame B is inaccurate in the direction corresponding to the right border line, and the right border line needs to be adjusted.
• the upper, lower, and left border lines of C differ considerably from the upper, lower, and left border lines of B; from this it can be judged that frame B is accurate in the directions corresponding to the upper, lower, and left border lines.
  • Sub-step 343a when the judgment result is inaccurate, obtain accurate positioning information based on the adaptive sliding window.
• when the coarse segmentation result is inaccurate, the elements obtained by precise segmentation are likely to be inaccurate as well; corresponding adaptive sliding window calculations can be performed to obtain accurate positioning information before continuing the precise segmentation.
  • obtaining accurate positioning information based on adaptive sliding windows can be implemented in the following manner: determining at least one direction in which the positioning information is inaccurate; and performing adaptive sliding window calculations in the direction according to the overlap rate parameter.
• specifically, at least one direction in which the circumscribed rectangular frame is inaccurate can be determined; after determining that the coarse circumscribed rectangular frame is inaccurate, the frame is slid in the corresponding direction according to the input preset overlap rate parameter (the sliding window operation), and the sliding window operation is repeated until all circumscribed rectangular frames are completely accurate.
  • the overlap rate parameter refers to the ratio of the overlap area between the initial circumscribed rectangular frame and the sliding circumscribed rectangular frame to the area of the initial circumscribed rectangular frame.
• the larger the overlap rate parameter, the shorter the sliding step length of the sliding window operation.
• accordingly, if the sliding window calculation process should be more concise (that is, fewer sliding window steps), the overlap rate parameter can be set smaller; if the sliding window results should be more accurate, the overlap rate parameter can be set larger.
  • the sliding step size for sliding window operation may be calculated according to the current overlap rate parameter. According to the judgment method in Figure 8, it can be seen that the directions corresponding to the right and lower border lines of the roughly segmented circumscribed rectangular frame B in Figure 10A are inaccurate.
• the direction corresponding to the right border line of circumscribed rectangular frame B is recorded as the first direction (perpendicular to the right border line of B), and the direction corresponding to the lower border line is recorded as the second direction (perpendicular to the lower border line of B).
• assuming the side length of circumscribed rectangular frame B is a and the overlap rate parameter is 60%, the corresponding step size is a*(1-60%).
• accordingly, the right border line of frame B can slide a*(1-60%) along the first direction, and the lower border line of frame B can slide the corresponding step along the second direction. The sliding window operations on the right and lower border lines of frame B are repeated until frame B is completely accurate, as shown by sliding window B1 in Figure 10A(b).
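• A minimal sketch of the step-size rule above (the function name is illustrative):

```python
# Illustrative sketch only: sliding step derived from the overlap rate parameter.
def sliding_step(side_length: float, overlap_rate: float) -> float:
    """For side length a and overlap rate 60%, the step is a * (1 - 60%)."""
    return side_length * (1.0 - overlap_rate)

assert abs(sliding_step(100.0, 0.6) - 40.0) < 1e-9   # a = 100 -> step 40
```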
• in some embodiments, the coordinate values of the border lines in the six directions of the finely segmented circumscribed rectangular frame are compared one by one with the coordinate values of the border lines in the six directions of the coarsely segmented circumscribed rectangular frame, using a coordinate difference threshold (for example, 5 pt; the threshold can be set according to the actual situation and is not limited here).
• for example, the pixel coordinates in the four directions corresponding to the four border lines of the finely segmented circumscribed rectangular frame C are compared one by one with the pixel coordinates in the four directions corresponding to the four border lines of the coarsely segmented circumscribed rectangular frame B. When the difference in pixel coordinates in a direction is less than the coordinate difference threshold (for example, 8 pt), it can be determined that the coarsely segmented circumscribed rectangular frame in Figure 8 is inaccurate in that direction.
• for example, if the top difference is 20 pt, the bottom difference is 30 pt, the right difference is 1 pt, and the left difference is 50 pt, only the right difference is below the threshold, so the frame is inaccurate in the right direction.
  • B1 is a circumscribed rectangular frame (also called a sliding window) obtained by sliding the roughly segmented circumscribed rectangular frame B.
  • the sliding window is a roughly segmented circumscribed rectangular frame that meets the expected accuracy standard.
  • the direction corresponding to each border line that does not meet the standard is moved in sequence.
• the sliding of each side depends on the overlap rate of B1 and B, where the overlap rate can be the ratio of the current overlapping area of the coarsely segmented circumscribed rectangular frame B and the sliding window B1 to the total area, for example, a current overlap rate of 40%, and so on.
  • the sliding order of the border lines of the roughly divided circumscribed rectangular frame B may be from left to right, from top to bottom, or other feasible order, which is not further limited here.
• Figures 10B-10E are exemplary schematic diagrams of precise segmentation after sliding windows according to some embodiments of this specification.
• as shown in Figures 10B-10E, based on the original coarsely segmented circumscribed rectangular frame (the original sliding window), an accurate coarsely segmented circumscribed rectangular frame is obtained after adaptive sliding, and the accurate coordinate values of the circumscribed rectangular frame can be obtained; based on these coordinate values and the overlap rate parameter, the new sliding windows are precisely segmented, and the precise segmentation results are superimposed with the preliminary precise segmentation result to obtain the final precise segmentation result.
  • the sliding window operation can be performed on the original sliding window B to obtain the sliding window B1 (the maximum range of the circumscribed rectangular frame after the sliding window operation).
• B slides the corresponding step along the first direction to obtain sliding window B1-1; the entire range of sliding window B1-1 is then precisely segmented to obtain the precise segmentation result of sliding window B1-1.
  • B can slide the corresponding step along the second direction to obtain the sliding window B1-2, and then accurately segment the entire range of the sliding window B1-2 to obtain an accurate segmentation result of the sliding window B1-2.
  • B can obtain the sliding window B1-3 by sliding (for example, B can obtain the sliding window B1-2 by sliding as shown in Figure 10C, and then slide the sliding window B1-2 to obtain the sliding window B1-3) , and then accurately segment the entire range of the sliding window B1-3, and obtain the accurate segmentation result of the sliding window B1-3.
  • sliding window B1-1, sliding window B1-2 and sliding window B1-3 are superimposed with the preliminary precise segmentation results to obtain the final precise segmentation result.
  • the sizes of the sliding window B1-1, the sliding window B1-2 and the sliding window B1-3 are the same as the size of B.
  • Sliding window B1 is the final sliding window result obtained by performing continuous sliding window operations on original sliding window B, namely sliding window B1-1, sliding window B1-2 and sliding window B1-3.
• when the precise segmentation results of sliding windows B1-1, B1-2 and B1-3 are superimposed with the preliminary precise segmentation result, there may be repeated overlapping parts; for example, there may be an intersection between sliding window B1-1 and sliding window B1-2, and the intersection may be superimposed repeatedly.
• in some embodiments, this can be handled as follows: for a certain part of element mask A, if the segmentation result of one sliding window is accurate for this part and that of the other sliding window is inaccurate, the accurate sliding window's result is used as the segmentation result of this part; if the segmentation results of both sliding windows are accurate, the result of the right sliding window is used; if neither is accurate, the result of the right sliding window is used and precise segmentation continues until the result is accurate.
• obtaining accurate positioning information based on the adaptive sliding window is a cyclic process: after the precise segmentation border lines and coarse segmentation border lines are compared, updated coordinate values of the precisely segmented circumscribed rectangular frame are obtained through the adaptive sliding window; the precisely segmented circumscribed frame is then expanded by a certain number of pixels and set as the coarse circumscribed frame for a new round, the new circumscribed frame is precisely segmented again to obtain a new precisely segmented circumscribed frame, and whether it meets the accuracy requirement is calculated. If the requirement is met, the cycle ends; otherwise, the cycle continues.
  • a deep convolutional neural network model may be used to perform precise segmentation on at least one element in the coarse segmentation.
  • the historical medical images initially acquired before rough segmentation can be used as training data, and the historical accurate segmentation result data can be used to train the deep convolutional neural network model.
  • the historical medical images and the historical accurate segmentation result data are obtained from the medical scanning device 110 .
  • historical scanned medical images of the scanned object and historical accurate segmentation result data can be obtained from the terminal 130, the processing device 140, and the storage device 150.
  • Sub-step 343b when the judgment result is accurate, the preliminary accurate segmentation result is output.
• that is, when the judgment result (the coarse segmentation result) is accurate, the result data of the at least one precisely segmented element can be output.
  • image post-processing operations may be performed before outputting the segmentation results.
  • Image post-processing operations may include edge smoothing and/or image denoising, etc.
  • edge smoothing processing may include smoothing processing or blurring processing to reduce noise or distortion of medical images.
  • smoothing processing or blurring processing may adopt the following methods: mean filtering, median filtering, Gaussian filtering, and bilateral filtering.
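• A minimal sketch of the post-processing filters named above (bilateral filtering is only noted, since it would need an additional library such as OpenCV or scikit-image):

```python
# Illustrative sketch only: smoothing/denoising a segmentation output.
from scipy import ndimage

def postprocess(volume, method="gaussian"):
    v = volume.astype(float)
    if method == "mean":
        return ndimage.uniform_filter(v, size=3)      # mean filtering
    if method == "median":
        return ndimage.median_filter(v, size=3)       # median filtering
    if method == "gaussian":
        return ndimage.gaussian_filter(v, sigma=1.0)  # Gaussian filtering
    raise ValueError("bilateral filtering needs e.g. cv2.bilateralFilter")
```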
  • Figure 11 is an exemplary effect comparison diagram of segmentation results according to some embodiments of this specification.
• the upper left and lower left are respectively the cross-sectional and three-dimensional medical images of segmentation results obtained with traditional technology;
• the upper right and lower right are respectively the cross-sectional and three-dimensional medical images obtained with the organ segmentation method provided by the embodiments of the present application.
  • Step 230B Register the first segmented image and the second segmented image to determine the spatial position of the third target structure set during the operation.
  • step 230B may be performed by result determination module 2330.
  • the third target structure set is a complete set of structures obtained after registering the first segmented image and the second segmented image.
  • the third set of target structures may include the target organ (eg, target organ), blood vessels within the target organ, lesions, and other areas/organs (eg, non-interventional areas, all vital organs).
  • other regions/organs may refer to non-interventionable areas; in the precise segmentation mode, other regions/organs may refer to all important organs.
  • at least one element in the third target structure set is included in the first target structure set, and at least one element in the third target structure set is not included in the second target structure set.
• for example, the first target structure set includes the blood vessels within the target organ, the target organ, and lesions, while the second target structure set includes non-interventionable areas (or all important organs), the target organ, and lesions; in this case, the blood vessels within the target organ are included in the first target structure set and are not included in the second target structure set.
  • the fourth target structure set can also be regarded as part of the third target structure set, for example, the inaccessible area and all important organs outside the target organ.
  • the third target structure set can more comprehensively and accurately reflect the current condition of the scanned object (eg, patient).
  • the path planning of the interventional surgery can be planned based on the spatial position of the third target structure set, so that the puncture needle can effectively avoid the impenetrable area and/or all important organs, and successfully reach the lesion.
  • the first segmented image may include precise structural features of the first target structure set (eg, blood vessels within the preoperative target organ, preoperative target organ, preoperative lesion); the second segmented image may include the second Precise structural characteristics of the target structure set (e.g., intraoperative target organ, intraoperative lesion, intraoperative inaccessible area/all vital organs).
  • the first segmented image and the second segmented image may be subjected to separation processing of the appearance features of the target structure set and the background.
• the separation of appearance features and background can be based on artificial neural networks (linear decision functions, etc.), threshold-based segmentation methods, edge-based segmentation methods, image segmentation methods based on cluster analysis (such as K-means), or any other feasible algorithm, such as segmentation methods based on wavelet transform.
• taking as an example the case where the first segmented image includes the blood vessels within the preoperative target organ and the structural features of the preoperative target organ (that is, the first target structure set includes the blood vessels within the target organ and the target organ), and the second segmented image includes the structural features of the intraoperative target organ, intraoperative lesions, and intraoperative non-interventionable areas/all important organs (that is, the second target structure set includes the target organ, intraoperative lesions, and non-interventionable areas/all important organs), an illustrative description of the registration process is provided. It can be understood that the structural features of the lesion are not limited to being included in the second segmented image; in other embodiments, they may also be included in the first segmented image, or in both the first and second segmented images.
  • Figure 12 is an exemplary flowchart of a process of registering a first segmented image and a second segmented image shown in some embodiments of this specification.
  • Step 231B Register the first segmented image and the second segmented image to determine the registration deformation field.
  • Registration may be an image processing operation that uses spatial transformation to make the corresponding points of the first segmented image and the second segmented image consistent in spatial position and anatomical position.
  • the registration deformation field can be used to reflect the spatial position changes of the first segmented image and the second segmented image.
  • the intra-operative scan image can be transformed in spatial position based on the registration deformation field, so that the transformed intra-operative scan image and the pre-operative enhanced image are completely consistent in spatial position and anatomical position. .
  • FIG. 13-14 are exemplary flowcharts of the process of determining a registration deformation field shown in some embodiments of this specification.
  • Figure 15 is an exemplary demonstration diagram of obtaining the first segmented image and the second segmented image after segmentation as shown in some embodiments of this specification.
  • step 231B the process of registering the first segmented image and the second segmented image and determining the registration deformation field may include the following sub-steps:
  • Sub-step 2311B determine the first preliminary deformation field based on the registration between elements.
  • the elements may be element outlines of the first segmented image and the second segmented image (eg, organ outlines, blood vessel outlines, lesion outlines).
  • the registration between elements may refer to the registration between image areas covered by the element outlines (mask).
• as shown in Figures 14 and 15, the pre-surgery enhanced image is segmented to obtain the image area covered by organ outline A of the target organ (the area with the same or basically the same grayscale within the dotted region in the lower left figure), and the intraoperative scan image is segmented to obtain the image area covered by organ outline B of the target organ (the area with the same or basically the same grayscale within the dotted region in the lower right figure).
  • the first preliminary deformation field (deformation field 1 in Figure 14) is obtained through area registration between the image area covered by organ outline A and the image area covered by organ outline B.
  • the first preliminary deformation field may be a local deformation field.
• for example, the local deformation field for the liver contour is obtained from the preoperative liver contour A and the intraoperative liver contour B.
  • Sub-step 2312B Based on the first preliminary deformation field between elements, determine the second preliminary deformation field of the entire image.
• the full image can be an image of the entire region containing the elements; for example, the full image can be an image of the entire abdominal cavity.
  • the full image can be an image of the entire chest cavity.
  • the second preliminary deformation field of the entire image may be determined through interpolation based on the first preliminary deformation field.
  • the second preliminary deformation field may be a global deformation field.
  • the deformation field 2 of the full image size is determined by interpolation of the deformation field 1.
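• A minimal sketch of extending a local (organ-range) deformation field to a full-image deformation field; placing the local field and smoothing outward is one simple interpolation choice and an assumption, not the disclosed method:

```python
# Illustrative sketch only: local deformation field -> full-image deformation field.
import numpy as np
from scipy import ndimage

def extend_deformation_field(local_field, local_origin, full_shape):
    """local_field: (3, d, h, w) displacements on an organ ROI;
    local_origin: ROI offset (z, y, x) in the full image; returns (3, D, H, W)."""
    full = np.zeros((3, *full_shape), dtype=float)
    z, y, x = local_origin
    d, h, w = local_field.shape[1:]
    full[:, z:z + d, y:y + h, x:x + w] = local_field   # place the local field
    # Smooth each component so the field decays continuously outside the ROI.
    return np.stack([ndimage.gaussian_filter(c, sigma=5.0) for c in full])
```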
  • Sub-step 2313B Deform the floating image based on the second preliminary deformation field of the entire image, and determine the registration map of the floating image.
  • the floating image can be an image to be registered, for example, an enhanced image before surgery or a scanned image during surgery.
• when registering the intraoperative scan image to the preoperative image, the floating image is the intraoperative scan image, which can be deformed by the registration deformation field so that its spatial position is consistent with the preoperative image.
• when registering the preoperative enhanced image to the intraoperative scan image, the floating image is the preoperative enhanced image, which can be deformed by the registration deformation field so that its spatial position is consistent with the intraoperative scan image.
  • the registration map of the floating image may be the image of the intermediate registration result obtained during the registration process.
  • the registration map of the floating image can be the intermediate intra-operative scanned image obtained during the registration process.
  • the embodiments of this specification take the registration of pre-operative enhanced images to intra-operative scanned images as an example to describe the registration process in detail.
• the floating image (that is, the preoperative enhanced image) is deformed based on the second preliminary deformation field to determine the registration map of the preoperative enhanced image, that is, the intermediate registration result.
• for example, as shown in Figure 14, based on the obtained deformation field of the abdominal cavity range where the liver is located, the preoperative enhanced image (the abdominal enhanced image) is deformed and its registration map is obtained.
  • Sub-step 2314B Register the registration map of the floating image with the area in the first grayscale difference range in the reference image to obtain a third preliminary deformation field.
  • the reference image refers to the target image before registration, which may also be called the target image without registration.
  • the reference image refers to the intra-operative scanned image without registration action.
  • the third preliminary deformation field may be a local deformation field.
• sub-step 2314B can be implemented in the following manner: pixel grayscale calculations are performed on the different areas of the registration map of the floating image and of the reference image to obtain corresponding grayscale values, and the grayscale difference between the registration map of the floating image and the reference image is calculated for each area.
• when the difference value is within the first grayscale difference range, the difference between that area of the registration map of the floating image and the corresponding area of the reference image is small.
• for example, if the first grayscale difference range is 0 to 150, the grayscale difference of area Q1 between the registration map of the floating image and the reference image is 60, and the grayscale difference of area Q2 is 180, then the difference between the two images (that is, the registration map of the floating image and the reference image) in area Q1 is small while the difference in area Q2 is large, and only area Q1 of the two images is registered.
• elastic registration is performed on the registration map of the floating image and the areas of the reference image that fall within the first grayscale difference range (the areas where the difference is not large) to obtain deformation field 3 (that is, the third preliminary deformation field mentioned above).
  • Sub-step 2315B Determine the fourth preliminary deformation field of the entire image based on the third preliminary deformation field.
  • interpolation is performed to obtain a fourth preliminary deformation field of the entire image.
  • the fourth preliminary deformation field may be a global deformation field.
• this step can be used to expand the local third preliminary deformation field into the global fourth preliminary deformation field.
  • the deformation field 4 of the full image size is determined by interpolation of the deformation field 3.
  • Sub-step 2316B Based on the fourth preliminary deformation field, register the area in the second grayscale difference range to obtain a final registration map.
  • the area of the second grayscale difference range may be an area where the grayscale value difference between the registration map grayscale value of the floating image and the reference image grayscale value is larger.
• for example, a grayscale difference threshold can be set (for example, 150); the area where the difference between the grayscale value of the registration map of the floating image and that of the reference image is less than the threshold belongs to the first grayscale difference range, and the area where the difference is greater than the threshold belongs to the second grayscale difference range.
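• A minimal sketch of partitioning image areas into the first and second grayscale difference ranges with the threshold above:

```python
# Illustrative sketch only: split areas by grayscale difference.
import numpy as np

def split_by_gray_difference(registration_map, reference, threshold=150):
    diff = np.abs(registration_map.astype(float) - reference.astype(float))
    first_range = diff < threshold    # small difference: registered in sub-step 2314B
    second_range = ~first_range       # large difference: registered in sub-step 2316B
    return first_range, second_range
```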
• the final registration map can be obtained by performing multiple deformations on the floating image (for example, the preoperative enhanced image) based on at least one deformation field, yielding an image whose spatial position and anatomical position are the same as those of the intraoperative scan image.
• the regions in the second grayscale difference range are registered to obtain the final registration map; for example, for an area outside the spleen where the grayscale difference is relatively large, the area is deformed through deformation field 4 to obtain the final registration map.
• through registration, elements that are segmented in the floating image but not segmented in the reference image can be mapped from the floating image to the reference image.
• for example, when the floating image is the preoperative enhanced image and the reference image is the intraoperative scan image, the blood vessels within the target organ are segmented in the preoperative enhanced image but not in the intraoperative scan image; through registration, the segmented blood vessels within the target organ can be mapped to the intraoperative scan image.
• the registration method of Figures 13-14 can also be used for the registration of non-interventionable areas in the fast segmentation mode and of all important organs in the precise segmentation mode, or a similar effect can be achieved solely through the corresponding segmentation method.
  • Step 232B Determine the spatial position of the corresponding element during the operation based on the registered deformation field and the spatial position of at least some elements in the first target structure set in the pre-surgery enhanced image.
  • the spatial position of the blood vessels in the target organ during surgery (hereinafter referred to as blood vessels) may be determined based on the registration deformation field and the blood vessels in the target organ in the pre-surgery enhanced image.
• the spatial position of the blood vessel during surgery can be determined, based on the registration deformation field and the blood vessel in the pre-surgery enhanced image, according to the following formula (1):

I_S(x, y, z) = I_Q((x, y, z) + u(x, y, z))    (1)

• where I_Q represents the pre-operative enhanced image, (x, y, z) represents the three-dimensional spatial coordinates of the blood vessel, u(x, y, z) represents the registration deformation field from the pre-operative enhanced image to the intra-operative scanned image, and I_S represents the spatial position of the blood vessel in the scanned image during surgery.
• u(x, y, z) can also be understood as the offset from the three-dimensional coordinates of elements in the floating image (for example, blood vessels within the target organ) to their three-dimensional coordinates in the final registration map.
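• A minimal sketch of applying formula (1) with a dense displacement field (the voxel-space sampling convention is an assumption):

```python
# Illustrative sketch only: warp the pre-operative volume I_Q into intra-operative
# space by sampling it at (x, y, z) + u(x, y, z), per formula (1).
import numpy as np
from scipy import ndimage

def warp_with_deformation_field(preop_volume, u):
    """preop_volume: (D, H, W); u: (3, D, H, W) displacements in voxels."""
    grid = np.indices(preop_volume.shape).astype(float)   # identity coordinates
    coords = grid + u                                     # (x, y, z) + u(x, y, z)
    return ndimage.map_coordinates(preop_volume, coords, order=1, mode="nearest")
```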
• the blood vessels in the preoperative enhanced image can be deformed through the registration deformation field determined in step 231B, generating the spatial position of the blood vessels during surgery that matches their actual intraoperative position.
• the center point of the lesion can be calculated based on the determined spatial positions of the blood vessels and the lesion during surgery (the latter included in the second segmented image of the intraoperative scan image), and a safe area around the lesion and a potential needle insertion area can be generated.
  • the safe area around the lesion and the potential needle insertion area can be determined based on the determined interventional area and non-interventionable area.
  • a reference path from the percutaneous needle insertion point to the center point of the lesion can be planned based on the potential needle insertion area and basic obstacle avoidance constraints.
  • the basic obstacle avoidance constraints may include but are not limited to the needle insertion angle of the path, the needle insertion depth of the path, the path not intersecting with blood vessels and important organs, etc.
• it should be noted that the above descriptions of process 200B and process 300 are only examples and illustrations, and do not limit the scope of application of this specification.
  • various modifications and changes can be made to process 200B and process 300 under the guidance of this specification.
• in the fast segmentation mode, the non-interventionable area (for example, the part of the thoracoabdominal region remaining after the fat area is removed, that is, the non-fat area of the thoracoabdominal region) mask may adhere to the surface of the target organ mask, which affects the planning of the puncture path and also causes a large number of connected areas to appear.
• therefore, a mask fusion method based on rapid segmentation is needed that can solve the problem of the non-interventionable area mask adhering to the target organ mask surface without changing the original target organ mask and the main mask of the non-interventionable area, so that accurate and fast segmentation results can finally be obtained for puncture planning. For specific methods, see the description below.
  • FIG 16 is an exemplary flow chart of an image-assisted method for interventional surgery according to some embodiments of this specification. As shown in Figure 16, process 1600 may include:
  • Step 1610 Segment the target structure set in the medical image to obtain a fat removal mask.
  • step 1610 may be performed by segmentation module 2320.
• a fat mask and a thoracoabdominal mask can be obtained by segmenting a target structure set in a medical image (for example, the second target structure set in the intraoperative scan image); a fat removal mask can then be obtained based on the fat mask and the thoracoabdominal mask. In some embodiments, the thoracoabdominal mask and the fat mask can be subtracted, that is, the fat mask is removed from the thoracoabdominal mask, thereby obtaining the fat removal mask of the thorax and abdomen. In some embodiments, since the target organ is located within the thoracoabdominal region, the target organ mask is also located within the thoracoabdominal mask range, so the target organ mask may be included within the fat removal mask area obtained in this way.
• so that the area of the fat removal mask obtained based on the thoracoabdominal mask and the fat mask does not include the target organ mask, in some embodiments the thoracoabdominal mask and the target organ mask are first differenced (that is, the target organ mask is removed from the thoracoabdominal mask) to obtain the thoracoabdominal target-removal mask, and the fat removal mask is then obtained based on the thoracoabdominal target-removal mask and the fat mask.
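• A minimal sketch of the boolean mask subtraction described above (array-valued masks are an assumption):

```python
# Illustrative sketch only: thoracoabdominal mask minus target organ and fat masks.
import numpy as np

def fat_removal_mask(thoracoabdominal, fat, target_organ=None):
    m = thoracoabdominal.astype(bool)
    if target_organ is not None:
        m &= ~target_organ.astype(bool)   # remove the target organ mask first
    return m & ~fat.astype(bool)          # then remove the fat mask
```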
  • the process of segmenting the target structure set in the medical image to obtain the fat-free mask may be a segmentation process in a fast segmentation mode (also called a fast segmentation process).
  • In some embodiments, the fat removal mask and the target organ mask can be fused to obtain a processed fat removal mask.
  • In some embodiments, the fat removal mask and the target organ mask may be fused using erosion operations and dilation operations.
  • In some embodiments, a mask fusion algorithm can be used to fuse the fat removal mask and the target organ mask to obtain the processed fat removal mask. For details, see Figure 17A to Figure 20 and their related descriptions.
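For comparison with the mask fusion algorithm described below, here is a minimal sketch of the plain morphological (erosion/dilation) variant mentioned above. It uses scipy.ndimage; the iteration count is an illustrative assumption, and, as noted later in this section, this variant can deform the masks.

```python
from scipy import ndimage

def fuse_by_morphology(fat_removal_mask, organ_mask, iterations=2):
    """Naive variant: erode the fat-removal mask and subtract a dilated organ.

    Reduces adhesion to the organ surface, but the mask bodies deform,
    which motivates the mask fusion algorithm of Figure 17A.
    """
    eroded = ndimage.binary_erosion(fat_removal_mask, iterations=iterations)
    dilated_organ = ndimage.binary_dilation(organ_mask, iterations=iterations)
    return eroded & ~dilated_organ
```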
  • Figure 17A is an exemplary flowchart of a mask fusion algorithm according to some embodiments of the present specification.
  • the mask fusion algorithm shown in FIG. 17A can be implemented according to the method described in step 1620, step 1630, and step 1640.
  • Step 1620 Determine the first intersection of the target organ mask and the fat removal mask within a preset range, and adjust the areas of the target organ mask and the fat removal mask based on the first intersection. In some embodiments, step 1620 may be performed by result determination module 2330.
  • In some embodiments, the preset range may be the detection range of an element mask (e.g., the target organ mask or the fat removal mask).
  • In some embodiments, the preset range may include the edge range of the target organ mask.
  • the edge range of the target organ mask may be a mask area that is connected to the boundary of the target organ mask.
  • the target organ mask is located within the area of the fat removal mask, and the edge range of the target organ mask may be a partial area of the fat removal mask that is connected to the boundary of the target organ mask.
  • the target organ mask and fat removal mask areas can be adjusted. Adjustment of the mask area may include expansion and contraction of the mask area.
  • the first intersection refers to the mask value of the overlapping portion of the target organ mask and the fat-removing mask within the preset range.
  • step 1620 may include the following sub-steps:
  • Sub-step 1621 detect the target organ mask.
  • the detection of the target organ mask may be edge detection of the target organ mask. For example, detecting the boundaries of a target organ mask.
  • the boundary of the target organ mask may include edge point information of the target organ mask.
  • the edge of the target organ mask can be detected to determine the edge point of the target organ mask (for example, edge detection is performed on the target organ mask in Figure 17A).
  • In some embodiments, the pixels of the target organ mask can be detected: if a pixel within the region (for example, a preset range) of the target organ mask has a null value in any of the six directions of the three-dimensional space, the pixel can be determined to be an edge point. This is because the target organ mask is a geometric mask composed of identical pixel values, and at the edges the areas adjacent to the edge pixels have null values. A sketch of this test is given below.
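A minimal sketch of this six-direction edge-point test, assuming a 3D boolean numpy array. Note that np.roll wraps at the volume border, so a zero-padded volume is assumed here.

```python
import numpy as np

def edge_points(mask):
    """Mask voxels with at least one empty 6-neighbour (cf. sub-step 1621)."""
    m = mask.astype(bool)
    interior = m.copy()
    for axis in range(3):              # z, y, x
        for shift in (1, -1):          # both directions along each axis
            interior &= np.roll(m, shift, axis=axis)
    return m & ~interior               # in the mask but not fully surrounded
```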
  • Sub-step 1622 Based on the detection results, determine the first intersection of the target organ mask and the fat removal mask within a first preset range, where the first preset range is determined according to the first preset parameter.
  • the first preset range may be the first detection range of the element mask (eg, target organ mask, fat removal mask).
  • the pixels within the first preset range of the target organ mask can be detected to determine whether the pixels are edge points. In some embodiments, it can be determined whether the edge point of the target organ mask has a fat removal mask value within the first preset range, and if so, the fat removal mask value is recorded. Similarly, all edge points of the target organ mask can be judged and all fat-free mask values that meet the conditions can be recorded. Further, a first intersection between an edge point of the target organ mask and the fat-removal mask within a first preset range is determined based on the recorded fat-removal mask value.
  • In some embodiments, the first intersection may be the area formed by all the recorded fat removal mask values and the edge points of the target organ mask corresponding to each fat removal mask value. In some embodiments, within the first intersection area, the fat removal mask can be understood as adhering to the surface of the target organ mask. In some embodiments, determining the first intersection between the edge of the target organ mask and the fat removal mask within the first preset range corresponds to the "edge perimeter search" performed between the target organ mask edge and the fat removal mask in Figure 17A, sketched below.
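The "edge perimeter search" can be sketched as follows, reusing edge_points from above. Realising the first preset range as a dilation radius around the edge points is our interpretation for illustration; the parameter value matches the 3-pixel example given later.

```python
from scipy import ndimage

def first_intersection(organ_mask, fat_removal_mask, first_preset_param=3):
    """Fat-removal voxels within `first_preset_param` voxels of the organ edge."""
    edges = edge_points(organ_mask)                 # sub-step 1621 sketch above
    search_band = ndimage.binary_dilation(edges, iterations=first_preset_param)
    return search_band & fat_removal_mask           # recorded adhering voxels
```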
  • the first preset range may be determined according to the first preset parameter.
  • the amplitude of adjustment of the target organ mask based on the first intersection can be adjusted by adjusting the first preset parameter. For example, if the first preset parameter is set smaller (for example, 3 pixels), the adjustment range of the target organ mask area can be smaller.
  • the first preset parameter may be a constant value obtained through experimental observation. For example only, the first preset parameter may be 3 to 4 pixels. In some embodiments, the first preset parameter can also be reasonably set based on historical data and/or big data.
  • In some embodiments, the first preset ranges determined under different first preset parameters in big data and/or historical data, together with the ranges of the first intersection obtained under those first preset ranges, can be collected.
  • The range of the first intersection affects the subsequent adjustment amplitude of the target organ mask and the fat removal mask, which in turn affects the area of the final adjusted fat removal mask.
  • Therefore, the final fat removal mask areas corresponding to different first preset parameters can be determined, so as to identify the range of first preset parameters that yields a more accurate and clearly delineated fat removal mask.
  • the range of the first preset parameter can be optimized and adjusted based on the doctor's feedback, so that a more accurate fat removal mask can be obtained within the adjusted range of the first preset parameter.
  • In some embodiments, when determining the first preset parameter for adjusting a certain mask (denoted as the first target mask), one can first search big data and/or historical data for multiple masks obtained under similar conditions (for example, a similar adjustment amplitude), denoted as first comparison masks; then calculate the similarity between the first target mask and each first comparison mask; select the first comparison mask with the highest similarity to the first target mask; and determine the first preset parameter of the first target mask according to the first preset parameter of that first comparison mask.
  • the first preset parameter may be determined based on big data and/or historical data using a machine learning model.
  • In some embodiments, the input to the machine learning model may be a first adjustment parameter.
  • the first adjustment parameter may include, but is not limited to, one or more of the adjustment range of the first intersection, the adjustment range of the target organ mask, the adjustment range of the fat removal mask, and the like.
  • the output of the machine learning model may be the first preset parameter.
  • Machine learning models can be obtained through training based on big data and/or historical data. For example, big data and/or historical data are used as training samples to train an initial machine learning model to obtain the machine learning model.
  • the value of the first preset parameter can also be adjusted based on patient information (eg, gender, age, physical condition, etc.) and different organs of the same patient to adapt to the clinical situation.
  • Sub-step 1623 Based on the first intersection, make the first adjustment to the area of the target organ mask and the fat removal mask.
  • the first adjustment of the target organ mask area may be to expand the target organ mask area.
  • the first adjustment of the fat removal mask area may be to shrink the fat removal mask area.
  • the area of the target organ mask and the fat removal mask can be adjusted using the edge of the first intersection as a limit.
  • For example, the area of the target organ mask can be expanded using, as the limit, the edge of the first intersection formed by all the recorded fat removal mask values; the area of the fat removal mask can be contracted using, as the limit, the edge of the first intersection formed by the edge points of the target organ mask corresponding to each recorded fat removal mask value.
  • In some embodiments, the first adjustment to the areas of the target organ mask and the fat removal mask based on the first intersection may be local. For example, only the edge area of the target organ mask is expanded and only the edge area of the fat removal mask is contracted, while the main body of the target organ mask and the main body of the fat removal mask remain unchanged. In this way, the connection between the false positive area and the mask bodies (e.g., the fat removal mask body and the target organ mask body) is broken, and the over-segmentation of the fat removal mask is also normalized. A sketch follows this paragraph.
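A minimal sketch of this local first adjustment, under the assumption (ours, for illustration) that "using the edge of the first intersection as a limit" amounts to transferring exactly the intersection voxels: the organ mask grows over them and the fat removal mask gives them up, leaving both mask bodies untouched.

```python
def first_adjustment(organ_mask, fat_removal_mask, intersection):
    """Local first adjustment (sub-step 1623): change only the adhesion area."""
    adjusted_organ = organ_mask | intersection        # first expansion (local)
    adjusted_fat = fat_removal_mask & ~intersection   # first contraction (local)
    return adjusted_organ, adjusted_fat
```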
  • Sub-step 1624 determine the second intersection of the first-adjusted target organ mask and the first-adjusted fat removal mask within a second preset range, where the second preset range is determined according to a second preset parameter.
  • the second preset range may be the second detection range of the element mask (eg, target organ mask, fat removal mask).
  • In some embodiments, it can be determined whether each edge point of the first-adjusted (e.g., expanded) target organ mask has a first-adjusted (e.g., contracted) fat removal mask value within the second preset range; if so, that fat removal mask value is recorded. Similarly, all edge points of the first-adjusted target organ mask are judged and all qualifying fat removal mask values are recorded.
  • Further, the second intersection between the first-adjusted target organ mask and the first-adjusted fat removal mask within the second preset range is determined based on the recorded fat removal mask values. The second intersection may be the area formed by all the recorded fat removal mask values and the edge points of the first-adjusted target organ mask corresponding to each fat removal mask value.
  • the second preset range may be determined according to the second preset parameter.
  • the amplitude of adjustment of the fat removal mask based on the second intersection can be adjusted by adjusting the second preset parameter. For example, if the second preset parameter is set smaller (for example, 1 pixel), the adjustment range of the fat removal mask area can be smaller.
  • the second preset parameter may be less than or equal to the first preset parameter.
  • the second preset range is less than or equal to the first preset range.
  • the second preset parameter may be a constant value obtained through experimental observation. For example only, the second preset parameter may be 1-2 pixels.
  • the second preset parameters can also be reasonably set based on machine learning and/or big data.
  • In some embodiments, the second preset ranges determined under different second preset parameters in big data and/or historical data, together with the ranges of the second intersection obtained under those second preset ranges, can be collected. The range of the second intersection affects the subsequent adjustment amplitude of the target organ mask and the fat removal mask, which in turn affects the final adjusted fat removal mask area. Therefore, the final fat removal mask areas corresponding to different second preset parameters can be determined, so as to identify the range of second preset parameters that yields a more accurate fat removal mask.
  • the range of the second preset parameter can be optimized and adjusted based on the doctor's feedback, so that a more accurate fat removal mask can be obtained within the adjusted range of the second preset parameter.
  • Similar to the first preset parameter, when determining the second preset parameter for adjusting a certain mask (denoted as the second target mask), second comparison masks can be found in big data and/or historical data, and the second comparison mask with the highest similarity to the second target mask can be selected so as to determine the second preset parameter of the second target mask according to the second preset parameter of that second comparison mask.
  • the second preset parameter may be determined based on big data and/or historical data using a machine learning model.
  • In some embodiments, the input to the machine learning model may be a second adjustment parameter.
  • the second adjustment parameter may include, but is not limited to, one or more of the adjustment range of the second intersection, the adjustment range of the target organ mask, the adjustment range of the fat removal mask, and the like.
  • the output of the machine learning model may be a second preset parameter.
  • Machine learning models can be obtained through training based on big data and/or historical data. For example, big data and/or historical data are used as training samples to train an initial machine learning model to obtain the machine learning model.
  • the value of the second preset parameter can also be adjusted based on patient information (eg, gender, age, physical condition, etc.) and different organs of the same patient to adapt to the clinical situation.
  • Sub-step 1625 Based on the second intersection, perform a second adjustment on the area of the first adjusted target organ mask and the first adjusted fat removal mask.
  • the second adjustment of the area of the target organ mask after the first adjustment may be the second expansion of the area of the target organ mask after the first adjustment.
  • the second adjustment of the area of the fat removal mask after the first adjustment may be the second contraction of the area of the fat removal mask after the first adjustment.
  • the edge of the second intersection may be used as a limit to perform a second adjustment on the area of the first adjusted target organ mask and the first adjusted fat removal mask.
  • For example, the area of the expanded target organ mask can be expanded a second time using, as the limit, the edge of the second intersection formed by all the recorded fat removal mask values; the contracted fat removal mask can be contracted a second time using, as the limit, the edge of the second intersection formed by the edge points of the expanded target organ mask corresponding to each recorded fat removal mask value.
  • In some embodiments, the second adjustment to the areas of the first-adjusted target organ mask and the first-adjusted fat removal mask based on the second intersection may be global. For example, the entire target organ mask undergoes a second expansion after the first expansion, and the entire fat removal mask undergoes a second contraction after the first contraction.
  • In some embodiments, sub-steps 1624 and 1625 can be regarded as the "opening operation" in Figure 17A: by expanding the target organ mask a second time after the first expansion and contracting the fat removal mask a second time after the first contraction, any false positive areas remaining after sub-step 1623 can be further disconnected from the mask bodies (for example, the fat removal mask body and the target organ mask body).
  • In some embodiments, the adjustment amplitude of a mask area may affect other structures and tissues in the mask. For example, contracting the fat removal mask area may affect the blood vessels within the fat removal mask: if the fat removal mask area is contracted too much, blood vessels may be missing from the contracted fat removal mask. Therefore, the adjustment amplitudes of the target organ mask and fat removal mask areas can be controlled by reasonably setting the first preset parameter and the second preset parameter, thereby reducing the impact on other structures and tissues in the mask.
  • For example, the first preset parameter can be set to 3 pixels and the second preset parameter to 1 pixel. This setting keeps the adjustment amplitude of the target organ mask and fat removal mask areas small, reducing the impact on other structures and tissues in the mask (such as blood vessels), while still disconnecting false positive areas from the mask bodies and reducing noise areas. A sketch of the global second adjustment follows.
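The global second adjustment (the "opening operation" of sub-steps 1624 and 1625) can be sketched with whole-mask morphology. The 1-voxel amplitude mirrors the example second preset parameter above; simplifying the second intersection search to a plain dilation/erosion is an assumption for illustration.

```python
from scipy import ndimage

def second_adjustment(expanded_organ_mask, contracted_fat_mask, second_preset_param=1):
    """Global second adjustment: dilate the whole organ mask and erode the
    whole fat-removal mask to break residual false-positive bridges."""
    organ2 = ndimage.binary_dilation(expanded_organ_mask, iterations=second_preset_param)
    fat2 = ndimage.binary_erosion(contracted_fat_mask, iterations=second_preset_param)
    return organ2, fat2
```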
  • Step 1630 Perform connected domain processing on the adjusted fat removal mask. In some embodiments, step 1630 may be performed by result determination module 2330.
  • In some embodiments, after the areas of the target organ mask and the fat removal mask undergo the first adjustment and/or the second adjustment (i.e., the target organ mask area undergoes the first and/or second expansion, and the fat removal mask area undergoes the first and/or second contraction), the false positive areas are disconnected from the mask bodies (e.g., the fat removal mask body and the target organ mask body). At the same time, however, more noise areas appear between the fat removal mask and the target organ mask, and further processing is required to clear them.
  • In some embodiments, the connected domains around the adjusted fat removal mask and the adjusted target organ mask can be judged: if a connected domain is a valid connected domain, it is retained; otherwise, it is discarded, thereby clearing the noise areas.
  • A connected domain (i.e., a connected region) generally refers to an image region composed of foreground pixels that have the same pixel value and are adjacent in position in the image.
  • Figure 18 is an exemplary flowchart of connected domain processing on the fat removal mask according to some embodiments of the present specification. It can be understood that the fat removal mask mentioned in Figure 18 refers to the adjusted (e.g., contracted) fat removal mask, and the target organ mask refers to the adjusted (e.g., expanded) target organ mask.
  • step 1630 may include the following sub-steps:
  • Sub-step 1631 determine whether the positioning information of the target organ mask overlaps with the positioning information of a pseudo fat-removal connected domain.
  • Sub-step 1632 when the judgment result is no overlap, identify the pseudo fat-removal connected domain as belonging to the fat removal mask.
  • Sub-step 1633 when the judgment result is overlap, determine whether the pseudo fat-removal connected domain should belong to the fat removal mask based on the relationship between the area of the pseudo fat-removal connected domain and a preset area threshold.
  • In some embodiments, a pseudo fat-removal connected domain refers to a connected domain within the peripheral range (for example, a preset range) of the target organ mask.
  • In some embodiments, the fat removal mask region may not be one integrally connected domain, but a region composed of multiple connected domains that are not connected to each other. Some of these mutually disconnected domains may be distributed around the target organ mask area, and these connected domains can be called pseudo fat-removal connected domains.
  • In some embodiments, the positioning information may include the position information of the bounding rectangle of an element mask (i.e., the "bounding box" in Figure 17A), for example, the coordinate information of the border lines of the bounding rectangle.
  • the bounding rectangle of the element mask covers the positioning area of the element.
  • the bounding rectangle may be displayed in the medical image in the form of a bounding rectangular frame. In three-dimensional space, the bounding rectangle encloses the element mask in the form of a bounding cube.
  • the bounding rectangle may be based on the bottom edges of the connected areas in the element in each direction, for example, the bottom edges of the connected areas in 6 directions in the three-dimensional space, to construct a bounding rectangle relative to the element mask.
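A minimal sketch of the bounding cuboid ("bounding box") and its overlap test for 3D masks; the helper names are ours.

```python
import numpy as np

def bounding_box(mask):
    """Axis-aligned bounding cuboid of a 3D mask: (min, max) index per axis."""
    coords = np.argwhere(mask)
    return coords.min(axis=0), coords.max(axis=0)

def boxes_overlap(box_a, box_b):
    """True if two bounding cuboids intersect along all three axes."""
    (a_min, a_max), (b_min, b_max) = box_a, box_b
    return bool(np.all(a_min <= b_max) and np.all(b_min <= a_max))
```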
  • In some embodiments, based on the positioning information (for example, the bounding box) of each pseudo fat-removal connected domain, it can be determined whether the positioning information of that connected domain overlaps with the positioning information of the target organ mask. In some embodiments, when the bounding box of the target organ mask does not overlap with the bounding box of a pseudo fat-removal connected domain, the pseudo fat-removal connected domain may be considered not to be a noise area; in this case, it can be identified as belonging to the fat removal mask. When the bounding box of the target organ mask overlaps with the bounding box of a pseudo fat-removal connected domain, the pseudo fat-removal connected domain may be a noise area.
  • In this case, whether the pseudo fat-removal connected domain should belong to the fat removal mask can be determined based on the relationship between the area of the pseudo fat-removal connected domain and the preset area threshold. It can be understood that the area of a pseudo fat-removal connected domain refers to the area of a single connected domain. In some embodiments, when the area of the pseudo fat-removal connected domain is greater than the preset area threshold, the pseudo fat-removal connected domain is identified as belonging to the fat removal mask; when the area is less than or equal to the preset area threshold, it is identified as not belonging to the fat removal mask.
  • In some embodiments, connected domains identified as not belonging to the fat removal mask may be determined to be false positive regions (i.e., noise regions).
  • the preset area threshold may be a constant value obtained through experimental observation.
  • the preset area threshold may be in the range of 10,000 voxel points to 1,000,000 voxel points.
  • the preset area threshold may be 20,000 voxel points, 50,000 voxel points, 100,000 voxel points, etc.
  • When the medical image is a two-dimensional medical image, the preset area threshold may be in the range of 100 pixels to 10,000 pixels.
  • the preset area threshold may be 300 pixels, 1000 pixels, 5000 pixels, etc.
  • the preset area threshold can be reasonably set according to the size of the target organ mask and/or the fat removal mask.
  • the preset area threshold can also be reasonably set based on machine learning and/or big data, which is not further limited here.
  • In some embodiments, pseudo fat-removal connected domains may be processed based on the identification results.
  • In some embodiments, the identification results may include a retention identification and/or a discard identification.
  • The retention identification represents a pseudo fat-removal connected domain that belongs to the fat removal mask; such connected domains can be preserved.
  • The discard identification represents a pseudo fat-removal connected domain that does not belong to the fat removal mask; such connected domains can be discarded.
  • In some embodiments, the identification results of pseudo fat-removal connected domains can also be adjusted manually. For example, a doctor can adjust the identification of a pseudo fat-removal connected domain based on clinical experience; for example only, if the doctor determines from clinical experience that a pseudo fat-removal connected domain identified as belonging to the fat removal mask is in fact a false positive area, the identification can be changed manually so that the connected domain is discarded.
  • In some embodiments, when the target organ mask overlaps with a pseudo fat-removal connected domain and the area of the overlapping region is very small, the relationship between the area of the pseudo fat-removal connected domain and the preset area threshold can be ignored, and the pseudo fat-removal connected domain can be discarded directly.
  • the target organ mask described in Figure 18 refers to the expanded target organ mask, and correspondingly, the bounding box of the target organ mask refers to the expanded target organ mask bounding box.
  • Through the above processing, the main body area of the fat removal mask and the valid connected domains around the target organ mask (for example, connected domains with an area greater than the preset area threshold) can be retained, the noise areas (for example, connected domains with an area less than or equal to the preset area threshold) can be cleared, and the retained fat removal mask can be obtained. A sketch of this filtering follows.
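Sub-steps 1631 through 1633 can be sketched as follows, reusing the bounding-box helpers above. scipy.ndimage.label provides the connected domains (6-connectivity by default in 3D), and the 20,000-voxel threshold is one of the example values given above.

```python
from scipy import ndimage
import numpy as np

def filter_pseudo_fat_removal_domains(fat_removal_mask, organ_mask,
                                      area_threshold=20000):
    """Retain the mask body and valid connected domains; discard noise domains."""
    labels, n = ndimage.label(fat_removal_mask)   # label connected domains
    organ_box = bounding_box(organ_mask)
    kept = np.zeros_like(fat_removal_mask, dtype=bool)
    for i in range(1, n + 1):
        component = labels == i
        if not boxes_overlap(bounding_box(component), organ_box):
            kept |= component                     # no overlap: belongs to the mask
        elif component.sum() > area_threshold:
            kept |= component                     # large overlapping domain: retain
        # else: discarded as a false positive (noise) region
    return kept
```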
  • Step 1640 Obtain the processed fat removal mask based on the connected-domain-processed fat removal mask.
  • step 1640 may be performed by result determination module 2330.
  • Figure 19 is an exemplary flow chart for obtaining the processed fat removal mask based on the connected-domain-processed fat removal mask according to some embodiments of this specification. As shown in Figure 17A and Figure 19, step 1640 may include the following sub-steps:
  • Sub-step 1641 detect the edges of the adjusted target organ mask.
  • Sub-step 1642 based on the detection results, determine the adjacent boundary between the connected-domain-processed fat removal mask and the target organ mask.
  • Sub-step 1643 perform a third adjustment on the connected-domain-processed fat removal mask based on the adjacent boundary to obtain the processed fat removal mask.
  • It should be noted that the target organ mask described in Figure 19 refers to the target organ mask that has undergone the first and/or second adjustment, simply denoted as the expanded target organ mask; the fat removal mask refers to the fat removal mask that has undergone the first and/or second adjustment as well as connected domain processing, simply denoted as the connected-domain-processed fat removal mask.
  • In some embodiments, when detecting the expanded target organ mask, if a pixel within the expanded target organ mask range has a null value in any of the six directions of the three-dimensional space, the pixel can be determined to be an edge point. For more description of edge detection, please refer to the description above, which will not be repeated here.
  • the detection results may include edge point information of the expanded target organ mask.
  • In some embodiments, the adjacent boundary may be the boundary between the connected-domain-processed fat removal mask and the expanded target organ mask.
  • In some embodiments, the adjacent boundary may be determined based on the detection results of the expanded target organ mask and the detection results of the initial target organ mask (the unadjusted target organ mask).
  • In some embodiments, a difference calculation may be performed on the detection results (e.g., edge point information) of the expanded target organ mask and those of the initial target organ mask to determine the adjacent boundary.
  • The processed fat removal mask is also known as the non-interventionable area mask.
  • In some embodiments, the third adjustment of the connected-domain-processed fat removal mask may be an expansion of the connected-domain-processed fat removal mask. This third adjustment can also be called the restoration of the fat removal mask.
  • In some embodiments, the processed fat removal mask can be obtained by expanding the connected-domain-processed fat removal mask.
  • In some embodiments, the processed fat removal mask may be a mask that has the same shape as the initial fat removal mask (the unadjusted fat removal mask), is not connected to the initial target organ mask, and contains no noise areas. A sketch of this restoration follows.
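The restoration (third adjustment) can be sketched as below. The specification describes it in prose, so the set expression here (returning the band between the expanded and initial organ edges to the fat removal mask, restricted to its initial extent) is our assumed reconstruction.

```python
def third_adjustment(fat_mask_cc, expanded_organ_mask, initial_organ_mask,
                     initial_fat_removal_mask):
    """Restore the connected-domain-processed fat-removal mask (sub-step 1643)."""
    # Adjacent boundary: difference of the expanded and initial organ masks
    band = expanded_organ_mask & ~initial_organ_mask
    # Give the band back, but only where the initial fat-removal mask existed
    return fat_mask_cc | (band & initial_fat_removal_mask)
```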
  • The processed fat removal mask is obtained by fusing the fat removal mask and the target organ mask. Compared with morphological erosion and dilation operations, the mask fusion algorithm ensures that the processed fat removal mask is not deformed, reduces false positive areas, and avoids over-segmentation of the target organ mask, thereby yielding a more accurate processed fat removal mask.
  • In some embodiments, the processed fat removal mask obtained through the mask fusion algorithm, the initial target organ mask, and the blood vessel mask within the target organ can be used as the fast segmentation result.
  • In some embodiments, the mask fusion algorithm can also be used to fuse the thoracoabdominal mask with the target organ mask, that is, the thoracoabdominal mask is fused directly with the target organ mask without fat removal. For the specific processing of mask fusion, please refer to the description above, which will not be repeated here.
  • Figure 20 is an exemplary effect comparison diagram of the fusion of the fat removal mask and the target organ mask according to some embodiments of this specification. In Figure 20, the upper and lower left images are, respectively, the cross-sectional medical image and the three-dimensional medical image showing the fusion effect without the mask fusion algorithm, and the upper and lower right images are, respectively, the cross-sectional medical image and the three-dimensional medical image showing the fusion effect with the mask fusion algorithm.
  • It should be noted that the above description of process 1600 is only for example and explanation, and does not limit the scope of application of this specification. For those skilled in the art, various modifications and changes can be made to process 1600 under the guidance of this description; however, these modifications and changes remain within the protection scope of this description.
  • In some embodiments, the target organs in the medical image can also be segmented to obtain target organ masks, and the target organ masks can be fused with the above-mentioned fast segmentation result mask.
  • In some embodiments, the process of segmenting a target organ in the medical image to obtain a target organ mask may be a segmentation process in a precise segmentation mode (also called a precise segmentation process).
  • Figure 21 is an exemplary flowchart of combining precise segmentation and fast segmentation according to some embodiments of this specification. As shown in Figure 21, process 2100 may include:
  • Step 2110 Obtain operation instructions.
  • step 2110 may be performed by acquisition module 2310.
  • Operation instructions may include mode instructions and data instructions.
  • In some embodiments, the mode instruction may refer to an input instruction indicating whether precise segmentation is required, that is, whether a target organ in the medical image needs to be segmented.
  • In some embodiments, the target organ can be an important organ/tissue in the medical image, such as blood vessels, bones, etc.
  • When precise segmentation is not required, the result output by the medical image processing system is the fast segmentation result.
  • When precise segmentation is required, the result output by the medical image processing system is the fusion result of the fast segmentation result and the precise segmentation result.
  • In some embodiments, the data instruction may indicate the organ that needs to be segmented (i.e., the target organ).
  • Step 2120 Segment at least one target organ in the medical image according to the operation instructions to obtain at least one target organ mask. In some embodiments, step 2120 may be performed by segmentation module 2320.
  • the segmentation module 2320 segments the selected target organ to obtain a target organ mask.
  • the segmentation method for segmenting the target organ is the same as the image segmentation method for fast segmentation.
  • For example, it may include a threshold segmentation method, a region growing method, or a level set method, or a deep-learning-based convolutional network method may be used.
  • For more details on the segmentation method, please refer to the segmentation methods involved in fast segmentation, which will not be described again here.
  • In some embodiments, the main difference between fast segmentation and precise segmentation is that different organs and/or tissues are segmented and different segmentation results are obtained.
  • In precise segmentation, all organs in the medical image can be segmented to obtain individual organ masks, and the segmented image content is more detailed and accurate.
  • In fast segmentation, only the non-interventionable area is segmented as a whole to obtain the non-interventionable area mask, and the segmentation efficiency is higher. Therefore, combining fast segmentation and precise segmentation can ensure sufficiently high segmentation efficiency while also improving the accuracy of the segmentation results.
  • Step 2130 Determine the third intersection of at least one target organ mask and the fast segmentation result mask within the first preset range, and adjust the areas of the at least one target organ mask and the fast segmentation result mask based on the third intersection, where the fast segmentation result mask at least includes the processed fat removal mask.
  • step 2130 may be performed by result determination module 2330.
  • the method of determining the third intersection is similar to the method of determining the first intersection.
  • In some embodiments, the target organ mask can be detected; based on the detection result, the third intersection of the target organ mask and the fast segmentation result mask within the first preset range is determined; and based on the third intersection, a first adjustment is made to the areas of the target organ mask and the fast segmentation result mask. For example, the area of the target organ mask can be expanded and the area of the fast segmentation result mask contracted based on the third intersection. The first adjustment to the areas of the target organ mask and the fast segmentation result mask can be local.
  • the third intersection may be an area composed of all recorded fast segmentation result mask values and the edge points of the target organ mask corresponding to each fast segmentation result mask value.
  • For the method of determining the third intersection, please refer to the previous description (for example, step 1620 in Figure 16 and its related descriptions).
  • the connection between the false positive area and the mask body (for example, the fast segmentation result mask body, the target organ mask body) can be disconnected.
  • Step 2140 Perform connected domain processing on the adjusted fast segmentation result mask.
  • step 2140 may be performed by result determination module 2330.
  • In some embodiments, performing connected domain processing on the adjusted fast segmentation result mask is basically the same as performing connected domain processing on the adjusted fat removal mask described in step 1630 of Figure 16.
  • connected domain processing on the adjusted fast segmentation result mask can be implemented in the following manner: determine whether the positioning information of the target organ mask overlaps with the positioning information of the connected domain of the pseudo-fast segmentation result; when the judgment result is When there is no overlap, the connected domain of the pseudo-fast segmentation result is identified as belonging to the fast segmentation result mask; when the judgment result is overlapping, the connected domain of the pseudo-fast segmentation result is determined based on the relationship between the area of the connected domain of the pseudo-fast segmentation result and the preset area threshold. Whether it should belong to the fast segmentation result mask.
  • When the area of a pseudo fast segmentation result connected domain is greater than the preset area threshold, the connected domain is identified as belonging to the fast segmentation result mask and is retained; when the area of the connected domain is less than or equal to the preset area threshold, the connected domain is identified as a false positive area (i.e., a noise area) and is discarded.
  • In some embodiments, when the target organ mask overlaps with a pseudo fast segmentation result connected domain and the area of the overlapping region is very small, the relationship between the area of the connected domain and the preset area threshold can be ignored, and the pseudo fast segmentation result connected domain can be discarded directly.
  • In some embodiments, a pseudo fast segmentation result connected domain refers to a connected domain within the peripheral range (for example, the first preset range) of the target organ mask.
  • In some embodiments, the fast segmentation result mask area may not be one integrally connected domain, but a region composed of multiple connected domains that are not connected to each other. Some of these mutually disconnected domains may be distributed around the target organ mask area, and these connected domains can be called pseudo fast segmentation result connected domains.
  • Through the above processing, the main body area of the fast segmentation result mask and the valid connected domains around the target organ mask (that is, connected domains with an area greater than the preset area threshold) can be retained, and the noise areas (that is, connected domains with an area less than or equal to the preset area threshold) can be cleared.
  • Step 2150 Obtain the processed fast segmentation result mask based on the fast segmentation result mask processed by the connected domain.
  • step 2150 may be performed by result determination module 2330.
  • In some embodiments, obtaining the processed fast segmentation result mask can be implemented as follows: detect the expanded target organ mask; based on the detection results, determine the adjacent boundary between the connected-domain-processed fast segmentation result mask and the target organ mask; and adjust the connected-domain-processed fast segmentation result mask based on the adjacent boundary to obtain the processed fast segmentation result mask.
  • In some embodiments, the connected-domain-processed fast segmentation result mask is expanded (that is, restored), so that it can be restored to its original form.
  • The mask fusion method for the target organ mask and the fast segmentation result mask (that is, the fusion of the precise segmentation result and the fast segmentation result) is roughly the same as the mask fusion method for the target organ mask and the fat removal mask in Figure 16. The difference is that when the target organ mask and the fast segmentation result mask are fused, the fast segmentation result mask is contracted only once, and the "opening operation" is omitted. This is because when the target organ mask and the fast segmentation result mask are fused, the fast segmentation result mask is not allowed to change, and the "opening operation" may cause the fast segmentation result mask to change.
  • In this case, the first preset parameter may be set to 4 pixels, for example.
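Reusing the helpers sketched earlier, the fusion of a precisely segmented target organ mask with the fast segmentation result mask might look as follows. Note the single contraction and the absence of the "opening operation"; the 4-pixel first preset parameter is the example value above, and the whole function is an illustrative assembly rather than the specification's exact procedure.

```python
def fuse_precise_and_fast(target_organ_mask, fast_result_mask,
                          first_preset_param=4, area_threshold=20000):
    """Fuse a precisely segmented organ mask with the fast segmentation result."""
    inter = first_intersection(target_organ_mask, fast_result_mask,
                               first_preset_param=first_preset_param)
    organ_adj, fast_adj = first_adjustment(target_organ_mask, fast_result_mask, inter)
    # Single contraction only: no second ("opening") adjustment here
    fast_cc = filter_pseudo_fat_removal_domains(fast_adj, organ_adj, area_threshold)
    # Restoration returns the fast result mask to its original form
    return third_adjustment(fast_cc, organ_adj, target_organ_mask, fast_result_mask)
```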
  • Figure 22 is an exemplary effect comparison diagram of the fusion of the precise segmentation result mask and the fast segmentation result mask according to some embodiments of this specification. In Figure 22, the upper and lower left images are, respectively, the cross-sectional medical image and the three-dimensional medical image showing the fusion effect without the mask fusion algorithm, and the upper and lower right images are, respectively, the cross-sectional medical image and the three-dimensional medical image showing the fusion effect with the mask fusion algorithm. Comparing the upper left and upper right images, it can be seen that fusing the fast segmentation result mask with the precise segmentation result mask based on the mask fusion algorithm can disconnect the connection between the target organ mask and the fast segmentation result mask. In the lower left image, the masks of two target organs (the two organs enclosed by boxes) are covered by the fast segmentation result mask. Comparing the lower left and lower right images, it can be seen that fusing the fast segmentation result mask with the precise segmentation result mask (such as the target organ mask) based on the mask fusion algorithm can prevent the target organ mask from being covered by the fast segmentation result mask.
  • process 2100 is only for example and explanation, and does not limit the scope of application of this specification.
  • various modifications and changes can be made to the process 2100 under the guidance of this description. However, these modifications and changes are still within the protection scope of this description.
  • Figure 23 is an exemplary frame diagram of an interventional surgery imaging assistance system according to some embodiments of this specification.
  • the interventional surgery image assistance system 2300 may include an acquisition module 2310, a segmentation module 2320, and a result determination module 2330.
  • the acquisition module 2310, the segmentation module 2320 and the result determination module 2330 may be implemented in the interventional surgery image assistance system 100 shown in FIG. 1, such as in the medical scanning device 110.
  • the acquisition module 2310 is used to acquire medical images.
  • the acquisition module 2310 can be used to acquire pre-operative enhanced images and intra-operative scan images.
  • the obtaining module 2310 can also be used to obtain operating instructions.
  • the segmentation module 2320 is used to segment the target structure set in the medical image.
  • the segmentation module 2320 can be used to segment the first target structure set of the pre-operative enhanced image and the second target structure set of the intra-operative scan image.
  • the result determination module 2330 is configured to determine a result structure set reflecting the inaccessible area based on the segmentation result of the target structure set.
  • For example, the result determination module 2330 may be used to determine the spatial position of the third target structure set during surgery based on the segmented images (the first segmented image and the second segmented image). As another example, the result determination module 2330 may be used to determine the fat removal mask based on the element masks.
  • the interventional surgical imaging assistance system 2300 may include one or more other modules.
  • the interventional surgery imaging assistance system 2300 may include a storage module to store data generated by modules of the interventional surgery imaging assistance system 2300 .
  • the interventional surgery imaging device further includes a display device, and the display device displays a segmentation result based on the interventional surgery image-assisted method executed by the processor.
  • the display device also displays trigger mode options, which include fast segmentation mode and precise segmentation mode. The operator can select the appropriate planning mode through the trigger mode option of the display device.
  • Some embodiments of this specification also provide a computer-readable storage medium that stores computer instructions. After a computer reads the computer instructions in the storage medium, the computer executes the interventional surgery imaging assistance method of any of the above embodiments. For details, see the descriptions related to Figure 1 through Figure 22, which will not be repeated here.
  • Numbers are used in this specification to describe quantities of components and properties. It should be understood that such numbers used in describing the embodiments are, in some examples, qualified by the modifiers "about", "approximately", or "substantially". Unless otherwise stated, "about", "approximately", or "substantially" indicates that the stated number is allowed to vary by ±20%. Accordingly, in some embodiments, the numerical parameters used in the specification and claims are approximations that may vary depending on the desired characteristics of the individual embodiment. In some embodiments, numerical parameters should take into account the specified number of significant digits and apply general rounding methods. Although the numerical ranges and parameters used to define the breadth of ranges in some embodiments of this specification are approximations, in specific embodiments such numerical values are set as precisely as is feasible.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

本说明书实施例提供了一种介入手术影像辅助方法,包括:获取医学影像;对所述医学影像中的目标结构集进行分割;基于所述目标结构集的分割结果确定用于反映不可介入区域的结果结构集。

Description

介入手术影像辅助方法、系统、装置及存储介质
交叉引用
本申请要求于2022年6月30日提交的申请号为202210761324.1的中国申请的优先权,以及于2022年7月29日提交的申请号为202210907258.4的中国申请的优先权,其全部内容通过引用结合于此。
技术领域
本说明书涉及图像处理技术领域,特别涉及一种介入手术影像辅助方法、系统、装置及存储介质。
背景技术
介入手术的手术前规划(以下简称术前规划)是指,在医学扫描设备的辅助下,获取扫描对象(如患者等)血管、病灶和器官等处的影像,结合病理学和解剖学知识,为合理规避血管和重要脏器,将特制的精密器械引入病灶处所做的手术计划,典型地如介入手术的术前规划等。术前规划是否能够准确确定医学扫描影像中血管、病灶的空间位置,以及为穿刺针的穿刺路径进行合理规划,直接影响手术中能否有效避开脏器和血管,使穿刺针顺利抵达病灶。因此,如何提供一种介入手术影像辅助方案,使得术前规划能够达到较高的精准度,就变得尤为重要,以便更好地辅助手术中准确地实施相应穿刺路径,从而获得理想的手术效果。
发明内容
本说明书实施例之一提供一种介入手术影像辅助方法,包括:获取医学影像;对所述医学影像中的目标结构集进行分割;基于所述目标结构集的分割结果确定用于反映不可介入区域的结果结构集。
在一些实施例中,所述医学影像包括手术前增强影像和手术中扫描影像;所述目标结构集包括手术前增强影像的第一目标结构集和所述手术中扫描影像的第二目标结构集;所述结果结构集包括手术中第三目标结构集;所述对所述医学影像中的目标结构集进行分割,包括:对所述手术前增强影像的第一目标结构集进行分割,获得所述第一目标结构集的第一分割影像;对所述手术中扫描影像的第二目标结构集进行分割,获得所述第二目标结构集的第二分割影像;所述第一目标结构集与所述第二目标结构集有交集;所述基于所述目标结构集的分割结果确定用于反映不可介入区域的结果结构集,包括:对所述第一分割影像与所述第二分割影像进行配准,确定手术中所述第三目标结构集的空间位置;所述第三目标结构集中至少有一个元素包括在所述第一目标结构集中,所述第三目标结构集中至少有一个元素不包括在所述第二目标结构集中。
在一些实施例中,还包括:获取介入手术的规划模式,所述规划模式至少包括快速分割模式和精准分割模式;根据所述规划模式对所述手术中扫描影像的第四目标结构集进行分割。
在一些实施例中,所述根据所述规划模式对所述手术中扫描影像的第四目标结构集进行分割,还包括:在所述快速分割模式下,所述第四目标结构集包括所述不可介入区域。
在一些实施例中,所述根据所述规划模式对所述手术中扫描影像的第四目标结构集进行分割,还包括:在所述精准分割模式下,所述第四目标结构集包括预设的重要器官。
在一些实施例中,所述第四目标结构集中所述预设的重要器官总体积与所述不可介入区域总体积的比值小于预设效率因子m。
在一些实施例中,所述效率因子m的设定与所述介入手术类型相关。
在一些实施例中,所述分割包括:对所述医学影像中的所述目标结构集中的至少一个元素进行粗分割;得到至少一个所述元素的掩膜;确定所述掩膜的定位信息;基于所述掩膜的定位信息,对至少一个所述元素进行精准分割。
在一些实施例中,所述对所述第一分割影像与所述第二分割影像进行配准,包括:对所述第一分割影像与所述第二分割影像进行配准,确定配准形变场;基于所述配准形变场和所述手术前增强影像中的所述第一目标结构集中的至少部分元素的空间位置,确定手术中相应元素的空间位置。
在一些实施例中,所述确定配准形变场,包括:基于所述元素之间的配准,确定第一初步形变场;基于所述元素之间的所述第一初步形变场,确定全图的第二初步形变场;基于全图的所述第二初步 形变场,对浮动影像进行形变,确定所述浮动影像的配准图;对所述浮动影像的所述配准图与参考图像中第一灰度差异范围的区域进行配准,得到第三初步形变场;基于所述第三初步形变场,确定全图的第四初步形变场;基于所述第四初步形变场,对第二灰度差异范围的区域进行配准,获得最终配准的配准图。
在一些实施例中,所述结果结构集包括去脂掩膜;所述对所述医学影像中的目标结构集进行分割,包括:对所述医学影像中的所述目标结构集进行分割,获得所述去脂掩膜;所述基于所述目标结构集的分割结果确定用于反映不可介入区域的结果结构集,包括:确定靶器官掩膜在预设范围内与所述去脂掩膜的第一交集,基于所述第一交集对所述靶器官掩膜和所述去脂掩膜的区域进行调整,获得调整后的所述去脂掩膜。
在一些实施例中,还包括:对调整后的所述去脂掩膜进行连通域处理;基于经过连通域处理的所述去脂掩膜,获得处理后的所述去脂掩膜。
在一些实施例中,所述确定靶器官掩膜在预设范围内与所述去脂掩膜的第一交集,基于所述第一交集对所述靶器官掩膜和所述去脂掩膜的区域进行调整,包括:对所述靶器官掩膜进行检测;基于检测结果,确定所述靶器官掩膜在第一预设范围内与所述去脂掩膜的第一交集,其中,所述第一预设范围根据第一预设参数确定;基于所述第一交集,对所述靶器官掩膜和所述去脂掩膜的区域进行第一次调整。
在一些实施例中,还包括:确定第一次调整后的所述靶器官掩膜在第二预设范围内与第一次调整后的所述去脂掩膜的第二交集,其中,所述第二预设范围根据第二预设参数确定;基于所述第二交集,对第一次调整后的所述靶器官掩膜和第一次调整后的所述去脂掩膜的区域进行第二次调整。
在一些实施例中,所述第二预设参数小于等于所述第一预设参数,所述第一预设参数和/或所述第二预设参数基于大数据或人工智能方式获得。
在一些实施例中,所述对调整后的所述去脂掩膜进行连通域处理,包括:判断所述靶器官掩膜的定位信息与伪去脂连通域的定位信息是否重叠;当判断结果为不重叠时,所述伪去脂连通域标识为属于所述去脂掩膜;当判断结果为重叠时,根据所述伪去脂连通域的面积与预设面积阈值的大小关系判定所述伪去脂连通域是否应属于所述去脂掩膜。
在一些实施例中,所述根据所述伪去脂连通域的面积与预设面积阈值的大小关系判定所述伪去脂连通域是否应属于去脂掩膜,包括:当所述伪去脂连通域的面积大于所述预设面积阈值时,所述伪去脂连通域标识为属于所述去脂掩膜;当所述伪去脂连通域的面积小于等于所述预设面积阈值时,所述伪去脂连通域标识为不属于所述去脂掩膜。
在一些实施例中,还包括:保留标识和/或舍弃标识,保留标识表示属于所述去脂掩膜的伪去脂连通域,舍弃标识表示不属于所述去脂掩膜的伪去脂连通域。
在一些实施例中,所述基于经过连通域处理的所述去脂掩膜,获得处理后的所述去脂掩膜,包括:对调整后的所述靶器官掩膜进行检测;基于检测结果,确定经过连通域处理的所述去脂掩膜与所述靶器官掩膜的相邻边界;基于所述相邻边界对经过连通域处理的所述去脂掩膜进行第三次调整,获得处理后的所述去脂掩膜。
在一些实施例中,还包括:获取操作指令;根据所述操作指令对所述医学影像中的至少一个目标器官进行分割,得到至少一个目标器官掩膜;确定至少一个所述目标器官掩膜在第一预设范围内与快速分割结果掩膜的第三交集,基于所述第三交集对至少一个所述目标器官掩膜和所述快速分割结果掩膜的区域进行调整,其中,所述快速分割结果掩膜至少包括处理后的所述去脂掩膜;对调整后的所述快速分割结果掩膜进行连通域处理;基于经过连通域处理的所述快速分割结果掩膜,获得处理后的所述快速分割结果掩膜。
本说明书实施例之一提供一种介入手术影像辅助系统,包括:获取模块,用于获取医学影像;分割模块,用于对所述医学影像中的目标结构集进行分割;结果确定模块,用于基于所述目标结构集的分割结果确定用于反映不可介入区域的结果结构集。
本说明书实施例之一提供一种计算机可读存储介质,所述存储介质存储计算机指令,当计算机读取存储介质中的计算机指令后,计算机执行本说明书任一实施例中所述的介入手术影像辅助方法。
本说明书实施例之一提供一种介入手术影像辅助装置,包括处理器,所述处理器用于执行本说明书任一实施例所述的介入手术影像辅助方法。
在一些实施例中,所述介入手术影像辅助装置还包括显示装置,所述显示装置显示基于所述处理器执行的介入手术影像辅助方法的分割结果,所述显示装置还显示触发模式选项,所述触发模式选项包括快速分割模式和精准分割模式。
附图说明
本说明书将以示例性实施例的方式进一步说明,这些示例性实施例将通过附图进行详细描述。这些实施例并非限制性的,在这些实施例中,相同的编号表示相同的结构,其中:
图1是根据本说明书一些实施例所示的介入手术影像辅助系统的应用场景示意图;
图2A是根据本说明书一些实施例所示的介入手术影像辅助方法的示例性流程图;
图2B是根据本说明书一些实施例提供的介入手术影像辅助方法的示例性流程图;
图3是根据本说明书一些实施例所示的介入手术影像辅助方法中涉及的分割过程的示例性流程图;
图4是根据本说明书一些实施例所示的确定元素掩膜的定位信息过程的示例性流程图;
图5是根据本说明书一些实施例所示的元素掩膜进行软连通域分析过程的示例性流程图;
图6是根据本说明书一些实施例所示的对元素掩膜进行软连通域分析的粗分割示例性效果对比图;
图7是根据本说明书一些实施例所示的对元素进行精准分割过程的示例性流程图;
图8是根据本说明书一些实施例所示的元素掩膜的定位信息判断的示例性示意图;
图9是根据本说明书一些实施例所示的元素掩膜的定位信息判断的示例性示意图;
图10A是根据本说明书一些实施例所示的基于元素掩膜的定位信息判断滑动方向的示例性示例图;
图10B-图10E是根据本说明书一些实施例所示的滑窗后进行精准分割的示例性示意图;
图11是根据本说明书一些实施例所示的分割结果的示例性效果对比图;
图12是本说明书一些实施例中所示的对第一分割影像与第二分割影像进行配准过程的示例性流程图;
图13-图14是本说明书一些实施例中所示的确定配准形变场过程的示例性流程图;
图15是本说明书一些实施例中所示的经过分割得到第一分割影像、第二分割影像的示例性演示图;
图16是根据本说明书一些实施例所示的介入手术影像辅助方法的示例性流程图;
图17A是根据本说明书一些实施例所示的掩膜融合算法的示例性流程图;
图17B是根据本说明书一些实施例所示的确定第一交集并基于第一交集对元素掩膜进行调整的示例性流程图;
图18是根据本说明书一些实施例所示的对去脂掩膜进行连通域处理的示例性流程图;
图19是根据本说明书一些实施例所示的基于经过连通域处理的去脂掩膜获得处理后的去脂掩膜的示例性流程图;
图20是根据本说明书一些实施例所示的对去脂掩膜与靶器官掩膜进行融合的示例性效果对比图;
图21是根据本说明书一些实施例所示的精准分割与快速分割结合的示例性流程图;
图22是根据本说明书一些实施例所示的对目标器官掩膜与快速分割结果掩膜进行融合的示例性效果对比图;
图23是根据本说明书一些实施例所示的介入手术影像辅助系统的示例性框架图。
具体实施方式
为了更清楚地说明本说明书实施例的技术方案,下面将对实施例描述中所需要使用的附图作简单的介绍。显而易见地,下面描述中的附图仅仅是本说明书的一些示例或实施例,对于本领域的普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图将本说明书应用于其它类似情景。除非从语言环境中显而易见或另做说明,图中相同标号代表相同结构或操作。
应当理解,本文使用的“系统”、“装置”、“单元”和/或“模块”是用于区分不同级别的不同组件、元件、部件、部分或装配的一种方法。然而,如果其他词语可实现相同的目的,则可通过其他表达来替换所述词语。
如本说明书和权利要求书中所示,除非上下文明确提示例外情形,“一”、“一个”、“一种”和/或“该”等词并非特指单数,也可包括复数。一般说来,术语“包括”与“包含”仅提示包括已明确标识的步骤和元素,而这些步骤和元素不构成一个排它性的罗列,方法或者设备也可能包含其它的步骤或元素。
本说明书中使用了流程图用来说明根据本说明书的实施例的系统所执行的操作。应当理解的 是,前面或后面操作不一定按照顺序来精确地执行。相反,可以按照倒序或同时处理各个步骤。同时,也可以将其他操作添加到这些过程中,或从这些过程移除某一步或数步操作。
图1是根据本说明书一些实施例所示的介入手术影像辅助系统的应用场景示意图。
在一些实施例中,介入手术影像辅助系统100可以应用于多种介入手术/介入治疗。在一些实施例中,介入手术/介入治疗可以包括心血管介入手术、肿瘤介入手术、妇产科介入手术、骨骼肌肉介入手术或其他任何可行的介入手术,如神经介入手术等。在一些实施例中,介入手术/介入治疗可以包括经皮穿刺活检术、冠状动脉造影、溶栓治疗、支架置入手术或者其他任何可行的介入手术,如消融手术等。
介入手术影像辅助系统100可以包括医学扫描设备110、网络120、一个或多个终端130、处理设备140和存储设备150。介入手术影像辅助系统100中的组件之间的连接可以是可变的。如图1所示,医学扫描设备110可以通过网络120连接到处理设备140。又例如,医学扫描设备110可以直接连接到处理设备140,如连接医学扫描设备110和处理设备140的虚线双向箭头所指示的。再例如,存储设备150可以直接或通过网络120连接到处理设备140。作为示例,终端130可以直接连接到处理设备140(如连接终端130和处理设备140的虚线箭头所示),也可以通过网络120连接到处理设备140。
医学扫描设备110可以被配置为使用高能射线(如X射线、γ射线等)对扫描对象进行扫描以收集与扫描对象有关的扫描数据。扫描数据可用于生成扫描对象的一个或以上影像。在一些实施例中,医学扫描设备110可以包括超声成像(US)设备、计算机断层扫描(CT)扫描仪、数字放射线摄影(DR)扫描仪(例如,移动数字放射线摄影)、数字减影血管造影(DSA)扫描仪、动态空间重建(DSR)扫描仪、X射线显微镜扫描仪、多模态扫描仪等或其组合。在一些实施例中,多模态扫描仪可以包括计算机断层摄影-正电子发射断层扫描(CT-PET)扫描仪、计算机断层摄影-磁共振成像(CT-MRI)扫描仪。扫描对象可以是生物的或非生物的。仅作为示例,扫描对象可以包括患者、人造物体(例如人造模体)等。又例如,扫描对象可以包括患者的特定部位、器官和/或组织。
如图1所示,医学扫描设备110可以包括机架111、探测器112、检测区域113、工作台114和放射源115。机架111可以支撑探测器112和放射源115。可以在工作台114上放置扫描对象以进行扫描。放射源115可以向扫描对象发射放射线。探测器112可以检测从放射源115发出的放射线(例如,X射线)。在一些实施例中,探测器112可以包括一个或以上探测器单元。探测器单元可以包括闪烁探测器(例如,碘化铯探测器)、气体探测器等。探测器单元可以包括单行探测器和/或多行探测器。
网络120可以包括可以促进介入手术影像辅助系统100的信息和/或数据的交换的任何合适的网络。在一些实施例中,一个或多个介入手术影像辅助系统100的组件(例如,医学扫描设备110、终端130、处理设备140、存储设备150)可以通过网络120彼此交换信息和/或数据。例如,处理设备140可以经由网络120从医学扫描设备110获得影像数据。又例如,处理设备140可以经由网络120从终端130获得用户指令。
网络120可以是和/或包括公共网络(例如,因特网)、专用网络(例如,局部区域网络(LAN)、广域网(WAN)等)、有线网络(例如,以太网络、无线网络(例如802.11网络、Wi-Fi网络等)、蜂窝网络(例如长期演进(LTE)网络)、帧中继网络、虚拟专用网络(“VPN”)、卫星网络、电话网络、路由器、集线器、交换机、服务器计算机和/或其任何组合。仅作为示例,网络120可以包括电缆网络、有线网络、光纤网络、电信网络、内联网、无线局部区域网络(WLAN)、城域网(MAN)、公用电话交换网络(PSTN)、蓝牙TM网络、ZigBeeTM网络、近场通信(NFC)网络等或其任意组合。在一些实施例中,网络120可以包括一个或多个网络接入点。例如,网络120可以包括诸如基站和/或互联网交换点之类的有线和/或无线网络接入点,通过它们,介入手术影像辅助系统100的一个或多个组件可以连接到网络120以交换数据和/或信息。
终端130可以包括移动设备131、平板计算机132、膝上型计算机133等,或其任意组合。在一些实施例中,移动设备131可以包括智能家居设备、可穿戴设备、移动设备、虚拟现实设备、增强现实设备等或其任意组合。在一些实施例中,智能家居设备可以包括智能照明设备、智能电气设备的控制设备、智能监控设备、智能电视、智能摄像机、对讲机等或其任意组合。在一些实施例中,移动设备131可能包括手机、个人数字助理(PDA)、游戏设备、导航设备、销售点(POS)设备、笔记本电脑、平板电脑、台式机等或其任何组合。在一些实施例中,虚拟现实设备和/或增强现实设备包括虚拟现实头盔、虚拟现实眼镜、虚拟现实眼罩、增强现实头盔、增强现实眼镜、增强现实眼罩等或其任意组合。例如,虚拟现实设备和/或增强现实设备可以包括Google GlassTM,Oculus RiftTM,HololensTM,Gear VRTM等。在一些实施例中,终端130可以是处理设备140的一部分。
处理设备140可以处理从医学扫描设备110、终端130和/或存储设备150获得的数据和/或信 息。例如,处理设备140可以获取医学扫描设备110获取的数据,并利用这些数据进行成像生成医学影像(如手术前增强影像、手术中扫描影像),并且对医学影像进行分割,生成分割结果数据(如第一分割影像、第二分割影像、手术中血管和病灶的空间位置、配准图、去脂掩膜等)。再例如,处理设备140可以从终端130获取医学影像、规划模式数据(如快速分割模式数据、精准分割模式数据)和/或扫描协议。
在一些实施例中,处理设备140可以是单个服务器或服务器组。服务器组可以是集中式或分布式的。在一些实施例中,处理设备140可以是本地的或远程的。例如,处理设备140可以经由网络120访问存储在医学扫描设备110、终端130和/或存储设备150中的信息和/或数据。又例如,处理设备140可以直接连接到医学扫描设备110、终端130和/或存储设备150以访问存储的信息和/或数据。在一些实施例中,处理设备140可以在云平台上实现。
存储设备150可以存储数据、指令和/或任何其他信息。在一些实施例中,存储设备150可以存储从医学扫描设备110、终端130和/或处理设备140获得的数据。例如,存储设备150可以将从医学扫描设备110获取的医学影像数据(如手术前增强影像、手术中扫描影像、第一分割影像、第二分割影像等等)和/或定位信息数据进行存储。再例如,存储设备150可以将从终端130输入的医学影像和/或扫描协议进行存储。再例如,存储设备150可以将处理设备140生成的数据(例如,医学影像数据、器官掩膜数据、定位信息数据、精准分割后的结果数据、手术中血管和病灶的空间位置、配准图、去脂掩膜数据等)进行存储。
在一些实施例中,存储设备150可以存储处理设备140可以执行或用于执行本说明书中描述的示例性方法的数据和/或指令。在一些实施例中,存储设备150包括大容量存储设备、可移动存储设备、易失性读写存储器、只读存储器(ROM)等或其任意组合。示例性大容量存储设备可以包括磁盘、光盘、固态驱动器等。示例性可移动存储设备可以包括闪存驱动器、软盘、光盘、内存卡、压缩盘、磁带等。示例性易失性读写存储器可以包括随机存取存储器(RAM)。示例性RAM可包括动态随机存取存储器(DRAM)、双倍数据速率同步动态访问存储器(DDR SDRAM)、静态随机存取存储器(SRAM)、晶闸管随机存取存储器(T-RAM)和零电容随机存取存储器(Z-RAM)等。示例性ROM可以包括掩膜式只读存储器(MROM)、可编程只读存储器(PROM)、可擦除可编程只读存储器(EPROM)、电可擦除可编程只读存储器(EEPROM)、光盘只读存储器(CD-ROM)和数字多功能磁盘重新分配存储器等。在一些实施例中,所述存储设备150可以在云平台上实现。
在一些实施例中,存储设备150可以连接到网络120以与介入手术影像辅助系统100中的一个或多个其他组件(例如,处理设备140、终端130)通信。介入手术影像辅助系统100中的一个或多个组件可以经由网络120访问存储在存储设备150中的数据或指令。在一些实施例中,存储设备150可以直接连接到介入手术影像辅助系统100中的一个或多个其他组件或与之通信(例如,处理设备140、终端130)。在一些实施例中,存储设备150可以是处理设备140的一部分。
关于介入手术影像辅助系统100的描述旨在是说明性的,而不是限制本申请的范围。许多替代、修改和变化对本领域普通技术人员将是显而易见的。可以理解,对于本领域的技术人员来说,在了解该系统的原理后,可能在不背离这一原理的情况下,对各个模块进行任意组合,或者构成子系统与其他模块连接。在一些实施例中,图23的获取模块2310、分割模块2320和结果确定模块2330可以是一个系统中的不同模块,也可以是一个模块实现上述的两个或两个以上模块的功能。例如,各个模块可以共用一个存储模块,各个模块也可以分别具有各自的存储模块。本申请描述的示例性实施方式的特征、结构、方法和其它特征可以以各种方式组合以获得另外的和/或替代的示例性实施例。例如,处理设备140和医学扫描设备110可以被集成到单个设备中。诸如此类的变形,均在本说明书的保护范围之内。
图2A是根据本说明书一些实施例所示的介入手术影像辅助方法的示例性流程图。如图2A所示,流程200A可以包括:
步骤210A,获取医学影像。在一些实施例中,步骤210A可以由获取模块2310或医学扫描设备110执行。
医学影像是指基于各种不同成像机理生成的医学影像。在一些实施例中,医学影像可以是三维医学影像。在一些实施例中,医学影像也可以是二维医学影像。在一些实施例中,医学影像可以包括CT影像、PET-CT影像、US影像或MR影像。
在一些实施例中,可以获取扫描对象的医学影像。在一些实施例中,可以从医学扫描设备110获取扫描对象的医学影像,如CT影像等。在一些实施例中,可以从终端130、处理设备140和存储设备150获取扫描对象的医学影像,如MR影像等。
在一些实施例中,扫描对象可以包括生物扫描对象或非生物扫描对象。在一些实施例中,生物 扫描对象可以包括患者、患者的特定部位、器官和/或组织,例如腹部、胸部或肿瘤组织等。在一些实施例中,非生物扫描对象可以包括人造物体,例如人造模体等。
需要说明的是,在另一些实施例中,还可以通过任何其他可行的方式获取医学影像,例如,可以经由网络120从云端服务器和/或医疗系统(如医院的医疗系统中心等)获取医学影像,本说明书实施例不做特别限定。
在一些实施例中,医学影像可以包括手术前增强影像。手术前增强影像可以简称为术前增强影像,是指扫描对象(如患者等)在手术前注入造影剂后,经由医学扫描设备扫描得到的影像。在一些实施例中,医学影像可以包括手术中扫描影像。手术中扫描影像是指扫描对象在手术中经由医学扫描设备平扫得到的影像。在一些实施例中,手术中扫描影像可以是实时扫描影像。在一些实施例中,手术中扫描影像也可以称为术前平扫影像或术中平扫影像,是手术准备过程中且手术执行前(即实际进针前)的扫描影像。
步骤220A,对医学影像中的目标结构集进行分割。在一些实施例中,步骤220A可以由分割模块2320执行。
目标结构集可以是医学影像中待分割的部位、器官和/或组织。在一些实施例中,目标结构集可以包括多个元素,例如,靶器官、靶器官内的血管、脂肪、胸腔/腹腔等中的一种或多种。在一些实施例中,靶器官可以包括肺、肝脏、脾脏、肾脏或其他任何可能的器官组织,如甲状腺等。
在一些实施例中,目标结构集可以包括手术前增强影像的第一目标结构集。手术前增强影像的第一目标结构集可以包括目标器官(例如,靶器官)内的血管。在一些实施例中,手术前增强影像的第一目标结构集除了靶器官内的血管外,还可以包括靶器官和病灶。在一些实施例中,可以对手术前增强影像的第一目标结构集进行分割,获得分割结果(例如,第一分割影像)。关于手术前增强影像的第一目标结构集的分割的具体描述可以参见图2B-图15及其相关描述。
在一些实施例中,目标结构集可以包括手术中扫描影像的第二目标结构集。手术中扫描影像的第二目标结构集所包括的区域或器官可以基于介入手术的规划模式(例如,快速分割模式和精准分割模式)确定。也即是,介入手术的规划模式不同时,第二目标结构集包括的区域或器官不同。例如,规划模式在快速分割模式下,第二目标结构集可以包括不可介入区域。又例如,规划模式在精准分割模式下,第二目标结构集可以包括手术中扫描影像中所有重要器官。重要器官是指介入手术时介入规划路径需要避开的器官,例如,肝脏、肾脏、靶器官外部血管等。在一些实施例中,第二目标结构集除不可介入区域/手术中扫描影像中所有重要器官外,也可以包括靶器官和病灶。在一些实施例中,可以对手术中扫描影像的第二目标结构集进行分割,获得分割结果(例如,第二分割影像)。关于手术中扫描影像的第二目标结构集的分割的具体描述可以参见本说明书的其他地方,例如,图2B-图15,及其相关描述。
在一些实施例中,经由步骤210A获取医学影像后,可以对医学影像进行预处理,并对经过预处理的影像中的目标结构集中的各元素进行分割。在一些实施例中,预处理可以包括影像预处理操作。在一些实施例中,影像预处理操作可以包括归一化处理和/或去背景处理。
在一些实施例中,图像分割方法可以包括阈值分割方法、区域生长方法或水平集方法。在一些实施例中,可以利用基于深度学习卷积网络的方法,对医学影像中的目标结构集进行分割的操作。在一些实施例中,基于深度学习卷积网络的方法可以包括基于全卷积网络的分割方法,如UNet等。关于分割方法的更多内容可以参见图3-图11,及其相关描述。
步骤230A,基于目标结构集的分割结果确定用于反映不可介入区域的结果结构集。在一些实施例中,步骤230A可以由结果确定模块2330执行。
不可介入区域是指在介入手术时介入规划路径需要避开的区域。在一些实施例中,不可介入区域可以包括不可穿刺区域、不可导入或置入区域以及不可注入区域。在一些实施例中,不可介入区域可以包括但不限于脏器、血管、骨骼等。
对医学影像的目标结构集进行分割后可以得到目标结构集的分割结果。在一些实施例中,目标结构集的分割结果可以包括元素掩膜。对医学影像中的目标结构集进行分割,可以得到目标结构集中的各元素对应的元素掩膜(Mask)。在一些实施例中,对医学影像中的目标结构集进行分割,可以分别得到靶器官掩膜、靶器官内的血管掩膜、脂肪掩膜、胸腔/腹腔掩膜等中的一种或多种。在一些实施例中,胸腔/腹腔可以统称为胸腹区域,对应的胸腔/腹腔掩膜称为胸腹掩膜。元素掩膜可以是像素级的分类标签,以腹腔医学影像为例进行说明,元素掩膜表示对医学影像中各个像素进行分类,例如,可以分成背景、肝脏、脾脏、肾脏等,特定类别的汇总区域用相应的标签值表示,例如,所有分类为肝脏的像素进行汇总,汇总区域用肝脏对应的标签值表示,这里的标签值可以根据具体分割任务进行设定。
在一些实施例中,结果结构集可以包括去脂掩膜。在一些实施例中,对医学影像中的目标结构 集进行分割,可以得到脂肪掩膜和胸腹掩膜,进一步地,可以基于脂肪掩膜和胸腹掩膜得到去脂掩膜。在一些实施例中,可以将胸腹掩膜与脂肪掩膜做差,即胸腹掩膜去掉脂肪掩膜,从而得到胸腹的去脂掩膜。在介入手术中,胸腔/腹腔区域中的脂肪区域可以认为是可介入区域;去脂掩膜区域是不可介入区域。关于去脂掩膜的具体描述可以参见本说明书其他地方,例如,图16-图22,及其相关描述。
在一些实施例中,目标结构集的分割结果可以包括分割影像。在一些实施例中,对手术前增强影像的第一目标结构集进行分割,可以获得第一目标结构集的第一分割影像。第一分割影像是对手术前增强影像分割得到的手术前第一目标结构集(例如,手术前增强影像中的靶器官、靶器官内的血管、病灶)的分割影像。在一些实施例中,对手术中扫描影像的第二目标结构集进行分割,可以获得第二目标结构集的第二分割影像。第二分割影像是对手术中扫描影像分割得到的手术中第二目标结构集(例如,不可介入区域/重要器官、靶器官、病灶)的分割影像。
在一些实施例中,结果结构集可以包括手术中第三目标结构集。在一些实施例中,可以根据第一分割影像和第二分割影像确定手术中第三目标结构集的空间位置。例如,对第一分割影像和第二分割影像进行配准,确定手术中第三目标结构集的空间位置。第三目标结构集是第一分割影像与第二分割影像配准后得到的结构全集。第三目标结构集能够更全面、准确的反映扫描对象(例如,患者)的当前状况。介入手术的路径规划可以基于第三目标结构集的空间位置进行规划,以使穿刺针能够有效避开不可穿区域和/或所有重要器官,并顺利抵达病灶。关于分割影像以及第三目标结构集的具体描述可以参见本说明书其他地方,例如,图2B-图15,及其相关描述。
需要说明的是,如上文所述,结果结构集(例如,去脂掩膜)可以是不可介入区域,在其他实施例中,结果结构集也可以是可介入区域,此时,可以将总区域与结果结构集做差,从而反映出不可介入区域。
应当注意的是,上述有关流程200A的描述仅仅是为了示例和说明,而不限定本说明书的适用范围。对于本领域技术人员来说,在本说明书的指导下可以对流程200A进行各种修正和改变,然而,这些修正和改变仍在本说明书的保护范围之内。
图2B是根据本说明书一些实施例提供的介入手术影像辅助方法的示例性流程图。如图2B所示,流程200B可以包括以下步骤:
步骤210B,对手术前增强影像的第一目标结构集进行分割,获得第一目标结构集的第一分割影像。在一些实施例中,步骤210B可以由分割模块2320执行。
在一些实施例中,手术前增强影像的第一目标结构集可以包括目标器官(例如,靶器官)内的血管。在一些实施例中,手术前增强影像的第一目标结构集除了靶器官内的血管外,还可以包括靶器官和病灶。在一些实施例中,靶器官可以包括大脑、肺、肝脏、脾脏、肾脏或其他任何可能的器官组织,如甲状腺等。第一分割影像是对手术前增强影像分割得到的手术前第一目标结构集(例如,手术前增强影像中的靶器官、靶器官内的血管、病灶)的分割影像。
步骤220B,对手术中扫描影像的第二目标结构集进行分割,获得第二目标结构集的第二分割影像。在一些实施例中,步骤220B可以由分割模块2320执行。
在一些实施例中,手术中扫描影像的第二目标结构集所包括的区域或器官可以基于介入手术的规划模式(例如,快速分割模式和精准分割模式)确定。也即是,介入手术的规划模式不同时,第二目标结构集包括的区域或器官不同。例如,规划模式在快速分割模式下,第二目标结构集可以包括不可介入区域。又例如,规划模式在精准分割模式下,第二目标结构集可以包括手术中扫描影像中所有重要器官。重要器官是指介入手术时介入规划路径需要避开的器官,例如,肝脏、肾脏、靶器官外部血管等。在一些实施例中,第二目标结构集除不可介入区域/手术中扫描影像中所有重要器官外,也可以包括靶器官和病灶。第二分割影像是对手术中扫描影像分割得到的手术中第二目标结构集(例如,不可介入区域/重要器官、靶器官、病灶)的分割影像。
在一些实施例中,第一目标结构集和第二目标结构集有交集。例如,第一目标结构集包括靶器官内的血管和靶器官,第二目标结构集包括不可介入区域(或所有重要器官)、靶器官和病灶时,第一目标结构集和第二目标结构集的交集为靶器官。又例如,第一目标结构集包括靶器官内的血管、靶器官和病灶,第二目标结构集包括不可介入区域(或所有重要器官)、靶器官和病灶时,第一目标结构集和第二目标结构集的交集为靶器官和病灶。
在一些实施例中,可以在执行步骤220B之前,获取介入手术的规划模式。
介入手术,或称介入治疗,是利用现代高科技手段进行的一种微创性治疗手术:具体地,可在医学扫描设备或医学影像设备的引导下,将特制的导管、导丝等精密器械引入人体,对体内病态进行诊断和局部治疗。在一些实施例中,介入手术可以是实际(对患者)诊疗阶段的介入手术,也可以是动物试验或模拟阶段的介入手术,本说明书实施例对其不作特别限制。
在一些实施例中,规划模式可以是用于对手术中扫描影像分割的规划模式。在一些实施例中,规划模式可以包括快速分割模式和精准分割模式。
在一些实施例中,步骤220B中,对手术中扫描影像的第二目标结构集进行分割,可以按以下方式实施:根据规划模式,对手术中扫描影像的第四目标结构集进行分割。在一些实施例中,可以根据快速分割模式和/或精准分割模式,对手术中扫描影像的第四目标结构集进行分割。
在一些实施例中,第四目标结构集可以是第二目标结构集的一部分,例如,不可介入区域、靶器官外部所有重要器官。在不同规划模式下,第四目标结构集包括的区域/器官不同。在一些实施例中,在快速分割模式下,第四目标结构集可以包括不可介入区域。在一些实施例中,在精准分割模式下,第四目标结构集可以包括预设的重要器官。
在一些实施例中,在快速分割模式下,可以对手术中扫描影像进行区域定位计算,以及对不可介入区域进行分割提取。
在一些实施例中,可以对不可介入区域和目标器官(如靶器官)之外的区域进行后处理,以保障不可介入区域与目标器官的中间区域不存在空洞区域。空洞区域是指由前景像素相连接的边界包围所形成的背景区域。在一些实施例中,不可介入区域可以用腹腔(或是胸腔)区域减去目标器官和可介入区域得到(例如,可以将胸腹掩膜与脂肪掩膜做差得到去脂掩膜。此时,由于靶器官(也就是目标器官)是位于胸腹区域内的,因此,该去脂掩膜中可能包含靶器官掩膜。又例如,可以先将胸腹掩膜与靶器官掩膜做差得到胸腹去靶掩膜,再将胸腹去靶掩膜与脂肪掩膜做差得到去脂掩膜。此时,该去脂掩膜中不包括靶器官掩膜)。而用腹腔(或是胸腔)区域减去目标器官和可介入区域得到不可介入区域后,目标器官和不可介入区域的中间可能会存在空洞区域,该空洞区域既不属于目标器官,也不属于不可介入区域。此时,可以对空洞区域进行后处理操作以将空洞区域补全,也即是经过后处理操作的空洞区域可以视为不可介入区域。在一些实施例中,后处理可以包括腐蚀操作和膨胀操作。在一些实施例中,腐蚀操作和膨胀操作可以基于手术中扫描影像与滤波器进行卷积处理来实施。在一些实施例中,腐蚀操作可以是滤波器与手术中扫描影像卷积后,根据预定腐蚀范围求局部最小值,使得手术中扫描影像的轮廓缩小至期望范围,即手术中扫描影像中显示的初始影像的目标高亮区域缩小一定范围。在一些实施例中,膨胀操作可以是滤波器与手术中扫描影像卷积后,根据预定膨胀范围求局部最大值,使得手术中扫描影像的轮廓扩大至期望范围,即手术中扫描影像中显示的初始影像的目标高亮区域扩大一定范围。
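下面以一个示意性草图说明利用膨胀与腐蚀补全空洞区域的思路(仅为假设实现,结构元与迭代次数均为示例参数,实际取值需按影像分辨率设定):

```python
import numpy as np
from scipy import ndimage

def fill_gap_region(non_interventional, iterations=3):
    """用膨胀-腐蚀(形态学闭运算的思路)补全掩膜间的空洞区域。"""
    struct = ndimage.generate_binary_structure(3, 1)            # 三维6-邻域结构元
    dilated = ndimage.binary_dilation(non_interventional, structure=struct,
                                      iterations=iterations)    # 求局部最大值:轮廓扩大
    closed = ndimage.binary_erosion(dilated, structure=struct,
                                    iterations=iterations)      # 求局部最小值:轮廓缩小
    return closed | non_interventional                          # 保持原掩膜主体不变,仅补全空洞
```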
在一些实施例中,在快速分割模式下,可以先对手术中扫描影像进行区域定位计算,再进行不可介入区域的分割提取。在一些实施例中,可以基于手术中扫描影像的目标器官的分割掩膜和血管掩膜,确定目标器官内部的血管掩膜。需要说明的是,在快速分割模式下,仅需分割目标器官内部的血管;在精准分割模式下,可以分割目标器官内部的血管以及外部其他血管。
掩膜(Mask),如器官掩膜,可以是像素级的分类标签,以腹腔医学影像为例进行说明,掩膜表示对医学影像中各个像素进行分类,例如,可以分成背景、肝脏、脾脏、肾脏等,特定类别的汇总区域用相应的标签值表示,例如,所有分类为肝脏的像素进行汇总,汇总区域用肝脏对应的标签值表示,这里的标签值可以根据具体粗分割任务进行设定。分割掩膜是指经过分割操作后得到的相应掩膜。在一些实施例中,掩膜可以包括器官掩膜(如目标器官的器官掩膜)和血管掩膜。
在一些实施例中,快速分割模式下,仅以胸腔区域或腹腔区域作为示例,首先对手术中扫描影像的扫描范围内胸腔或是腹腔区域进行区域定位计算,具体地,对于腹腔,选取肝顶直至直肠底部,作为腹腔的定位区域;如果是胸腔,则取食管顶至肺底(或肝顶),作为胸腔的定位区域;确定胸腔或是腹腔区域的区域定位信息后,再对腹腔或是胸腔进行分割,并在该分割区域内进行再次分割以提取可介入区域(与不可介入区域相对,如可穿区域、脂肪等);最后,用腹腔或是胸腔分割掩膜去掉目标器官的分割掩膜和可介入区域掩膜,即可提取到不可介入区域。在一些实施例中,可介入区域可以包括脂肪部分,如两个器官之间包含脂肪的缝隙等。以肝脏为例,皮下至肝脏之间的区域中的部分区域可以被脂肪覆盖。由于快速分割模式下处理速度快,进而使得规划速度更快,时间更短,提高了影像处理效率。关于快速分割模式下的不可介入区域的分割提取的方法和处理的具体描述可以参见本说明书其他地方,例如,图16-图22及其相关描述。
在一些实施例中,在精准分割模式下,可以对手术中扫描影像的所有器官进行分割。在一些实施例中,手术中扫描影像的所有器官可以包括手术中扫描影像的基本器官以及重要器官。在一些实施例中,手术中扫描影像的基本器官可以包括手术中扫描影像的目标器官(如靶器官)。在一些实施例中,在精准分割模式下,可以对手术中扫描影像的预设的重要器官进行分割。预设的重要器官可以根据手术中扫描影像的每个器官的重要程度来确定。例如,手术中扫描影像中的所有重要器官均可以作为预设的重要器官。在一些实施例中,精准分割模式下的预设的重要器官总体积与快速分割模式下的不可介入区域总体积的比值可以小于预设效率因子m。预设效率因子m可以表征基于不同分割模式进行分割的分割效率和/或分割细致程度的差异情况。通过合理设置预设效率因子m可以便于控制介入手术的分割效率和分割细致程度。在一些实施例中,预设效率因子m可以等于或小于1。预设效率因子m的值越大,表征基于不同分割模式进行分割的分割效率越高(或分割细致程度越高);预设效率因子m的值越小,表征基于不同分割模式进行分割的分割效率越低(或分割细致程度越低)。在一些实施例中,效率因子m的设定与介入手术类型有关。介入手术类型可以包括但不限于泌尿手术、胸腹手术、心血管手术、妇产科介入手术、骨骼肌肉手术等。仅作为示例性说明,泌尿手术中的预设效率因子m可以设置得较小;胸腹手术中的预设效率因子m可以设置得较大。
在一些实施例中,预设效率因子m的值不仅可以影响分割效率和/或分割细致程度,还能影响分割过程所用的时间。例如,预设效率因子m越大,意味着精准分割模式下的预设的重要器官总体积相对越大,由此会导致精准分割时间相对较长。因此,可以综合考虑分割效率、分割细致程度以及分割时间,来确定效率因子。也就是说,效率因子可以基于介入手术类型、分割效率的要求、分割细致程度的要求、分割时间的要求来综合确定。在一些实施例中,效率因子m可以基于大数据和/或历史数据进行合理设置。例如,针对某一类型的介入手术,可以收集大数据和/或历史数据中该类型介入手术中不同效率因子m分别对应的分割效率和分割时间,确定分割效率和分割时间都能够满足该类型介入手术的需求时所对应的效率因子m的范围。在一些实施例中,还可以根据医生的反馈对效率因子m的范围进行优化更新,使得更新后的效率因子m的范围能够更加满足介入手术对分割效率和分割时间的需求。
在一些实施例中,确定某介入手术(记为目标手术)的效率因子m时,可以先在大数据和/或历史数据中搜寻并确定与目标手术具有相似条件(如相似手术类型)的多个手术(记为对比手术),然后计算目标手术和各个对比手术的相似度(如基于分割细致程度、分割时间等计算相似度),以挑选出与目标手术相似度较高的对比手术,从而根据对比手术的效率因子m来确定目标手术的效率因子m。仅作为示例性描述,在胸腹介入手术中,效率因子m可以在0.5~0.65的范围内取值。
在一些实施例中,效率因子m可以基于大数据和/或历史数据,并使用机器学习模型进行确定。在一些实施例中,机器学习模型的输入可以是介入手术的参数。介入手术的参数可以包括但不限于介入手术类型、分割效率、分割时间等中的一种或多种。机器学习模型的输出可以是预设效率因子m。机器学习模型可以基于大数据和/或历史数据通过训练获得。例如,将大数据和/或历史数据作为训练样本对初始机器学习模型进行训练获得该机器学习模型。在一些实施例中,机器学习模型可以包括但不限于线性分类模型(LR)、支持向量机模型(SVM)、朴素贝叶斯模型(NB)、K近邻模型(KNN)、决策树模型(DT)、集成模型(RF/GBDT等)等中的一种或多种。
在一些实施例中,在精准分割模式下,通过分割可以获取手术中扫描影像的所有重要器官的分割掩膜。在一些实施例中,在精准分割模式下,通过分割可以获取手术中扫描影像的所有重要器官的分割掩膜和血管掩膜。在一些实施例中,在精准分割模式下,基于手术中扫描影像的所有重要器官的分割掩膜和血管掩膜,确定所有重要器官内部的血管掩膜。由此可知,精准分割模式下,分割的影像内容更细致,使得规划路径的选择性更多,影像处理的鲁棒性也得到了加强。
图3是根据本说明书一些实施例所示的介入手术影像辅助方法中涉及的分割过程的示例性流程图。
在一些实施例中,介入手术影像辅助方法中涉及的分割流程300可以包括以下步骤:
步骤310,对医学影像中的目标结构集中的至少一个元素进行粗分割;
步骤320,得到至少一个元素的掩膜;
步骤330,确定掩膜的定位信息;
步骤340,基于掩膜的定位信息,对至少一个元素进行精准分割。
在一些实施例中,医学影像可以包括手术前增强影像和手术中扫描影像。目标结构集可以包括第一目标结构集、第二目标结构集和第四目标结构集中的任意一个或多个。
在一些实施例中,步骤310中,可以利用阈值分割方法、区域生长方法或水平集方法,对医学影像中的目标结构集中的至少一个元素进行粗分割的操作。元素可以包括医学影像中的目标器官(例如,靶器官)、靶器官内的血管、病灶、不可介入区域、所有重要器官等。在一些实施例中,基于阈值分割方法进行粗分割,可以按以下方式实施:可以通过设定多个不同的像素阈值范围,根据输入医学影像的像素值,对医学影像中的各个像素进行分类,将像素值在同一像素阈值范围内的像素点分割为同一区域。在一些实施例中,基于区域生长方法进行粗分割,可以按以下方式实施:基于医学影像上已知像素点或由像素点组成的预定区域,根据需求预设相似度判别条件,并基于该预设相似度判别条件,将像素点与其周边像素点比较,或者将预定区域与其周边区域进行比较,合并相似度高的像素点或区域,直到上述过程无法重复则停止合并,完成粗分割过程。在一些实施例中,预设相似度判别条件可以根据预设影像特征确定,示例性地,如灰度、纹理等影像特征。在一些实施例中,基于水平集方法进行粗分割,可以按以下方式实施:将医学影像的目标轮廓设为一个高维函数的零水平集,对该函数进行微分,从输出中提取零水平集来得到目标的轮廓,然后将轮廓范围内的像素区域分割出来。
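以阈值分割为例,下述草图示意了按像素阈值范围进行粗分割的基本思路(仅为假设实现,其中的CT值范围为示例性假设值,并非本申请限定的取值):

```python
import numpy as np

def threshold_coarse_seg(image, ranges):
    """ranges: {标签值: (下限, 上限)};像素值落入同一阈值范围的点归为同一区域。"""
    labels = np.zeros(image.shape, dtype=np.uint8)    # 0 作为背景标签
    for label, (lo, hi) in ranges.items():
        labels[(image >= lo) & (image <= hi)] = label
    return labels

# 用法示例:假设的CT值范围(单位HU),仅为说明用途
ct_volume = np.random.randint(-200, 200, size=(16, 64, 64)).astype(np.int16)
coarse = threshold_coarse_seg(ct_volume, {1: (-100, -10), 2: (20, 80)})
```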
在一些实施例中,可以利用基于深度学习卷积网络的方法,对医学影像中的目标结构集的至少一个元素进行粗分割的操作。在一些实施例中,基于深度学习卷积网络的方法可以包括基于全卷积网络的分割方法。在一些实施例中,卷积网络可以采用基于U形结构的网络框架,如UNet等。在一些实施例中,卷积网络的网络框架可以由编码器和解码器以及残差连接(skip connection)结构组成,其中编码器和解码器由卷积层或卷积层结合注意力机制组成,卷积层用于提取特征,注意力模块用于对重点区域施加更多注意力,残差连接结构用于将编码器提取的不同维度的特征结合到解码器部分,最后经由解码器输出分割结果。在一些实施例中,基于深度学习卷积网络的方法进行粗分割,可以按以下方式实施:由卷积神经网络的编码器通过卷积进行医学影像的特征提取,然后由卷积神经网络的解码器将特征恢复成像素级的分割概率图,分割概率图表示图中每个像素点属于特定类别的概率,最后将分割概率图输出为分割掩膜,由此完成粗分割。
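下面给出一个带残差连接(skip connection)的U形编码器-解码器网络的极简示意(仅为帮助理解的玩具结构,基于PyTorch编写,通道数、层数、输入尺寸均为假设值,并非本申请所用网络的限定实现):

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        def block(ci, co):  # 编码器/解码器中的基本卷积块,用于提取特征
            return nn.Sequential(nn.Conv2d(ci, co, 3, padding=1), nn.ReLU(inplace=True),
                                 nn.Conv2d(co, co, 3, padding=1), nn.ReLU(inplace=True))
        self.enc1, self.enc2 = block(in_ch, 16), block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = block(32, 16)                    # 拼接编码特征后再卷积
        self.head = nn.Conv2d(16, n_classes, 1)     # 输出像素级分割概率图(logits)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d = self.up(e2)
        d = self.dec(torch.cat([d, e1], dim=1))     # 残差连接:结合不同维度的编码特征
        return self.head(d)

logits = TinyUNet()(torch.randn(1, 1, 64, 64))      # 分割概率图
mask = logits.argmax(dim=1)                         # 概率图输出为分割掩膜,完成粗分割
```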
图4是根据本说明书一些实施例所示的确定元素掩膜的定位信息过程的示例性流程图。图5是根据本说明书一些实施例所示的元素掩膜进行软连通域分析过程的示例性流程图。图6是根据本说明书一些实施例所示的对元素掩膜进行软连通域分析的粗分割示例性效果对比图。
在一些实施例中,步骤330中,确定元素掩膜的定位信息,可以按以下方式实施:对元素掩膜进行软连通域分析。连通域,即连通区域,一般是指影像中具有相同像素值且位置相邻的前景像素点组成的影像区域。
在一些实施例中,步骤330对元素掩膜进行软连通域分析,可以包括以下几个子步骤:
子步骤331,确定连通域数量;
子步骤332,当连通域数量≥2时,确定符合预设条件的连通域面积;
子步骤333,当多个连通域中最大连通域的面积与连通域总面积的比值大于第一阈值M,确定最大连通域符合预设条件;
子步骤334,确定保留连通域至少包括最大连通域;
子步骤335,基于保留连通域确定元素掩膜的定位信息。
预设条件是指连通域作为保留连通域时需要满足的条件。例如,预设条件可以是对连通域面积的限定条件。在一些实施例中,医学影像中可能会包括多个连通域,多个连通域具有不同的面积。可以将具有不同面积的多个连通域按照面积大小,例如,从大到小进行排序,排序后的连通域可以记为第一连通域、第二连通域、第k连通域。其中,第一连通域可以是多个连通域中面积最大的连通域,也叫最大连通域。这种情况下,判断不同面积序位的连通域作为保留连通域的预设条件可以不同,具体参见图5的相关描述。在一些实施例中,符合预设条件的连通域可以包括:连通域的面积按照从大到小排序在预设序位n以内的连通域。例如,预设序位n为3时,可以按照面积序位的顺序,并根据对应预设条件依次判断每个连通域是否为保留连通域。即,先判断第一连通域是否为保留连通域,再判断第二连通域是否为保留连通域。在一些实施例中,预设序位n可以基于目标结构的类别,例如,胸部目标结构、腹部目标结构进行设定。在一些实施例中,第一阈值M的取值范围可以为0.8至0.95,在取值范围内能够保障软连通域分析获得预期准确率。在一些实施例中,第一阈值M的取值范围可以为0.9至0.95,进一步提高了软连通域分析的准确率。在一些实施例中,第一阈值M可以基于目标结构的类别,例如,胸部目标结构、腹部目标结构进行设定。在一些实施例中,预设序位n/第一阈值M也可以根据机器学习和/或大数据进行合理设置,在此不做进一步限定。
在一些实施例中,步骤330对元素掩膜进行软连通域分析,可以按以下方式进行:
基于获取到的元素掩膜,对元素掩膜内连通域的个数及其对应面积进行分析和计算,过程如下:
当连通域个数为0时,表示对应掩膜为空,即掩膜获取或粗分割失败或分割对象不存在,不作处理。例如,对腹腔中的脾脏进行分割时,可能存在脾脏切除的情况,此时脾脏的掩膜为空。
当连通域个数为1时,表示仅此一个连通域,无假阳性,无分割断开等情况,保留该连通域;可以理解的是,连通域个数为0和1时,无需根据预设条件判断连通域是否为保留连通域。
当连通域个数为2时,按面积(S)的大小分别获取连通域A和B,其中,连通域A的面积大于连通域B的面积,即S(A)>S(B)。结合上文,连通域A也可以称为第一连通域或最大连通域;连通域B可以称为第二连通域。当连通域的个数为2时,连通域作为保留连通域需要满足的预设条件可以是最大连通域面积与连通域总面积的比值与阈值的大小关系。对连通域进行计算,当A面积占A、B总面积的比例大于第一阈值M时,即S(A)/S(A+B)>第一阈值M,可以将连通域B判定为假阳性区域,仅保留连通域A(即确定连通域A为保留连通域);当A面积占A、B总体面积的比例小于或等于第一阈值M时,可以将A和B均判定为元素掩膜的一部分,同时保留连通域A和B(即确定连通域A和B为保留连通域)。
当连通域个数大于或等于3时,按面积(S)的大小分别获取连通域A、B、C…P,其中,连通域A的面积大于连通域B的面积,连通域B的面积大于连通域C的面积,以此类推,即S(A)>S(B)>S(C)>…>S(P);然后计算连通域A、B、C…P的总面积S(T),对连通域进行计算,此时,可以按照面积序位的顺序,并根据对应预设条件依次判断每个连通域(或者面积序位在预设序位n以内的连通域)是否为保留连通域。在一些实施例中,当连通域的个数大于等于3时,最大连通域(即,连通域A)作为保留连通域需要满足的预设条件可以是最大连通域面积与连通域总面积的比值与阈值(例如,第一阈值M)的大小关系。在一些实施例中,当连通域的个数大于等于3时,最大连通域(即,连通域A)作为保留连通域需要满足的预设条件也可以是第二连通域面积与最大连通域面积的比值与阈值(例如,第二阈值N)的大小关系。具体地,当连通域A面积占总面积S(T)的比例大于第一阈值M时,即S(A)/S(T)>第一阈值M,或者,连通域B面积占连通域A面积的比例小于第二阈值N时,即S(B)/S(A)<第二阈值N,将连通域A判定为元素掩膜部分并保留(即连通域A为保留连通域),其余连通域均判定为假阳性区域;否则,继续进行计算,即继续判断第二连通域(即连通域B)是否为保留连通域。在一些实施例中,连通域B作为保留连通域需要满足的预设条件可以是第一连通域与第二连通域的面积之和与连通域总面积的比值与第一阈值M的大小关系。在一些实施例中,连通域B作为保留连通域需要满足的预设条件也可以是第三连通域面积占第一连通域面积与第二连通域面积之和的占比与阈值(例如,第二阈值N)的大小关系。具体地,当连通域A和连通域B的面积占总面积S(T)的比例大于第一阈值M时,即S(A+B)/S(T)>第一阈值M,或者,连通域C面积占连通域A和连通域B面积的比例小于第二阈值N时,即S(C)/S(A+B)<第二阈值N,将连通域A和B判定为元素掩膜部分并保留(即连通域A和连通域B为保留连通域),剩余部分均判定为假阳性区域;否则,继续进行计算,即继续判断第三连通域(即连通域C)是否为保留连通域。连通域C的判断方法与连通域B的判断方法类似,连通域C作为保留连通域需要满足的预设条件可以是第一连通域、第二连通域和第三连通域的面积之和与连通域总面积的比值与第一阈值M的大小关系,或者,第四连通域面积占第一连通域面积、第二连通域面积和第三连通域面积之和的占比与阈值(例如,第二阈值N)的大小关系。具体地,当连通域A、连通域B和连通域C的面积占总面积S(T)的比例大于第一阈值M时,即S(A+B+C)/S(T)>第一阈值M,或者,连通域D面积占连通域A、连通域B和连通域C面积的比例小于第二阈值N时,即S(D)/S(A+B+C)<第二阈值N,将连通域A、B和C均判定为元素掩膜部分并保留(即连通域A、连通域B和连通域C均为保留连通域)。参照上述判断方法,可以依次判断连通域A、B、C、D…P,或者是面积序位在预设序位n以内的部分连通域是否为保留连通域。需要说明的是,图4中仅示出了对三个连通域是否为保留连通域进行的判断。也可以理解为,图4中的预设序位n的值设定为4,因此,只需对序位为1、2、3的连通域,即连通域A、连通域B、连通域C是否为保留连通域进行判断。
最后输出保留的连通域。
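上述软连通域分析的判断流程可以用如下示意性草图表达(仅为假设实现,阈值M、N与预设序位n的取值为示例值,函数名soft_cc_analysis为本示例自拟):

```python
import numpy as np
from scipy import ndimage

def soft_cc_analysis(mask, M=0.9, N=0.1, n=4):
    """软连通域分析:按面积从大到小依次判断连通域是否为保留连通域。"""
    labeled, num = ndimage.label(mask)
    if num <= 1:                       # 0个或1个连通域:不作处理/直接保留
        return mask.astype(bool)
    areas = ndimage.sum(mask, labeled, index=np.arange(1, num + 1))
    order = np.argsort(areas)[::-1]    # 面积序位:第一连通域、第二连通域……
    total = float(areas.sum())
    kept, cum = [], 0.0
    for rank, idx in enumerate(order[:n]):
        kept.append(idx + 1)           # 连通域标签号从1开始
        cum += areas[idx]
        next_area = areas[order[rank + 1]] if rank + 1 < num else 0.0
        # 保留条件:已保留面积占总面积之比 > M,或下一连通域与已保留面积之比 < N
        if cum / total > M or next_area / cum < N:
            break
    return np.isin(labeled, kept)      # 其余连通域判定为假阳性区域,舍弃
```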
在一些实施例中,第二阈值N的取值范围可以为0.03至0.3,在取值范围内能够保障软连通域分析获得预期准确率。
如图6所示,左边上下分别为未采用软连通域分析的粗分割结果的横断面医学影像和立体医学影像,右边分别为采用了软连通域分析的粗分割结果的横断面医学影像和立体医学影像。经过对比可知,基于软连通域分析对元素掩膜进行粗分割的结果显示,去除了左边影像中方框框出的假阳性区域,相比以往连通域分析方法,排除假阳性区域的准确性和可靠性更高,并且直接有助于后续合理提取元素掩膜定位信息的边界框,提高了分割效率。
在一些实施例中,元素掩膜的定位信息可以为元素掩膜的外接矩形的位置信息,例如,外接矩形的边框线的坐标信息。在一些实施例中,元素掩膜的外接矩形,覆盖元素的定位区域。在一些实施例中,外接矩形可以以外接矩形框的形式显示在医学影像中。在一些实施例中,外接矩形可以是基于元素中连通区域的各方位的底边缘,例如,连通区域上下左右方位上的底边缘,来构建相对于元素掩膜的外接矩形框。
在一些实施例中,元素掩膜的外接矩形可以是一个矩形框或多个矩形框的组合。例如,可以是一个较大面积的矩形框,或者多个较小面积矩形框组合拼成的较大面积的矩形框。
在一些实施例中,元素掩膜的外接矩形可以是仅存在一个矩形框的一个外接矩形框。例如,在元素中只存在一个连通区域时(例如,血管或腹腔中的器官),根据该连通区域各方位的底边缘,构建成一个较大面积的外接矩形。在一些实施例中,上述大面积的外接矩形可以应用于存在一个连通区域的器官。
在一些实施例中,元素掩膜的外接矩形可以是多个矩形框组合拼成的一个外接矩形框。例如,在元素存在多个连通区域时,多个连通区域对应的多个矩形框,根据这多个矩形框的底边缘构建成一个矩形框。可以理解的,如三个连通区域对应的三个矩形框的底边缘构建成一个总的外接矩形框,计算时按照一个总的外接矩形框来处理,在保障实现预期精确度的同时,减少计算量。
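下面的草图示意了由多个连通区域的底边缘构建一个总外接矩形框的计算方式(仅为假设实现,函数名overall_bbox为本示例自拟,三维情况下外接矩形即外接立方体):

```python
import numpy as np
from scipy import ndimage

def overall_bbox(mask):
    """由掩膜中各连通区域的底边缘构建一个总外接矩形框,按一个总框处理以减少计算量。"""
    labeled, _ = ndimage.label(mask)
    objects = ndimage.find_objects(labeled)        # 每个连通域各自的外接切片
    if not objects:
        return None                                # 掩膜为空:对应元素定位失败
    starts = np.min([[sl.start for sl in obj] for obj in objects], axis=0)
    stops = np.max([[sl.stop for sl in obj] for obj in objects], axis=0)
    return tuple(slice(int(a), int(b)) for a, b in zip(starts, stops))
```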
在一些实施例中,医学影像中包括多个连通域时,可以先判断多个连通域的位置信息,再基于多个连通域的位置信息得到元素掩膜的定位信息。例如,可以先判断多个连通域中符合预设条件的连通域,即保留连通域的位置信息,进而根据保留连通域的位置信息得到元素掩膜的定位信息。
在一些实施例中,步骤330中,确定元素掩膜的定位信息,还可以包括以下操作:基于预设的元素的定位坐标,对元素掩膜进行定位。
在一些实施例中,该操作可以在元素掩膜外接矩形定位失败的情况下执行。可以理解的,当元素掩膜外接矩形的坐标不存在时,判断对应元素定位失败。
在一些实施例中,预设的元素可以选取定位较为稳定的元素(例如,定位较为稳定的器官),在对该元素定位时出现定位失败的概率较低,由此实现对元素掩膜进行精确定位。在一些实施例中,由于在腹腔范围中肝部、胃部、脾部、肾部的定位失败的概率较低,胸腔范围中肺部的定位失败的概率较低,这些器官的定位较为稳定,因此肝部、胃部、脾部、肾部可以作为腹腔范围中的预设的器官,即预设的元素可以包括肝部、胃部、脾部、肾部、肺部,或者其他任何可能的器官组织。在一些实施例中,可以基于肝部、胃部、脾部、肾部的定位坐标对腹腔范围中的器官掩膜进行再次定位。在一些实施例中,可以基于肺部的定位坐标对胸腔范围中的器官掩膜进行定位。
在一些实施例中,可以以预设的元素的定位坐标为基准坐标,对元素掩膜进行再次定位。在一些实施例中,当定位失败的元素位于腹腔范围时,则以肝部、胃部、脾部、肾部的定位坐标作为再次定位的坐标,据此对腹腔中定位失败的元素进行再次定位。在一些实施例中,当定位失败的元素位于胸腔范围时,则以肺部的定位坐标作为再次定位的坐标,据此对胸腔中定位失败的元素进行再次定位。仅作为示例,当定位失败的元素位于腹腔范围时,可以以肝顶、肾底、脾左、肝右的定位坐标作为再次定位的横断面方向(上侧、下侧)、冠状面方向(左侧、右侧)的坐标,并取这四个器官坐标的最前端和最后端作为新定位的矢状面方向(前侧、后侧)的坐标,据此对腹腔中定位失败的元素进行再次定位。仅作为示例,当定位失败的元素位于胸腔范围时,以肺部定位坐标构成的外接矩形框扩张一定像素,据此对胸腔中定位失败的元素进行再次定位。
基于预设的元素的定位坐标,对元素掩膜进行精确定位,能够提高分割精确度,同时降低了分割时间,从而提高了分割效率,并减少了分割计算量,节约了内存资源。
图7是根据本说明书一些实施例所示的对元素进行精准分割过程的示例性流程图。
在一些实施例中,步骤340中,基于掩膜的定位信息,对至少一个元素进行精准分割,可以包括以下子步骤:
子步骤341,对至少一个元素进行初步精准分割。初步精准分割可以是根据粗分割的元素掩膜的定位信息,进行的精准分割。在一些实施例中,可以根据输入数据和粗分割定位的外接矩形框,对元素进行初步精准分割。通过初步精准分割可以生成精准分割的元素掩膜。
子步骤342,判断元素掩膜的定位信息是否准确。通过步骤342,可以判断粗分割得到的元素掩膜的定位信息是否准确,进一步判断粗分割是否准确。
在一些实施例中,可以对初步精准分割的元素掩膜进行计算获得其定位信息,将粗分割的定位信息与精准分割的定位信息进行比较。在一些实施例中,可以对粗分割的元素掩膜的外接矩形框,与精准分割的元素掩膜的外接矩形框进行比较,确定两者的差别大小。在一些实施例中,可以在三维空间的6个方向上(即外接矩形框的整体在三维空间内为一个立方体),对粗分割的元素掩膜的外接矩形框,与精准分割的元素掩膜的外接矩形框进行比较,确定两者的差别大小。仅作为示例,可以计算粗分割的元素掩膜的外接矩形框每个边与精准分割的元素掩膜的外接矩形框每个边的重合度,或者计算粗分割的元素掩膜的外接矩形框6个顶点坐标与精准分割的元素掩膜的外接矩形框的差值。
在一些实施例中,可以根据初步精准分割的元素掩膜的定位信息,来判断粗分割的元素掩膜的定位信息是否准确。在一些实施例中,可以根据粗分割的定位信息与精准分割的定位信息的差别大小,来确定判断结果是否准确。在一些实施例中,定位信息可以是元素掩膜的外接矩形(如外接矩形框),根据粗分割的元素掩膜的外接矩形与精准分割的元素掩膜的外接矩形,判断粗分割元素掩膜的外接矩形是否准确。此时,粗分割的定位信息与精准分割的定位信息的差别大小可以是指,粗分割外接矩形框与精准分割外接矩形框中相距最近的边框线之间的距离大小。在一些实施例中,当粗分割的定位信息与精准分割的定位信息差别较大(即粗分割外接矩形框与精准分割外接矩形框中相距最近的边框线之间的距离较大),则判断粗分割的定位信息准确;当差别较小(即粗分割外接矩形框与精准分割外接矩形框中相距最近的边框线之间的距离较小)时,则判断粗分割的定位信息不准确。需要注意的是,粗分割外接矩形框是对原始粗分割贴近元素的边框线上进行了像素扩张(例如,扩张15-20个体素)得到的。在一些实施例中,可以基于粗分割的外接矩形框与精准分割的外接矩形框中相距最近的边框线之间的距离与预设阈值的大小关系,来确定粗分割的定位信息是否准确,例如,当距离小于预设阈值时确定为不准确,当距离大于预设阈值时确定为准确。在一些实施例中,为了保障判断准确度,预设阈值的取值范围可以是小于或等于5体素。
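上述按三维空间6个方向比较边界坐标、判断各方向定位信息是否准确的过程,可以用如下草图示意(仅为假设实现,框的表示方式与阈值取值均为示例约定):

```python
def inaccurate_directions(coarse_box, fine_box, thresh=5):
    """box 形如 ((z0, z1), (y0, y1), (x0, x1)),对应三维6个方向的边框坐标;
    粗分割框与精准分割框对应边框线的差值小于阈值(示例取5体素)的方向判为不准确。"""
    flags = []
    for (c0, c1), (f0, f1) in zip(coarse_box, fine_box):
        flags.append(abs(c0 - f0) < thresh)   # 该轴负方向的边框线
        flags.append(abs(c1 - f1) < thresh)   # 该轴正方向的边框线
    return flags                               # True 表示对应方向不准确,需要滑窗
```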
图8至图9是根据本说明书一些实施例所示的元素掩膜的定位信息判断的示例性示意图。图10A是根据本说明书一些实施例所示的基于元素掩膜的定位信息判断滑动方向的示例性示例图。
其中,图8、图9中显示有粗分割得到的元素掩膜A以及元素掩膜A的外接矩形框B(即元素掩膜A的定位信息),以及根据粗分割的外接矩形框初次精准分割后的外接矩形框C,图10A中还示出了粗分割的外接矩形框B滑动后得到的滑窗B1,其中,(a)为滑动操作前的示意图,(b)为滑动操作后的示意图。另外,方便起见,以三维外接矩形框的一个平面内的平面矩形框进行示例说明,可以理解三维外接矩形框还存在其他5个平面矩形框,即在进行三维外接矩形框的具体计算时存在6个方向的边框线,这里仅以某一平面的4个边框线进行说明。
仅作为示例,如图8所示,精准分割外接矩形框C中的右边边框线与粗分割的外接矩形框B对应的边框线差别较小,由此可以判断粗分割外接矩形框B右边对应的方向上是不准确的,需要对右边边框线进行调整。但C中的上边、下边以及左边边框线与B中的上边、下边以及左边差别较大,由此可以判断粗分割外接矩形框B上边、下边以及左边对应的方向上是准确的。仅作为示例,如图9所示,精准分割外接矩形框C中4个边的边框线与粗分割的外接矩形框B对应边框线差别均较大,可以判断粗分割外接矩形框B中4个边的边框线均是准确的。需要注意的是,对于元素掩膜A共有6个方向,图8、图9中仅以4个边框线进行示意说明,实际情况中会对元素掩膜A中的6个方向的12个边框线进行判断。
子步骤343a,当判断结果为不准确,基于自适应滑窗获取准确的定位信息。在一些实施例中,当粗分割结果不准确时,对其精准分割获取到的元素大概率是不准确的,可以对其进行相应自适应滑窗计算,并获取准确的定位信息,以便继续进行精准分割。
在一些实施例中,基于自适应滑窗获取准确的定位信息,可以按以下方式实施:确定定位信息不准确的至少一个方向;根据重叠率参数,在所述方向上进行自适应滑窗计算。在一些实施例中,可以确定外接矩形框不准确的至少一个方向;确定粗分割外接矩形框不准确后,根据输入的预设重叠率参数,将粗分割外接矩形框按照相应方向滑动,即进行滑窗操作,并重复该滑窗操作直至所有外接矩形框完全准确。其中,重叠率参数指初始外接矩形框与滑动之后的外接矩形框之间重叠部分面积占初始外接矩形框面积的比例,当重叠率参数较高时,滑窗操作的滑动步长较短。在一些实施例中,若想保证滑窗计算的过程更加简洁(即滑窗操作的步骤较少),可以将重叠率参数设置得较小;若想保证滑窗计算的结果更加准确,可以将重叠率参数设置得较大。在一些实施例中,可以根据当前重叠率参数计算进行滑窗操作的滑动步长。根据图8的判断方法可知,图10A中粗分割的外接矩形框B的右边和下边边框线对应的方向上是不准确的。为方便描述,这里将外接矩形框B的右边边框线对应的方向记为第一方向(第一方向垂直于B的右边边框线),下边边框线对应的方向记为第二方向(第二方向垂直于B的下边边框线)。仅作为示例,如图10A所示,假设外接矩形框B的长度为a,当重叠率参数为60%时,可以确定对应步长为a*(1-60%),如上述的,外接矩形框B的右边框线可以沿着第一方向滑动a*(1-60%)。同理,外接矩形框B的下边框线可以沿着第二方向滑动相应步长。外接矩形框B的右边边框线以及下边边框线分别重复相应滑窗操作,直至外接矩形框B完全准确,如图10A(b)中所示的滑窗B1。结合图8及图10A,当确定了粗分割的外接矩形框(即目标结构掩膜的定位信息)不准确时,对精分割外接矩形框上6个方向上边框线的坐标值与粗分割外接矩形框上6个方向上边框线的坐标值进行一一比对,当差距值小于坐标差值阈值(例如,坐标差值阈值为5pt)时(其中坐标差值阈值可以根据实际情况进行设定,在此不做限定),可以判断该外接矩形框的边框线为不准确的方向。
再例如,如图8所示,将精分割外接矩形框C影像中4条边对应的4个方向的像素点坐标,与粗分割外接矩形框B影像中4条边框线对应的4个方向的像素点坐标进行一一比对,其中,当一个方向的像素点坐标的差值小于坐标差值阈值8pt时,则可以判定图8中的粗分割外接矩形框该方向不准确。如,上边差值为20pt、下边差值为30pt、右边差值为1pt,左边差值为50pt,则右边对应的方向不准确,上边、下边、左边对应的方向准确。
再例如,结合图10A,其中B1为粗分割的外接矩形框B滑动后得到的外接矩形框(也称为滑窗),可以理解的,滑窗为符合预期精确度标准的粗分割外接矩形框,需要将粗分割外接矩形框B的边框线(例如,右边边框线、下边边框线)分别沿着相应方向(例如,第一方向、第二方向)滑动对应的步长至滑窗B1的位置。其中,依次移动不符合标准的每条边框线对应的方向,例如,先滑动B的右边边框线,再滑动B的下边边框线至滑窗的指定位置,而B左边和上边对应的方向是标准的,则不需要进行滑动。可以理解的,每一边滑动的步长取决于B1与B的重叠率,其中,重叠率可以是粗分割外接矩形框B与滑窗B1当前的重叠面积占总面积的比值,例如,当前的重叠率为40%等等。需要说明的是,粗分割外接矩形框B的边框线的滑动顺序可以是从左到右、从上到下的顺序,或者是其他可行的顺序,在此不做进一步限定。
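滑动步长与重叠率参数的关系可以用如下小段草图示意(仅为假设实现,数值为示例):

```python
def sliding_step(a, overlap=0.6):
    """由重叠率参数计算滑窗步长:步长 = 边长 * (1 - 重叠率)。"""
    return a * (1 - overlap)

step = sliding_step(100, 0.6)   # 边长100体素、重叠率60%时,步长为40体素
```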
图10B-图10E是根据本说明书一些实施例所示的滑窗后进行精准分割的示例性示意图。结合图10B-10E,在一些实施例中,基于原粗分割外接矩形框(原滑窗),自适应滑窗后获取准确的粗分割外接矩形框,可以获取准确的外接矩形框的坐标值,并基于坐标值和重叠率参数,对新滑窗进行精准分割,将精准分割结果与初步精准分割结果叠加,得到最终精准分割结果。具体地,参见图10B,可以对原滑窗B进行滑窗操作,得到滑窗B1(滑窗操作后的最大范围的外接矩形框),B沿第一方向滑动对应步长得到滑窗B1-1,然后对滑窗B1-1的全域范围进行精准分割,得到滑窗B1-1的精准分割结果。进一步地,参见图10C,B可以沿第二方向滑动对应步长得到滑窗B1-2,然后对滑窗B1-2的全域范围进行精准分割,得到滑窗B1-2的精准分割结果。再进一步地,参见图10D,B滑动可以得到滑窗B1-3(如B可以按照图10C所示滑动操作得到滑窗B1-2,再由滑窗B1-2滑动得到滑窗B1-3),然后对滑窗B1-3的全域范围进行精准分割,得到滑窗B1-3的精准分割结果。将滑窗B1-1、滑窗B1-2以及滑窗B1-3的精准分割结果与初步精准分割结果叠加,得到最终精准分割结果。需要说明的是,滑窗B1-1、滑窗B1-2以及滑窗B1-3的尺寸与B的尺寸相同。滑窗B1是原滑窗B进行连续滑窗操作,即滑窗B1-1、滑窗B1-2以及滑窗B1-3得到的最终滑窗结果。在一些实施例中,滑窗B1-1、滑窗B1-2以及滑窗B1-3的精准分割结果与初步精准分割结果进行叠加时,可能存在重复叠加部分,例如,图10E中,滑窗B1-1和滑窗B1-2之间可能存在交集部分,在进行分割结果叠加时,该交集部分可能被重复叠加。针对这种情况,可以采用下述方法进行处理:对于元素掩膜A的某一部分,若一个滑窗对该部分的分割结果准确,另一滑窗的分割结果不准确,则将分割结果准确的滑窗的分割结果作为该部分的分割结果;若两个滑窗的分割结果都准确,则将右侧滑窗的分割结果作为该部分的分割结果;若两个滑窗的分割结果都不准确,则将右侧滑窗的分割结果作为该部分的分割结果,并继续进行精准分割,直至分割结果准确。
在一些实施例中,如图7所示,当判断结果为不准确,基于自适应滑窗获取准确的定位信息是一个循环过程。具体地,在对比精准分割边框线和粗分割边框线后,通过自适应滑窗可以得到更新后的精准分割外接矩形框坐标值,该精准分割外接矩形框扩张一定的像素后设定为新一轮循环的粗分割外接矩形框,然后对新的外接矩形框再次进行精准分割,得到新的精准分割外接矩形框,并计算其是否满足准确的要求。满足准确要求,则结束循环,否则继续循环。在一些实施例中,可以利用深度卷积神经网络模型对粗分割中的至少一个元素进行精准分割。在一些实施例中,可以利用粗分割前初始获取的历史医学影像作为训练数据,以历史精准分割结果数据作为标签,训练得到深度卷积神经网络模型。在一些实施例中,历史医学影像和历史精准分割结果数据可以从医学扫描设备110获取。在一些实施例中,也可以从终端130、处理设备140和存储设备150获取扫描对象的历史扫描的医学影像及历史精准分割结果数据。
子步骤343b,当判断结果为准确,将初步精准分割结果输出。
在一些实施例中,当判断结果(即粗分割结果)准确时,可以确定通过该粗分割结果进行精准分割获取到的元素的定位信息是准确的,可以将初步精准分割结果输出。
在一些实施例中,可以输出上述进行精准分割的至少一个元素结果数据。在一些实施例中,为了进一步降低噪声及优化影像显示效果,可以在分割结果输出之前进行影像后处理操作。影像后处理操作可以包括对影像进行边缘光滑处理和/或影像去噪等。在一些实施例中,边缘光滑处理可以包括平滑处理或模糊处理(blurring),以便减少医学影像的噪声或者失真。在一些实施例中,平滑处理或模糊处理可以采用以下方式:均值滤波、中值滤波、高斯滤波以及双边滤波。
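下面的草图示意了分割结果输出前的影像后处理操作(仅为假设实现,滤波器类型与核尺寸为示例值):

```python
import numpy as np
from scipy import ndimage

def postprocess(volume):
    """分割结果输出前的影像后处理:先去噪,再做边缘光滑。"""
    denoised = ndimage.median_filter(volume, size=3)             # 中值滤波去噪
    smoothed = ndimage.gaussian_filter(denoised.astype(np.float32),
                                       sigma=1.0)                # 高斯滤波做边缘光滑
    return smoothed
```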
图11是根据本说明书一些实施例所示的分割结果的示例性效果对比图。
如图11所示,左边上下分别为采用传统技术的粗分割结果的横断面医学影像和立体医学影像,右边分别为采用本申请实施例提供的器官分割方法的横断面医学影像和立体医学影像。经过对比可知,右边分割结果影像显示的目标器官分割结果,相比左边分割结果影像显示的目标器官分割结果,获取的目标器官更完整,降低了分割器官缺失的风险,提高了分割精准率,最终提高了整体分割效率。
步骤230B,对第一分割影像与第二分割影像进行配准,确定手术中第三目标结构集的空间位置。在一些实施例中,步骤230B可以由结果确定模块2330执行。
第三目标结构集是第一分割影像与第二分割影像配准后得到的结构全集。在一些实施例中,第三目标结构集可以包括目标器官(例如,靶器官)、靶器官内的血管、病灶、以及其他区域/器官(例如,不可介入区域、所有重要器官)。在一些实施例中,在快速分割模式下,其他区域/器官可以是指不可介入区域;在精准分割模式下,其他区域/器官可以是指所有重要器官。在一些实施例中,第三目标结构集中至少有一个元素包括在第一目标结构集中,第三目标结构集中至少有一个元素不包括在第二目标结构集中。例如,第一目标结构集包括靶器官内的血管、靶器官和病灶,第二目标结构集包括不可介入区域(或所有重要器官)、靶器官和病灶时,靶器官内的血管包括在第一目标结构集中且不包括在第二目标结构集中。在一些实施例中,第四目标结构集也可以视为是第三目标结构集的一部分,例如,不可介入区域、靶器官外部所有重要器官。
第三目标结构集能够更全面、准确的反映扫描对象(例如,患者)的当前状况。介入手术的路径规划可以基于第三目标结构集的空间位置进行规划,以使穿刺针能够有效避开不可穿区域和/或所有重要器官,并顺利抵达病灶。
在一些实施例中,第一分割影像可以包括第一目标结构集(例如,术前目标器官内的血管、术前目标器官、术前病灶)的精确结构特征;第二分割影像可以包括第二目标结构集(例如,术中目标器官、术中病灶、术中不可介入区域/所有重要器官)的精确结构特征。在一些实施例中,在配准之前,可以对第一分割影像、第二分割影像进行目标结构集外观特征与背景的分离处理。在一些实施例中,外观特征与背景的分离处理可以采用基于人工神经网络(线性决策函数等)、基于阈值的分割方法、基于边缘的分割方法、基于聚类分析的图像分割方法(如K均值等)或者其他任何可行的算法,如基于小波变换的分割方法等等。
下面以第一分割影像包括术前目标器官(如,靶器官)内的血管和术前目标器官的结构特征(即第一目标结构集包括目标器官内的血管和目标器官),第二分割影像包括术中目标器官、术中病灶、术中不可介入区域/所有重要器官的结构特征(即第二目标结构集包括目标器官、病灶、不可介入区域/所有重要器官)为例,对配准过程进行示例性描述。可以理解的是,病灶的结构特征并不限于包括在第二分割影像中,在其他实施例中,病灶的结构特征也可以包括在第一分割影像中,或者病灶的结构特征同时包括在第一分割影像和第二分割影像中。
图12是本说明书一些实施例中所示的对第一分割影像与第二分割影像进行配准过程的示例性流程图。
步骤231B,对第一分割影像与第二分割影像进行配准,确定配准形变场。
配准可以是通过空间变换使第一分割影像与第二分割影像的对应点达到空间位置和解剖位置一致的图像处理操作。配准形变场可以用于反映第一分割影像和第二分割影像的空间位置变化。在一些实施例中,经过配准后,手术中扫描影像可以基于配准形变场进行空间位置的变换,以使变换后的手术中扫描影像与手术前增强影像在空间位置和解剖位置上完全一致。
图13至图14是本说明书一些实施例中所示的确定配准形变场过程的示例性流程图。图15是本说明书一些实施例中所示的经过分割得到第一分割影像、第二分割影像的示例性演示图。
在一些实施例中,步骤231B中,对第一分割影像与第二分割影像进行配准,确定配准形变场的过程,可以包括以下几个子步骤:
子步骤2311B,基于元素之间的配准,确定第一初步形变场。
在一些实施例中,元素可以是第一分割影像、第二分割影像的元素轮廓(例如,器官轮廓、血管轮廓、病灶轮廓)。元素之间的配准可以是指元素轮廓(掩膜)所覆盖的影像区域之间的配准。例如图14和图15中的手术前增强影像经过分割后得到目标器官(如靶器官)的器官轮廓A所覆盖的影像区域(左下图中虚线区域内灰度相同或基本相同的区域)、手术中扫描影像中经过分割得到目标器官(如靶器官)的器官轮廓B所覆盖的影像区域(右下图中虚线区域内灰度相同或基本相同的区域)。
在一些实施例中,通过器官轮廓A所覆盖的影像区域与器官轮廓B所覆盖的影像区域之间的区域配准,得到第一初步形变场(如图14中的形变场1)。在一些实施例中,第一初步形变场可以是局部形变场。例如,通过肝脏术前轮廓A与术中轮廓B得到关于肝脏轮廓的局部形变场。
子步骤2312B,基于元素之间的第一初步形变场,确定全图的第二初步形变场。
全图可以是包含元素的区域范围影像,例如,目标器官为肝脏时,全图可以是整个腹腔范围的影像。又例如,目标器官为肺时,全图可以是整个胸腔范围的影像。
在一些实施例中,可以基于第一初步形变场,通过插值确定全图的第二初步形变场。在一些实施例中,第二初步形变场可以是全局形变场。例如,图14中,通过形变场1插值确定全图尺寸的形变场2。
子步骤2313B,基于全图的第二初步形变场,对浮动影像进行形变,确定浮动影像的配准图。
浮动影像可以是待配准的图像,例如,手术前增强影像、手术中扫描影像。例如,将手术中扫描影像配准到手术前增强影像时,浮动影像为手术中扫描影像。可以通过配准形变场对手术中扫描影像进行配准,以使其与手术前增强影像空间位置一致。又例如,将手术前增强影像配准到手术中扫描影像时,浮动影像为手术前增强影像。可以通过配准形变场对手术前增强影像进行配准,以使其与手术中扫描影像空间位置一致。浮动影像的配准图可以是配准过程中得到的中间配准结果的图像。以手术前增强影像配准到手术中扫描影像为例,浮动影像的配准图可以是配准过程中得到的中间手术中扫描影像。为了便于理解,本说明书实施例以手术前增强影像配准到手术中扫描影像为例,对配准过程进行详细说明。
在一些实施例中,如图14所示,基于获取到的全图的形变场2,对浮动影像,即手术前增强影像进行形变,确定手术前增强影像的配准图,即中间配准结果的手术中扫描影像。例如,如图14所示,基于获取到肝脏所处腹腔范围的形变场,对手术前增强影像(腹腔增强影像)进行形变,获取到其配准图。
子步骤2314B,对浮动影像的配准图与参考图像中第一灰度差异范围的区域进行配准,得到第三初步形变场。
在一些实施例中,参考图像是指配准前的目标图像,也可以称为未进行配准的目标图像。例如,手术前增强影像配准到手术中扫描影像时,参考图像是指未进行配准动作的手术中扫描影像。在一些实施例中,第三初步形变场可以是局部形变场。在一些实施例中,子步骤2314B可以按以下方式实施:对浮动影像的配准图和参考图像的不同区域分别进行像素灰度计算,获得相应灰度值;计算浮动图像的配准图的灰度值与参考图像的对应区域的灰度值之间的差值;所述差值在第一灰度差异范围时,分别将浮动影像的配准图与参考图像的对应区域进行弹性配准,获得第三初步形变场。在一些实施例中,所述差值在第一灰度差异范围时,可以表示浮动影像的配准图中的一个区域与参考图像中对应区域的差异不大或比较小。例如,第一灰度差异范围为0至150,浮动影像的配准图中区域Q1与参考图像中同一区域的灰度差值为60,浮动影像的配准图中区域Q2与参考图像中同一区域的灰度差值为180,则两个图像(即浮动影像的配准图和参考图像)的区域Q1的差异不大,而区域Q2的差异较大,仅对两个图像中的区域Q1进行配准。在一些实施例中,如图14所示,对浮动图像的配准图与参考图像中的符合第一灰度差异范围的区域(差异不太大的区域)进行弹性配准,得到形变场3(即上述的第三初步形变场)。
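上述按灰度差值划分第一/第二灰度差异范围区域的计算,可以用如下草图示意(仅为假设实现,阈值150沿用文中示例值):

```python
import numpy as np

def gray_difference_regions(warped, reference, thresh=150.0):
    """按灰度差值划分区域:差值小于阈值的属于第一灰度差异范围,否则属于第二灰度差异范围。"""
    diff = np.abs(warped.astype(np.float32) - reference.astype(np.float32))
    first_range = diff < thresh     # 差异不大的区域:先做弹性配准,得到形变场3
    second_range = ~first_range     # 差异较大的区域:留待形变场4处理
    return first_range, second_range
```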
子步骤2315B,基于第三初步形变场,确定全图的第四初步形变场。
在一些实施例中,基于第三初步形变场,进行插值获得全图的第四初步形变场。在一些实施例中,第四初步形变场可以是全局形变场。在一些实施例中,可以通过该步骤,由局部的第三初步形变场得到全局的第四初步形变场。例如,图14中,通过形变场3插值确定全图尺寸的形变场4。
子步骤2316B,基于第四初步形变场,对第二灰度差异范围的区域进行配准,获得最终配准的配准图。
在一些实施例中,第二灰度差异范围的区域可以是浮动影像的配准图灰度值与参考图像灰度值相比,灰度值差值较大的区域。在一些实施例中,可以设置一个灰度差异阈值(如灰度差值阈值为150),浮动影像的配准图灰度值与参考图像灰度值的差值小于灰度差异阈值的区域为第一灰度差异范围的区域,大于灰度差异阈值的则属于第二灰度差异范围的区域。
在一些实施例中,最终配准的配准图可以是基于至少一个形变场对浮动影像(例如,手术前增强影像)进行多次形变,获得最终与手术中扫描影像空间位置、解剖位置相同的图像。在一些实施例中,如图14所示,基于第四初步形变场,对第二灰度差异范围(即灰度差异比较大)的区域进行配准,获得最终配准的配准图。例如,灰度值差异比较大的脾脏之外的区域,针对该区域通过形变场4进行形变,获得最终的配准图。
在一些实施例中,利用图13-图14中所描述的配准方法,可以将浮动影像中进行了分割,并且参考图像中没有分割的元素(例如,靶器官内的血管),从浮动影像中映射到参考图像中。以浮动影像是手术前增强影像,参考图像是手术中扫描影像为例,靶器官内的血管在手术前增强影像中进行了分割,并且在手术中扫描影像中没有分割,通过配准可以将靶器官内的血管映射到手术中扫描影像。可以理解的是,对于快速分割模式下的不可介入区域以及精准分割模式下的所有重要器官的配准也可以采用图13-图14的配准方法,或者仅通过对应分割方法也可以实现类似的效果。
步骤232B,基于配准形变场和手术前增强影像中的第一目标结构集中的至少部分元素的空间位置,确定手术中相应元素的空间位置。在一些实施例中,可以基于配准形变场和手术前增强影像中的目标器官内的血管,确定手术中目标器官内的血管(以下简称为血管)的空间位置。
在一些实施例中,可以基于下述公式(1),根据配准形变场和手术前增强影像中的血管,确定手术中血管的空间位置:

$$I_Q'(x,y,z)=I_Q\big((x,y,z)+u(x,y,z)\big) \tag{1}$$

其中,$I_Q$表示手术前增强影像,$(x,y,z)$表示血管的三维空间坐标,$u(x,y,z)$表示由手术前增强影像到手术中扫描影像的配准形变场,$I_Q'(x,y,z)$表示血管在手术中扫描影像中的空间位置。在一些实施例中,$u(x,y,z)$也可以理解为浮动图像中元素(例如,靶器官内的血管)的三维坐标至最终配准的配准图中的三维坐标的偏移。
由此,可以通过步骤232B中确定的配准形变场,对手术前增强影像中的血管进行形变,生成与其空间位置相同的手术中血管的空间位置。
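下述草图示意了按公式(1)的思路,用配准形变场把术前血管点坐标映射到手术中影像(仅为假设实现,偏移场的存储布局为本示例约定):

```python
import numpy as np

def warp_points(points, u):
    """points: (N, 3)的血管三维坐标;u: (D, H, W, 3)的逐体素偏移场。
    输出即 (x, y, z) + u(x, y, z),对应血管在手术中扫描影像中的空间位置。"""
    idx = np.clip(np.round(points).astype(int), 0,
                  np.array(u.shape[:3]) - 1)        # 取整并防止采样越界
    offsets = u[idx[:, 0], idx[:, 1], idx[:, 2]]    # 各点处的形变偏移 u(x, y, z)
    return points + offsets
```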
在一些实施例中,可以基于确定的手术中血管和病灶(包括在手术中扫描影像的第二分割影像中)的空间位置,计算病灶中心点,并生成病灶周边安全区域和潜在进针区域。在一些实施例中,可以根据确定的可介入区域、不可介入区域,确定病灶周边安全区域和潜在进针区域。在一些实施例中,可以根据潜在进针区域及基本避障约束条件,规划由经皮进针点到病灶中心点之间的基准路径。在一些实施例中,基本避障约束条件可以包括但不限于路径的入针角度、路径的入针深度、路径与血管及重要脏器之间不相交等。
应当注意的是,上述有关流程200B、流程300的描述仅仅是为了示例和说明,而不限定本说明书的适用范围。对于本领域技术人员来说,在本说明书的指导下可以对流程200B、流程300进行各种修正和改变。
在一些实施例中,在快速分割模式下对手术中扫描影像进行分割得到不可介入区域的过程中,需要将不可穿刺区域(如胸腹区域中除去脂肪区域后的部分区域,也即是胸腹去脂区域)与靶器官的分割结果进行融合,此过程中可能会出现不可穿刺区域掩膜附着在靶器官掩膜表面,从而影响穿刺路径的规划,同时还会造成大量连通区域的出现。这种情况下,需要一种基于快速分割的掩膜融合方法,能够在不改变靶器官原始掩膜以及不可介入区域主体掩膜的条件下,解决靶器官掩膜表面附着不可介入区域掩膜和连通区域过多的问题,最后能够得到准确的快速分割结果以用于穿刺规划,具体方法参见下文描述。
图16是根据本说明书一些实施例所示的介入手术影像辅助方法的示例性流程图。如图16所示,流程1600可以包括:
步骤1610,对医学影像中的目标结构集进行分割,得到去脂掩膜。在一些实施例中,步骤1610可以由分割模块2320执行。
在一些实施例中,对医学影像中的目标结构集(例如,手术中扫描影像的第二目标结构集)进行分割,可以得到脂肪掩膜和胸腹掩膜。进一步地,可以基于脂肪掩膜和胸腹掩膜得到去脂掩膜。在一些实施例中,可以将胸腹掩膜与脂肪掩膜做差,即胸腹掩膜去掉脂肪掩膜,从而得到胸腹的去脂掩膜。在一些实施例中,由于靶器官是位于胸腹区域范围内的,因而,靶器官掩膜也会位于胸腹掩膜范围内,基于胸腹掩膜与脂肪掩膜得到的去脂掩膜的区域内可以包括靶器官掩膜。在一些实施例中,基于胸腹掩膜与脂肪掩膜得到的去脂掩膜的区域内也可以不包括靶器官掩膜,例如,先将胸腹掩膜与靶器官掩膜做差(即胸腹掩膜去掉靶器官掩膜)得到胸腹去靶掩膜,再基于胸腹去靶掩膜与脂肪掩膜得到去脂掩膜。在一些实施例中,对医学影像中的目标结构集进行分割得到去脂掩膜的过程可以是快速分割模式下的分割过程(也叫快速分割过程)。
在一些实施例中,可以对去脂掩膜与靶器官掩膜进行融合,得到处理后的去脂掩膜。在一些实施例中,可以利用腐蚀操作和膨胀操作对去脂掩膜与靶器官掩膜进行融合。
在一些实施例中,对去脂掩膜与靶器官掩膜进行融合时,为了保证去脂掩膜不发生变形、减少假阳性区域、避免靶器官掩膜过分割以及去脂掩膜附着在靶器官掩膜表面,可以采用掩膜融合算法对去脂掩膜与靶器官掩膜进行融合,从而得到处理后的去脂掩膜,具体参见图17A-图20,及其相关描述。
图17A是根据本说明书一些实施例所示的掩膜融合算法的示例性流程图。图17A所示的掩膜融合算法可以按照步骤1620、步骤1630以及步骤1640中描述的方法实施。
步骤1620,确定靶器官掩膜在预设范围内与去脂掩膜的第一交集,基于第一交集对靶器官掩膜和去脂掩膜的区域进行调整。在一些实施例中,步骤1620可以由结果确定模块2330执行。
预设范围可以是元素掩膜(例如,靶器官掩膜、去脂掩膜)的检测范围。在一些实施例中,预设范围可以包括靶器官掩膜的边缘范围。靶器官掩膜的边缘范围可以是与靶器官掩膜边界相接的掩膜区域。例如,靶器官掩膜位于去脂掩膜的区域范围内,靶器官掩膜的边缘范围可以是去脂掩膜中与靶器官掩膜边界相接的部分区域。在预设范围的区域内,可以对靶器官掩膜和去脂掩膜的区域进行调整。掩膜区域的调整可以包括掩膜区域的扩展和收缩。第一交集是指预设范围内的靶器官掩膜与去脂掩膜重叠部分的掩膜值。图17B是根据本说明书一些实施例所示的确定第一交集并基于第一交集对元素掩膜进行调整的示例性流程图。在一些实施例中,结合图17A和图17B,步骤1620可以包括以下子步骤:
子步骤1621,对靶器官掩膜进行检测。
对靶器官掩膜进行检测可以是对靶器官掩膜进行边缘检测。例如,检测靶器官掩膜的边界。靶器官掩膜的边界可以包括靶器官掩膜的边缘点信息。在一些实施例中,可以对靶器官掩膜的边缘进行检测,确定靶器官掩膜的边缘点(例如,图17A中对靶器官掩膜进行边缘检测)。具体地,可以对靶器官掩膜的像素点进行检测,若检测到靶器官掩膜的区域范围(例如,预设范围)内的像素点在三维空间的6个方向中的任一方向存在空值,则可以确定该像素点为边缘点。这是由于靶器官掩膜是由具有相同像素点数值构成的几何掩膜,在边缘情况下,边缘像素点的相邻区域为空值。
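上述基于三维空间6个方向判断边缘点的过程,可以用如下草图示意(仅为假设实现;np.roll在体数据边界处会产生回卷,实际实现需另作边界填充处理):

```python
import numpy as np

def edge_points(mask):
    """三维6个方向中任一方向相邻为空值(背景)的前景像素点即为边缘点。"""
    m = mask.astype(bool)
    interior = np.ones_like(m)
    for axis in (0, 1, 2):
        for shift in (1, -1):
            interior &= np.roll(m, shift, axis=axis)   # 6个相邻方向均为前景才是内部点
    return m & ~interior                                # 前景中剔除内部点,剩下边缘点
```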
子步骤1622,基于检测结果,确定靶器官掩膜在第一预设范围内与去脂掩膜的第一交集,其中,第一预设范围根据第一预设参数确定。
第一预设范围可以是元素掩膜(例如,靶器官掩膜、去脂掩膜)的第一检测范围。可以对靶器官掩膜的第一预设范围内的像素点进行检测,判断该像素点是否为边缘点。在一些实施例中,可以判断靶器官掩膜的边缘点在第一预设范围内是否存在去脂掩膜值,若存在,则将该去脂掩膜值进行记录。类似的,可以对靶器官掩膜的所有边缘点进行判断,并记录符合条件的所有去脂掩膜值。进一步地,根据记录的去脂掩膜值确定靶器官掩膜的边缘点在第一预设范围内与去脂掩膜的第一交集。第一交集可以是记录的全部去脂掩膜值,以及每个去脂掩膜值所对应的靶器官掩膜的边缘点所构成的区域。在一些实施例中,第一交集的区域中,可以理解为是去脂掩膜附着在靶器官掩膜表面。在一些实施例中,确定靶器官掩膜的边缘在第一预设范围内与去脂掩膜的第一交集,可以是图17A中所示的对靶器官掩膜的边缘与去脂掩膜进行“边缘周边搜索”。
在一些实施例中,第一预设范围可以根据第一预设参数确定。在一些实施例中,可以通过调整第一预设参数来调整靶器官掩膜基于第一交集进行调整的幅度。例如,第一预设参数设置得较小(例如,3像素点),可以使得靶器官掩膜区域的调整幅度较小。在一些实施例中,第一预设参数可以是通过实验观察得到的常数值。仅作为示例,第一预设参数可以为3至4像素点。在一些实施例中,第一预设参数也可以根据历史数据和/或大数据进行合理设置。例如,一些实施例中,可以收集大数据和/或历史数据中不同第一预设参数下分别确定的第一预设范围,以及该第一预设范围下所得到的第一交集的范围,而第一交集的范围能够影响后续靶器官掩膜和去脂掩膜的调整幅度,进而影响最终调整后的去脂掩膜的区域,因此,可以确定不同第一预设参数分别对应的最终调整后的去脂掩膜的区域,从而确定具有更为精准清晰的去脂掩膜的区域所对应的第一预设参数的范围。此外,还可以根据医生的反馈对第一预设参数的范围进行优化调整,使得在调整后的第一预设参数的范围下能够得到更为精准的去脂掩膜。在一些实施例中,确定某掩膜(记为第一目标掩膜)调整的第一预设参数时,可以先在大数据和/或历史数据中搜寻并确定与第一目标掩膜具有类似条件(例如,类似的调整幅度)的多个掩膜(记为第一对比掩膜),然后计算第一目标掩膜和各个第一对比掩膜的相似度,以挑选出与第一目标掩膜相似度较高的第一对比掩膜,从而根据第一对比掩膜的第一预设参数确定第一目标掩膜的第一预设参数。在一些实施例中,第一预设参数可以基于大数据和/或历史数据,并使用机器学习模型进行确定。在一些实施例中,机器学习模型的输入可以是第一调整参数。第一调整参数可以包括但不限于第一交集的调整幅度、靶器官掩膜的调整幅度、去脂掩膜的调整幅度等中的一种或多种。机器学习模型的输出可以是第一预设参数。机器学习模型可以基于大数据和/或历史数据通过训练获得。例如,将大数据和/或历史数据作为训练样本对初始机器学习模型进行训练获得该机器学习模型。在一些实施例中,第一预设参数的取值还可以根据患者信息(例如,性别、年龄、身体状况等)、同一患者的不同器官进行调整,以适应于临床情况。
子步骤1623,基于第一交集,对靶器官掩膜和去脂掩膜的区域进行第一次调整。
靶器官掩膜区域的第一次调整可以是对靶器官掩膜的区域进行扩展。去脂掩膜区域的第一次调整可以是对去脂掩膜的区域进行收缩。在一些实施例中,可以以第一交集的边缘为界限,对靶器官掩膜和去脂掩膜的区域进行调整。在一些实施例中,可以以记录的全部去脂掩膜值构成的第一交集的边缘为界限,对靶器官掩膜的区域进行扩展;以记录的每个去脂掩膜值对应的靶器官掩膜的边缘点构成的第一交集的边缘为界限,对去脂掩膜的区域进行收缩。在一些实施例中,基于第一交集对靶器官掩膜和去脂掩膜的区域的第一次调整可以是局部的。例如,仅对靶器官掩膜的边缘区域进行扩展,以及去脂掩膜 的边缘区域进行收缩,而靶器官掩膜主体和去脂掩膜主体区域保持不变。
在一些实施例中,通过对靶器官掩膜的区域进行扩展以及对去脂掩膜的区域进行收缩,可以将大部分假阳性区域与掩膜主体(例如,去脂掩膜主体、靶器官掩膜主体)之间的连接断开,同时还能使去脂掩膜的过分割趋于正常。
子步骤1624,确定第一次调整后的靶器官掩膜在第二预设范围内与第一次调整后的去脂掩膜的第二交集,其中,第二预设范围根据第二预设参数确定。
第二预设范围可以是元素掩膜(例如,靶器官掩膜、去脂掩膜)的第二检测范围。在一些实施例中,可以判断第一次调整(如扩展)后的靶器官掩膜的边缘点在第二预设范围内是否存在第一次调整(如收缩)后的去脂掩膜值,若存在,则将该去脂掩膜值进行记录。类似的,对第一次调整(如扩展)后的靶器官掩膜的所有边缘点进行判断,并记录符合条件的所有去脂掩膜值。进一步地,根据记录的去脂掩膜值确定第一次调整后的靶器官掩膜在第二预设范围内与第一次调整(如收缩)后的去脂掩膜的第二交集。第二交集可以是记录的全部去脂掩膜值,以及每个去脂掩膜值所对应的第一次调整后的靶器官掩膜的边缘点所构成的区域。
在一些实施例中,第二预设范围可以根据第二预设参数确定。在一些实施例中,可以通过调整第二预设参数来调整去脂掩膜基于第二交集进行调整的幅度。例如,第二预设参数设置得较小(例如,1像素点),可以使得去脂掩膜区域的调整幅度较小。在一些实施例中,第二预设参数可以小于等于第一预设参数。对应的,第二预设范围小于等于第一预设范围。在一些实施例中,第二预设参数可以是通过实验观察得到的常数值。仅作为示例,第二预设参数可以为1-2像素点。在一些实施例中,第二预设参数也可以根据机器学习和/或大数据进行合理设置。例如,一些实施例中,可以收集大数据和/或历史数据中不同第二预设参数下分别确定的第二预设范围,以及该第二预设范围下所得到的第二交集的范围,而第二交集的范围能够影响后续靶器官掩膜和去脂掩膜的调整幅度,进而影响最终调整后的去脂掩膜的区域,因此,可以确定不同第二预设参数分别对应的最终调整后的去脂掩膜的区域,从而能够确定具有更为精准的去脂掩膜的区域所对应的第二预设参数的范围。此外,还可以根据医生的反馈对第二预设参数的范围进行优化调整,使得在调整后的第二预设参数的范围下能够得到更为精准的去脂掩膜。在一些实施例中,确定某掩膜(记为第二目标掩膜)调整的第二预设参数时,可以先在大数据和/或历史数据中搜寻并确定与第二目标掩膜具有类似条件(例如,类似的调整幅度)的多个掩膜(记为第二对比掩膜),然后计算第二目标掩膜和各个第二对比掩膜的相似度,以挑选出与第二目标掩膜相似度较高的第二对比掩膜,从而根据第二对比掩膜的第二预设参数确定第二目标掩膜的第二预设参数。在一些实施例中,第二预设参数可以基于大数据和/或历史数据,并使用机器学习模型进行确定。在一些实施例中,机器学习模型的输入可以是第二调整参数。第二调整参数可以包括但不限于第二交集的调整幅度、靶器官掩膜的调整幅度、去脂掩膜的调整幅度等中的一种或多种。机器学习模型的输出可以是第二预设参数。机器学习模型可以基于大数据和/或历史数据通过训练获得。例如,将大数据和/或历史数据作为训练样本对初始机器学习模型进行训练获得该机器学习模型。在一些实施例中,第二预设参数的取值还可以根据患者信息(例如,性别、年龄、身体状况等)、同一患者的不同器官进行调整,以适应于临床情况。
子步骤1625,基于第二交集,对第一次调整后的靶器官掩膜和第一次调整后的去脂掩膜的区域进行第二次调整。
对第一次调整后的靶器官掩膜的区域进行第二次调整,可以是对第一次调整后的靶器官掩膜的区域进行第二次扩展。对第一次调整后的去脂掩膜的区域进行第二次调整,可以是对第一次调整后的去脂掩膜的区域进行第二次收缩。在一些实施例中,可以以第二交集的边缘为界限,对第一次调整后的靶器官掩膜和第一次调整后的去脂掩膜的区域进行第二次调整。在一些实施例中,可以以记录的全部去脂掩膜值构成的第二交集的边缘为界限,对扩展后的靶器官掩膜的区域进行第二次扩展;以记录的每个去脂掩膜值对应的扩展后的靶器官掩膜的边缘点构成的第二交集的边缘为界限,对收缩后的去脂掩膜进行第二次收缩。在一些实施例中,基于第二交集对第一次调整后的靶器官掩膜和第一次调整后的去脂掩膜的区域进行的第二次调整可以是全局的。例如,第一次扩展后的靶器官掩膜整体进行第二次扩展,第一次收缩后的去脂掩膜整体进行第二次收缩。可以理解的是,元素掩膜(如,靶器官掩膜、去脂掩膜)的全局调整会改变元素掩膜的主体区域。在一些实施例中,子步骤1624和子步骤1625中描述的操作可以视为图17A中的"开操作",通过对第一次扩展后的靶器官掩膜的区域进行第二次扩展,以及对第一次收缩后的去脂掩膜进行第二次收缩,可以进一步将子步骤1623中剩余的假阳性区域与掩膜主体(例如,去脂掩膜主体、靶器官掩膜主体)之间的连接断开。
在一些实施例中,掩膜区域的调整幅度可以影响掩膜中的其他结构组织。例如,对去脂掩膜区域进行收缩时,可能会影响去脂掩膜中的血管,若去脂掩膜区域收缩程度较大,可能导致收缩后的去脂掩膜中血管缺失。因此,可以通过合理设置第一预设参数和第二预设参数,来控制靶器官掩膜和去脂掩膜区域的调整幅度,从而减小对掩膜中其他结构组织的影响。仅作为示例,第一预设参数可以设置为3像素点,第二预设参数设置为1像素点,这种设置方式下能够保证靶器官掩膜和去脂掩膜区域的调整幅度较小,降低对掩膜中其他结构组织(如,血管)的影响,同时还能将假阳性区域与掩膜主体之间的连接断开以及减少噪点区域。
步骤1630,对调整后的去脂掩膜进行连通域处理。在一些实施例中,步骤1630可以由结果确定模块2330执行。
在一些实施例中,靶器官掩膜和去脂掩膜的区域进行第一次调整和/或第二次调整后(即,靶器官掩膜的区域进行第一次扩展和/或第二次扩展,以及去脂掩膜的区域进行第一次收缩和/或第二次收缩后),断开了假阳性区域与掩膜主体(例如,去脂掩膜主体、靶器官掩膜主体)之间的连接,但是同时也会使得去脂掩膜与靶器官掩膜之间出现较多噪点区域,需要进一步进行处理,以清除噪点区域。
在一些实施例中,可以对调整后的去脂掩膜和调整后的靶器官掩膜周边的连通域进行判断,若连通域为有效连通域,则保留该连通域;反之,舍弃该连通域,从而清除噪点区域。连通域,即连通区域,一般是指影像中具有相同像素值且位置相邻的前景像素点组成的影像区域。图18是根据本说明书一些实施例所示的对去脂掩膜进行连通域处理的示例性流程图。可以理解的是,图18中涉及的去脂掩膜可以是指经过调整,如收缩后的去脂掩膜,靶器官掩膜是指经过调整,如扩展后的靶器官掩膜。在一些实施例中,结合图17A和图18,步骤1630可以包括以下几个子步骤:
子步骤1631,判断靶器官掩膜的定位信息与伪去脂连通域的定位信息是否重叠;
子步骤1632,当判断结果为不重叠时,伪去脂连通域标识为属于去脂掩膜;
子步骤1633,当判断结果为重叠时,根据伪去脂连通域的面积与预设面积阈值的大小关系判定伪去脂连通域是否应属于去脂掩膜。
伪去脂连通域是指靶器官掩膜周边范围(例如,预设范围)内的连通域。在一些实施例中,去脂掩膜区域可能不是一个整体连接的连通域,而是由彼此不相连的多个连通域组成的区域。彼此不相连的多个连通域中的部分连通域可能分布在靶器官掩膜区域的周边范围,可以将这些连通域称为伪去脂连通域。
定位信息可以包括元素掩膜的外接矩形(也就是图17A中的“边界框”)的位置信息。例如,外接矩形的边框线的坐标信息。在一些实施例中,元素掩膜的外接矩形,覆盖元素的定位区域。在一些实施例中,外接矩形可以以外接矩形框的形式显示在医学影像中。在三维空间中,外接矩形以外接立方体的形式框出元素掩膜。在一些实施例中,外接矩形可以是基于元素中连通区域的各方位的底边缘,例如,连通区域在三维空间的6个方位上的底边缘,来构建相对于元素掩膜的外接矩形框。
在一些实施例中,可以基于伪去脂连通域中的每个连通域的定位信息(例如,边界框),来判断对应的连通域(也就是伪去脂连通域中的一个连通域)的定位信息与靶器官掩膜的定位信息是否重叠。在一些实施例中,当靶器官掩膜边界框与伪去脂连通域边界框不重叠时,可以认为该伪去脂连通域不是噪点区域。此时,可以将该伪去脂连通域标识为属于去脂掩膜。当靶器官掩膜边界框与伪去脂连通域边界框重叠时,可以认为该伪去脂连通域是噪点区域。此时,可以根据伪去脂连通域的面积与预设面积阈值的大小关系判定伪去脂连通域是否应属于去脂掩膜。可以理解的是,伪去脂连通域的面积是指单个连通域的面积。在一些实施例中,当伪去脂连通域的面积大于预设面积阈值时,可以将伪去脂连通域标识为属于去脂掩膜;当伪去脂连通域的面积小于等于预设面积阈值时,可以将该伪去脂连通域标识为不属于去脂掩膜。在一些实施例中,标识为不属于去脂掩膜的连通域可以判定为假阳性区域(即噪点区域)。在一些实施例中,预设面积阈值可以是通过实验观察得到的常数值。在一些实施例中,医学影像为三维医学影像时,预设面积阈值可以位于10000体素点至1000000体素点的范围内。仅作为示例,预设面积阈值可以为20000体素点、50000体素点、100000体素点等。在一些实施例中,医学影像为二维医学影像时,预设面积阈值可以位于100像素点至10000像素点的范围内。仅作为示例,预设面积阈值可以为300像素点、1000像素点、5000像素点等。在一些实施例中,预设面积阈值可以根据靶器官掩膜和/或去脂掩膜的大小来进行合理设置。在一些实施例中,预设面积阈值也可以根据机器学习和/或大数据进行合理设置,在此不做进一步限定。
在一些实施例中,可以根据标识结果对伪去脂连通域进行处理。标识结果可以包括保留标识和/或舍弃标识。保留标识可以表示属于去脂掩膜的伪去脂连通域。属于去脂掩膜的伪去脂连通域可以被保留。舍弃标识可以表示不属于去脂掩膜的伪去脂连通域。不属于去脂掩膜的伪去脂连通域可以被舍弃。在一些实施例中,也可以手动调整伪去脂连通域的标识结果。例如,医生可以根据临床经验对伪去脂连通域的标识进行调整。仅作为示例,若医生根据临床经验判断标识为属于去脂掩膜的伪去脂连通域为假阳性区域,则可以通过手动调整以舍弃该伪去脂连通域。
在一些实施例中,当靶器官掩膜与伪去脂连通域重叠,且重叠区域的面积很小时,可以不考虑伪去脂连通域的面积与预设面积阈值的大小关系,直接舍弃该伪去脂连通域。
需要说明的是,图18中所述的靶器官掩膜是指扩展后的靶器官掩膜,对应的,靶器官掩膜的边界框是指扩展后的靶器官掩膜边界框。
通过对去脂掩膜进行连通域处理,可以保留去脂掩膜的主体区域以及靶器官掩膜周边的有效连通区域(例如,面积大于预设面积阈值的连通域),去除噪点区域(例如,面积小于等于预设面积阈值的连通域),得到保留的去脂掩膜。
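上述连通域处理(边界框重叠判断与面积阈值判断)可以用如下草图示意(仅为假设实现,面积阈值沿用文中示例值,organ_bbox假定为扩展后靶器官掩膜外接矩形框的切片元组):

```python
import numpy as np
from scipy import ndimage

def filter_pseudo_fat_regions(defatted_mask, organ_bbox, area_thresh=20000):
    """对调整后的去脂掩膜做连通域处理:边界框不重叠的伪去脂连通域直接保留;
    重叠时按面积与预设面积阈值(示例取20000体素点)判定是否保留。"""
    labeled, _ = ndimage.label(defatted_mask)
    kept = np.zeros_like(defatted_mask, dtype=bool)
    for i, sl in enumerate(ndimage.find_objects(labeled), start=1):
        region = labeled == i
        overlaps = all(s.start < ob.stop and ob.start < s.stop
                       for s, ob in zip(sl, organ_bbox))   # 两边界框是否重叠
        if not overlaps or region.sum() > area_thresh:
            kept |= region                                 # 标识为属于去脂掩膜,保留
        # 否则判定为噪点区域(假阳性),舍弃
    return kept
```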
步骤1640,基于经过连通域处理的去脂掩膜,获得处理后的去脂掩膜。在一些实施例中,步骤1640可以由结果确定模块2330执行。
在一些实施例中,去脂掩膜经过步骤1620、步骤1630的操作后,断开了去脂掩膜与靶器官掩膜之间的连接以及清除了噪点区域,但此时得到的是调整(如收缩)后的去脂掩膜,因此,为了保证去脂掩膜的主体区域能够恢复到原来的形态,可以对去脂掩膜区域进行第三次调整,获得处理后的去脂掩膜。图19是根据本说明书一些实施例所示的基于经过连通域处理的去脂掩膜,获得处理后的去脂掩膜的示例性流程图。结合图17A和图19所示,步骤1640可以包括以下几个子步骤:
子步骤1641,对调整后的靶器官掩膜的边缘进行检测。
子步骤1642,基于检测结果,确定经过连通域处理的去脂掩膜与靶器官掩膜的相邻边界。
子步骤1643,基于相邻边界对经过连通域处理的去脂掩膜进行第三次调整,获得处理后的去脂掩膜。
图19中描述的靶器官掩膜是指经过第一次调整和/或第二次调整的靶器官掩膜,简单记为扩展后的靶器官掩膜;去脂掩膜是指经过第一次调整和/或第二次调整,以及连通域处理的去脂掩膜,简单记为经过连通域处理的去脂掩膜。在一些实施例中,对扩展后的靶器官掩膜进行检测时,若扩展后的靶器官掩膜范围内的像素点在三维空间的6个方向中的任一方向存在空值,则可以确定该像素点为边缘点。关于边缘检测的更多描述可以参见上文描述,在此不再赘述。
在一些实施例中,检测结果可以包括扩展后的靶器官掩膜的边缘点信息。相邻边界可以是经过连通域处理的去脂掩膜与扩展后的靶器官掩膜相邻的边界。在一些实施例中,可以根据扩展后的靶器官掩膜的检测结果和初始靶器官掩膜(未经过调整的靶器官掩膜)的检测结果确定相邻边界。在一些实施例中,可以对扩展后的靶器官掩膜的检测结果(例如,边缘点信息)与初始靶器官掩膜的检测结果进行差值计算,以确定相邻边界。基于相邻边界对经过连通域处理的去脂掩膜进行第三次调整,可以获得处理后的去脂掩膜(也称为不可介入区域掩膜)。在一些实施例中,经过连通域处理的去脂掩膜的第三次调整可以是对经过连通域处理的去脂掩膜进行扩展。经过连通域处理的去脂掩膜的第三次调整过程也可以称为去脂掩膜的还原过程。对经过连通域处理的去脂掩膜进行扩展可以得到处理后的去脂掩膜。处理后的去脂掩膜可以是具有与初始去脂掩膜(未进行调整的去脂掩膜)相同形态,且与初始靶器官掩膜之间不连接,且不存在噪点区域的去脂掩膜。
对去脂掩膜与靶器官掩膜进行融合得到处理后的去脂掩膜,相比于形态学腐蚀操作和膨胀操作,通过掩膜融合算法得到的处理后的去脂掩膜,可以保证处理后的去脂掩膜不发生变形、减少假阳性区域、同时还能避免靶器官掩膜过分割,从而得到更为精准的处理后的去脂掩膜。在一些实施例中,通过掩膜融合算法得到的处理后的去脂掩膜、初始靶器官掩膜以及靶器官内的血管掩膜可以作为快速分割结果。
需要说明的是,也可以利用掩膜融合算法对胸腹掩膜与靶器官掩膜进行融合处理,即胸腹掩膜不进行去脂操作而直接用于与靶器官掩膜融合,掩膜融合的具体处理方法参见上文描述,在此不再赘述。
图20是根据本说明书一些实施例所示的对去脂掩膜与靶器官掩膜进行融合的示例性效果对比图。如图20所示,左边上下分别为未采用掩膜融合算法的融合效果的横断面医学影像和立体医学影像,右边上下分别为采用掩膜融合算法的融合效果的横断面医学影像和立体医学影像。经过对比可知,基于掩膜融合算法对去脂掩膜与靶器官掩膜进行融合的结果显示,去除了左边影像中方框框出的假阳性区域(对比左上图和右上图),同时还避免了去脂掩膜发生变形(对比左下图和右下图)。
应当注意的是,上述有关流程1600的描述仅仅是为了示例和说明,而不限定本说明书的适用范围。对于本领域技术人员来说,在本说明书的指导下可以对流程1600进行各种修正和改变,然而,这些修正和改变仍在本说明书的保护范围之内。
在一些实施例中,为了提高医学影像的分割结果的准确性,还可以对医学影像中的目标器官进行分割,得到目标器官掩膜,并将目标器官掩膜与上述的快速分割结果掩膜进行融合。在一些实施例中,对医学影像中的目标器官进行分割得到目标器官掩膜的过程可以是精准分割模式下的分割过程(也叫精准分割过程)。图21是根据本说明书一些实施例所示的精准分割与快速分割结合的示例性流程图。如图21所示,流程2100可以包括:
步骤2110,获取操作指令。在一些实施例中,步骤2110可以由获取模块2310执行。
操作指令可以包括模式指令和数据指令。模式指令可以是指示是否需要进行精准分割的输入指令,即是否需要对医学影像中的目标器官进行分割。目标器官可以是医学影像中的重要器官/组织,例如,血管、骨骼等。在一些实施例中,模式指令为不需要进行精准分割时,医学影像处理系统输出的结果为快速分割结果。在一些实施例中,模式指令为需要进行精准分割时,医学影像处理系统输出的结果为快速分割结果与精准分割结果的融合结果。在一些实施例中,当需要对医学影像中的目标器官进行分割时,可以通过数据指令来选择需要进行分割的器官(即,目标器官)。
步骤2120,根据操作指令对医学影像中的至少一个目标器官进行分割,得到至少一个目标器官掩膜。在一些实施例中,步骤2120可以由分割模块2320执行。
在一些实施例中,可以选择至少一个目标器官,分割模块2320对选定的目标器官进行分割,从而得到目标器官掩膜。在一些实施例中,对目标器官进行分割的分割方法与快速分割的图像分割方法相同,例如,可以包括阈值分割方法、区域生长方法或水平集方法,或者,也可以利用基于深度学习卷积网络的方法。关于分割方法的更多描述可以参见快速分割中涉及的分割方法,在此不再赘述。
可以理解的是,快速分割和精准分割的主要区别之处在于,分割的器官和/或组织不同以及得到的分割结果不同。在精准分割下,可以对医学影像的所有器官进行分割,得到单个器官掩膜,分割的影像内容更加细致准确。在快速分割下,则只将不可介入区域整体分割出来,得到不可介入区域掩膜,分割的效率更高。因此,将快速分割与精准分割结合起来,可以在保证分割效率足够高的同时,还能提高分割结果的准确性。例如,将快速分割与精准分割相结合,可以在快速分割模式下进行分割得到不可介入区域掩膜,在精准模式下选择性的精准分割某一特定器官以得到该特定器官的分割掩膜,进而将选择分割的特定器官的分割掩膜与不可介入区域掩膜进行融合(也即是将精准分割的分割结果显示在快速分割结果上)。
步骤2130,确定至少一个目标器官掩膜在第一预设范围内与快速分割结果掩膜的第三交集,基于第三交集对至少一个目标器官掩膜与快速分割结果掩膜的区域进行调整,其中,快速分割结果至少包括处理后的去脂掩膜。在一些实施例中,步骤2130可以由结果确定模块2330执行。
在一些实施例中,确定第三交集的方法与确定第一交集的方法类似。例如,在一些实施例中,可以对目标器官掩膜进行检测;基于检测结果确定目标器官掩膜在第一预设范围内与快速分割结果掩膜的第三交集;基于第三交集对目标器官掩膜和快速分割结果掩膜的区域进行第一次调整。例如,可以基于第三交集对目标器官掩膜的区域进行扩展,对快速分割结果掩膜的区域进行收缩。对目标器官掩膜和快速分割结果掩膜的区域进行的第一次调整可以是局部的。在一些实施例中,类似于第一交集,第三交集可以是记录的全部快速分割结果掩膜值,以及每个快速分割结果掩膜值对应的目标器官掩膜的边缘点所构成的区域。关于确定第三交集的方法的更多内容可以参见前文描述(例如,图16中步骤1620及其相关描述)。
目标器官掩膜进行扩展以及快速分割结果掩膜进行收缩后,可以断开假阳性区域与掩膜主体(例如,快速分割结果掩膜主体、目标器官掩膜主体)之间的连接。
步骤2140,对调整后的快速分割结果掩膜进行连通域处理。在一些实施例中,步骤2140可以由结果确定模块2330执行。
在一些实施例中,对调整后的快速分割结果掩膜进行连通域处理与图16中步骤1630描述的对调整后的去脂掩膜进行连通域处理的方法基本相同。在一些实施例中,对调整后的快速分割结果掩膜进行连通域处理可以按照以下方式实施:判断目标器官掩膜的定位信息与伪快速分割结果连通域的定位信息是否重叠;当判断结果为不重叠时,伪快速分割结果连通域标识为属于快速分割结果掩膜;当判断结果为重叠时,根据伪快速分割结果连通域的面积与预设面积阈值的大小关系判定伪快速分割结果连通域是否应属于快速分割结果掩膜。若伪快速分割结果连通域的面积大于预设面积阈值,将该伪快速分割结果连通域标识为属于快速分割结果掩膜,保留该伪快速分割结果连通域;当伪快速分割结果连通域的面积小于等于预设面积阈值时,将该伪快速分割结果连通域标识为假阳性区域(即噪点区域),舍弃该伪快速分割结果连通域。在一些实施例中,当目标器官掩膜与伪快速分割结果连通域重叠,且重叠区域的面积很小时,可以不考虑伪快速分割结果连通域的面积与预设面积阈值的大小关系,舍弃该伪快速分割结果连通域。伪快速分割结果连通域是指目标器官掩膜周边范围(例如,第一预设范围)内的连通域。在一些实施例中,快速分割结果掩膜区域可能不是一个整体连接的连通域,而是由彼此不相连的多个连通域组成的区域。彼此不相连的多个连通域中的部分连通域可能分布在目标器官掩膜区域的周边范围,可以将这些连通域称为伪快速分割结果连通域。
通过对快速分割结果掩膜进行连通域处理,可以保留快速分割结果掩膜的主体区域以及目标器官掩膜周边的有效连通域(即,面积大于预设面积阈值的连通域),去除噪点区域(即面积小于等于预设面积阈值的连通域)。
步骤2150,基于经过连通域处理的快速分割结果掩膜,获得处理后的快速分割结果掩膜。在一些实施例中,步骤2150可以由结果确定模块2330执行。
基于经过连通域处理的快速分割结果掩膜获得处理后的快速分割结果掩膜,可以按照以下方式实施:对扩展后的目标器官掩膜进行检测;基于检测结果,确定经过连通域处理的快速分割结果掩膜与目标器官掩膜的相邻边界;基于相邻边界对经过连通域处理的快速分割结果掩膜进行调整,获得处理后的快速分割结果掩膜。例如,对经过连通域处理的快速分割结果掩膜进行扩展(也就是还原),使得经过连通域处理的快速分割结果掩膜能够恢复到原有形态。
可以理解的是,目标器官掩膜与快速分割结果掩膜的掩膜融合方法(即精准分割结果与快速分割结果的掩膜融合方法),与图16中靶器官掩膜与去脂掩膜之间的掩膜融合方法大致相同。区别之处在于,目标器官掩膜与快速分割结果掩膜进行融合时,快速分割结果掩膜只进行了一次收缩,省去了“开操作”过程。这是由于,目标器官掩膜与快速分割结果掩膜进行融合时,不允许快速分割结果掩膜发生变动,而“开操作”可能会造成快速分割结果掩膜发生变动。在一些实施例中,为了保证快速分割结果掩膜不发生变动,确定第三交集的过程中,第一预设参数可以取值为4像素点。关于目标器官掩膜与快速分割结果掩膜之间的掩膜融合的更多内容可以参考图16及其相关描述,在此不再赘述。
图22是根据本说明书一些实施例所示的对精准分割结果掩膜与快速分割结果掩膜进行融合的示例性效果对比图。如图22所示,左边上下分别为未采用掩膜融合算法的融合效果的横断面医学影像和立体医学影像,右边上下分别为采用掩膜融合算法的融合效果的横断面医学影像和立体医学影像。参见左上图,两个目标器官(方框框出的两个器官)掩膜的周侧与快速分割结果掩膜之间存在连接;对比左上图和右上图可知,基于掩膜融合算法对快速分割结果掩膜与精准分割结果掩膜(如目标器官掩膜)进行融合的结果显示,可以断开目标器官掩膜与快速分割结果掩膜之间的连接。参见左下图,两个目标器官(方框框出的两个器官)掩膜被快速分割结果掩膜覆盖。对比左下图和右下图可知,基于掩膜融合算法对快速分割结果掩膜与精准分割结果掩膜(如目标器官掩膜)进行融合的结果显示,能够避免目标器官掩膜被快速分割结果掩膜覆盖。
应当注意的是,上述有关流程2100的描述仅仅是为了示例和说明,而不限定本说明书的适用范围。对于本领域技术人员来说,在本说明书的指导下可以对流程2100进行各种修正和改变,然而,这些修正和改变仍在本说明书的保护范围之内。
图23是根据本说明书一些实施例所示的介入手术影像辅助系统的示例性框架图。介入手术影像辅助系统2300可以包括获取模块2310、分割模块2320和结果确定模块2330。在一些实施例中,获取模块2310、分割模块2320和结果确定模块2330可以在图1所示的介入手术影像辅助系统100中实现,如在医学扫描设备110中实现。
获取模块2310,用于获取医学影像。例如,获取模块2310可以用于获取手术前增强影像、手术中扫描影像。在一些实施例中,获取模块2310还可以用于获取操作指令。分割模块2320,用于对医学影像中的目标结构集进行分割。例如,分割模块2320可以用于分割手术前增强影像的第一目标结构集、手术中扫描影像的第二目标结构集。结果确定模块2330,用于基于目标结构集的分割结果确定用于反映不可介入区域的结果结构集。例如,结果确定模块2330可以用于基于分割影像(第一分割影像、第二分割影像)确定手术中第三目标结构集的空间位置。又例如,结果确定模块2330可以用于基于元素掩膜确定去脂掩膜。
需要说明的是,有关获取模块2310、分割模块2320和结果确定模块2330执行相应流程或功能实现介入手术影像辅助的更多技术细节,具体参见图1至图22所示的任一实施例描述的介入手术影像辅助方法相关内容,在此不再赘述。
关于介入手术影像辅助系统2300的以上描述仅用于说明目的,而无意限制本申请的范围。对于本领域普通技术人员来说,在不背离本申请原则的前提下,可以对上述方法及系统的应用进行各种形式和细节的改进和改变。然而,这些变化和修改不会背离本申请的范围。在一些实施例中,介入手术影像辅助系统2300可以包括一个或多个其他模块。例如,介入手术影像辅助系统2300可以包括存储模块,以存储由介入手术影像辅助系统2300的模块所生成的数据。
本说明书一些实施例还提供了一种介入手术影像辅助装置,包括处理器,处理器用于执行上述任一实施例描述的介入手术影像辅助方法,具体参见图1至图22相关描述,在此不再赘述。在一些实施例中,介入手术影像辅助装置还包括显示装置,显示装置显示基于处理器执行的介入手术影像辅助方法的分割结果。显示装置还显示触发模式选项,触发模式选项包括快速分割模式和精准分割模式。操作者可以通过显示装置的触发模式选项选取合适的规划模式。
本说明书一些实施例还提供了一种计算机可读存储介质,该存储介质存储计算机指令,当计算机读取计算机指令时,计算机执行如上述任一实施例的介入手术影像辅助方法,具体参见图1至图22相关描述,在此不再赘述。
上文已对基本概念做了描述,显然,对于本领域技术人员来说,上述详细披露仅仅作为示例,而并不构成对本说明书的限定。虽然此处并没有明确说明,本领域技术人员可能会对本说明书进行各种修改、改进和修正。该类修改、改进和修正在本说明书中被建议,所以该类修改、改进、修正仍属于本说明书示范实施例的精神和范围。
同时,本说明书使用了特定词语来描述本说明书的实施例。如“一个实施例”、“一实施例”、和/或“一些实施例”意指与本说明书至少一个实施例相关的某一特征、结构或特点。因此,应强调并注意的是,本说明书中在不同位置两次或多次提及的“一实施例”或“一个实施例”或“一个替代性实施例”并不一定是指同一实施例。此外,本说明书的一个或多个实施例中的某些特征、结构或特点可以进行适当的组合。
此外,除非权利要求中明确说明,本说明书所述处理元素和序列的顺序、数字字母的使用、或其他名称的使用,并非用于限定本说明书流程和方法的顺序。尽管上述披露中通过各种示例讨论了一些目前认为有用的发明实施例,但应当理解的是,该类细节仅起到说明的目的,附加的权利要求并不仅限于披露的实施例,相反,权利要求旨在覆盖所有符合本说明书实施例实质和范围的修正和等价组合。例如,虽然以上所描述的系统组件可以通过硬件设备实现,但是也可以只通过软件的解决方案得以实现,如在现有的服务器或移动设备上安装所描述的系统。
同理,应当注意的是,为了简化本说明书披露的表述,从而帮助对一个或多个发明实施例的理解,前文对本说明书实施例的描述中,有时会将多种特征归并至一个实施例、附图或对其的描述中。但是,这种披露方法并不意味着本说明书对象所需要的特征比权利要求中提及的特征多。实际上,实施例的特征要少于上述披露的单个实施例的全部特征。
一些实施例中使用了描述成分、属性数量的数字,应当理解的是,此类用于实施例描述的数字,在一些示例中使用了修饰词“大约”、“近似”或“大体上”来修饰。除非另外说明,“大约”、“近似”或“大体上”表明所述数字允许有±20%的变化。相应地,在一些实施例中,说明书和权利要求中使用的数值参数均为近似值,该近似值根据个别实施例所需特点可以发生改变。在一些实施例中,数值参数应考虑规定的有效数位并采用一般位数保留的方法。尽管本说明书一些实施例中用于确认其范围广度的数值域和参数为近似值,在具体实施例中,此类数值的设定在可行范围内尽可能精确。
最后,应当理解的是,本说明书中所述实施例仅用以说明本说明书实施例的原则。其他的变形也可能属于本说明书的范围。因此,作为示例而非限制,本说明书实施例的替代配置可视为与本说明书的教导一致。相应地,本说明书的实施例不仅限于本说明书明确介绍和描述的实施例。

Claims (24)

  1. 一种介入手术影像辅助方法,包括:
    获取医学影像;
    对所述医学影像中的目标结构集进行分割;
    基于所述目标结构集的分割结果确定用于反映不可介入区域的结果结构集。
  2. 根据权利要求1所述的介入手术影像辅助方法,其中,所述医学影像包括手术前增强影像和手术中扫描影像;所述目标结构集包括手术前增强影像的第一目标结构集和所述手术中扫描影像的第二目标结构集;所述结果结构集包括手术中第三目标结构集;
    所述对所述医学影像中的目标结构集进行分割,包括:
    对所述手术前增强影像的第一目标结构集进行分割,获得所述第一目标结构集的第一分割影像;
    对所述手术中扫描影像的第二目标结构集进行分割,获得所述第二目标结构集的第二分割影像;所述第一目标结构集与所述第二目标结构集有交集;
    所述基于所述目标结构集的分割结果确定用于反映不可介入区域的结果结构集,包括:
    对所述第一分割影像与所述第二分割影像进行配准,确定手术中所述第三目标结构集的空间位置;所述第三目标结构集中至少有一个元素包括在所述第一目标结构集中,所述第三目标结构集中至少有一个元素不包括在所述第二目标结构集中。
  3. 根据权利要求2所述的介入手术影像辅助方法,其中,还包括:
    获取介入手术的规划模式,所述规划模式至少包括快速分割模式和精准分割模式;
    根据所述规划模式对所述手术中扫描影像的第四目标结构集进行分割。
  4. 根据权利要求3所述的介入手术影像辅助方法,其中,所述根据所述规划模式对所述手术中扫描影像的第四目标结构集进行分割,还包括:
    在所述快速分割模式下,所述第四目标结构集包括所述不可介入区域。
  5. 根据权利要求4所述的介入手术影像辅助方法,其中,所述根据所述规划模式对所述手术中扫描影像的第四目标结构集进行分割,还包括:
    在所述精准分割模式下,所述第四目标结构集包括预设的重要器官。
  6. 根据权利要求5所述的介入手术影像辅助方法,其中,所述第四目标结构集中所述预设的重要器官总体积与所述不可介入区域总体积的比值小于预设效率因子m。
  7. 根据权利要求6所述的介入手术影像辅助方法,其中,所述效率因子m的设定与所述介入手术类型相关。
  8. 根据权利要求1~7任一项所述的介入手术影像辅助方法,其中,所述分割包括:
    对所述医学影像中的所述目标结构集中的至少一个元素进行粗分割;
    得到至少一个所述元素的掩膜;
    确定所述掩膜的定位信息;
    基于所述掩膜的定位信息,对至少一个所述元素进行精准分割。
  9. 根据权利要求2所述的介入手术影像辅助方法,其中,所述对所述第一分割影像与所述第二分割影像进行配准,包括:
    对所述第一分割影像与所述第二分割影像进行配准,确定配准形变场;
    基于所述配准形变场和所述手术前增强影像中的所述第一目标结构集中的至少部分元素的空间位置,确定手术中相应元素的空间位置。
  10. 根据权利要求9所述的介入手术影像辅助方法,其中,所述确定配准形变场,包括:
    基于所述元素之间的配准,确定第一初步形变场;
    基于所述元素之间的所述第一初步形变场,确定全图的第二初步形变场;
    基于全图的所述第二初步形变场,对浮动影像进行形变,确定所述浮动影像的配准图;
    对所述浮动影像的所述配准图与参考图像中第一灰度差异范围的区域进行配准,得到第三初步形变场;
    基于所述第三初步形变场,确定全图的第四初步形变场;
    基于所述第四初步形变场,对第二灰度差异范围的区域进行配准,获得最终配准的配准图。
  11. 根据权利要求1所述的介入手术影像辅助方法,其中,所述结果结构集包括去脂掩膜;
    所述对所述医学影像中的目标结构集进行分割,包括:
    对所述医学影像中的所述目标结构集进行分割,获得所述去脂掩膜;
    所述基于所述目标结构集的分割结果确定用于反映不可介入区域的结果结构集,包括:
    确定靶器官掩膜在预设范围内与所述去脂掩膜的第一交集,基于所述第一交集对所述靶器官掩膜和所述去脂掩膜的区域进行调整,获得调整后的所述去脂掩膜。
  12. 根据权利要求11所述的介入手术影像辅助方法,其中,还包括:
    对调整后的所述去脂掩膜进行连通域处理;
    基于经过连通域处理的所述去脂掩膜,获得处理后的所述去脂掩膜。
  13. 根据权利要求12所述的介入手术影像辅助方法,其中,所述确定靶器官掩膜在预设范围内与所述去脂掩膜的第一交集,基于所述第一交集对所述靶器官掩膜和所述去脂掩膜的区域进行调整,包括:
    对所述靶器官掩膜进行检测;
    基于检测结果,确定所述靶器官掩膜在第一预设范围内与所述去脂掩膜的第一交集,其中,所述第一预设范围根据第一预设参数确定;
    基于所述第一交集,对所述靶器官掩膜和所述去脂掩膜的区域进行第一次调整。
  14. 根据权利要求13所述的介入手术影像辅助方法,其中,还包括:
    确定第一次调整后的所述靶器官掩膜在第二预设范围内与第一次调整后的所述去脂掩膜的第二交集,其中,所述第二预设范围根据第二预设参数确定;
    基于所述第二交集,对第一次调整后的所述靶器官掩膜和第一次调整后的所述去脂掩膜的区域进行第二次调整。
  15. 根据权利要求14所述的介入手术影像辅助方法,其中,所述第二预设参数小于等于所述第一预设参数,所述第一预设参数和/或所述第二预设参数基于大数据或人工智能方式获得。
  16. 根据权利要求12所述的介入手术影像辅助方法,其中,所述对调整后的所述去脂掩膜进行连通域处理,包括:
    判断所述靶器官掩膜的定位信息与伪去脂连通域的定位信息是否重叠;
    当判断结果为不重叠时,所述伪去脂连通域标识为属于所述去脂掩膜;
    当判断结果为重叠时,根据所述伪去脂连通域的面积与预设面积阈值的大小关系判定所述伪去脂连通域是否应属于所述去脂掩膜。
  17. 根据权利要求16所述的介入手术影像辅助方法,其中,所述根据所述伪去脂连通域的面积与预设面积阈值的大小关系判定所述伪去脂连通域是否应属于去脂掩膜,包括:
    当所述伪去脂连通域的面积大于所述预设面积阈值时,所述伪去脂连通域标识为属于所述去脂掩膜;
    当所述伪去脂连通域的面积小于等于所述预设面积阈值时,所述伪去脂连通域标识为不属于所述去脂掩膜。
  18. 根据权利要求16所述的介入手术影像辅助方法,其中,还包括:保留标识和/或舍弃标识,保留标识表示属于所述去脂掩膜的伪去脂连通域,舍弃标识表示不属于所述去脂掩膜的伪去脂连通域。
  19. 根据权利要求12所述的介入手术影像辅助方法,其中,所述基于经过连通域处理的所述去脂掩膜,获得处理后的所述去脂掩膜,包括:
    对调整后的所述靶器官掩膜进行检测;
    基于检测结果,确定经过连通域处理的所述去脂掩膜与所述靶器官掩膜的相邻边界;
    基于所述相邻边界对经过连通域处理的所述去脂掩膜进行第三次调整,获得处理后的所述去脂掩膜。
  20. 根据权利要求12所述的介入手术影像辅助方法,其中,还包括:
    获取操作指令;
    根据所述操作指令对所述医学影像中的至少一个目标器官进行分割,得到至少一个目标器官掩膜;
    确定至少一个所述目标器官掩膜在第一预设范围内与快速分割结果掩膜的第三交集,基于所述第三交集对至少一个所述目标器官掩膜和所述快速分割结果掩膜的区域进行调整,其中,所述快速分割结果掩膜至少包括处理后的所述去脂掩膜;
    对调整后的所述快速分割结果掩膜进行连通域处理;
    基于经过连通域处理的所述快速分割结果掩膜,获得处理后的所述快速分割结果掩膜。
  21. 一种介入手术影像辅助系统,包括:
    获取模块,用于获取医学影像;
    分割模块,用于对所述医学影像中的目标结构集进行分割;
    结果确定模块,用于基于所述目标结构集的分割结果确定用于反映不可介入区域的结果结构集。
  22. 一种计算机可读存储介质,所述存储介质存储计算机指令,当计算机读取存储介质中的计算机指令后,计算机执行如权利要求1~20中任一项所述的介入手术影像辅助方法。
  23. 一种介入手术影像辅助装置,包括处理器,所述处理器用于执行权利要求1~20中任一项所述的介入手术影像辅助方法。
  24. 根据权利要求23所述的介入手术影像辅助装置,还包括显示装置,所述显示装置显示基于所述处理器执行的介入手术影像辅助方法的分割结果,所述显示装置还显示触发模式选项,所述触发模式选项包括快速分割模式和精准分割模式。
PCT/CN2023/103728 2022-06-30 2023-06-29 介入手术影像辅助方法、系统、装置及存储介质 WO2024002221A1 (zh)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202210761324.1 2022-06-30
CN202210761324.1A CN117392142A (zh) 2022-06-30 2022-06-30 一种介入手术影像辅助方法、系统、装置及存储介质
CN202210907258.4A CN117522886A (zh) 2022-07-29 2022-07-29 一种医学影像处理方法、系统、装置及存储介质
CN202210907258.4 2022-07-29

Publications (1)

Publication Number Publication Date
WO2024002221A1 (zh) 2024-01-04


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112435263A (zh) * 2020-10-30 2021-03-02 苏州瑞派宁科技有限公司 医学图像分割方法、装置、设备、系统及计算机存储介质
CN113506331A (zh) * 2021-06-29 2021-10-15 武汉联影智融医疗科技有限公司 组织器官的配准方法、装置、计算机设备和存储介质
CN113516624A (zh) * 2021-04-28 2021-10-19 武汉联影智融医疗科技有限公司 穿刺禁区的确定、路径规划方法、手术系统和计算机设备
US20210401392A1 (en) * 2019-03-15 2021-12-30 Genentech, Inc. Deep convolutional neural networks for tumor segmentation with positron emission tomography