WO2018119766A1 - Multimodal Image Processing System and Method

Multimodal Image Processing System and Method

Info

Publication number
WO2018119766A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
information
lesion
multimodal
removal
Prior art date
Application number
PCT/CN2016/112689
Other languages
English (en)
French (fr)
Inventor
王睿
聂卫文
程兆宁
Original Assignee
Shanghai United Imaging Healthcare Co., Ltd. (上海联影医疗科技有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai United Imaging Healthcare Co., Ltd.
Priority to PCT/CN2016/112689 priority Critical patent/WO2018119766A1/zh
Priority to EP16925011.5A priority patent/EP3547252A4/en
Publication of WO2018119766A1 publication Critical patent/WO2018119766A1/zh
Priority to US16/236,596 priority patent/US11037309B2/en
Priority to US17/347,531 priority patent/US11869202B2/en
Priority to US18/407,390 priority patent/US20240144495A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/003Reconstruction from projections, e.g. tomography
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/174Segmentation; Edge detection involving the use of two or more images
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/40ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mechanical, radiation or invasive therapies, e.g. surgery, laser therapy, dialysis or acupuncture
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00ICT specially adapted for the handling or processing of medical images
    • G16H30/40ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • G06N20/10Machine learning using kernel methods, e.g. support vector machines [SVM]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • G06N20/20Ensemble learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/01Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/02Knowledge representation; Symbolic representation
    • G06N5/022Knowledge engineering; Knowledge acquisition
    • G06N5/025Extracting rules from data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/04Inference or reasoning models
    • G06N5/046Forward inferencing; Production systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/04Inference or reasoning models
    • G06N5/048Fuzzy inferencing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N7/00Computing arrangements based on specific mathematical models
    • G06N7/01Probabilistic graphical models, e.g. probabilistic networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10088Magnetic resonance imaging [MRI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10088Magnetic resonance imaging [MRI]
    • G06T2207/10092Diffusion tensor magnetic resonance imaging [DTI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10104Positron emission tomography [PET]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30016Brain
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30081Prostate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30101Blood vessel; Artery; Vein; Vascular
    • G06T2207/30104Vascular flow; Blood flow; Perfusion

Definitions

  • the present application relates to multimodal image processing systems and methods, and more particularly to systems and methods for the visualization and analysis of multimodal images of a patient's brain tissue.
  • DSA Digital Subtraction Angiography
  • MRI Magnetic Resonance Imaging
  • A wide variety of brain-related scans are available, including, for example, digital subtraction angiography (DSA), magnetic resonance imaging (MRI), and blood oxygen level-dependent (BOLD) imaging.
  • fMRI-BOLD Blood Oxygenation Level Dependent Functional Magnetic Resonance Imaging
  • DTI Diffusion Tensor Imaging
  • DTT Diffusion Tensor Tractography
  • MRA Magnetic Resonance Angiography
  • CT Computed Tomography
  • PET Positron Emission Tomography
  • SPECT Single-Photon Emission Computed Tomography
  • TOF-MRI Time-of-Flight Magnetic Resonance Imaging
  • TOF-MRA Time-of-Flight Magnetic Resonance Angiography
  • MEG Magnetoencephalography
  • TMS-MRI Transcranial Magnetic Stimulation-Magnetic Resonance Imaging
  • the nerve-fiber multimodal analysis function and the multimodal image fusion function provided by current medical post-processing workstations mainly process and analyze image data of only two modalities.
  • the multimodal analysis of nerve fibers mainly combines MRI-T1, fMRI-BOLD, and fMRI-DTI/DTT information, so that the structure of brain nerves and their association with functional areas can be analyzed.
  • the fusion of multimodal images mainly combines CT and PET-CT images to analyze the metabolic intensity and spread of a patient's tumor. In order to provide doctors with comprehensive lesion information, assist doctors in diagnosing diseases, and guide surgery, it is necessary to perform comprehensive data analysis on multimodal (e.g., more than three modalities) image data.
  • the multimodal image processing method can be performed on at least one machine, each of the at least one machine can have at least one processor and one memory.
  • the multimodal image processing method may include one or more of the following: acquiring a multimodal image, where the multimodal image may include images of at least three modalities and may contain a lesion; registering the multimodal image; fusing the multimodal image; generating a reconstructed image based on the fusion result of the multimodal image; and determining the lesion removal range based on the reconstructed image (see the sketch after this item).
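As a loose illustration of this chain of operations (not the patent's own implementation), the following Python sketch wires together placeholder register/fuse/determine steps; every function body here is a stand-in, and numpy/scipy are assumed:

```python
import numpy as np
from scipy.ndimage import zoom

def register(moving: np.ndarray, fixed: np.ndarray) -> np.ndarray:
    # Stand-in "registration": resample to the reference grid only.
    # A real system would estimate and apply a spatial transform.
    factors = [f / m for f, m in zip(fixed.shape, moving.shape)]
    return zoom(moving, factors, order=1)

def fuse(volumes: list) -> np.ndarray:
    # Stand-in fusion: equal-weight average of the aligned volumes.
    return np.mean(np.stack(volumes), axis=0)

def determine_removal_range(recon: np.ndarray, thresh: float = 0.8) -> np.ndarray:
    # Stand-in lesion removal range: voxels above an intensity threshold.
    return recon > thresh

# At least three modalities, as the claim requires (random stand-in data).
images = {m: np.random.rand(64, 64, 64) for m in ("T1", "BOLD", "DTI")}
reference = images["T1"]
registered = [register(v, reference) for v in images.values()]
reconstructed = fuse(registered)          # fusion result used for reconstruction
removal_range = determine_removal_range(reconstructed)
```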
  • the non-transitory computer readable medium can include executable instructions.
  • the instructions when executed by at least one processor, may cause the at least one processor to implement the multimodal image processing method.
  • the multimodal image processing system can include at least one processor and a set of instructions.
  • the instructions, when executed by the at least one processor, may cause the at least one processor to implement the multimodal image processing method.
  • the multimodal image processing system can further include the non-transitory computer readable medium.
  • the multimodal image processing method may further include displaying image information according to the multimodal image or the reconstructed image.
  • the multimodal image processing method may further comprise: acquiring a standard map (i.e., a standard atlas); and/or registering the multimodal image according to the standard map.
  • the standard map may include standard image data associated with a portion of the target object.
  • the multimodal image can include a multimodal image of the brain.
  • the standard map can include a standard brain map.
  • displaying the image information can include displaying information of cerebral blood vessels, nerve fibers, brain functional regions, and/or brain tissue metabolic rates.
  • the multimodal image may further include an MRI T1 image, a BOLD image, and a first image.
  • the first image may include one of a DTI/DTT image, a CT/PET image, and an MRI TOF image.
  • the registering the multimodal image may include: registering the BOLD image against a standard map to obtain a second image; registering the first image against the MRI T1 image to obtain a third image; and registering the second image and the third image according to the MRI T1 image, as sketched below.
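This registration chain might be sketched with the open-source SimpleITK library as follows; the file names and parameter values are hypothetical, and the patent does not prescribe any particular toolkit or metric:

```python
import SimpleITK as sitk

def rigid_register(fixed, moving):
    """Mutual-information rigid registration; returns the moving image
    resampled into the fixed image's space."""
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=200)
    reg.SetInitialTransform(sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform()))
    reg.SetInterpolator(sitk.sitkLinear)
    transform = reg.Execute(fixed, moving)
    return sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)

# Hypothetical file names for the four inputs named in the claim.
atlas = sitk.ReadImage("standard_map.nii.gz", sitk.sitkFloat32)
t1    = sitk.ReadImage("mri_t1.nii.gz", sitk.sitkFloat32)
bold  = sitk.ReadImage("fmri_bold.nii.gz", sitk.sitkFloat32)
first = sitk.ReadImage("dti_b0.nii.gz", sitk.sitkFloat32)

second = rigid_register(atlas, bold)   # BOLD registered to the standard map
third  = rigid_register(t1, first)     # first image registered to MRI T1
aligned_second = rigid_register(t1, second)  # bring both into the T1 space
```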
  • the generating the reconstructed image may include: segmenting the fusion result of the multimodal image; and/or generating the reconstructed image from the segmented multimodal image using a reconstruction method.
  • the reconstruction method may include multi-planar reconstruction or volume rendering techniques.
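As a minimal illustration of multi-planar reconstruction, assuming the fused result is available as a 3-D numpy array (the array name, shape, and slice indices are hypothetical):

```python
import numpy as np

fused = np.random.rand(128, 128, 96)    # hypothetical fused volume, (z, y, x)

# Multi-planar reconstruction: three orthogonal planes through one voxel.
z, y, x = 64, 64, 48
axial    = fused[z, :, :]
coronal  = fused[:, y, :]
sagittal = fused[:, :, x]
```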
  • the determining the lesion removal range may include: determining the range of the lesion according to the reconstructed image; determining first peripheral information of the lesion according to the range of the lesion; and/or determining the lesion removal range according to the first peripheral information.
  • the first peripheral information may include blood vessel information, neural information, or other tissue and organ information around the lesion.
  • the multimodal image processing method can further include simulating removal of the lesion based on the lesion removal range.
  • the determining the lesion removal range may further include: determining second peripheral information after removing the lesion; determining, according to the first peripheral information and the second peripheral information, damage information of tissues and organs after removal of the lesion; and/or optimizing the lesion removal range based on the damage information (one possible loop is sketched below).
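The patent does not give a concrete optimization algorithm; one way to picture the claimed loop (simulate removal, score the damage to surrounding structures, adjust the range) is the following hypothetical sketch, where the damage score and acceptance threshold are invented for illustration:

```python
import numpy as np
from scipy.ndimage import binary_dilation

def optimize_removal_range(lesion: np.ndarray, critical: np.ndarray,
                           max_margin: int = 5) -> np.ndarray:
    """Start from the lesion plus a generous safety margin, then shrink
    the margin while the predicted damage (overlap with critical
    structures such as vessels, fibers, or functional areas) is too high."""
    for margin in range(max_margin, -1, -1):
        removal = binary_dilation(lesion, iterations=margin) if margin else lesion
        damage = np.logical_and(removal, critical).sum()  # crude damage score
        if damage <= 10:          # hypothetical acceptable-damage threshold
            return removal
    return lesion                 # fall back to the bare lesion range

# Tiny demo with made-up masks: a cubic lesion and a vessel-like sheet.
lesion = np.zeros((32, 32, 32), bool); lesion[14:18, 14:18, 14:18] = True
critical = np.zeros_like(lesion); critical[10:12, :, :] = True
print(optimize_removal_range(lesion, critical).sum())
```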
  • the multimodal image processing method can further include determining a surgical plan based on the lesion removal range.
  • the lesion can include a tumor of the brain.
  • the lesion peripheral information may further include the name of a blood vessel through which the lesion passes, the blood flow condition of that blood vessel, the number of brain fibers eroded by the lesion, the connectivity of those brain fibers, and/or the names of brain functional areas covered by the lesion.
  • the damage information may include damage information of the blood vessel after removal of the lesion, damage information of the brain fiber, and/or damage information of the brain functional area.
  • the multimodal image processing method may further comprise: saving case information related to the lesion.
  • the case information may include the multimodal image, the reconstructed image, the range of the lesion, the optimized lesion removal range, the first peripheral information, the second peripheral information, the damage information, information related to the lesion, information related to the surgical plan, and/or information related to post-operative recovery.
  • the multimodal image processing method may further include retrieving a similar case based on the case information.
  • saving the case information related to the lesion can include saving case information related to the lesion in a database.
  • the retrieval of similar cases may include retrieving similar cases in a database.
  • the multimodal image processing method may further include performing machine learning based on information in the database to optimize the lesion removal range.
  • FIG. 1-A is a schematic diagram of an image analysis system shown in accordance with some embodiments of the present application.
  • FIG. 1-B is a schematic diagram of a hardware structure of a computing device shown in accordance with some embodiments of the present application.
  • FIG. 1-C is a schematic diagram of a hardware structure of a mobile device according to some embodiments of the present application.
  • FIG. 2 is a schematic diagram of a multimodal image processing system shown in accordance with some embodiments of the present application.
  • FIG. 3 is an exemplary flow diagram of a multimodal image processing process, in accordance with some embodiments of the present application.
  • FIG. 4 is a schematic diagram of a visualization module shown in accordance with some embodiments of the present application.
  • FIG. 5 is an exemplary flow diagram of a visualization process shown in accordance with some embodiments of the present application.
  • FIG. 6 is a schematic diagram of an analysis module shown in accordance with some embodiments of the present application.
  • FIG. 7 is an exemplary flow diagram of processing a multimodal image, shown in accordance with some embodiments of the present application.
  • FIG. 8 is a schematic diagram of a database module, shown in accordance with some embodiments of the present application.
  • FIG. 9 is a schematic diagram of one embodiment of a multimodal image processing system, in accordance with some embodiments of the present application.
  • for the modules in a multimodal image processing system in accordance with embodiments of the present application, any number of different modules may be used, running on a remote terminal and/or server connected to the system through a network.
  • the modules are merely illustrative, and different aspects of the systems and methods may use different modules.
  • a "multimodal image” may include images of two or more modalities.
  • the modalities may include digital subtraction angiography (DSA), magnetic resonance imaging (MRI), blood oxygen level dependent functional magnetic resonance imaging (fMRI-BOLD), diffusion tensor imaging (DTI), diffusion tensor tractography (DTT), magnetic resonance angiography (MRA), computed tomography (CT), positron emission tomography (PET), single-photon emission computed tomography (SPECT), time-of-flight magnetic resonance imaging (TOF-MRI), time-of-flight magnetic resonance angiography (TOF-MRA), magnetoencephalography (MEG), ultrasound scanning (US), transcranial magnetic stimulation magnetic resonance imaging (TMS-MRI), MRI-T1, MRI-T2, fMRI-DTI, fMRI-DTT, CT-PET, CT-SPECT, DSA-MR, PET-MR, PET-US, SPECT-US, US-CT, US-MR, X-ray-CT, etc.
  • the target object displayed on the multimodal image may be an organ, a body, an object, a lesion, a tumor, or the like, or a combination of multiples.
  • the target object displayed on the multimodal image can be one or more diseased tissue of the brain.
  • the multimodal image can be a two dimensional image and/or a three dimensional image. In a two-dimensional image, the finest resolvable elements can be pixels. In a three-dimensional image, the finest resolvable elements can be voxels. In a three-dimensional image, the image can be composed of a series of two-dimensional slices or two-dimensional layers.
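For instance, stacking a series of 2-D slices into a 3-D array turns pixels into voxels; a small numpy illustration (the slice count and size are arbitrary):

```python
import numpy as np

# A 3-D image can be built by stacking a series of 2-D slices; each element
# of the stacked array is a voxel, just as each element of one slice is a pixel.
slices = [np.zeros((512, 512), dtype=np.int16) for _ in range(120)]  # stand-in CT slices
volume = np.stack(slices, axis=0)   # shape (120, 512, 512): slice, row, column
print(volume.shape, volume.size)    # number of voxels = 120 * 512 * 512
```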
  • An image analysis system 100 can include an imaging device 110, a multimodal image processing system 130, a network 160, and a remote terminal 170.
  • imaging device 110, multimodal image processing system 130, and remote terminal 170 can be connected to each other directly and/or indirectly.
  • imaging device 110, multimodal image processing system 130, and remote terminal 170 can be directly and/or indirectly connected to one another via network 160.
  • imaging device 110, multimodal image processing system 130, and remote terminal 170 may be indirectly connected by one or more intermediate units (not shown).
  • the intermediate unit may be physical (e.g., a device, module, interface, etc., or a combination thereof), or non-physical (e.g., radio waves, optical signals, sonic signals, electromagnetic signals, etc., or a combination thereof).
  • Different modules and units can be connected by wireless and/or wired means.
  • the imaging device 110 can scan the target object and generate data and images associated therewith.
  • the imaging device 110 can further process the image using the generated data.
  • the target subject can include a human body, an animal, or a portion thereof, such as an organ, tissue, lesion (eg, a tumor site), or any combination of the above.
  • the target object can be the head, chest, abdomen, heart, liver, upper limbs, lower limbs, spine, bones, blood vessels, etc., or any combination of the above.
  • imaging device 110 can be a device, or a group of devices.
  • imaging device 110 can be a medical imaging device, such as an MRI device, a SPECT device, a CT device, a PET device, and the like.
  • the medical imaging devices can be used alone and/or in combination, for example, as a SPECT-MRI device, a CT-PET device, a SPECT-CT device, etc.
  • the imaging device 110 may include a scanner to scan a target object and obtain information related thereto (eg, images, data, etc.).
  • imaging device 110 can be a radioactive scanning device.
  • the device may include a source of radioactive scanning that can emit radioactive rays to the target object.
  • the radioactive rays may include one or a combination of particulate rays, photons, and the like.
  • the particulate radiation may include one or a combination of neutrons, protons, α-rays, electrons, muons (μ mesons), heavy ions, and the like.
  • the photon ray may include one or a combination of X-rays, gamma rays, ultraviolet rays, lasers, and the like.
  • the photon ray may be an X-ray; its corresponding imaging device 110 may be a CT system, a digital radiography system (DR), a multimodal medical imaging system, etc., or a combination thereof.
  • the multimodal medical imaging system can include a CT-PET system, a SPECT-MRI system, a SPECT-CT system, etc., or a combination thereof.
  • imaging device 110 can include a ray generating unit and a ray detecting unit (not shown).
  • imaging device 110 may include a photon detector to perform generation and/or detection of radiation, and the like.
  • a photon detector can generate photons for scanning of a target object or to capture photons after scanning of a target object.
  • imaging device 110 can be a PET system or a multimodal medical imaging system, the photon detectors of which can include a scintillator and/or a photodetector.
  • imaging device 110 can include a radio frequency transmit coil and/or a radio frequency receive coil (not shown).
  • imaging device 110 can be an MRI imaging device.
  • Multimodal image processing system 130 can process information from imaging device 110, network 160, and/or remote terminal 170.
  • the information may include image information generated by the imaging device 110 or information related to the patient, information transmitted by the cloud device (not shown) over the network 160, commands and information issued by the remote terminal 170, etc., or a combination thereof.
  • multimodal image processing system 130 can perform various types of operations related to multimodal image data processing, such as registration and fusion of multimodal image data, segmentation of multimodal image data, reconstruction of images, analysis based on reconstructed image data, storage of multimodal image data, retrieval of multimodal image data, or the like, or a combination thereof.
  • multimodal image processing system 130 can reconstruct one or more two-dimensional and/or three-dimensional images based on the information.
  • the reconstructed image may include lesion information
  • the multimodal image processing system 130 may analyze the reconstructed image based on the lesion information to simulate the course of the procedure. For example, multimodal image processing system 130 can select an extent of lesion removal by analyzing the image.
  • the multimodal image processing system 130 can analyze damage to surrounding tissue after removal of the lesion in the image, thereby further optimizing the selected extent of lesion removal and avoiding or reducing damage to the surrounding tissue after removal of the lesion.
  • multimodal image processing system 130 can store or query multimodal images.
  • multimodal image processing system 130 may be implemented by one or more computing devices 180 having a hardware architecture.
  • FIG. 1-B shows a schematic diagram of the hardware architecture of a computing device shown in accordance with some embodiments of the present application.
  • Network 160 can be a single network, or a combination of multiple different networks.
  • the network 160 may be a local area network (LAN), a wide area network (WAN), a public switched telephone network (PSTN), a virtual network (VN), a private network (PN), a metropolitan area network (MAN), or any combination of the above.
  • Network 160 may include multiple network access points and may use a wired network architecture, a wireless network architecture, and a wired/wireless network hybrid architecture.
  • Wired networks may include the use of metal cables, hybrid cables, fiber optic cables, etc., or a combination of multiple cables.
  • the wireless network may include Bluetooth, Wi-Fi, ZigBee, near field communication (NFC), cellular networks (e.g., GSM, CDMA, 3G, or 4G), etc., or a combination of multiple network modes.
  • Network 160 may be any network suitable for the scope of the present application and is not limited to the above description.
  • Remote terminal 170 can receive, manipulate, process, store, or display multimodal image data.
  • Remote terminal 170 can exchange information with imaging device 110 and multimodal image processing system 130 over network 160.
  • the remote terminal 170 can be used by one or more users, for example, hospital care workers, medical school staff and students, or other trained non-medical workers, or the like.
  • remote terminal 170 can be a device terminal coupled to imaging device 110, multimodal image processing system 130, network 160, such as a display screen, printer, computing device, etc., or a combination of multiples.
  • remote terminal 170 can be a computing device 180 or mobile device 190 having a hardware structure.
  • FIG. 1-C shows a schematic diagram of the hardware structure of a mobile device shown in accordance with some embodiments of the present application.
  • multimodal image processing system 130 and remote terminal 170 can be integrated on one computing device and/or on a mobile device.
  • the system can include two or more imaging devices 110.
  • the system can have two or more remote terminals 170.
  • Computing device 180 can implement and/or run a particular system disclosed in this application (e.g., multimodal image processing system 130).
  • the particular system in this embodiment is explained by a functional block diagram of a hardware platform that includes a user interface.
  • Computing device 180 can implement one or more components, modules, units, sub-units of image analysis system 100 (e.g., remote terminal 170, multimodal image processing system 130, etc.).
  • Computing device 180 can be a general-purpose computer or a special-purpose computer; both can be used to implement the particular system in this embodiment.
  • Only one computing device is shown in FIG. 1-B, but the related computing functions described in this embodiment for providing the information required for multimodal image processing can be implemented in a distributed manner by a group of similar platforms, dispersing the processing load of the system.
  • computing device 180 can include internal communication bus 188, processor 181, hard disk 182, read only memory (ROM) 183, input/output component 184, random access memory (RAM) 185, communication port 186, and user interface 187.
  • the internal communication bus 188 can enable data communication between components of the computing device 180.
  • Processor 181 can execute program instructions and/or perform any of the functions, components, modules, units, sub-units of image analysis system 100 described in this application.
  • Processor 181 can be comprised of one or more processors.
  • processor 181 may include a microcontroller, a reduced instruction set computer (RISC), an application specific integrated circuit (ASIC), an application-specific instruction-set processor (ASIP), a central processing unit (CPU), a graphics processing unit (GPU), a physics processing unit (PPU), a microprocessor unit, a digital signal processor (DSP), a field programmable gate array (FPGA), or other circuits or processors capable of executing computer program instructions, or a combination thereof.
  • RISC reduced instruction set computer
  • ASIC application specific integrated circuit
  • ASIP application-specific instruction-set processor
  • CPU central processing unit
  • GPU graphics processing unit
  • PPU physics processing unit
  • DSP digital signal processor
  • FPGA field programmable gate array
  • processor 181 can control imaging device 110, multimodal image processing system 130, and/or remote terminal 170. In some embodiments, the processor 181 can control the imaging device 110, the multimodal image processing system 130, or the remote terminal 170 to receive information, or send messages to the above systems/devices. In some embodiments, processor 181 can receive image information or information related to the target object from imaging device 110. Processor 181 can transmit the image information or the information related to the target object to multimodal image processing system 130. Processor 181 can receive processed data or images from multimodal image processing system 130. Processor 181 can transmit the processed data or images to remote terminal 170. In some embodiments, processor 181 can execute programs, algorithms, software, and the like. In some embodiments, processor 181 can include one or more interfaces. The interfaces may include interfaces between imaging device 110, multimodal image processing system 130, remote terminal 170, and/or other modules or units in image analysis system 100.
  • processor 181 can execute commands from remote terminal 170.
  • Processor 181 can control imaging device 110, and/or multimodal image processing system 130 by processing and/or converting the above commands.
  • processor 181 can process the information entered by the user via remote terminal 170 and convert the information into one or more corresponding commands.
  • the command may be a scan time, scan target positioning information, the rotation speed of the gantry, a scan parameter, or the like, or a combination thereof.
  • Processor 181 can control multimodal image processing system 130 to select different algorithms to process and/or analyze image data.
  • processor 181 may also be integrated into an external computing device for controlling imaging device 110, multimodal image processing system 130, and/or remote terminal 170, and the like.
  • computing device 180 also includes one or more forms of storage devices for storing data, programs, and/or algorithms, etc., for example, hard disk 182, read only memory (ROM) 183, random access memory (RAM) 185, cloud storage, and the like.
  • the storage devices can store various data files used in computer processing and/or communication, as well as program instructions that may be executed by processor 181.
  • the storage device may be internal to image analysis system 100, or external to image analysis system 100 (eg, an external storage device connected via network 160, or cloud storage, etc.).
  • a storage device (e.g., hard disk 182, read only memory (ROM) 183, random access memory (RAM) 185, cloud storage, etc.) can store information.
  • the information may include multimodal images and patient-related information, standard maps and related information, and programs, software, algorithms, data, text, numbers, images, audio, etc. used in multimodal image processing, or a combination thereof.
  • the hard disk 182 may be a device that stores information using magnetic energy. In some embodiments, the hard disk 182 may also be another device that stores information using magnetic energy, such as a floppy disk, magnetic tape, magnetic core memory, magnetic bubble memory, USB flash drive, flash memory, etc.
  • Read only memory (ROM) 183 and/or random access memory (RAM) 185 may be devices that store information using electrical energy.
  • Read only memory (ROM) 183 may include optical disk drives, hard disks, magnetic tape, early nonvolatile memory (NVRAM), nonvolatile SRAM, flash memory, electrically erasable programmable read-only memory, erasable programmable read-only memory, programmable read-only memory, etc., or a combination thereof.
  • the random access memory (RAM) 185 may include dynamic random access memory (DRAM), static random access memory (SRAM), thyristor random access memory (T-RAM), zero capacitance random access memory (Z-RAM), and the like, or a combination thereof.
  • the storage device may also be a device that optically stores information, such as a CD or DVD or the like.
  • the storage device may be a device that stores information using magneto-optical means, such as a magneto-optical disk or the like.
  • the access mode of the foregoing storage device may be random storage, serial access storage, read-only storage, or the like, or a combination of multiple.
  • the above storage device may be a non-permanent memory storage device, or a permanent memory storage device.
  • the storage devices mentioned above are just a few examples, and the storage devices are not limited thereto.
  • the above storage devices may be local or remote.
  • the above storage devices may be centralized or distributed. For example, the above storage device can be set on a cloud server.
  • Input/output component 184 can support input/output data streams between computing device 180 and other components of image analysis system 100 (e.g., imaging device 110, remote terminal 170, etc.), such as receiving, transmitting, displaying, or printing information.
  • the input/output component 184 can include a keyboard, a touch device, a mouse, a mechanical analog device, a wearable device (eg, 3D glasses, mechanical gloves, etc.), a virtual reality device, an audio input device, an image input device , and remote control devices, etc., or a combination of multiple.
  • the output information can be sent to the user or not sent.
  • the output information that is not transmitted may be stored in the hard disk 182, the read only memory (ROM) 183, the random access memory (RAM) 185, or deleted.
  • the user may enter some raw parameters through the input/output component 184 or set initialization conditions for the corresponding multimodal image processing.
  • some of the input information may come from an external data source (eg, a floppy disk, a hard disk, an optical disk, a memory chip, a wired terminal, a wireless terminal, etc., or a combination of multiples).
  • Input/output component 184 can receive information from other modules or units in image analysis system 100, or send information to other modules or units in the system.
  • Communication port 186 may enable data communication between computing device 180 and other components of image analysis system 100 (e.g., imaging device 110, remote terminal 170, etc.). The computing device can send and receive information and data from network 160 through communication port 186.
  • the form in which the image analysis system 100 outputs information may include numbers, characters, instructions, pressure, sound, images, systems, software, programs, etc., or a combination of multiples.
  • the user interface 187 can display intermediate information of the multimodal image processing process, or multimodal image processing results (e.g., image cross-sectional views, multi-planar images reconstructed from multimodal images, etc., or a combination thereof).
  • the user interface 187 can prompt the user to input parameters or assist the user in participating in the multimodal image processing process (e.g., starting or stopping the process, selecting or modifying operational parameters, selecting or modifying algorithms, modifying the program, exiting the system, performing system maintenance, or upgrading or updating the system).
  • the functions of the above storage devices (hard disk 182, read only memory (ROM) 183, random access memory (RAM) 185, cloud storage, etc.) may also be completed by a cloud computing platform.
  • the cloud computing platform may include a storage-oriented cloud platform that mainly stores data, a computing-oriented cloud platform that mainly processes data, and an integrated cloud computing platform that handles both data storage and processing.
  • the cloud platform used by the image analysis system 100 may be a public cloud, a private cloud, a community cloud, or a hybrid cloud. For example, according to actual needs, a part of the information received by the image analysis system 100 may be calculated and/or stored by the cloud platform; and another part of the information may be calculated and/or stored by a local processing device and/or a storage device.
  • image analysis system 100 can have one or more computing devices 180.
  • the plurality of computing devices 180 can implement the same or different functionality.
  • a first computing device can control imaging device 110 to image and acquire multimodal image data; and a second computing device can acquire the multimodal image data from the first computing device or another storage device, and process and/or analyze the multimodal image data.
  • the mobile device 190 can implement and/or run the particular systems disclosed in this application.
  • the remote terminal 170 used for displaying and interacting with relevant information can be a mobile device 190.
  • the mobile device 190 can take various forms, including a smart phone, a tablet computer, a music player, a portable game console, a Global Positioning System (GPS) receiver, a wearable computing device (e.g., glasses, a watch, etc.), and the like, or a combination thereof.
  • GPS Global Positioning System
  • mobile device 190 can include one or more antennas 199 (e.g., wireless communication units), a display module 191, a graphics processing unit 192, a central processing unit 193, an input/output module 194, a memory 195, and a storage module 198.
  • mobile device 190 may also include any other suitable components, such as a system bus or controller (not shown).
  • a mobile operating system 196, such as iOS, Android, or Windows Phone, and one or more applications 197 can run on the mobile device 190.
  • Application 197 can include a browser and/or other mobile application suitable for receiving and processing image related information on mobile device 190.
  • the input/output module 194 can provide interactive functionality for multimodal image related information. Input/output module 194 can enable interaction of information between mobile device 190 and multimodal image processing system 130, and/or other components of image analysis system 100, for example, via network 160.
  • computing device 180 and/or mobile device 190 can serve as a hardware platform for one or more of the components described above (e.g., multimodal image processing system 130, remote terminal 170, and/or other components of image analysis system 100 depicted in FIG. 1-A).
  • the hardware elements, operating systems, and programming languages of such computers are conventional in nature, and it is assumed that those skilled in the art are sufficiently familiar with these techniques to use them, as described herein, to provide the information required for multimodal image processing.
  • a computer containing user interface elements can be used as a personal computer (PC) or other type of workstation or terminal device, and can be used as a server after being properly programmed.
  • PC personal computer
  • the multimodal image processing system 130 can include a visualization module 210, an analysis module 220, and a database module 230.
  • the multimodal image processing system 130 of FIG. 2 merely represents some embodiments of the present application; those skilled in the art can, without any creative work, modify, add to, and delete from the description of the multimodal image processing system 130. For example, two of the modules can be combined into one module, or one of the modules can be divided into two or more modules.
  • the visualization module 210 can visualize the multimodal image.
  • the visualization module 210 can be coupled to the analysis module 220, the database module 230, and/or other related modules (not shown).
  • the multimodal image may refer to an image of two or more different modalities.
  • the images of different modalities may refer to images generated under different imaging principles, images generated by different devices, or images generated by the same imaging device in different imaging modes.
  • the multimodal image may comprise images of multiple modalities, e.g., a combination of two or more of an MRI image, a CT image, an MRA image, an fMRI image, a PET image, a DTI/DTT image, a CT-PET image, an fMRI-DTI image, a TOF-MRI image, a TOF-MRA image, and the like.
  • the multimodal image may be acquired from imaging device 110, processor 181, a storage device (e.g., hard disk 182, read only memory (ROM) 183, random access memory (RAM) 185, cloud storage, etc.), input/output component 184, or remote terminal 170, or obtained from an external data source via the network 160.
  • the visualized multimodal image may be obtained in an experiment (e.g., a medical experiment, a clinical simulation experiment, an industrial test experiment, etc.), generated by the imaging device 110, or synthesized through simulation and computation.
  • visualization module 210 can perform processing such as registration, fusion, and/or reconstruction of multimodal images.
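The fusion step mentioned here can be pictured simply: a minimal numpy sketch of pixel-level fusion of two already-registered volumes, where the alpha-blend weighting is an illustrative choice, not the patent's method:

```python
import numpy as np

def fuse_images(ct: np.ndarray, pet: np.ndarray, alpha: float = 0.6) -> np.ndarray:
    """Hypothetical fusion of two registered volumes: normalize each
    modality to [0, 1], then alpha-blend them voxel by voxel."""
    norm = lambda v: (v - v.min()) / (np.ptp(v) + 1e-9)
    return alpha * norm(ct) + (1.0 - alpha) * norm(pet)

fused = fuse_images(np.random.rand(64, 64, 64), np.random.rand(64, 64, 64))
```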
  • visualization module 210 can visualize multimodal images based on one or more visualization techniques.
  • depending on the data description method used in processing, the visualization technique for the multimodal image may be a surface rendering, volume rendering, or hybrid rendering technique.
  • the surface rendering technique can reconstruct the surface of the object, that is, extract isosurface data from the three-dimensional data field obtained by multimodal image data segmentation, and realize surface rendering using graphics techniques.
  • the volume rendering technique can use a voxel as a basic unit to directly generate a three-dimensional object image from three-dimensional data, and represent internal information of the object.
  • the hybrid rendering technique can perform surface and internal synchronous reconstruction by combining surface rendering and volume rendering reconstruction algorithms.
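To make the two families concrete: an isosurface mesh (surface rendering) can be extracted with marching cubes, while a maximum intensity projection is one of the simplest volume-style projections. A sketch assuming scikit-image and random stand-in data:

```python
import numpy as np
from skimage import measure

volume = np.random.rand(64, 64, 64)      # hypothetical 3-D data field

# Surface rendering: extract an isosurface mesh from the data field.
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)

# Volume rendering (simplest form): a maximum intensity projection that
# maps every voxel along a ray to one output pixel, exposing interior detail.
mip = volume.max(axis=0)
```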
  • the results of the visualization of the multimodal images by the visualization module 210 may be stored in a storage device (e.g., hard disk 182, read only memory (ROM) 183, random access memory (RAM) 185, cloud storage, etc.) to provide information for subsequent analysis of the multimodal images.
  • the results of the visualization of the multimodal image by the visualization module 210 can be analyzed in real time by the analysis module 220.
  • Analysis module 220 can analyze the multimodal image. Analysis module 220 can be coupled to visualization module 210, database module 230, or other related modules (not shown). In some embodiments, the analysis module 220 can perform a separate analysis of one or more modal images in the multimodal image, or a comprehensive analysis of the reconstructed image of the multimodal image. In some embodiments, the analysis module 220 can analyze local and/or overall information of the target object displayed on the multimodal image, such as tissue function information of the target object, spatial structure information of the target object, physiological information of the target object, and the like, or a combination thereof. The tissue function information may include whether the physiological function of the tissue or organ is abnormal, whether a lesion occurs, the degree of the lesion, or the like, or a combination thereof.
  • the spatial structure information may include two-dimensional and/or three-dimensional anatomical information, such as the shape, number, size, relative position, etc. of the tissue or organ, or a combination of multiples.
  • the physiological information may include a metabolic rate of a tissue or an organ, a name of a blood vessel through which the lesion site passes, a blood flow rate of the blood vessel, a blood flow velocity, and the like, or a combination thereof.
  • the analysis module 220 can analyze the lesion surrounding information in the multimodal image and determine the lesion information after the lesion is removed to assist in the subsequent determination of the surgical simulation protocol.
  • the peripheral information of the lesion may include blood vessel information around the lesion, peripheral nerve information of the lesion, tissue information of the surrounding organs of the lesion, or the like, or a combination thereof.
  • the damage information after removal of the lesion may include vascular damage information, nerve damage information, organ and tissue damage information after removal of the lesion, and the like, or a combination thereof.
  • the analysis module 220 can generate an analysis report based on the results of the analysis.
  • the analysis report can be sent by analysis module 220 to a storage device (e.g., hard disk 182, read only memory (ROM) 183, random access memory (RAM) 185, cloud storage, etc.), input/output component 184, and/or remote terminal 170.
  • Database module 230 can store and/or retrieve information.
  • Database module 230 can include one or more databases.
  • the database module 230 can be coupled to the visualization module 210, the analysis module 220, or other related modules (not shown).
  • the information stored by the database module 230 may include basic information of the patient corresponding to the multimodal image, case information of the target object displayed on the multimodal image, related information of the multimodal image, or the like, or a combination thereof.
  • the basic information may include the patient's name, gender, age, medical history, biochemical examination information, etc., or a combination of various.
  • the case information may include images, image analysis results, lesion-related information, surgical plans, post-operative recovery information, and the like, or a combination thereof.
  • the related information may include a generation time of the multimodal image, a generation time of the multimodal image inspection result, a system analysis time of the multimodal image, a surgical operation time of the patient, and the like, or a combination thereof.
  • database module 230 can store information from imaging device 110, processor 181, storage devices (e.g., hard disk 182, read only memory (ROM) 183, random access memory (RAM) 185, cloud storage, etc.), input/output component 184, remote terminal 170, visualization module 210, analysis module 220, and the like.
  • database module 230 can store the above information in a data table.
  • a database can contain one or more data tables.
  • the data table can include one or more rows, and/or one or more columns.
  • the above information can be stored in rows or columns in the data table.
  • the data table can store the above information by category. For example, basic information of one or more patients, case information of one or more target objects, related information of one or more multimodal images, and the like may each be stored in separate data tables.
  • the database module 230 can create a connection between two or more data tables to facilitate finding corresponding information in a second data table through information in a first data table.
  • the database module 230 may store the same patient's name and surgical plan in the same row or column of the first data table and the second data table, respectively; the database module 230 can then find the patient's surgical plan based on the patient's name. As another example, the database module 230 can assign the same feature value or number to the same patient's name and surgical plan, by which the same patient's information can be linked. In some embodiments, database module 230 can create an index for stored information.
  • a database can include one or more indexes.
  • the index may refer to a data structure that sorts information of one or more columns in a data table.
  • the data structure may adopt a B-Tree structure or a B+Tree structure. Indexing can facilitate the search of information.
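For example, with SQLite (whose indexes are B-Tree based), two linked data tables and an index on the shared column might look like this; the schema and values are hypothetical:

```python
import sqlite3

con = sqlite3.connect("cases.db")       # hypothetical database file
cur = con.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS patients (name TEXT, age INTEGER)")
cur.execute("CREATE TABLE IF NOT EXISTS plans (name TEXT, surgical_plan TEXT)")
# A B-Tree index on the lookup column speeds keyword retrieval.
cur.execute("CREATE INDEX IF NOT EXISTS idx_plans_name ON plans (name)")
cur.execute("INSERT INTO patients VALUES (?, ?)", ("patient_a", 52))
cur.execute("INSERT INTO plans VALUES (?, ?)", ("patient_a", "resection, 3 mm margin"))
# Linking two tables through the shared name column, as described above.
cur.execute("""SELECT p.name, pl.surgical_plan
               FROM patients AS p JOIN plans AS pl ON p.name = pl.name
               WHERE p.name = ?""", ("patient_a",))
print(cur.fetchall())
con.commit(); con.close()
```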
  • the database module 230 can perform retrieval of keywords based on one or more stored information, and/or automatic retrieval.
  • the keyword retrieval may be a search based on one or more keywords provided by the user (e.g., basic information of the patient corresponding to the image, case information of the target object displayed on the multimodal image, related information of the multimodal image, and so on).
  • the automatic retrieval may be an automated classified search performed by the database module 230 based on one or more criteria, such as identical or similar image modalities, identical or similar image analysis results, identical or similar surgical plans, identical or similar generation times of image inspection results, etc., or a combination thereof.
  • database module 230 can retrieve based on one or more indexes to improve retrieval efficiency.
  • database module 230 can operate on one or more databases. Operations on the database can include creating, accessing, modifying, updating, or deleting databases.
  • the creation of the database may be the creation or activation of one or more new databases to facilitate storage and/or retrieval of information.
  • the accessing the database may be accessing one or more databases that have been created for storage and/or retrieval of information.
  • the modifying the database may be modifying or replacing information in one or more of the databases that have been created.
  • updating the database may be replacing or updating information in one or more of the already created databases.
  • deleting the database may be deleting information in one or more of the already created databases.
  • database module 230 can use one or more database languages, such as a data definition language, a data manipulation language, a data query language, a data control language, a transaction control language, etc., or a combination of multiples.
  • image analysis system 100 can allow a user with appropriate access rights to access database module 230.
  • the access rights may include, for example, reading some or all of the information related to the stored information, updating some or all of the information related to the stored information, or the like, or a combination of multiples.
  • the access rights may be associated with a set of login information and linked to the login information.
  • the login information may be a user account or a login password or the like input when the user logs in to the image analysis system 100, or a combination of multiples.
  • image analysis system 100 can provide one or more layers of access rights.
  • the first-tier access rights may be full access to stored information, for example, allowing both reading and updating of all stored information;
  • the second-tier access rights may be partial access to stored information, for example, allowing reading and updating of part of the stored information;
  • the third-tier access rights may be minimal access to stored information, for example, allowing only reading of part of the stored information.
  • the updating may include providing information that does not exist in the image analysis system 100, or modifying existing information with new information.
  • the login information can be associated with the three different access rights.
  • database module 230 can be integrated into a storage device (eg, hard disk 182, read only memory (ROM) 183, random access memory (RAM) 185, cloud storage, etc.).
  • visualization module 210 can be integrated into input/output component 184.
  • analysis module 220 can be integrated into processor 181.
  • multimodal image processing system 130 can perform a multimodal image processing flow.
  • the multimodal image processing flow can include visualizing the multimodal image 310, analyzing the multimodal image 320, and manipulating the database 330.
  • a multimodal image can be visualized.
  • visualization module 210 can perform operation 310.
  • the visualization process of the multimodal image may further include registering the multimodal image, fusing the multimodal image, segmenting the multimodal image, reconstructing an image based on the multimodal image data obtained after segmentation, and/or displaying the reconstructed image, as shown in FIG. 5.
  • the multimodal image may be an image of one modality, or multiple images of different modalities, e.g., an MRI image, a CT image, an MRA image, an fMRI image, a DTI image, a DTT image, an fMRI-DTI image, a TOF-MRI image, a TOF-MRA image, etc., or a combination thereof.
  • multimodal images can be analyzed.
  • the multimodal image visualized in 310 can be analyzed at 320.
  • analysis module 220 can perform operation 320.
  • 320 can further include determining the lesion location in the multimodal image, determining lesion peripheral information, determining the lesion removal range, determining peripheral information after removal of the lesion, determining damage information after removal of the lesion, and/or optimizing the lesion removal range, etc., as shown in FIG. 7.
  • a single modal image can be analyzed separately at 320.
  • 320 can determine information such as the relationship between brain tissue and brain function.
  • a comprehensive analysis of the reconstructed image can be performed at 320.
  • 320 can determine an optimized lesion removal range to guide the surgical procedure.
  • the database can be operated.
  • database module 230 can perform operation 330.
  • 330 can further include storing information to a database, and/or retrieving information in a database, and the like.
  • the information may be basic information of a patient corresponding to the multimodal image, case information of the target object displayed on the multimodal image, related information of the multimodal image, or the like, or a combination thereof.
  • the information may be retrieved according to one or more methods.
  • the method of retrieving information may be performing keyword retrieval or automatic retrieval based on one or more stored information.
  • machine learning may be performed on database information in accordance with one or more machine learning algorithms to optimize the extent of lesion removal or to produce an optimized surgical plan as a reference for the physician (a hypothetical sketch follows).
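As one hypothetical instance of such learning (the patent names no specific algorithm), a regressor could be fit on features mined from stored cases to suggest a removal margin; all feature names and numbers below are invented, and scikit-learn is assumed:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical training data mined from the case database: each row is
# (lesion volume, distance to nearest vessel, eroded fiber count), and the
# target is the removal margin that led to the best post-operative outcome.
X = np.array([[12.0, 3.1, 40], [30.5, 1.2, 210], [8.4, 6.0, 15]])
y = np.array([4.0, 1.5, 5.0])            # margin in millimetres (made up)

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
new_case = np.array([[15.0, 2.5, 90]])
print(model.predict(new_case))           # suggested removal margin
```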
  • the visualization module 210 can include an image acquisition unit 410, an image registration unit 420, an image fusion unit 430, an image segmentation unit 440, an image reconstruction unit 450, and a display unit 460.
  • the units shown can be connected directly and/or indirectly to each other.
  • the visualization module 210 illustrated in FIG. 4 is merely representative of some embodiments of the present application.
  • modifications, additions, and deletions may be made according to the description of the visualization module without any creative work.
  • two of the units may be combined into one unit, or one of the units may be divided into two or more units.
  • the image acquisition unit 410 can acquire an image (and/or image data).
  • the acquired image (and/or image data) may be acquired directly from the imaging device 110, the processor 181, a storage device (e.g., hard disk 182, read only memory (ROM) 183, random access memory (RAM) 185, cloud storage, etc.), or the input/output component 184, or acquired via the network 160.
  • the target object displayed on the acquired image may be a human body, an animal, or a part thereof, such as an organ, a tissue, a lesion (for example, a tumor site), or any combination of the above.
  • the target object can be the head, chest, abdomen, heart, liver, upper limbs, lower limbs, spine, bones, blood vessels, etc., or any combination of the above.
  • the acquired image data may be three-dimensional image data, and/or two-dimensional image data.
  • the acquired image data may be image data generated at different times, different imaging devices, and/or different conditions (eg, weather, illumination, scan position, angle, etc.).
  • the acquired image data may be image data of a single modality, and/or a combination of image data of a plurality of different modalities.
  • the acquired image data may include an MRI image, a CT image, an MRA image, an fMRI image, a DTI image, a DTT image, an fMRI-DTI image, a TOF-MRI image, a TOF-MRA image, etc., or a combination thereof.
  • the acquired image data may be standard atlas data, and/or a combination of image data of one or more different modalities.
  • the acquired image data may be image data of a specific part scanned according to an inspection requirement, for example, a panoramic scan of a target object, blood vessels of a target object, nerve distribution, functional areas, tissue metabolism information, etc., or a combination thereof.
  • the acquired image data may be brain image raw data, processed brain image data, or brain image processing parameters, and the like.
  • Image registration unit 420 can register two or more images.
  • the two or more images may be images of the same modality, or images of different modalities.
  • the two or more images may be images obtained at different times, different imaging devices, or different conditions (eg, weather, illumination, camera position, angle, etc.).
  • Image registration can refer to the process of matching and superimposing two or more images.
  • image registration unit 420 can use the spatial location of the target object as a basis for registration. For example, registration can be performed at the same or similar spatial locations in two or more images based on the same anatomical point of the target object.
  • image registration unit 420 can match one or more anatomical points, or points of interest (e.g., points of diagnostic significance, or points closely related to a surgical plan), of a target object across two or more images.
  • image registration unit 420 may employ the same or different image registration methods, such as relative registration, and/or absolute registration.
  • the relative registration may select one image as a reference image and register the other images with it.
  • the coordinate system of the reference image and other images may be arbitrary, for example, the coordinate system of the reference image and other images may be the same or different.
  • the absolute registration may first select a coordinate system, and then transform the multimodal image to the coordinate system to achieve uniformity of the coordinate system.
  • image registration unit 420 can geometrically correct each modal image to achieve uniformity of the coordinate system.
  • geometric correction of the image may be implemented in accordance with one or more geometric transformation polynomials.
  • a certain number of uniformly distributed same-name points may be determined in the multimodal images, and the polynomial coefficients of the geometric transformation may then be determined from these same-name points, thereby realizing geometric correction of one image to another image.
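To make the polynomial correction step concrete, below is a minimal sketch, assuming a first-order (affine) polynomial and a handful of invented matched same-name points; real registrations may use higher-order polynomials and many more control points.

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Fit x' = a0 + a1*x + a2*y and y' = b0 + b1*x + b2*y by least squares
    from matched control ("same name") points."""
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    A = np.column_stack([np.ones(len(src)), src[:, 0], src[:, 1]])  # [1, x, y]
    ax, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)   # x' coefficients
    ay, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)   # y' coefficients
    return ax, ay

def apply_affine(ax, ay, pts):
    pts = np.asarray(pts, dtype=float)
    A = np.column_stack([np.ones(len(pts)), pts[:, 0], pts[:, 1]])
    return np.column_stack([A @ ax, A @ ay])

# Three or more matched points determine the six coefficients; these are toys.
src = [(10, 12), (40, 15), (22, 48), (60, 60)]
dst = [(11, 14), (42, 16), (23, 51), (63, 63)]
ax, ay = fit_affine(src, dst)
print(apply_affine(ax, ay, [(30, 30)]))  # corrected coordinates of a new point
```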
  • image registration unit 420 can take an image representing an anatomical structure (e.g., MRI-T1) as a reference image, and register other images (e.g., DTI/DTT, CT/PET, MRI TOF, etc., or a combination thereof) with MRI-T1.
  • image registration unit 420 can register fMRI-BOLD image data with standard map data using standard map data as a reference image.
  • Image registration unit 420 can perform image registration based on one or more image registration methods.
  • image registration methods may be point methods (e.g., anatomical landmark methods), curve methods, surface methods (e.g., surface contour methods), moment and principal axis methods (e.g., spatial coordinate alignment), cross-correlation methods, mutual information methods, the sequential similarity detection algorithm (SSDA), mapping methods, nonlinear variation methods, etc., or a combination thereof.
  • the image registration method may be a multi-resolution method based on maximum mutual information, a grayscale statistical method based on maximum mutual information, a feature image registration method based on surface contours, or the like, or a combination thereof.
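As background for the maximum-mutual-information methods just listed, the sketch below estimates the mutual information score that such registration methods maximize over candidate transforms; the histogram bin count and random image shapes are illustrative assumptions.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Estimate mutual information between two equally shaped images
    from their joint gray-level histogram."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = hist / hist.sum()                 # joint probability
    px = pxy.sum(axis=1, keepdims=True)     # marginal of image A
    py = pxy.sum(axis=0, keepdims=True)     # marginal of image B
    nz = pxy > 0                            # skip empty bins to avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# A registration loop would transform one image over candidate parameters and
# keep the transform that maximizes this score.
a = np.random.rand(64, 64)
print(mutual_information(a, a))  # an image is maximally informative about itself
```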
  • image registration unit 420 can select one or more image registration methods for image registration.
  • image registration unit 420 can select an image registration method based on the history of homogeneous multimodal image registration.
  • the image registration unit 420 may allow manual intervention in a fully automatically or semi-automatically selected image registration method to achieve multimodal image registration.
  • the user can manually select an image registration method via input/output component 184 or remote terminal 170.
  • the user can perform parameter setting, adjustment, and the like on the image registration method automatically selected by the image registration unit 420.
  • the image fusion unit 430 can fuse the images.
  • image fusion unit 430 can fuse the two or more images that are registered.
  • Image fusion can refer to the process of extracting the effective information from each modal image of the same target object and combining the multiple modal images to generate one image, so as to improve the spatial resolution and spectral resolution of the image information.
  • image fusion unit 430 can present the effective information of each modal image in the fused image.
  • the image fusion unit 430 can combine the complementary multimodal images into a brand-new image in which some or all of the information from the plurality of modal images can be displayed.
  • Image fusion unit 430 can perform image fusion based on one or more image fusion algorithms.
  • the image fusion algorithm may include an intensity hue saturation (IHS) algorithm, principal component analysis (PCA), a ratio transformation algorithm (Brovey Transform), a product transformation algorithm (Multiplicative), a wavelet transform method (e.g., a three-dimensional wavelet transform method), etc., or a combination thereof.
  • image fusion can be divided into decision-level fusion, feature-level fusion, and data-level fusion (pixel-level fusion) according to the processing level.
  • the decision-level fusion can be based on a cognitive model approach, using a large database and an expert decision system for analysis, reasoning, identification, and decision making; that is, only the data needs to be associated. Decision-level fusion can also be based on a number of other rules, such as the Bayesian method, the D-S evidence method, and the voting method.
  • the feature level fusion may perform comprehensive processing on feature information of an image (eg, edge, shape, texture, region, etc.).
  • the pixel level fusion may directly process data of one or more pixels of the obtained multimodal image to obtain a fused image.
  • Pixel level fusion may perform image fusion according to one or more algorithms, such as spatial domain algorithms, and/or transform domain algorithms, and the like.
  • the spatial domain algorithm can include a logic filtering method, a gray weighted average method, or a contrast modulation method.
  • the transform domain algorithm may include a pyramid decomposition fusion method, or a wavelet transform method, or the like.
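To illustrate one transform-domain rule, here is a minimal sketch of wavelet-based pixel-level fusion of two registered images using the PyWavelets package: both images are decomposed, the larger-magnitude coefficient is kept at each position, and the result is reconstructed. The wavelet, decomposition level, and random inputs are assumptions for illustration, not the patent's specific method.

```python
import numpy as np
import pywt

def wavelet_fuse(img_a, img_b, wavelet="db2", level=2):
    """Fuse two registered images by keeping, at every wavelet coefficient,
    the one with larger magnitude (a common max-abs fusion rule)."""
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    # Approximation coefficients.
    fused = [np.where(np.abs(ca[0]) >= np.abs(cb[0]), ca[0], cb[0])]
    # Detail coefficients (horizontal, vertical, diagonal) per level.
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in zip((ha, va, da), (hb, vb, db))))
    return pywt.waverec2(fused, wavelet)

a = np.random.rand(128, 128)   # stand-ins for two registered modal images
b = np.random.rand(128, 128)
print(wavelet_fuse(a, b).shape)
```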
  • pixel level fusion and feature level fusion may register and correlate information of multimodal images (eg, raw image data, feature vectors, etc.), while decision level fusion may associate image data.
  • image fusion unit 430 can select one or more image fusion methods for image fusion.
  • the choice of image fusion method may be fully automatic, semi-automatic, or manual.
  • the image fusion unit 430 can select an image fusion method based on the history of homogeneous multimodal image fusion.
  • the user can manually select an image fusion method via input/output component 184 or remote terminal 170.
  • the user can perform parameter setting, adjustment, and the like on the image fusion method automatically selected by the image fusion unit 430.
  • the image segmentation unit 440 can segment the image.
  • image segmentation unit 440 can segment in a single modal image or segment in a multimodal image.
  • image segmentation unit 440 may perform image segmentation before image registration and/or fusion, or after image registration and/or fusion.
  • the image segmentation process can be performed based on the corresponding features of the pixel points (or voxel points) of the image.
  • corresponding features of the pixel points (or voxel points) may include texture structure, grayscale, average grayscale, signal strength, color saturation, contrast, brightness, etc., or a combination of multiples.
  • image segmentation unit 440 can segment the multimodal image by manual, automatic, or semi-automatic segmentation methods based on the medical image characteristics of the target object.
  • the segmented image may include organ tissue, vascular structure, nerve fibers, structural functional regions, and the like of the target object, or a combination thereof.
  • the image segmentation unit 440 can segment the brain tissue structure and the corresponding brain functional region in a brain fMRI-BOLD image.
  • image segmentation unit 440 can segment brain nerve fibers in a brain fMRI-DTI/DTT image.
  • image segmentation unit 440 can segment the vascular structure of the brain in a brain TOF-MRI image.
  • image segmentation unit 440 can perform image segmentation based on one or more segmentation methods.
  • image segmentation may be based on a gray threshold segmentation method, a region growing and splitting method, an edge segmentation method, a histogram method, a fuzzy-theory-based segmentation method (e.g., fuzzy threshold segmentation, fuzzy connectivity segmentation, fuzzy cluster segmentation), a neural-network-based segmentation method, a mathematical morphology segmentation method (e.g., a morphological watershed algorithm), etc., or a combination thereof.
  • the image segmentation unit 440 may perform image segmentation based on the similarity of gray values between adjacent pixels in the fused multimodal image and the difference in gray values between different pixels.
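A minimal region-growing sketch in the spirit of this gray-value-similarity criterion follows; the seed point, tolerance, and 4-connectivity are illustrative assumptions rather than the patent's specific algorithm.

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol=10.0):
    """Grow a region from `seed`, absorbing 4-connected pixels whose gray
    value differs from the seed's gray value by at most `tol`."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    seed_val = float(image[seed])
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        if not (0 <= y < h and 0 <= x < w) or mask[y, x]:
            continue
        if abs(float(image[y, x]) - seed_val) > tol:
            continue
        mask[y, x] = True
        queue.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return mask

img = np.random.randint(0, 255, (64, 64)).astype(float)  # toy gray image
print(region_grow(img, (32, 32), tol=40.0).sum())        # pixels in the region
```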
  • the image reconstruction unit 450 can reconstruct a three-dimensional and/or two-dimensional image.
  • image reconstruction unit 450 may reconstruct an image based on multimodal image data to display multimodal information of the target object.
  • image reconstruction unit 450 may reconstruct an image based on image data obtained by registration, blending, and/or segmentation.
  • image reconstruction unit 450 may establish one or more organ or tissue models, e.g., a blood vessel model, a segmented model of organ tissue, a connected model of nerve fibers, a three-dimensional overall structural model of the target object, etc., or a combination thereof.
  • image reconstruction unit 450 can perform image reconstruction based on one or more reconstruction techniques or methods.
  • image reconstruction may be based on a surface model method, a voxel model method, or the like, or a combination of various.
  • the surface model method may include a contour reconstruction method, a voxel reconstruction method, volume rendering (VR), multi-planar reformation (MPR), maximum intensity projection (MIP), surface shaded display (SSD), etc.
  • the voxel model method may include a spatial domain method, a transform domain method, or the like.
  • the image reconstruction unit 450 may obtain a three-dimensional reconstructed image from the segmented image data based on a three-dimensional reconstruction technique, using a toolkit such as VTK (Visualization Toolkit) or OSG (Open Scene Graph).
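As a stand-in for the VTK/OSG pipelines named above, the following minimal sketch extracts a renderable triangle mesh from a segmented binary volume with scikit-image's marching cubes; the spherical toy volume is an assumption, not patient data.

```python
import numpy as np
from skimage import measure

# Toy segmented volume: a solid sphere standing in for a segmented organ.
z, y, x = np.mgrid[-32:32, -32:32, -32:32]
volume = (x**2 + y**2 + z**2 < 20**2).astype(float)

# Extract the tissue surface as a triangle mesh at the 0.5 iso-level;
# the mesh can then be handed to a renderer (e.g., a VTK actor) for display.
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
print(verts.shape, faces.shape)
```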
  • the display unit 460 can display an image.
  • the display unit 460 can display the image acquired by the image acquisition unit 410, the image registered by the image registration unit 420, the image fused by the image fusion unit 430, the image segmented by the image segmentation unit 440, the image reconstructed by the image reconstruction unit 450, the information generated by the analysis module 220, the information obtained by the database module 230 operating on the database, and/or any multimodal image processing information generated during the analysis.
  • the display unit 460 can display the target object and surrounding tissue information in the reconstructed image, for example, the spatial anatomy of the target object, peripheral vascular tissue, nerve fibers, structural functional regions, and tissue metabolism, or the like, or a combination thereof.
  • FIG. 5 is an exemplary flow diagram of a visualization process shown in accordance with some embodiments of the present application.
  • visualization module 210 can perform a visualization process.
  • the visualization process may further include acquiring standard atlas data 510, acquiring multimodal image data 520, registering the multimodal image data and the standard atlas data 530, fusing the registered multimodal image data 540, segmenting the fused multimodal image data 550, reconstructing an image based on the segmented image data 560, displaying the reconstructed image 570, and the like.
  • standard atlas data can be obtained.
  • image acquisition unit 410 can perform operation 510.
  • the standard atlas data may be a map capable of displaying target object information that is referred to as a standard, for example, a standard lung atlas, a standard heart atlas, a standard brain atlas, etc., or a combination thereof.
  • the standard atlas data may be acquired directly from a storage device (e.g., hard disk 182, read only memory (ROM) 183, random access memory (RAM) 185, cloud storage, etc.), the input/output component 184, or an external data source.
  • standard atlas data may be obtained from other standard atlas databases via network 160.
  • multimodal image data can be acquired.
  • Image acquisition unit 410 can perform operation 520.
  • multimodal image data may be acquired directly from the imaging device 110, the processor 181, a storage device (e.g., hard disk 182, read only memory (ROM) 183, random access memory (RAM) 185, cloud storage, etc.), the input/output component 184, or an external data source, or acquired over the network 160.
  • the acquired multimodal image data may be original image data, processed image data, or image processing parameters, or the like, or a combination thereof.
  • the multimodal image data may be magnetic resonance imaging (MRI), blood oxygen level dependent functional magnetic resonance imaging (fMRI-BOLD), diffusion tensor imaging (DTI), diffusion tensor tractography (DTT), magnetic resonance angiography (MRA), computed tomography (CT), positron emission tomography (PET), single photon emission computed tomography (SPECT), time-of-flight magnetic resonance imaging (TOF-MRI), time-of-flight magnetic resonance angiography (TOF-MRA), magnetoencephalography (MEG), ultrasound (US), transcranial magnetic stimulation magnetic resonance imaging (TMS-MRI), MRI-T1, MRI-T2, fMRI-DTI, fMRI-DTT, CT-PET, CT-SPECT, PET-MR, PET-US, SPECT-US, US-CT, US-MR, X-ray-CT, X-ray-PET, X-ray-US, or the like, or a combination thereof.
  • for different target objects or imaging modes, the multimodal image data may include the same or different numbers of images and modalities.
  • the image may include MRI-T1, MRI-T2, fMRI-BOLD, fMRI-DTI, fMRI-DTT, CT-PET, etc., or a combination thereof.
  • the multimodal image data acquired at 520 and the standard atlas data acquired at 510 can be registered.
  • the multimodal image data acquired at 520 may also be directly registered in 530 without relying on standard atlas data.
  • Image registration unit 420 can perform operation 530.
  • the multimodal image data and the standard atlas data may be registered using one or more of the image registration methods described above.
  • the image registration process may include extracting features of the multimodal image and the standard atlas to obtain feature points, finding matching feature point pairs between the multimodal image and the standard atlas by a similarity measure, determining spatial coordinate transformation parameters from the matched feature point pairs, and then registering the images based on the spatial coordinate transformation parameters.
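To make the similarity-measure matching step concrete, here is a minimal sketch that locates a feature patch in a reference image by normalized cross-correlation; the patch size and random images are illustrative assumptions, and the recovered point pairs would feed the transform fitting described above.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally shaped patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) or 1.0  # guard flat patches
    return float((a * b).sum() / denom)

def match_patch(reference, patch):
    """Slide `patch` over `reference`; return the best top-left corner."""
    ph, pw = patch.shape
    best_score, best_pos = -2.0, (0, 0)
    for y in range(reference.shape[0] - ph + 1):
        for x in range(reference.shape[1] - pw + 1):
            s = ncc(reference[y:y + ph, x:x + pw], patch)
            if s > best_score:
                best_score, best_pos = s, (y, x)
    return best_pos, best_score

atlas = np.random.rand(48, 48)        # toy standard-atlas slice
patch = atlas[20:28, 12:20].copy()    # feature patch from the moving image
print(match_patch(atlas, patch))      # recovers (20, 12) with score ~1.0
```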
  • the registration of the multimodal image data with the standard atlas data may be performed by using MRI-T1 as a reference image, and registering other images (for example, DTI/DTT, CT/PET, MRI TOF, etc., or a combination thereof) with MRI-T1.
  • the registration of the multimodal image data with the standard atlas data may be performed by registering the fMRI-BOLD image data to the standard atlas data, with the standard atlas data as the reference image.
  • the coordinate system employed by the MRI-T1, fMRI-BOLD image data, and/or standard atlas data may be arbitrary.
  • the image obtained by registering the fMRI-BOLD image data with the standard atlas data can then be registered again with MRI-T1.
  • the registration method of the multimodal image data and the standard atlas data may be a multi-resolution method based on maximum mutual information, a grayscale statistical method based on maximum mutual information, a feature image registration method based on surface contours, etc., or a combination thereof.
  • multimodal image data registered in 530 can be fused.
  • Image fusion unit 430 can perform operation 540.
  • the multimodal image data can be fused using one or more of the image fusion methods described above.
  • the fusion method of the multimodal image data may include a logic filtering method, a gray weighted average method, a contrast modulation method, a pyramid decomposition fusion method, a wavelet transform method (for example, a three-dimensional wavelet transform), a Bayesian method, a D-S evidence method, a voting method, etc., or a combination thereof.
  • multimodal image data fused in 540 can be segmented.
  • Image segmentation unit 440 can perform operation 550.
  • segmentation of multimodal image data may result in organ tissue, vascular structures, nerve fibers, functional regions, and the like of the target subject, or a combination of multiples.
  • the brain tissue structure and the corresponding brain functional areas can be segmented in a brain fMRI-BOLD image; the brain nerve fibers can be segmented in a brain fMRI-DTI/DTT image; and the vascular structure of the brain can be segmented in a brain TOF-MRI image.
  • the image can be reconstructed based on the results of the segmentation in 550.
  • Image reconstruction unit 450 can perform operation 560.
  • the image reconstruction may, based on the segmented organ tissue, vascular structure, nerve fibers, functional areas, and the like of the target object, perform three-dimensional modeling of the target object and the surrounding tissue using a three-dimensional reconstruction technique, so as to reconstruct a three-dimensional model of the target object.
  • the image reconstruction performed in 560 can include surface reconstruction of the target object, or volume reconstruction of the target object, and the like.
  • the surface reconstruction may form a three-dimensional surface data set based on the image data of the segmented target object, thereby performing three-dimensional surface reconstruction.
  • the volume reconstruction may form a three-dimensional volume data set based on the image data of the segmented target object, and then perform three-dimensional volume reconstruction.
  • the reconstructed image in 560 can be displayed.
  • the intermediate information and/or the results of any one of operations 510 through 560 can be displayed.
  • the multimodal image acquired in 520, the result of registration in 530, the result of the fusion in 540, and/or the result of segmentation in 550, etc. may be displayed.
  • Display unit 460 can perform operation 570.
  • 570 can display three-dimensional (and/or two-dimensional) related information of the target object and/or surrounding in the reconstructed image.
  • 570 can display a spatial anatomy of a target subject, surrounding vascular tissue, nerve fibers, or functional areas, etc., or a combination of multiples.
  • 510 and 520 can be performed sequentially, simultaneously, or alternately.
  • 550 and 560 can be combined into a single operation.
  • 550 and 560 can be performed sequentially, simultaneously, or alternately.
  • a display operation of 570 can be added before or after any of the operations between 510 and 560.
  • FIG. 6 is a schematic diagram of an analysis module 220 shown in accordance with some embodiments of the present application.
  • the analysis module 220 can include a lesion determining unit 610, a lesion surrounding information determining unit 620, and a surgical simulation unit 630.
  • the units shown can be connected directly (and/or indirectly) to each other.
  • the analysis module 220 illustrated in FIG. 6 is merely representative of some embodiments of the present application.
  • modifications, additions, and deletions may be made according to the description of the analysis module without any creative work.
  • two of the units may be combined into one unit, or one of the units may be divided into two or more units.
  • the lesion determination unit 610 can determine lesion information in the image.
  • the image may include a multimodal image acquired by the image acquisition unit 410, an image registered by the image registration unit 420, an image fused by the image fusion unit 430, an image segmented by the image segmentation unit 440, and/or an image reconstructed by the image reconstruction unit 450.
  • the lesion information may include the location, shape, diameter, volume, and/or number of lesions.
  • the lesion can be a tumor, a bleeding site, calcification, infarction, inflammation, a pathogen infection, a tissue congenital anomaly, etc., or a combination of multiples.
  • the lesion can be viewed at different angles in the reconstructed image, or measured in any sagittal, coronal, or axial section, to determine the location, shape, diameter, volume, number, etc. of the lesion, or a combination thereof.
  • the lesion information may be manually determined by the user in a two-dimensional and/or three-dimensional reconstructed image via the input/output component 184 and/or the remote terminal 170, or automatically identified and determined by the lesion determination unit 610 in a two-dimensional and/or three-dimensional reconstructed image through one or more algorithms.
  • the algorithm may include a gray value based region growing method, a threshold based algorithm, and the like.
  • the lesion information can be determined in a two-dimensional and/or three-dimensional reconstructed image using a Computer-Aided Diagnosis System (CAD).
  • the computer aided diagnostic system (CAD) may be integrated in the lesion determination unit 610 or other modules and/or units of the multimodal image processing system 130.
  • the lesion information may be determined by the lesion determination unit 610 by segmenting the reconstructed image by one or more models (eg, a human body structure model, an image pixel point, or a gray value distribution model, etc.).
  • the lesion surrounding information determining unit 620 can determine the lesion surrounding information.
  • the lesion peripheral information may be lesion peripheral blood vessel information, lesion peripheral nerve information, lesion peripheral organ tissue information, or the like, or a combination thereof.
  • the surrounding information of the lesion may include the name, number, branching direction, and blood flow of the blood vessels through which the lesion passes, the number of fibers eroded by the lesion and their connections, the name and volume ratio of the functional areas covered by the lesion, the tissue metabolism information of the organs surrounding the lesion, etc., or a combination thereof.
  • the lesion peripheral information may be peripheral blood vessel information after lesion removal, peripheral nerve information, peripheral organ tissue information, or the like, or a combination thereof.
  • the lesion surrounding information may include the name and/or volume ratio of the remaining peripheral functional areas after the lesion is removed, the name, number, branching direction, and blood flow of the peripheral blood vessels after the lesion is removed, and the number and connection of the surrounding fibers after the lesion is removed.
  • the lesion perimeter information determination unit 620 can determine lesion perimeter information based on one or more algorithms. For example, determining lesion periphery information may be based on region growing algorithms, edge detection, etc., or a combination of multiples.
  • the surgery simulation unit 630 can simulate surgery.
  • the simulated surgical procedure may include surgical protocol design, surgical simulation, simulation result analysis, risk assessment based on simulation results, post-operative analysis, and the like, as shown in FIG. 9.
  • the surgical simulation unit 630 can include a lesion removal sub-unit 631, a damage information determination sub-unit 632, and a removal range optimization sub-unit 633.
  • the lesion removal subunit 631 can determine lesion removal information and/or remove lesions.
  • the lesion removal information may be a lesion removal range, a lesion removal volume, a lesion removal sequence, a lesion removal method, a lesion removal duration, a device used for lesion removal, or other information during lesion removal (for example, whether anesthesia is applied, whether extracorporeal circulation is used, whether tracheal intubation is performed, etc.), etc., or a combination thereof.
  • the lesion removal range can be the lesion itself, or a larger range than the lesion, which encompasses the lesion. The larger range may have the same or similar contour as the lesion, or other contours.
  • the larger range may exceed the lesion area and/or volume by 1%, 3%, 5%, 10%, 50%, or any other percentage.
  • the lesion removal sub-unit 631 can determine the extent of lesion removal based on the lesion surrounding information.
  • the extent of lesion removal may be determined based on peripheral information such as the name, number, branching direction, and blood flow of the blood vessels through which the lesion passes, the number of fibers eroded by the lesion and their connections, the name and volume ratio of the functional areas covered by the lesion, and the tissue metabolism information of the organs surrounding the lesion, so as to avoid or reduce damage to the blood vessels, nerves, and organ tissues surrounding the lesion.
  • the removing the lesion may be removing pixel points (or voxel points) within the determined lesion removal range.
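A minimal sketch of this voxel removal step follows, assuming the removal range is the lesion mask expanded by a fixed voxel margin via a Euclidean distance transform; the margin, fill value, and toy volume are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def removal_range(lesion_mask, margin_voxels=2):
    """Removal range = the lesion plus every voxel within `margin_voxels` of it."""
    dist = ndimage.distance_transform_edt(~lesion_mask)  # distance to the lesion
    return dist <= margin_voxels

def remove_lesion(volume, removal_mask, fill_value=0.0):
    """Simulate excision by blanking all voxels inside the removal range."""
    out = volume.copy()
    out[removal_mask] = fill_value
    return out

vol = np.random.rand(32, 32, 32)           # toy image volume
lesion = np.zeros_like(vol, dtype=bool)
lesion[14:18, 14:18, 14:18] = True         # toy lesion mask
print(remove_lesion(vol, removal_range(lesion)).mean())
```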
  • the lesion removal sub-unit 631 can involve the user in removing the lesion.
  • the lesion removal sub-unit 631 can receive instructions from the remote terminal 170 and/or the input/output component 184, which can be input by the user, and the lesion removal sub-unit 631 can remove the lesion according to the instruction.
  • the user can select the lesion removal range via remote terminal 170 and/or input/output component 184 and remove the lesion to effect manual or semi-automatic removal of the lesion.
  • the lesion removal sub-unit 631 can automatically remove the lesion according to one or more algorithms.
  • the lesion removal range may be determined by a user's manual intervention on a lesion removal range that was automatically determined based on one or more algorithms.
  • the manner in which the lesion is removed may be based on the removal of one or more spatial planes.
  • the lesion removal subunit 631 can remove a lesion based on a two-dimensional plane and/or a three-dimensional space, etc., or a combination thereof.
  • the damage information determination sub-unit 632 can determine the predicted damage information after the lesion is removed.
  • the damage information can be predicted vascular damage information, neurological damage information, and/or organ tissue damage information after removal of the lesion.
  • the damage information may include predictions of whether the affected surrounding vascular structure may cause tissue ischemia or blood stasis after removal of the lesion, whether breakage or loss of peripheral nerve fibers after removal of the lesion may cause dysfunction, whether peripheral organ tissue may be damaged or become dysfunctional after the lesion is removed, etc., or a combination thereof.
  • the damage information determination sub-unit 632 can determine the predicted damage information by comparing the lesion perimeter information with the predicted surrounding information after removal of the lesion. For example, the damage information determination sub-unit 632 can determine, by comparing the number and connection condition of nerve fibers around the lesion before removal with those after removal, whether removal of the lesion causes breakage or reduction of nerve fibers and whether it will cause dysfunction. In some embodiments, the damage information determination sub-unit 632 can determine one or more items of damage information, for example, the number of damaged blood vessels, the number of nerve injuries, the damaged area or volume of the functional areas, and the like.
  • the damage information determination sub-unit 632 can determine the one or more damage information by determining one or more criteria for blood vessels, nerve fibers, and functional areas surrounding the lesion. For example, it may be determined whether the blood vessel is damaged based on the degree of integrity of the blood vessel surrounding the lesion (eg, 90%, 80%, 70%, or other ratio) and blood flow conditions (eg, whether the blood vessel is stenosis or deformity, etc.). For another example, it can be determined whether the nerve fibers are broken based on the number and connection of nerve fibers around the lesion. As another example, whether the functional area is damaged can be determined based on the remaining area or volume of the peripheral functional area of the lesion (eg, 90%, 80%, 70%, or other ratio).
  • the damage information determination sub-unit 632 can determine the combined information of two or more items of damage information. For example, the damage information determination sub-unit 632 may assign different weights to different damage information, thereby determining a weighted value of the two or more types of damage information, and use the weighted value as an evaluation index of the damage information. In some embodiments, the damage information determination sub-unit 632 can predict damage information of the surrounding tissue after the lesion is removed, for use in guiding the surgical plan or simulating the surgical procedure.
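A minimal sketch of such a weighted evaluation index follows; the metric names, normalization to [0, 1], and weights are invented for illustration, not values from this application.

```python
def damage_score(damage, weights):
    """Weighted sum of normalized damage metrics (each assumed in [0, 1])."""
    return sum(weights[k] * damage[k] for k in damage)

damage = {
    "vessels_damaged": 0.10,          # fraction of peripheral vessels affected
    "fibers_broken": 0.25,            # fraction of nerve fibers interrupted
    "functional_volume_lost": 0.05,   # fraction of functional-area volume lost
}
weights = {"vessels_damaged": 0.4, "fibers_broken": 0.4,
           "functional_volume_lost": 0.2}
print(damage_score(damage, weights))  # single evaluation index, lower is better
```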
  • the removal range optimization sub-unit 633 can optimize the extent of lesion removal.
  • the lesion removal range can be optimized based on one or more constraints.
  • the constraints may include avoiding damage to important fibers, important blood vessels, important functional areas, and important organ tissues, or the like, or a combination thereof.
  • the removal range optimization sub-unit 633 can specify that a certain blood vessel or nerve (eg, internal carotid artery, optic nerve, etc.) is not damaged according to the user's request. Then, the removal range optimization sub-unit 633 can avoid the blood vessel or nerve during the optimization of the lesion removal range.
  • the removal range optimization sub-unit 633 may perform one or more rounds of optimization, in each round determining a lesion removal range, determining peripheral information after removal of the lesion, and determining surgical information such as damage information.
  • the removal range optimization sub-unit 633 can optimize the lesion removal range based on one or more criteria. For example, the removal range optimization sub-unit 633 can take damaging as few blood vessels or nerves as possible as a criterion, take damaging the functional areas as little as possible as a criterion, or take minimizing the combined effect of two or more types of damage to the periphery as a criterion. In some embodiments, the removal range optimization sub-unit 633 can determine the optimized lesion removal range based on the predicted damage information.
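Below is a hedged sketch of one such optimization loop: several candidate removal margins are evaluated, candidates touching a protected-structure mask (the hard constraints above) are discarded, and the remaining candidate with the lowest damage score is kept. The margin candidates, masks, and damage function are toy assumptions.

```python
import numpy as np
from scipy import ndimage

def optimize_margin(lesion_mask, protected_mask, damage_fn, margins=(0, 1, 2, 3)):
    """Pick the candidate margin whose removal range minimizes `damage_fn`
    while never intersecting `protected_mask` (a hard constraint)."""
    dist = ndimage.distance_transform_edt(~lesion_mask)
    best_margin, best_score = None, float("inf")
    for m in margins:
        candidate = dist <= m                    # lesion expanded by m voxels
        if (candidate & protected_mask).any():   # would hit a protected vessel/nerve
            continue
        score = damage_fn(candidate)
        if score < best_score:
            best_margin, best_score = m, score
    return best_margin, best_score

lesion = np.zeros((32, 32, 32), dtype=bool)
lesion[14:18, 14:18, 14:18] = True
protected = np.zeros_like(lesion)
protected[10, 10, 10] = True                     # e.g., a vessel that must be spared
# Toy damage function: here simply the number of voxels removed.
print(optimize_margin(lesion, protected, lambda c: float(c.sum())))
```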
  • the removal range optimization sub-unit 633 can optimize the lesion removal range based on a machine learning algorithm in the database module 230. See Figure 8 for a schematic diagram of a database module shown in accordance with some embodiments of the present application.
  • the optimized lesion removal range can be used to guide the user through surgical planning and/or to develop an optimal surgical plan.
  • the analysis module 220 can perform an analysis process.
  • the analysis procedure can include determining a lesion 710, determining lesion perimeter information 720, determining a lesion removal range 730, removing the lesion 740 based on the lesion removal range, determining peripheral information 750 after removal of the lesion, determining damage information 760 after removal of the lesion, and optimizing the lesion removal range 770 based on the damage information.
  • the lesion can be determined based on the multimodal image. In some embodiments, the lesion can be determined based on the reconstructed image produced by 560. In some embodiments, the lesion determination unit 610 can perform operation 710. In some embodiments, the lesion can be determined by automated, semi-automatic, or manual means. For example, the user may manually outline the position, shape, diameter, volume, number, etc. of the lesion, or a combination of multiple, in the two-dimensional and/or three-dimensional reconstructed image through the input/output component 184, or the remote terminal 170. As another example, the lesion determining unit 610 can automatically identify the location, shape, diameter, volume, number, etc. of the lesion, or a combination of multiples, in the two-dimensional and/or three-dimensional reconstructed image by one or more algorithms. For another example, the user can make changes, or adjustments to the automatically identified lesions.
  • lesion surrounding information can be determined.
  • the lesion perimeter information determination unit 620 can perform operation 720.
  • determining the lesion periphery information may include determining lesion peripheral blood vessel information, lesion peripheral nerve information, lesion peripheral organ tissue information, etc., or a combination thereof, based on information such as the lesion location, shape, diameter, volume, and number determined at 710.
  • the lesion peripheral information may be the name, number, branching direction, and blood flow of the blood vessels through which the lesion passes, the number of fibers eroded by the lesion and their connections, the name and volume ratio of the functional areas covered by the lesion, the tissue metabolism information of the organs surrounding the lesion, or a combination thereof.
  • the extent of lesion removal can be determined.
  • the lesion removal sub-unit 631 can perform operation 730.
  • the extent of lesion removal can be determined by expanding the edge of the lesion.
  • the lesion removal range can be the lesion itself, or a larger range than the lesion, which encompasses the lesion.
  • the surrounding information of the lesion determined at 720 may be used as a reference in determining the extent of lesion removal, so as to avoid or reduce damage to the blood vessels, nerves, or organ tissues surrounding the lesion.
  • the lesion can be removed based on the lesion removal range determined at 730.
  • the lesion removal sub-unit 631 can perform operation 740.
  • the removal of the lesion may be performed manually by the user via the input/output component 184 or the remote terminal 170, or performed automatically by the lesion removal sub-unit 631 according to one or more algorithms.
  • the manner in which the lesion is removed may be based on the removal of one or more spatial planes.
  • peripheral information after removal of the lesion at 740 can be determined.
  • the lesion perimeter information determination unit 620 can perform operation 750.
  • the peripheral information after the removal of the lesion may be peripheral blood vessel information after removal of the lesion, peripheral nerve information, peripheral organ tissue information, or the like, or a combination thereof.
  • the damage information after removal of the lesion at 740 can be determined.
  • the damage information determination sub-unit 632 can perform operation 760.
  • the damage information may be determined by comparing the surrounding information before and after the lesion (ie, comparing the information determined in 750 with the information determined in 720).
  • the vessel names, numbers, branching trends, blood flow conditions, etc. before and after lesion removal can be comparatively analyzed to determine whether removal of the lesion has caused damage to the vascular structure surrounding the lesion, and/or the degree of such damage.
  • metabolic information of peripheral organ tissue before and after lesion removal can be comparatively analyzed to determine whether removal of the lesion has caused damage to the organ tissue surrounding the lesion, and/or the degree of such damage.
  • the lesion removal range can be optimized based on the damage information determined at 760.
  • the removal range optimization sub-unit 633 can perform operation 770.
  • the lesion removal range determined at 730 can be optimized based on the damage information determined at 760. For example, portions of the removal range that cause severe damage to the surrounding tissue may be excluded from the range, and portions that cause little damage to the surrounding tissue may be expanded.
  • optimizing the lesion removal range may include performing one or more iterations of 730, 740, 750, and 760, and determining a superior lesion removal range by comparative analysis of the damage information determined in the multiple iterations, so as to assist in developing surgical plans, guide actual surgical procedures, and the like.
  • the optimized lesion removal range can be an optimal or better removal result determined by the analysis.
  • 750 and 760 can be performed simultaneously, or combined into a single operation.
  • one or more operations may be added to, or deleted from, the process. For example, after 760, an operation of comparing the damage information with a threshold can be added.
  • the damage information determination sub-unit 632 or the removal range optimization sub-unit 633 can perform the operation.
  • an information storage operation may be added before or after any operation between 710 and 770. The information may be stored in a storage device (for example, hard disk 182, read only memory (ROM) 183, random access memory (RAM) 185, cloud storage, etc.) or in database module 230.
  • FIG. 8 is a schematic diagram of a database module 230, shown in accordance with some embodiments of the present application.
  • the database module 230 can include an information storage unit 810, an information retrieval unit 820, and a machine learning unit 830.
  • the units shown can be connected directly and/or indirectly to each other.
  • the database module 230 shown in FIG. 8 is merely a representative of some embodiments of the present application.
  • modifications, additions, and deletions may be made according to the description of the database module without any creative work.
  • two of the units may be combined into one unit, or one of the units may be divided into two or more units.
  • the information storage unit 810 can store information.
  • Information storage unit 810 can include one or more databases, such as a general purpose database, or a dedicated database, etc., or a combination of multiples.
  • the general-purpose database may be a Microsoft Office Access database, a MySQL database, an Oracle database, a SQL Server database, a Sybase database, a Visual FoxPro (VF) database, a DB2 database, etc., or a combination thereof.
  • the dedicated database may be a database developed to store a certain type of information, such as the Alibaba Cloud database (ApsaraDB for RDS).
  • the information stored by the information storage unit 810 may be basic information of a patient corresponding to a multimodal image, case information of a target object displayed on the multimodal image, or other related information, or the like, or a combination thereof.
  • the basic information may include the patient's name, gender, age, medical history, biochemical examination information, etc., or a combination of multiples.
  • the case information may include images, image examination results, system analysis results, surgical plans, post-operative recovery information, etc., or a combination of multiples.
  • the related information may include a generation time of the multimodal image, a generation time of the multimodal image inspection result, a system analysis time of the multimodal image, and a surgical operation of the target object in the multimodal image. Time, etc., or a combination of multiples.
  • information storage unit 810 can receive data and/or instructions from the processor 181, storage devices (e.g., hard disk 182, read only memory (ROM) 183, random access memory (RAM) 185, cloud storage, etc.), the input/output component 184, the remote terminal 170, or other modules or units in the multimodal image processing system 130, to store, modify, or delete information.
  • the information retrieval unit 820 can retrieve information.
  • information retrieval unit 820 can retrieve information stored in information storage unit 810.
  • the retrieved information may be case information of a target object displayed on the multimodal image or basic information corresponding to the patient of the multimodal image.
  • information retrieval unit 820 can retrieve the same information in one or more ways.
  • the information retrieval unit 820 can perform keyword retrieval based on one or more items of basic information of the patient corresponding to the multimodal image; the result of the retrieval may include basic information of the patient and/or case information of the target object displayed on the multimodal image.
  • the information retrieval unit 820 can perform keyword retrieval based on one or more items of case information of the target object displayed on the multimodal image; the result of the retrieval may include case information of the target object displayed on the multimodal image and/or basic information of the patient corresponding to the multimodal image.
  • the information retrieval unit 820 can retrieve case information based on the basic information or retrieve the basic information based on the case information.
  • the manner in which the information is retrieved may be that the user manually retrieves the keywords via the input/output component 184 or the remote terminal 170.
  • information retrieval unit 820 can provide an intelligent retrieval function to retrieve case information similar to a multimodal image.
  • the similar case information may include similar patient medical history, similar lesion location, similar image examination results, similar image modalities, similar surgical plans, and the like, or a combination thereof.
  • multimodal image processing system 130 may design or improve a corresponding surgical protocol based on similar cases retrieved by information retrieval unit 820.
  • information retrieval unit 820 can display the results of the retrieval on input/output component 184, or remote terminal 170, or via network 160 for further analysis by one or more users.
  • information retrieval unit 820 can perform information retrieval based on one or more algorithms. In some embodiments, information retrieval unit 820 can retrieve information based on one or more indexes to increase the efficiency of the retrieval. For example, the information retrieval unit 820 can perform retrieval based on a character index and/or a word index.
  • the word index may be a search algorithm that uses words as the index unit. The difficulty in computing a word index lies in the word segmentation algorithm, so techniques such as artificial intelligence analysis and context judgment may be added to improve the accuracy of the word index.
  • the character index may be a search algorithm in which individual Chinese characters are used as the index unit.
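A minimal inverted-index sketch of such keyword retrieval over stored case records follows; the record fields and whitespace tokenization are illustrative assumptions (indexing Chinese text by words would additionally require the segmentation step noted above).

```python
from collections import defaultdict

# Toy case records; real records would hold patient and case information.
records = {
    1: "glioma left frontal lobe MRI-T1 fMRI-BOLD surgical removal",
    2: "meningioma right parietal lobe CT-PET radiotherapy",
    3: "glioma left temporal lobe DTI fiber tracking surgical removal",
}

# Build an inverted index: token -> ids of records containing it.
index = defaultdict(set)
for rid, text in records.items():
    for token in text.lower().split():
        index[token].add(rid)

def search(*keywords):
    """Return ids of records containing all keywords (AND semantics)."""
    sets = [index.get(k.lower(), set()) for k in keywords]
    return set.intersection(*sets) if sets else set()

print(search("glioma", "removal"))  # -> {1, 3}
```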
  • the machine learning unit 830 can perform machine learning based on information stored by the information storage unit 810, for example, based on the basic information of the patient corresponding to the multimodal image, the case information of the target object displayed on the multimodal image, the related information of the multimodal image, or the like, using learning algorithms that acquire new data or knowledge from one or more items of data.
  • the machine learning algorithm may be a decision tree algorithm, a K-means algorithm, a Support Vector Machine (SVM) algorithm, a maximum expectation algorithm, an AdaBoost algorithm, an association rule (Apriori) algorithm, a K-Nearest Neighbor (KNN) algorithm, a naive Bayesian algorithm, a neural network algorithm, a classification and regression tree algorithm, etc., or a combination thereof.
  • machine learning unit 830 can learn using one or more of the above machine learning algorithms based on case information of a target object displayed on one or more multimodal images.
  • the machine learning unit 830 can optimize one or more algorithmic functions in the multimodal image processing and/or analysis process through one or more rounds of learning, for example, the method of determining damage information after lesion removal, the algorithm for determining the lesion removal range, etc., or a combination thereof.
  • the machine learning unit 830 can optimize the determination of the lesion removal range based on the surgical plans of a plurality of patients and their postoperative recovery information, combined with the surrounding information of the lesion on the multimodal image, the damage information after lesion removal, and the like.
  • the machine learning unit 830 can learn from the case information of a plurality of patient samples having brain tumors at the same or similar locations, and thereby optimize or improve the determination algorithm of the brain tumor removal range.
  • multimodal image processing system 130 can achieve, approach, or exceed the diagnostic and/or therapeutic level of the expert physician.
  • machine learning unit 830 can optimize and/or attenuate individual differences in multimodal image processing results based on nearest neighbor (KNN) algorithms to avoid or reduce damage to the patient during actual surgical procedures.
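To make the nearest-neighbor idea concrete, here is a minimal KNN sketch that finds the k stored cases whose feature vectors are closest to the current patient and averages their recorded removal margins; the features, distance metric, and all data are invented for illustration.

```python
import numpy as np

def knn_suggest_margin(case_features, case_margins, query, k=3):
    """Average the removal margins of the k most similar historical cases."""
    d = np.linalg.norm(case_features - query, axis=1)  # Euclidean distances
    nearest = np.argsort(d)[:k]                        # indices of k neighbors
    return float(np.mean(case_margins[nearest])), nearest

# Toy features: (lesion volume cm^3, distance to nearest major vessel mm, age).
features = np.array([[12.0, 4.0, 55], [30.0, 1.5, 62], [11.0, 5.0, 48],
                     [28.0, 2.0, 70], [13.0, 3.5, 51]], dtype=float)
margins = np.array([2.0, 0.5, 2.5, 1.0, 2.0])          # mm, from past plans
print(knn_suggest_margin(features, margins, np.array([12.5, 4.2, 53.0])))
```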
  • the multimodal image processing system 130 can acquire multimodal brain image data from the database 910, obtain a visualized brain image based on the data, and analyze the data to generate an analysis report 945.
  • Database 910 can be a database in database module 230, a database of local storage devices (eg, hard disk 182, read only memory (ROM) 183, random access memory (RAM) 185, etc.), or an external data source (eg, , a cloud database, etc.) in a remote database.
  • the brain image data acquired by the multimodal image processing system 130 from the database 910 may include still image data, video image data, two-dimensional image data, three-dimensional image data, and the like, or a combination thereof, for example, multimodal image data such as MRI-T1 image data 921, DTI/DTT image data 922, CT/PET image data 923, MRI TOF image data 924, and fMRI-BOLD image data 925.
  • the acquired brain image data may further include ultrasound contrast data (such as a B-mode ultrasound image), CT, SPECT, MEG, TMS-MRI, MRI-T2 data, CT-PET data, CT-SPECT data, etc., or a combination thereof.
  • the data acquired by the multimodal image processing system 130 from the database 910 may also include standard brain map data 926.
  • multimodal image processing system 130 may also acquire brain image data from other devices.
  • multimodal image processing system 130 can acquire MRI-T1 image data 921 directly from one MRI imaging device and acquire MRI TOF image data 924 directly from another MRI imaging device.
  • the MRI-T1 image data 921, DTI/DTT image data 922, CT/PET image data 923, MRI TOF image data 924, fMRI-BOLD image data 925, and/or standard brain map data 926 may be acquired from the database 910 simultaneously or at different times.
  • the MRI-T1 image data 921, DTI/DTT image data 922, CT/PET image data 923, MRI TOF image data 924, and fMRI-BOLD image data 925 may be retrieved first from the database 910, and the standard brain map data 926 may be acquired from the database 910 while the image data is being processed or analyzed.
  • the visualization module 210 can register the MRI-T1 image data 921, DTI/DTT image data 922, CT/PET image data 923, MRI TOF image data 924, and fMRI-BOLD image data 925 with the standard brain map data 926, and obtain the registration results.
  • the MRI-T1 image data 921 can be used as a reference image, to which the DTI/DTT image data 922, the CT/PET image data 923, and the MRI TOF image data 924 can each be registered.
  • standard brain map data 926 can be used as a reference image to which fMRI-BOLD image data 925 is registered.
  • the image obtained by registering the fMRI-BOLD image data 925 with the standard brain map data 926 can then be registered again with the MRI-T1 image data 921.
  • the registration techniques employed by image registration operation 930 may include point methods (e.g., anatomical landmark methods), curve methods, surface methods (e.g., surface contour methods), moment and principal axis methods (e.g., spatial coordinate alignment), cross-correlation methods, mutual information methods, the sequential similarity detection algorithm, mapping methods, nonlinear variation methods, etc., or a combination thereof.
  • image data can be registered using anatomical information of the brain, such as central sulcus position information.
  • the visualization module 210 can visualize the acquired brain image data to produce a visualized image.
  • the visualization module 210 can perform a fusion operation 951 on the results of the registration after the registration operation 930. For example, a fusion operation 951 can be performed on images based on the MRI-T1 image data 921, DTI/DTT image data 922, CT/PET image data 923, MRI TOF image data 924, fMRI-BOLD image data 925, and standard brain map data 926.
  • the fusion operation 951 can perform image fusion using one or more of the fusion algorithms described above.
  • the visualization module 210 can further perform a multi-planar reconstruction/volume rendering visualization operation 952 on the image to produce a reconstructed image.
  • the multi-planar reconstruction/volume rendering visualization operation 952 can reconstruct images using one or more of the reconstruction algorithms described above, for example, contour reconstruction, voxel reconstruction, volume rendering (VR), multi-planar reconstruction (MPR), curved planar reconstruction (CPR), maximum intensity projection (MIP), surface shaded display (SSD), etc., or a combination thereof.
  • Volume rendering techniques can include ray casting volume rendering, cell projection, fast volume rendering, splatting volume rendering, Fourier volume rendering, shear-warp volume rendering, and the like.
  • the reconstructed image after the volume rendering process can display multimodal image information, to facilitate diagnosis and/or treatment of the disease by a user (e.g., a medical worker).
  • the maximum intensity projection method may, based on the overlapping images corresponding to the three-dimensional image, retain the densest pixels in the image and project them onto a two-dimensional plane such as the coronal, sagittal, or axial plane, thereby forming a MIP reconstructed image. For example, with MIP, two-dimensional projection images and pixel maps can be generated based on one or more three-dimensional images.
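A minimal sketch of that projection follows: the densest voxel along one axis is kept to collapse a three-dimensional volume onto a two-dimensional plane. The random volume and the mapping of array axes to anatomical planes are illustrative assumptions.

```python
import numpy as np

volume = np.random.rand(64, 128, 128)   # toy volume: (slices, height, width)

mip_axial = volume.max(axis=0)          # project along slices -> axial MIP
mip_coronal = volume.max(axis=1)        # project along height -> coronal MIP
mip_sagittal = volume.max(axis=2)       # project along width  -> sagittal MIP

print(mip_axial.shape, mip_coronal.shape, mip_sagittal.shape)
```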
  • the multi-planar reconstruction/volume rendering visualization operation 952 can obtain one or more reconstructed images, including some or all of the information of the MRI-T1 image data 921, DTI/DTT image data 922, CT/PET image data 923, MRI TOF image data 924, and/or fMRI-BOLD image data 925.
  • the reconstructed image and/or the volume-rendered image may be displayed on the input/output component 184, or stored in the database module 230, a storage device (e.g., hard disk 182, read only memory (ROM) 183, random access memory (RAM) 185, cloud storage, etc.), or the remote terminal 170, or the like.
  • the analysis module 220 may analyze the MRI-T1 image data 921, DTI/DTT image data 922, CT/PET image data 923, MRI TOF image data 924, fMRI-BOLD image data 925, standard brain map data 926, the results of operation 930, the results of operation 951, and/or the results of operation 952, to generate an analysis report 945.
  • analysis module 220 can simulate the removal effect of brain tumors.
  • the analysis module 220 can simulate a manual removal operation 941 of a tumor. The user can manually outline the extent of the tumor in the brain image based on the results of operation 930 or operation 952 and remove image data within the range.
  • the user can use the input/output component 184 to effect manual removal of the tumor.
  • the user can set one or more parameters and enter one or more control signals to control the extent of tumor removal.
  • the analysis module 220 can remove the image data in the range according to the input information of the user to implement manual removal.
  • the user can transmit control signals to the analysis module 220 through the brain-computer interface to achieve manual removal of the tumor within a certain range.
  • the analysis module 220 can simulate an intelligent removal operation 942 of the tumor.
  • In the intelligent removal operation 942, the analysis module 220 can determine the tumor range and peripheral information of the tumor (e.g., vessel names, numbers, branching directions, and blood flow conditions; the number of fibers eroded by the lesion and their connectivity; the names and volume ratios of the functional areas covered by the lesion; tissue metabolism information of organs around the lesion, etc.).
  • the analysis module 220 can further analyze the peripheral information and the damage information after the tumor range is removed (operation 944), optimize the tumor range according to the damage information, intelligently select an appropriate tumor range and an order and manner of tumor removal, and remove the image data within that range.
  • the intelligent removal of the tumor may refer to the analysis module 220 improving or optimizing the algorithm for determining the tumor range through one or more rounds of learning, so as to intelligently determine the tumor range and the order and manner of tumor removal.
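One way such an optimize-by-damage loop could be sketched is to score each candidate removal range by a weighted damage cost and keep the lowest-scoring one; the weights, field names, and scoring rule below are illustrative assumptions, not the patented optimization.

```python
def damage_cost(damage, weights=None):
    # Combine per-structure damage estimates into one weighted score.
    if weights is None:
        weights = {"vessels": 0.5, "fibers": 0.3, "functional": 0.2}
    return sum(weights[k] * damage[k] for k in weights)

def select_removal_range(candidates, estimate_damage):
    # Pick the candidate removal mask whose simulated removal causes the
    # smallest weighted damage (the feedback from operation 944 into 942).
    # `estimate_damage(mask)` returns a dict such as
    # {"vessels": 0.1, "fibers": 0.25, "functional": 0.05}.
    return min(candidates, key=lambda mask: damage_cost(estimate_damage(mask)))
```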
  • the analysis module 220 can simulate a total removal operation 943 of the tumor. In the total removal operation 943, the analysis module 220 can implement an extended tumor removal based on the manually and/or intelligently determined tumor range.
  • the extended tumor removal range can be a range extending 2 cm, 5 cm, or another distance beyond the tumor boundary, to avoid subsequent spread of tumor cells or tumor recurrence.
  • the analysis module 220 can further analyze the peripheral information and the damage information after removal of the extended tumor range (operation 944), as one embodiment of a tumor removal range for reference by, for example, a physician.
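A sketch of deriving such an extended range from a tumor mask: grow the mask outward by a fixed physical margin using a Euclidean distance transform that honors voxel spacing; the margin, spacing, and use of scipy's `distance_transform_edt` are assumptions for illustration rather than the disclosed method.

```python
import numpy as np
from scipy import ndimage

def expand_margin(tumor_mask, margin_mm, spacing_mm=(1.0, 1.0, 1.0)):
    # Distance (in mm) from each background voxel to the tumor boundary;
    # voxels within `margin_mm` of the tumor join the extended range.
    dist = ndimage.distance_transform_edt(~tumor_mask, sampling=spacing_mm)
    return tumor_mask | (dist <= margin_mm)

mask = np.zeros((100, 100, 100), dtype=bool)
mask[45:55, 45:55, 45:55] = True
extended = expand_margin(mask, margin_mm=20.0, spacing_mm=(2.0, 1.0, 1.0))  # 2 cm margin
```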
  • the manual removal operation 941 and the intelligent removal operation 942 may belong to a non-extended tumor removal, which is based on the peripheral information and damage information before and after tumor removal, to avoid or reduce damage to the surrounding tissue after the tumor is removed from the image; the total removal operation 943 may belong to an extended tumor removal, to avoid subsequent spread of tumor cells or tumor recurrence.
  • the analysis module 220 can perform an analysis operation 944 after tumor removal.
  • Analytical operation 944 after tumor removal can include evaluating the effects of tumor removal, determining damage information to surrounding tissue after tumor removal, and the like.
  • the analysis module 220 can determine new tumor removal information based on the analysis results of the previous tumor removal to guide or optimize the next manual removal operation 941, intelligent removal operation 942, or total removal operation 943.
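The before/after comparison in operation 944 can be pictured as differencing summarized peripheral information; the record fields below are illustrative assumptions.

```python
def damage_report(before, after):
    # Compare peripheral structure summaries before and after simulated
    # removal and report the absolute and fractional loss per structure.
    report = {}
    for key, prior in before.items():
        remaining = after.get(key, 0)
        lost = prior - remaining
        report[key] = {
            "before": prior,
            "after": remaining,
            "lost": lost,
            "lost_fraction": lost / prior if prior else 0.0,
        }
    return report

peripheral_before = {"vessels": 12, "fibers": 340, "functional_volume_cc": 58.0}
peripheral_after = {"vessels": 11, "fibers": 322, "functional_volume_cc": 55.5}
print(damage_report(peripheral_before, peripheral_after))
```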
  • Analysis module 220 may generate an analysis report 945 after analysis operation 944 after tumor removal.
  • the analysis report 945 can include information about the tumor, peripheral information of the tumor, removal information of the tumor (e.g., the removal range, time, and manner of the tumor removal), peripheral information after tumor removal (e.g., peripheral damage information after tumor removal), optimized tumor removal information (e.g., the optimized tumor removal range, time, and manner), and/or any other information generated during the analysis.
  • the analysis report 945 may also include a single-modality image, a multi-modality image, a registered image, a fused image, a standard brain atlas 926, a reconstructed image, and/or a volume-rendered image, etc., or a combination thereof.
  • the analysis report 945 can also include information retrieved in the database module 230 based on information acquired and/or generated in the multimodal image processing system 130.
  • the analysis report 945 may include basic information about similar cases (e.g., name, gender, age, medical history, laboratory examination information, etc.), surgical information, post-operative information (e.g., postoperative recovery), imaging information, pathology information, etc.
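As a toy sketch of retrieving similar cases, stored case records can be matched on shared attribute values; the record schema and similarity rule are assumptions for illustration, standing in for the database module's actual keyword or index-based retrieval.

```python
def find_similar_cases(query, cases, min_shared=2):
    # Return stored cases sharing at least `min_shared` attribute values
    # with the query case -- a crude stand-in for database retrieval.
    def shared(case):
        return sum(1 for k, v in query.items() if case.get(k) == v)
    return [case for case in cases if shared(case) >= min_shared]

case_db = [
    {"lesion_site": "frontal lobe", "modalities": "T1+BOLD+PET", "age_band": "50s"},
    {"lesion_site": "temporal lobe", "modalities": "T1+DTI", "age_band": "50s"},
]
query = {"lesion_site": "frontal lobe", "modalities": "T1+BOLD+PET", "age_band": "60s"}
similar = find_similar_cases(query, case_db)  # matches the first record only
```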
  • the analysis report 945 can be stored in a storage device (e.g., hard disk 182, read-only memory (ROM) 183, random access memory (RAM) 185, cloud storage, etc.), the database module 230, the remote terminal 170, or the like.
  • The basic concepts have been described above. It is apparent to those skilled in the art that the above disclosure of the invention is merely an example and does not constitute a limitation of the present application. Although not explicitly stated here, those skilled in the art may make various modifications, improvements, and amendments to the present application. Such modifications, improvements, and amendments are suggested in this application, and thus remain within the spirit and scope of the exemplary embodiments of the present application.
  • Meanwhile, the present application uses specific words to describe embodiments of the present application.
  • "One embodiment," "an embodiment," and/or "some embodiments" mean a certain feature, structure, or characteristic associated with at least one embodiment of the present application. Therefore, it should be emphasized and noted that "an embodiment," "one embodiment," or "an alternative embodiment" mentioned two or more times in different places in this specification does not necessarily refer to the same embodiment. Furthermore, certain features, structures, or characteristics of one or more embodiments of the present application may be combined as appropriate.
  • aspects of the present application can be illustrated and described through a number of patentable categories or situations, including any new and useful process, machine, product, or composition of matter, or any new and useful improvement thereof. Accordingly, various aspects of the present application can be performed entirely by hardware, entirely by software (including firmware, resident software, microcode, etc.), or by a combination of hardware and software.
  • the above hardware or software may be referred to as a "data block,” “module,” “unit,” “component,” or “system.”
  • aspects of the present application may be embodied in a computer product located in one or more computer readable medium(s) including a computer readable program code.
  • a computer-readable signal medium may contain a propagated data signal containing computer program code, for example, in baseband or as part of a carrier wave.
  • the propagated signal may have a variety of manifestations, including electromagnetic forms, optical forms, and the like, or a suitable combination thereof.
  • the computer-readable signal medium may be any computer-readable medium, other than a computer-readable storage medium, that can communicate, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code located on a computer-readable signal medium can be propagated through any suitable medium, including radio, cable, fiber-optic cable, RF, or similar media, or any combination of the above.
  • the computer program code required for the operation of various parts of the application can be written in any one or more programming languages, including object-oriented programming languages such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, and Python; conventional procedural programming languages such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, and ABAP; dynamic programming languages such as Python, Ruby, and Groovy; or other programming languages.
  • the program code can run entirely on the user's computer, as a stand-alone software package on the user's computer, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
  • In the latter case, the remote computer can be connected to the user's computer through any form of network, for example, a local area network (LAN) or a wide area network (WAN), or connected to an external computer (for example, via the Internet), or in a cloud computing environment, or used as a service, such as software as a service (SaaS).
  • Furthermore, unless explicitly stated in the claims, the order of the processing elements and sequences described in the present application, the use of alphanumeric labels, or the use of other names is not intended to limit the order of the processes and methods of the present application. Although the above disclosure discusses, through various examples, some embodiments of the invention currently considered useful, it should be understood that such details are for illustrative purposes only and that the appended claims are not limited to the disclosed embodiments; on the contrary, the claims are intended to cover all modifications and equivalent combinations that conform to the spirit and scope of the embodiments of the present application. For example, although the system components described above can be implemented by hardware devices, they can also be implemented by a software-only solution, such as installing the described system on an existing server or mobile device.
  • Similarly, it should be noted that, in order to simplify the presentation of the disclosure and thereby aid the understanding of one or more embodiments of the invention, the foregoing description of the embodiments sometimes groups multiple features into a single embodiment, figure, or description thereof. This method of disclosure, however, does not imply that the subject matter of the present application requires more features than are recited in the claims. Indeed, the features of an embodiment may be fewer than all the features of a single embodiment disclosed above.
  • Some embodiments use numbers describing quantities of components and attributes. It should be understood that such numbers used in the description of embodiments are, in some examples, modified by the words "about," "approximately," or "substantially." Unless otherwise stated, "about," "approximately," or "substantially" indicates that the number is allowed to vary by ±20%. Accordingly, in some embodiments, the numerical parameters used in the specification and claims are approximations that may change depending on the desired characteristics of individual embodiments. In some embodiments, numerical parameters should take into account the specified significant digits and adopt a general method of retaining digits. Although the numerical ranges and parameters used to confirm the breadth of their scope in some embodiments of the present application are approximations, in specific embodiments such values are set as precisely as practicable.
  • Each patent, patent application, patent application publication, and other material, such as articles, books, specifications, publications, documents, and objects, cited in the present application is hereby incorporated by reference in its entirety. Application history documents that are inconsistent with or conflict with the contents of the present application are excluded, as are documents (currently or later appended to the present application) that limit the broadest scope of the claims of the present application. It should be noted that if there is any inconsistency or conflict between the descriptions, definitions, and/or use of terms in the materials accompanying the present application and the contents described herein, the descriptions, definitions, and/or use of terms in the present application shall prevail.
  • Finally, it should be understood that the embodiments described in the present application are merely illustrative of the principles of the embodiments of the present application. Other variations may also fall within the scope of the present application. Accordingly, by way of example and not limitation, alternative configurations of the embodiments of the present application may be considered consistent with the teachings of the present application. Accordingly, the embodiments of the present application are not limited to the embodiments explicitly introduced and described in the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Epidemiology (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Quality & Reliability (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Surgery (AREA)
  • Urology & Nephrology (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

A multi-modality image processing method and system are disclosed. The method may include: acquiring multi-modality images; registering the multi-modality images; fusing the multi-modality images; generating a reconstructed image according to the fusion result of the multi-modality images; and determining a lesion removal range according to the reconstructed image. The multi-modality images may include images of at least three modalities, and may include a lesion.


Claims (21)

  1. A multi-modality image processing method, implemented on at least one machine, each of the at least one machine having at least one processor and a storage, the method comprising:
    acquiring multi-modality images, the multi-modality images including images of at least three modalities, and the multi-modality images including a lesion;
    registering the multi-modality images;
    fusing the multi-modality images;
    generating a reconstructed image according to a fusion result of the multi-modality images; and
    determining a lesion removal range according to the reconstructed image.
  2. The method of claim 1, further comprising:
    displaying image information according to the multi-modality images or the reconstructed image.
  3. The method of claim 2, further comprising:
    acquiring a standard atlas, the standard atlas including standard image data related to a body part of a target object; and
    registering the multi-modality images according to the standard atlas.
  4. The method of claim 3, wherein the multi-modality images include multi-modality images of a brain, and the standard atlas includes a standard brain atlas.
  5. The method of claim 4, wherein the displaying image information includes:
    displaying information of cerebral vessels, nerve fibers, brain functional areas, or brain tissue metabolic rates.
  6. The method of claim 4, wherein the multi-modality images further include an MRI T1 image, a BOLD image, and a first image, the first image including one of a DTI/DTT image, a CT/PET image, and an MRI TOF image.
  7. The method of claim 6, wherein the registering the multi-modality images includes:
    registering the BOLD image according to the standard atlas to obtain a second image;
    registering the first image according to the MRI T1 image to obtain a third image; and
    registering the second image and the third image according to the MRI T1 image.
  8. The method of claim 1, wherein the generating a reconstructed image includes:
    segmenting the fusion result of the multi-modality images; and
    generating the reconstructed image based on the segmented multi-modality images using a reconstruction method, the reconstruction method including multi-planar reconstruction or a volume rendering technique.
  9. The method of claim 1, wherein the determining a lesion removal range includes:
    determining a range of the lesion according to the reconstructed image;
    determining first peripheral information of the lesion according to the range of the lesion, the first peripheral information including vessel information, nerve information, or other tissue and organ information around the lesion; and
    determining the lesion removal range according to the first peripheral information.
  10. The method of claim 9, further comprising:
    simulating removal of the lesion according to the lesion removal range.
  11. The method of claim 9, wherein the determining a lesion removal range further includes:
    determining second peripheral information after the lesion is removed;
    determining, according to the first peripheral information and the second peripheral information, damage information of surrounding tissues and organs after the lesion is removed; and
    optimizing the lesion removal range according to the damage information.
  12. The method of claim 11, further comprising:
    determining a surgical plan according to the lesion removal range.
  13. The method of claim 11, wherein the lesion includes a tumor of the brain, and the peripheral information of the lesion further includes names of vessels passing through the lesion, blood flow conditions of the vessels, a number of brain fibers eroded by the lesion, connectivity of the brain fibers, or names of brain functional areas covered by the lesion.
  14. The method of claim 13, wherein the damage information includes damage information of the vessels, damage information of the brain fibers, or damage information of the brain functional areas after the lesion is removed.
  15. The method of claim 12, further comprising:
    saving case information related to the lesion, the case information including the multi-modality images, the reconstructed image, the range of the lesion, the optimized lesion range, the first peripheral information, the second peripheral information, the damage information, information related to the lesion, information related to the surgical plan, or information related to postoperative recovery.
  16. The method of claim 15, further comprising:
    retrieving similar cases according to the case information.
  17. The method of claim 16, wherein the saving case information related to the lesion includes saving the case information related to the lesion in a database; and the retrieving similar cases includes retrieving similar cases in the database.
  18. The method of claim 17, further comprising:
    performing machine learning according to information in the database to optimize the lesion removal range.
  19. A non-transitory computer-readable medium, including executable instructions that, when executed by at least one processor, cause the at least one processor to implement a method comprising:
    acquiring multi-modality images, the multi-modality images including images of at least three modalities, and the multi-modality images including a lesion;
    registering the multi-modality images;
    fusing the multi-modality images;
    generating a reconstructed image according to a fusion result of the multi-modality images; and
    determining a lesion removal range according to the reconstructed image.
  20. A system, comprising:
    at least one processor; and
    information that, when executed by the at least one processor, causes the at least one processor to:
    acquire multi-modality images, the multi-modality images including images of at least three modalities, and the multi-modality images including a lesion;
    register the multi-modality images;
    fuse the multi-modality images;
    generate a reconstructed image according to a fusion result of the multi-modality images; and
    determine a lesion removal range according to the reconstructed image.
  21. The system of claim 20, further comprising the non-transitory computer-readable medium of claim 19.
PCT/CN2016/112689 2016-12-28 2016-12-28 Multi-modality image processing system and method WO2018119766A1 (zh)

Priority Applications (5)

Application Number Priority Date Filing Date Title
PCT/CN2016/112689 WO2018119766A1 (zh) 2016-12-28 2016-12-28 Multi-modality image processing system and method
EP16925011.5A EP3547252A4 (en) 2016-12-28 2016-12-28 SYSTEM AND METHOD FOR PROCESSING MULTI-MODAL IMAGES
US16/236,596 US11037309B2 (en) 2016-12-28 2018-12-30 Method and system for processing multi-modality image
US17/347,531 US11869202B2 (en) 2016-12-28 2021-06-14 Method and system for processing multi-modality image
US18/407,390 US20240144495A1 (en) 2016-12-28 2024-01-08 Method and system for processing multi-modality image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2016/112689 WO2018119766A1 (zh) 2016-12-28 2016-12-28 Multi-modality image processing system and method

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/236,596 Continuation US11037309B2 (en) 2016-12-28 2018-12-30 Method and system for processing multi-modality image

Publications (1)

Publication Number Publication Date
WO2018119766A1 true WO2018119766A1 (zh) 2018-07-05

Family

ID=62706658

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/112689 WO2018119766A1 (zh) 2016-12-28 2016-12-28 多模态图像处理系统及方法

Country Status (3)

Country Link
US (3) US11037309B2 (zh)
EP (1) EP3547252A4 (zh)
WO (1) WO2018119766A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3699920A1 (en) * 2019-02-21 2020-08-26 SignalPET, LLC Methods and apparatus for the application of machine learning to radiographic images of animals
WO2021081759A1 (zh) * 2019-10-29 2021-05-06 中国科学院深圳先进技术研究院 一种协同成像方法、装置、存储介质和协同成像设备
US11545267B2 (en) 2020-08-04 2023-01-03 Signalpet, Llc Methods and apparatus for the application of reinforcement learning to animal medical diagnostics

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3547252A4 (en) * 2016-12-28 2019-12-04 Shanghai United Imaging Healthcare Co., Ltd. SYSTEM AND METHOD FOR PROCESSING MULTI-MODAL IMAGES
EP3373247A1 (en) * 2017-03-09 2018-09-12 Koninklijke Philips N.V. Image segmentation and prediction of segmentation
US11854281B2 (en) 2019-08-16 2023-12-26 The Research Foundation For The State University Of New York System, method, and computer-accessible medium for processing brain images and extracting neuronal structures
KR20220100851A (ko) * 2019-08-20 2022-07-18 테란 바이오사이언시스 인코포레이티드 파킨슨병을 평가하기 위한 뉴로멜라닌 민감성 mri
CN112420202A (zh) Data processing method, apparatus, and device
CN110717961B (zh) Multi-modality image reconstruction method, apparatus, computer device, and storage medium
US11925418B2 (en) * 2019-12-02 2024-03-12 The General Hospital Corporation Methods for multi-modal bioimaging data integration and visualization
CN111383211A (zh) Bone case identification method, apparatus, server, and storage medium
WO2021175644A1 (en) * 2020-03-05 2021-09-10 Koninklijke Philips N.V. Multi-modal medical image registration and associated devices, systems, and methods
CN113554576B (zh) Subtraction method, apparatus, device, and storage medium for multi-phase data
CN111667486B (zh) Deep learning-based multi-modality fusion pancreas segmentation method and system
US20220036555A1 (en) * 2020-07-29 2022-02-03 Biosense Webster (Israel) Ltd. Automatically identifying scar areas within organic tissue using multiple imaging modalities
CN113450294A (zh) Multi-modality medical image registration and fusion method, apparatus, and electronic device
CN113888663B (zh) Reconstruction model training method, anomaly detection method, apparatus, device, and medium
CN114580497B (zh) Method for analyzing the influence of genes on multi-modality brain image phenotypes
CN114974518A (zh) Lung nodule image recognition method and apparatus based on multi-modality data fusion
CN116542997B (zh) Magnetic resonance image processing method, apparatus, and computer device
CN116862930B (zh) Multi-modality-applicable cerebrovascular segmentation method, apparatus, device, and storage medium
CN117056863B (zh) Big data processing method based on multi-modality data fusion

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103617605A (zh) Transparency-weighted fusion method for tri-modality medical images
CN103815928A (zh) Image registration apparatus for a multi-modality imaging system and registration method thereof
US20140161338A1 (en) Image fusion with automated compensation for brain deformation
CN105488804A (zh) Method and system for joint registration, fusion, and analysis of brain ASL, SPECT, and MRI images
CN105849777A (zh) Method and apparatus for fused display of cerebral cortex electrodes and magnetic resonance images
CN105913375A (zh) Method and device for registering anatomical images

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9750425B2 (en) * 2004-03-23 2017-09-05 Dune Medical Devices Ltd. Graphical user interfaces (GUI), methods and apparatus for data presentation
US20080292164A1 (en) * 2006-08-29 2008-11-27 Siemens Corporate Research, Inc. System and method for coregistration and analysis of non-concurrent diffuse optical and magnetic resonance breast images
US8295575B2 (en) * 2007-10-29 2012-10-23 The Trustees of the University of PA. Computer assisted diagnosis (CAD) of cancer using multi-functional, multi-modal in-vivo magnetic resonance spectroscopy (MRS) and imaging (MRI)
JP5595762B2 (ja) X-ray diagnostic apparatus and image reconstruction processing apparatus
CA2821395A1 (en) * 2010-12-17 2012-06-21 Aarhus Universitet Method for delineation of tissue lesions
EP2763591A4 (en) * 2011-10-09 2015-05-06 Clear Guide Medical Llc GUIDING INTERVENTIONAL IN SITU IMAGES BY FUSIONING AN ULTRASONIC VIDEO
WO2014031531A1 (en) * 2012-08-21 2014-02-27 Convergent Life Sciences, Inc. System and method for image guided medical procedures
CN104103083A (zh) Image processing apparatus and method, and medical imaging device
US10535133B2 (en) * 2014-01-17 2020-01-14 The Johns Hopkins University Automated anatomical labeling by multi-contrast diffeomorphic probability fusion
TWI536969B (zh) Method and system for identifying white matter lesion regions in magnetic resonance imaging
CN105701799B (zh) Method and apparatus for segmenting pulmonary vessels from lung mask images
EP3547252A4 (en) * 2016-12-28 2019-12-04 Shanghai United Imaging Healthcare Co., Ltd. SYSTEM AND METHOD FOR PROCESSING MULTI-MODAL IMAGES

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140161338A1 (en) * 2012-12-10 2014-06-12 The Cleveland Clinic Foundation Image fusion with automated compensation for brain deformation
CN103617605A (zh) Transparency-weighted fusion method for tri-modality medical images
CN103815928A (zh) Image registration apparatus for a multi-modality imaging system and registration method thereof
CN105849777A (zh) Method and apparatus for fused display of cerebral cortex electrodes and magnetic resonance images
CN105488804A (zh) Method and system for joint registration, fusion, and analysis of brain ASL, SPECT, and MRI images
CN105913375A (zh) Method and device for registering anatomical images

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3547252A4 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3699920A1 (en) * 2019-02-21 2020-08-26 SignalPET, LLC Methods and apparatus for the application of machine learning to radiographic images of animals
US10949970B2 (en) 2019-02-21 2021-03-16 Signalpet, Llc Methods and apparatus for the application of machine learning to radiographic images of animals
US11735314B2 (en) 2019-02-21 2023-08-22 Signalpet, Llc Methods and apparatus for the application of machine learning to radiographic images of animals
WO2021081759A1 (zh) * 2019-10-29 2021-05-06 中国科学院深圳先进技术研究院 一种协同成像方法、装置、存储介质和协同成像设备
US11545267B2 (en) 2020-08-04 2023-01-03 Signalpet, Llc Methods and apparatus for the application of reinforcement learning to animal medical diagnostics

Also Published As

Publication number Publication date
US20190139236A1 (en) 2019-05-09
EP3547252A1 (en) 2019-10-02
EP3547252A4 (en) 2019-12-04
US11037309B2 (en) 2021-06-15
US20210312645A1 (en) 2021-10-07
US11869202B2 (en) 2024-01-09
US20240144495A1 (en) 2024-05-02

Similar Documents

Publication Title
US11869202B2 (en) Method and system for processing multi-modality image
US11386557B2 (en) Systems and methods for segmentation of intra-patient medical images
JP6884853B2 (ja) Image segmentation using neural network methods
JP6932182B2 (ja) Systems and methods for image segmentation using convolutional neural networks
RU2703344C1 (ru) Generating pseudo-CT from MR data using a feature-based regression model
US7606405B2 (en) Dynamic tumor diagnostic and treatment system
JP6220310B2 (ja) Medical image information system, medical image information processing method, and program
CN111008984A (zh) Method and system for automatic contour delineation of normal organs in medical images
US20130083987A1 (en) System and method for segmenting bones on mr images
US20240127436A1 (en) Multi-modal computer-aided diagnosis systems and methods for prostate cancer
US20230154007A1 (en) Few-shot semantic image segmentation using dynamic convolution
WO2021136304A1 (en) Systems and methods for image processing
Mlynarski et al. Anatomically consistent CNN-based segmentation of organs-at-risk in cranial radiotherapy
US20210118173A1 (en) Systems and methods for patient positioning
WO2006119340A2 (en) Dynamic tumor diagnostic and treatment system
Tiago et al. A data augmentation pipeline to generate synthetic labeled datasets of 3D echocardiography images using a GAN
US9082193B2 (en) Shape-based image segmentation
Kieselmann et al. Auto-segmentation of the parotid glands on MR images of head and neck cancer patients with deep learning strategies
Zhang et al. Enhancing the depth perception of DSA images with 2D–3D registration
US11501442B2 (en) Comparison of a region of interest along a time series of images
JP6843892B2 (ja) Clustering of anatomical or physiological state data
Ger et al. Auto-contouring for image-guidance and treatment planning
US20240037739A1 (en) Image processing apparatus, image processing method, and image processing program
Cai et al. [Retracted] Detection of 3D Arterial Centerline Extraction in Spiral CT Coronary Angiography
Li Computer generative method on brain tumor segmentation in MRI images

Legal Events

Date Code Title Description

121 Ep: the EPO has been informed by WIPO that EP was designated in this application
Ref document number: 16925011; Country of ref document: EP; Kind code of ref document: A1

NENP Non-entry into the national phase
Ref country code: DE

ENP Entry into the national phase
Ref document number: 2016925011; Country of ref document: EP; Effective date: 20190628