CN112561868A - Cerebrovascular segmentation method based on multi-view cascade deep learning network - Google Patents

Cerebrovascular segmentation method based on multi-view cascade deep learning network

Info

Publication number
CN112561868A
CN112561868A (Application CN202011429279.7A)
Authority
CN
China
Prior art keywords
image
segmentation
network model
mra
mrv
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011429279.7A
Other languages
Chinese (zh)
Other versions
CN112561868B (en)
Inventor
黄炳升
吴松雄
张乃文
陶蔚
陈嘉
陈富勇
孟祥红
魏明怡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen University
Original Assignee
Shenzhen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen University filed Critical Shenzhen University
Priority to CN202011429279.7A priority Critical patent/CN112561868B/en
Publication of CN112561868A publication Critical patent/CN112561868A/en
Application granted granted Critical
Publication of CN112561868B publication Critical patent/CN112561868B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10088Magnetic resonance imaging [MRI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30101Blood vessel; Artery; Vein; Vascular

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a cerebrovascular segmentation method based on a multi-view cascaded deep learning network, which comprises: acquiring an MRA image and an MRV image corresponding to a target part; determining a plurality of section images based on the MRA image and the MRV image; determining a reference segmentation map corresponding to each section image based on a trained first segmentation network model; and determining a segmentation image corresponding to the MRA image based on the MRA image, the MRV image, the acquired reference segmentation maps and a trained second segmentation network model. Because the reference segmentation maps acquired by the first segmentation network model for the individual section images serve as context information and are used, together with the MRA image and the MRV image, as the input of the second segmentation network model, the blood vessel information in the MRA image can be learned from multiple angles, the accuracy with which the second segmentation network model determines the segmentation image can be improved, and the accuracy of cerebrovascular segmentation is thereby improved.

Description

Cerebrovascular segmentation method based on multi-view cascade deep learning network
Technical Field
The application relates to the technical field of medical images, in particular to a cerebral vessel segmentation method based on a multi-view cascade deep learning network.
Background
In neurosurgery, a three-dimensional cerebrovascular image helps the doctor see the relationship between a saccular aneurysm and the intracranial blood vessels, and in the stereotactic electroencephalography technique for epilepsy it is of great help in assisting the clinician to plan the electrode implantation path before surgery so as to avoid important blood vessels. Magnetic resonance angiography (MRA) is an important means for computer-aided diagnosis and interventional therapy of cardiovascular and cerebrovascular diseases, for neurosurgical navigation and for postoperative observation, and can provide three-dimensional images of the cerebral vessels. However, owing to the angiographic imaging principle and the characteristics of brain tissue, MRA cerebrovascular images are affected during acquisition by noise, the partial volume effect, the bias field effect and the like, so the segmentation accuracy of the blood vessels is low.
Disclosure of Invention
The technical problem to be solved by the present application is to provide a cerebrovascular segmentation method based on a multi-view cascaded deep learning network, aiming at the defects of the prior art.
In order to solve the above technical problem, a first aspect of the embodiments of the present application provides a cerebrovascular segmentation method based on a multi-view cascaded deep learning network, where the method includes:
acquiring an MRA image and an MRV image corresponding to a target part;
determining a plurality of section images based on the MRA image and the MRV image;
determining a reference segmentation graph corresponding to each section image based on the trained first segmentation network model;
and determining a segmentation image corresponding to the MRA image based on the MRA image, the MRV image, the acquired reference segmentation image and a trained second segmentation network model, wherein the segmentation image comprises the blood vessel position information of the MRA image.
The cerebrovascular segmentation method based on the multi-view cascaded deep learning network is characterized in that the section images comprise one or more of a cross-sectional image, a sagittal image and a coronal image.
The cerebrovascular segmentation method based on the multi-view cascaded deep learning network, wherein the determining of the plurality of section images based on the MRA image and the MRV image specifically comprises:
acquiring a plurality of first section images of the MRA image and a plurality of second section images of the MRV image, wherein the plurality of first section images correspond to the plurality of second section images one to one;
and respectively splicing each first section image with the corresponding second section image according to the channel to obtain a plurality of section images.
The cerebrovascular segmentation method based on the multi-view cascaded deep learning network, wherein the determining, based on the MRA image, the MRV image and the obtained reference segmentation map, the segmentation image corresponding to the MRA image specifically includes:
splicing the MRA image, the MRV image and the acquired reference segmentation image according to channels to obtain a target image;
and inputting the target image into the trained second segmentation network model, and determining a segmentation image corresponding to the MRA image through the second segmentation network model.
In the cerebrovascular segmentation method based on the multi-view cascaded deep learning network, the first segmentation network model comprises the same network layers as the second segmentation network model, the network layers in the first segmentation network model being two-dimensional network layers and the network layers in the second segmentation network model being three-dimensional network layers.
In the cerebrovascular segmentation method based on the multi-view cascaded deep learning network, the first segmentation network model and the second segmentation network model both adopt a U-Net++ structure, and except for the U-Net located on the first layer of the U-Net++ structure, the sampling module located at the first position of each of the remaining layers of U-Net is a residual module.
In the cerebrovascular segmentation method based on the multi-view cascaded deep learning network, the residual module comprises a first convolution unit, a second convolution unit and a residual fusion unit; the output item of the first convolution unit is the input item of the second convolution unit; the input items of the residual fusion unit comprise the input item of the second convolution unit and the output item of the second convolution unit; the model structure of the first convolution unit is the same as that of the second convolution unit, and the first convolution unit comprises a normalization layer, an activation layer and a convolution layer which are cascaded in sequence.
A second aspect of the embodiments of the present application provides a cerebrovascular segmentation model based on a multi-view cascaded deep learning network, which comprises a plurality of first segmentation network models, a fusion module and a second segmentation network model; the plurality of first segmentation network models are all connected with the fusion module, and the fusion module is connected with the second segmentation network model. The first segmentation network models and the second segmentation network model all adopt a U-Net++ structure, and except for the U-Net located on the first layer of the U-Net++ structure, the sampling module located at the first position of each of the remaining layers of U-Net is a residual module.
A third aspect of embodiments of the present application provides a computer-readable storage medium storing one or more programs, which are executable by one or more processors to implement the steps in the method for segmenting cerebral vessels based on multi-view cascaded deep learning network as described in any one of the above.
A fourth aspect of the embodiments of the present application provides a terminal device, including: a processor, a memory, and a communication bus; the memory has stored thereon a computer readable program executable by the processor;
the communication bus realizes connection communication between the processor and the memory;
the processor, when executing the computer readable program, implements the steps of any of the above-mentioned cerebrovascular vessel segmentation methods based on multi-view cascaded deep learning network.
Advantageous effects: Compared with the prior art, the present application provides a cerebrovascular segmentation method based on a multi-view cascaded deep learning network, which comprises acquiring an MRA image and an MRV image corresponding to a target part; determining a plurality of section images based on the MRA image and the MRV image; determining a reference segmentation map corresponding to each section image based on a trained first segmentation network model; and determining a segmentation image corresponding to the MRA image based on the MRA image, the MRV image, the acquired reference segmentation maps and a trained second segmentation network model. Because the reference segmentation maps acquired by the first segmentation network model for the individual section images serve as context information and are used, together with the MRA image and the MRV image, as the input of the second segmentation network model, the blood vessel information in the MRA image can be learned from multiple angles, the accuracy with which the second segmentation network model determines the segmentation image can be improved, and the accuracy of cerebrovascular segmentation is thereby improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without any inventive work.
Fig. 1 is a flowchart of a cerebrovascular segmentation method based on a multi-view cascaded deep learning network according to the present application.
Fig. 2 is a flowchart illustrating a training process of a segmentation network model in a cerebrovascular segmentation method based on a multi-view cascade deep learning network according to the present application.
Fig. 3 is a schematic structural diagram of a first segmentation network model in the cerebrovascular segmentation method based on the multi-view cascaded deep learning network provided by the present application.
Fig. 4 is a schematic flow chart of a cerebral vessel segmentation method based on a multi-view cascaded deep learning network according to the present application.
Fig. 5 is a schematic structural diagram of a terminal device provided in the present application.
Detailed Description
In order to make the purpose, technical scheme and effect of the present application clearer and clearer, the present application is further described in detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
It will be understood by those within the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein. In addition, it should be understood that, the sequence numbers and sizes of the steps in this embodiment do not mean the execution sequence, and the execution sequence of each process is determined by the function and the inherent logic of the process, and should not constitute any limitation on the implementation process of the embodiment of the present application.
The inventor finds that the modern standardized diagnosis and treatment workflow of neurosurgery comprises accurately locating the lesion before surgery, designing and optimizing the surgical path, precisely removing the lesion area during surgery, and evaluating the lesion-removal effect and disease recovery after surgery. Throughout this process, in order to ensure the effect of the surgical treatment, the neurosurgeon must evaluate the patient's imaging data, judge the specific position of the lesion and finally design a targeted surgical plan. In recent years, with the introduction of the concept of precise neurosurgical diagnosis and treatment and the gradual development of imaging science, medical image navigation systems can assist doctors in judging the position of a lesion and in displaying the important anatomical structures around it, which facilitates precise excision of the lesion and protection of neural function and ensures the long-term treatment effect.
Image guidance systems for neurosurgery are an area that has developed very rapidly in recent years. Neuronavigation, also known as frameless stereotaxy, combines modern neuroimaging diagnostic techniques, stereotactic techniques and microsurgery through a high-performance computer, and can accurately show the three-dimensional spatial position and adjacency relationships of nervous-system anatomical structures and lesions. Since the introduction of the first frameless stereotactic navigation device by Roberts et al., neuronavigation systems have become an essential tool in neurosurgical procedures and reduce trauma during the procedure by locating the target in real time. Over the last 30 years of development, the technology has been successfully applied to more and more complicated fields, including the surgical treatment of malignant tumors, vascular diseases and epilepsy, deep brain stimulation surgery, and the like. In neurosurgery, if a three-dimensional image of the cerebral blood vessels can be provided, it can help the doctor see the relationship between a saccular aneurysm and the intracranial vessels, aid the intravascular diagnosis and treatment of the aneurysm, and assist the clinician in planning the electrode implantation path before surgery so as to avoid important blood vessels in the stereotactic electroencephalography technique for epilepsy. How to obtain the cerebral vessels through imaging and image post-processing before or during the operation is therefore a technical difficulty.
Magnetic resonance angiography (MRA) is one of the important means for computer-aided diagnosis and intervention of cardiovascular and cerebrovascular diseases, for neurosurgical navigation and for postoperative observation. However, owing to the angiographic imaging principle and the characteristics of brain tissue, MRA cerebrovascular images are affected during acquisition by noise, the partial volume effect, the bias field effect and the like, and exhibit blurred vessel morphology, background noise and intensity non-uniformity; in addition, the cerebral vessels have numerous branches, varied morphology and thin calibers, and the contrast between fine vessels and the background is low, all of which bring certain difficulties to cerebrovascular segmentation.
Machine learning is the core of intelligent image processing technology. Deep learning, as a branch of machine learning, differs from traditional methods in that it can perform feature extraction and inference on data in a supervised manner, fully explore the rules within the data, and then use these rules to make objective predictions on unknown data, so that stable and efficient segmentation of a region of interest (ROI) can be achieved. However, existing deep learning methods for cerebrovascular segmentation generally suffer from low accuracy. This is because most of the prior art only considers intracranial vessels and neglects extracranial vessels, yet the segmentation of these vessels also has clinical significance. For example, in epilepsy surgery, stereotactic electroencephalography (SEEG) is used to locate the epileptogenic focus, and the skull or certain important scalp blood vessels need to be avoided. If, for instance, the superficial temporal artery can be avoided, the blood supply to the scalp and the healing of the incision are greatly facilitated, and ischemia of the distal flap, atrophy of the temporal muscle and the like can be prevented.
In order to solve the above problem, in the embodiments of the present application, an MRA image and an MRV image corresponding to a target part are acquired; a plurality of section images are determined based on the MRA image and the MRV image; a reference segmentation map corresponding to each section image is determined based on a trained first segmentation network model; and a segmentation image corresponding to the MRA image is determined based on the MRA image, the MRV image, the acquired reference segmentation maps and a trained second segmentation network model. Because the reference segmentation maps acquired by the first segmentation network model for the individual section images serve as context information and are used, together with the MRA image and the MRV image, as the input of the second segmentation network model, the blood vessel information in the MRA image can be learned from multiple angles, the accuracy with which the second segmentation network model determines the segmentation image can be improved, and the accuracy of cerebrovascular segmentation is thereby improved.
The following further describes the content of the application by describing the embodiments with reference to the attached drawings.
The present embodiment provides a cerebrovascular segmentation method based on a multi-view cascade deep learning network, as shown in fig. 1 and 4, the method includes:
s10, acquiring an MRA image and an MRV image corresponding to the target part;
s20, determining a plurality of section images based on the MRA images and the MRV images;
determining a reference segmentation graph corresponding to each section image based on the trained first segmentation network model;
s30, determining a segmentation image corresponding to the MRA image based on the MRA image, the MRV image, the acquired reference segmentation map and the trained second segmentation network model, wherein the segmentation image comprises the blood vessel position information of the MRA image.
This embodiment provides a cerebrovascular segmentation method based on a multi-view cascaded deep learning network, which acquires an MRA image and an MRV image corresponding to a target part; determines a plurality of section images based on the MRA image and the MRV image; determines a reference segmentation map corresponding to each section image based on a trained first segmentation network model; and determines a segmentation image corresponding to the MRA image based on the MRA image, the MRV image, the acquired reference segmentation maps and a trained second segmentation network model. Because the reference segmentation maps acquired by the first segmentation network model for the individual section images serve as context information and are used, together with the MRA image and the MRV image, as the input of the second segmentation network model, the blood vessel information in the MRA image can be learned from multiple angles, the accuracy with which the second segmentation network model determines the segmentation image can be improved, and the accuracy of cerebrovascular segmentation is thereby improved.
Each step of the cerebrovascular segmentation method based on the multi-view cascaded deep learning network provided in this embodiment is specifically described below.
In step S10, the target part is the brain, the MRA image is a magnetic resonance artery image, the MRV image is a magnetic resonance vein image, and the MRA image and the MRV image correspond to the same patient. In other words, the MRA image and the MRV image are images of the brain of the same patient acquired by angiographic techniques. In this embodiment, the image scale of the MRA image is the same as the image scale of the MRV image; for example, if the image scale of the MRA image is w × h × c, then the image scale of the MRV image is also w × h × c.
In step S20, each section image is a two-dimensional image, and the section images may include one or more of a cross-sectional image, a sagittal image and a coronal image. For example, the section images may include only a cross-sectional view; or a cross-sectional view and a sagittal view; or a cross-sectional view, a sagittal view and a coronal view; and so on. In this embodiment, the section images include a cross-sectional view, a sagittal view and a coronal view.
In an implementation manner of this embodiment, the determining the plurality of slice images based on the MRA image and the MRV image specifically includes:
acquiring a plurality of first section images of the MRA image and a plurality of second section images of the MRV image, wherein the plurality of first section images correspond to the plurality of second section images one to one;
and respectively splicing each first section image with the corresponding second section image according to the channel to obtain a plurality of section images.
Specifically, the number of first section images is the same as the number of second section images, and both equal the number of section images. The section types of the first section images differ from one another, the section types of the second section images differ from one another, and the set of section types formed by the first section images and the set of section types formed by the second section images each comprise one or more of a cross-sectional view, a sagittal view and a coronal view; moreover, the two sets are the same. In one implementation of this embodiment, the set of section types formed by the first section images and the set of section types formed by the second section images each comprise a cross-sectional view, a sagittal view and a coronal view; accordingly, there are three first section images and three second section images, the three second section images corresponding one to one to the cross-sectional, sagittal and coronal views, and the three first section images likewise corresponding one to one to the cross-sectional, sagittal and coronal views.
On this basis, the second section image corresponding to a given first section image is the second section image whose section type matches that of the first section image; for example, when the first section image is a cross-sectional image, the corresponding second section image is also a cross-sectional image. In addition, since the image scale of the MRA image is the same as that of the MRV image, the image size of the first section image is the same as that of the second section image. Therefore, after the first section image and the second section image are obtained, they can be spliced by channel to obtain the section image. For example, if the image scale of the first section image is w × h × 1 and the image size of the second section image is w × h × 1, then the image size of the section image is w × h × 2.
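By way of illustration only, this channel-wise splicing can be sketched as follows (a minimal NumPy sketch; the slice size 464 × 512 is borrowed from the cross-sectional example given later in this description, and the array names are illustrative, not part of the disclosure):

```python
import numpy as np

# Matching single-channel slices taken from the MRA and MRV volumes (w x h x 1 each).
mra_slice = np.random.rand(464, 512, 1)   # first section image (from the MRA image)
mrv_slice = np.random.rand(464, 512, 1)   # corresponding second section image (from the MRV image)

# Splice by channel: the resulting section image has two channels (w x h x 2).
section_image = np.concatenate([mra_slice, mrv_slice], axis=-1)
print(section_image.shape)  # (464, 512, 2)
```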
In step S30, the first segmentation network model is a trained network model; its input item is a section image, and its output item is the reference segmentation map corresponding to that section image, which carries the blood vessel position information in the section image. In other words, determining the reference segmentation map corresponding to each section image based on the trained first segmentation network model specifically means: for each section image, inputting the section image into the first segmentation network model and outputting, through the first segmentation network model, the reference segmentation map corresponding to the section image. The image size of a reference segmentation map is the same as the image size of the corresponding section image; and since the section images have the same image size, the reference segmentation maps also have the same image size.
The first segmentation network model is a two-dimensional network model and adopts a U-Net++ structure. As shown in FIG. 3, the first segmentation network model comprises a plurality of U-Net layers, each of which comprises several sampling modules; between two adjacent U-Net layers, the upper layer has one more sampling module than the lower layer. Between two adjacent U-Net layers, the first sampling module of the upper layer is connected with the first sampling module of the lower layer through a max-pooling layer. In addition, every sampling module whose position within its layer is after the first is connected, through deconvolution (up-sampling), with the sampling module at the preceding position of the layer below it. For example, as shown in FIG. 3, the sampling module at the first position of layer 1 is connected by deconvolution with the sampling module at the second position of layer 0, and the sampling module at the first position of layer 2 is connected by deconvolution with the sampling module at the second position of layer 1. Furthermore, within each U-Net layer, the output item of the first sampling module is an input item of every sampling module behind it; the input items of the last sampling module include the output items of all preceding sampling modules, and, for two adjacent sampling modules, the output item of the preceding module is an input item of the following module.
In one implementation of this embodiment, except for the U-Net located on the first layer of the U-Net++ structure, the sampling module located at the first position of each of the remaining layers of U-Net is a residual module. As shown in FIG. 3, the residual module comprises a first convolution unit, a second convolution unit and a residual fusion unit; the output item of the first convolution unit is the input item of the second convolution unit; the input items of the residual fusion unit comprise the input item of the second convolution unit and the output item of the second convolution unit; the model structure of the first convolution unit is the same as that of the second convolution unit, and the first convolution unit comprises a normalization layer, an activation layer and a convolution layer which are cascaded in sequence. In this embodiment, adding residual modules to the U-Net++ structure accelerates the convergence of the network and, at the same time, retains the original features of the input data more effectively.
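For illustration, a minimal PyTorch sketch of such a residual module might look as follows (it assumes the instance normalization and Leaky ReLU activation detailed in the next paragraph, and 3 × 3 convolutions; the kernel size and channel handling are assumptions, not values fixed by this disclosure):

```python
import torch
import torch.nn as nn

class ConvUnit2d(nn.Module):
    """Convolution unit: normalization -> activation -> convolution, cascaded in sequence."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.block = nn.Sequential(
            nn.InstanceNorm2d(in_channels),
            nn.LeakyReLU(inplace=True),
            nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.block(x)

class ResidualModule2d(nn.Module):
    """Two convolution units; the residual fusion unit adds the second unit's input to its output."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.conv1 = ConvUnit2d(in_channels, out_channels)   # first convolution unit
        self.conv2 = ConvUnit2d(out_channels, out_channels)  # second convolution unit

    def forward(self, x):
        y = self.conv1(x)          # output of the first unit is the input of the second
        return y + self.conv2(y)   # residual fusion of the second unit's input and output
```

For the second (three-dimensional) segmentation network model, the same structure would use the corresponding 3D layers.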
In one implementation of this embodiment, the normalization layer is an instance normalization layer and the activation layer uses Leaky ReLU; the network input batch size is therefore set to 1, there is no batch-input problem, and convergence of the network is accelerated. The Leaky ReLU function has a small slope for negative inputs, so its derivative is never zero; this reduces the occurrence of silent neurons, allows gradient-based learning, and solves the problem that a neuron stops learning once the ReLU function enters the negative interval. In one specific implementation, the number of U-Nets is four, the number of convolution kernels of the initial two convolutions is 30, and the number doubles after every down-sampling. For example, as shown in FIG. 3, the image block size is 64 × 160 × 192, i.e., the x(0,0) input size is 1 × 64 × 160 × 192; the size of sampling block x(1,0) is 30 × 32 × 80 × 96, the size of x(2,0) is 60 × 16 × 40 × 48, the size of x(3,0) is 90 × 8 × 20 × 24, and the size of x(4,0) is 120 × 4 × 10 × 12. The size of sampling block x(3,1) is 210 × 8 × 20 × 24, the size of x(2,1) is 150 × 16 × 40 × 48, and the size of x(2,2) is 420 × 16 × 40 × 48.
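As a further illustration, the nested U-Net++ wiring described above can be sketched as follows (a simplified 2D sketch under assumptions: plain convolution blocks stand in for every node instead of the residual modules sketched above, a two-channel Softmax output head is assumed, and only the wiring, the 30 initial kernels and the channel doubling per down-sampling follow the description; this is not the patented implementation):

```python
import torch
import torch.nn as nn

class UNetPlusPlusSketch(nn.Module):
    """Simplified U-Net++ wiring: node (i, j) = block(cat(x(i, 0..j-1), up(x(i+1, j-1))))."""
    def __init__(self, in_channels=2, base_channels=30, depth=5):
        super().__init__()
        self.depth = depth
        chs = [base_channels * 2 ** i for i in range(depth)]   # 30, 60, 120, ... per level
        self.pool = nn.MaxPool2d(2)
        self.blocks, self.ups = nn.ModuleDict(), nn.ModuleDict()
        for i in range(depth):                  # i: level (layer), j: position within the layer
            for j in range(depth - i):
                if j == 0:
                    c_in = in_channels if i == 0 else chs[i - 1]
                else:
                    c_in = chs[i] * j + chs[i + 1]   # j earlier nodes + the up-sampled node below
                    self.ups[f"{i}_{j}"] = nn.ConvTranspose2d(chs[i + 1], chs[i + 1], 2, stride=2)
                self.blocks[f"{i}_{j}"] = nn.Sequential(
                    nn.Conv2d(c_in, chs[i], 3, padding=1), nn.LeakyReLU(inplace=True))
        self.head = nn.Conv2d(chs[0], 2, kernel_size=1)        # two-channel probability map

    def forward(self, x):
        nodes = {}
        for j in range(self.depth):
            for i in range(self.depth - j):
                if j == 0:
                    inp = x if i == 0 else self.pool(nodes[(i - 1, 0)])  # down-sampling path
                else:
                    skips = [nodes[(i, k)] for k in range(j)]            # dense skip connections
                    up = self.ups[f"{i}_{j}"](nodes[(i + 1, j - 1)])     # deconvolution from below
                    inp = torch.cat(skips + [up], dim=1)
                nodes[(i, j)] = self.blocks[f"{i}_{j}"](inp)
        return torch.softmax(self.head(nodes[(0, self.depth - 1)]), dim=1)

# e.g. out = UNetPlusPlusSketch()(torch.rand(1, 2, 160, 192))  # batch size 1, two-channel slice
```

The spatial size of the input must be divisible by 16 here so that the four down-sampling steps and the corresponding deconvolutions line up.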
In step S40, the second segmentation network model is a trained network model whose input item is an image determined based on the MRA image, the MRV image and the acquired reference segmentation maps, and whose output item is the segmentation image corresponding to the MRA image, which carries the position information of the blood vessels in the MRA image. The second segmentation network model comprises the same network layers as the first segmentation network model, the difference being that the network layers in the second segmentation network model are three-dimensional; for example, the convolution layers of the second segmentation network model are three-dimensional convolution layers, whereas those of the first segmentation network model are two-dimensional convolution layers. In other words, the second segmentation network model is obtained by converting each two-dimensional network layer of the first segmentation network model into a three-dimensional network layer, for example converting two-dimensional convolution into three-dimensional convolution, two-dimensional max pooling into three-dimensional max pooling, and two-dimensional transposed convolution into three-dimensional transposed convolution, while the positional and connection relationships among the network layers remain unchanged.
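For example, this 2D-to-3D correspondence can be illustrated as follows (illustrative layer instances only; the channel count of 30 is taken from the specific implementation above and the layer choices are assumptions consistent with the description):

```python
import torch.nn as nn

# A 2D convolution unit of the first model and its 3D counterpart in the second model;
# the wiring between layers stays exactly the same, only the layer types change.
conv_unit_2d = nn.Sequential(nn.InstanceNorm2d(30), nn.LeakyReLU(inplace=True),
                             nn.Conv2d(30, 30, kernel_size=3, padding=1))
conv_unit_3d = nn.Sequential(nn.InstanceNorm3d(30), nn.LeakyReLU(inplace=True),
                             nn.Conv3d(30, 30, kernel_size=3, padding=1))

pool_2d, pool_3d = nn.MaxPool2d(2), nn.MaxPool3d(2)           # max pooling counterparts
up_2d = nn.ConvTranspose2d(30, 30, kernel_size=2, stride=2)   # transposed-convolution counterparts
up_3d = nn.ConvTranspose3d(30, 30, kernel_size=2, stride=2)
```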
In an implementation manner of this embodiment, the determining, based on the MRA image, the MRV image, and the acquired reference segmentation map, a segmentation image corresponding to the MRA image specifically includes:
splicing the MRA image, the MRV image and the acquired reference segmentation image according to channels to obtain a target image;
and inputting the target image into the trained second segmentation network model, and determining a segmentation image corresponding to the MRA image through the second segmentation network model.
Specifically, the target image is a three-dimensional image obtained by splicing the MRA image, the MRV image and the acquired reference segmentation maps in the channel direction. For example, when the section images include transverse, coronal and sagittal images, the reference segmentation maps output for the transverse, coronal and sagittal images have sizes 1 × 1 × 464 × 512 (464 × 512 being the transverse 2D image size), 1 × 1 × 384 × 464 (384 × 464 being the coronal 2D image size) and 1 × 1 × 384 × 512 (384 × 512 being the sagittal 2D image size), respectively. After the reference segmentation maps are obtained, the input item is the 5-channel image of size 1 × 5 × 384 × 464 × 512 composed of the MRA image, the MRV image and the three reference segmentation maps, and the output is the final cerebrovascular segmentation result of size 1 × 1 × 384 × 464 × 512.
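The channel-wise splicing into the 5-channel target image can be illustrated as follows (a minimal PyTorch sketch; random tensors with toy shapes stand in for the actual volumes):

```python
import torch

# Toy shapes for illustration; in this embodiment the volumes are 1 x 1 x 384 x 464 x 512.
mra = torch.rand(1, 1, 48, 58, 64)    # MRA volume
mrv = torch.rand(1, 1, 48, 58, 64)    # MRV volume
refs = [torch.rand(1, 1, 48, 58, 64) for _ in range(3)]  # transverse/coronal/sagittal reference maps

target_image = torch.cat([mra, mrv, *refs], dim=1)  # splice by channel -> 5-channel input
print(target_image.shape)  # torch.Size([1, 5, 48, 58, 64])
```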
In one implementation of this embodiment, in order to increase the processing speed of the section images, the number of first segmentation network models may be determined according to the number of section images, the determined first segmentation network models corresponding one to one to the section images, with each section image being the input item of its corresponding first segmentation network model; the first segmentation network models operate in parallel and are connected with the second segmentation network model, so that the output item of each first segmentation network model serves as an input item of the second segmentation network model. In addition, in order to connect the first segmentation network models with the second segmentation network model, a fusion module may be provided: the first segmentation network models are all connected with the fusion module, and the fusion module is connected with the second segmentation network model; the input items of the fusion module include the output items of the first segmentation network models, the MRA image and the MRV image, and the output item of the fusion module is the input item of the second segmentation network model. On this basis, the plurality of first segmentation network models, the fusion module and the second segmentation network model may form a segmentation network model, whose working process may be as follows: first, a plurality of section images are determined from the MRA image and the MRV image; second, the reference segmentation map corresponding to each section image is determined by the corresponding first segmentation network model; then, the reference segmentation maps, the MRA image and the MRV image are fused by the fusion module to obtain an input image; and finally, the input image is input into the second segmentation network model to obtain the segmentation image corresponding to the MRA image.
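A minimal end-to-end inference sketch of this working process might read as follows (a sketch, not the patented implementation, under the assumptions that each first model takes a two-channel 2D slice and returns a single-channel probability map, that the three views correspond to the three axes of the volume, and that the trained network objects are supplied by the caller; padding of the per-view inputs is ignored for simplicity):

```python
import torch

def segment_volume(mra, mrv, view_nets_2d, second_net_3d):
    """mra, mrv: (D, H, W) tensors; view_nets_2d: dict mapping axis index -> trained 2D model."""
    def reference_map(net, dim):
        # Slice both volumes along `dim`, splice MRA/MRV slices by channel,
        # run the 2D first model slice by slice, and reassemble a 3D reference map.
        preds = []
        for a, v in zip(mra.unbind(dim), mrv.unbind(dim)):
            x = torch.stack([a, v], dim=0).unsqueeze(0)   # (1, 2, h, w) section image
            preds.append(net(x)[0, 0])                    # (h, w) vessel probability map
        return torch.stack(preds, dim=dim)                # back to (D, H, W)

    refs = [reference_map(net, dim) for dim, net in sorted(view_nets_2d.items())]

    # Fusion module: 5-channel volume = MRA + MRV + the three reference maps.
    fused = torch.stack([mra, mrv, *refs], dim=0).unsqueeze(0)  # (1, 5, D, H, W)
    return second_net_3d(fused)                                 # segmentation image for the MRA image
```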
The first segmentation network model and the second segmentation network model may form the segmentation network model and then be trained jointly; alternatively, the first segmentation network model may first be trained independently, the second segmentation network model may be trained after the training of the first segmentation network model is finished, and finally, after the training of the second segmentation network model is finished, the first segmentation network model and the second segmentation network model are connected to form the segmentation network model, which is then fine-tuned.
In one implementation of this embodiment, the first segmentation network model and the second segmentation network model form the segmentation network model shown in FIG. 4 and are then trained. In the training process of the segmentation network model, as shown in FIG. 2, the training samples may be MRI images labeled with blood vessel position information, where the blood vessel regions may be delineated by an experienced physician in ITK-SNAP software so that the annotations have clinical value; during delineation, the physician cross-references the cross-sectional, sagittal and coronal MRI images of the patient, which supplement one another, to determine the positions of the vessels, and performs the delineation on the cross-sectional plane, which is most suitable for the operation. The segmentation network model is trained and tested on an Nvidia GTX 1080 Ti graphics card with 11 GB of video memory, and the deep learning framework is PyTorch. The sum of the cross-entropy loss and the Dice loss is used as the training loss function, an initial learning rate is set, and the total number of iterations is 1000 epochs. The ReduceLROnPlateau learning-rate schedule is adopted: if the exponential moving average of the training-set loss decreases insufficiently within 30 epochs, the learning rate is attenuated by a factor of 5. Training stops when the reduction of the exponential moving average of the validation-set loss within 60 epochs is insufficient, or when the learning rate becomes too small. The optimizer is Adam with an L2 regularization penalty factor of 0.05, and the training batch size is set to 1. Five-fold cross validation is adopted, and the model is trained with real-time data augmentation to prevent over-fitting; the augmentation methods are: random image rotation in the range of -15° to 15°; random image scaling in the range of 0.85-1.25; and random mirroring. After the network performs the encoding and decoding operations on the input, a two-channel segmentation probability map is output through a Softmax activation function.
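For illustration, the loss and optimization settings described above can be sketched in PyTorch roughly as follows (the initial learning rate is an assumption because the source does not give its value, the stand-in model and toy data are placeholders, and the real training monitors exponential moving averages rather than the per-step loss used here):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def dice_loss(probs, target, eps=1e-6):
    """Soft Dice loss on the foreground-probability channel."""
    inter = (probs * target).sum()
    return 1.0 - (2.0 * inter + eps) / (probs.sum() + target.sum() + eps)

def total_loss(logits, target):
    """Sum of cross-entropy loss and Dice loss."""
    ce = F.cross_entropy(logits, target)               # target: integer label map
    fg_prob = torch.softmax(logits, dim=1)[:, 1]       # two-channel Softmax output
    return ce + dice_loss(fg_prob, target.float())

model = nn.Conv3d(5, 2, kernel_size=1)                 # stand-in for the segmentation network model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=0.05)  # L2 penalty 0.05
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.2, patience=30)    # 5x attenuation after 30 stagnant epochs

for epoch in range(1000):                              # 1000 epochs, batch size 1
    x = torch.rand(1, 5, 32, 32, 32)                   # toy 5-channel input patch
    y = torch.randint(0, 2, (1, 32, 32, 32))           # toy gold-standard labels
    optimizer.zero_grad()
    loss = total_loss(model(x), y)
    loss.backward()
    optimizer.step()
    scheduler.step(loss.item())
```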
The experiment involves the following preprocessing of the cascaded multi-view network data. First, z-score normalization is performed on the data of each patient individually. Because the image resolutions range from 0.39 to 0.90, all data are resampled to the same resolution, which helps to improve the generalization of the model; a resolution of 0.39 × 0.39 is used so as to reduce the amount of resampling, and the resampled image matrix size is 464 × 512 × 331. In the data resampling stage, trilinear interpolation is used for the MRI images and nearest-neighbor interpolation for the gold standard. The input image matrix size of the first segmentation network model of the cross section is 464 × 512. For the first segmentation network model of the sagittal plane, considering that each down-sampling operation halves the image matrix size, the image is centered and zero-padded around its periphery so that its matrix size becomes 384 × 512; with the same operation, the input image matrix size of the first segmentation network model of the coronal plane (2D) becomes 384 × 464.
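The normalization and padding steps can be illustrated as follows (a minimal NumPy sketch; the pre-padding slice size used below is an assumption for illustration, not a value stated in the source):

```python
import numpy as np

def zscore(volume):
    """Per-patient z-score normalization, applied to each case individually."""
    return (volume - volume.mean()) / (volume.std() + 1e-8)

def center_zero_pad(image, out_shape):
    """Center the image and zero-pad its periphery to `out_shape`."""
    pads = []
    for size, out in zip(image.shape, out_shape):
        total = out - size
        pads.append((total // 2, total - total // 2))
    return np.pad(image, pads)

sagittal_slice = np.random.rand(331, 512)                       # assumed pre-padding sagittal slice
padded = center_zero_pad(zscore(sagittal_slice), (384, 512))    # size divisible by the down-sampling
print(padded.shape)  # (384, 512)
```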
In summary, this embodiment provides a cerebrovascular segmentation method based on a multi-view cascaded deep learning network, which comprises acquiring an MRA image and an MRV image corresponding to a target part; determining a plurality of section images based on the MRA image and the MRV image; determining a reference segmentation map corresponding to each section image based on a trained first segmentation network model; and determining a segmentation image corresponding to the MRA image based on the MRA image, the MRV image, the acquired reference segmentation maps and a trained second segmentation network model. Because the reference segmentation maps acquired by the first segmentation network model for the individual section images serve as context information and are used, together with the MRA image and the MRV image, as the input of the second segmentation network model, the blood vessel information in the MRA image can be learned from multiple angles, the accuracy with which the second segmentation network model determines the segmentation image can be improved, and the accuracy of cerebrovascular segmentation is thereby improved.
Based on the above cerebrovascular segmentation method based on the multi-view cascaded deep learning network, this embodiment provides a cerebrovascular segmentation model based on a multi-view cascaded deep learning network, which comprises a plurality of first segmentation network models, a fusion module and a second segmentation network model; the plurality of first segmentation network models are all connected with the fusion module, and the fusion module is connected with the second segmentation network model. The first segmentation network models and the second segmentation network model all adopt a U-Net++ structure, and except for the U-Net located on the first layer of the U-Net++ structure, the sampling module located at the first position of each of the remaining layers of U-Net is a residual module; the residual module comprises a first convolution unit, a second convolution unit and a residual fusion unit; the output item of the first convolution unit is the input item of the second convolution unit; the input items of the residual fusion unit comprise the input item of the second convolution unit and the output item of the second convolution unit, and the model structure of the first convolution unit is the same as that of the second convolution unit.
In addition, for the description of the model structure of the first segmentation network model and the model structure of the second segmentation network model, reference may be made to the description of the above embodiments, which is not repeated here.
Based on the above-mentioned cerebrovascular segmentation method based on the multi-view cascaded deep learning network, the present embodiment provides a computer-readable storage medium, where one or more programs are stored, and the one or more programs are executable by one or more processors to implement the steps in the cerebrovascular segmentation method based on the multi-view cascaded deep learning network according to the above-mentioned embodiment.
Based on the above method for segmenting cerebral vessels based on the multi-view cascaded deep learning network, the present application further provides a terminal device, as shown in fig. 5, which includes at least one processor (processor) 20; a display screen 21; and a memory (memory)22, and may further include a communication Interface (Communications Interface)23 and a bus 24. The processor 20, the display 21, the memory 22 and the communication interface 23 can communicate with each other through the bus 24. The display screen 21 is configured to display a user guidance interface preset in the initial setting mode. The communication interface 23 may transmit information. The processor 20 may call logic instructions in the memory 22 to perform the methods in the embodiments described above.
Furthermore, the logic instructions in the memory 22 may be implemented in software functional units and stored in a computer readable storage medium when sold or used as a stand-alone product.
The memory 22, which is a computer-readable storage medium, may be configured to store a software program, a computer-executable program, such as program instructions or modules corresponding to the methods in the embodiments of the present disclosure. The processor 20 executes the functional application and data processing, i.e. implements the method in the above-described embodiments, by executing the software program, instructions or modules stored in the memory 22.
The memory 22 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required for at least one function, and the data storage area may store data created according to the use of the terminal device, and the like. Further, the memory 22 may include a high-speed random access memory and may also include a non-volatile memory, for example various media that can store program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk; it may also be a transitory storage medium.
In addition, the specific processes loaded and executed by the storage medium and by the instruction processors in the terminal device have been described in detail in the method above and are not repeated here.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (10)

1. A cerebrovascular segmentation method based on a multi-view cascade deep learning network is characterized by comprising the following steps:
acquiring an MRA image and an MRV image corresponding to a target part;
determining a plurality of section images based on the MRA image and the MRV image;
determining a reference segmentation graph corresponding to each section image based on the trained first segmentation network model;
and determining a segmentation image corresponding to the MRA image based on the MRA image, the MRV image, the acquired reference segmentation image and a trained second segmentation network model, wherein the segmentation image comprises the blood vessel position information of the MRA image.
2. The method of claim 1, wherein the sectional images include one or more of a cross-sectional image, a sagittal image, and a coronal image.
3. The method of claim 1, wherein the determining the plurality of slice images based on the MRA image and the MRV image specifically comprises:
acquiring a plurality of first section images of the MRA image and a plurality of second section images of the MRV image, wherein the plurality of first section images correspond to the plurality of second section images one to one;
and respectively splicing each first section image with the corresponding second section image according to the channel to obtain a plurality of section images.
4. The cerebrovascular segmentation method based on the multi-view cascaded deep learning network according to claim 1, wherein the determining, based on the MRA image, the MRV image, and the obtained reference segmentation map, the segmentation image corresponding to the MRA image specifically includes:
splicing the MRA image, the MRV image and the acquired reference segmentation image according to channels to obtain a target image;
and inputting the target image into the trained second segmentation network model, and determining a segmentation image corresponding to the MRA image through the second segmentation network model.
5. The cerebrovascular segmentation method based on the multi-view cascaded deep learning network according to any one of claims 1-4, wherein the first segmentation network model comprises the same network layer as the second segmentation network model, the network layer in the first segmentation network model is a two-dimensional network layer, and the network layer in the second segmentation network model is a three-dimensional network layer.
6. The cerebrovascular segmentation method based on the multi-view cascaded deep learning network according to claim 5, wherein the first segmentation network model and the second segmentation network model both adopt a U-Net++ structure, and except for the U-Net located on the first layer of the U-Net++ structure, the sampling module located at the first position of each of the remaining layers of U-Net is a residual module.
7. The cerebrovascular segmentation method based on the multi-view cascaded deep learning network according to claim 5, wherein the residual module comprises a first convolution unit, a second convolution unit and a residual fusion unit; the output item of the first convolution unit is the input item of the second convolution unit; the input items of the residual fusion unit comprise the input item of the second convolution unit and the output item of the second convolution unit, the model structure of the first convolution unit is the same as that of the second convolution unit, and the first convolution unit comprises a normalization layer, an activation layer and a convolution layer which are cascaded in sequence.
8. A cerebrovascular segmentation model based on a multi-view cascaded deep learning network, characterized by comprising a plurality of first segmentation network models, a fusion module and a second segmentation network model, wherein the plurality of first segmentation network models are all connected with the fusion module, and the fusion module is connected with the second segmentation network model; the first segmentation network models and the second segmentation network model all adopt a U-Net++ structure, and except for the U-Net located on the first layer of the U-Net++ structure, the sampling module located at the first position of each of the remaining layers of U-Net is a residual module.
9. A computer-readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the steps of the method for cerebral vessel segmentation based on multi-view cascaded deep learning network according to any one of claims 1 to 7.
10. A terminal device, comprising: a processor, a memory, and a communication bus; the memory has stored thereon a computer readable program executable by the processor;
the communication bus realizes connection communication between the processor and the memory;
the processor, when executing the computer readable program, implements the steps of the method for segmenting cerebral vessels based on multi-view cascaded deep learning network according to any one of claims 1 to 7.
CN202011429279.7A 2020-12-09 2020-12-09 Cerebrovascular segmentation method based on multi-view cascade deep learning network Active CN112561868B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011429279.7A CN112561868B (en) 2020-12-09 2020-12-09 Cerebrovascular segmentation method based on multi-view cascade deep learning network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011429279.7A CN112561868B (en) 2020-12-09 2020-12-09 Cerebrovascular segmentation method based on multi-view cascade deep learning network

Publications (2)

Publication Number Publication Date
CN112561868A true CN112561868A (en) 2021-03-26
CN112561868B CN112561868B (en) 2021-12-07

Family

ID=75061015

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011429279.7A Active CN112561868B (en) 2020-12-09 2020-12-09 Cerebrovascular segmentation method based on multi-view cascade deep learning network

Country Status (1)

Country Link
CN (1) CN112561868B (en)

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105825509A (en) * 2016-03-17 2016-08-03 电子科技大学 Cerebral vessel segmentation method based on 3D convolutional neural network
CN107220980A (en) * 2017-05-25 2017-09-29 重庆理工大学 A kind of MRI image brain tumor automatic division method based on full convolutional network
US20190130578A1 (en) * 2017-10-27 2019-05-02 Siemens Healthcare Gmbh Vascular segmentation using fully convolutional and recurrent neural networks
CN108257134A (en) * 2017-12-21 2018-07-06 深圳大学 Nasopharyngeal Carcinoma Lesions automatic division method and system based on deep learning
CN108198184A (en) * 2018-01-09 2018-06-22 北京理工大学 The method and system of contrastographic picture medium vessels segmentation
CN108629784A (en) * 2018-05-08 2018-10-09 上海嘉奥信息科技发展有限公司 A kind of CT image intracranial vessel dividing methods and system based on deep learning
CN109118495A (en) * 2018-08-01 2019-01-01 沈阳东软医疗系统有限公司 A kind of Segmentation Method of Retinal Blood Vessels and device
CN109345538A (en) * 2018-08-30 2019-02-15 华南理工大学 A kind of Segmentation Method of Retinal Blood Vessels based on convolutional neural networks
WO2020093042A1 (en) * 2018-11-02 2020-05-07 Deep Lens, Inc. Neural networks for biomedical image analysis
CN110619635A (en) * 2019-07-25 2019-12-27 深圳大学 Hepatocellular carcinoma magnetic resonance image segmentation system and method based on deep learning
CN110675409A (en) * 2019-09-20 2020-01-10 上海商汤智能科技有限公司 Image processing method and device, electronic equipment and storage medium
CN110930417A (en) * 2019-11-26 2020-03-27 腾讯科技(深圳)有限公司 Training method and device of image segmentation model, and image segmentation method and device
CN111192245A (en) * 2019-12-26 2020-05-22 河南工业大学 Brain tumor segmentation network and method based on U-Net network
CN111667488A (en) * 2020-04-20 2020-09-15 浙江工业大学 Medical image segmentation method based on multi-angle U-Net
CN112037186A (en) * 2020-08-24 2020-12-04 杭州深睿博联科技有限公司 Coronary vessel extraction method and device based on multi-view model fusion

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
AYSE NURDAN SARAN et al.: "Vessel segmentation in MRI using a variational image subtraction approach", TURKISH JOURNAL OF ELECTRICAL ENGINEERING AND COMPUTER SCIENCES *
CHEN HUANG et al.: "Fully Automated Segmentation of Lower Extremity Deep Vein Thrombosis Using Convolutional Neural Network", BIOMED RESEARCH INTERNATIONAL *
ZONGWEI ZHOU et al.: "UNet++: A Nested U-Net Architecture for Medical Image Segmentation", HTTPS://ARXIV.ORG/PDF/1807.10165.PDF *
ZONGWEI ZHOU et al.: "UNet++: Redesigning Skip Connections to Exploit Multiscale Features in Image Segmentation", HTTPS://ARXIV.ORG/PDF/1912.05074.PDF *
梁礼明 et al.: "U-shaped retinal vessel segmentation algorithm with adaptive scale information", Acta Optica Sinica (光学学报) *
游齐靖, 万程: "Medical image segmentation methods based on deep learning", Chinese Journal of New Clinical Medicine (中国临床新医学) *
高磊 et al.: "Lung lobe segmentation algorithm combining traditional anatomical features and deep learning", Optical Technique (光学技术) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113160208A (en) * 2021-05-07 2021-07-23 西安智诊智能科技有限公司 Liver lesion image segmentation method based on cascade hybrid network
WO2023071154A1 (en) * 2021-10-29 2023-05-04 上海商汤智能科技有限公司 Image segmentation method, training method and apparatus for related model, and device
CN115937153A (en) * 2022-12-13 2023-04-07 北京瑞医博科技有限公司 Model training method and device, electronic equipment and computer storage medium

Also Published As

Publication number Publication date
CN112561868B (en) 2021-12-07

Similar Documents

Publication Publication Date Title
CN112561868B (en) Cerebrovascular segmentation method based on multi-view cascade deep learning network
CN113239755B (en) Medical hyperspectral image classification method based on space-spectrum fusion deep learning
US20210264623A1 (en) Methods for optimizing the planning and placement of probes in the brain via multimodal 3d analyses of cerebral anatomy
WO2016040260A1 (en) Edema invariant tractography
CN111145901B (en) Deep venous thrombosis thrombolytic curative effect prediction method and system, storage medium and terminal
CN115311193A (en) Abnormal brain image segmentation method and system based on double attention mechanism
EP4143742A1 (en) Method and apparatus to classify structures in an image
Jin et al. Computational approaches for the reconstruction of optic nerve fibers along the visual pathway from medical images: a comprehensive review
Cai et al. Accurate preoperative path planning with coarse-to-refine segmentation for image guided deep brain stimulation
CN116245951A (en) Brain tissue hemorrhage localization and classification and hemorrhage quantification method, device, medium and program
Yang et al. Opencc–an open Benchmark data set for Corpus Callosum Segmentation and Evaluation
CN115937096A (en) Cerebral hemorrhage analysis method and system based on image registration
CN113496493B (en) Brain tumor image segmentation method combining multi-mode information
CN115760780A (en) MRI (magnetic resonance imaging) image diagnosis method based on neural network and continuous medium model
Xu et al. A Multi-scale Attention-based Convolutional Network for Identification of Alzheimer's Disease based on Hippocampal Subfields
US11786309B2 (en) System and method for facilitating DBS electrode trajectory planning
Wasserthal et al. Direct white matter bundle segmentation using stacked u-nets
EP4143741A1 (en) Method and apparatus to classify structures in an image
CN113269816A (en) Regional progressive brain image elastic registration method and system
Basher et al. One step measurements of hippocampal pure volumes from MRI data using an ensemble model of 3-D convolutional neural network
Dong et al. Primary brain tumors Image segmentation based on 3D-UNET with deep supervision and 3D brain modeling
Villanueva-Naquid et al. Novel risk assessment methodology for keyhole neurosurgery with genetic algorithm for trajectory planning
CN115619810B (en) Prostate partition segmentation method, system and equipment
CN115115628B (en) Lacunar infarction identification system based on three-dimensional refined residual error network
CN117831757B (en) Pathological CT multi-mode priori knowledge-guided lung cancer diagnosis method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant