WO2022188799A1 - Brain marker localization system and method - Google Patents

Brain marker localization system and method

Info

Publication number
WO2022188799A1
WO2022188799A1 (PCT/CN2022/079897)
Authority
WO
WIPO (PCT)
Prior art keywords
identification
point
probability map
brain
cerebral cortex
Application number
PCT/CN2022/079897
Other languages
English (en)
French (fr)
Inventor
张旭
葛传斌
方伟
Original Assignee
武汉联影智融医疗科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Priority claimed from CN202110259731.8A (CN112950600B)
Application filed by 武汉联影智融医疗科技有限公司
Priority to EP22766321.8A (published as EP4293618A1)
Publication of WO2022188799A1
Priority to US18/464,247 (published as US20230419499A1)


Classifications

    • G06N3/096 Transfer learning
    • G06T7/0012 Biomedical image inspection
    • G06N3/045 Combinations of networks
    • G06N3/0455 Auto-encoder networks; Encoder-decoder networks
    • G06N3/047 Probabilistic or stochastic networks
    • G06N3/09 Supervised learning
    • G06T7/11 Region-based segmentation
    • G06T7/13 Edge detection
    • G06T7/143 Segmentation involving probabilistic approaches, e.g. Markov random field [MRF] modelling
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G06T2207/10072 Tomographic images
    • G06T2207/10088 Magnetic resonance imaging [MRI]
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30016 Brain
    • Y02A90/30 Assessment of water resources

Definitions

  • the present specification relates to the field of medical technology, and in particular, to a system and method for locating brain markers.
  • AC: Anterior Commissure
  • PC: Posterior Commissure
  • MSP: Midsagittal Plane
  • The AC, the PC, the MSP, and the Talairach cerebral cortex markers are all important brain marker structures. These structures play an important role in brain anatomical image analysis: using them to perform atlas registration and mapping, or to establish a brain coordinate system, is of great significance for analyzing individual brain structures, locating brain functional areas, and even assisting in locating brain pathological areas.
  • In existing practice, the AC, the PC, and the six cerebral cortex identification points on which brain atlas registration based on the Talairach coordinate system depends are all located manually.
  • The location of the MSP is determined by manually adding a point IH at any position on the MSP; the MSP is then determined by the three points AC, PC, and IH.
  • Manual positioning is time-consuming and labor-intensive, heavily influenced by operator subjectivity, and has low repeatability.
  • Although some automatic extraction schemes have been proposed, most of them aim at locating a single brain marker. Even the few that realize the entire extraction pipeline for the AC, PC, MSP, and cerebral cortex markers involve complex processing and suffer from poor robustness, low efficiency, and limited practical value.
  • The system includes a processor configured to execute the following method: acquiring an image of the brain; determining, according to the image and a neural network model, a region identification probability map of the brain, a point identification probability map of the brain, and a surface identification probability map of the brain; determining, according to the region identification probability map, the point identification probability map, and the surface identification probability map, the segmentation result of the cerebral cortex of the brain, the point identification of the brain, and the surface identification of the brain, respectively; constructing a target coordinate system according to the point identification and the surface identification; and determining the identification points of the cerebral cortex according to the segmentation result of the cerebral cortex, the target coordinate system, and/or the point identification.
  • One of the embodiments of the present specification provides a brain identification positioning system, comprising: an acquisition module for acquiring an image of the brain; and a probability map determination module for determining the identification probability maps of the brain according to the image and a neural network model.
  • One of the embodiments of the present specification provides a non-transitory computer-readable medium including executable instructions which, when executed by at least one processor, cause the at least one processor to implement a method including: obtaining an image of the brain; determining, according to the image and the neural network model, the region identification probability map of the brain, the point identification probability map of the brain, and the surface identification probability map of the brain; determining, according to the region identification probability map, the point identification probability map, and the surface identification probability map, the segmentation result of the cerebral cortex of the brain, the point identification of the brain, and the surface identification of the brain, respectively; constructing a target coordinate system according to the point identification and the surface identification; and determining the identification points of the cerebral cortex according to the segmentation result of the cerebral cortex, the target coordinate system, and/or the point identification.
  • FIG. 1 is a schematic diagram of an application scenario of an identification and positioning system according to some embodiments of this specification
  • FIG. 2 is an exemplary block diagram of an identification positioning system according to some embodiments of the present specification
  • FIG. 3 is an exemplary flowchart of an identification positioning method according to some embodiments of the present specification.
  • FIG. 4 is a schematic diagram of a method for extracting brain identifiers from a neural network model according to some embodiments of the present specification
  • FIG. 5 is an exemplary flowchart of a training method of a neural network model according to some embodiments of the present specification
  • FIG. 6 is an exemplary flowchart of a training method of a neural network model according to some embodiments of the present specification
  • FIG. 7 is a schematic diagram of a training process of a neural network model according to some embodiments of the present specification.
  • FIG. 8 is a schematic structural diagram of a neural network model according to some embodiments of the present specification.
  • FIG. 9 is a schematic diagram of a brain coordinate system according to some embodiments of the present specification.
  • Figure 10 is a schematic diagram of anterior commissure identification points and posterior commissural identification points according to some embodiments of the present specification
  • Figure 11 is a schematic illustration of a midsagittal plane according to some embodiments of the present specification.
  • As used herein, "system", "device", "unit", and/or "module" are means for distinguishing different components, elements, parts, or assemblies at different levels.
  • In some embodiments, the identification positioning system may include a computing device and a user terminal, and may implement the methods and/or processes disclosed in this specification to extract point identifications, surface identifications, region identifications, and other identification results, so as to obtain the characteristic information of specific parts, such as Talairach cortex identification points, thereby reducing the workload of point selection, simplifying the doctor's workflow, and improving the accuracy of human body structure positioning and segmentation.
  • FIG. 1 is a schematic diagram of an application scenario of an identification and positioning system according to some embodiments of the present specification.
  • the system 100 may include a medical imaging device 110 , a first computing device 120 , a second computing device 130 , a user terminal 140 , a storage device 150 and a network 160 .
  • the medical imaging device 110 may refer to a device that reproduces the internal structure of a target object (eg, human body) as an image by using different media.
  • In some embodiments, the medical imaging device 110 may be any device that can image or treat a specified body part of a target object (e.g., a human body), for example, MRI (Magnetic Resonance Imaging), CT (Computed Tomography), PET (Positron Emission Tomography), etc.
  • the medical imaging device 110 is provided above for illustrative purposes only, and is not intended to limit its scope.
  • In some embodiments, the medical imaging device 110 may acquire medical images (e.g., magnetic resonance (MRI) images, CT images, etc.) of specified parts of the patient (e.g., the brain) and transmit them to other components of the system 100 (e.g., the first computing device 120, the second computing device 130, the storage device 150). In some embodiments, the medical imaging device 110 may exchange data and/or information with other components in the system 100 via the network 160.
  • the first computing device 120 and the second computing device 130 are systems with computing and processing capabilities, which may include various computers, such as servers, personal computers, or computing platforms composed of multiple computers connected in various structures.
  • the first computing device 120 and the second computing device 130 may be implemented on a cloud platform.
  • the cloud platform may include one or a combination of private cloud, public cloud, hybrid cloud, community cloud, distributed cloud, cross-cloud, multi-cloud, etc.
  • the first computing device 120 and the second computing device 130 may be the same device, or may be different devices.
  • In some embodiments, the first computing device 120 and the second computing device 130 may include one or more sub-processing devices (e.g., single-core or multi-core processing devices), and the processing devices may execute program instructions.
  • In some embodiments, the processing devices may include various general-purpose central processing units (CPUs), graphics processing units (GPUs), microprocessors, application-specific integrated circuits (ASICs), or other types of integrated circuits.
  • the first computing device 120 may process information and data related to medical images.
  • the first computing device 120 may execute the brain marker localization method shown in some embodiments of the present specification to obtain at least one brain marker localization result, for example, a Talairach cortex marker point and the like.
  • the first computing device 120 may include a neural network model, and the first computing device 120 may obtain the identification probability map of the brain through the neural network model.
  • the first computing device 120 may obtain the trained neural network model from the second computing device 130 .
  • the first computing device 120 may determine a brain marker localization result based on a marker probability map of the brain.
  • In some embodiments, the first computing device 120 may exchange information and data via the network 160 with other components in the system 100 (e.g., the medical imaging device 110, the second computing device 130, the user terminal 140, the storage device 150). In some embodiments, the first computing device 120 may directly connect with the second computing device 130 and exchange information and/or data.
  • the second computing device 130 may be used for model training.
  • the second computing device 130 may execute the neural network model training method shown in some embodiments of this specification to obtain a trained neural network model.
  • the second computing device 130 may acquire training sample images and corresponding gold standard images for training the neural network model.
  • the second computing device 130 may acquire image information from the medical imaging device 110 as training data for the model.
  • the first computing device 120 and the second computing device 130 may also be the same computing device.
  • the user terminal 140 may receive and/or display the processing result of the medical image.
  • the user terminal 140 may receive the identification location result of the medical image from the first computing device 120, and diagnose and treat the patient based on the identification location result.
  • the user terminal 140 may cause the first computing device 120 to execute the identification positioning method as shown in some embodiments of the present specification through an instruction.
  • the user terminal 140 may control the medical imaging device 110 to acquire medical images of a specific part.
  • In some embodiments, the user terminal 140 may be one of a mobile device 140-1, a tablet computer 140-2, a laptop computer 140-3, a desktop computer, or other devices having input and/or output functions, or any combination thereof.
  • Storage device 150 may store data or information generated by other devices.
  • the storage device 150 may store medical images acquired by the medical imaging device 110 .
  • the storage device 150 may store data and/or information processed by the first computing device 120 and/or the second computing device 130, eg, brain signature probability maps, brain signature location results, and the like.
  • In some embodiments, the storage device 150 may include one or more storage components, and each storage component may be an independent device or a part of another device. The storage device may be local or cloud-based.
  • Network 160 may connect components of the system and/or connect portions of the system with external resources.
  • The network 160 enables communication between one or more components of the system 100 (e.g., the medical imaging device 110, the first computing device 120, the second computing device 130, the user terminal 140, the storage device 150) and with other components outside the system, facilitating the exchange of data and/or information. In some embodiments, the network 160 may be any one or more of a wired network or a wireless network.
  • In some embodiments, the first computing device 120 and/or the second computing device 130 may be based on cloud computing platforms, such as public clouds, private clouds, community clouds, hybrid clouds, and the like. However, such changes and modifications do not depart from the scope of this specification.
  • FIG. 2 is an exemplary block diagram of an identification positioning system according to some embodiments of the present specification.
  • In some embodiments, the identification positioning system 200 may include an acquisition module 210, a probability map determination module 220, and an identification positioning module 230.
  • acquisition module 210 may be used to acquire images of the brain.
  • the images of the brain may include MRI images or the like.
  • In some embodiments, the probability map determination module 220 may be configured to determine the region identification probability map of the brain, the point identification probability map of the brain, and/or the surface identification probability map of the brain according to the acquired image of the brain and the neural network model.
  • the neural network model may be a multi-task model.
  • the neural network model may include shared network layers and/or at least two (e.g., three) branched network layers.
  • the branch network layer may include a first branch network layer, a second branch network layer, and/or a third branch network layer.
  • the first branch network layer can be used to segment the brain, and output the region identification probability map;
  • the second branch network layer can be used to locate the point identification of the brain, and output the point identification probability map;
  • the third branch network layer can be used to locate the surface identification of the brain and output the surface identification probability map.
  • In some embodiments, the identification positioning module 230 may be configured to determine, respectively, the segmentation result of the cerebral cortex of the brain, the point identification of the brain, and/or the surface identification of the brain; construct a target coordinate system according to the point identification and the surface identification; and/or determine the identification points of the cerebral cortex according to the segmentation result of the target region, the target coordinate system, and/or the point identification.
  • In some embodiments, the point identification probability map may include an anterior commissure probability map and/or a posterior commissure probability map, and the point identification may include an anterior commissure identification point and/or a posterior commissure identification point.
  • the identification positioning module 230 may determine the position of the pixel point corresponding to the maximum probability value in the point identification probability map as the position of the point identification.
  • In some embodiments, the surface identification probability map may include a midsagittal plane probability map, and the surface identification may include the midsagittal plane.
  • the identification positioning module 230 may determine the target point set according to the surface identification probability map. In some embodiments, the identification positioning module 230 may determine a set of pixel points whose probability is greater than a preset threshold in the surface identification probability map as a target point set.
  • the identification positioning module 230 can fit the target point set to obtain the surface identification. In some embodiments, the identification positioning module 230 may fit the target point set according to the random sampling consistency method to obtain the surface identification.
  • In some embodiments, the identification points of the cerebral cortex may include at least one of the most anterior point, the most posterior point, the leftmost point, the rightmost point, the most inferior point, and the most superior point of the cerebral cortex.
  • In some embodiments, the identification positioning module 230 can determine the identification points of the cerebral cortex according to the maximum or minimum points of the cerebral cortex along the three coordinate axes of the target coordinate system, or according to the maximum or minimum points of the cerebral cortex along lines that pass through the point identification and are parallel to the three coordinate axes of the target coordinate system.
  • In some embodiments, the identification positioning system 200 may also include a model training module (not shown in FIG. 2).
  • the model training module can be used to train neural network models.
  • In some embodiments, the acquisition module 210 and/or the model training module may acquire training sample images and a gold standard image corresponding to each training sample image; the gold standard images may include region identification gold standard images, point identification gold standard images, and/or surface identification gold standard images.
  • In some embodiments, the probability map determination module 220 and/or the model training module may input each training sample image into the initial neural network model, and obtain the predicted region identification probability map output by the first branch network layer, the predicted point identification probability map output by the second branch network layer, and/or the predicted surface identification probability map output by the third branch network layer.
  • In some embodiments, the model training module may determine the value of the target loss function according to the predicted region identification probability map, the predicted point identification probability map, the predicted surface identification probability map, the region identification gold standard image, the point identification gold standard image, and/or the surface identification gold standard image.
  • the model training module may adjust the parameters of the initial neural network model according to the value of the target loss function to obtain a trained neural network model.
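  • As an illustrative sketch only (the specification does not fix the form of the target loss function), the value of such a multi-task target loss could be computed as a weighted sum of per-branch losses, for example a Dice term for the predicted region identification probability map and mean-squared-error terms for the predicted point and surface identification probability maps; the weights and loss choices below are assumptions:

```python
import torch

def dice_loss(pred, gold, eps=1e-6):
    # Soft Dice loss between a predicted probability map and a gold standard mask.
    inter = (pred * gold).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + gold.sum() + eps)

def target_loss(pred_region, pred_point, pred_surface,
                gold_region, gold_point, gold_surface,
                w_region=1.0, w_point=1.0, w_surface=1.0):
    # Weighted sum of the three branch losses (weights are hypothetical).
    mse = torch.nn.functional.mse_loss
    return (w_region * dice_loss(pred_region, gold_region)
            + w_point * mse(pred_point, gold_point)
            + w_surface * mse(pred_surface, gold_surface))
```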
  • the model training module may be configured on a different computing device than the other modules (acquisition module 210, probability map determination module 220, and identity location module 230).
  • the model training module may be configured on the second computing device 130 while other modules may be configured on the first computing device 120 .
  • FIG. 3 is an exemplary flowchart of an identification positioning method according to some embodiments of the present specification.
  • process 300 may include one or more of the following steps.
  • the process 300 may be performed by the first computing device 120 .
  • Step 310: Acquire an image of the brain.
  • step 310 may be performed by acquisition module 210 .
  • the image of the brain is a medical image of the brain of the target object, for example, an MRI image, a CT image, and the like.
  • the target object may be various organisms, for example, a human body, a small animal, and the like.
  • In some embodiments, the images of the brain may include brain MRI images of various sequences, e.g., T1, T2, T2-FLAIR, and the like.
  • the image of the brain may include at least one of a two-dimensional image, a three-dimensional image, and the like.
  • In some embodiments, the image of the brain may be obtained by scanning with a medical imaging device (e.g., MRI, CT, etc.), for example, the brain magnetic resonance image 410 shown in FIG. 4.
  • Step 320: Determine, according to the image and the neural network model, a region identification probability map of the brain, a point identification probability map of the brain, and/or a surface identification probability map of the brain. In some embodiments, step 320 may be performed by the probability map determination module 220.
  • Markers refer to specific anatomical structures or locations of various types of biological organs/tissues, and include multiple types, e.g., point markers, surface markers, region markers, and the like.
  • Point markers can be used to locate anatomical points, eg, anterior commissure markers, posterior commissure markers, etc. in the brain.
  • Surface markers can be used to locate anatomical planes, for example, the midsagittal plane of the brain.
  • Region markers can be used for region recognition and segmentation, for example, cerebral cortex segmentation.
  • The identification probability map represents the probability that each part of the medical image is a certain type of identification, and can be a medical image annotated with probability values. Corresponding to the identification types, it can include point identification probability maps, surface identification probability maps, region identification probability maps, etc.
  • the identification probability map may correspond to a brain image, including at least one of a two-dimensional image, a three-dimensional image, and the like.
  • In some embodiments, the identification probability map may include one or more of a region identification probability map, a point identification probability map, and a surface identification probability map for a particular part (e.g., the brain).
  • In some embodiments, the markers may be brain markers, and may include point markers, surface markers, region markers, and other markers of the brain.
  • other identifications of the brain may include cortical identification points of the brain, a brain coordinate system, and the like.
  • In some embodiments, the point identification of the brain may also include one or more of the following: the midbrain-pons junction (MPJ); bifurcation points of intracranial blood vessels, which can be applied to the extraction of blood vessels; intersection points of the ventricles, corpus callosum, pons, and other structures, which can be used for brain posture correction, alignment of the brain with a template, etc.; and bifurcation points of the sulci and gyri, which can be used for brain morphology analysis.
  • the above identification can be obtained through a point identification probability map.
  • In some embodiments, one or more brain images can be input into the neural network model to obtain one or more types of output identification probability maps of the brain, for example, a region identification probability map, a point identification probability map, a surface identification probability map, etc.
  • the brain magnetic resonance image 410 can be input into the multi-task neural network model 420 to obtain the output region identification probability map 431 , point identification probability map 432 and surface identification probability map 433 .
  • In some embodiments, the neural network model can output all required identification probability maps (region identification probability maps, point identification probability maps, and surface identification probability maps) at one time, so as to obtain all identifications, including region identifications, point identifications, surface identifications, etc., which are then used for subsequent work, such as establishing a brain coordinate system and determining the identification points of the cerebral cortex.
  • the neural network model may individually output a region identification probability map, a point identification probability map, a surface identification probability map, etc., for obtaining the region identification, point identification, surface identification, and the like independently.
  • In some embodiments, the neural network model may be a neural network model pre-trained to extract the identification probability maps of a specific part from an image of that part (e.g., a brain magnetic resonance image of a human body), for example, a convolutional neural network (CNN) or a fully convolutional network (FCN).
  • the neural network model may be an FCN, and its network structure may be any type of FCN, for example, UNet (U-shaped network), VNet (V-shaped network), SegNet (semantic segmentation network), and the like.
  • the neural network model can employ a multi-task fully convolutional network with an encoder-decoder structure.
  • the neural network model may be a multi-task model including a shared network layer and at least two branched network layers.
  • In some embodiments, the parameters of the shared network layer are shared across the different tasks, while each branch network layer corresponds to a different task and its parameters differ from branch to branch.
  • the number of branch network layers may be determined according to the number of types of identification probability maps required to be output by the neural network model, eg, the same as the number of types of identification probability maps.
  • each type of identification may correspond to a task branch. Therefore, different types of identification probability maps are respectively output through different branch network layers in the neural network model.
  • In some embodiments, the point identification probability map is output through the point identification task branch network layer of the neural network model, the surface identification probability map is output through the surface identification task branch network layer, and the region identification probability map is output through the region identification task branch network layer. As shown in FIG. 4,
  • the multi-task neural network model 420 includes three task branches, respectively corresponding to cortical segmentation, anatomical point localization, and anatomical plane localization, and the outputs are respectively a region identification probability map 431 , a point identification probability map 432 , and a surface identification probability map 433.
  • the branch network layer in the neural network model may include three branch network layers, namely a first branch network layer, a second branch network layer, and a third branch network layer.
  • the first branch network layer can be used to segment the brain, and output a probability map of region identification;
  • the second branch network layer can be used to locate the point identifications of the brain (for example, the AC identification point, the PC identification point, etc.) and output the point identification probability map;
  • the third branch network layer can be used to locate the surface identification of the brain (for example, the midsagittal plane) and output the surface identification probability map, where the surface identification can correspond to all the pixels on the located plane.
  • the first branch network layer may be a branch network layer corresponding to a region identification task of a specific part (eg, brain, etc.), the output of which is a region identification probability map of the specific part.
  • In some embodiments, the value of each pixel in the region identification probability map can represent the probability that the pixel belongs to the region identification of the specific part. For example, if the region identification is the brain parenchyma segmentation region, the value of each pixel in the brain parenchyma segmentation region probability map represents the probability that the pixel is a brain parenchyma point.
  • the second branch network layer may be a branch network layer corresponding to the task of point identification (eg, AC identification point, PC identification point, etc.) of a specific part, and its output is a point identification probability map of the specific part.
  • the value of each pixel in the point identification probability map can represent the probability value that the pixel is the point identification of the specific part.
  • In some embodiments, the value of each pixel in the anterior commissure probability map represents the probability value that the pixel is the anterior commissure identification point.
  • the value of each pixel in the posterior commissure probability map represents the probability value that the pixel is a posterior commissure identification point.
  • In some embodiments, the third branch network layer may be a branch network layer corresponding to a surface identification (e.g., midsagittal plane) task of a specific part, and its output is the surface identification probability map of that part.
  • In some embodiments, the value of each pixel in the surface identification probability map can represent the probability value that the pixel belongs to the surface identification of the specific part. For example, if the surface identification is the midsagittal plane, the value of each pixel in the midsagittal plane probability map represents the probability value that the pixel is a point on the midsagittal plane.
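  • A minimal PyTorch sketch of the shared-trunk, three-branch layout described above is given below; the 3D convolution sizes, channel counts, and sigmoid outputs are illustrative assumptions, and in practice the trunk would be a full encoder-decoder FCN such as UNet or VNet:

```python
import torch
import torch.nn as nn

class MultiTaskBrainNet(nn.Module):
    """Shared network layer plus three branch network layers."""

    def __init__(self, in_ch=1, feat=16):
        super().__init__()
        # Shared network layer: parameters shared across all three tasks.
        self.shared = nn.Sequential(
            nn.Conv3d(in_ch, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
        )

        def head(out_ch):
            # Branch network layer: parameters differ per task branch.
            return nn.Sequential(
                nn.Conv3d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv3d(feat, out_ch, 1), nn.Sigmoid(),
            )

        self.region_head = head(1)   # region identification probability map
        self.point_head = head(2)    # point maps, e.g. one channel each for AC and PC
        self.surface_head = head(1)  # surface identification probability map

    def forward(self, x):
        f = self.shared(x)
        return self.region_head(f), self.point_head(f), self.surface_head(f)

# Usage: region, point, surface = MultiTaskBrainNet()(torch.randn(1, 1, 32, 64, 64))
```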
  • the initial neural network model may be trained based on the training sample images and the corresponding gold standard images to obtain a trained neural network model. For more details on how to train the neural network model, reference may be made to the relevant description of FIG. 5 , and details are not repeated here.
  • Step 330: Determine the segmentation result of the cerebral cortex of the brain, the point identification of the brain, and/or the surface identification of the brain, respectively, according to the region identification probability map, the point identification probability map, and the surface identification probability map. In some embodiments, step 330 may be performed by the identification positioning module 230.
  • the identification localization result refers to information that can be used as an identification of a biological organ/tissue, for example, a point identification, a surface identification, an area identification, a brain coordinate system, a cerebral cortex identification point, and the like.
  • the identification positioning result of the specific part can be determined according to the identification probability map of the specific part (eg, the region identification probability map, the point identification probability map, the surface identification probability map, etc.).
  • the region identification may be the segmentation result of a specific region, for example, the segmentation result of the cerebral cortex.
  • In some embodiments, the point identification of a specific part can be determined according to the point identification probability map obtained in step 320; for example, the specific location of the anterior commissure identification point AC is determined from the anterior commissure probability map, and the specific location of the posterior commissure identification point PC is determined from the posterior commissure probability map.
  • the point identification 442 may be determined from the point identification probability map 432 .
  • the point identification probability map may include an anterior commissure probability map and a posterior commissure probability map
  • the point identification may include an anterior commissure identification point AC and a posterior commissure identification point PC.
  • the location of key points can be achieved by determining point identifiers of specific parts, wherein the point identifiers can be used as key points.
  • the point identification may also include other brain key points.
  • In some embodiments, point positioning is widely used in intelligent workflows or as an intermediate step of automatic algorithms. For example, by locating pairs of key points, point-based registration can be achieved between multiple images, or between image space and physical space.
  • the position of the pixel point corresponding to the maximum probability value in the point identification probability map may be determined as the position of the point identification. Specifically, determine the coordinate position of the pixel corresponding to the maximum probability value in the anterior commissure probability map, and determine the position coordinate of the pixel corresponding to the maximum probability value as the position coordinate of the anterior commissure identification point AC; Similarly, the coordinate position of the pixel corresponding to the maximum probability value in the posterior commissure probability map is determined, and the position coordinate of the pixel corresponding to the maximum probability value is determined as the position coordinate of the posterior commissure identification point PC.
  • the coordinate position here refers to the row and column layer coordinates (i, j, l) in the probability map, where i represents the row, j represents the column, and l represents the layer.
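  • A minimal numpy sketch of this maximum-probability lookup (array names are hypothetical):

```python
import numpy as np

def locate_point(prob_map):
    # Row/column/layer coordinates (i, j, l) of the pixel with maximum probability.
    return np.unravel_index(np.argmax(prob_map), prob_map.shape)

# ac_ijl = locate_point(anterior_commissure_prob)   # AC position
# pc_ijl = locate_point(posterior_commissure_prob)  # PC position
```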
  • AC and PC are the determined anterior commissure identification points and posterior commissural identification points, respectively.
  • In some existing methods, prior knowledge is used to determine a region of interest (ROI) containing the corpus callosum, and segmentations of the corpus callosum, fornix, and brainstem are used to determine the locations of the AC and PC according to their spatial relationships to these anatomical structures.
  • In the methods for determining the AC and PC provided by some embodiments of this specification, an anterior commissure probability map and a posterior commissure probability map are first obtained through a preset neural network model, and the corresponding AC and PC are then determined from these probability maps, so that the brain AC and PC can be determined efficiently and accurately without relying on the pre-positioning of other structures and without manual point selection.
  • the surface identification of a specific part can be determined according to the surface identification probability map of the specific part, for example, the specific position of the midsagittal plane is determined from the surface identification probability map.
  • the face identification 443 may be determined from the face identification probability map 433 .
  • the face identification probability map may include a midsagittal probability map, and the face identification may include a midsagittal plane.
  • Figure 11 is a schematic diagram of the determined midsagittal plane.
  • The longitudinal fissure is an important anatomical landmark of the brain; within its extent there is an approximate virtual plane relative to which the left and right hemispheres of the human brain are symmetric. This plane is called the midsagittal plane (MSP).
  • the brain anatomy on both sides of the midsagittal plane achieves maximum symmetry with respect to the midsagittal plane.
  • In some embodiments, anatomical plane localization of the brain may be achieved by determining the surface identification of the brain, among which localization of the midsagittal plane is the most common.
  • the midsagittal plane positioning is used in many scenarios.
  • the midsagittal plane is the symmetry plane of the brain. After positioning the midsagittal plane, the brain can be easily divided into left and right sides, and the symmetry of the two sides can be analyzed for disease diagnosis scenarios.
  • For example, the midsagittal offset (midline shift) can be calculated by locating the midsagittal plane, which can be used to assess the severity of a hematoma.
  • the set of target points may be determined from a face identification probability map.
  • In some embodiments, a set of pixel points whose probability is greater than a preset threshold in the surface identification probability map may be determined as the target point set. Take the surface identification probability map as the midsagittal plane probability map and the surface identification as the midsagittal plane as an example: first, the midsagittal plane point set S (the target point set) is extracted from the midsagittal plane probability map, for example, by performing fixed-threshold segmentation on the probability map and taking the set of points whose probability value is greater than the preset threshold as the midsagittal plane point set S.
  • the preset threshold may be any constant between 0 and 1, for example, 0.5. After the midsagittal plane point set S is obtained, the midsagittal plane point set S is fitted according to a preset algorithm, and the obtained fitting plane is the midsagittal plane of the target object's brain.
  • the target point set may also be determined in other ways, for example, the set of pixel points with the highest probability in the surface identification probability map is taken as the target point set, which is not limited in this specification.
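  • A minimal numpy sketch of extracting the target point set by fixed-threshold segmentation, assuming the example threshold of 0.5:

```python
import numpy as np

def target_point_set(surface_prob, threshold=0.5):
    # Point set S: coordinates (i, j, l) of all pixels whose probability
    # value exceeds the preset threshold.
    return np.argwhere(surface_prob > threshold).astype(float)
```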
  • the target point set may be fitted to obtain the surface identification.
  • the target point set may be fitted according to the random sampling consistency (Random Sample Consensus, RANSAC) method to obtain the surface identifier.
  • In some embodiments, taking the surface identification probability map as the midsagittal plane probability map and the surface identification as the midsagittal plane, the random sampling consistency method may proceed as follows. The midsagittal plane obtained by fitting can be represented either by a linear equation or by a point O on the fitted plane together with its normal vector n; the latter representation is used as the example here, so the fitted midsagittal plane can be expressed as (O, n). If the preset algorithm is the random sampling consistency method, the plane fitting based on this method is as follows: M points are randomly sampled from the point set S, a plane L_i is fitted to these M points and recorded, and the sum of squared distances from the remaining points to the plane L_i is calculated and recorded as Dist_i. The plane L_k corresponding to the sampling (denote its index as k) with the smallest recorded distance Dist_i is taken as the initial positioning plane of the midsagittal plane.
  • the principal component analysis (Principal Component Analysis, PCA) method can be used to determine the normal vector of the midsagittal plane by obtaining the minimum principal component direction of M points.
  • In some embodiments, the M sampling points are represented as a matrix A_{M,3} with M rows and 3 columns, and the eigendecomposition of the matrix A can be realized by singular value decomposition (SVD), which can be expressed by the following formula:
  • A_{M,3} = U_{M,M} Σ_{M,3} V_{3,3}^T    (3)
  • where U represents the left singular matrix, V represents the right singular matrix, and Σ represents the singular value matrix. The normal vector of the midsagittal plane is the right singular vector corresponding to the smallest singular value (the minimum principal component direction).
  • the target point set may also be fitted in other ways to obtain the surface identification, which is not limited in this specification.
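  • A minimal numpy sketch of the fitting procedure described above, assuming the point set S from the previous step; each iteration samples M points, fits a candidate plane by SVD (its normal being the minimum principal component direction), scores the plane by the sum of squared distances of the remaining points, and keeps the best candidate (M and the iteration count are hypothetical parameters):

```python
import numpy as np

def fit_plane_svd(points):
    # Plane through the centroid O with normal n: n is the right singular
    # vector of the centered M x 3 matrix A (A = U @ S @ Vt, formula (3))
    # corresponding to the smallest singular value.
    O = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - O)
    return O, vt[-1]

def ransac_midsagittal(S, M=50, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    best, best_dist = None, np.inf
    for _ in range(iters):
        idx = rng.choice(len(S), size=M, replace=False)
        O, n = fit_plane_svd(S[idx])
        rest = np.delete(S, idx, axis=0)
        dist_i = np.sum(((rest - O) @ n) ** 2)  # Dist_i: sum of squared distances
        if dist_i < best_dist:
            best, best_dist = (O, n), dist_i
    return best  # (O, n): initial positioning plane of the midsagittal plane
```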
  • the midsagittal plane is an important reference plane of the Talairach coordinate system, and its positioning is a prerequisite for positioning AC and PC.
  • In some existing approaches, methods for locating the midsagittal plane include, but are not limited to, methods based on global symmetry analysis, brain parenchyma segmentation, feature point detection, and atlas registration. These algorithms can achieve good results on normal brain structures, but for pathological brain structures, for example, when the brain structure loses symmetry or differs significantly from the template, their adaptability is greatly reduced.
  • In the methods provided by some embodiments of this specification, a midsagittal plane probability map is first obtained through a preset neural network model, and the midsagittal plane is then determined from the probability map accordingly; whether for a normal brain structure, a pathological brain structure, or a structure that differs greatly from the template, the midsagittal plane of the target object's brain can be determined efficiently and accurately.
  • the region identification of a specific part can be determined according to the region identification probability map of the specific part (eg, brain, etc.), for example, the segmentation result of the target region can be determined from the region identification probability map. As shown in FIG. 4 , the region identification 441 may be determined according to the region identification probability map 431 .
  • the region identification probability map is an image representing the probability of region identification, for example, a brain parenchyma segmentation region probability map or the like.
  • the target area is a specific area of a specific part, for example, the cerebral cortex, the sulcus gyrus of the brain, the left and right brain, the cerebellum, the ventricle, the brain stem, etc.
  • the segmentation result of the target region may be the result of regional segmentation of various organs/tissues, for example, the segmentation result of the cerebral cortex.
  • the target region may include the cerebral cortex
  • the region identification probability map of the brain may include the brain parenchyma segmentation region probability map
  • the segmentation result of the cerebral cortex may include the brain parenchyma segmentation region, etc.
  • the segmentation results of the cerebral cortex can be determined from the probability map of the segmentation regions of the brain parenchyma.
  • the sulci and gyrus of the brain, the left and right cerebrum, the cerebellum, the ventricle, the brain stem and other structures can also be segmented to obtain the segmentation result of the corresponding target area.
  • the segmentation of these structures can be widely used in scenarios such as brain area parameter statistics for diagnosis and surgical planning.
  • the region identification probability map may include a brain parenchyma segmentation region probability map, and the region identification may include brain parenchyma segmentation regions.
  • In some embodiments, the brain parenchyma segmentation region probability map can be segmented with a preset threshold to generate a brain parenchyma segmentation binary mask image, and the brain parenchyma segmentation region is then determined according to the binary mask image.
  • Specifically, threshold segmentation is performed on the brain parenchyma segmentation region probability map, where the preset threshold can usually be a constant between 0 and 1, for example, 0.5. After thresholding, pixels whose probability value is greater than or equal to 0.5 are set to 1, and pixels whose probability value is less than 0.5 are set to 0, so that every pixel in the image takes the value 0 or 1, forming the brain parenchyma segmentation binary mask image. The brain parenchyma segmentation region is then determined according to this binary mask image.
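  • A minimal numpy/scipy sketch of the thresholding step, together with an illustrative extraction of the cerebral cortex contour as the boundary of the parenchyma mask (the use of morphological erosion here is an assumption, not the patented method):

```python
import numpy as np
from scipy.ndimage import binary_erosion

def parenchyma_mask(region_prob, threshold=0.5):
    # Pixels with probability >= threshold are set to 1, the rest to 0.
    return (region_prob >= threshold).astype(np.uint8)

def cortex_contour(mask):
    # Outer shell of the parenchyma mask: mask minus its erosion.
    m = mask.astype(bool)
    return np.argwhere(m & ~binary_erosion(m))
```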
  • Each pixel in each type of identification probability map represents the probability value of the corresponding identification. Therefore, in some embodiments, the specific position of the corresponding identification in the identification probability map may be determined according to the probability value of each pixel. In some embodiments, the specific location of the corresponding identification may also be determined from the identification probability map in other ways. For example, it may be determined by another neural network model, that is, each type of identification probability map is input into another pre-trained neural network model to obtain the position of the corresponding identification in the probability map. For another example, pixel points in the probability map may be screened by a preset condition, and those meeting the condition determined as the corresponding identification. This specification does not limit this.
  • Step 340: Construct a target coordinate system according to the point identification and the surface identification.
  • the brain coordinate system 450 can be established according to the point identification 442 and the face identification 443 .
  • step 340 may be performed by identity location module 230 .
  • The target coordinate system is a coordinate system established based on a specific part, which can be used to represent the spatial structure and positional relationships of that part; it can be any of various coordinate systems, such as a planar rectangular coordinate system, a spherical coordinate system, the Talairach coordinate system, etc.
  • the target coordinate system can be a brain coordinate system, and the brain coordinate system can realize the correspondence between the structure and the spatial position of the target object's brain, for example, the Talairach coordinate system.
  • the coordinate system makes it possible to study the same brain region of different target subjects in the same neuroanatomical space for lateral comparison.
  • In some embodiments, a brain coordinate system may be established based on the point identifications and surface identifications extracted in the above steps. The following takes the point identifications as the anterior commissure identification point AC and the posterior commissure identification point PC, and the surface identification as the midsagittal plane MSP, as an example to illustrate the establishment of the Talairach coordinate system.
  • The AC can be used as the origin of the coordinate system, with the direction from the PC to the AC defined as the positive direction of the Y-axis; the axis perpendicular to the midsagittal plane and passing through the AC point is defined as the X-axis, with the positive direction from the right to the left of the brain; the axis perpendicular to the X-Y plane and passing through the AC point is defined as the Z-axis, with the positive direction from the feet to the head. In this way, the brain coordinate system (Talairach coordinate system) shown in Figure 9 can be constructed.
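  • A minimal numpy sketch of constructing these axes from the AC and PC positions and the fitted midsagittal-plane normal, all expressed in the same image coordinate space; the left/right and foot/head sign conventions are assumptions that would need to be checked against the actual image orientation:

```python
import numpy as np

def talairach_axes(ac, pc, msp_normal):
    # Origin at AC; Y-axis from PC toward AC; X-axis along the midsagittal
    # normal, orthogonalized against Y; Z-axis completes the frame.
    y = (ac - pc) / np.linalg.norm(ac - pc)
    x = msp_normal - np.dot(msp_normal, y) * y
    x /= np.linalg.norm(x)
    z = np.cross(x, y)
    return ac, np.stack([x, y, z])  # origin, 3x3 axis matrix (rows X, Y, Z)

# Talairach coordinates of a voxel p: axes @ (p - origin)
# origin, axes = talairach_axes(np.array(ac_ijl, float), np.array(pc_ijl, float), n)
```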
  • Step 350: Determine the identification points of the cerebral cortex according to the segmentation result of the cerebral cortex, the target coordinate system, and/or the point identification.
  • the cerebral cortex identification point 460 can be extracted according to the region identification 441 , the point identification 442 and the brain coordinate system 450 .
  • step 350 may be performed by identity location module 230 .
  • In some embodiments, the identification points of the cerebral cortex may include at least one of the most anterior point, the most posterior point, the leftmost point, the rightmost point, the most inferior point, and the most superior point of the cerebral cortex.
  • An identification point refers to one or more points used for identification that can represent the spatial structure and/or spatial position of the target area, for example, a Talairach cortex identification point and the like.
  • The Talairach cortex identification points are the most anterior, most posterior, leftmost, rightmost, most inferior, and most superior points of the brain (excluding scalp and cerebrospinal fluid) in the Talairach coordinate space (the direction descriptions are based on the patient coordinate system), a total of six points; these six points are called cerebral cortex identification points.
  • The specific definitions are shown in Table 1:
  • Table 1
    Cerebral cortex identification point | Medical standard determination method
    Most anterior point of the cerebral cortex (AP) | Intersection of the Y-axis and the cerebral cortex at the front of the brain
    Most posterior point of the cerebral cortex (PP) | Intersection of the Y-axis and the cerebral cortex at the back of the brain
    Leftmost point of the cerebral cortex (LP) | Intersection of the line passing through the PC and parallel to the X-axis with the left side of the cerebral cortex
    Rightmost point of the cerebral cortex (RP) | Intersection of the line passing through the PC and parallel to the X-axis with the right side of the cerebral cortex
    Most inferior point of the cerebral cortex (IP) | Intersection of the Z-axis and the lower side of the cerebral cortex
    Most superior point of the cerebral cortex (SP) | Intersection of the line passing through the PC and parallel to the Z-axis with the top of the cerebral cortex
  • In some embodiments, the target region may include the cerebral cortex, the segmentation result of the target region may include the segmentation result of the cerebral cortex, the target coordinate system may be the Talairach coordinate system, the point identification may include the anterior commissure identification point AC and the posterior commissure identification point PC, and the surface identification may include the midsagittal plane.
  • the cerebral cortex identification point can be determined according to the segmentation result of the cerebral cortex, the target coordinate system and/or the point identification.
  • the cerebral cortex contour can be determined according to the brain parenchyma segmentation area, and the outer contour of the brain parenchyma is the cerebral cortex contour, that is, the area formed by the cerebral cortex contour is the brain parenchyma segmentation area.
  • In some embodiments, the cerebral cortex identification points may be determined according to the cerebral cortex contour and each axis of the brain coordinate system determined above, for example, the most anterior point AP, the most posterior point PP, the leftmost point LP, the rightmost point RP, the most inferior point IP, and the most superior point SP of the cerebral cortex.
  • the identification point of the cerebral cortex can be determined according to the maximum point or the minimum point of the cerebral cortex in the direction of the three coordinate axes in the target coordinate system.
  • the point with the largest Y-coordinate value of the cerebral cortex contour on the Y-axis of the Talairach coordinate system can be determined as the most anterior point AP of the cerebral cortex; the point with the smallest Y-coordinate value of the cerebral cortex contour on the Y-axis of the Talairach coordinate system can be determined as the cerebral cortex The last lateral point PP; the point with the smallest Z coordinate value of the cerebral cortex contour on the Z axis of the Talairach coordinate system is determined as the lowest lateral point IP of the cerebral cortex.
  • the maximum point or minimum point of the cerebral cortex identified by the point and in the direction of the three coordinate axes in the parallel target coordinate system can be determined as the identification point of the cerebral cortex. For example, pass the post commissure marker PC and determine the point with the maximum X-coordinate value of the cerebral cortex contour on the X-axis of the Talairach coordinate system as the rightmost point RP of the cerebral cortex; The point with the smallest X-coordinate value of the cerebral cortex contour on the X-axis of the Talairach coordinate system is determined as the leftmost point LP of the cerebral cortex; The point was identified as the uppermost point SP of the cerebral cortex.
  • the above-mentioned six cerebral cortex identification points can be obtained by calculating the maximum and minimum coordinate values of all the contour points of the cerebral cortex in the Talairach coordinate system along the X, Y and Z axes respectively.
  • the point is determined as the frontmost point AP of the cerebral cortex; the cortical contour is searched in the opposite direction of the Y-axis of the Talairach coordinate system, and the point with the Y-axis coordinate value of the minimum Y-coordinate value of the cerebral cortex contour is determined as the rearmost point PP of the cerebral cortex; Search the pixel points on the cortical contour in the direction, and determine the point whose Z-axis coordinate value is the smallest Z-coordinate value of the cerebral cortex contour as the lowermost point IP of the cerebral cortex; The point whose Z coordinate value is the maximum Z coordinate value of the cerebral cortex contour is determined as the uppermost point SP of the cerebral cortex; the posterior commissure point PC is the starting point, and the pixel of the cortex contour is searched along the positive direction of the X axis of the Talairach coordinate system, and the X axis coordinate value is the cerebral cortex.
  • the point with the minimum X-coordinate value of the contour is determined as the leftmost point LP of the cerebral cortex; the pixel of the cortical contour is searched in the opposite direction of the X-axis of the Talairach coordinate system, and the point on the X-axis with the X-axis coordinate value of the maximum X-coordinate value of the cerebral cortex contour is determined as The rightmost point of the cerebral cortex is RP.
  • Conventionally, cerebral cortex identification points are extracted either by segmenting the two-dimensional planar brain tissue in which each identification point lies and then locating the point, or by performing 3D cortical segmentation with a three-dimensional deformable model to locate the cortical identification points.
  • However, the stability and time efficiency of such methods are poor: the overall algorithm flow is complicated, the fault tolerance is low, and the efficiency is low.
  • The method for extracting cerebral cortex identification points described in some embodiments of this specification first extracts a probability map of the brain parenchyma segmentation region through a neural network model, then determines the brain parenchyma segmentation region from the probability map, determines the cerebral cortex contour from the segmentation region, and finally determines the cerebral cortex identification points from the maximum and minimum coordinate values of the cerebral cortex contour along each axis of the brain coordinate system. In this way, both stability and extraction efficiency are greatly improved, enabling efficient and accurate determination of the cerebral cortex identification points.
  • The processes of determining the point identification, plane identification, region identification, and other identifications of the brain (for example, the cerebral cortex identification points) have been described above in separate embodiments. It should nevertheless be emphasized that, in the embodiments of this specification, the extraction of point, plane, and region identifications is performed simultaneously. Specifically, after the probability maps of the different identification types are obtained through the preset neural network model, each probability map is processed by the method described in the corresponding embodiment to obtain the identification of that type. Going from a probability map to the corresponding identification can be regarded as a post-processing step. In this way, the probability map of each identification type is extracted first, the corresponding post-processing is then applied, and all brain identifications of the target subject are extracted as a whole.
  • In some embodiments, the entire pipeline of acquiring the identification probability maps and post-processing them (determining the identification positioning results) can be completed by a single model. The input of the model can be an image of a specific part, for example, a brain magnetic resonance image, and the output can be any identification positioning result, for example, a region identification, a point identification, a plane identification, a brain coordinate system, or the cerebral cortex identification points.
  • In some embodiments, the model may be a machine learning model, e.g., a neural network model such as a CNN or an FCN.
  • In some embodiments, the model may be a single model, or may be formed by connecting multiple models in sequence. For example, it may consist of two models connected end to end, where the former model determines and outputs the identification probability map, and the latter model receives the identification probability map output by the former model and determines and outputs the identification positioning result. The model can be trained in various ways, for example, by joint training.
  • FIG. 5 is an exemplary flowchart of a training method of a neural network model according to some embodiments of the present specification.
  • Process 500 may include the following steps. In some embodiments, process 500 may be performed by the second computing device 130.
  • Step 510: Acquire training sample images and the gold standard image corresponding to each training sample image, where the gold standard images include region identification gold standard images, point identification gold standard images, and/or plane identification gold standard images.
  • The training sample images constitute the training sample set, i.e., the sample images used to train the neural network model. They can be various types of images, such as CT images or MRI images, and images of various organs/tissues, such as brain images or cardiac images. In some embodiments, the training sample images may include head magnetic resonance images.
  • A gold standard image is the image used as the label of a training sample image, and can be the annotated training sample image.
  • In some embodiments, the gold standard images may include region identification gold standard images, point identification gold standard images, and/or plane identification gold standard images.
  • In some embodiments, the neural network model may be a multi-task network model, and different tasks may extract different types of identifications. Therefore, before training the neural network model, training sample images and gold standard images corresponding to each type of identification task need to be acquired. To make the identification probability maps of each type output by the neural network more accurate, the diversity of the samples should be enriched as much as possible when acquiring the training sample images for each identification task.
  • In some embodiments, training sample images may be obtained in various ways. For example, a large number of normal, pathological, and otherwise special brain magnetic resonance images of different subjects and different modalities may be acquired by scanning or read from a memory. As another example, each magnetic resonance image may be augmented by scaling, cropping, deformation, and similar processing.
  • In some embodiments, gold standard annotation may be performed on the training sample images to obtain the corresponding gold standard images.
  • In some embodiments, the training sample images may be annotated with gold standards by different methods according to the specific type of task.
  • For the point identification task, the probability value of the pixel where an anatomical identification point is located in each brain magnetic resonance image may be annotated as a first value, and the probability values of the pixels other than the anatomical identification point may be annotated as a second value, to obtain the gold standard image corresponding to the point identification task. The first value can be set to 1, and the second value is determined by an algorithm constructed from the distance between each pixel and the anatomical identification point.
  • Specifically, the anatomical identification points may include the AC and PC points, whose position coordinates are recorded in the brain magnetic resonance image. Since there are two anatomical identification points, two probability maps of the same size as the brain magnetic resonance image need to be generated: one for the AC point and one for the PC point.
  • Taking the AC gold standard probability map as an example, the value of the pixel at the annotated AC position is 1, and the probability values of the remaining pixels decrease with increasing distance from the annotated AC position.
  • The probability is a Gaussian function of the distance, and the probability values calculated according to this Gaussian function are collectively referred to as the second value.
  • The Gaussian function can be determined by the following formula:
    p = e^(-d^2 / (2σ^2))    (5)
    where p is the probability value of the pixel; d is the distance between the pixel and the annotated AC or PC point; and σ is the variance parameter of the Gaussian function, which can take any constant, for example, σ = 10.
  • The magnitude of σ affects the convergence speed of the algorithm and the positioning accuracy of the final point. Therefore, to achieve fast convergence early in training and good positioning accuracy late in training, a larger value of σ can be used in the early stage to speed up convergence, and σ can be gradually reduced during training to improve the point prediction accuracy in the later stage.
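  • By way of example only, the following is a minimal NumPy sketch of generating such a Gaussian gold standard heatmap for one annotated point; the voxel-distance units and the idea of passing a per-epoch σ to mimic the annealing schedule described above are our own assumptions, not details fixed by this specification.

    import numpy as np

    def point_heatmap(shape, point_ijl, sigma=10.0):
        """Gold standard heatmap for one anatomical point (e.g., AC or PC).

        shape: volume shape (I, J, L); point_ijl: annotated voxel coordinates;
        sigma: Gaussian spread, annealed from large to small across epochs.
        """
        grid = np.indices(shape, dtype=np.float32)           # (3, I, J, L)
        d2 = sum((grid[k] - point_ijl[k]) ** 2 for k in range(3))
        return np.exp(-d2 / (2.0 * sigma ** 2))              # Equation 5: p = e^(-d^2/2σ^2)

    # One heatmap per point: AC and PC each get a full-size probability map, e.g.,
    # heatmap_ac = point_heatmap(img.shape, ac_voxel, sigma=sigma_for_epoch)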
  • For the plane identification task, the probability values of the pixels on the midsagittal plane in each brain magnetic resonance image may be annotated as the first value, and the probability values of the pixels other than those on the midsagittal plane may be annotated as a third value, to obtain the gold standard image corresponding to the plane identification task. As with the point identification, the first value can be set to 1, and the third value is determined by a preset algorithm constructed from the distance between each pixel and the midsagittal plane.
  • Specifically, the annotation may be performed by selecting, with an annotation tool, pixels located on the midsagittal plane of the brain in a number of magnetic resonance images, and then fitting these pixels to obtain the gold standard midsagittal plane equation. In theory, more than 3 selected pixels suffice, but the more pixels, the better the effect; for example, 20 pixels may be uniformly annotated on the midsagittal plane of each image for plane fitting. After the midsagittal plane is determined, the remaining pixels of the image are annotated. Since there is only one midsagittal plane, the annotation produces a probability map of the same size as the original image: each pixel located on the midsagittal plane has a probability value of 1, and each pixel not located on the midsagittal plane has a probability value that is larger the closer the pixel is to the midsagittal plane, following the Gaussian distribution defined by Equation 5.
  • The probability values calculated according to the Gaussian function defined in Equation 5 may be collectively referred to as the third value, where d in Equation 5 here denotes the distance between the pixel and the midsagittal plane.
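  • By way of example only, the plane gold standard can be produced analogously to the point heatmap, replacing the point distance by the point-to-plane distance. The sketch below is our own illustration, assuming the fitted gold standard plane is given by a point O on it and a normal vector n.

    import numpy as np

    def plane_heatmap(shape, point_on_plane, normal, sigma=10.0):
        """Gold standard heatmap for the midsagittal plane (MSP)."""
        n = np.asarray(normal, dtype=np.float32)
        n /= np.linalg.norm(n)                                # unit normal
        grid = np.indices(shape, dtype=np.float32)            # (3, I, J, L)
        diff = np.stack([grid[k] - point_on_plane[k] for k in range(3)])
        d = np.abs(np.tensordot(n, diff, axes=1))             # |n · (x - O)|
        return np.exp(-d ** 2 / (2.0 * sigma ** 2))           # Equation 5 with plane distance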
  • For the region identification task, the probability values of the brain parenchyma pixels in each brain magnetic resonance image can be annotated as the first value, and the probability values of the non-parenchyma pixels can be annotated as a fourth value, to obtain the gold standard image corresponding to the region identification task. As with the point identification, the first value can be set to 1, and the fourth value can be set to 0.
  • Specifically, the brain magnetic resonance image is annotated pixel by pixel to obtain a binary image of the same size as the input magnetic resonance image containing only the values 0 and 1, where a pixel with value 1 belongs to the brain parenchyma and a pixel with value 0 does not.
  • The annotation can be produced with the open source software freesurfer and fine-tuned on the basis of the software's automatic annotation results.
  • In some embodiments, the branch of the neural network model corresponding to the region identification task can output a 2-channel probability map, representing the predicted probability of the background and the predicted probability of the cerebral cortex (brain parenchyma), respectively. The same gold standard can be used to generate two binary images containing only the values 0 and 1: in the probability map representing the background, the background is 1 and the target is 0; in the probability map representing the cortex, the background is 0 and the target is 1.
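  • As a small illustration (our own, not the specification's), the two-channel gold standard can be built from the binary parenchyma mask as a one-hot stack:

    import numpy as np

    def two_channel_gold_standard(parenchyma_mask):
        """Channel 0: background (1 outside the target); channel 1: parenchyma (1 on the target)."""
        fg = parenchyma_mask.astype(np.float32)
        return np.stack([1.0 - fg, fg], axis=0)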
  • Step 520: Input each training sample image into the initial neural network model, and obtain the predicted region identification probability map output by the first branch network layer, the predicted point identification probability map output by the second branch network layer, and/or the predicted plane identification probability map output by the third branch network layer.
  • The initial neural network model is an untrained neural network model, e.g., a CNN or an FCN.
  • In some embodiments, an initial neural network model may be constructed based on the type and number of tasks, among other things.
  • In some embodiments, the initial neural network model may be constructed before training starts, setting up a shared network layer and branch network layers for the different types of identification tasks.
  • The network layers are mainly composed of convolution layers, and also include normalization layers (batch normalization, instance normalization, or group normalization) and activation layers; they can also include pooling layers, transposed convolution layers, up-sampling layers, etc., which are not limited in the embodiments of the present specification.
  • All three types of identification tasks, namely the point identification task, the plane identification task, and the region identification task, can be performed by heatmap regression, so the outputs of the three branches are probability maps of the same size as the input image.
  • Correspondingly, the output of the region identification task is the probability map of the brain parenchyma segmentation region, from which the brain parenchyma segmentation region is determined; the cerebral cortex contour is then determined based on the brain parenchyma segmentation, and the cerebral cortex identification points are determined according to the cerebral cortex contour and the brain coordinate system.
  • The output of the point identification task consists of the anterior commissure probability map and the posterior commissure probability map, from which the two points, the anterior commissure AC and the posterior commissure PC, are located.
  • The output of the plane identification task is the midsagittal plane probability map, from which the midsagittal plane is located (equivalent to locating all pixels on the midsagittal plane).
  • In some embodiments, when constructing the neural network model, the network structure may adopt any type of fully convolutional network, for example, UNet, VNet, SegNet, or a multi-task fully convolutional network with an encoder-decoder structure.
  • FIG. 8 is a schematic structural diagram of a neural network model according to some embodiments of the present specification.
  • FIG. 8 shows a multi-task fully convolutional network with an encoder-decoder structure, including one encoder and three decoders, where the encoder is connected to the decoders through skip connections and a bottom-layer connection.
  • In some embodiments, the UNet network can be used as the basic model of the multi-task fully convolutional network: the encoding structure (Encoder) of the UNet network serves as the weight sharing layer, and three different decoding structures (Decoder) branch out from it.
  • In practice, the structure of the original UNet network can be adapted to the actual situation, for example, by reducing the number of base channels and the number of downsampling steps, which reduces the resource usage of the algorithm.
  • Although the overall structure of the three decoder branches derived from UNet is the same, the number of output channels of the three branches can differ because the tasks differ.
  • The output of branch 1 can be a 2-channel probability map, where each channel corresponds to one probability map, representing the predicted probability of the brain parenchyma segmentation region and the predicted probability of the brain background, respectively.
  • The output of branch 2 can be a 2-channel probability map, representing the predicted probability of the anterior commissure AC point and the predicted probability of the posterior commissure PC point, respectively.
  • The output of branch 3 can be a 1-channel probability map, representing the predicted probability of points on the midsagittal plane.
  • In some embodiments, the output of branch 1 may instead be a 1-channel probability map, representing the predicted probability of the brain parenchyma segmentation region or the predicted probability of the brain background.
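  • By way of example only, the following is a minimal PyTorch sketch of such a multi-task network with one shared encoder and three decoder branches. The depth, channel counts, normalization choice, and the assumption of a single-channel 3D input whose spatial dimensions are divisible by 4 are our own simplifications, not details fixed by this specification.

    import torch
    import torch.nn as nn

    def conv_block(cin, cout):
        return nn.Sequential(
            nn.Conv3d(cin, cout, 3, padding=1), nn.InstanceNorm3d(cout), nn.ReLU(inplace=True),
            nn.Conv3d(cout, cout, 3, padding=1), nn.InstanceNorm3d(cout), nn.ReLU(inplace=True),
        )

    class Decoder(nn.Module):
        """One task branch: upsamples shared features back to the input size."""
        def __init__(self, base, out_channels, final_act):
            super().__init__()
            self.up1 = nn.ConvTranspose3d(base * 4, base * 2, 2, stride=2)
            self.dec1 = conv_block(base * 4, base * 2)     # after skip concatenation
            self.up2 = nn.ConvTranspose3d(base * 2, base, 2, stride=2)
            self.dec2 = conv_block(base * 2, base)
            self.head = nn.Conv3d(base, out_channels, 1)
            self.act = final_act

        def forward(self, x, skips):
            x = self.dec1(torch.cat([self.up1(x), skips[1]], dim=1))
            x = self.dec2(torch.cat([self.up2(x), skips[0]], dim=1))
            return self.act(self.head(x))

    class MultiTaskUNet(nn.Module):
        """Shared encoder, three task decoders (region / points / plane)."""
        def __init__(self, base=8):
            super().__init__()
            self.enc1 = conv_block(1, base)
            self.enc2 = conv_block(base, base * 2)
            self.bottom = conv_block(base * 2, base * 4)
            self.pool = nn.MaxPool3d(2)
            self.region = Decoder(base, 2, nn.Softmax(dim=1))  # background + parenchyma
            self.points = Decoder(base, 2, nn.Sigmoid())       # AC and PC heatmaps
            self.plane = Decoder(base, 1, nn.Sigmoid())        # MSP heatmap

        def forward(self, x):
            s1 = self.enc1(x)
            s2 = self.enc2(self.pool(s1))
            b = self.bottom(self.pool(s2))
            skips = (s1, s2)
            return self.region(b, skips), self.points(b, skips), self.plane(b, skips)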
  • In some embodiments, a normalization operation may be performed on the training sample images before they are input into the constructed initial neural network model.
  • Taking a T1 image (magnetic resonance T1-weighted image) as an example of a training sample, the T1 image needs to be normalized before being input into the constructed initial neural network model.
  • The normalization can be done in various ways, which are not limited in this specification. For example, the mean and variance of the input sample may be computed, and then, for the grayscale of each pixel of the sample, the mean is subtracted and the result is divided by the variance.
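  • For illustration only, a common variant of this per-sample normalization is sketched below; note that the text says "divide by the variance", whereas the usual z-score form divides by the standard deviation, which is what this sketch does.

    import numpy as np

    def normalize(volume):
        """Per-sample intensity normalization: subtract the mean, divide by the spread."""
        v = volume.astype(np.float32)
        return (v - v.mean()) / (v.std() + 1e-8)   # epsilon guards against division by zero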
  • After normalization, the normalized training sample set, i.e., each normalized training sample image, can be input into the initial neural network model to obtain the predicted region identification probability map output by the first branch network layer, the predicted point identification probability map output by the second branch network layer, and the predicted plane identification probability map output by the third branch network layer.
  • FIG. 7 is a schematic diagram of a training process of a neural network model according to some embodiments of the present specification.
  • As shown in FIG. 7, the training sample set can be input through step 710 into the shared network layer 721 of the multi-task fully convolutional network 720, and the shared network layer 721 extracts the features of the training samples. The three identification tasks then enter their respective branch network layers: the region identification task enters the region identification task branch network layer 725, the point identification task enters the point identification task branch network layer 726, and the plane identification task enters the plane identification task branch network layer 727. Each branch then outputs its identification task result, namely the region identification task result 731, the point identification task result 732, and the plane identification task result 733; these identification task results may be the predicted probability maps of the respective identification tasks.
  • Step 530: Determine the value of the target loss function according to the predicted region identification probability map, the predicted point identification probability map, the predicted plane identification probability map, the region identification gold standard image, the point identification gold standard image, and/or the plane identification gold standard image.
  • The target loss function is the loss function corresponding to the entire neural network model, and can be used to adjust and update the neural network model.
  • In some embodiments, after the identification task results are obtained, the target loss function may be determined based on the identification task results and the corresponding gold standard images.
  • In some embodiments, a loss function corresponding to each task may be obtained based on the result of each identification task, i.e., the identification probability map, and the target loss function may then be determined based on the obtained one or more loss functions.
  • For example, the value of the target loss function can be determined from the predicted region identification probability map, the predicted point identification probability map, the predicted plane identification probability map, the region identification gold standard image, the point identification gold standard image, and/or the plane identification gold standard image.
  • Step 540: Adjust the parameters of the initial neural network model according to the value of the target loss function to obtain a trained neural network model.
  • In some embodiments, after the value of the target loss function is obtained, the parameters of the initial neural network model can be adjusted according to that value, for example, by using the error backpropagation gradient descent algorithm, where the optimizer can be Adam and the learning rate is set to 10^-4.
  • The above steps are repeated, and the value of the target loss function is continuously adjusted until the variation range of the value of the target loss function is smaller than a preset value, yielding the trained neural network model.
  • A variation range of the value of the target loss function smaller than the preset value indicates that the value of the target loss function has stabilized, i.e., the training process of the neural network model satisfies the convergence condition.
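  • A minimal training loop consistent with this description might look as follows. It reuses the MultiTaskUNet sketch above and the total_loss helper sketched after Equation 9 below; the epoch count and the loader (a hypothetical data iterator yielding an image with its three gold standards) are our own placeholders.

    import torch

    num_epochs = 10                      # assumed; the specification fixes no epoch count
    model = MultiTaskUNet()              # sketch shown earlier
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)   # optimizer and LR per the text

    for epoch in range(num_epochs):
        for img, gs_region, gs_points, gs_plane in loader:   # hypothetical DataLoader
            pred_region, pred_points, pred_plane = model(img)
            loss = total_loss(pred_region, pred_points, pred_plane,
                              gs_region, gs_points, gs_plane)  # weighted sum, Equations 8/9
            opt.zero_grad()
            loss.backward()              # error backpropagation
            opt.step()                   # gradient descent update
        # In practice, stop once the change in the loss falls below the preset value.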
  • In some embodiments of this specification, the neural network is composed of a shared network layer and different branch networks. In this way, when the probability maps of the different identification tasks are extracted simultaneously, task-specific features are extracted only in the respective branch network layers, while parameters are shared in the shared network layer, which saves running resources and improves the feature extraction efficiency of each identification task. Moreover, designing a different loss function for each task allows the training progress and direction of the neural network to be supervised in a targeted manner, ensuring the training efficiency and accuracy of the neural network.
  • FIG. 6 is an exemplary flowchart of a training method of a neural network model according to some embodiments of the present specification.
  • the process 600 may include the following steps. In some embodiments, the process 600 may be performed by the second computing device 130 .
  • Step 610: Determine the value of the first loss function according to the predicted region identification probability map and the region identification gold standard image.
  • the value of the first loss function may be determined according to the predicted region identification probability map and the region identification gold standard image, wherein the first loss function is a loss function corresponding to the region identification task in the neural network model.
  • For example, as shown in FIG. 7, the first loss function 751 may be determined from the region identification task result 731 and the region identification task gold standard 741 (i.e., the region identification gold standard image).
  • the loss function corresponding to the region identification task may be any loss function suitable for the segmentation task, for example, any one or a combination of Dice loss, cross-entropy loss, etc.
  • In some embodiments, the first loss function can be the Dice loss, which can be calculated by the following formula:
    Loss_Dice = 1 - (2 * Σ(P · Q)) / (ΣP + ΣQ)    (6)
    where the sums run over all pixels; Loss_Dice represents the first loss function; P represents the gold standard image used for supervised training, i.e., the region identification gold standard image; and Q represents the predicted image output by the neural network, i.e., the predicted region identification probability map.
  • In some embodiments, the region identification task may include one or more channel outputs, and the output probability maps may include at least one of a probability map corresponding to the cortex probability, a probability map corresponding to the background probability, etc. Therefore, the first loss function can be composed of one or more Dice losses, where each channel corresponds to one Dice loss.
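  • For illustration, a soft Dice loss in this per-channel form can be sketched as follows (the epsilon term is our own numerical-stability addition):

    import torch

    def dice_loss(pred, gold, eps=1e-6):
        """Soft Dice loss for one channel: 1 - 2*sum(P·Q) / (sum(P) + sum(Q))."""
        inter = (pred * gold).sum()
        return 1.0 - (2.0 * inter + eps) / (pred.sum() + gold.sum() + eps)

    def region_loss(pred2ch, gold2ch):
        """First loss function: one Dice loss per output channel, summed."""
        return sum(dice_loss(pred2ch[:, c], gold2ch[:, c]) for c in range(pred2ch.shape[1]))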
  • Step 620: Determine the value of the second loss function according to the predicted point identification probability map and the point identification gold standard image.
  • the value of the second loss function may be determined according to the predicted point identification probability map and the point identification gold standard image, where the second loss function is a loss function corresponding to the point identification task in the neural network model.
  • For example, as shown in FIG. 7, the second loss function 752 may be determined from the point identification task result 732 and the point identification task gold standard 742 (i.e., the point identification gold standard image).
  • Step 630: Determine the value of the third loss function according to the predicted plane identification probability map and the plane identification gold standard image.
  • In some embodiments, the value of the third loss function may be determined according to the predicted plane identification probability map and the plane identification gold standard image, where the third loss function is the loss function corresponding to the plane identification task in the neural network model.
  • For example, as shown in FIG. 7, the third loss function 753 may be determined from the plane identification task result 733 and the plane identification task gold standard 743 (i.e., the plane identification gold standard image).
  • In some embodiments, the loss function corresponding to the point identification task and the loss function corresponding to the plane identification task may be any loss function for point detection based on heatmap regression, for example, the mean square error loss (MSE Loss).
  • In some embodiments, the plane identification task can be viewed as the localization of all points on the plane.
  • In some embodiments, the second loss function and the third loss function can be the MSE loss, which can be calculated by the following formula:
    Loss_MSE = (1/n) * Σ_i (x_i - y_i)^2    (7)
    where Loss_MSE represents the second loss function or the third loss function; x_i represents the probability value of the i-th pixel in the gold standard image X; y_i represents the predicted probability value of the i-th pixel in the predicted probability map Y; and n represents the total number of pixels contained in X and Y.
  • In some embodiments, the point identification task may include multiple points (for example, AC and PC), so the second loss function is composed of multiple MSE losses, where each point corresponds to one MSE loss.
  • In some embodiments, the plane identification task may include a single plane, the midsagittal plane, so the third loss function is composed of one MSE loss.
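  • For illustration, Equation 7 and its per-point composition can be sketched as follows (helper names are ours):

    import torch

    def heatmap_mse(pred, gold):
        """MSE over all n pixels: (1/n) * sum_i (x_i - y_i)^2."""
        return torch.mean((pred - gold) ** 2)

    def point_loss(pred_pts, gold_pts):
        """Second loss function: one MSE term per point channel (e.g., AC and PC)."""
        return sum(heatmap_mse(pred_pts[:, c], gold_pts[:, c]) for c in range(pred_pts.shape[1]))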
  • Step 640: Perform weighted summation of the value of the first loss function, the value of the second loss function, and the value of the third loss function to obtain the value of the target loss function.
  • the value of the first loss function, the value of the second loss function, and the value of the third loss function may be weighted and summed to obtain the value of the target loss function.
  • For example, as shown in FIG. 7, the target loss function 760 can be obtained from the first loss function 751, the second loss function 752, and the third loss function 753.
  • When the weighted sum is calculated, the weight of the loss function of each identification task can be set to a fixed value based on experience, or determined by evaluating the uncertainty of the network's output results: the higher the uncertainty of a branch's output, the larger the weight assigned to that branch's loss function.
  • In some embodiments, the value of the target loss function can be obtained by the following formula:
    Loss_all = w_1 * Loss_Dice,cortex + w_2 * (Loss_MSE,AC + Loss_MSE,PC) + w_3 * Loss_MSE,MSP    (8)
    where Loss_all represents the target loss function, i.e., the total loss function; Loss_Dice,cortex represents the loss function corresponding to the cortex probability in the region identification task, i.e., the first loss function (here the region identification task includes only the single channel corresponding to the cortex probability); Loss_MSE,AC and Loss_MSE,PC represent the loss functions for AC and PC point localization in the point identification task, whose sum is the second loss function; Loss_MSE,MSP represents the third loss function, i.e., the loss function for MSP plane localization in the plane identification task; and w_1, w_2, and w_3 are the weights of the three task losses. These three weights can take any values according to the situation of the neural network, be determined from empirical values, or be adjusted according to the actual situation. For example, to balance the difference in magnitude between the Dice loss and the MSE loss, the three weights can be 0.1, 0.9, and 0.9, respectively.
  • Equation 8 expresses the loss function of only one channel output image (the cortex probability) of the region identification task. If the other channel output image (the background probability) of the region identification task is also included, the formula becomes:
    Loss_all = w_1 * (Loss_Dice,cortex + Loss_Dice,Background) + w_2 * (Loss_MSE,AC + Loss_MSE,PC) + w_3 * Loss_MSE,MSP    (9)
    where Loss_Dice,cortex and Loss_Dice,Background represent the loss functions corresponding to the cortex probability and the background probability in the region identification task, and the rest is the same as in Equation 8.
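  • By way of example only, the weighted combination of Equation 9 can be sketched as follows, reusing the region_loss, point_loss, and heatmap_mse helpers shown above and the weights 0.1/0.9/0.9 given in the text:

    import torch

    def total_loss(pred_region, pred_points, pred_plane,
                   gold_region, gold_points, gold_plane,
                   w1=0.1, w2=0.9, w3=0.9):
        """Target loss: weighted sum of the three task losses (Equation 9 form)."""
        l_region = region_loss(pred_region, gold_region)   # Dice: cortex + background
        l_points = point_loss(pred_points, gold_points)    # MSE: AC + PC
        l_plane = heatmap_mse(pred_plane, gold_plane)      # MSE: MSP
        return w1 * l_region + w2 * l_points + w3 * l_plane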

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Probability & Statistics with Applications (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of this specification provide a brain identification positioning system. The system includes a processor configured to perform the following method: acquiring an image of a brain; determining, according to the image and a neural network model, a region identification probability map of the brain, a point identification probability map of the brain, and a plane identification probability map of the brain; determining, according to the region identification probability map, the point identification probability map, and the plane identification probability map, a segmentation result of the cerebral cortex of the brain, a point identification of the brain, and a plane identification of the brain, respectively; constructing a target coordinate system according to the point identification and the plane identification; and determining identification points of the cerebral cortex according to the segmentation result of the cerebral cortex, the target coordinate system, and/or the point identification.

Description

BRAIN IDENTIFICATION POSITIONING SYSTEM AND METHOD
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to Chinese Application No. 2021102597318, filed on March 10, 2021 and entitled "Brain Identification Extraction Method, Apparatus, Computer Device, and Storage Medium", the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
This specification relates to the field of medical technology, and in particular to brain identification positioning systems and methods.
BACKGROUND
In the field of neuroscience, the anterior commissure (AC), the posterior commissure (PC), the midsagittal plane (MSP), and the Talairach cortical identification points are all important brain identification structures. These brain identification structures play an important role in brain anatomical imaging analysis: using these identifications for atlas registration and mapping, or for establishing a brain coordinate system, is of great significance for analyzing individual brain structure, locating functional brain regions, and even assisting in locating pathological brain regions.
In existing neurosurgical analysis software, the AC identification point, the PC identification point, the MSP, and the Talairach cortical identification points mostly need to be located manually by physicians. Taking the neurosurgical robot of the Robotized Stereotactic Assistant (ROSA) as an example, the AC, PC, and six cortical identification points on which its Talairach-coordinate-system atlas registration function depends are all determined manually, and the MSP is located by adding one point IH anywhere on the MSP and determining the plane from the three points AC, PC, and IH. However, manual positioning is time-consuming and labor-intensive, is strongly affected by the subjectivity of the operator, and has low repeatability. Although some automatic extraction schemes have been proposed, the vast majority target the localization of only a single type of brain identification; even the few that implement the entire extraction pipeline of AC, PC, MSP, and cortical identification points have complicated processing flows, poor robustness, low efficiency, and limited practical value.
Therefore, it is desirable to provide a brain identification positioning system and method that automatically and accurately locate multiple brain identifications.
SUMMARY
One embodiment of this specification provides an identification positioning system. The system includes a processor configured to perform the following method: acquiring an image of a brain; determining, according to the image and a neural network model, a region identification probability map of the brain, a point identification probability map of the brain, and a plane identification probability map of the brain; determining, according to the region identification probability map, the point identification probability map, and the plane identification probability map, a segmentation result of the cerebral cortex of the brain, a point identification of the brain, and a plane identification of the brain, respectively; constructing a target coordinate system according to the point identification and the plane identification; and determining identification points of the cerebral cortex according to the segmentation result of the cerebral cortex, the target coordinate system, and/or the point identification.
One embodiment of this specification provides a brain identification positioning system, including: an acquisition module configured to acquire an image of a brain; a probability map determination module configured to determine, according to the image and a neural network model, a region identification probability map of the brain, a point identification probability map of the brain, and a plane identification probability map of the brain; and an identification positioning module configured to: determine, according to the region identification probability map, the point identification probability map, and the plane identification probability map, a segmentation result of the cerebral cortex of the brain, a point identification of the brain, and a plane identification of the brain, respectively; construct a target coordinate system according to the point identification and the plane identification; and determine identification points of the cerebral cortex according to the segmentation result of the cerebral cortex, the target coordinate system, and/or the point identification.
One embodiment of this specification provides a non-transitory computer-readable medium including executable instructions that, when executed by at least one processor, cause the at least one processor to implement a method including: acquiring an image of a brain; determining, according to the image and a neural network model, a region identification probability map of the brain, a point identification probability map of the brain, and a plane identification probability map of the brain; determining, according to the region identification probability map, the point identification probability map, and the plane identification probability map, a segmentation result of the cerebral cortex of the brain, a point identification of the brain, and a plane identification of the brain, respectively; constructing a target coordinate system according to the point identification and the plane identification; and determining identification points of the cerebral cortex according to the segmentation result of the cerebral cortex, the target coordinate system, and/or the point identification.
BRIEF DESCRIPTION OF THE DRAWINGS
This specification is further described by way of exemplary embodiments, which are described in detail with reference to the accompanying drawings. These embodiments are not restrictive; in these embodiments, the same reference numerals denote the same structures, wherein:
FIG. 1 is a schematic diagram of an application scenario of an identification positioning system according to some embodiments of this specification;
FIG. 2 is an exemplary module diagram of an identification positioning system according to some embodiments of this specification;
FIG. 3 is an exemplary flowchart of an identification positioning method according to some embodiments of this specification;
FIG. 4 is a schematic diagram of a method for extracting brain identifications with a neural network model according to some embodiments of this specification;
FIG. 5 is an exemplary flowchart of a training method of a neural network model according to some embodiments of this specification;
FIG. 6 is an exemplary flowchart of a training method of a neural network model according to some embodiments of this specification;
FIG. 7 is a schematic diagram of a training process of a neural network model according to some embodiments of this specification;
FIG. 8 is a schematic structural diagram of a neural network model according to some embodiments of this specification;
FIG. 9 is a schematic diagram of a brain coordinate system according to some embodiments of this specification;
FIG. 10 is a schematic diagram of the anterior commissure identification point and the posterior commissure identification point according to some embodiments of this specification;
FIG. 11 is a schematic diagram of the midsagittal plane according to some embodiments of this specification.
DETAILED DESCRIPTION
To more clearly illustrate the technical solutions of the embodiments of this specification, the drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some examples or embodiments of this specification, and a person of ordinary skill in the art can apply this specification to other similar scenarios based on these drawings without creative effort. Unless obvious from the context or otherwise stated, the same reference numerals in the figures denote the same structures or operations.
It should be understood that "system", "apparatus", "unit", and/or "module" as used herein is a way of distinguishing different components, elements, parts, portions, or assemblies of different levels. However, these words may be replaced by other expressions if the other expressions achieve the same purpose.
As shown in this specification and the claims, unless the context clearly indicates otherwise, the words "a", "an", "one", and/or "the" do not specifically refer to the singular and may also include the plural. Generally speaking, the terms "comprise" and "include" only indicate the inclusion of clearly identified steps and elements, and these steps and elements do not constitute an exclusive list; a method or device may also contain other steps or elements.
Flowcharts are used in this specification to illustrate the operations performed by the system according to the embodiments of this specification. It should be understood that the preceding or following operations are not necessarily performed in exact order. Instead, the steps may be processed in reverse order or simultaneously. Other operations may also be added to these processes, or a certain step or several steps may be removed from them.
In some application scenarios, the identification positioning system may include a computing device and a user terminal. Through the computing device or the like, the identification positioning system can implement the methods and/or processes disclosed in this specification to extract identification positioning results such as point identifications, plane identifications, and region identifications of a specific part from medical images, thereby obtaining feature information of the specific part, for example, the Talairach cortical identification points. This reduces the workload of point selection, simplifies the physician's workflow, and improves the accuracy of positioning and segmenting human body structures.
FIG. 1 is a schematic diagram of an application scenario of an identification positioning system according to some embodiments of this specification.
As shown in FIG. 1, in some embodiments, the system 100 may include a medical imaging device 110, a first computing device 120, a second computing device 130, a user terminal 140, a storage device 150, and a network 160.
The medical imaging device 110 may refer to a device that uses different media to reproduce the internal structure of a target object (for example, a human body) as an image. In some embodiments, the medical imaging device 110 may be any device that can image or treat a specified body part of a target object, for example, MRI (Magnetic Resonance Imaging), CT (Computed Tomography), PET (Positron Emission Tomography), etc. The medical imaging devices 110 listed above are provided for illustration purposes only and do not limit the scope. In some embodiments, the medical imaging device 110 may acquire a medical image (for example, a magnetic resonance (MRI) image, a CT image, etc.) of a specified part of a patient (for example, the brain) and send it to other components of the system 100 (for example, the first computing device 120, the second computing device 130, the storage device 150). In some embodiments, the medical imaging device 110 may exchange data and/or information with other components of the system 100 through the network 160.
The first computing device 120 and the second computing device 130 are systems with computing and processing capabilities, and may include various computers, such as servers and personal computers, or may be computing platforms composed of multiple computers connected in various structures. In some embodiments, the first computing device 120 and the second computing device 130 may be implemented on a cloud platform. For example, the cloud platform may include one or a combination of a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, a cross-cloud, a multi-cloud, and the like. In some embodiments, the first computing device 120 and the second computing device 130 may be the same device or different devices.
The first computing device 120 and the second computing device 130 may include one or more sub-processing devices (for example, single-core or multi-core processing devices), and the processing devices may execute program instructions. By way of example only, a processing device may include various common general-purpose central processing units (CPUs), graphics processing units (GPUs), microprocessors, application-specific integrated circuits (ASICs), or other types of integrated circuits.
The first computing device 120 may process information and data related to medical images. In some embodiments, the first computing device 120 may execute the brain identification positioning method described in some embodiments of this specification to obtain at least one brain identification positioning result, for example, the Talairach cortical identification points. In some embodiments, the first computing device 120 may include a neural network model, through which it can obtain identification probability maps of the brain. In some embodiments, the first computing device 120 may acquire a trained neural network model from the second computing device 130. In some embodiments, the first computing device 120 may determine brain identification positioning results based on the identification probability maps of the brain. In some embodiments, the first computing device 120 may exchange information and data through the network 160 and/or with other components of the system 100 (for example, the medical imaging device 110, the second computing device 130, the user terminal 140, the storage device 150). In some embodiments, the first computing device 120 may be directly connected to the second computing device 130 to exchange information and/or data.
The second computing device 130 may be used for model training. In some embodiments, the second computing device 130 may execute the training method of the neural network model described in some embodiments of this specification to obtain a trained neural network model. In some embodiments, the second computing device 130 may acquire training sample images and the corresponding gold standard images for training the neural network model. In some embodiments, the second computing device 130 may acquire image information from the medical imaging device 110 as training data for the model. In some embodiments, the first computing device 120 and the second computing device 130 may also be the same computing device.
The user terminal 140 may receive and/or display the processing results of medical images. In some embodiments, the user terminal 140 may receive identification positioning results of medical images from the first computing device 120, based on which a patient is diagnosed and treated. In some embodiments, the user terminal 140 may, via instructions, cause the first computing device 120 to execute the identification positioning method described in some embodiments of this specification. In some embodiments, the user terminal 140 may control the medical imaging device 110 to acquire medical images of a specific part. In some embodiments, the user terminal 140 may be one or any combination of a mobile device 140-1, a tablet computer 140-2, a laptop computer 140-3, a desktop computer, or other devices with input and/or output functions.
The storage device 150 may store data or information generated by other devices. In some embodiments, the storage device 150 may store medical images acquired by the medical imaging device 110. In some embodiments, the storage device 150 may store data and/or information processed by the first computing device 120 and/or the second computing device 130, for example, brain identification probability maps, brain identification positioning results, etc. The storage device 150 may include one or more storage components, and each storage component may be an independent device or part of another device. The storage device may be local or implemented via the cloud.
The network 160 may connect the components of the system and/or connect the system with external resources. The network 160 enables communication between the components and with other parts outside the system, facilitating the exchange of data and/or information. In some embodiments, one or more components of the system 100 (for example, the medical imaging device 110, the first computing device 120, the second computing device 130, the user terminal 140, the storage device 150) may send data and/or information to other components through the network 160. In some embodiments, the network 160 may be any one or more of a wired network or a wireless network.
It should be noted that the above description is provided for illustrative purposes only and is not intended to limit the scope of this specification. Many variations and modifications can be made by a person of ordinary skill in the art under the guidance of the content of this specification. The features, structures, methods, and other characteristics of the exemplary embodiments described in this specification can be combined in various ways to obtain additional and/or alternative exemplary embodiments. For example, the first computing device 120 and/or the second computing device 130 may be based on a cloud computing platform, such as a public cloud, a private cloud, a community cloud, a hybrid cloud, etc. However, these variations and modifications do not depart from the scope of this specification.
FIG. 2 is an exemplary module diagram of an identification positioning system according to some embodiments of this specification.
As shown in FIG. 2, in some embodiments, the identification positioning system 200 may include an acquisition module 210, a probability map determination module 220, and an identification positioning module 230.
In some embodiments, the acquisition module 210 may be configured to acquire an image of the brain.
In some embodiments, the image of the brain may include an MRI image or the like.
In some embodiments, the probability map determination module 220 may be configured to determine, according to the acquired image of the brain and a neural network model, a region identification probability map of the brain, a point identification probability map of the brain, and/or a plane identification probability map of the brain.
In some embodiments, the neural network model may be a multi-task model. In some embodiments, the neural network model may include a shared network layer and/or at least two (for example, three) branch network layers. In some embodiments, the branch network layers may include a first branch network layer, a second branch network layer, and/or a third branch network layer, where the first branch network layer may be used to segment the brain and output the region identification probability map; the second branch network layer may be used to locate point identifications of the brain and output the point identification probability map; and the third branch network layer may be used to locate plane identifications of the brain and output the plane identification probability map.
In some embodiments, the identification positioning module 230 may be configured to determine, according to the region identification probability map, the point identification probability map, and/or the plane identification probability map, the segmentation result of the cerebral cortex of the brain, the point identification of the brain, and/or the plane identification of the brain, respectively; construct a target coordinate system according to the point identification and the plane identification; and/or determine the identification points of the cerebral cortex according to the segmentation result of the target area, the target coordinate system, and/or the point identification.
In some embodiments, the point identification probability map may include an anterior commissure probability map and/or a posterior commissure probability map, and the point identification may include an anterior commissure identification point and/or a posterior commissure identification point.
In some embodiments, the identification positioning module 230 may determine the position of the pixel corresponding to the maximum probability value in the point identification probability map as the position of the point identification.
In some embodiments, the plane identification probability map may include a midsagittal plane probability map, and the plane identification may include the midsagittal plane.
In some embodiments, the identification positioning module 230 may determine a target point set according to the plane identification probability map. In some embodiments, the identification positioning module 230 may determine the set of pixels whose probability is greater than a preset threshold in the plane identification probability map as the target point set.
In some embodiments, the identification positioning module 230 may fit the target point set to obtain the plane identification. In some embodiments, the identification positioning module 230 may fit the target point set according to a random sample consensus method to obtain the plane identification.
In some embodiments, the identification points of the cerebral cortex may include at least one of the most anterior point, the most posterior point, the leftmost point, the rightmost point, the lowest point, and the uppermost point of the cerebral cortex.
In some embodiments, the identification positioning module 230 may determine the identification points of the cerebral cortex according to the maximum or minimum points of the cerebral cortex along the three coordinate axes of the target coordinate system; and/or determine, as an identification point of the cerebral cortex, the maximum or minimum point of the cerebral cortex on a line passing through a point identification and parallel to one of the three coordinate axes of the target coordinate system.
In some embodiments, the identification positioning system 200 may further include a model training module (not shown in FIG. 2). The model training module may be used to train the neural network model.
In some embodiments, the acquisition module 210 and/or the model training module may acquire training sample images and the gold standard image corresponding to each training sample image, where the gold standard images may include region identification gold standard images, point identification gold standard images, and/or plane identification gold standard images.
In some embodiments, the probability map determination module 220 and/or the model training module may input each training sample image into the initial neural network model to obtain the predicted region identification probability map output by the first branch network layer, the predicted point identification probability map output by the second branch network layer, and/or the predicted plane identification probability map output by the third branch network layer, respectively.
In some embodiments, the model training module may determine the value of the target loss function according to the predicted region identification probability map, the predicted point identification probability map, the predicted plane identification probability map, the region identification gold standard image, the point identification gold standard image, and/or the plane identification gold standard image.
In some embodiments, the model training module may adjust the parameters of the initial neural network model according to the value of the target loss function to obtain a trained neural network model.
In some embodiments, the model training module may be deployed on a different computing device from the other modules (the acquisition module 210, the probability map determination module 220, and the identification positioning module 230). For example, the model training module may be deployed on the second computing device 130, while the other modules may be deployed on the first computing device 120.
FIG. 3 is an exemplary flowchart of an identification positioning method according to some embodiments of this specification.
As shown in FIG. 3, the process 300 may include one or more of the following steps. In some embodiments, the process 300 may be performed by the first computing device 120.
Step 310: Acquire an image of the brain. In some embodiments, step 310 may be performed by the acquisition module 210.
The image of the brain is a medical image of the brain of a target object, for example, an MRI image, a CT image, etc., where the target object may be any of various living bodies, for example, a human body, a small animal, etc. In some embodiments, the image of the brain may include brain MRI images of various sequences, for example, T1, T2, T2-FLAIR, etc. In some embodiments, the image of the brain may include at least one of a two-dimensional image, a three-dimensional image, and the like.
In some embodiments, the image of the brain may be acquired by scanning with a medical imaging device (for example, MRI, CT, etc.), for example, the brain magnetic resonance image 410 shown in FIG. 4.
Step 320: Determine, according to the image and a neural network model, a region identification probability map of the brain, a point identification probability map of the brain, and/or a plane identification probability map of the brain. In some embodiments, step 320 may be performed by the probability map determination module 220.
An identification refers to a specific anatomical structure or position of various types of biological organs/tissues, and includes multiple types, for example, point identifications, plane identifications, region identifications, etc. A point identification can be used for locating anatomical points, for example, the anterior commissure identification point and the posterior commissure identification point of the brain. A plane identification can be used for locating anatomical planes, for example, the midsagittal plane of the brain. A region identification can be used for region recognition and segmentation, for example, cerebral cortex segmentation. An identification probability map represents the probability that each part of a medical image is a certain type of identification; it can be a medical image annotated with probability values and, corresponding to the identifications, can include multiple types, for example, point identification probability maps, plane identification probability maps, region identification probability maps, etc. In some embodiments, an identification probability map may correspond to the brain image and include at least one of a two-dimensional image, a three-dimensional image, and the like.
In some embodiments, the identification probability maps may include one or more of a region identification probability map, a point identification probability map, and a plane identification probability map of a specific part (for example, the brain). In some embodiments, the identifications may be identifications of the brain, and may include point identifications, plane identifications, region identifications, and other identifications of the brain. In some embodiments, the other identifications of the brain may include the cortical identification points of the brain, the brain coordinate system, etc.
In some embodiments, the point identifications of the brain may further include one or more of the following: the midbrain-pons junction (MPJ) on the midsagittal plane; bifurcation points of intracranial blood vessels, which can be applied to vessel extraction; intersection points of structures such as the ventricles, the corpus callosum, and the pons, which can be used for body position correction of the brain, alignment of the brain with a template, etc.; and bifurcation points of cerebral sulci and gyri, which can be used for brain morphology analysis. In some embodiments, the above identifications can be obtained through point identification probability maps.
In some embodiments, one or more brain images may be input into the neural network model to obtain one or more types of output brain identification probability maps, for example, region identification probability maps, point identification probability maps, plane identification probability maps, etc. As shown in FIG. 4, the brain magnetic resonance image 410 can be input into the multi-task neural network model 420 to obtain the output region identification probability map 431, point identification probability map 432, and plane identification probability map 433.
In some embodiments, the neural network model may output all required identification probability maps at once, including the region identification probability map, the point identification probability map, and the plane identification probability map, for obtaining all identifications, including region identifications, point identifications, plane identifications, etc., and for subsequent work, for example, establishing the brain coordinate system, determining the cortical identification points, etc.
In some embodiments, the neural network model may also output the region identification probability map, the point identification probability map, or the plane identification probability map individually, for obtaining region identifications, point identifications, or plane identifications separately.
In some embodiments, the neural network model may be a pre-trained neural network model for extracting identification probability maps of a specific part from images of that part (for example, brain magnetic resonance images of a human body), for example, a convolutional neural network (CNN) or a fully convolutional network (FCN). In some embodiments, the neural network model may be an FCN, whose network structure can be any type of FCN, for example, UNet (U-shaped network), VNet (V-shaped network), SegNet (semantic segmentation network), etc. Optionally, the neural network model may adopt a multi-task fully convolutional network with an encoder-decoder structure.
In some embodiments, the neural network model may be a multi-task model including a shared network layer and at least two branch network layers, where the parameters of the shared network layer are shared among the different tasks, while the branch network layers correspond to different tasks and their parameters differ according to the task branches.
In some embodiments, the number of branch network layers may be determined by the number of types of identification probability maps that the neural network model needs to output, for example, equal to the number of types. In some embodiments, each type of identification may correspond to one task branch; therefore, the different types of identification probability maps are output through different branch network layers of the neural network model. For example, the point identification probability map is output through the point identification task branch network layer, the plane identification probability map through the plane identification task branch network layer, and the region identification probability map through the region identification task branch network layer. As shown in FIG. 4, the multi-task neural network model 420 includes three task branches, corresponding to cortex segmentation, anatomical point localization, and anatomical plane localization, whose outputs are the region identification probability map 431, the point identification probability map 432, and the plane identification probability map 433, respectively.
In some embodiments, the branch network layers in the neural network model may include three branch network layers, namely a first branch network layer, a second branch network layer, and a third branch network layer.
Taking the brain as the specific part as an example, in some embodiments, the first branch network layer may be used to segment the brain and output the region identification probability map; the second branch network layer may be used to locate point identifications of the brain (for example, the AC identification point, the PC identification point, etc.) and output the point identification probability map; and the third branch network layer may be used to locate plane identifications of the brain (for example, the midsagittal plane) and output the plane identification probability map, where a plane identification may correspond to all pixels on the located plane.
In some embodiments, the first branch network layer may be the branch network layer corresponding to the region identification task of a specific part (for example, the brain), and its output is the region identification probability map of that part. The value of each pixel in the region identification probability map may represent the probability that the pixel belongs to the region identification of that part; for example, if the region identification is the brain parenchyma segmentation region, the value of each pixel in the brain parenchyma segmentation region probability map represents the probability that the pixel is a point on the brain parenchyma.
The second branch network layer may be the branch network layer corresponding to the point identification task (for example, the AC identification point, the PC identification point, etc.) of a specific part, and its output is the point identification probability map of that part. The value of each pixel in the point identification probability map may represent the probability that the pixel is the point identification of that part; for example, the value of each pixel in the anterior commissure probability map represents the probability that the pixel is the anterior commissure identification point, and the value of each pixel in the posterior commissure probability map represents the probability that the pixel is the posterior commissure identification point.
The third branch network layer may be the branch network layer corresponding to the plane identification task (for example, the midsagittal plane) of a specific part, and its output is the plane identification probability map of that part. The value of each pixel in the plane identification probability map may represent the probability that the pixel belongs to the plane identification of that part; for example, if the plane identification is the midsagittal plane, the value of each pixel in the midsagittal plane probability map represents the probability that the pixel is a point on the midsagittal plane.
In some embodiments, the initial neural network model may be trained based on training sample images and the corresponding gold standard images to obtain the trained neural network model. For more details on how to train the neural network model, reference may be made to the description of FIG. 5, which is not repeated here.
Step 330: Determine, according to the region identification probability map, the point identification probability map, and the plane identification probability map, the segmentation result of the cerebral cortex of the brain, the point identification of the brain, and/or the plane identification of the brain, respectively. In some embodiments, step 330 may be performed by the identification positioning module 230.
An identification positioning result refers to information that can serve as a marker of a biological organ/tissue, for example, a point identification, a plane identification, a region identification, the brain coordinate system, the cortical identification points, etc. In some embodiments, the identification positioning result of a specific part may be determined according to the identification probability maps of that part (for example, the region identification probability map, the point identification probability map, the plane identification probability map, etc.). In some embodiments, the region identification may be the segmentation result of a specific region, for example, the segmentation result of the cerebral cortex.
In some embodiments, the point identification of a specific part (for example, the brain) may be determined according to the point identification probability map acquired in step 320; for example, the specific position of the anterior commissure identification point AC is determined from the anterior commissure probability map, and the specific position of the posterior commissure identification point PC is determined from the posterior commissure probability map. As shown in FIG. 4, the point identification 442 can be determined according to the point identification probability map 432.
In some embodiments, the point identification probability map may include the anterior commissure probability map and the posterior commissure probability map, and the point identification may include the anterior commissure identification point AC and the posterior commissure identification point PC.
In some embodiments, the localization of key points can be achieved by determining the point identifications of a specific part, where the point identifications serve as the key points. In some embodiments, the point identifications may also include other brain key points. In the medical field, point localization is widely used for intelligent workflows or as an intermediate step of automatic algorithms; for example, by locating pairs of key points, point-pair registration between multiple images or spatial registration between an image and a physical object can be achieved. By way of example only, in cerebrovascular segmentation applications, the starting position of a vessel usually needs to be located first for vessel growing, and the key point of the vessel starting position can be determined through a point identification probability map.
In some embodiments, the position of the pixel corresponding to the maximum probability value in the point identification probability map may be determined as the position of the point identification. Specifically, the coordinate position of the pixel corresponding to the maximum probability value in the anterior commissure probability map is determined, and the position coordinates of that pixel are determined as the position coordinates of the anterior commissure identification point AC; similarly, the coordinate position of the pixel corresponding to the maximum probability value in the posterior commissure probability map is determined, and the position coordinates of that pixel are determined as the position coordinates of the posterior commissure identification point PC. It should be noted that the coordinate position here refers to the row-column-slice coordinates (i, j, l) in the probability map, where i denotes the row, j the column, and l the slice. As shown in FIG. 10, AC and PC are the determined anterior commissure identification point and posterior commissure identification point, respectively.
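By way of example only, this maximum-probability localization is a single argmax over the probability volume; the sketch below is our own illustration of the idea.

    import numpy as np

    def locate_point(prob_map):
        """Return the (i, j, l) row-column-slice coordinates of the maximum probability."""
        return np.unravel_index(np.argmax(prob_map), prob_map.shape)

    # ac_ijl = locate_point(prob_ac)   # anterior commissure position
    # pc_ijl = locate_point(prob_pc)   # posterior commissure position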
Because the AC and PC are anatomically small and surrounded by interfering anatomical structures of similar gray level, directly locating them is often difficult. Therefore, determination of the AC and PC generally relies on the prior localization of other structures. For example, a region of interest (ROI) containing the corpus callosum is determined from anatomical knowledge, the corpus callosum, fornix, and brainstem are segmented, and the AC and PC positions are determined from the spatial relationships between the AC/PC and these anatomical structures. Alternatively, based on images in which the identification points AC, PC, and the apex of the superior pontine sulcus are manually marked on MSP images, a model of the relationship among the three identification points is obtained, and the positions of the AC and PC are derived from the apex of the superior pontine sulcus using the model. Either way, these approaches are overly complicated, and errors in detecting the surrounding structures amplify the errors of the AC and PC identification points.
The AC/PC determination method provided in some embodiments of this specification first obtains the anterior commissure probability map and the posterior commissure probability map through the preset neural network model, and then determines the AC and PC from the respective probability maps, so that the AC and PC of the brain can be determined efficiently and accurately without relying on the prior localization of other structures and without manual point selection.
In some embodiments, the plane identification of a specific part may be determined according to the plane identification probability map of that part; for example, the specific position of the midsagittal plane is determined from the plane identification probability map. As shown in FIG. 4, the plane identification 443 can be determined according to the plane identification probability map 433.
In some embodiments, the plane identification probability map may include the midsagittal plane probability map, and the plane identification may include the midsagittal plane. FIG. 11 is a schematic diagram of a determined midsagittal plane.
The longitudinal fissure of the brain is an important anatomical identification of the brain. Within its extent there exists an approximate virtual plane with respect to which the left and right hemispheres of the human brain are symmetric; this plane is called the midsagittal plane (MSP). The brain anatomy on both sides of the midsagittal plane is maximally symmetric with respect to it.
In some embodiments, plane localization of the brain, chiefly localization of the midsagittal plane, can be achieved by determining the plane identification of the brain. Midsagittal plane localization has many applications: the midsagittal plane is the plane of symmetry of the brain, and once it is located, the brain can be conveniently divided into left and right sides and the symmetry of the two sides analyzed for disease diagnosis scenarios. By way of example only, in an intracerebral hemorrhage scenario, a hematoma compresses the brain parenchyma; by locating the midsagittal plane and computing its offset, the severity of the hematoma can be assessed.
In some embodiments, a target point set may be determined according to the plane identification probability map.
In some embodiments, the set of pixels whose probability is greater than a preset threshold in the plane identification probability map may be determined as the target point set. Take the plane identification probability map being the midsagittal plane probability map and the plane identification being the midsagittal plane as an example. Specifically, the midsagittal plane point set S (or target point set) is first extracted from the midsagittal plane probability map, for example, by fixed-threshold segmentation of the probability map, taking the set of points whose probability value is greater than a preset threshold as the midsagittal plane point set S, where the preset threshold can be any constant between 0 and 1, for example, 0.5. After the midsagittal plane point set S is obtained, it is fitted according to a preset algorithm, and the fitted plane is the midsagittal plane of the brain of the target object.
In some embodiments, the target point set may also be determined in other ways, for example, by taking the set of pixels with the highest probabilities in the plane identification probability map as the target point set, which is not limited in this specification.
In some embodiments, after the target point set is determined, it can be fitted to obtain the plane identification.
In some embodiments, the target point set can be fitted according to the random sample consensus (RANSAC) method to obtain the plane identification.
In some embodiments, the random sample consensus method may include:
Looping over the following process multiple times (for example, N times, N > 10): a) randomly sampling from the target point set to determine a subset; b) fitting a plane from the points in the subset; c) determining the sum of squared distances from the remaining points of the target point set (excluding the subset) to the plane. Then, among the multiple loops, the plane corresponding to the loop with the smallest sum of squared distances is determined as the plane identification.
By way of example only, if the plane identification probability map is the midsagittal plane probability map and the plane identification is the midsagittal plane, the fitted midsagittal plane can be expressed by a linear equation, or by a point O on the fitted midsagittal plane and the normal vector n of the plane, i.e., the fitted midsagittal plane can be expressed as (O, n). Taking the latter as an example, if the preset algorithm is the random sample consensus method, the plane fitting based on this method is as follows:
First, based on the random sample consensus theory, the following process is looped N times, where the N iterations differ in that the M sampled points are different. N can be any constant (for example, a constant greater than 10), e.g., N = 1000. Since a plane is determined by at least three points, M can be any constant greater than 2 (i.e., greater than or equal to 3).
M points are randomly sampled from the point set S, a plane L_i is fitted from these M points, and the plane is recorded. The sum of squared distances from the remaining points to the plane L_i is then computed and recorded as Dist_i.
Then, among the N loops, the plane L_k corresponding to the sampling with the smallest recorded distance Dist_i (denote the iteration index by k) is taken as the initial localization plane of the midsagittal plane.
Finally, the plane (O, n) is fitted as follows.
O can be determined by the following formula:
    O = (1/M) * Σ_{i=1..M} P_i    (1)
where P_i is the coordinate position of the i-th of the M sampled points; for example, if M = 3, then O = (P_1 + P_2 + P_3)/3.
The normal vector n of the plane is determined in two cases. First, when M = 3, the normal vector of the midsagittal plane can be determined directly from the three points A, B, and C by the following formula:
    n = AB × AC    (2)
where n is the normal vector of the midsagittal plane, AB is the vector from A to B, and AC is the vector from A to C.
Second, when M > 3, the method of principal component analysis (PCA) can be used: the direction of the smallest principal component of the M points is determined as the normal vector of the midsagittal plane. For example, the M sampled points are expressed as a matrix A_{M,3} with M rows and 3 columns, and the eigendecomposition of the matrix A can be achieved by singular value decomposition (SVD), which can be expressed by the following formula:
    A_{M,3} = U_{M,M} Σ_{M,3} V_{3,3}    (3)
where U denotes the left singular matrix, V the right singular matrix, and Σ the singular value matrix.
The right singular matrix V obtained from the decomposition of A represents the eigendecomposition matrix of A. The third column of V, V[:, 2], represents the direction of the smallest of the three decomposed components of the M points and is taken as the normal vector of the plane fitted to the M points; the normal vector n can be expressed by the following formula:
    n = V[:, 2]    (4)
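By way of example only, the RANSAC plane fitting described above can be sketched in Python/NumPy as follows. The function name, the fixed random seed, and the degenerate-sample guard are our own additions; the centroid, cross-product, and SVD steps correspond to Equations 1-4.

    import numpy as np

    def ransac_plane(points, n_iter=1000, m=3):
        """Fit the MSP as (O, n) by random sample consensus over the point set S.

        points: (K, 3) candidate MSP points. Returns the centroid O and unit
        normal n of the sample whose remaining points have the smallest sum of
        squared point-to-plane distances.
        """
        best = (None, None, np.inf)
        rng = np.random.default_rng(0)
        for _ in range(n_iter):
            idx = rng.choice(len(points), size=m, replace=False)
            sample, rest = points[idx], np.delete(points, idx, axis=0)
            o = sample.mean(axis=0)                       # Equation 1
            if m == 3:
                n = np.cross(sample[1] - sample[0], sample[2] - sample[0])  # Equation 2
            else:
                _, _, vt = np.linalg.svd(sample - o)      # Equations 3-4
                n = vt[2]                                 # smallest-component direction
            norm = np.linalg.norm(n)
            if norm < 1e-12:
                continue                                  # degenerate (collinear) sample
            n = n / norm
            score = (((rest - o) @ n) ** 2).sum()         # sum of squared distances
            if score < best[2]:
                best = (o, n, score)
        return best[0], best[1]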
In some embodiments, the target point set may also be fitted in other ways to obtain the plane identification, which is not limited in this specification.
Generally, the midsagittal plane, as an important reference plane of the Talairach coordinate system, must be located as a prerequisite for locating the AC and PC. Midsagittal plane localization methods include, but are not limited to, methods based on global symmetry analysis, brain parenchyma segmentation, feature point detection, atlas registration, etc. These algorithms all achieve good results on normal brain structures, but when facing pathological brain structures, for example when the brain structure loses its symmetry or differs greatly from the template, their adaptability drops sharply.
The midsagittal plane determination method provided in some embodiments of this specification first obtains the midsagittal plane probability map through the preset neural network model and then determines the midsagittal plane from it. Whether for normal brain structures, pathological brain structures, or structures that differ greatly from the template, the midsagittal plane of the target object's brain can be determined efficiently and accurately.
In some embodiments, the region identification of a specific part (for example, the brain) may be determined according to the region identification probability map of that part; for example, the segmentation result of the target area is determined from the region identification probability map. As shown in FIG. 4, the region identification 441 can be determined according to the region identification probability map 431.
A region identification probability map is an image representing the probability of a region identification, for example, the brain parenchyma segmentation region probability map. The target area is a specific region of a specific part, for example, the cerebral cortex, the cerebral sulci and gyri, the left and right cerebrum, the cerebellum, the ventricles, the brainstem, etc. The segmentation result of the target area may be the result of region segmentation of various organs/tissues, for example, the segmentation result of the cerebral cortex.
When the specific part includes the brain, in some embodiments, the target area may include the cerebral cortex, the region identification probability map of the brain may include the brain parenchyma segmentation region probability map, and the segmentation result of the cerebral cortex may include the brain parenchyma segmentation region; the segmentation result of the cerebral cortex can be determined from the brain parenchyma segmentation region probability map.
In some embodiments, besides segmenting the cerebral cortex to obtain its segmentation result, structures such as the cerebral sulci and gyri, the left and right cerebrum, the cerebellum, the ventricles, and the brainstem can also be segmented to obtain the segmentation results of the corresponding target areas. The segmentation of these structures can be widely used in scenarios such as brain region parameter statistics for diagnosis and surgical planning.
In some embodiments, the region identification probability map may include the brain parenchyma segmentation region probability map, and the region identification may include the brain parenchyma segmentation region. A brain parenchyma segmentation binary mask image can be generated from the brain parenchyma segmentation region probability map using a preset threshold, and the brain parenchyma segmentation region is determined from the binary mask image. Specifically, threshold segmentation is performed on the brain parenchyma segmentation region probability map, where the preset threshold can usually be a constant between 0 and 1, for example, 0.5. After threshold segmentation, the brain parenchyma segmentation binary mask image is obtained; for example, pixels whose brain parenchyma probability value is greater than or equal to 0.5 are set to 1, and pixels whose probability value is less than 0.5 are set to 0, so that each pixel of the resulting image is either 0 or 1, forming the brain parenchyma segmentation binary mask image. The brain parenchyma segmentation region is then determined from this binary mask image.
The value of each pixel in each type of identification probability map represents the probability that the pixel belongs to the corresponding identification. Therefore, in some embodiments, the specific position of the corresponding identification in the identification probability map can be determined from the specific probability values of the pixels. In some embodiments, the specific position of the corresponding identification may also be determined from the identification probability map in other ways. For example, it may be determined by another neural network model, i.e., each type of identification probability map is input into another pre-trained neural network model to obtain the position of the corresponding identification in the probability map. As another example, the pixels in the probability map that satisfy a preset condition may be determined as the corresponding identification through preset-condition screening. This is not limited in this specification.
Step 340: Construct a target coordinate system according to the point identification and the plane identification. As shown in FIG. 4, the brain coordinate system 450 can be established according to the point identification 442 and the plane identification 443. In some embodiments, step 340 may be performed by the identification positioning module 230.
The target coordinate system is a coordinate system established based on a specific part and can be used to represent the spatial structure and positional relationships of that part. It can be any of various coordinate systems, for example, a planar rectangular coordinate system, a spherical coordinate system, the Talairach coordinate system, etc. In some embodiments, the target coordinate system may be a brain coordinate system, which establishes the correspondence between the structure and the spatial position of the target object's brain, for example, the Talairach coordinate system. Establishing a brain coordinate system makes it possible to study the same brain region of different target objects in the same neuroanatomical space for cross-subject comparison.
In some embodiments, the brain coordinate system can be established based on the point identifications and the plane identification extracted in the above steps. By way of example only, take the point identifications being the anterior commissure identification point AC and the posterior commissure identification point PC and the plane identification being the midsagittal plane MSP as an example to explain the construction of the Talairach coordinate system. Specifically, AC can be taken as the origin of the coordinate system, and the direction from PC to AC is defined as the direction of the Y-axis; the axis perpendicular to the midsagittal plane and passing through the AC point is defined as the X-axis, with the positive direction defined as from the right side of the brain to the left; the axis perpendicular to the X-Y plane and passing through the AC point is defined as the Z-axis, with the positive direction from the feet to the head. In this way, the brain coordinate system (Talairach coordinate system) shown in FIG. 9 can be constructed.
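By way of example only, the axis construction described above can be sketched as follows; orthogonalizing the MSP normal against the PC-to-AC direction and the final sign conventions (checked against the patient orientation) are our own implementation choices.

    import numpy as np

    def talairach_frame(ac, pc, msp_normal):
        """Build the Talairach axes from AC, PC, and the MSP normal.

        Returns the origin (AC) and unit X/Y/Z axes: Y points from PC to AC,
        X follows the MSP normal (oriented right-to-left), and Z completes
        the frame (feet-to-head).
        """
        y = (ac - pc) / np.linalg.norm(ac - pc)
        x = msp_normal - np.dot(msp_normal, y) * y    # keep X perpendicular to Y
        x /= np.linalg.norm(x)
        z = np.cross(x, y)                            # perpendicular to both
        return ac, x, y, z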
Step 350: Determine the identification points of the cerebral cortex according to the segmentation result of the cerebral cortex, the target coordinate system, and/or the point identification. As shown in FIG. 4, the cerebral cortex identification points 460 can be extracted according to the region identification 441, the point identification 442, and the brain coordinate system 450. In some embodiments, step 350 may be performed by the identification positioning module 230. In some embodiments, the identification points of the cerebral cortex may include at least one of the most anterior point, the most posterior point, the leftmost point, the rightmost point, the lowest point, and the uppermost point of the cerebral cortex.
An identification point refers to one or more points used for identification that can represent the spatial structure and/or spatial position of a target area, for example, the Talairach cortical identification points. The Talairach cortical identification points are the most anterior, most posterior, leftmost, rightmost, lowest, and uppermost points of the brain (excluding the scalp and cerebrospinal fluid) in the Talairach coordinate space (directions are described with respect to the patient coordinate system); there are six points in total, called the cerebral cortex identification points. Their specific definitions are given in Table 1:
Table 1
Cerebral cortex identification point | Medical standard determination method
Most anterior point of the cerebral cortex (AP) | Intersection of the Y-axis and the cerebral cortex at the front of the brain
Most posterior point of the cerebral cortex (PP) | Intersection of the Y-axis and the cerebral cortex at the back of the brain
Leftmost point of the cerebral cortex (LP) | Intersection of the line passing through the PC and parallel to the X-axis with the left side of the cerebral cortex
Rightmost point of the cerebral cortex (RP) | Intersection of the line passing through the PC and parallel to the X-axis with the right side of the cerebral cortex
Lowest point of the cerebral cortex (IP) | Intersection of the Z-axis and the lower side of the cerebral cortex
Uppermost point of the cerebral cortex (SP) | Intersection of the line passing through the PC and parallel to the Z-axis with the top of the cerebral cortex
By combining the Talairach coordinate system and the cerebral cortex identification points, high-precision atlas mapping and atlas registration that do not depend on matching image gray-level information can be accomplished. Compared with atlas registration methods based on gray-level information, this has higher precision and stronger applicability, and can further be used for the segmentation and localization of brain structures or regions on magnetic resonance images without structural information.
When the specific part includes the brain, in some embodiments, the target area may include the cerebral cortex, the segmentation result of the target area may include the segmentation result of the cerebral cortex, the target coordinate system may be the Talairach coordinate system, the point identification may include the anterior commissure identification point AC and the posterior commissure identification point PC, and the plane identification may include the midsagittal plane; the cerebral cortex identification points can be determined according to the segmentation result of the cerebral cortex, the target coordinate system, and/or the point identification.
In some embodiments, the cerebral cortex contour can be determined from the brain parenchyma segmentation region: the outer contour of the brain parenchyma is the cerebral cortex contour, i.e., the region enclosed by the cerebral cortex contour is the brain parenchyma segmentation region.
In some embodiments, after the cerebral cortex contour is determined, the cerebral cortex identification points can be determined from the cerebral cortex contour and the axes of the previously determined brain coordinate system, for example, the most anterior point AP, the most posterior point PP, the leftmost point LP, the rightmost point RP, the lowest point IP, and the uppermost point SP of the cerebral cortex.
In some embodiments, the cerebral cortex identification points can be determined from the maximum or minimum points of the cerebral cortex along the three coordinate axes of the target coordinate system. For example, the point of the cerebral cortex contour with the largest Y coordinate on the Y-axis of the Talairach coordinate system can be determined as the most anterior point AP; the point with the smallest Y coordinate as the most posterior point PP; and the point with the smallest Z coordinate on the Z-axis as the lowest point IP.
In some embodiments, the maximum or minimum point of the cerebral cortex on a line passing through a point identification and parallel to one of the three coordinate axes of the target coordinate system can be determined as an identification point of the cerebral cortex. For example, on the line passing through the posterior commissure identification point PC and parallel to the X-axis of the Talairach coordinate system, the point of the cerebral cortex contour with the largest X coordinate is determined as the rightmost point RP, and the point with the smallest X coordinate as the leftmost point LP; on the line passing through the posterior commissure identification point and parallel to the Z-axis, the point of the cerebral cortex contour with the largest Z coordinate is determined as the uppermost point SP.
Specifically, the six cerebral cortex identification points can be obtained by computing the maximum and minimum coordinate values of all contour points of the cerebral cortex along the X, Y, and Z axes of the Talairach coordinate system. The set of points on the cerebral cortex contour, i.e., the cortical point set, is obtained first. Then, starting from the anterior commissure point AC, the cortical contour is searched along the positive Y direction of the Talairach coordinate system, and the point whose Y coordinate equals the maximum Y coordinate of the contour is determined as the most anterior point AP; the contour is searched along the negative Y direction, and the point whose Y coordinate equals the minimum Y coordinate is determined as the most posterior point PP; the contour pixels are searched along the negative Z direction, and the point whose Z coordinate equals the minimum Z coordinate is determined as the lowest point IP; on the line passing through the posterior commissure point PC and parallel to the Z-axis, the point whose Z coordinate equals the maximum Z coordinate of the contour is determined as the uppermost point SP; starting from the posterior commissure point PC, the cortical contour pixels are searched along the positive X direction, and the point whose X coordinate equals the minimum X coordinate of the contour is determined as the leftmost point LP; the contour pixels are searched along the negative X direction, and the point whose X coordinate equals the maximum X coordinate is determined as the rightmost point RP.
Generally, cerebral cortex identification points are extracted by segmenting the two-dimensional planar brain tissue in which the identification points lie and then locating them, or by performing 3D cortical segmentation with a three-dimensional deformable model to locate the cortical identification points. However, such methods are very poor in stability and time efficiency: the whole algorithm flow is very complicated, the fault tolerance is low, and the efficiency is low.
The cerebral cortex identification point extraction method described in some embodiments of this specification first extracts the brain parenchyma segmentation region probability map through a neural network model, then determines the brain parenchyma segmentation region from the probability map, determines the cerebral cortex contour from the segmentation region, and finally determines the cerebral cortex identification points from the maximum and minimum coordinate values of the cerebral cortex contour along the axes of the brain coordinate system. In this way, both stability and extraction efficiency are greatly improved, so that the cerebral cortex identification points are determined efficiently and accurately.
The above embodiments have separately described the processes of determining the point identification, plane identification, region identification, and other identifications of the brain (for example, the cerebral cortex identification points). It should still be emphasized that, in the embodiments of this specification, the extraction of point identifications, plane identifications, and region identifications is performed simultaneously: after the probability maps of the different identification types are obtained through the preset neural network model, each probability map is processed by the method described in the corresponding embodiment above to obtain the identification of that type. Going from a probability map to the corresponding identification can be regarded as a post-processing process. In this way, the probability maps of all identification types are extracted first and then passed through the corresponding post-processing, so that all brain identifications of the target subject are extracted as a whole.
In some embodiments, the whole pipeline of obtaining the identification probability maps and post-processing (determining the identification positioning results) can be completed by one model. The input of the model can be an image of a specific part, for example, a brain magnetic resonance image, and the output of the model can be any identification positioning result, for example, a region identification, a point identification, a plane identification, the brain coordinate system, the cerebral cortex identification points, etc. In some embodiments, the model may be a machine learning model, for example, a neural network model such as a CNN or FCN. In some embodiments, the model may be a single model, or may be formed by connecting multiple models in sequence; for example, it may consist of two models connected end to end, where the former model determines and outputs the identification probability maps, and the latter model receives the identification probability maps output by the former model and determines and outputs the identification positioning results. The model can be trained in various ways, for example, by joint training.
FIG. 5 is an exemplary flowchart of a training method of a neural network model according to some embodiments of this specification.
As shown in FIG. 5, the process 500 may include the following steps. In some embodiments, the process 500 may be performed by the second computing device 130.
Step 510: Acquire training sample images and the gold standard image corresponding to each training sample image, where the gold standard images include region identification gold standard images, point identification gold standard images, and/or plane identification gold standard images.
The training sample images, i.e., the training sample set, are the sample images used to train the neural network model. They can be images of various types, for example, CT images, MRI images, etc., and images of various organs/tissues, for example, brain images, cardiac images, etc. In some embodiments, the training sample images may include head magnetic resonance images.
A gold standard image is an image used as the label of a training sample image, and can be the annotated training sample image. In some embodiments, the gold standard images may include region identification gold standard images, point identification gold standard images, and/or plane identification gold standard images.
In some embodiments, the neural network model may be a multi-task network model, and different tasks may extract different types of identifications. Therefore, before training the neural network model, the training sample images and gold standard images corresponding to each type of identification task need to be acquired. To make the identification probability maps of each type output by the neural network more accurate, the diversity of the samples should be enriched as much as possible when acquiring the training sample images for each type of identification task.
In some embodiments, the training sample images may be acquired in various ways. For example, a large number of normal, pathological, and otherwise special brain magnetic resonance images of different subjects and different modalities may be acquired by scanning, from a memory, or the like. As another example, each magnetic resonance image may be processed by scaling, cropping, deformation, etc.
In some embodiments, when the training sample images are acquired, gold standard annotation may be performed on them to obtain the corresponding gold standard images. In some embodiments, the training sample images may be annotated with gold standards by different methods according to the specific type of task.
In some embodiments, for the point identification task, the probability value of the pixel where an anatomical identification point is located in each brain magnetic resonance image may be annotated as a first value, and the probability values of the pixels other than the anatomical identification point as a second value, to obtain the gold standard image corresponding to the point identification task, where the first value can be set to 1 and the second value is determined by an algorithm constructed from the distance between each pixel and the anatomical identification point. Specifically, the anatomical identification points may include the AC and PC points, and the position coordinates of the AC and PC are recorded in the brain magnetic resonance image. There are two anatomical identification points, so two probability maps of the same size as the brain magnetic resonance image need to be generated: one for the AC point and one for the PC point. Taking the AC gold standard probability map as an example, the value of the pixel at the annotated AC position is 1, and the probability values of the remaining pixels decrease with increasing distance from the annotated AC position. The probability is a Gaussian function of the distance, and the probability values calculated according to the Gaussian function are collectively called the second value. The Gaussian function can be determined by the following formula:
    p = e^(-d^2 / (2σ^2))    (5)
where p is the probability value of the pixel; d is the distance between the pixel and the annotated AC or PC point; and σ is the variance parameter of the Gaussian function, which can take any constant, for example, σ = 10. The magnitude of the variance affects the convergence speed of the algorithm and the positioning accuracy of the final point; therefore, to achieve fast convergence early in training and good positioning accuracy late in training, a larger value of σ can be used in the early stage to speed up convergence, σ can be gradually reduced during training, and the point prediction accuracy improved in the later stage.
In some embodiments, for the plane identification task, the probability values of the pixels on the midsagittal plane in each brain magnetic resonance image may be annotated as the first value, and the probability values of the other pixels as a third value, to obtain the gold standard image corresponding to the plane identification task, where the first value, as for the point identification, can be set to 1, and the third value is determined by a preset algorithm constructed from the distance between each pixel and the midsagittal plane. Specifically, the annotation may select, with an annotation tool, pixels located on the midsagittal plane of the brain in a number of magnetic resonance images, and then fit these pixels to obtain the gold standard midsagittal plane equation. In theory, more than 3 selected pixels suffice, but the more, the better the effect; for example, 20 pixels are uniformly annotated on the midsagittal plane of these images for plane fitting. After the midsagittal plane is determined, the remaining pixels of the image are annotated. Since there is only one midsagittal plane, the annotation generates a probability map of the same size as the original image, in which every pixel located on the midsagittal plane has probability value 1, and every pixel not located on the midsagittal plane has a probability value that is larger the closer the pixel is to the midsagittal plane, following the Gaussian distribution defined by Equation 5; the probability values of the pixels can therefore be determined with a method similar to that of the point identification task. The probability values calculated according to the Gaussian function defined in Equation 5 may be collectively called the third value, where d in Equation 5 denotes the distance between the pixel and the midsagittal plane.
In some embodiments, for the region identification task, the probability values of the brain parenchyma pixels in each brain magnetic resonance image may be annotated as the first value, and the probability values of the non-parenchyma pixels as a fourth value, to obtain the gold standard image corresponding to the region identification task, where the first value, as for the point identification, can be set to 1, and the fourth value can be set to 0. Specifically, the brain magnetic resonance image is annotated pixel by pixel to obtain a binary image of the same size as the input magnetic resonance image containing only the values 0 and 1, where a pixel with value 1 belongs to the brain parenchyma and a pixel with value 0 does not. The annotation can be performed with the open source software freesurfer and fine-tuned on the basis of the software's automatic annotation results.
In some embodiments, the branch of the neural network model corresponding to the region identification task can output a 2-channel probability map, representing the predicted probability of the background and the predicted probability of the cerebral cortex (brain parenchyma), respectively. The same gold standard can be used to generate two binary images containing only the values 0 and 1; for example, in the probability map representing the background, the background is 1 and the target is 0, and in the probability map representing the cortex, the background is 0 and the target is 1.
Step 520: Input each training sample image into the initial neural network model to obtain the predicted region identification probability map output by the first branch network layer, the predicted point identification probability map output by the second branch network layer, and/or the predicted plane identification probability map output by the third branch network layer.
The initial neural network model is an untrained neural network model, for example, a CNN, an FCN, etc. In some embodiments, the initial neural network model may be constructed based on the type and number of tasks, among other things.
In some embodiments, the initial neural network model may be constructed before training starts, setting up a shared network layer and branch network layers for the different types of identification tasks. The network layers are mainly composed of convolution layers and also include normalization layers (batch normalization, instance normalization, or group normalization) and activation layers; they can also include pooling layers, transposed convolution layers, up-sampling layers, etc., which are not limited in the embodiments of this specification. For example, all three types of identification tasks, the point identification task, the plane identification task, and the region identification task, can be performed by heatmap regression, so the outputs of the three branches are probability maps of the same size as the input image. Correspondingly, the region identification task outputs the brain parenchyma segmentation region probability map, from which the brain parenchyma segmentation region is determined, the cerebral cortex contour is then determined from the segmentation, and the cerebral cortex identification points are determined from the cerebral cortex contour and the brain coordinate system; the point identification task outputs the anterior commissure probability map and the posterior commissure probability map, from which the two points AC and PC are located; and the plane identification task outputs the midsagittal plane probability map, from which the midsagittal plane is located (equivalent to locating all pixels on the midsagittal plane).
In some embodiments, when the neural network model is constructed, the network structure can adopt any type of fully convolutional network, for example, UNet, VNet, SegNet, a multi-task fully convolutional network with an encoder-decoder structure, etc.
FIG. 8 is a schematic structural diagram of a neural network model according to some embodiments of this specification. FIG. 8 shows a multi-task fully convolutional network with an encoder-decoder structure, including one encoder and three decoders, where the encoder is connected to the decoders through skip connections and a bottom-layer connection.
In some embodiments, the UNet network can be used as the basic model of the multi-task fully convolutional network: the encoding structure (Encoder) of the UNet network serves as the weight sharing layer, and three different decoding structures (Decoder) branch out from it. In practice, the structure of the original UNet network can be adapted to the actual situation, for example, by reducing the number of base channels and the number of downsampling steps, which reduces the resource usage of the algorithm. Although the overall structure of the three decoder branches derived from UNet is the same, the numbers of output channels of the three branches can differ because the tasks differ: the output of branch 1 can be a 2-channel probability map, where each channel corresponds to one probability map, representing the predicted probability of the brain parenchyma segmentation region and the predicted probability of the brain background, respectively; the output of branch 2 can be a 2-channel probability map, representing the predicted probability of the anterior commissure AC point and the predicted probability of the posterior commissure PC point, respectively; and the output of branch 3 can be a 1-channel probability map, representing the predicted probability of points on the midsagittal plane. In some embodiments, the output of branch 1 may be a 1-channel probability map, representing the predicted probability of the brain parenchyma segmentation region or the predicted probability of the brain background.
In some embodiments, before the acquired training sample images are input into the constructed initial neural network model, a normalization operation may be performed on them. Taking a T1 image (magnetic resonance T1-weighted image) as an example of a training sample, the T1 image needs to be normalized before being input into the constructed initial neural network model. The normalization can be done in various ways, which are not limited in this specification; for example, the mean and variance of the input sample are computed, and then, for the grayscale of each pixel of the sample, the mean is subtracted and the result divided by the variance.
In some embodiments, after normalization, the normalized training sample set, i.e., the normalized training sample images, can be input into the initial neural network model to obtain the predicted region identification probability map output by the first branch network layer, the predicted point identification probability map output by the second branch network layer, and the predicted plane identification probability map output by the third branch network layer.
FIG. 7 is a schematic diagram of a training process of a neural network model according to some embodiments of this specification.
As shown in FIG. 7, the training sample set can be input through step 710 into the shared network layer 721 of the multi-task fully convolutional network 720, and the shared network layer 721 extracts the features of the training samples. The three identification tasks then enter their respective branch network layers: the region identification task enters the region identification task branch network layer 725, the point identification task the point identification task branch network layer 726, and the plane identification task the plane identification task branch network layer 727. Each branch then outputs its identification task result, namely the region identification task result 731, the point identification task result 732, and the plane identification task result 733; these identification task results may be the predicted probability maps of the respective identification tasks.
Step 530: Determine the value of the target loss function according to the predicted region identification probability map, the predicted point identification probability map, the predicted plane identification probability map, the region identification gold standard image, the point identification gold standard image, and/or the plane identification gold standard image.
The target loss function is the loss function corresponding to the entire neural network model and can be used to adjust and update the neural network model. In some embodiments, after the identification task results are obtained, the target loss function may be determined based on the identification task results and the corresponding gold standard images.
In some embodiments, the loss function corresponding to each task may be obtained based on each identification task result, i.e., identification probability map, and the target loss function may then be determined based on the obtained one or more loss functions. For example, the value of the target loss function may be determined according to the predicted region identification probability map, the predicted point identification probability map, the predicted plane identification probability map, the region identification gold standard image, the point identification gold standard image, and/or the plane identification gold standard image. For more details on how to determine the target loss function, reference may be made to the description of FIG. 6, which is not repeated here.
Step 540: Adjust the parameters of the initial neural network model according to the value of the target loss function to obtain a trained neural network model.
In some embodiments, after the value of the target loss function is obtained, the parameters of the initial neural network model can be adjusted according to it, for example, using the error backpropagation gradient descent algorithm, where the optimizer can be Adam and the learning rate is set to 10^-4. The above steps are repeated and the value of the target loss function is continuously adjusted until its variation range is smaller than a preset value, yielding the neural network model. A variation range of the value of the target loss function smaller than the preset value indicates that the value of the target loss function has stabilized, i.e., the training process of the neural network model satisfies the convergence condition.
In some embodiments of this specification, the neural network is composed of a shared network layer and different branch networks. In this way, when the probability maps of the different identification tasks are extracted simultaneously, task-specific features are extracted only in the respective branch network layers, while parameters are shared in the shared network layer, which saves running resources and improves the feature extraction efficiency of each identification task. Moreover, designing a different loss function for each task allows the training progress and direction of the neural network to be supervised in a targeted manner, ensuring the training efficiency and accuracy of the neural network.
FIG. 6 is an exemplary flowchart of a training method of a neural network model according to some embodiments of this specification.
As shown in FIG. 6, the process 600 may include the following steps. In some embodiments, the process 600 may be performed by the second computing device 130.
Step 610: Determine the value of the first loss function according to the predicted region identification probability map and the region identification gold standard image.
In some embodiments, the value of the first loss function may be determined according to the predicted region identification probability map and the region identification gold standard image, where the first loss function is the loss function corresponding to the region identification task in the neural network model. For example, as shown in FIG. 7, the first loss function 751 may be determined from the region identification task result 731 and the region identification task gold standard 741 (i.e., the region identification gold standard image).
In some embodiments, the loss function corresponding to the region identification task may be any loss function suitable for segmentation tasks, for example, any one or a combination of the Dice loss, the cross-entropy loss, etc.
In some embodiments, the first loss function may be the Dice loss, which can be calculated by the following formula:
    Loss_Dice = 1 - (2 * Σ(P · Q)) / (ΣP + ΣQ)    (6)
where the sums run over all pixels; Loss_Dice denotes the first loss function; P denotes the gold standard image used for supervised training, i.e., the region identification gold standard image; and Q denotes the predicted image output by the neural network, i.e., the predicted region identification probability map.
In some embodiments, the region identification task may include one or more channel outputs, and the output probability maps may include at least one of a probability map corresponding to the cortex probability, a probability map corresponding to the background probability, etc.; therefore, the first loss function may be composed of one or more Dice losses, where each channel corresponds to one Dice loss.
Step 620: Determine the value of the second loss function according to the predicted point identification probability map and the point identification gold standard image.
In some embodiments, the value of the second loss function may be determined according to the predicted point identification probability map and the point identification gold standard image, where the second loss function is the loss function corresponding to the point identification task in the neural network model. For example, as shown in FIG. 7, the second loss function 752 may be determined from the point identification task result 732 and the point identification task gold standard 742 (i.e., the point identification gold standard image).
Step 630: Determine the value of the third loss function according to the predicted plane identification probability map and the plane identification gold standard image.
In some embodiments, the value of the third loss function may be determined according to the predicted plane identification probability map and the plane identification gold standard image, where the third loss function is the loss function corresponding to the plane identification task in the neural network model. For example, as shown in FIG. 7, the third loss function 753 may be determined from the plane identification task result 733 and the plane identification task gold standard 743 (i.e., the plane identification gold standard image).
In some embodiments, the loss function corresponding to the point identification task and the loss function corresponding to the plane identification task may be any loss function for point detection based on heatmap regression, for example, the mean square error loss (MSE Loss). In some embodiments, the plane identification task can be regarded as the localization of all points on the plane.
In some embodiments, the second loss function and the third loss function may be the MSE loss, which can be calculated by the following formula:
    Loss_MSE = (1/n) * Σ_i (x_i - y_i)^2    (7)
where Loss_MSE denotes the second loss function or the third loss function; x_i denotes the probability value of the i-th pixel in the gold standard image X; y_i denotes the predicted probability value of the i-th pixel in the predicted probability map Y; and n denotes the total number of pixels contained in X and Y.
In some embodiments, the point identification task may include multiple points (for example, AC and PC), so the second loss function is composed of multiple MSE losses, where each point corresponds to one MSE loss.
In some embodiments, the plane identification task may include a single plane, the midsagittal plane, so the third loss function is composed of one MSE loss.
Step 640: Perform weighted summation of the value of the first loss function, the value of the second loss function, and the value of the third loss function to obtain the value of the target loss function.
In some embodiments, after the above steps, the value of the first loss function, the value of the second loss function, and the value of the third loss function may be weighted and summed to obtain the value of the target loss function. For example, as shown in FIG. 7, the target loss function 760 can be obtained from the first loss function 751, the second loss function 752, and the third loss function 753.
In some embodiments, when the weighted sum is computed, the weight of the loss function of each identification task can be set to a fixed value based on experience, or determined by evaluating the uncertainty of the network's output results: the higher the uncertainty of a branch's output, the larger the weight assigned to that branch's loss function.
In some embodiments, the value of the target loss function can be obtained by the following formula:
    Loss_all = w_1 * Loss_Dice,cortex + w_2 * (Loss_MSE,AC + Loss_MSE,PC) + w_3 * Loss_MSE,MSP    (8)
where Loss_all denotes the target loss function, i.e., the total loss function; Loss_Dice,cortex denotes the loss function corresponding to the cortex probability in the region identification task, i.e., the first loss function (here the region identification task includes only the single channel corresponding to the cortex probability); Loss_MSE,AC and Loss_MSE,PC denote the loss functions for AC and PC point localization in the point identification task, whose sum is the second loss function; Loss_MSE,MSP denotes the third loss function, i.e., the loss function for MSP plane localization in the plane identification task; and w_1, w_2, and w_3 are the weights of the three task losses. These three weights can take any values according to the situation of the neural network, be determined from empirical values, or be adjusted according to the actual situation; for example, to balance the difference in magnitude between the Dice loss and the MSE loss, the three weights can be 0.1, 0.9, and 0.9, respectively.
Equation 8 expresses the loss function of only one channel output image (the cortex probability) of the region identification task. If the other channel output image (the background probability) of the region identification task is also included, the formula becomes:
    Loss_all = w_1 * (Loss_Dice,cortex + Loss_Dice,Background) + w_2 * (Loss_MSE,AC + Loss_MSE,PC) + w_3 * Loss_MSE,MSP    (9)
where Loss_Dice,cortex and Loss_Dice,Background denote the loss functions corresponding to the cortex probability and the background probability in the region identification task, and the rest is the same as in Equation 8.
It should be noted that the above descriptions of the process 300, the process 500, and the process 600 are only for example and illustration and do not limit the scope of application of this specification. Various modifications and changes can be made to the above processes by those skilled in the art under the guidance of this specification; however, such modifications and changes remain within the scope of this specification. For example, the order of step 610, step 620, and step 630 can be exchanged arbitrarily.
上文已对基本概念做了描述,显然,对于本领域技术人员来说,上述详细披露仅仅作为示例,而并不构成对本说明书的限定。虽然此处并没有明确说明,本领域技术人员可能会对本说明书进行各种修改、改进和修正。该类修改、改进和修正在本说明书中被建议,所以该类修改、改进、修正仍属于本说明书示范实施例的精神和范围。
同时,本说明书使用了特定词语来描述本说明书的实施例。如“一个实施例”、“一实施例”、和/或“一些实施例”意指与本说明书至少一个实施例相关的某一特征、结构或特点。因此,应强调并注意的是,本说明书中在不同位置两次或多次提及的“一实施例”或“一个实施例”或“一个替代性实施例”并不一定是指同一实施例。此外,本说明书的一个或多个实施例中的某 些特征、结构或特点可以进行适当的组合。
此外,除非权利要求中明确说明,本说明书所述处理元素和序列的顺序、数字字母的使用、或其它名称的使用,并非用于限定本说明书流程和方法的顺序。尽管上述披露中通过各种示例讨论了一些目前认为有用的发明实施例,但应当理解的是,该类细节仅起到说明的目的,附加的权利要求并不仅限于披露的实施例,相反,权利要求旨在覆盖所有符合本说明书实施例实质和范围的修正和等价组合。例如,虽然以上所描述的系统组件可以通过硬件设备实现,但是也可以只通过软件的解决方案得以实现,如在现有的服务器或移动设备上安装所描述的系统。
Likewise, it should be noted that, in order to simplify the presentation of this disclosure and thereby aid understanding of one or more embodiments of the invention, the foregoing description of the embodiments sometimes groups multiple features into a single embodiment, drawing, or description thereof. This method of disclosure, however, does not imply that the subject matter of this specification requires more features than are recited in the claims. Indeed, the features of an embodiment may be fewer than all the features of a single embodiment disclosed above.
Some embodiments use numbers to describe quantities of components and properties. It should be understood that such numbers used in describing embodiments are, in some examples, qualified by the modifiers "about", "approximately", or "substantially". Unless otherwise stated, "about", "approximately", or "substantially" indicates that a variation of ±20% in the stated number is allowed. Accordingly, in some embodiments, the numerical parameters used in the specification and claims are approximations that may change depending on the desired characteristics of individual embodiments. In some embodiments, numerical parameters should take into account the specified significant digits and be rounded in the usual manner. Although the numerical ranges and parameters used to establish the breadth of scope in some embodiments of this specification are approximations, in specific embodiments such values are set as precisely as practicable.
Each patent, patent application, patent application publication, and other material cited in this specification, such as articles, books, specifications, publications, and documents, is hereby incorporated by reference in its entirety, except for any prosecution history inconsistent with or in conflict with the content of this specification, and except for any document (currently or later appended to this specification) that limits the broadest scope of the claims of this specification. It should be noted that, where the description, definitions, and/or use of terms in material accompanying this specification is inconsistent with or in conflict with the content of this specification, the description, definitions, and/or use of terms in this specification shall prevail.
Finally, it should be understood that the embodiments described in this specification serve only to illustrate the principles of its embodiments. Other variations may also fall within the scope of this specification. Accordingly, by way of example and not limitation, alternative configurations of the embodiments of this specification may be regarded as consistent with its teachings. The embodiments of this specification are therefore not limited to those explicitly introduced and described herein.

Claims (20)

  1. A brain identifier localization system, wherein the system comprises a processor configured to execute a method comprising:
    obtaining an image of a brain;
    determining, according to the image and a neural network model, a region identification probability map of the brain, a point identification probability map of the brain, and a surface identification probability map of the brain;
    determining, according to the region identification probability map, the point identification probability map, and the surface identification probability map, a segmentation result of a cerebral cortex of the brain, a point identification of the brain, and a surface identification of the brain, respectively;
    constructing a target coordinate system according to the point identification and the surface identification; and
    determining identification points of the cerebral cortex according to the segmentation result of the cerebral cortex, the target coordinate system, and/or the point identification.
  2. The system of claim 1, wherein the point identification probability map includes an anterior commissure probability map and a posterior commissure probability map, and the point identification includes an anterior commissure identification point and a posterior commissure identification point.
  3. The system of claim 1, wherein the determining, according to the region identification probability map, the point identification probability map, and the surface identification probability map, the segmentation result of the cerebral cortex of the brain, the point identification of the brain, and the surface identification of the brain, respectively, includes:
    determining the position of the pixel corresponding to the maximum probability value in the point identification probability map as the position of the point identification.
  4. The system of claim 1, wherein the surface identification probability map includes a midsagittal plane probability map, and the surface identification includes a midsagittal plane.
  5. The system of claim 1, wherein the determining, according to the region identification probability map, the point identification probability map, and the surface identification probability map, the segmentation result of the cerebral cortex of the brain, the point identification of the brain, and the surface identification of the brain, respectively, includes:
    determining a target point set according to the surface identification probability map; and
    fitting the target point set to obtain the surface identification.
  6. The system of claim 5, wherein the determining a target point set according to the surface identification probability map includes:
    determining a set of pixels in the surface identification probability map whose probabilities are greater than a preset threshold as the target point set.
  7. The system of claim 5, wherein the fitting the target point set to obtain the surface identification includes:
    fitting the target point set according to a random sample consensus method to obtain the surface identification.
  8. The system of claim 7, wherein the random sample consensus method includes:
    performing a plurality of iterations of the following process:
    a) randomly sampling the target point set to determine a subset;
    b) fitting a plane according to the points in the subset;
    c) determining a sum of squared distances from the remaining points in the target point set, other than the subset, to the plane; and
    determining, among the plurality of iterations, the plane corresponding to the iteration with the smallest sum of squared distances as the surface identification.
  9. The system of claim 1, wherein:
    the identification points of the cerebral cortex include at least one of a most anterior point, a most posterior point, a most left point, a most right point, a most inferior point, or a most superior point of the cerebral cortex.
  10. The system of claim 1, wherein the determining identification points of the cerebral cortex according to the segmentation result of the cerebral cortex, the target coordinate system, and/or the point identification includes:
    determining the identification points of the cerebral cortex according to maximum or minimum points of the cerebral cortex along the directions of the three coordinate axes of the target coordinate system; and/or
    determining maximum or minimum points of the cerebral cortex that pass through the point identification along directions parallel to the three coordinate axes of the target coordinate system as the identification points of the cerebral cortex.
  11. The system of claim 1, wherein the neural network model is a multi-task model including a shared network layer and three branch network layers, the three branch network layers including a first branch network layer, a second branch network layer, and a third branch network layer.
  12. The system of claim 11, wherein the first branch network layer is configured to segment the brain and output the region identification probability map.
  13. The system of claim 11, wherein the second branch network layer is configured to perform point identification localization on the brain and output the point identification probability map.
  14. The system of claim 11, wherein the third branch network layer is configured to perform surface identification localization on the brain and output the surface identification probability map.
  15. The system of claim 11, wherein a training process of the neural network model includes:
    obtaining training sample images and gold standard images corresponding to the training sample images, the gold standard images including a region identification gold standard image, a point identification gold standard image, and a surface identification gold standard image;
    inputting the training sample images into an initial neural network model to obtain a predicted region identification probability map output by the first branch network layer, a predicted point identification probability map output by the second branch network layer, and a predicted surface identification probability map output by the third branch network layer, respectively;
    determining a value of a target loss function according to the predicted region identification probability map, the predicted point identification probability map, the predicted surface identification probability map, the region identification gold standard image, the point identification gold standard image, and the surface identification gold standard image; and
    adjusting parameters of the initial neural network model according to the value of the target loss function to obtain a trained neural network model.
  16. The system of claim 15, wherein the determining a value of a target loss function according to the predicted region identification probability map, the predicted point identification probability map, the predicted surface identification probability map, the region identification gold standard image, the point identification gold standard image, and the surface identification gold standard image includes:
    determining a value of a first loss function according to the predicted region identification probability map and the region identification gold standard image;
    determining a value of a second loss function according to the predicted point identification probability map and the point identification gold standard image;
    determining a value of a third loss function according to the predicted surface identification probability map and the surface identification gold standard image; and
    performing a weighted summation of the value of the first loss function, the value of the second loss function, and the value of the third loss function to obtain the value of the target loss function.
  17. The system of claim 1, wherein the image includes an MRI image.
  18. A brain identifier localization system, comprising:
    an obtaining module configured to obtain an image of a brain;
    a probability map determination module configured to determine, according to the image and a neural network model, a region identification probability map of the brain, a point identification probability map of the brain, and a surface identification probability map of the brain; and
    an identification localization module configured to:
    determine, according to the region identification probability map, the point identification probability map, and the surface identification probability map, a segmentation result of a cerebral cortex of the brain, a point identification of the brain, and a surface identification of the brain, respectively;
    construct a target coordinate system according to the point identification and the surface identification; and
    determine identification points of the cerebral cortex according to the segmentation result of the cerebral cortex, the target coordinate system, and/or the point identification.
  19. The system of claim 18, wherein:
    the neural network model is a multi-task model including a shared network layer and three branch network layers, the three branch network layers including a first branch network layer, a second branch network layer, and a third branch network layer;
    the first branch network layer is configured to segment the brain and output the region identification probability map;
    the second branch network layer is configured to perform point identification localization on the brain and output the point identification probability map; and
    the third branch network layer is configured to perform surface identification localization on the brain and output the surface identification probability map.
  20. A non-transitory computer-readable medium comprising executable instructions that, when executed by at least one processor, cause the at least one processor to implement a method comprising:
    obtaining an image of a brain;
    determining, according to the image and a neural network model, a region identification probability map of the brain, a point identification probability map of the brain, and a surface identification probability map of the brain;
    determining, according to the region identification probability map, the point identification probability map, and the surface identification probability map, a segmentation result of a cerebral cortex of the brain, a point identification of the brain, and a surface identification of the brain, respectively;
    constructing a target coordinate system according to the point identification and the surface identification; and
    determining identification points of the cerebral cortex according to the segmentation result of the cerebral cortex, the target coordinate system, and/or the point identification.
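Purely as an illustration of the random sample consensus procedure recited in claim 8 above — a minimal sketch under stated assumptions (NumPy, an SVD-based plane fit, a fixed iteration count), not the claimed implementation:

    import numpy as np

    def ransac_plane(points, n_iter=100, sample_size=3):
        """Fit a plane to an (N, 3) point set; keep the iteration whose non-sampled
        points have the smallest sum of squared distances to the fitted plane."""
        best_plane, best_err = None, np.inf
        for _ in range(n_iter):
            # a) randomly sample a subset of the target point set
            idx = np.random.choice(len(points), sample_size, replace=False)
            subset = points[idx]
            # b) fit a plane through the subset (normal = smallest singular vector)
            centroid = subset.mean(axis=0)
            _, _, vt = np.linalg.svd(subset - centroid)
            normal = vt[-1]
            # c) sum of squared distances of the remaining points to the plane
            rest = np.delete(points, idx, axis=0)
            err = (((rest - centroid) @ normal) ** 2).sum()
            if err < best_err:
                best_plane, best_err = (centroid, normal), err
        return best_plane  # plane of the iteration with the minimal squared-distance sum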
PCT/CN2022/079897 2021-03-10 2022-03-09 Brain identifier positioning system and method WO2022188799A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP22766321.8A EP4293618A1 (en) 2021-03-10 2022-03-09 Brain identifier positioning system and method
US18/464,247 US20230419499A1 (en) 2021-03-10 2023-09-10 Systems and methods for brain identifier localization

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110259731.8A CN112950600B (zh) 2021-03-10 Brain identifier extraction method and apparatus, computer device, and storage medium
CN202110259731.8 2021-03-10

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/464,247 Continuation US20230419499A1 (en) 2021-03-10 2023-09-10 Systems and methods for brain identifier localization

Publications (1)

Publication Number Publication Date
WO2022188799A1 true WO2022188799A1 (zh) 2022-09-15

Family

ID=76229318

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/079897 WO2022188799A1 (zh) 2021-03-10 2022-03-09 脑标识定位系统和方法

Country Status (3)

Country Link
US (1) US20230419499A1 (zh)
EP (1) EP4293618A1 (zh)
WO (1) WO2022188799A1 (zh)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060270926A1 (en) * 2005-02-07 2006-11-30 Qingmao Hu Methods and systems to segment central sulcus and Sylvian fissure
CN109829922A (zh) * 2018-12-20 2019-05-31 Shanghai United Imaging Intelligence Co., Ltd. Brain image reorientation method, apparatus, device, and storage medium
CN111583189A (zh) * 2020-04-16 2020-08-25 Wuhan United Imaging Healthcare Surgical Technology Co., Ltd. Brain nucleus localization method and apparatus, storage medium, and computer device
CN111951272A (zh) * 2020-07-02 2020-11-17 Shanghai United Imaging Intelligence Co., Ltd. Brain image segmentation method and apparatus, computer device, and readable storage medium
CN112950600A (zh) * 2021-03-10 2021-06-11 Wuhan United Imaging Healthcare Surgical Technology Co., Ltd. Brain identifier extraction method and apparatus, computer device, and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XIAO Hua, DAI Fei-Peng, CHEN Xiao-Lan, CHEN Si-Ping: "Implementation of Brain Atlas Registration Based on Talairach Coordinate System", Journal of South China Normal University (Natural Science Edition), vol. 2008, no. 1, 1 February 2008, pages 40-45, ISSN: 1000-5463 *
XULEI YANG, WAI TENG TANG, GABRIEL TJIO, SI YONG YEO, YI SU: "Automatic detection of anatomical landmarks in brain MR scanning using multi-task deep neural networks", Neurocomputing, Elsevier, Amsterdam, NL, 1 April 2019, ISSN: 0925-2312, DOI: 10.1016/j.neucom.2018.10.105 *

Also Published As

Publication number Publication date
US20230419499A1 (en) 2023-12-28
CN112950600A (zh) 2021-06-11
EP4293618A1 (en) 2023-12-20

Similar Documents

Publication Publication Date Title
CN110766730B (zh) Image registration and follow-up evaluation method, storage medium, and computer device
CN110652317B (zh) Automatic localization method for standard planes in prenatal fetal ultrasound volume images
JP5832938B2 (ja) Image processing apparatus, method, and program
US20110216954A1 (en) Hierarchical atlas-based segmentation
Wu et al. Registration of longitudinal brain image sequences with implicit template and spatial–temporal heuristics
WO2022247218A1 (zh) Image registration method based on automatic delineation
KR102442090B1 (ko) Point registration method in a surgical navigation system
Liu et al. Automatic localization of the anterior commissure, posterior commissure, and midsagittal plane in MRI scans using regression forests
Tan et al. An approach to extraction midsagittal plane of skull from brain CT images for oral and maxillofacial surgery
CN112164447B (zh) Image processing method, apparatus, device, and storage medium
CN113822323A (zh) Recognition processing method, apparatus, device, and storage medium for brain scan images
Pujadas et al. Shape-based normalized cuts using spectral relaxation for biomedical segmentation
Bazgir et al. Kidney segmentation using 3D U-Net localized with Expectation Maximization
Aranda et al. A flocking based method for brain tractography
US20230289969A1 (en) Method, system and device of image segmentation
WO2022188799A1 (zh) Brain identifier positioning system and method
CN112950600B (zh) Brain identifier extraction method and apparatus, computer device, and storage medium
US20220108525A1 (en) Patient-specific cortical surface tessellation into dipole patches
JP6788113B2 (ja) 医用画像処理装置、方法およびプログラム
Thai et al. Using Deep Convolutional Neural Network for Mouse Brain Segmentation in DT-MRI
Zhang et al. Transformer-Based Multimodal Fusion for Early Diagnosis of Alzheimer's Disease Using Structural MRI And PET
CN114463288B (zh) Brain medical image scoring method and apparatus, computer device, and storage medium
Mansoori et al. An iterative method for registration of high-resolution cardiac histoanatomical and MRI images
Zhang et al. Multi-modality medical image registration using support vector machines
Zhou et al. Standardized measurement of mid-surface shift of brain based on deep Hough transform

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 22766321; Country of ref document: EP; Kind code of ref document: A1)
WWE Wipo information: entry into national phase (Ref document number: 2022766321; Country of ref document: EP)
NENP Non-entry into the national phase (Ref country code: DE)
ENP Entry into the national phase (Ref document number: 2022766321; Country of ref document: EP; Effective date: 20230912)