CN112184720A - Method and system for segmenting rectus muscle and optic nerve of CT image - Google Patents


Info

Publication number
CN112184720A
Authority
CN
China
Prior art keywords
image
segmentation
optic
shape
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010891689.7A
Other languages
Chinese (zh)
Other versions
CN112184720B (en)
Inventor
陈晓红
杨健
胡国语
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Beijing Tongren Hospital
Original Assignee
Beijing Institute of Technology BIT
Beijing Tongren Hospital
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT and Beijing Tongren Hospital
Priority to CN202010891689.7A
Publication of CN112184720A
Application granted
Publication of CN112184720B
Legal status: Active

Classifications

    • G06T 7/11 — Region-based segmentation (G06T 7/00 Image analysis; G06T 7/10 Segmentation; edge detection)
    • G06F 18/214 — Generating training patterns; bootstrap methods, e.g. bagging or boosting (G06F 18/00 Pattern recognition)
    • G06F 18/251 — Fusion techniques of input or preprocessed data
    • G06T 7/0012 — Biomedical image inspection (G06T 7/0002 Inspection of images, e.g. flaw detection)
    • G06T 2207/10081 — Computed x-ray tomography [CT] (G06T 2207/10 Image acquisition modality)
    • G06T 2207/30041 — Eye; Retina; Ophthalmic (G06T 2207/30004 Biomedical image processing)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

A segmentation method and system for the internal rectus muscle and optic nerve in CT images, which can effectively locate the optic chiasm and optic tract that are not clearly imaged in CT, compensate for the lack of local information in multi-modal fusion, and remarkably improve segmentation accuracy. The method comprises the following steps: (1) constructing a statistical shape model: the statistical shape model is built from a training data set in which the shapes of the anterior visual pathway and the internal rectus muscle are manually delineated; (2) segmentation based on MR/CT image fusion: the shape of a reference MR image is obtained by fitting the statistical shape model to the segmentation result of the MR image, and the CT image is fused with the MR image through elastic registration to obtain an initial segmentation of the anterior visual pathway and the internal rectus muscle; (3) multi-feature-constrained segmentation refinement: a multi-feature constraint surface is extracted from the target CT image, and after the initial segmentation is fitted to this surface, structures not visible in the CT image, including the optic tract and the optic chiasm, are segmented.

Description

Method and system for segmenting rectus muscle and optic nerve of CT image
Technical Field
The invention relates to the technical field of medical image processing, in particular to a method for segmenting an internal rectus muscle and an optic nerve of a CT image and a system for segmenting the internal rectus muscle and the optic nerve of the CT image.
Background
Stereotactic radiosurgery (SRS) and image-guided surgery (IGS) are two techniques commonly used in the treatment of skull-base tumors. Because of the high density of bone, CT is the primary imaging modality in the planning phase and execution of skull-base surgery. In the clinic, a surgeon must rely on extensive clinical experience to accurately locate brain structures in a CT image and keep surgical instruments away from key skull-base structures (nerves, eyeballs, muscles in the eye sockets, etc.). Relying on experience alone is dangerous for the patient. Therefore, automatic segmentation of the anterior visual pathway (optic nerve, optic tract and optic chiasm) and the internal rectus muscle in CT images is critical to improving the accuracy of the procedure and reducing damage to surrounding anatomical structures.
In recent years, segmentation methods have been developed extensively and can be divided into atlas-registration-based and statistical-shape-model-based methods. Bekes et al. propose a geometric-model-based method to segment the eyeball, lens, optic nerve and optic chiasm in CT images; it requires interactive selection of seed points to initialize the segmentation. Huo Y et al. propose a multi-atlas registration segmentation process that includes two steps: (1) affine registration of bone structures to crop the visual-pathway region in the target and atlas set, and (2) deformable registration of the cropped region. However, because of the low soft-tissue contrast of CT images, atlas-registration-based methods cannot accurately segment the visual pathway. Chen and Dawant use multi-atlas registration to segment head and neck organs: the target volume is first aligned globally with the atlas set, and local registration is then achieved by defining a bounding box for each structure. Aghdasi et al. apply a predefined anatomical model to segment visual organs and some brain structures in MR images. In addition, some studies show that the segmentation accuracy of smaller structures such as the optic nerve can be improved by multi-atlas-registration-based methods. Over the past few decades, model-based segmentation methods have also been developed for anterior-visual-pathway segmentation. Noble et al. combine deformable models and atlas registration with local intensity priors to segment the anterior visual pathway. Statistical shape models, which include the active appearance model and the active shape model, are effective for segmenting structures where CT image quality is poor. In summary, SSM (statistical shape model)-based methods are better suited to poor image quality than atlas-registration-based methods.
In other studies, deep learning is also commonly used to segment skull-base tissue. Jose Dolz et al. extract enhanced features in MR images and propose a deep-learning classification scheme for optic nerve, optic chiasm and pituitary segmentation. Ren et al. propose an interleaved 3D-CNN strategy for segmentation of the anterior visual pathway in CT images. In medical image segmentation, U-Net is also widely applied and provides accurate segmentations. However, without a large amount of data, neural-network-based methods cannot accurately segment the anterior visual pathway and the internal rectus muscle.
Prior knowledge plays an important role in the segmentation of CT images. For statistical shape models, the model constructed from training data can be regarded as prior information. Segmentation based on atlas registration depends on both the quality of the target image and the prior information. CT images of soft tissue (e.g., the anterior visual pathway and the internal rectus muscle) suffer from a number of deficiencies, such as low contrast, blurred edges and noise. Even so, a segmentation can be obtained by fitting a statistical shape model even when the extracted object boundary is blurred and fragmented. Furthermore, unlike learning-based approaches, statistical-shape-model-based approaches segment well even when the training set is small. Because the structures of the anterior visual pathway and the internal rectus muscle appear complete in MR data, a statistical shape model can be constructed from MR data as prior information to achieve accurate segmentation of the CT image.
Disclosure of Invention
To overcome the defects of the prior art, the invention provides a method for segmenting the internal rectus muscle and optic nerve of a CT image, which can effectively locate the optic chiasm and optic tract that are not clearly imaged in CT, and can compensate for the lack of local information in multi-modal fusion, thereby remarkably improving segmentation accuracy.
The technical scheme of the invention is as follows: the method for segmenting the internal rectus muscle and the optic nerve of the CT image comprises the following steps:
(1) constructing a statistical shape model: the statistical shape model is built from a training data set in which the shapes of the anterior visual pathway and the internal rectus muscle are manually delineated;
(2) segmentation based on MR/CT image fusion: the shape of a reference MR image is obtained by fitting the statistical shape model to the segmentation result of the MR image, and the CT image is fused with the MR image through elastic registration to obtain an initial segmentation of the anterior visual pathway and the internal rectus muscle;
(3) multi-feature-constrained segmentation refinement: a multi-feature constraint surface is extracted from the target CT image, and after the initial segmentation is fitted to this surface, structures not visible in the CT image, including the optic tract and the optic chiasm, are segmented.
The MR data set is used to construct a prior shape model to assist segmentation of the CT image. Despite the weakness of soft-tissue CT imaging, the invention can effectively locate the optic chiasm and optic tract, which are not clearly imaged in CT; the multi-feature constraint surface compensates for the lack of local information in multi-modal fusion, so that segmentation accuracy is remarkably improved.
Also provided is a system for internal rectus muscle and optic nerve segmentation of CT images, comprising:
a statistical shape model construction module configured to establish shape correspondence across the training data set and construct a statistical shape model of the training shapes using principal component analysis;
an MR/CT image-fusion-based segmentation module configured to obtain the shape of a reference MR image by fitting the statistical shape model to the segmentation result of the MR image, the CT image being fused with the MR image by elastic registration to obtain an initial segmentation of the anterior visual pathway and the internal rectus muscle;
a multi-feature-constrained segmentation refinement module configured to extract a multi-feature constraint surface from the target CT image and, after fitting the initial segmentation to the surface, segment structures not visible in the CT image, including the optic tract and the optic chiasm.
Drawings
Fig. 1 is a flowchart of the method for internal rectus muscle and optic nerve segmentation of a CT image according to the present invention.
Detailed Description
As shown in fig. 1, the method for segmenting the internal rectus muscle and the optic nerve of a CT image comprises the following steps:
(1) constructing a statistical shape model: the statistical shape model is built from a training data set in which the shapes of the anterior visual pathway and the internal rectus muscle are manually delineated;
(2) segmentation based on MR/CT image fusion: the shape of a reference MR image is obtained by fitting the statistical shape model to the segmentation result of the MR image, and the CT image is fused with the MR image through elastic registration to obtain an initial segmentation of the anterior visual pathway and the internal rectus muscle;
(3) multi-feature-constrained segmentation refinement: a multi-feature constraint surface is extracted from the target CT image, and after the initial segmentation is fitted to this surface, structures not visible in the CT image, including the optic tract and the optic chiasm, are segmented.
The MR data set is used to construct a prior shape model to assist segmentation of the CT image. Despite the weakness of soft-tissue CT imaging, the invention can effectively locate the optic chiasm and optic tract, which are not clearly imaged in CT; the multi-feature constraint surface compensates for the lack of local information in multi-modal fusion, so that segmentation accuracy is remarkably improved.
Preferably, in step (1), to construct the statistical shape model, shape correspondence is established across the training data set:
shape correspondence is expressed as a dense mapping between sets of shape points in the MR data set, and the correspondence of two shapes is obtained by pairwise non-rigid registration; for the MR data set of shapes {S_i}, i = 1, ..., N, unbiased point correspondences are obtained by group-wise shape registration, and the group-level similarity measure for shape registration is expressed as formula (1):

E(T_1, \dots, T_N) = \sum_{i=1}^{N} \sum_{j=1}^{N} g_{ij} \, d\big(T_i(S_i),\, T_j(S_j)\big)    (1)

where N is the number of training data, d(·) is the Euclidean distance, T_i is the transform applied to the i-th shape, and g_ij is the connection between the i-th and j-th shapes in the data set;
the connection relations among all shapes are represented by a graphical model, and group-level registration is then realized under the guidance of the graphical model;
after the shape correspondences {S_i'} are obtained, the aligned shapes are analyzed using the generalized equation.
Preferably, step (1) adopts principal component analysis to construct the statistical shape model of the training shapes, performing eigenvalue decomposition on the covariance matrix in which the aligned shapes {S_i'} are vectorized and arranged together; the eigenvectors are sorted in descending order of eigenvalue, and the first few eigenvectors are used to model the shape data; the statistical shape model is formula (2):

\mathrm{vec}(P) = \mathrm{vec}(\bar{P}) + \Phi\, b    (2)

where vec(P) is the vectorized shape matrix P, vec(\bar{P}) is the vectorized mean shape, and the principal eigenmodes form the matrix Φ, which is pre-computed from the training data set; b represents the parameters of the model.
Preferably, in step (2), a reference MR image I_ref and its corresponding segmentation image are randomly selected from the training data; the reference shape is obtained by fitting the statistical shape model to the segmentation result I_T of the reference MR image I_ref, a process called surface fitting that is expressed by formula (3):

\hat{b}, \hat{T} = \arg\min_{b,\,T} \sum_{k} D_T\big(T(p_k)\big)^2 + b^{\top} \mathrm{diag}(\lambda)^{-1} b, \qquad \text{s.t. } |b_i| \le \beta \sqrt{\lambda_i}    (3)

where D_T is the distance transform of I_T, p_k denotes the coordinates of the k-th point on the statistical shape model, and diag(λ) denotes the diagonal matrix composed of the eigenvalues λ; b is constrained to a hyper-rectangle defined by β and λ, where λ_i is the i-th element of λ and b_i is the i-th parameter in b; the first term in formula (3) is the distance from each point on the transformed shape model to the surface I_T and describes the registration error, and the second term is a regularization term on the statistical shape model deformation, penalizing the degree of model deformation.
Preferably, in step (2) the reference MR image I_ref is mapped to the target CT image I_tar by elastic registration of the 3D images, fitting a parameterized deformation field with B-splines; elastic registration of the two images is achieved by solving for the optimal transform T according to formula (4):

\hat{T} = \arg\max_{T} \mathrm{NMI}\big(I_{ref} \circ T,\, I_{tar}\big)    (4)

where NMI(·,·) is the normalized mutual information between the two images.
after the optimized transform is obtained, the deformation field between the MR and CT images is known; the reference shape is then transformed into the target image to realize the fusion of the MR and CT images; the corresponding result is taken as the initial segmentation of the anterior visual pathway and the internal rectus muscle.
Preferably, the anterior visual pathway and the internal rectus muscle in the step (3) are soft tissues corresponding to a specific gray window in the CT image, and according to this feature, a good enhancement effect is obtained by setting appropriate upper and lower thresholds; bilateral filtering is then used to reduce noise in the enhanced image, and the Sobel operator is employed to extract boundary information of the anterior visual pathway and the internal rectus muscle.
Preferably, in step (3), after the optic nerve and internal rectus muscle models are fitted to the multi-feature constraint surface, the optic tract and optic chiasm model is driven; the statistical shape model is driven by optimizing formula (3) so that the transformed model becomes spatially consistent with the multi-feature constraint surface I_S; finally, segmentation of the optic nerve and internal rectus muscle and prediction of the optic tract and optic chiasm are realized.
It will be understood by those skilled in the art that all or part of the steps in the method of the above embodiments may be implemented by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method of the above embodiments; the storage medium may be ROM/RAM, a magnetic disk, an optical disk, a memory card, or the like. Therefore, corresponding to the method of the present invention, the present invention also includes a system for segmentation of the internal rectus muscle and optic nerve of CT images, which is generally represented as functional modules corresponding to the steps of the method. The system comprises:
a statistical shape model construction module configured to establish shape correspondence across the training data set and construct a statistical shape model of the training shapes using principal component analysis;
an MR/CT image-fusion-based segmentation module configured to obtain the shape of a reference MR image by fitting the statistical shape model to the segmentation result of the MR image, the CT image being fused with the MR image by elastic registration to obtain an initial segmentation of the anterior visual pathway and the internal rectus muscle;
a multi-feature-constrained segmentation refinement module configured to extract a multi-feature constraint surface from the target CT image and, after fitting the initial segmentation to the surface, segment structures not visible in the CT image, including the optic tract and the optic chiasm.
Preferably, the MR/CT image-fusion-based segmentation module: randomly selects a reference MR image I_ref and its corresponding segmentation image from the training data; obtains the reference shape by fitting the statistical shape model to the segmentation result I_T of the reference MR image I_ref; maps the reference MR image I_ref to the target CT image I_tar by elastic registration of the 3D images, fitting a parameterized deformation field with B-splines; elastic registration of the two images is achieved by solving for the optimal transform T.
Preferably, the multi-feature constrained segmentation refinement module performs:
the anterior visual pathway and the internal rectus muscle are soft tissues corresponding to a specific gray window in the CT image; according to this feature, a good enhancement effect is obtained by setting appropriate upper and lower thresholds;
then, bilateral filtering is used to reduce noise in the enhanced image, and the Sobel operator is adopted to extract boundary information of the anterior visual pathway and the internal rectus muscle;
after the optic nerve and internal rectus muscle models are fitted to the multi-feature constraint surface, the optic tract and optic chiasm model is driven; the statistical shape model is driven by optimizing formula (3) so that the transformed model becomes spatially consistent with the multi-feature constraint surface I_S; finally, segmentation of the optic nerve and internal rectus muscle and prediction of the optic tract and optic chiasm are realized.
The present invention is described in more detail below.
The invention provides an anatomical shape model based on multi-modal image fusion for segmentation of the low-contrast anterior visual pathway and internal rectus muscle in CT images; the detailed flow is shown in fig. 1. First, the statistical shape model is built from a training data set in which the shapes of the anterior visual pathway and the internal rectus muscle are manually delineated. Second, the shape of the reference MR image is obtained by fitting the statistical shape model to the segmentation result of the MR image. The CT image is then fused with the MR image by elastic registration to obtain an initial segmentation of the anterior visual pathway and the internal rectus muscle. Finally, a multi-feature constraint surface is extracted from the target CT image. After the initial segmentation is fitted to this surface, structures not visible in the CT image, including the optic tract and the optic chiasm, can also be segmented.
The contribution of the proposed method is two-fold. First, the MR data set is used to construct a prior shape model to assist segmentation of the CT image; despite the weakness of soft-tissue CT imaging, the method can effectively locate the optic chiasm and optic tract, which are not imaged clearly in CT. Second, the multi-feature constraint surface compensates for the lack of local information in multi-modal fusion, which effectively improves segmentation accuracy.
(1) Constructing statistical shape models
To build a statistical shape model, shape correspondence must be established across the training data set. Shape correspondence can be represented as a dense mapping between sets of shape points in the MR data set, and the correspondence of two shapes can be obtained by pairwise non-rigid registration. For the MR data set of shapes {S_i}, i = 1, ..., N, unbiased point correspondences can be obtained by group-wise shape registration, and the group-level similarity measure for shape registration is:

E(T_1, \dots, T_N) = \sum_{i=1}^{N} \sum_{j=1}^{N} g_{ij} \, d\big(T_i(S_i),\, T_j(S_j)\big)    (1)

where N is the number of training data, d(·) is the Euclidean distance, T_i is the transform applied to the i-th shape, and g_ij is the connection between the i-th and j-th shapes in the data set. The connection relations among all shapes are represented by a graphical model, and group-level registration can then be achieved under the guidance of the graphical model. Finally, the shape correspondences {S_i'} are obtained.
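As a concrete illustration, the group-level similarity measure described above can be evaluated directly once corresponding landmark sets and connection weights g_ij are given. The following NumPy sketch is illustrative only: the function name, the assumption that the per-shape transforms have already been applied, and the toy inputs are all hypothetical, not the patent's implementation.

```python
import numpy as np

def groupwise_energy(shapes, g):
    """Group-level similarity energy of formula (1) (illustrative form).

    shapes : list of N (K, 3) arrays -- corresponding landmark sets,
             assumed already mapped into a common space by the
             per-shape transforms T_i.
    g      : (N, N) array -- graph weights g_ij linking the i-th and
             j-th shapes of the data set (0 where unconnected).
    """
    n = len(shapes)
    energy = 0.0
    for i in range(n):
        for j in range(n):
            if g[i, j] == 0.0:
                continue
            # d(.) is the Euclidean distance, summed over corresponding points
            d = np.linalg.norm(shapes[i] - shapes[j], axis=1).sum()
            energy += g[i, j] * d
    return energy
```

Minimizing such an energy over the per-shape transforms, with g supplied by the graphical model of connections, is what yields the unbiased point correspondences described above.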
On this basis, the aligned shapes are analyzed using the generalized equation, and the statistical shape model of the training shapes is constructed by principal component analysis. Eigenvalue decomposition is performed on the covariance matrix in which the aligned shapes {S_i'} are vectorized and arranged together, and the eigenvectors are sorted in descending order of eigenvalue. The first few eigenvectors are used to model the shape data. Thus, the statistical shape model can be expressed as:

\mathrm{vec}(P) = \mathrm{vec}(\bar{P}) + \Phi\, b    (2)

where vec(P) is the vectorized shape matrix P, vec(\bar{P}) is the vectorized mean shape, and the principal eigenmodes form the matrix Φ, which is pre-computed from the training data set; b represents the parameters of the model.
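The principal-component construction is standard and can be sketched in NumPy. The function names and the toy data layout (N aligned shapes of K 3-D points) are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def build_ssm(training_shapes, n_modes):
    """PCA statistical shape model, vec(P) = vec(P_mean) + Phi @ b (formula (2)).

    training_shapes : (N, K, 3) array of N aligned training shapes.
    Returns the vectorized mean shape, the matrix Phi of principal
    eigenmodes, and the eigenvalues lambda (variance of each mode).
    """
    n = training_shapes.shape[0]
    x = training_shapes.reshape(n, -1)   # vectorize each shape
    mean = x.mean(axis=0)
    xc = x - mean
    cov = xc.T @ xc / (n - 1)
    w, v = np.linalg.eigh(cov)           # eigendecomposition (ascending)
    order = np.argsort(w)[::-1]          # sort eigenvalues descending
    lam = w[order][:n_modes]             # keep the first few modes
    phi = v[:, order][:, :n_modes]
    return mean, phi, lam

def generate_shape(mean, phi, b):
    """Instantiate a shape from model parameters b."""
    return mean + phi @ b
```

Projecting a training shape onto the retained modes (`b = phi.T @ (x - mean)`) and regenerating it shows how the first few eigenvectors capture the shape variation.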
(2) Segmentation based on MR/CT image fusion
A reference MR image I_ref and its corresponding segmented image are randomly selected from the training data. The reference shape can be obtained by fitting the statistical shape model to the segmentation result I_T of the reference MR image I_ref. This process, called surface fitting, can be expressed as:

\hat{b}, \hat{T} = \arg\min_{b,\,T} \sum_{k} D_T\big(T(p_k)\big)^2 + b^{\top} \mathrm{diag}(\lambda)^{-1} b, \qquad \text{s.t. } |b_i| \le \beta \sqrt{\lambda_i}    (3)

where D_T is the distance transform of I_T, p_k denotes the coordinates of the k-th point on the statistical shape model, and diag(λ) denotes the diagonal matrix composed of the eigenvalues λ. b is constrained to a hyper-rectangle defined by β and λ, where λ_i is the i-th element of λ and b_i is the i-th parameter in b. The first term in equation (3) is the distance from each point on the transformed shape model to the surface I_T and describes the registration error. The second term is a regularization term on the statistical shape model deformation, which penalizes the degree of model deformation.
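A minimal sketch of the surface-fitting idea in equation (3), under simplifying assumptions: the similarity transform T is omitted, the distance transform D_T is stood in for by a callable, and the model parameters b are optimized by projected finite-difference gradient descent within the hyper-rectangle |b_i| ≤ β√λ_i. The names and the optimizer choice are illustrative, not the patent's implementation:

```python
import numpy as np

def fit_shape_to_surface(mean, phi, lam, dist, beta=3.0, iters=200, step=1e-2):
    """Minimize sum_k dist(p_k)^2 + b^T diag(lambda)^-1 b
    subject to |b_i| <= beta * sqrt(lambda_i)   (cf. formula (3)).

    dist : callable mapping a (K, 3) point array to per-point distances
           to the target surface (stand-in for the distance transform D_T).
    """
    n_modes = phi.shape[1]
    b = np.zeros(n_modes)
    bound = beta * np.sqrt(lam)          # hyper-rectangle half-widths

    def cost(b):
        pts = (mean + phi @ b).reshape(-1, 3)
        return np.sum(dist(pts) ** 2) + b @ (b / lam)

    eps = 1e-5
    for _ in range(iters):
        grad = np.empty(n_modes)         # finite-difference gradient
        for i in range(n_modes):
            e = np.zeros(n_modes); e[i] = eps
            grad[i] = (cost(b + e) - cost(b - e)) / (2 * eps)
        # gradient step, then projection back into the hyper-rectangle
        b = np.clip(b - step * grad, -bound, bound)
    return b
```

With a one-mode "radial" model of a circle and a target surface at radius 1.5, the fitted parameter settles near the expected trade-off between the data term and the regularizer.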
The reference MR image I_ref is mapped to the target CT image I_tar by elastic registration of the 3D images, realizing the fusion of the MR/CT images at the same time. The normalized mutual information is taken as the similarity measure between the two images, and a B-spline is used to fit the parametric deformation field. Elastic registration of the two images can be achieved by solving for the optimal transform T, calculated as follows:

\hat{T} = \arg\max_{T} \mathrm{NMI}\big(I_{ref} \circ T,\, I_{tar}\big)    (4)
after obtaining the optimized transformation, a deformation field between the MR and CT images can be obtained. The reference shape is then transformed into the target image to achieve fusion of the MR and CT images. The corresponding result is considered as the initial segmentation result of the anterior visual pathway and the internal rectus muscle.
(3) Multi-feature constrained segmentation refinement
The anterior visual pathway and internal rectus muscle are soft tissues corresponding to a particular gray window in the CT image. Accordingly, a good enhancement effect can be obtained by setting appropriate upper and lower thresholds on this window. The contrast of the anterior visual pathway and the internal rectus muscle in the enhanced image is improved compared with the original CT image. Bilateral filtering is then used to reduce noise in the enhanced image, and the Sobel operator is employed to extract boundary information of the anterior visual pathway and the internal rectus muscle. A constraint on the size of connected components effectively eliminates the effect of noise, and a constraint from the initial segmentation ensures that most of the extracted surface belongs to the anterior visual pathway and the internal rectus muscle.
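The gray-window enhancement and Sobel boundary extraction can be sketched as follows. The window limits and the pure-NumPy 2-D Sobel are illustrative; a real pipeline would apply bilateral filtering between the two steps, omitted here for brevity:

```python
import numpy as np

def window_enhance(ct_slice, lo, hi):
    """Soft-tissue gray-window enhancement: clip intensities to the
    lower/upper thresholds [lo, hi] and rescale to [0, 1]."""
    out = np.clip(ct_slice.astype(float), lo, hi)
    return (out - lo) / (hi - lo)

def sobel_edges(img):
    """Boundary extraction with the Sobel operator (2-D sketch)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = pad[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)   # horizontal gradient
            gy[i, j] = np.sum(patch * ky)   # vertical gradient
    return np.hypot(gx, gy)                 # gradient magnitude
```

Thresholding the resulting gradient magnitude and keeping only sufficiently large connected components near the initial segmentation gives a boundary set of the kind used for the multi-feature constraint surface.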
After the optic nerve and internal rectus muscle models are fitted to the multi-feature constraint surface, the optic tract and optic chiasm model can also be driven. The statistical shape model is driven by optimizing formula (3) so that the transformed model becomes spatially consistent with the multi-feature constraint surface I_S. Finally, segmentation of the optic nerve and internal rectus muscle and prediction of the optic tract and optic chiasm are realized.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the present invention in any way, and all simple modifications, equivalent variations and modifications made to the above embodiment according to the technical spirit of the present invention still belong to the protection scope of the technical solution of the present invention.

Claims (10)

1. A method for segmenting the internal rectus muscle and optic nerve of a CT image, characterized by comprising the following steps:
(1) constructing a statistical shape model: the statistical shape model is built from a training data set in which the shapes of the anterior visual pathway and the internal rectus muscle are manually delineated;
(2) segmentation based on MR/CT image fusion: the shape of a reference MR image is obtained by fitting the statistical shape model to the segmentation result of the MR image, and the CT image is fused with the MR image through elastic registration to obtain an initial segmentation of the anterior visual pathway and the internal rectus muscle;
(3) multi-feature-constrained segmentation refinement: a multi-feature constraint surface is extracted from the target CT image, and after the initial segmentation is fitted to this surface, structures not visible in the CT image, including the optic tract and the optic chiasm, are segmented.
2. The method for segmentation of the internal rectus muscle and optic nerve of a CT image as claimed in claim 1, wherein in step (1), to construct the statistical shape model, shape correspondence is established across the training data set:
shape correspondence is expressed as a dense mapping between sets of shape points in the MR data set, and the correspondence of two shapes is obtained by pairwise non-rigid registration; for the MR data set of shapes {S_i}, i = 1, ..., N, unbiased point correspondences are obtained by group-wise shape registration, and the group-level similarity measure for shape registration is expressed as formula (1):

E(T_1, \dots, T_N) = \sum_{i=1}^{N} \sum_{j=1}^{N} g_{ij} \, d\big(T_i(S_i),\, T_j(S_j)\big)    (1)
where N is the number of training data, d(·) is the Euclidean distance, and g_ij is the connection between the i-th and j-th shapes in the data set;
the connection relations among all shapes are represented by a graphical model, and group-level registration is then realized under the guidance of the graphical model;
after the shape correspondences {S_i'} are obtained, the aligned shapes are analyzed using the generalized equation.
3. The method for segmenting the internal rectus muscle and optic nerve of a CT image as claimed in claim 2, wherein: in the step (1), the statistical shape model of the training shapes is constructed by principal component analysis: the training shapes are vectorized and arranged together into a matrix, eigenvalue decomposition is carried out on the covariance matrix, and the eigenvectors are sorted in descending order of their eigenvalues; the first few eigenvectors are used to model the shape data, and the statistical shape model is formula (2):

vec(P) = vec(P̄) + Φ b    (2)

where vec(P) is the vectorized shape, vec(P̄) is the vectorized average shape, Φ is the matrix formed by the principal eigenmodes and pre-computed from the training data set, and b represents the parameters of the model.
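A minimal sketch of the PCA construction behind formula (2), assuming the training shapes are given as an (N, n_points, 3) array; function and variable names are illustrative, not from the patent:

```python
import numpy as np

def build_ssm(training_shapes, n_modes=2):
    """Build a statistical shape model vec(P) = vec(P_bar) + Phi @ b
    by principal component analysis of vectorized training shapes."""
    X = training_shapes.reshape(len(training_shapes), -1)  # vectorize each shape
    mean = X.mean(axis=0)                                  # vec(P_bar)
    Xc = X - mean
    cov = Xc.T @ Xc / (len(X) - 1)                         # sample covariance
    vals, vecs = np.linalg.eigh(cov)                       # ascending eigenvalues
    order = np.argsort(vals)[::-1]                         # sort descending
    lam = vals[order[:n_modes]]                            # leading eigenvalues
    phi = vecs[:, order[:n_modes]]                         # principal eigenmodes Phi
    return mean, phi, lam

def synthesize(mean, phi, b):
    """Instance of the model for parameter vector b."""
    return mean + phi @ b

rng = np.random.default_rng(0)
shapes = rng.normal(size=(10, 5, 3))
mean, phi, lam = build_ssm(shapes, n_modes=3)
print(np.allclose(synthesize(mean, phi, np.zeros(3)), mean))  # True: b = 0 gives the mean shape
```

Setting b = 0 reproduces the average shape, while varying each b_i moves the shape along the corresponding eigenmode.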
4. The method for segmenting the internal rectus muscle and optic nerve of a CT image as claimed in claim 3, wherein: in the step (2), a reference MR image I_ref and its corresponding segmentation image are randomly selected from the training data; the reference shape is obtained by fitting the statistical shape model to the segmentation surface I_T of the reference MR image I_ref; this process is called surface fitting and is expressed by formula (3):

min_b Σ_i D_T(v_i(b))² + bᵀ diag(λ)⁻¹ b,  subject to −β·√λ_i ≤ b_i ≤ β·√λ_i    (3)

where D_T is the distance transform of I_T, v_i(b) denotes the coordinates of the points on the statistical shape model, and diag(λ) represents the diagonal matrix composed of the eigenvalues λ; b is constrained in a hyper-rectangle defined by β and λ, where λ_i is the i-th element of λ and b_i is the i-th parameter in b; the first term in formula (3) is the distance from each point on the transformed shape model to the surface I_T and describes the registration error, and the second term is a regularization term on the statistical shape model deformation, penalizing the degree of model deformation.
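The surface-fitting energy and the hyper-rectangle constraint on b can be sketched as below. The distance transform is abstracted as a callable `dist_fn`, and all names are illustrative assumptions rather than the patented code:

```python
import numpy as np

def clip_to_hyperrectangle(b, lam, beta=3.0):
    """Constrain each b_i to [-beta*sqrt(lam_i), beta*sqrt(lam_i)]."""
    bound = beta * np.sqrt(lam)
    return np.clip(b, -bound, bound)

def fitting_energy(b, mean, phi, lam, dist_fn):
    """Surface-fitting energy in the spirit of formula (3):
    squared distance of each model point to the target surface
    (via the distance transform dist_fn) plus the regularization
    term b^T diag(lam)^-1 b."""
    pts = (mean + phi @ b).reshape(-1, 3)   # model instance points
    data = np.sum(dist_fn(pts) ** 2)        # registration error term
    reg = b @ np.diag(1.0 / lam) @ b        # deformation penalty
    return data + reg

# Example: two model points already on the unit sphere, no modes active.
mean = np.array([1.0, 0.0, 0.0, 0.0, 1.0, 0.0])
phi = np.zeros((6, 2))
lam = np.ones(2)
sphere_dist = lambda p: np.linalg.norm(p, axis=1) - 1.0  # signed distance to unit sphere
print(fitting_energy(np.zeros(2), mean, phi, lam, sphere_dist))  # 0.0
```

Minimizing this energy over the clipped b trades off surface agreement against how far the instance strays from the statistically plausible shape space.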
5. The method for segmenting the internal rectus muscle and optic nerve of a CT image as claimed in claim 4, wherein: in the step (2), the reference MR image I_ref is mapped to the target CT image I_tar by elastic registration of the 3D images, with a B-spline used to fit the parameterized deformation field; elastic registration of the two images is achieved by solving for the optimal transformation T according to formula (4):

T̂ = argmin_T C(I_tar, I_ref ∘ T)    (4)

where C(·,·) denotes the registration cost between the target image and the transformed reference image; after the optimized transformation is obtained, the deformation field between the MR and CT images is obtained; the reference shape is then transformed into the target image to realize the fusion of the MR and CT images, and the corresponding result is taken as the initial segmentation result of the anterior visual pathway and the internal rectus muscle.
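The B-spline parameterization of the deformation field can be illustrated in one dimension. This sketch shows only how control-point coefficients are interpolated by cubic B-spline basis functions; the cost optimization of formula (4) is omitted, and all names are hypothetical:

```python
import numpy as np

def cubic_bspline(u):
    """The four cubic B-spline basis functions B0..B3 evaluated at u in [0, 1)."""
    return np.array([
        (1 - u) ** 3 / 6.0,
        (3 * u ** 3 - 6 * u ** 2 + 4) / 6.0,
        (-3 * u ** 3 + 3 * u ** 2 + 3 * u + 1) / 6.0,
        u ** 3 / 6.0,
    ])

def displacement_1d(x, coeffs, spacing):
    """Displacement at position x of a 1-D B-spline deformation field
    with control-point coefficients `coeffs` on a uniform grid of `spacing`
    (assumes control points i-1 .. i+2 exist)."""
    i = int(np.floor(x / spacing))   # index of the control-point cell
    u = x / spacing - i              # local coordinate within the cell
    w = cubic_bspline(u)
    return float(w @ coeffs[i - 1:i + 3])

coeffs = np.full(10, 2.0)  # uniform control-point displacements
print(displacement_1d(3.7, coeffs, spacing=1.0))  # ~2.0: the basis functions sum to 1
```

In 3D the same idea is applied per axis with a tensor product of the basis functions, and the coefficients become the free parameters of the transformation T being optimized.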
6. The method for segmenting the internal rectus muscle and optic nerve of a CT image as claimed in claim 5, wherein: in the step (3), the anterior visual pathway and the internal rectus muscle are soft tissues corresponding to a specific gray window in the CT image; according to this characteristic, a good enhancement effect is obtained by setting appropriate upper and lower thresholds; bilateral filtering is then used to reduce noise in the enhanced image, and the Sobel operator is employed to extract the boundary information of the anterior visual pathway and the internal rectus muscle.
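The gray-window enhancement and Sobel edge extraction of claim 6 can be sketched on a 2-D slice as follows (bilateral filtering is omitted for brevity; threshold values and names are illustrative assumptions):

```python
import numpy as np

def window_enhance(img, lower, upper):
    """Soft-tissue enhancement: clip intensities to the [lower, upper]
    gray window and rescale to [0, 1]."""
    w = np.clip(img, lower, upper)
    return (w - lower) / float(upper - lower)

def sobel_edges(img):
    """Gradient magnitude with the 3x3 Sobel operator on a 2-D slice
    (border pixels are left at zero)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            gx[y, x] = np.sum(kx * patch)
            gy[y, x] = np.sum(ky * patch)
    return np.hypot(gx, gy)

img = np.zeros((5, 5))
img[:, 3:] = 1.0                 # a vertical step edge
edges = sobel_edges(img)
print(edges[2, 2])               # 4.0: strong response at the edge
```

In practice the thresholds would be chosen from the CT gray window of the optic nerve and rectus muscles, and the resulting edge map contributes to the multi-feature constraint surface used in claim 7.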
7. The method for segmenting the internal rectus muscle and optic nerve of a CT image as claimed in claim 6, wherein: in the step (3), after the optic nerve and internal rectus muscle models are fitted to the multi-feature constraint surface, the optic tract and optic chiasm model is driven; the statistical shape model is driven by optimizing formula (3) so that the transformed model becomes spatially consistent with the multi-feature constraint surface I_S; finally, segmentation of the optic nerve and internal rectus muscle parts and prediction of the optic tract and optic chiasm parts are realized.
8. A system for segmenting the internal rectus muscle and optic nerve of a CT image, characterized in that it comprises: a statistical shape model construction module configured to establish shape correspondence for the training data set and construct a statistical shape model of the training shapes using principal component analysis;
an MR/CT image fusion-based segmentation module configured to obtain the shape of a reference MR image by fitting the statistical shape model to the segmentation result of the MR image, the CT image being fused with the MR image by elastic registration to obtain an initial segmentation result of the anterior visual pathway and the internal rectus muscle;
a multi-feature constrained segmentation refinement module configured to obtain a multi-feature constraint surface from the target CT image and, after fitting the initial segmentation result to the surface, segment structures not visible in the CT image, including the optic tract and the optic chiasm.
9. The system for segmenting the internal rectus muscle and optic nerve of a CT image as set forth in claim 8, wherein the MR/CT image fusion-based segmentation module performs: randomly selecting a reference MR image I_ref and its corresponding segmentation image from the training data; obtaining the reference shape by fitting the statistical shape model to the segmentation surface I_T of the reference MR image I_ref; mapping the reference MR image I_ref to the target CT image I_tar by elastic registration of the 3D images, with a B-spline used to fit the parameterized deformation field; elastic registration of the two images is achieved by solving for the optimal transformation T.
10. The system for segmenting the internal rectus muscle and optic nerve of a CT image as set forth in claim 9, wherein the multi-feature constrained segmentation refinement module performs:
the anterior visual pathway and the internal rectus muscle are soft tissues corresponding to a specific gray window in the CT image; according to this characteristic, a good enhancement effect is obtained by setting appropriate upper and lower thresholds; bilateral filtering is then used to reduce noise in the enhanced image, and the Sobel operator is employed to extract the boundary information of the anterior visual pathway and the internal rectus muscle;
after the optic nerve and internal rectus muscle models are fitted to the multi-feature constraint surface, the optic tract and optic chiasm model is driven; the statistical shape model is driven by optimizing formula (3) so that the transformed model becomes spatially consistent with the multi-feature constraint surface I_S; finally, segmentation of the optic nerve and internal rectus muscle parts and prediction of the optic tract and the optic chiasm are realized.
CN202010891689.7A 2020-08-27 2020-08-27 Method and system for segmenting internal rectus muscle and optic nerve of CT image Active CN112184720B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010891689.7A CN112184720B (en) 2020-08-27 2020-08-27 Method and system for segmenting internal rectus muscle and optic nerve of CT image


Publications (2)

Publication Number Publication Date
CN112184720A true CN112184720A (en) 2021-01-05
CN112184720B CN112184720B (en) 2024-04-23

Family

ID=73925308

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010891689.7A Active CN112184720B (en) 2020-08-27 2020-08-27 Method and system for segmenting internal rectus muscle and optic nerve of CT image

Country Status (1)

Country Link
CN (1) CN112184720B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113744271A (en) * 2021-11-08 2021-12-03 四川大学 Neural network-based automatic optic nerve segmentation and compression degree measurement and calculation method
CN114974518A (en) * 2022-04-15 2022-08-30 浙江大学 Multi-mode data fusion lung nodule image recognition method and device
CN116258671A (en) * 2022-12-26 2023-06-13 中山大学肿瘤防治中心(中山大学附属肿瘤医院、中山大学肿瘤研究所) MR image-based intelligent sketching method, system, equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100266170A1 (en) * 2009-04-20 2010-10-21 Siemens Corporate Research, Inc. Methods and Systems for Fully Automatic Segmentation of Medical Images
CN108898606A (en) * 2018-06-20 2018-11-27 中南民族大学 Automatic division method, system, equipment and the storage medium of medical image
CN108961256A (en) * 2018-07-05 2018-12-07 艾瑞迈迪医疗科技(北京)有限公司 Image partition method, operation navigation device, electronic equipment and storage medium
CN109961446A (en) * 2019-03-27 2019-07-02 深圳视见医疗科技有限公司 CT/MR three-dimensional image segmentation processing method, device, equipment and medium
CN110163847A (en) * 2019-04-24 2019-08-23 艾瑞迈迪科技石家庄有限公司 Liver neoplasm dividing method and device based on CT/MR image



Also Published As

Publication number Publication date
CN112184720B (en) 2024-04-23

Similar Documents

Publication Publication Date Title
US11455732B2 (en) Knowledge-based automatic image segmentation
WO2021088747A1 (en) Deep-learning-based method for predicting morphological change of liver tumor after ablation
CN112184720B (en) Method and system for segmenting internal rectus muscle and optic nerve of CT image
Mansoor et al. Deep learning guided partitioned shape model for anterior visual pathway segmentation
Zhu et al. Automatic segmentation of the left atrium from MR images via variational region growing with a moments-based shape prior
Wang et al. Multi-atlas segmentation without registration: a supervoxel-based approach
CN110993065B (en) Brain tumor keyhole surgery path planning method based on image guidance
US9727975B2 (en) Knowledge-based automatic image segmentation
CN113538533B (en) Spine registration method, device and equipment and computer storage medium
WO2023125828A1 (en) Systems and methods for determining feature points
Morra et al. Automatic subcortical segmentation using a contextual model
CN115082493A (en) 3D (three-dimensional) atrial image segmentation method and system based on shape-guided dual consistency
CN111080676B (en) Method for tracking endoscope image sequence feature points through online classification
EP4293618A1 (en) Brain identifier positioning system and method
CN113222979A (en) Multi-map-based automatic skull base foramen ovale segmentation method
CN115461790A (en) Method and apparatus for classifying structure in image
Yao et al. Statistical location model for abdominal organ localization
Sun et al. Using cortical vessels for patient registration during image-guided neurosurgery: a phantom study
Prasad et al. Skull-stripping with machine learning deformable organisms
Wodzinski et al. Application of demons image registration algorithms in resected breast cancer lodge localization
Sun A Review of 3D-2D Registration Methods and Applications based on Medical Images
Yao et al. Non-rigid registration and correspondence finding in medical image analysis using multiple-layer flexible mesh template matching
CN107730544A (en) Cerebral vessels registration arrangement based on ICP
Lankton Localized statistical models in computer vision
Kun Dense correspondence and statistical shape reconstruction of fractured, incomplete skulls

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant