CN112184720B - Method and system for segmenting internal rectus muscle and optic nerve of CT image - Google Patents


Info

Publication number
CN112184720B
CN112184720B (application CN202010891689.7A)
Authority
CN
China
Prior art keywords
image
segmentation
shape
model
rectus muscle
Prior art date
Legal status
Active
Application number
CN202010891689.7A
Other languages
Chinese (zh)
Other versions
CN112184720A (en)
Inventor
陈晓红
杨健
胡国语
Current Assignee
Beijing Institute of Technology BIT
Beijing Tongren Hospital
Original Assignee
Beijing Institute of Technology BIT
Beijing Tongren Hospital
Priority date
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT, Beijing Tongren Hospital filed Critical Beijing Institute of Technology BIT
Priority to CN202010891689.7A
Publication of CN112184720A
Application granted
Publication of CN112184720B
Legal status: Active


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/251Fusion techniques of input or preprocessed data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30041Eye; Retina; Ophthalmic


Abstract

The method and system for segmenting the internal rectus muscle and optic nerve in CT images can effectively locate the optic chiasm and optic tracts, which are not clearly imaged in CT, and compensate for the lack of local information in multi-modal fusion, thereby markedly improving segmentation accuracy. The method comprises the following steps: (1) constructing a statistical shape model: the statistical shape model is built from a training dataset in which the shapes of the anterior visual pathway and the internal rectus muscle are manually delineated; (2) segmentation based on MR/CT image fusion: the shape of the reference MR image is obtained by fitting the statistical shape model to the segmentation result of the MR image, and the CT image is fused with the MR image by elastic registration to obtain an initial segmentation of the anterior visual pathway and the internal rectus muscle; and (3) multi-feature-constrained segmentation refinement: a multi-feature constraint surface is obtained from the target CT image, and after the initial segmentation is fitted to this surface, structures that are not visible in the CT image, including the optic tracts and the optic chiasm, are segmented.

Description

Method and system for segmenting internal rectus muscle and optic nerve of CT image
Technical Field
The invention relates to the technical field of medical image processing, and in particular to a method and a system for segmenting the internal rectus muscle and optic nerve in CT images.
Background
Stereotactic radiosurgery (SRS) and image-guided surgery (IGS) are two techniques commonly used in the treatment of skull base tumors. Because it images high-density bone well, CT is the primary imaging modality in both the planning stage and the course of skull base surgery. In clinical practice, surgeons must rely on extensive experience to accurately locate brain structures in CT images and avoid damaging critical skull base structures (nerves, the eyeball, the orbital muscles, etc.) with surgical instruments. This reliance on experience alone puts the patient at considerable risk. Automatic segmentation of the anterior visual pathway (optic nerve, optic tract, and optic chiasm) and the internal rectus muscle in CT images is therefore critical for improving surgical accuracy and reducing trauma to surrounding anatomy.
In recent years, segmentation methods have developed widely and can be broadly classified into atlas-registration-based and statistical-shape-model-based approaches. Bekes et al. proposed a geometric-model-based method to segment the eyeball, lens, optic nerve, and optic chiasm in CT images; it requires interactively selected seed points to initialize the segmentation. Huo et al. proposed a multi-atlas registration pipeline comprising two steps: (1) affine registration of bone structures to crop the visual pathway region in the target and atlas images, and (2) deformable registration of the cropped regions. However, because of the low soft-tissue contrast of CT, atlas-registration-based methods cannot accurately segment the visual pathway. Chen and Dawant used multi-atlas registration to segment head and neck organs: the target volume is first globally aligned with the atlases, and local registration is then achieved by defining a bounding box for each structure. Aghdasi et al. applied a predefined anatomical model to segment the visual organs and some brain structures in MR images. Other studies have shown that multi-atlas registration can improve the segmentation accuracy of small structures such as the optic nerve. Model-based methods for anterior visual pathway segmentation have also been developed extensively over the past few decades: Noble et al. combined a deformable model and atlas registration with local intensity priors to segment the anterior visual pathway. Statistical shape models, which include active appearance models and active shape models, are effective for segmenting structures that are poorly resolved in CT. In summary, SSM (statistical shape model) based methods are better suited to poor image quality than atlas-registration-based methods.
Deep learning has also been applied to skull base tissue segmentation. Jose Dolz et al. extracted enhanced features from MR images and proposed a deep-learning classification scheme for segmenting the optic nerve, optic chiasm, and pituitary gland. Ren et al. proposed an interleaved 3D-CNN strategy for segmenting the anterior visual pathway in CT images. U-Net is also widely used in medical image segmentation and provides accurate results. However, when training data are scarce, neural-network-based methods cannot accurately segment the anterior visual pathway and the internal rectus muscle.
A priori knowledge plays an important role in CT image segmentation. For statistical shape models, the model constructed from training data can be regarded as prior information, while atlas-registration-based segmentation depends on the quality of both the target image and the prior. CT images of soft tissue (e.g., the anterior visual pathway and the internal rectus muscle) suffer from drawbacks such as low contrast, blurred edges, and noise; even so, a segmentation can still be obtained by fitting a statistical shape model when the extracted target boundary is blurred or broken. Furthermore, unlike learning-based methods, statistical-shape-model-based methods perform well even when the training set is small. Since the anterior visual pathway and the internal rectus muscle appear complete in MR data, a statistical shape model can be constructed from MR as prior information to achieve accurate segmentation of the CT image.
Disclosure of Invention
To overcome the shortcomings of the prior art, the technical problem addressed by the invention is to provide a method for segmenting the internal rectus muscle and optic nerve in CT images that can effectively locate the optic chiasm and optic tracts, which are not clearly imaged in CT, and compensate for the lack of local information in multi-modal fusion, thereby markedly improving segmentation accuracy.
The technical scheme of the invention is as follows: the method for segmenting the internal rectus muscle and the optic nerve of the CT image comprises the following steps:
(1) Constructing a statistical shape model: the statistical shape model is composed of a training dataset in which the shape of the anterior visual pathway and the internal rectus muscle are manually delineated;
(2) Segmentation based on MR/CT image fusion: obtaining the shape of the reference MR image by fitting a statistical shape model to the segmentation result of the MR image, fusing the CT image with the MR image by elastic registration to obtain an initial segmentation result of the anterior visual pathway and the internal rectus muscle;
(3) Multi-feature-constrained segmentation refinement: a multi-feature constraint surface is obtained from the target CT image, and after the initial segmentation is fitted to this surface, structures that are not visible in the CT image, including the optic tracts and the optic chiasm, are segmented.
The MR dataset is used to construct a prior shape model that assists the segmentation of structures in CT images; given the weakness of soft-tissue CT imaging, the invention can thereby effectively locate the optic chiasm and optic tracts, which are not clearly imaged in CT. The multi-feature constraint surface effectively compensates for the lack of local information in multi-modal fusion, so that segmentation accuracy is markedly improved.
Also provided is an internal rectus muscle and optic nerve segmentation system of a CT image, comprising:
a statistical shape model construction module configured to establish the shape correspondences of the training dataset and to construct a statistical shape model of the training shapes by principal component analysis;
A segmentation module based on MR/CT image fusion, which is configured to obtain the shape of a reference MR image by fitting a statistical shape model to the segmentation result of the MR image, and to fuse the CT image with the MR image by elastic registration to obtain an initial segmentation result of the anterior visual pathway and the internal rectus muscle;
A multi-feature-constrained segmentation refinement module configured to obtain a multi-feature constraint surface from the target CT image and, after fitting the initial segmentation result to the surface, segment structures not visible in the CT image, including the optic tracts and the optic chiasm.
Drawings
Fig. 1 is a flowchart of an internal rectus muscle and optic nerve segmentation method of a CT image according to the present invention.
Detailed Description
As shown in fig. 1, this method for segmenting internal rectus muscle and optic nerve of CT image includes the steps of:
(1) Constructing a statistical shape model: the statistical shape model is composed of a training dataset in which the shape of the anterior visual pathway and the internal rectus muscle are manually delineated;
(2) Segmentation based on MR/CT image fusion: obtaining the shape of the reference MR image by fitting a statistical shape model to the segmentation result of the MR image, fusing the CT image with the MR image by elastic registration to obtain an initial segmentation result of the anterior visual pathway and the internal rectus muscle;
(3) Multi-feature-constrained segmentation refinement: a multi-feature constraint surface is obtained from the target CT image, and after the initial segmentation is fitted to this surface, structures that are not visible in the CT image, including the optic tracts and the optic chiasm, are segmented.
The MR dataset is used to construct a prior shape model that assists the segmentation of structures in CT images; given the weakness of soft-tissue CT imaging, the invention can thereby effectively locate the optic chiasm and optic tracts, which are not clearly imaged in CT. The multi-feature constraint surface effectively compensates for the lack of local information in multi-modal fusion, so that segmentation accuracy is markedly improved.
Preferably, in step (1), in order to construct the statistical shape model, the shape correspondences of the training dataset are established as follows:
The shape correspondence is represented as a dense mapping between the point sets of shapes in the MR dataset, and the correspondence between two shapes is obtained by pairwise non-rigid registration; for the MR dataset, unbiased point correspondences are obtained by groupwise shape registration, with the group-level similarity metric expressed as formula (1):
where N is the number of training shapes, d(·) is the Euclidean distance, and g_ij is the connection between the ith and jth shapes in the dataset;
The connections among all shapes are represented by a graphical model, and group-level registration is then achieved under the guidance of that graphical model;
After the shape correspondences are obtained, the aligned shapes are analyzed on this basis using generalized Procrustes analysis.
Preferably, in step (1), the statistical shape model of the training shapes is constructed by principal component analysis: the vectorized training shapes are assembled and the covariance matrix is subjected to eigenvalue decomposition, the eigenvectors are sorted in descending order of eigenvalue, the first few eigenvectors are used to model the shape data, and the statistical shape model is formula (2):
where p denotes the vectorized shape vec(P); the vectorized average shape and the matrix Φ of principal eigenmodes are pre-computed from the training dataset, and b represents the parameters of the model.
Preferably, in step (2), a reference MR image I_ref and its corresponding segmented image are randomly selected from the training data; the reference shape is obtained by fitting the statistical shape model to the segmented surface I_T of the reference MR image I_ref, a process called surface fitting, expressed as equation (3):
where D_T is the distance transform of I_T, the transformed model points give the coordinates of points on the statistical shape model, and diag(λ) denotes a diagonal matrix formed from the eigenvalues λ; b is constrained to a hyper-rectangle defined by β and λ, where λ_i is the ith element of λ and b_i is the ith parameter in b; the first term in equation (3) is the sum of distances from each point on the transformed shape model to the surface I_T and describes the registration error, while the second term regularizes the statistical shape model deformation, penalizing the degree of model deformation.
Preferably, in step (2), the deformation field mapping the reference MR image I_ref to the target CT image I_tar is obtained by 3D elastic image registration, and the deformation field is parameterized using B-spline fitting; elastic registration of the two images is achieved by solving for the optimal transformation T, calculated according to equation (4):
Once the optimal transformation is obtained, the deformation field between the MR and CT images is known; the reference shape is then transformed into the target image to realize fusion of the MR and CT images, and the corresponding result is taken as the initial segmentation of the anterior visual pathway and the internal rectus muscle.
Preferably, in step (3), the anterior visual pathway and the internal rectus muscle are soft tissues corresponding to a specific gray-scale window in the CT image; based on this property, a good enhancement effect is obtained by setting appropriate upper and lower thresholds; bilateral filtering is then used to reduce noise in the enhanced image, and the Sobel operator is employed to extract boundary information of the anterior visual pathway and the internal rectus muscle.
Preferably, in step (3), after the optic nerve and internal rectus muscle models are fitted to the multi-feature constraint surface, the optic tract and optic chiasm models are driven as well; the statistical shape model is driven by optimizing formula (3) so that the transformed model is spatially consistent with the multi-feature constraint surface I_S; finally, segmentation of the optic nerve and internal rectus muscle and prediction of the optic tract and optic chiasm are achieved.
Those skilled in the art will understand that all or part of the steps of the above method may be implemented by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium (e.g., ROM/RAM, magnetic disk, optical disc, or memory card), and when executed it carries out the steps of the above method. Accordingly, corresponding to the method, the present invention also includes a system for segmenting the internal rectus muscle and optic nerve of a CT image, generally represented as functional modules corresponding to the steps of the method. The system comprises:
a statistical shape model construction module configured to establish the shape correspondences of the training dataset and to construct a statistical shape model of the training shapes by principal component analysis;
A segmentation module based on MR/CT image fusion, which is configured to obtain the shape of a reference MR image by fitting a statistical shape model to the segmentation result of the MR image, and to fuse the CT image with the MR image by elastic registration to obtain an initial segmentation result of the anterior visual pathway and the internal rectus muscle;
A multi-feature-constrained segmentation refinement module configured to obtain a multi-feature constraint surface from the target CT image and, after fitting the initial segmentation result to the surface, segment structures not visible in the CT image, including the optic tracts and the optic chiasm.
Preferably, the segmentation module based on MR/CT image fusion: randomly selects a reference MR image I_ref and its corresponding segmented image from the training data; obtains the reference shape by fitting the statistical shape model to the segmented surface I_T of the reference MR image I_ref; obtains the deformation field mapping I_ref to the target CT image I_tar by 3D elastic image registration, parameterizing the deformation field with B-spline fitting; and achieves elastic registration of the two images by solving for the optimal transformation T.
Preferably, the multi-feature constraint segmentation refinement module performs:
The anterior visual pathway and the internal rectus muscle are soft tissues corresponding to a specific gray-scale window in the CT image; based on this property, a good enhancement effect is obtained by setting appropriate upper and lower thresholds;
then noise in the enhanced image is reduced by bilateral filtering, and the Sobel operator is employed to extract boundary information of the anterior visual pathway and the internal rectus muscle;
after the optic nerve and internal rectus muscle models are fitted to the multi-feature constraint surface, the optic tract and optic chiasm models are driven as well; the statistical shape model is driven by optimizing formula (3) so that the transformed model is spatially consistent with the multi-feature constraint surface I_S; finally, segmentation of the optic nerve and internal rectus muscle and prediction of the optic tract and optic chiasm are achieved.
The present invention is described in more detail below.
The invention provides an anatomical-shape-model approach based on multi-modal image fusion for segmenting the low-contrast anterior visual pathway and internal rectus muscle in CT images; the detailed workflow is shown in figure 1. First, a statistical shape model is built from a training dataset in which the shapes of the anterior visual pathway and the internal rectus muscle are manually delineated. Next, the shape of the reference MR image is obtained by fitting the statistical shape model to the segmentation result of the MR image. The CT image is then fused with the MR image by elastic registration to obtain the initial segmentation of the anterior visual pathway and the internal rectus muscle. Finally, a multi-feature constraint surface is obtained from the target CT image; after the initial segmentation is fitted to this surface, structures not visible in the CT image, including the optic tracts and the optic chiasm, can also be segmented.
The proposed method makes two contributions. First, the MR dataset is used to construct a prior shape model that assists the segmentation of these structures in CT images; given the weakness of soft-tissue CT imaging, it can effectively locate the optic chiasm and optic tracts, which are not clearly imaged in CT. Second, the multi-feature constraint surface effectively compensates for the lack of local information in multi-modal fusion and thereby improves segmentation accuracy.
(1) Construction of statistical shape model
To construct a statistical shape model, the shape correspondences of the training dataset are needed. A shape correspondence can be represented as a dense mapping between the point sets of shapes in the MR dataset, and the correspondence between two shapes can be obtained by pairwise non-rigid registration. For the MR dataset, unbiased point correspondences can be obtained by groupwise shape registration, with the group-level similarity metric expressed as:
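The display formula is absent here; one plausible form for this group-level metric, consistent with the terms N, d(·), and g_ij defined in the surrounding text (this exact form is an assumption, not the patent's verbatim equation), is:

```latex
E(T_1,\dots,T_N) \;=\; \sum_{i=1}^{N}\sum_{j=1}^{N} g_{ij}\, d\!\big(T_i(S_i),\, T_j(S_j)\big)
\tag{1}
```

where $S_i$ is the ith training shape and $T_i$ its non-rigid transformation; minimizing $E$ jointly aligns all shapes connected in the graphical model.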
where N is the number of training shapes, d(·) is the Euclidean distance, and g_ij is the connection between the ith and jth shapes in the dataset. The connections among all shapes are represented by a graphical model, and group-level registration is then achieved under the guidance of that graphical model. Finally, the shape correspondences are obtained, and on this basis the aligned shapes are analyzed using generalized Procrustes analysis. A statistical shape model of the training shapes is then constructed by principal component analysis: the vectorized training shapes are assembled and the covariance matrix is subjected to eigenvalue decomposition, with the eigenvectors sorted in descending order of eigenvalue. The first few eigenvectors are used to model the shape data. Thus, the statistical shape model can be expressed as:
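The display formula is absent here; given the definitions in the next paragraph, the model takes the standard PCA form (notation assumed):

```latex
p \;=\; \operatorname{vec}(P) \;=\; \bar{p} \;+\; \Phi\, b
\tag{2}
```

where $\bar{p}$ is the vectorized average shape, $\Phi$ holds the principal eigenmodes as columns, and $b$ is the vector of model parameters.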
where p denotes the vectorized shape vec(P); the vectorized average shape and the matrix Φ of principal eigenmodes are pre-computed from the training dataset, and b represents the parameters of the model.
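A minimal numpy sketch of this PCA construction, assuming the training shapes are already in point correspondence (all function and variable names here are illustrative, not from the patent):

```python
import numpy as np

def build_ssm(shapes, n_modes=2):
    """Build a statistical shape model from corresponded training shapes.

    shapes: (N, M, 3) array of N training shapes with M corresponding points.
    Returns the vectorized mean shape, the leading eigenmodes Phi (as
    columns), and their eigenvalues (the lambda used in equation (3)).
    """
    n = shapes.shape[0]
    x = shapes.reshape(n, -1)                 # vectorize each shape: vec(P)
    mean = x.mean(axis=0)                     # vectorized average shape
    cov = np.cov(x - mean, rowvar=False)      # covariance of shape vectors
    vals, vecs = np.linalg.eigh(cov)          # eigendecomposition (ascending)
    order = np.argsort(vals)[::-1]            # re-sort eigenvalues descending
    return mean, vecs[:, order[:n_modes]], vals[order[:n_modes]]

def ssm_instance(mean, phi, b):
    """Formula (2): p = mean + Phi @ b, reshaped back to (M, 3) points."""
    return (mean + phi @ b).reshape(-1, 3)
```

With b = 0 the model reproduces the mean shape; varying each b_i within ±β√λ_i (the hyper-rectangle of equation (3)) generates statistically plausible shapes.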
(2) Segmentation based on MR/CT image fusion
The reference MR image I ref and its corresponding segmented image are randomly selected from the training data. The reference shape may be obtained by fitting a statistical shape model to the segmented surface I T of the reference MR image I ref. This process, called surface fitting, can be expressed as:
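The display formula is absent here; a plausible reconstruction of the surface-fitting objective, consistent with the description that follows (first term: point-to-surface distances via the distance transform D_T; second term: eigenvalue-weighted regularization; the exact form, the point symbol $v_k$, and the weight $\alpha$ are assumptions), is:

```latex
\min_{b}\; \sum_{k} D_T\!\big(v_k(b)\big) \;+\; \alpha\, b^{\top}\operatorname{diag}(\lambda)^{-1} b,
\qquad \text{s.t.}\;\; -\beta\sqrt{\lambda_i} \,\le\, b_i \,\le\, \beta\sqrt{\lambda_i}
\tag{3}
```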
where D_T is the distance transform of I_T, the transformed model points give the coordinates of points on the statistical shape model, and diag(λ) denotes a diagonal matrix formed from the eigenvalues λ. b is constrained to a hyper-rectangle defined by β and λ, where λ_i is the ith element of λ and b_i is the ith parameter in b. The first term in equation (3) is the sum of distances from each point on the transformed shape model to the surface I_T and describes the registration error. The second term regularizes the statistical shape model deformation, penalizing the degree of model deformation.
The deformation field mapping the reference MR image I_ref to the target CT image I_tar is obtained by 3D elastic image registration, which simultaneously achieves fusion of the MR/CT images. Normalized mutual information is used as the similarity measure between the two images, and the deformation field is parameterized using B-spline fitting. Elastic registration of the two images is achieved by solving for the optimal transformation T, calculated as follows:
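The display formula is absent here; since normalized mutual information (NMI) is the stated similarity measure, equation (4) plausibly takes the form (notation assumed):

```latex
T^{*} \;=\; \arg\max_{T}\; \operatorname{NMI}\!\big(T(I_{\mathrm{ref}}),\; I_{\mathrm{tar}}\big)
\tag{4}
```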
Once the optimal transformation is obtained, the deformation field between the MR and CT images is known. The reference shape is then transformed into the target image to achieve fusion of the MR and CT images, and the corresponding result is taken as the initial segmentation of the anterior visual pathway and the internal rectus muscle.
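The normalized mutual information that drives this registration can be estimated from a joint histogram; a dependency-light numpy sketch (function name and bin count are illustrative):

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """Estimate NMI(a, b) = (H(a) + H(b)) / H(a, b) from a joint histogram.

    NMI lies in [1, 2]: 1 for independent images, 2 when either image
    completely determines the other; registration seeks to maximize it.
    """
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint intensity distribution
    px, py = pxy.sum(axis=1), pxy.sum(axis=0) # marginal distributions

    def entropy(p):
        p = p[p > 0]                          # convention: 0 * log 0 = 0
        return -np.sum(p * np.log(p))

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())
```

An optimizer would evaluate this measure on the CT and the B-spline-warped MR at each iteration, adjusting the transform parameters to increase it.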
(3) Multi-feature constraint segmentation refinement
The anterior visual pathway and the internal rectus muscle are soft tissues corresponding to a specific gray-scale window in the CT image. Based on this property, a good enhancement effect can be obtained by setting appropriate upper and lower thresholds, and the contrast of the anterior visual pathway and the internal rectus muscle in the enhanced image is improved relative to the original CT image. Bilateral filtering is then used to reduce noise in the enhanced image, and the Sobel operator is employed to extract boundary information of the anterior visual pathway and the internal rectus muscle. A constraint on connected-component size effectively suppresses the influence of noise, and a constraint from the initial segmentation ensures that most of the extracted surface belongs to the anterior visual pathway and the internal rectus muscle.
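A short sketch of this enhancement-and-boundary step on a 2D slice. The window limits are illustrative, not the patent's values, and a Gaussian filter stands in for the bilateral filter to keep the example dependency-free:

```python
import numpy as np
from scipy import ndimage

def enhance_and_extract_boundary(ct, lower=0.0, upper=80.0, rel_thresh=0.5):
    """Window a CT slice to a soft-tissue range, denoise, extract boundaries.

    lower/upper: assumed soft-tissue thresholds (HU); rel_thresh: fraction of
    the maximum gradient magnitude kept as boundary.
    """
    win = np.clip(ct, lower, upper)
    win = (win - lower) / (upper - lower)      # normalize window to [0, 1]
    smooth = ndimage.gaussian_filter(win, sigma=1.0)  # stand-in for bilateral
    gx = ndimage.sobel(smooth, axis=0)         # Sobel gradient, rows
    gy = ndimage.sobel(smooth, axis=1)         # Sobel gradient, columns
    grad = np.hypot(gx, gy)
    return grad > rel_thresh * grad.max()      # binary boundary mask
```

In the full pipeline the resulting mask would additionally be filtered by connected-component size and by overlap with the initial segmentation, as described above.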
After the optic nerve and internal rectus muscle models are fitted to the multi-feature constraint surface, the optic tract and optic chiasm models can also be driven. The statistical shape model is driven by optimizing formula (3) so that the transformed model is spatially consistent with the multi-feature constraint surface I_S. Finally, segmentation of the optic nerve and internal rectus muscle and prediction of the optic tract and optic chiasm are achieved.
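The fit to the constraint surface can be scored by the first term of formula (3), i.e., the total distance from the transformed model points to the surface, evaluated via a Euclidean distance transform (all names are illustrative):

```python
import numpy as np
from scipy import ndimage

def surface_fit_error(model_points, surface_mask, spacing=1.0):
    """Sum of distances from shape-model points to the constraint surface.

    surface_mask: boolean volume, True on extracted constraint-surface voxels.
    model_points: (M, 3) array of model point coordinates in voxel units.
    """
    # D_T: distance from every voxel to the nearest constraint-surface voxel
    d_t = ndimage.distance_transform_edt(~surface_mask, sampling=spacing)
    idx = np.round(model_points).astype(int)
    idx = np.clip(idx, 0, np.array(surface_mask.shape) - 1)  # stay in bounds
    return float(d_t[tuple(idx.T)].sum())
```

An optimizer over the model parameters b would minimize this error plus the eigenvalue-weighted regularization term, subject to the hyper-rectangle bounds on b.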
The present invention is not limited to the preferred embodiment described above; any modification, equivalent substitution, or improvement made within the technical principles of the present invention falls within its scope of protection.

Claims (6)

1. A method for segmenting the internal rectus muscle and optic nerve of a CT image, characterized by comprising the following steps:
(1) Constructing a statistical shape model: the statistical shape model is composed of a training dataset in which the shape of the anterior visual pathway and the internal rectus muscle are manually delineated;
(2) Segmentation based on MR/CT image fusion: obtaining the shape of the reference MR image by fitting a statistical shape model to the segmentation result of the MR image, fusing the CT image with the MR image by elastic registration to obtain an initial segmentation result of the anterior visual pathway and the internal rectus muscle;
(3) Multi-feature-constrained segmentation refinement: obtaining a multi-feature constraint surface from the target CT image and, after fitting the initial segmentation result to the surface, segmenting structures not visible in the CT image, including the optic tracts and the optic chiasm;
In the step (2), a reference MR image I_ref and its corresponding segmented image are randomly selected from the training data; the reference shape is obtained by fitting the statistical shape model to the segmented surface I_T of the reference MR image I_ref, a process called surface fitting, expressed as equation (3):
where D_T is the distance transform of I_T, the transformed model points give the coordinates of points on the statistical shape model, and diag(λ) denotes a diagonal matrix formed from the eigenvalues λ; b is constrained to a hyper-rectangle defined by β and λ, where λ_i is the ith element of λ and b_i is the ith parameter in b; the first term in equation (3) is the sum of distances from each point on the transformed shape model to the surface I_T and describes the registration error, while the second term regularizes the statistical shape model deformation, penalizing the degree of model deformation;
In the step (2), the deformation field mapping the reference MR image I_ref to the target CT image I_tar is obtained by 3D elastic image registration, and the deformation field is parameterized using B-spline fitting;
elastic registration of the two images is achieved by solving for the optimal transformation T, calculated according to equation (4):
once the optimal transformation is obtained, the deformation field between the MR and CT images is known; the reference shape is then transformed into the target image to realize fusion of the MR and CT images, and the corresponding result is taken as the initial segmentation of the anterior visual pathway and the internal rectus muscle.
2. The method for segmentation of the internal rectus muscle and optic nerve of CT images according to claim 1, wherein: in the step (1), to construct the statistical shape model, the shape correspondence of the training data set is established as follows:
the shape correspondence is represented as a dense mapping between the shape point sets in the MR data set, and the correspondence between two shapes is obtained by pairwise non-rigid registration; for the MR data set, unbiased point correspondences are obtained by group-wise shape registration, whose similarity metric is expressed by formula (1):
where N is the number of training data, d(·,·) is the Euclidean distance, and g_ij is the connection between the ith and jth shapes in the data set;
the connections among all the shapes are represented by a graph model, and group-wise registration is then performed under the guidance of this graph model;
after the shape correspondences are obtained, the aligned shapes are analyzed using generalized Procrustes analysis.
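One plausible reading of the graph model g_ij is sketched below: pairwise dissimilarities are computed as summed Euclidean distances between corresponding points, and a minimum spanning tree over those dissimilarities guides the group-wise registration. The MST choice and all names here are assumptions for illustration, not the patent's specification:

```python
import math

# Illustrative sketch: group-level dissimilarity as summed Euclidean
# distances between corresponding points (the d(.,.) of formula (1)),
# with the graph model taken to be a minimum spanning tree.

def shape_distance(a, b):
    """Sum of Euclidean distances between corresponding points."""
    return sum(math.dist(p, q) for p, q in zip(a, b))

def mst_edges(shapes):
    """Prim's algorithm: connect all shapes with minimum total dissimilarity."""
    n = len(shapes)
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        i, j = min(
            ((i, j) for i in in_tree for j in range(n) if j not in in_tree),
            key=lambda e: shape_distance(shapes[e[0]], shapes[e[1]]),
        )
        edges.append((i, j))
        in_tree.add(j)
    return edges  # edges of the graph guiding group-wise registration

shapes = [
    [(0.0, 0.0), (1.0, 0.0)],
    [(0.1, 0.0), (1.1, 0.0)],   # close to shape 0
    [(5.0, 0.0), (6.0, 0.0)],   # outlier; attaches via its nearest neighbor
]
edges = mst_edges(shapes)       # -> [(0, 1), (1, 2)]
```

Registering each shape only to its graph neighbors avoids biasing the correspondences toward any single reference shape.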
3. The method for segmentation of the internal rectus muscle and optic nerve of CT images according to claim 2, wherein: in the step (1), a statistical shape model of the training shapes is constructed by principal component analysis: each training shape is vectorized, the shape vectors are arranged together into a matrix, and eigenvalue decomposition is applied to that matrix; the eigenvectors are sorted in descending order of their eigenvalues, the first few eigenvectors are used to model the shape data, and the statistical shape model is formula (2): p = p̄ + Φb,
where p represents the vectorized shape vec(P); the vectorized average shape p̄ and the matrix Φ of the principal eigenmodes are pre-computed from the training data set, and b represents the parameters of the model.
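The construction of formula (2) can be sketched on toy data as follows; the data, the single-mode simplification, and all function names are illustrative assumptions, not the patent's training set or code:

```python
# Toy sketch of a PCA statistical shape model, p = p_mean + Phi * b,
# built from vectorized training shapes (illustrative data and names).

def mean_vec(X):
    """Component-wise mean of the vectorized shapes."""
    return [sum(c) / len(X) for c in zip(*X)]

def covariance(X, mu):
    """Sample covariance matrix (divisor len(X) for simplicity)."""
    d = len(mu)
    C = [[0.0] * d for _ in range(d)]
    for x in X:
        for i in range(d):
            for j in range(d):
                C[i][j] += (x[i] - mu[i]) * (x[j] - mu[j]) / len(X)
    return C

def leading_mode(C, iters=100):
    """Power iteration: dominant eigenvector (mode) and its eigenvalue."""
    v = [1.0] * len(C)
    for _ in range(iters):
        w = [sum(C[i][j] * v[j] for j in range(len(C))) for i in range(len(C))]
        norm = sum(x * x for x in w) ** 0.5 or 1.0
        v = [x / norm for x in w]
    lam = sum(v[i] * sum(C[i][j] * v[j] for j in range(len(C)))
              for i in range(len(C)))
    return v, lam

def synthesize(mu, phi, b):
    """Formula (2) with a single mode: p = p_mean + b * phi."""
    return [m + b * p for m, p in zip(mu, phi)]

# Vectorized toy shapes varying only along the first coordinate.
X = [[0.0, 0.0], [2.0, 0.0], [4.0, 0.0]]
mu = mean_vec(X)                       # [2.0, 0.0]
phi, lam = leading_mode(covariance(X, mu))
shape = synthesize(mu, phi, 2.0)       # new shape along the learned mode
```

A real model keeps several modes (the first few eigenvectors) rather than one, and b becomes a vector of per-mode coefficients.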
4. The method for segmentation of the internal rectus muscle and optic nerve of CT images according to claim 3, wherein: the anterior visual pathway and the internal rectus muscle in the step (3) are soft tissues corresponding to a specific gray-scale window in the CT image; based on this characteristic, a good enhancement effect is obtained by setting appropriate upper and lower threshold values; bilateral filtering is then used to reduce noise in the enhanced image, and the Sobel operator is employed to extract the boundary information of the anterior visual pathway and the internal rectus muscle.
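The enhancement-plus-boundary step can be sketched as below. The window bounds are illustrative assumptions (real soft-tissue window limits depend on the protocol), and the bilateral filtering stage is omitted for brevity:

```python
# Sketch of claim 4's pipeline on a tiny 2-D array: clamp HU values to a
# gray window, rescale to [0, 1], then extract boundaries with the Sobel
# operator. Window bounds (-20, 80) are illustrative assumptions.

def window(img, lo=-20.0, hi=80.0):
    """Clamp HU values to [lo, hi] and rescale to [0, 1]."""
    return [[(min(max(v, lo), hi) - lo) / (hi - lo) for v in row]
            for row in img]

def sobel_mag(img):
    """Gradient magnitude via 3x3 Sobel kernels (borders left at 0)."""
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[i][j] * img[y - 1 + i][x - 1 + j]
                     for i in range(3) for j in range(3))
            gy = sum(ky[i][j] * img[y - 1 + i][x - 1 + j]
                     for i in range(3) for j in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# Vertical step: soft tissue (40 HU) next to air-like values (-200 HU).
img = [[40.0, 40.0, -200.0, -200.0] for _ in range(4)]
enhanced = window(img)        # left half -> 0.6, right half -> 0.0
edges = sobel_mag(enhanced)   # strong response along the tissue boundary
```

The Sobel response peaks on the columns straddling the step, which is exactly the boundary information the refinement stage consumes.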
5. The method for segmentation of the internal rectus muscle and optic nerve of CT images according to claim 4, wherein: in the step (3), after the optic nerve and internal rectus muscle models are fitted to the multi-feature constraint surface, the optic tract and optic chiasm models are driven accordingly; the statistical shape model is driven by optimizing formula (3), so that the transformed model is spatially consistent with the multi-feature constraint surface I_S; finally, segmentation of the optic nerve and the internal rectus muscle and prediction of the optic tract and the optic chiasm are realized.
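The kind of energy described for equation (3) — a surface-distance data term plus a deformation regularizer, with b box-constrained by β and λ — can be sketched in a single-mode toy form. The symbols alpha and the toy surface are assumptions for the example, not the patent's exact formulation:

```python
# Toy sketch of an equation-(3)-style fit: minimize surface-distance
# error plus a deformation penalty, with b confined to |b| <= beta*sqrt(lam).
# Single deformation mode; alpha and the toy surface are assumptions.

def energy(b, mu, phi, dist_to_surface, alpha, lam):
    """Data term (squared surface distances) + Mahalanobis-style penalty."""
    data = sum(dist_to_surface(m + b * p) ** 2 for m, p in zip(mu, phi))
    return data + alpha * b * b / lam

def fit(mu, phi, dist_to_surface, lam, beta=3.0, alpha=0.1, steps=2001):
    """Grid search for b over the admissible interval (coarse but robust)."""
    bound = beta * lam ** 0.5
    grid = [-bound + 2 * bound * k / (steps - 1) for k in range(steps)]
    return min(grid, key=lambda b: energy(b, mu, phi,
                                          dist_to_surface, alpha, lam))

# Toy target surface at coordinate 2.0; two model points at the mean 0
# that move together along one mode phi = (1, 1) with eigenvalue lam = 4.
dist = lambda x: abs(x - 2.0)
b_hat = fit([0.0, 0.0], [1.0, 1.0], dist, lam=4.0)
```

With this setup the closed-form optimum is b = 8/4.05 ≈ 1.975: the data term pulls the model points onto the surface while the regularizer holds b slightly short of a perfect fit.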
6. An internal rectus muscle and optic nerve segmentation system for CT images, characterized in that it comprises: a statistical shape model construction module configured to establish the shape correspondence of the training data set and to construct a statistical shape model of the training shapes by principal component analysis;
a segmentation module based on MR/CT image fusion, configured to obtain the shape of a reference MR image by fitting the statistical shape model to the segmentation result of the MR image, and to fuse the CT image with the MR image by elastic registration to obtain an initial segmentation result of the anterior visual pathway and the internal rectus muscle;
a multi-feature constrained segmentation refinement module configured to obtain a multi-feature constraint surface from the target CT image and, after fitting the initial segmentation result to that surface, to segment the structures not visible in the CT image, including the optic tract and the optic chiasm;
the segmentation module based on MR/CT image fusion performs: randomly selecting a reference MR image I_ref and its corresponding segmented image from the training data; obtaining the reference shape by fitting the statistical shape model to the segmented surface I_T of the reference MR image I_ref; obtaining a deformation field mapping the reference MR image I_ref to the target CT image I_tar by 3D elastic registration, the deformation field being parameterized using B-spline fitting; elastic registration of the two images is achieved by solving for the optimal transformation T;
the segmentation module based on MR/CT image fusion randomly selects a reference MR image I_ref and its corresponding segmented image from the training data; the reference shape is obtained by fitting the statistical shape model to the segmented surface I_T of the reference MR image I_ref, a process called surface fitting and expressed as equation (3):
where D_T is the distance transform of I_T, the model points give the coordinates of points on the statistical shape model, and diag(λ) is the diagonal matrix formed from the eigenvalues λ; b is constrained to a hyper-rectangle defined by β and λ, where λ_i is the ith element of λ and b_i is the ith parameter of b; the first term in equation (3) is the sum of the distances from each point on the transformed shape model to the surface I_T and describes the registration error, and the second term is a regularization term on the statistical shape model deformation that penalizes the degree of model deformation;
the deformation field mapping the reference MR image I_ref to the target CT image I_tar is obtained through 3D elastic registration in the segmentation module based on MR/CT image fusion, and B-spline fitting is used to parameterize the deformation field; elastic registration of the two images is achieved by solving for the optimal transformation T, computed according to equation (4):
after the optimal transformation is obtained, the deformation field between the MR and CT images is known; the reference shape is then transformed into the target image, realizing the fusion of the MR and CT images; the corresponding result is taken as the initial segmentation of the anterior visual pathway and the internal rectus muscle.
CN202010891689.7A 2020-08-27 2020-08-27 Method and system for segmenting internal rectus muscle and optic nerve of CT image Active CN112184720B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010891689.7A CN112184720B (en) 2020-08-27 2020-08-27 Method and system for segmenting internal rectus muscle and optic nerve of CT image

Publications (2)

Publication Number Publication Date
CN112184720A CN112184720A (en) 2021-01-05
CN112184720B (en) 2024-04-23

Family

ID=73925308

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010891689.7A Active CN112184720B (en) 2020-08-27 2020-08-27 Method and system for segmenting internal rectus muscle and optic nerve of CT image

Country Status (1)

Country Link
CN (1) CN112184720B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113744271B (en) * 2021-11-08 2022-02-11 四川大学 Neural network-based automatic optic nerve segmentation and compression degree measurement and calculation method
CN114974518A (en) * 2022-04-15 2022-08-30 浙江大学 Multi-mode data fusion lung nodule image recognition method and device
CN116258671B (en) * 2022-12-26 2023-08-29 中山大学肿瘤防治中心(中山大学附属肿瘤医院、中山大学肿瘤研究所) MR image-based intelligent sketching method, system, equipment and storage medium

Citations (4)

Publication number Priority date Publication date Assignee Title
CN108898606A (en) * 2018-06-20 2018-11-27 中南民族大学 Automatic division method, system, equipment and the storage medium of medical image
CN108961256A (en) * 2018-07-05 2018-12-07 艾瑞迈迪医疗科技(北京)有限公司 Image partition method, operation navigation device, electronic equipment and storage medium
CN109961446A (en) * 2019-03-27 2019-07-02 深圳视见医疗科技有限公司 CT/MR three-dimensional image segmentation processing method, device, equipment and medium
CN110163847A (en) * 2019-04-24 2019-08-23 艾瑞迈迪科技石家庄有限公司 Liver neoplasm dividing method and device based on CT/MR image

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US8073220B2 (en) * 2009-04-20 2011-12-06 Siemens Aktiengesellschaft Methods and systems for fully automatic segmentation of medical images



Similar Documents

Publication Publication Date Title
CN112184720B (en) Method and system for segmenting internal rectus muscle and optic nerve of CT image
CN112155729B (en) Intelligent automatic planning method and system for surgical puncture path and medical system
EP3509013A1 (en) Identification of a predefined object in a set of images from a medical image scanner during a surgical procedure
Grimson et al. Utilizing segmented MRI data in image-guided surgery
CN106340021B (en) Blood vessel extraction method
Mansoor et al. Deep learning guided partitioned shape model for anterior visual pathway segmentation
US8073216B2 (en) System and methods for automatic segmentation of one or more critical structures of the ear
Chakravarty et al. Towards a validation of atlas warping techniques
Ibragimov et al. Segmentation of pathological structures by landmark-assisted deformable models
Zhu et al. Automatic segmentation of the left atrium from MR images via variational region growing with a moments-based shape prior
Ibragimov et al. Segmentation of tongue muscles from super-resolution magnetic resonance images
CN110993065B (en) Brain tumor keyhole surgery path planning method based on image guidance
Tu et al. Automated extraction of the cortical sulci based on a supervised learning approach
Liu et al. Patch-based augmentation of Expectation–Maximization for brain MRI tissue segmentation at arbitrary age after premature birth
CN111080676B (en) Method for tracking endoscope image sequence feature points through online classification
US7983464B2 (en) System and method for corpus callosum segmentation in magnetic resonance images
CN115082493A (en) 3D (three-dimensional) atrial image segmentation method and system based on shape-guided dual consistency
CN113222979A (en) Multi-map-based automatic skull base foramen ovale segmentation method
CN115461790A (en) Method and apparatus for classifying structure in image
Prasad et al. Skull-stripping with machine learning deformable organisms
Olveres et al. Midbrain volume segmentation using active shape models and LBPs
Chen et al. Automated segmentation for patella from lateral knee X-ray images
Yao et al. Non-rigid registration and correspondence finding in medical image analysis using multiple-layer flexible mesh template matching
Sun et al. Using cortical vessels for patient registration during image-guided neurosurgery: a phantom study
Hu et al. Multirigid registration of MR and CT images of the cervical spine

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant