CN107123104B - Cone beam computed tomography image registration and overlapping method - Google Patents

Cone beam computed tomography image registration and overlapping method

Info

Publication number
CN107123104B
CN107123104B CN201610105206.XA
Authority
CN
China
Prior art keywords
image
target
subset
volume
voxel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610105206.XA
Other languages
Chinese (zh)
Other versions
CN107123104A (en
Inventor
裴玉茹
陈贵
张晓芸
许天民
查红彬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peking University
Original Assignee
Peking University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peking University filed Critical Peking University
Priority to CN201610105206.XA priority Critical patent/CN107123104B/en
Publication of CN107123104A publication Critical patent/CN107123104A/en
Application granted granted Critical
Publication of CN107123104B publication Critical patent/CN107123104B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/10012 Stereo images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Generation (AREA)

Abstract

The invention provides a registration and overlapping method of a volume image, which comprises the following steps: extracting a reference image subset and a target image subset from the reference volume image and the target volume image respectively; defining structures of interest in the reference image subset and the target image subset, respectively, using a semi-supervised learning method; calculating a spherical gray scale integration operator of each voxel in the structure of interest of the reference image subset and the structure of interest of the target image subset; solving the correspondence between the reference image subset and the target image subset through collaborative embedding based on the spherical gray scale integration operator; calculating rigid transformation parameters between the reference volume image and the target volume image based on the obtained correspondence; and applying the calculated rigid transformation parameters to the reference volume image, and overlapping the rigidly transformed reference volume image with the target volume image.

Description

Cone beam computed tomography image registration and overlapping method
Technical Field
The invention relates to a registration and overlapping method of a volume image, in particular to a registration and overlapping method of a cone beam computed tomography image.
Background
Cone Beam Computed Tomography (CBCT) images are a type of volumetric image commonly used in oral clinics to acquire craniofacial morphological information for a particular patient. Compared to conventional two-dimensional radiological images, CBCT images can provide complete three-dimensional geometric information about the patient. By comparing CBCT images before and after treatment, the change in three-dimensional craniofacial shape caused by treatment can be quantitatively analyzed. Because body tissues grow during treatment and the patient's posture changes between image acquisitions, the three-dimensional CBCT images (i.e., volume images) acquired before and after treatment must be registered and overlapped before treatment evaluation and quantitative analysis can be performed.
Registration and overlay of medical images is typically used to establish correspondence of anatomical structures between different volumetric images. Registration and overlay of three-dimensional CBCT images in the prior art still face the following challenges: 1) a volume image constitutes large-scale data; a craniofacial CBCT image of medium resolution contains more than one million voxels, so the measure functions used in traditional medical image registration (such as mutual information and normalized cross-correlation) are relatively time-consuming and prone to falling into local minima; 2) among the anatomical structures recorded in a craniofacial CBCT image, some structures remain unchanged during treatment while others change over time, so different structures must be distinguished during registration and overlay; however, manual labeling of anatomical structures is time-consuming, laborious and operator-dependent, so an effective labeling tool is needed to define the structures of interest (SOI); 3) because of the small radiation dose, a craniofacial CBCT image generally has low resolution and poor image quality, so some bony structures may be blurred owing to their gray-level similarity to surrounding tissues, making the correspondence between the reference CBCT image and the target CBCT image difficult to determine; a robust correspondence estimation algorithm is therefore required to achieve reliable registration and overlap of CBCT images.
Automated image registration and overlay in the field of medical image processing has been studied for many years. In recent years, various voxel-based methods have been used for registration and superimposition of craniofacial images. Mutual information is a measure function often used in registration and overlay methods of volume images; it estimates the statistical gray-level relationship between volume images and attempts to find the spatial transformation that maximizes the mutual information between them. Since mutual information is a global metric, it is relatively difficult to estimate a locally optimal transformation with it. Furthermore, the volume image phase has been introduced to replace the image gray scale, maximizing structural correlation by computing the mutual information of local phases. Generally, methods that treat the voxels of the skull base as stable structures and perform rigid registration and overlap are mostly based on mutual information measure functions. In these methods, superimposed craniofacial images can be used to evaluate the mandibular area or to compare morphological changes of bone and soft tissue in maxillofacial surgery. Several volume image registration and overlay software packages based on mutual information measure functions have appeared in succession: the MIRIT software is used to calculate the rigid registration of volume images of orthognathic surgery patients before and after treatment; the OnDemand3D software is used to calculate CBCT image overlap based on the anterior skull base; and the Maxilim software is used to overlay three-dimensional models of the skull base.
In addition, landmark-based approaches can also be used to overlay volume images and measure changes in craniofacial structures due to growth and treatment. Landmark-based Procrustes analysis is used in almost all morphometric studies. Procrustes analysis and thin-plate splines can be used for multivariate analysis of the curve profile of a sample, where three-dimensional landmarks are manually selected and the Procrustes geometric transformation is calculated to estimate changes in the mandible due to growth and treatment.
The rotation-invariant three-dimensional Scale-Invariant Feature Transform (SIFT) is commonly used for registration and overlay of clinical CT images. For example, Stephane Allaire et al., in the paper entitled "Full Orientation Invariance and Improved Feature Selectivity of 3D SIFT with Application to Medical Image Analysis", published at the 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), provide a three-dimensional SIFT method for medical image analysis applications.
In the prior art, operations are usually performed directly on the dense voxels of the original volume image; because the data volume is large and the measure functions are complex, computational efficiency is low. Furthermore, manual labeling of regions of interest is not only cumbersome but also operator-dependent, and existing landmark-based registration and overlay methods require manual operations to define the three-dimensional landmarks. In addition, when calculating three-dimensional descriptors, orientation determination and gradient-histogram estimation are complicated and computationally inefficient.
Disclosure of Invention
The technical solution of the present invention is provided to solve one of the above problems or other technical problems in the prior art.
According to an aspect of the present invention, there is provided a registration and overlay method of a volume image, including: extracting a reference image subset and a target image subset from the reference volume image and the target volume image respectively; defining a structure of interest (SOI) in the reference image subset and the target image subset, respectively, using a semi-supervised learning method; calculating a spherical gray scale integration operator of each voxel in the SOI of the reference image subset and the SOI of the target image subset; solving the correspondence between the reference image subset and the target image subset through collaborative embedding based on the spherical gray scale integration operator; calculating rigid transformation parameters between the reference volume image and the target volume image based on the obtained correspondence; and applying the calculated rigid transformation parameters to the reference volume image, and overlapping the rigidly transformed reference volume image with the target volume image.
According to an embodiment of the present invention, the step of extracting the reference image subset and the target image subset from the reference volume image and the target volume image, respectively, may include: performing Gaussian smoothing on the reference volume image and the target volume image respectively; calculating first-order reference difference images between the Gaussian-smoothed images of the reference volume image; calculating first-order target difference images between the Gaussian-smoothed images of the target volume image; extracting the local extrema of the first-order reference difference images as the reference image subset of the reference volume image; and extracting the local extrema of the first-order target difference images as the target image subset of the target volume image.
According to the embodiment of the present invention, the threshold values may be set for the reference volume image and the target volume image, respectively, and voxels having a gray value smaller than the threshold value may be removed from the extracted reference image subset and target image subset.
According to an embodiment of the present invention, before the step of extracting the reference image subset and the target image subset, the method may further include processing the reference volume image and the target volume image with a histogram matching operator.
According to an embodiment of the present invention, the semi-supervised learning method may be used to define the SOI in the reference image subset and the target image subset, respectively, based on a volume prior of the SOI input in advance. The volume priors of the SOIs can be obtained by counting in advance the voxels belonging to different types of SOIs.
According to an embodiment of the present invention, the step of calculating the spherical gray scale integration operator of a voxel may comprise: forming a sphere by taking the voxel to be calculated as the sphere center (the "sphere-center voxel") and a preset length as the radius; equally dividing the formed sphere into N cones; and integrating the gray levels of the voxels within each of the N cones according to formula (1):

u_{j,r} = \int_{v \in B_j} I(v)\,dv    (1)

where r represents the radius of the sphere; B_j represents the j-th cone, 1 ≤ j ≤ N; v represents a voxel in the space covered by cone B_j; the function I(v) returns the gray difference between voxel v and the sphere-center voxel; and u_{j,r} denotes the gray scale integral of the voxels within the j-th cone B_j after the sphere of radius r is equally divided into N cones. The N results calculated for the N cones are then sorted to obtain an N-dimensional vector u_r = (u_{1,r}, u_{2,r}, ..., u_{N,r}), where u_r represents the spherical gray scale integral of the sphere-center voxel for the sphere of radius r.
According to an embodiment of the present invention, the step of calculating the spherical gray scale integration operator of a voxel may further include: selecting M different radii with the voxel to be calculated as the sphere center, to obtain M groups of N-dimensional vectors u_{r_i} = (u_{1,r_i}, u_{2,r_i}, ..., u_{N,r_i}), 1 ≤ i ≤ M.
According to an embodiment of the present invention, N = 40, M = 6, and r_i ∈ [2, 10], where the unit of r_i is the number of voxels.
According to an embodiment of the present invention, the step of solving the correspondence between the reference image subset and the target image subset by collaborative embedding based on the spherical gray scale integration operator may include: establishing a graph structure G_E(A, Ψ) corresponding to the reference image subset and the target image subset, wherein the node set A consists of the voxels in the reference image subset and the target image subset, and the edge connections Ψ comprise connections within each image subset and connections between the image subsets; for the graph structure G_E(A, Ψ), calculating the edge weights connecting voxels within each image subset based on the Euclidean distance between the voxels and the distance between their corresponding spherical gray scale integration operators, and calculating the edge weights connecting voxels between the image subsets based on the distance between their corresponding spherical gray scale integration operators; establishing a collaborative-embedding objective function based on the calculated edge weights; performing eigendecomposition of the graph Laplacian matrix of the objective function to obtain the eigenvectors corresponding to the smallest eigenvalues, namely the coordinates of the reference image subset and the target image subset in the collaborative embedding space; and defining a threshold in the collaborative embedding space, a pair of voxels being regarded as corresponding when the distance between a voxel in the reference image subset and a voxel in the target image subset in the collaborative embedding space is smaller than the threshold.
According to an embodiment of the present invention, the rigid transformation parameters between the reference volume image and the target volume image may include a rotation matrix and a translation vector transformed from the reference volume image to the target volume image.
Drawings
The above and other features and advantages of the exemplary embodiments of the present inventive concept will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings. The drawings are intended to depict example embodiments of the inventive concept and should not be interpreted as limiting the intended scope of the claims. In the drawings:
fig. 1 illustrates a flowchart of a registration and overlay method of a volume image according to an embodiment of the inventive concept; and
fig. 2A and 2B illustrate schematic diagrams of a sphere gray scale integration operator used in a registration and overlap method of a volume image according to an embodiment of the inventive concept.
Detailed Description
Exemplary embodiments of the inventive concept will be described in detail below with reference to the accompanying drawings.
The inventive concept may, however, be embodied in many different forms and should not be construed as being limited to the exemplary embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the inventive concept to those skilled in the art. In the drawings, the same reference numerals denote the same or similar elements throughout.
It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of example embodiments of the present inventive concept. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
The terminology used herein is for the purpose of describing particular example embodiments only and is not intended to be limiting of the inventive concepts. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes" and/or "including," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be noted that, in some alternative implementations, the functions/acts noted in the figures may occur out of the order shown. For example, two steps shown in succession may, in fact, be executed substantially concurrently, or the steps may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
Unless otherwise indicated, all terms used herein, including descriptive and technical terms, should be understood to have the meanings apparent to one of ordinary skill in the art. In addition, terms defined in general dictionaries and used in the following description should be understood to have meanings consistent with their use in the relevant description, and should not be construed in an idealized or overly formal sense unless expressly so defined herein. It should also be appreciated that, unless otherwise specified, the term "image" herein refers to a "volumetric image".
Fig. 1 illustrates a flowchart of a registration and overlay method of a volume image according to an embodiment of the inventive concept.
Referring to fig. 1, a registration and overlap method 100 of a volume image according to an embodiment of the inventive concept includes: extracting a reference image subset and a target image subset in the reference volume image and the target volume image, respectively (S110); defining structures of interest in the reference image subset and the target image subset, respectively, using a semi-supervised learning method (S120); calculating a spherical gray scale integration operator for each voxel in the structure of interest of the reference image subset and the structure of interest of the target image subset (S130); solving a correspondence between the reference image subset and the target image subset by collaborative embedding based on a spherical gray scale integration operator (S140); calculating a rigid transformation parameter between the reference volume image and the target volume image based on the obtained correspondence (S150); and applying the calculated rigid transformation parameters to the reference volume image and overlapping the rigid-transformed reference volume image with the target volume image (S160).
In the step of extracting image subsets (S110), respective image subsets are extracted from the reference volume image and the target volume image. The reference volume image and the target volume image may be Cone Beam Computed Tomography (CBCT) volume images.
Specifically, each CBCT volume image may be Gaussian-smoothed at successive scales, and the first-order difference images between the Gaussian-smoothed images of adjacent scales may be calculated. That is, first-order reference difference images are calculated for the reference volume image, and first-order target difference images are calculated for the target volume image. Subsequently, the local extrema of the first-order reference difference images and the local extrema of the first-order target difference images are extracted to form the reference image subset and the target image subset, respectively; i.e., the image subset extracted from a CBCT volume image is defined by the voxels at the extremum locations. By extracting image subsets from the CBCT volume images, the size of the data to be computed can be greatly reduced. A sketch of this step is given below.
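The following is a minimal sketch of this extraction, assuming each CBCT volume is available as a 3D NumPy array. The sigma schedule, the 3x3x3 extremum neighborhood and the gray threshold (which implements the background-removal step described further below) are illustrative assumptions rather than values taken from the patent.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter, minimum_filter

def extract_subset(volume, sigmas=(1.0, 1.6, 2.56), gray_threshold=300):
    """Extract voxels at local extrema of first-order difference-of-Gaussian images."""
    smoothed = [gaussian_filter(volume.astype(np.float32), s) for s in sigmas]
    # First-order differences between Gaussian-smoothed images of adjacent scales.
    diffs = [smoothed[i + 1] - smoothed[i] for i in range(len(smoothed) - 1)]
    subset = []
    for d in diffs:
        # Keep voxels that are local maxima or minima in a 3x3x3 neighborhood.
        is_max = d == maximum_filter(d, size=3)
        is_min = d == minimum_filter(d, size=3)
        # Drop background voxels whose original gray value is below the threshold.
        candidates = (is_max | is_min) & (volume > gray_threshold)
        subset.append(np.argwhere(candidates))
    return np.unique(np.vstack(subset), axis=0)   # (K, 3) voxel coordinates
```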
According to an embodiment of the present invention, before the image subsets are extracted from the CBCT volume images, the reference CBCT volume image and the target CBCT volume image may be processed using a histogram matching operator (S100), as shown in fig. 1. However, the inventive concept is not limited thereto, and the volume images may be processed with other matching operators or not preprocessed at all; a sketch follows.
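As an illustration of step S100, the snippet below uses scikit-image's histogram matching; the patent does not name a particular implementation, so this library call and the array names are assumptions.

```python
# Hedged sketch of the optional preprocessing step (S100): map the gray-level
# distribution of the reference CBCT volume onto that of the target volume so
# that corresponding tissues have comparable intensities before extraction.
import numpy as np
from skimage.exposure import match_histograms

# reference_volume and target_volume stand in for real 3D CBCT arrays.
reference_volume = np.random.randint(0, 2000, (64, 64, 64))
target_volume = np.random.randint(0, 2200, (64, 64, 64))
reference_matched = match_histograms(reference_volume, target_volume)
```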
Furthermore, since the voxels extracted based on the local extrema of the differential image may be located in the background region of the volume image, making the calculation process unstable, a threshold may be defined for the reference CBCT volume image and the target CBCT volume image, respectively, to remove voxels having a gray value smaller than a predetermined threshold from the extracted image subset.
In the step of defining a structure of interest (SOI) (S120), a semi-supervised learning method is used for the reference image subset and the target image subset, respectively.
In oral clinical applications, registration and overlay of volumetric images will typically be based on stable anatomical structures, e.g., anterior skull base, zygomatic arch, etc. These stable anatomical structures constitute an SOI that can be used in registration and overlay methods of volumetric images. With the semi-supervised learning approach, a small number of structural labels given by the user can be spread over the entire subset.
With semi-supervised learning methods, a large amount of unlabeled data can be trained using a small amount of labeled data. Semi-supervised learning is intermediate between unsupervised learning (i.e., without using any labeled training data) and supervised learning (i.e., using all labeled training data). Studies have shown that by combining a small amount of labeled data, the learning accuracy of unlabeled data can be greatly improved.
Specifically, a small number of slice images (i.e., slice images constituting the reference volume image and the target volume image, which are plane images) are first labeled by the user, and the contents of the labeling may include a start slice image, an end slice image, and an intermediate slice image of the SOI. Then, based on the labeled voxels, a semi-supervised learning approach is applied to the subset of images so that the label can be diffused to all voxels in the subset of images.
The process of labeling a voxel is a process of classifying the voxel according to the anatomical structure to which the voxel belongs. In oral clinical applications, types of SOI may include: anterior skull base (CB), left zygomatic arch (ZL), right zygomatic arch (ZR), and other structures (OB). Thus, voxels can be labeled using CB, ZL, ZR, and OB, respectively.
The inventors of the present application have noticed that when the prior-art semi-supervised learning method is used on CBCT volume images in oral clinical applications, the label diffusion can degenerate; for example, because an SOI lies close to other bones or tissues, the labels diffuse into bones or tissues other than the SOI. To avoid such degenerate label diffusion, according to an embodiment of the present invention, a volume prior of the SOI is introduced as a boundary constraint in the conventional semi-supervised learning method; that is, the SOI is defined in the image subset of the reference CBCT volume image and in the image subset of the target CBCT volume image, respectively, using a semi-supervised learning method based on the volume prior of the SOI input in advance. The volume prior of the SOI can be obtained by counting in advance the voxels belonging to the different types of SOI. In particular, the volume prior of any type of SOI may be expressed as the ratio of the number of voxels belonging to that type of SOI to the total number of voxels of the image subset. For example, if prior statistics give n_CB voxels belonging to the anterior skull base (CB) and the image subset contains n_Total voxels in total, the volume prior δ_CB of the anterior skull base can be expressed as n_CB / n_Total. A sketch of label diffusion under such a constraint is given below.
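The sketch below illustrates one way a volume prior could act as a boundary constraint on label diffusion over an image subset. The concrete propagation rule (iterative k-nearest-neighbor voting capped by a prior-derived voxel budget), the neighborhood size and the iteration count are all illustrative assumptions, not the patented formulation.

```python
import numpy as np
from scipy.spatial import cKDTree

def diffuse_labels(points, labels, priors, k=8, iters=50):
    """points: (K, 3) subset voxels; labels: (K,) ints with -1 = unlabeled;
    priors: dict mapping each label class to its volume-prior fraction."""
    tree = cKDTree(points)
    _, nbrs = tree.query(points, k=k + 1)            # column 0 is the point itself
    caps = {c: int(p * len(points)) for c, p in priors.items()}
    lab = labels.copy()
    for _ in range(iters):
        counts = {c: int((lab == c).sum()) for c in caps}
        for i in np.where(lab == -1)[0]:
            votes = lab[nbrs[i, 1:]]
            votes = votes[votes >= 0]
            if votes.size == 0:
                continue
            c = int(np.bincount(votes).argmax())
            # Volume prior as a boundary constraint: a label stops spreading
            # once its voxel count reaches the prior-derived cap.
            if counts[c] < caps[c]:
                lab[i] = c
                counts[c] += 1
    return lab
```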
After labeling all voxels in the subset of images is completed, voxels in the subset of images that belong to a particular SOI (e.g., anterior skull base, zygomatic arch, etc.) may be selected for estimation of the overlap parameters.
In the step of calculating the spherical gray scale integration operator of the voxels (S130), the spherical gray scale integration operator is calculated for each voxel in the SOI of the reference image subset and the SOI of the target image subset. The spherical gray scale integration operators of the voxels serve as the original feature space for deciding whether two voxels (i.e., a voxel in the reference volume image and a voxel in the target volume image) correspond when the correspondence between the reference image subset and the target image subset is subsequently established (step S140). Using the spherical gray scale integration operator as the original feature space, the spatial distribution and texture context information of the voxels in the image subsets can be described. Furthermore, the computational cost of the spherical gray scale integration operator is relatively low compared with other operators in the prior art that can serve as the original feature space.
Fig. 2A and 2B illustrate schematic diagrams of the spherical gray scale integration operator used in the registration and overlap method of a volume image according to an embodiment of the inventive concept. Fig. 2A schematically shows, in plan view, the calculation of the spherical gray scale integration operator for spheres of different radii, and fig. 2B is a perspective view schematically showing the cones obtained by N-equal division of spheres of different radii.
Referring to fig. 2A and 2B, in calculating the spherical gray scale integration operator of a voxel, a sphere is formed with the voxel as the sphere center (hereinafter also referred to as the "sphere-center voxel") and r as the radius, and the formed sphere is divided into N equal parts (for example, N = 40), as shown in fig. 2B. For example, the article by E. B. Saff and A. B. J. Kuijlaars entitled "Distributing Many Points on a Sphere", published in 1997 in The Mathematical Intelligencer, vol. 19, no. 1, pp. 5-11, gives a method of dividing a sphere into N equal parts; a sketch of its generalized-spiral construction follows.
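A compact sketch of that generalized-spiral construction (the 3.6 constant is the one given by Saff and Kuijlaars); each of the N unit vectors can serve as the axis of one cone of the N-equal division.

```python
import numpy as np

def spiral_points(n=40):
    """Place n nearly uniform points on the unit sphere (generalized spiral)."""
    k = np.arange(1, n + 1)
    h = -1.0 + 2.0 * (k - 1) / (n - 1)                  # heights h_k in [-1, 1]
    theta = np.arccos(h)                                # polar angles
    phi = np.zeros(n)                                   # azimuths; poles stay at 0
    for i in range(1, n - 1):
        phi[i] = (phi[i - 1] + 3.6 / np.sqrt(n * (1.0 - h[i] ** 2))) % (2 * np.pi)
    return np.stack([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)], axis=1)            # (n, 3) unit vectors
```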
Subsequently, referring to fig. 2A, the gray levels of the voxels within each of the N cones are integrated. The specific integration is given by formula (2):

u_{j,r_i} = \int_{v \in B_j} I(v)\,dv    (2)

where u_{j,r_i} denotes the gray scale integral of the voxels within the j-th cone (1 ≤ j ≤ N) after the sphere of radius r_i (1 ≤ i ≤ M) is equally divided into N cones; B_j represents the j-th cone; and the function I(v) returns the gray difference between voxel v and the sphere-center voxel.

Since the volume image is represented in a discrete manner, in the actual calculation the gray scale integral corresponding to cone B_j can be approximated by formula (3):

u_{j,r_i} ≈ \sum_{v \in B_j} I(v)    (3)

where v denotes a voxel in the space covered by cone B_j.
The gray scale integral of each cone B_j (1 ≤ j ≤ N) is calculated, and the N results are sorted (for example, from low to high or from high to low) to avoid the influence of rotation on the voxel feature description. Thus, for a sphere of radius r_i divided into N equal cones, an N-dimensional feature description vector u_{r_i} = (u_{1,r_i}, u_{2,r_i}, ..., u_{N,r_i}) is obtained, which serves as the spherical gray scale integration operator of the sphere-center voxel.
According to one embodiment of the invention, a plurality of different radii r_i (1 ≤ i ≤ M) may be selected to obtain an N × M dimensional feature description matrix as the spherical gray scale integration operator of the sphere-center voxel. According to one embodiment of the invention, M may be 6, with radii r_i ∈ [2, 10], where the unit of r_i is the number of voxels. A sketch of the full descriptor computation follows.
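Reusing spiral_points() above, the following sketch computes the full N x M descriptor for one voxel. Assigning each voxel of the discrete ball to the cone whose axis is nearest in direction is an illustrative reading of the N-equal division, and the radius schedule is an assumption within r_i ∈ [2, 10].

```python
import numpy as np

def spherical_operator(volume, center, radii=(2, 4, 6, 8, 9, 10), n_cones=40):
    axes = spiral_points(n_cones)                       # (N, 3) cone axis directions
    cz, cy, cx = center                                 # assumed >= max(radii) from the border
    feats = []
    for r in radii:
        g = np.arange(-r, r + 1)
        dz, dy, dx = np.meshgrid(g, g, g, indexing="ij")
        offs = np.stack([dz, dy, dx], axis=-1).reshape(-1, 3)
        norm = np.linalg.norm(offs, axis=1)
        inside = (norm > 0) & (norm <= r)               # discrete ball, center voxel excluded
        offs, norm = offs[inside], norm[inside]
        # Assign each voxel to the cone whose axis is closest in direction.
        cone = np.argmax((offs / norm[:, None]) @ axes.T, axis=1)
        vals = volume[cz + offs[:, 0], cy + offs[:, 1], cx + offs[:, 2]].astype(np.float64) \
               - float(volume[cz, cy, cx])              # I(v): gray difference to the center
        u = np.bincount(cone, weights=vals, minlength=n_cones)
        feats.append(np.sort(u))                        # sorting removes rotation dependence
    return np.concatenate(feats)                        # N*M-dimensional descriptor
```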
With the spherical gray scale integration operator, the spatial distribution and texture context information of the voxels in the image subsets can be described. However, in this original feature space, symmetric voxels and structures cause mapping confusion, and the feature space is high-dimensional. To avoid the mapping confusion caused by symmetric voxels and structures in the original feature space, and at the same time to reduce the dimension of the feature space, the reference image subset and the target image subset can be embedded into a lower-dimensional manifold by a joint (collaborative) embedding algorithm, and the correspondence between the reference image subset and the target image subset solved in the joint embedding space.
Here, in order to establish semantic correspondence, the collaborative embedding must satisfy two requirements: similar voxels in the reference image subset and the target image subset must be close enough in the collaborative embedding space; and the global structure of each image subset must be maintained after embedding into the manifold.
Specifically, a graph structure G_E(A, Ψ) corresponding to the reference image subset S_ref and the target image subset S_tar is established, where the node set A consists of the voxels in S_ref and S_tar, and the edge connections Ψ comprise the connections Ψ_intra within each image subset and the connections Ψ_inter between the image subsets, with Ψ = Ψ_intra ∪ Ψ_inter.
For the graph structure G_E(A, Ψ) thus constructed, the edge weights W_intra connecting the voxels within each image subset are calculated based on both the Euclidean distance between the voxels and the distance between their corresponding spherical gray scale integration operators, and the edge weights W_inter connecting voxels between the image subsets are calculated based on the distance between their corresponding spherical gray scale integration operators. In solving the correspondence between the reference image subset S_ref and the target image subset S_tar, both the Euclidean distance between voxels and the distance between their spherical gray scale integration operators are thus taken into account; that is, the correspondence between the reference image subset and the target image subset is solved by collaborative embedding based on the spherical gray scale integration operator (step S140).
Based on the calculated edge weights W_intra and W_inter, the collaborative embedding objective function is established. According to an embodiment of the present invention, the objective function E(F) may be expressed as formula (4):

E(F) = \sum_{k,l \in \{ref,tar\}} \alpha_{kl} \sum_{v_i \in S_k, v_j \in S_l} o_{ij} W^{kl}_{ij} \| F(v_i) - F(v_j) \|^2    (4)

where F is the projection function that maps a voxel from the original feature space to the manifold, and S_k and S_l range over the reference image subset S_ref and the target image subset S_tar, i.e., k, l ∈ {ref, tar}. When v_i ∈ SOI_k and v_j ∈ SOI_l, o_ij = 1; otherwise o_ij = 0, where SOI_k and SOI_l denote the SOIs of the image subsets S_k and S_l. When k and l both refer to the reference image or both to the target image (i.e., k = l), W_kl = W_intra; when k and l refer to the reference image and the target image respectively (i.e., k ≠ l), W_kl = W_inter. The constant coefficients α_kl balance the structure preservation within each image subset against the neighbor embedding of similar voxels between the image subsets.
The objective function E(F) constructed in this way satisfies the two requirements of collaborative embedding: similar voxels in the reference image subset and the target image subset are brought close together in the collaborative embedding space (the k ≠ l terms), and the global structure of each image subset is maintained after embedding into the manifold (the k = l terms).
Because the collaborative embedding objective function has the same form as Laplacian eigenmaps, eigendecomposition of the graph Laplacian matrix yields the eigenvectors corresponding to the smallest eigenvalues, and these eigenvectors are the coordinates of the reference image subset S_ref and the target image subset S_tar in the collaborative embedding space, i.e., the optimal solution of the objective function.
Since the extraction and labeling of the reference image subset S_ref and the target image subset S_tar are performed independently, a one-to-one correspondence between the voxels in S_ref and the voxels in S_tar cannot be guaranteed. A threshold can therefore be defined in the embedding space: only when the distance in the embedding space between a voxel in S_ref and a voxel in S_tar is smaller than the threshold is the pair of voxels regarded as corresponding. A sketch of the embedding and matching steps follows.
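Below is a hedged sketch of the co-embedding and matching steps, treating a single SOI at a time so the indicator o_ij is always 1 and the α_kl mixing is folded into the kernels. The Gaussian kernel widths, the embedding dimension and the matching threshold are assumptions; a dense eigendecomposition is used for brevity, which is only practical for modest subset sizes.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

def co_embed(feat_ref, pts_ref, feat_tar, pts_tar, dim=5, sigma_f=1.0, sigma_x=5.0):
    """feat_*: (K, D) spherical-operator descriptors; pts_*: (K, 3) voxel coordinates."""
    n_ref = len(feat_ref)
    feats = np.vstack([feat_ref, feat_tar])
    df = cdist(feats, feats)                        # descriptor distances (intra and inter)
    W = np.exp(-df ** 2 / (2 * sigma_f ** 2))
    # Intra-subset edges additionally account for the Euclidean distance between voxels.
    for sl, pts in ((slice(0, n_ref), pts_ref), (slice(n_ref, None), pts_tar)):
        dx = cdist(pts, pts)
        W[sl, sl] *= np.exp(-dx ** 2 / (2 * sigma_x ** 2))
    L = np.diag(W.sum(axis=1)) - W                  # unnormalized graph Laplacian
    _, vecs = eigh(L)                               # eigenvalues in ascending order
    Y = vecs[:, 1:dim + 1]                          # smallest nonzero modes = coordinates
    return Y[:n_ref], Y[n_ref:]

def match(Y_ref, Y_tar, tau=0.1):
    """Pair voxels whose distance in the embedding space falls below tau."""
    d = cdist(Y_ref, Y_tar)
    j = d.argmin(axis=1)
    i = np.arange(len(Y_ref))
    keep = d[i, j] < tau
    return i[keep], j[keep]
```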
Based on the obtained correspondence, the rigid transformation parameters between the reference volume image and the target volume image can be calculated (S150); that is, the rotation matrix and the translation vector transforming the reference volume image to the target volume image are solved. The rigid transformation parameters may be calculated from the correspondences between the image subsets obtained by collaborative embedding of the different SOIs; a closed-form sketch follows.
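Given the matched voxel pairs, the rotation matrix and translation vector can be recovered in closed form. The patent does not spell out the solver, so the standard SVD-based Kabsch estimate below is an assumption; P and Q hold corresponding points from the reference and target subsets.

```python
import numpy as np

def rigid_transform(P, Q):
    """P, Q: (K, 3) corresponding points; returns R, t with Q ≈ P @ R.T + t."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                  # 3x3 cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T                         # proper rotation (det = +1)
    t = cq - R @ cp
    return R, t
```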
Subsequently, the calculated rigid transformation parameters may be applied to the reference volume image, and the rigid-transformed reference volume image may be overlapped with the target volume image (S160).
The distance between the surfaces of stable bony structures (e.g., the anterior skull base) after superimposition can be calculated to verify the accuracy of the superimposition of volume images based on collaborative embedding. Experiments show that this distance is less than 0.5 mm, which fully meets the accuracy requirement of oral clinics for volume image overlap. A sketch of this verification is given below.
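A possible form of this verification, assuming the stable-structure surface voxels are given in millimeter coordinates and R, t come from the previous step:

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_surface_distance(surf_ref_mm, surf_tar_mm, R, t):
    """Mean nearest-neighbor distance between superimposed stable-structure surfaces."""
    moved = surf_ref_mm @ R.T + t               # apply the estimated rigid transform
    d, _ = cKDTree(surf_tar_mm).query(moved)    # distance to the nearest target point
    return d.mean()                             # expected below ~0.5 mm per the experiments
```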
According to the registration and overlapping method of the volume images, disclosed by the embodiment of the invention, the CBCT volume images before and after treatment can be quickly and effectively subjected to rigid registration and overlapping. By extracting the image subsets, the data size in the calculation is effectively reduced.
In addition, the invention provides a rotation-invariant spherical gray scale integration operator for describing the image subsets, capturing the context information of each voxel. Because the spherical gray scale integration operator involves only linear calculations, it avoids the complex orientation estimation and gradient-histogram computations of traditional three-dimensional feature description.
A semi-supervised learning method (or semi-supervised random walk algorithm) is adopted for labeling the stable structures (structures of interest), so that fully manual, interactive labeling by the user is avoided, the dependency on the operator is reduced, and the problem of cumbersome interaction is solved.
A method for solving the correspondence of image subsets based on a collaborative embedding algorithm is provided, in which the correspondence between the voxels of the image subsets can be obtained by operating on a few dominant components in the collaborative embedding space; compared with computing the registration in the original feature space, this effectively reduces the data dimension. Meanwhile, the overall structure of each image subset is preserved in the collaborative embedding space, so that the mapping confusion caused by symmetric voxels or structures in the reference volume image and the target volume image is avoided and reliable overlap is achieved.
The registration and overlapping method of volume images according to the invention thus effectively solves the problems of heavy computation and susceptibility to local minima in traditional volume image overlap.
Various advantages and effects of the respective exemplary embodiments are not limited to the above description, and can be easily understood by explanation of specific embodiments in the present disclosure.
While various exemplary embodiments have been shown and described above, it will be apparent to those skilled in the art that modifications and variations can be made without departing from the scope of the inventive concept as defined by the appended claims.

Claims (9)

1. A registration and overlay method of volume images, comprising:
extracting a reference image subset and a target image subset from the reference volume image and the target volume image respectively;
defining structures of interest in a reference image subset and a target image subset, respectively, using a semi-supervised learning method;
calculating a spherical gray scale integration operator of each voxel in the structure of interest of the reference image subset and the structure of interest of the target image subset;
solving the correspondence between the reference image subset and the target image subset through collaborative embedding based on the spherical gray scale integration operator;
calculating a rigid transformation parameter between the reference volume image and the target volume image based on the obtained corresponding relation; and
applying the calculated rigid transformation parameters to the reference volume image, overlapping the reference volume image after rigid transformation with the target volume image,
wherein the step of calculating the spherical gray scale integration operator of a voxel comprises the following steps:
forming a sphere by taking the voxel to be calculated as the sphere center and a preset length as the radius, wherein the voxel to be calculated is the sphere-center voxel;
equally dividing the formed sphere into N cones;
integrating the gray levels of the voxels within each of the N cones according to formula (1):

u_{j,r} = \int_{v \in B_j} I(v)\,dv    (1)

where r represents the radius of the sphere,
B_j represents the j-th cone, 1 ≤ j ≤ N,
v represents a voxel in the space covered by cone B_j,
the function I(v) returns the gray difference between voxel v and the voxel at the center of the sphere, and
u_{j,r} denotes the gray scale integral of the voxels within the j-th cone B_j after the sphere of radius r is equally divided into N cones; and

sorting the N results calculated for the N cones to obtain an N-dimensional vector u_r = (u_{1,r}, u_{2,r}, ..., u_{N,r}), where u_r represents the spherical gray scale integral of the sphere-center voxel for the sphere of radius r.
2. The registration and superimposition method of volume images according to claim 1, wherein the step of extracting the reference image subset and the target image subset from the reference volume image and the target volume image, respectively, comprises:
performing Gaussian smoothing on the reference volume image and the target volume image respectively;
calculating first-order reference difference images between the Gaussian-smoothed images of the reference volume image;
calculating first-order target difference images between the Gaussian-smoothed images of the target volume image;
extracting the local extrema of the first-order reference difference images as the reference image subset of the reference volume image; and
extracting the local extrema of the first-order target difference images as the target image subset of the target volume image.
3. The volume image registration and superimposition method according to claim 2, wherein a threshold value is set for each of the reference volume image and the target volume image, and voxels having a gray value smaller than the threshold value are removed from the extracted reference image subset and target image subset.
4. The method of registering and overlaying volumetric images according to claim 1, wherein prior to the step of extracting the reference image subset and the target image subset, said method further comprises:
and processing the reference volume image and the target volume image by using a histogram matching operator.
5. The registration and overlay method of volumetric images according to claim 1, wherein the structure of interest is defined in the reference image subset and the target image subset, respectively, using a semi-supervised learning method based on a volume prior of the structure of interest inputted in advance,
wherein the volume prior of the structure of interest is obtained by counting in advance voxels belonging to different types of structures of interest.
6. The method for registration and superimposition of volumetric images according to claim 1, wherein the step of calculating a spherical gray scale integration operator of voxels further comprises:
selecting M different radii with the voxel to be calculated as the sphere center, to obtain M groups of N-dimensional vectors u_{r_i} = (u_{1,r_i}, u_{2,r_i}, ..., u_{N,r_i}), 1 ≤ i ≤ M.
7. The method of registering and overlaying volume images according to claim 6, wherein N = 40, M = 6, r_i ∈ [2, 10], and the unit of r_i is the number of voxels.
8. The registration and superimposition method of a volume image according to claim 1, wherein the step of solving the correspondence between the reference image subset and the target image subset by collaborative embedding based on a spherical gray scale integration operator includes:
establishing a graph structure G_E(A, Ψ) corresponding to the reference image subset and the target image subset, wherein the node set A consists of the voxels in the reference image subset and the target image subset, and the edge connections Ψ comprise connections within each image subset and connections between the image subsets;
for the graph structure G_E(A, Ψ), calculating edge weights connecting the voxels within each image subset based on the Euclidean distance between the voxels and the distance between their corresponding spherical gray scale integration operators, and calculating edge weights connecting voxels between the image subsets based on the distance between their corresponding spherical gray scale integration operators;
establishing a collaborative-embedding objective function based on the calculated edge weights;
performing eigendecomposition of the graph Laplacian matrix of the objective function to obtain the eigenvectors corresponding to the smallest eigenvalues, namely the coordinates of the reference image subset and the target image subset in the collaborative embedding space; and
a threshold is defined in the collaborative embedding space, and when the distance between a voxel in the reference image subset and a voxel in the target image subset in the collaborative embedding space is smaller than the threshold, the pair of voxels is considered to be corresponding.
9. The registration and superimposition method for volume images according to claim 1, wherein rigid transformation parameters between the reference volume image and the target volume image include a rotation matrix and a translation vector transformed from the reference volume image to the target volume image.
CN201610105206.XA 2016-02-25 2016-02-25 Cone beam computed tomography image registration and overlapping method Active CN107123104B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610105206.XA CN107123104B (en) 2016-02-25 2016-02-25 Cone beam computed tomography image registration and overlapping method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610105206.XA CN107123104B (en) 2016-02-25 2016-02-25 Cone beam computed tomography image registration and overlapping method

Publications (2)

Publication Number Publication Date
CN107123104A CN107123104A (en) 2017-09-01
CN107123104B true CN107123104B (en) 2020-01-07

Family

ID=59717774

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610105206.XA Active CN107123104B (en) 2016-02-25 2016-02-25 Cone beam computed tomography image registration and overlapping method

Country Status (1)

Country Link
CN (1) CN107123104B (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8615127B2 (en) * 2010-01-15 2013-12-24 Vanderbilt University System and method for point-based rigid registration with anisotropic weighting
CN102024273A (en) * 2010-12-10 2011-04-20 中国人民解放军国防科学技术大学 Nonrigid registration method based on implicit vector space
CN104574372A (en) * 2014-12-21 2015-04-29 天津大学 Image registration method based on similar feature triangles
CN104933672B (en) * 2015-02-26 2018-05-29 浙江德尚韵兴图像科技有限公司 Method based on quick convex optimized algorithm registration three dimensional CT with ultrasonic liver image

Also Published As

Publication number Publication date
CN107123104A (en) 2017-09-01

Similar Documents

Publication Publication Date Title
Xu et al. Evaluation of six registration methods for the human abdomen on clinically acquired CT
Cootes et al. A unified framework for atlas matching using active appearance models
Hill et al. Model-based interpretation of 3d medical images.
Kadoury et al. Spine segmentation in medical images using manifold embeddings and higher-order MRFs
Chu et al. FACTS: fully automatic CT segmentation of a hip joint
Ibragimov et al. Segmentation of pathological structures by landmark-assisted deformable models
Hammernik et al. Vertebrae segmentation in 3D CT images based on a variational framework
Pereañez et al. Accurate segmentation of vertebral bodies and processes using statistical shape decomposition and conditional models
CN104050666B (en) Brain MR image method for registering based on segmentation
JP5832938B2 (en) Image processing apparatus, method, and program
Zheng et al. Scaled, patient-specific 3D vertebral model reconstruction based on 2D lateral fluoroscopy
Zheng et al. Accurate and robust reconstruction of a surface model of the proximal femur from sparse-point data and a dense-point distribution model for surgical navigation
Rogelj et al. Validation of a nonrigid registration algorithm for multimodal data
Savva et al. Geometry-based vs. intensity-based medical image registration: A comparative study on 3D CT data
Jacinto et al. Multi-atlas automatic positioning of anatomical landmarks
Binaghi et al. Automatic segmentation of MR brain tumor images using support vector machine in combination with graph cut
Alvén et al. Überatlas: fast and robust registration for multi-atlas segmentation
Schramm et al. Toward fully automatic object detection and segmentation
CN107123104B (en) Cone beam computed tomography image registration and overlapping method
Zheng et al. Adaptive segmentation of vertebral bodies from sagittal MR images based on local spatial information and Gaussian weighted chi-square distance
Rezaeitabar et al. Automatic 3D segmentation of individual facial muscles using unlabeled prior information
Pei et al. Superimposition of cone-beam computed tomography images by joint embedding
Suganya et al. Intensity based image registration by maximization of mutual information
CN109063208A (en) A kind of medical image search method merging various features information
Urschler et al. Assessing breathing motion by shape matching of lung and diaphragm surfaces

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant