NL2025814A - Precise positioning system for three-dimensional position of tumor by multi-image fusion - Google Patents
Precise positioning system for three-dimensional position of tumor by multi-image fusion
- Publication number
- NL2025814A
- Authority
- NL
- Netherlands
- Prior art keywords
- image
- images
- fusion
- tumor
- positioning system
- Prior art date
Classifications
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/35—Image registration using statistical methods
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T5/20—Image enhancement or restoration using local operators
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T5/70—Denoising; Smoothing
- G06T7/0012—Biomedical image inspection
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
- G06T7/149—Segmentation; Edge detection involving deformable models, e.g. active contour models
- G06T2207/10081—Computed x-ray tomography [CT]
- G06T2207/10088—Magnetic resonance imaging [MRI]
- G06T2207/10104—Positron emission tomography [PET]
- G06T2207/10108—Single photon emission computed tomography [SPECT]
- G06T2207/20016—Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
- G06T2207/20032—Median filtering
- G06T2207/20064—Wavelet transform [DWT]
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/20116—Active contour; Active surface; Snakes
- G06T2207/20221—Image fusion; Image merging
- G06T2207/30096—Tumor; Lesion
Abstract
The present invention provides a precise positioning system for a three-dimensional position of a tumor by multi-image fusion. The system comprises: an image preprocessing module, configured to preprocess acquired multi-modal images; a registration and fusion module, configured to register and align the images based on mutual information and fuse the aligned images; and a segmentation and reconstruction module, configured to perform segmentation and three-dimensional reconstruction on the fused image to determine the position of a tumor. The present invention uses medical image fusion technology to properly fuse multi-modal images that provide different medical information, and performs segmentation and three-dimensional reconstruction on the fused image to precisely position a tumor.
Description
PRECISE POSITIONING SYSTEM FOR THREE-DIMENSIONAL POSITION OF TUMOR BY MULTI-IMAGE FUSION

Field of the Invention

The present invention belongs to the field of image processing, and particularly relates to a precise positioning system for a three-dimensional position of a tumor by multi-image fusion.
Background of the Invention

The statement of this section merely provides background art information related to the present invention, and does not necessarily constitute the prior art.
With the development of medical imaging technology, modern medical treatment is closely related to medical imaging information.
The diagnosis and evaluation of most diseases require evidence from medical images.
Different medical images provide different information of related organs: CT and MR provide structural information such as anatomical structures of organs with high spatial resolution, and SPECT and PET provide functional information such as blood perfusion of organs.
If the information of structural images and functional images is organically fused and comprehensively processed to obtain new information, new thoughts will be brought to clinical diagnosis and treatment.
Multi-image fusion imaging can improve the sensitivity and specificity of tumor diagnosis, and also provide more information for positioning of biopsy, thereby reducing the deficiencies of morphological imaging.
However, how to register and fuse multi-modal images to implement precise three-dimensional positioning of a tumor is crucial.
Summary of the Invention

In order to solve the above problems, the present invention proposes a precise positioning system for a three-dimensional position of a tumor by multi-image fusion.
The present invention uses medical image fusion technology to properly fuse multi-modal images that provide different medical information, and performs segmentation and three-dimensional reconstruction on the fused image to precisely position a tumor.
According to some embodiments, the present invention adopts the following technical solution: A precise positioning system for a three-dimensional position of a tumor by multi-image fusion, including: an image preprocessing module, configured to preprocess acquired multi-modal images; a registration and fusion module, configured to register and align the images based on mutual information and fuse the aligned images; and a segmentation and reconstruction module, configured to perform segmentation and three-dimensional reconstruction on the fused image to determine the position of a tumor.
The present invention uses medical image fusion technology to properly fuse multi-modal images that provide different medical information, and the fused image can provide a more intuitive, more comprehensive, and clearer image basis. Segmentation and three-dimensional reconstruction are performed on the fused image to precisely position a tumor.
As a further limitation, the image preprocessing module preprocesses the images by median filtering, wherein for an image, a rectangular sliding window is generated with each pixel in the image as the center, all pixels in this window are sorted by gray value in ascending order, the median of the sorted sequence is calculated, and this median replaces the pixel value of the center point of the window. For a one-dimensional sequence f_1, f_2, ..., f_n and a window of odd length m, median filtering extracts the m numbers f_{i-v}, ..., f_{i-1}, f_i, f_{i+1}, ..., f_{i+v} from the input sequence, where i is the central position of the window and v = (m - 1)/2; the m points are then sorted by their numerical values, and the number in the center is used as the output of the filter.
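The sliding-window median filter above can be sketched as follows (a minimal NumPy version; edge-replication padding is an assumption, since the text does not specify boundary handling):

```python
import numpy as np

def median_filter_1d(f, m):
    """1D median filter with an odd window length m (edge-replication padding
    is an assumption; the text does not specify boundary handling)."""
    v = (m - 1) // 2                     # half-window, v = (m - 1) / 2
    padded = np.pad(np.asarray(f, dtype=float), v, mode="edge")
    # y_i = median of the m samples centred on position i
    return np.array([np.median(padded[i:i + m]) for i in range(len(f))])

def median_filter_2d(img, m):
    """2D median filter with an m x m sliding window."""
    v = (m - 1) // 2
    a = np.asarray(img, dtype=float)
    padded = np.pad(a, v, mode="edge")
    out = np.empty_like(a)
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            out[i, j] = np.median(padded[i:i + m, j:j + m])
    return out

# An isolated impulse ("salt" noise) is suppressed:
print(median_filter_1d([1, 2, 100, 4, 5], 3))   # -> [1. 2. 4. 5. 5.]
```

The impulse value 100 is replaced by the median of its neighborhood, which is the behavior that motivates median filtering as the de-noising step here.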
As a further limitation, the image preprocessing module preprocesses the images by image edge enhancement based on wavelet transformation, wherein wavelet transform decomposition is performed using a Mallat algorithm on the images de-noised by median filtering, the scales of decomposition are three layers, each layer of wavelet decomposition decomposes an image to be decomposed into a plurality of sub-band images and obtains a wavelet coefficient of each scale, the wavelet coefficients smaller than a set value are regarded as noise, the noise is filtered out by setting an appropriate threshold, and different enhancement coefficients are selected to enhance detail components of the image within different frequency ranges, thereby improving image quality and enhancing layering and visual effects.
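A minimal sketch of the threshold-then-enhance step, using one level of the Haar wavelet as a stand-in for the three-level Mallat decomposition described above (the choice of Haar, and of soft thresholding, are illustrative assumptions):

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2D Haar decomposition (a simplified stand-in for the
    three-level Mallat decomposition described in the text)."""
    a = np.asarray(img, dtype=float)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0          # row-wise low-pass
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0          # row-wise high-pass
    LL = (lo[0::2] + lo[1::2]) / 2.0              # low/low sub-band
    LH = (lo[0::2] - lo[1::2]) / 2.0              # low/high sub-band
    HL = (hi[0::2] + hi[1::2]) / 2.0              # high/low sub-band
    HH = (hi[0::2] - hi[1::2]) / 2.0              # high/high sub-band
    return LL, LH, HL, HH

def soft_threshold(w, T):
    """Coefficients with magnitude below T are treated as noise and zeroed;
    the rest are shrunk toward zero (soft thresholding)."""
    return np.sign(w) * np.maximum(np.abs(w) - T, 0.0)

def enhance(w, K):
    """Scale the thresholded detail coefficients by an enhancement factor K."""
    return K * w

LL, LH, HL, HH = haar_dwt2(np.arange(16.0).reshape(4, 4))
d = enhance(soft_threshold(LH, 1.0), 1.5)   # denoise, then enhance details
```

In the full scheme this would be repeated on the LL band for three levels, with a per-sub-band threshold derived from the sub-band variance.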
As a further limitation, the registration and fusion module is configured to calculate the mutual information using one of two images as a reference image and the other as a floating image by: first performing coordinate transformation and registering the transformed pixels of the floating image F to the reference image R, wherein the coordinates of the pixels of the floating image F after the coordinate transformation are not necessarily integers; and obtaining gray values of corresponding points on the reference image R by interpolation, rotating each pixel of the floating image F and then registering the pixels to the reference image R, calculating a joint histogram and edge probability distribution through the transformed pixel point pairs, thus obtaining the mutual information; wherein when the mutual information is maximum, the two images are geometrically aligned.
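The mutual-information computation from the joint histogram can be sketched as follows (the bin count is an assumed parameter, not specified in the text):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information of two equal-size gray images, computed from the
    normalized joint histogram (the bin count is an assumed parameter)."""
    hist, _, _ = np.histogram2d(np.ravel(a), np.ravel(b), bins=bins)
    p_ab = hist / hist.sum()                   # joint distribution pFR(f, r)
    p_a = p_ab.sum(axis=1, keepdims=True)      # marginal pF(f)
    p_b = p_ab.sum(axis=0, keepdims=True)      # marginal pR(r)
    nz = p_ab > 0                              # avoid log(0)
    pq = (p_a * p_b)[nz]
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / pq)))
```

Registration would then search over rigid-transform parameters and keep the transform that maximizes this quantity; at that maximum the two images are taken to be geometrically aligned.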
As a further limitation, the registration and fusion module is configured to fuse the images by using wavelet pyramid fusion, which performs certain layers of orthogonal wavelet transformation on the reference image and the floating image that participate in the fusion to obtain four sub-images representing low-frequency information, horizontal information, vertical information and diagonal information, the low-frequency information is processed in the same way on each layer, and so on; the low-frequency portion of the last layer is fused by taking the maximum of the coefficients; and high-frequency wavelet coefficients of transformation on each layer in the other three directions are hierarchically and linearly weighted and fused.
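A one-level Haar sketch of the fusion rule described above: the low-frequency band is fused by taking the coefficient maximum, and the detail bands by linear weighting (the single decomposition level and the fixed weight r are simplifying assumptions):

```python
import numpy as np

def haar_level(a):
    """One level of 2D Haar analysis: returns (LL, LH, HL, HH)."""
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    return ((lo[0::2] + lo[1::2]) / 2.0, (lo[0::2] - lo[1::2]) / 2.0,
            (hi[0::2] + hi[1::2]) / 2.0, (hi[0::2] - hi[1::2]) / 2.0)

def ihaar_level(LL, LH, HL, HH):
    """Inverse of haar_level (perfect reconstruction)."""
    lo = np.empty((LL.shape[0] * 2, LL.shape[1]))
    hi = np.empty_like(lo)
    lo[0::2], lo[1::2] = LL + LH, LL - LH
    hi[0::2], hi[1::2] = HL + HH, HL - HH
    out = np.empty((lo.shape[0], LL.shape[1] * 2))
    out[:, 0::2], out[:, 1::2] = lo + hi, lo - hi
    return out

def fuse(F, R, r=0.5):
    """Low-frequency band: element-wise maximum; detail bands: linear weight r."""
    fF, fR = haar_level(np.asarray(F, float)), haar_level(np.asarray(R, float))
    LL = np.maximum(fF[0], fR[0])
    details = [r * a + (1.0 - r) * b for a, b in zip(fF[1:], fR[1:])]
    return ihaar_level(LL, *details)
```

With identical inputs the fusion reduces to perfect reconstruction, which is a convenient sanity check on the transform pair.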
As a further limitation, the segmentation and reconstruction module is configured to perform segmentation by Snake model image segmentation and positioning based on a competitive neural network, and after initial segmentation of the images, the results are used to initialize the state of neurons in a master network and the state is dynamically evolved until convergence.
Compared with the prior art, the beneficial effects of the present invention are: The present invention uses medical image fusion technology to properly fuse multi-modal images that provide structural and functional medical information, which can provide a more comprehensive basis for judgment, and makes up for the shortcomings of single-mode images in providing one-sided information.
The present invention uses a combination of median filtering and wavelet transformation edge enhancement in image preprocessing, which effectively removes noise, retains signals with better smoothness, can enhance image edges and has better visual effects.
The present invention uses a registration method based on mutual information and a wavelet pyramid fusion method in registration and fusion, which can be used for registration of images of almost any different modalities, further enhances image edge information, and avoids the dark images and new interference caused by inverse wavelet transformation.
The present invention uses a Snake model image segmentation and positioning method based on a competitive neural network in segmentation and positioning, wherein the competitive neural network is used for the initial segmentation of images, thereby realizing automatic segmentation, overcoming the sensitivity of the Snake model to the initial contour, and remedying its unsatisfactory detection of concave contours or of convex contours with high curvature.
Brief Description of the Drawings

The accompanying drawings constituting a part of the present application are used for providing a further understanding of the present application, and the schematic embodiments of the present application and the description thereof are used for interpreting the present application, rather than constituting improper limitations to the present application.
Fig. 1 is a flowchart of precise tumor positioning by multi-image fusion according to the present invention;

Fig. 2 is a flowchart of Snake model image segmentation and positioning based on a competitive neural network according to the present invention;

Fig. 3 is a flowchart of registration based on mutual information according to the present invention.
Detailed Description of the Embodiments

The present invention will be further illustrated below in conjunction with the accompanying drawings and embodiments.
It should be pointed out that the following detailed descriptions are all exemplary and aim to further illustrate the present application. Unless otherwise specified, all technological and scientific terms used in the descriptions have the same meanings generally understood by those of ordinary skill in the art of the present application.
It should be noted that the terms used herein are merely for describing specific embodiments, but are not intended to limit exemplary embodiments according to the present application. As used herein, unless otherwise explicitly pointed out by the context, the singular form is also intended to include the plural form. In addition, it should also be understood that when the terms “include” and/or “comprise” are used in the specification, they indicate features, steps, operations, devices, components and/or their combination.
In the present invention, the terms such as “upper”, “lower”, “left”, “right”, “front”, “rear”, “vertical”, “horizontal”, “side”, and “bottom” indicate the orientations or positional relationships based on the orientations or positional relationships shown in the drawings, are only relationship terms determined for the convenience of describing the structural relationships of various components or elements of the present invention, but do not specify any component or element in the present invention, and cannot be understood as limitations to the present invention.
In the present invention, the terms such as “fixed”, “coupled” and “connected” should be generally understood, for example, the “connected” may be fixedly connected, integrally connected, detachably connected, directly connected, or indirectly connected by a medium.
For researchers or technicians skilled in this art, the specific meanings of the above terms in the present invention may be determined according to specific circumstances, and cannot be understood as limitations to the present invention.
A precise positioning system for a three-dimensional position of a tumor by multi-image fusion includes: an image preprocessing module, configured to preprocess acquired multi-modal images; a registration and fusion module, configured to register and align the images based on mutual information and fuse the aligned images; and a segmentation and reconstruction module, configured to perform segmentation and three-dimensional reconstruction on the fused image to determine the position of a tumor.
The image preprocessing module preprocesses the images by median filtering, wherein for an image, a rectangular sliding window is generated with each pixel in the image as the center, then all pixels in this window are sorted according to the gray values from small to big, a median of the sorted sequence is calculated, and this median is used to replace the pixel value of a center point in the window.
The image preprocessing module preprocesses the images by image edge enhancement based on wavelet transformation, wherein wavelet transform decomposition is performed using a Mallat algorithm on the images de-noised by median filtering, the scales of decomposition are three layers, each layer of wavelet decomposition decomposes an image to be decomposed into a plurality of sub-band images and obtains a wavelet coefficient of each scale, the wavelet coefficients smaller than a set value are regarded as noise, the noise is filtered out by setting an appropriate threshold, and different enhancement coefficients are selected to enhance detail components of the image within different frequency ranges, thereby improving image quality and enhancing layering and visual effects.
The registration and fusion module is configured to calculate the mutual information using one of two images as a reference image and the other as a floating image by: first performing coordinate transformation and registering the transformed pixels of the floating image F to the reference image R, wherein the coordinates of the pixels of the floating image F after the coordinate transformation are not necessarily integers; and obtaining gray values of corresponding points on the reference image R by interpolation, rotating each pixel of the floating image F and then registering the pixels to the reference image R, calculating a joint histogram and edge probability distribution through the transformed pixel point pairs, thus obtaining the mutual information; wherein when the mutual information is maximum, the two images are geometrically aligned.
The registration and fusion module is configured to fuse the images by using wavelet pyramid fusion, which performs certain layers of orthogonal wavelet transformation on the reference image and the floating image that participate in the fusion to obtain four sub-images representing low-frequency information, horizontal information, vertical information and diagonal information, the low-frequency information is processed in the same way on each layer, and so on; the low-frequency portion of the last layer is fused by taking the maximum of the coefficients; and high-frequency wavelet coefficients of transformation on each layer in the other three directions are hierarchically and linearly weighted and fused.
The segmentation and reconstruction module is configured to perform segmentation by Snake model image segmentation and positioning based on a competitive neural network, and after initial segmentation of the images, the results are used to initialize the state of neurons in a master network and the state is dynamically evolved until convergence.
As shown in Fig. 1, the working method of the above system mainly includes three aspects: image preprocessing, registration and fusion, segmentation and positioning.
The image preprocessing mainly uses median filtering and wavelet transform edge enhancement technologies; the registration and fusion mainly use a registration method based on mutual information and a wavelet pyramid fusion method; and the segmentation and positioning mainly use a Snake model image segmentation and positioning method based on a competitive neural network.
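The three-stage workflow above can be sketched as a pipeline skeleton (the class and argument names are illustrative, not taken from the patent):

```python
class TumorPositioningPipeline:
    """Skeleton of the three-stage workflow; the class and argument names are
    illustrative, not taken from the patent."""

    def __init__(self, preprocess, register_and_fuse, segment_and_reconstruct):
        self.preprocess = preprocess                # e.g. median filter + wavelet edge enhancement
        self.register_and_fuse = register_and_fuse  # e.g. mutual-information registration + pyramid fusion
        self.segment_and_reconstruct = segment_and_reconstruct  # e.g. competitive-network Snake segmentation

    def run(self, modality_images):
        cleaned = [self.preprocess(img) for img in modality_images]
        fused = self.register_and_fuse(cleaned)
        return self.segment_and_reconstruct(fused)

# Wiring with trivial stand-in stages:
demo = TumorPositioningPipeline(lambda x: x + 1, sum, lambda x: x * 2)
print(demo.run([1, 2]))   # -> 10
```

Each stage is a plain callable, so the concrete algorithms described below can be dropped in without changing the flow.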
The images are preprocessed by median filtering, wherein for an image, a rectangular sliding window (the size of the window is generally odd) is generated with each pixel in the image as the center, then all pixels in this window are sorted by gray value in ascending order, a median of the sorted sequence is calculated, and this median is used to replace the pixel value of the center point in the window. For a one-dimensional sequence f_1, f_2, ..., f_n and a window of length m, the median filtering of the sequence extracts the m numbers f_{i-v}, ..., f_{i-1}, f_i, f_{i+1}, ..., f_{i+v} from the input sequence, where i is the central position of the window and v = (m - 1)/2; the m points are then sorted according to their numerical values, and the number in the center is used as the filter output y_i, expressed mathematically as:

y_i = Med{f_{i-v}, ..., f_i, ..., f_{i+v}}, i ∈ Z, v = (m - 1)/2.

The median filtering of two-dimensional data can be expressed as:

y_{ij} = Med{x_{(i+k)(j+l)} : (k, l) ∈ A}, where A is the filter window.

Further, the image edge enhancement technology based on wavelet transformation performs wavelet transform decomposition using a Mallat algorithm on the images de-noised by median filtering; the scale of decomposition is three layers, and each layer of wavelet decomposition decomposes the image into four sub-band images: LL (horizontal low frequency, vertical low frequency), LH (horizontal low frequency, vertical high frequency), HL (horizontal high frequency, vertical low frequency), and HH (horizontal high frequency, vertical high frequency), and a wavelet coefficient of each scale is obtained.
The smaller wavelet coefficients are regarded as noise, and the noise is filtered out by setting an appropriate threshold. The threshold T_j^i is obtained from the variance of each sub-band image, where j is the current number of transformed layers and i is 1, 2, 3, respectively representing the HH, HL and LH sub-band images. The wavelet coefficient of each layer is soft-thresholded to obtain an estimated value:

ŵ_j^i(x, y) = w_j^i(x, y) - T_j^i, if w_j^i(x, y) ≥ T_j^i;
ŵ_j^i(x, y) = 0, if |w_j^i(x, y)| < T_j^i;
ŵ_j^i(x, y) = w_j^i(x, y) + T_j^i, if w_j^i(x, y) ≤ -T_j^i.

Detail information in an image is usually contained in the high-frequency components, so after the noise is removed, different enhancement coefficients are applied to the detail components of the image within different frequency ranges, thereby improving the image quality and enhancing the layering and visual effects. An enhancement coefficient K_j is set to enhance the wavelet coefficients after the threshold processing:

w'_j^i(x, y) = K_j × ŵ_j^i(x, y), K_j = √j × K,

where j is the current number of transformed layers and K is an empirical weight.

As for the registration process based on mutual information, mutual information is a similarity measure of the statistical correlation between two random variables. If two images are geometrically aligned, the mutual information of their corresponding voxel pairs is maximal. This method requires neither an assumption about the relationship between image intensities nor segmentation or any other preprocessing of the images, is not sensitive to missing data, and can be used for registration of images of almost any different modalities. The general process is shown in Fig. 3.
Two images are registered, one as a reference image R and the other as a floating image F. To calculate the mutual information, coordinate transformation is first performed. The coordinate transformation transforms pixels of the floating image F and then registers the transformed pixels to the reference image R; rigid transformation is used here. The coordinates of the pixels of the floating image F after the coordinate transformation are not necessarily integers, so the gray values of the corresponding points on the reference image R need to be obtained by linear interpolation. Each pixel of the floating image F is rotated and then registered to the reference image R,
and a joint histogram h(F, R) is calculated through the transformed pixel point pairs.
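The coordinate-transformation and interpolation steps can be sketched as follows (a rigid 2D rotation plus translation, with bilinear interpolation; clamping at the image border is an assumption):

```python
import numpy as np

def rigid_transform_points(points, theta, tx, ty):
    """Rigid 2D transform: rotate by theta, then translate by (tx, ty).
    The resulting coordinates are generally non-integer, hence the
    interpolation step below."""
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    return np.asarray(points, float) @ rot.T + np.array([tx, ty])

def bilinear_sample(img, x, y):
    """Gray value at the non-integer point (x, y) = (column, row) by bilinear
    interpolation; clamping at the border is an assumption."""
    a = np.asarray(img, dtype=float)
    x0 = min(max(int(np.floor(x)), 0), a.shape[1] - 2)
    y0 = min(max(int(np.floor(y)), 0), a.shape[0] - 2)
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * a[y0, x0] + dx * (1 - dy) * a[y0, x0 + 1]
            + (1 - dx) * dy * a[y0 + 1, x0] + dx * dy * a[y0 + 1, x0 + 1])
```

The transformed floating-image pixels and the interpolated reference-image gray values form the point pairs from which the joint histogram is accumulated.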
The mutual information formula is:

I(F, R) = Σ_{f∈F} Σ_{r∈R} pFR(f, r) log [ pFR(f, r) / (pF(f) pR(r)) ],

where pFR(f, r) is the joint probability distribution, which can be obtained by normalizing the joint gray histogram h(F, R) of the two images.
The edge (marginal) probability distributions can be obtained directly from the joint probability distribution:

pF(f) = Σ_{r∈R} pFR(f, r),  pR(r) = Σ_{f∈F} pFR(f, r).

The wavelet pyramid fusion method performs certain layers (here three layers) of orthogonal wavelet transformation on the images F and R that participate in the fusion to obtain four sub-images representing low-frequency information, horizontal information, vertical information and diagonal information; the low-frequency information is processed in the same way on each layer, and so on.
On the one hand of the fusion, the low-frequency portion of the last layer is fused by taking the maximum of the coefficients; on the other hand, the high-frequency wavelet coefficients of each layer in the other three directions are hierarchically and linearly weighted and fused, with a weighting function of the form

w_n(x, y) = r_n × w_F(x, y) + r_n × w_R(x, y), r_n = K_n × (1 - 0.05n), n = 1, 2, ..., N.
N is the number of layers for wavelet transformation, and K_n is an enhancement coefficient.
This algorithm uses all the high-frequency information, thus avoiding dark images caused by inverse wavelet transformation, also avoiding undesirable effects of unnecessary interference information due to irregular changes in high-frequency coefficients after wavelet transformation, and enhancing image edge information.
As shown in Fig. 2, in the Snake model image segmentation and positioning method based on a competitive neural network, the competitive neural network is a master-slave network, and the slave network is a Kohonen network group. After initial segmentation of the images, the results are used to initialize the state of neurons in the master network, and the state is dynamically evolved until the neurons are converged to an attractor of the master network.
If an L×L image f(i, j) (i, j = 1, 2, ..., L) has M different gray levels, a network of L×L×M neurons is established by assigning M neurons to each pixel. The m-th neuron at pixel (i, j) is $N_{ijm}$, and its active value $v_{ijm}$ represents the possibility that the pixel (i, j) has gray level m. Obviously, $0 \le v_{ijm} \le 1$ and $\sum_{m=1}^{M} v_{ijm} = 1$. The intensity of the interconnection from neuron $N_{ijm}$ to $N_{kln}$ is $T_{ijm,kln}$, and $T_{ijm,kln} = T_{kln,ijm}$ is assumed. Each neuron in the network receives the inputs of itself and all other neurons. The function $A_{ijm}(v) = \sum_{k=1}^{L} \sum_{l=1}^{L} \sum_{n=1}^{M} T_{ijm,kln} \, v_{kln}$ of the network state vector $v$ represents the total effect of the active values of the other neurons on $N_{ijm}$, where the state vector is $v = (v_{111}, v_{112}, \ldots, v_{11M}, v_{121}, \ldots, v_{LLM})$, and the energy function of the network at state $v$ is $E(v) = -\frac{1}{2} \sum_{i=1}^{L} \sum_{j=1}^{L} \sum_{m=1}^{M} \sum_{k=1}^{L} \sum_{l=1}^{L} \sum_{n=1}^{M} T_{ijm,kln} \, v_{ijm} \, v_{kln}$. The subsequent Snake model is then a process of minimizing this energy function.
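With the triple index (i, j, m) flattened into a single neuron index p, the total input and the Hopfield-style energy described above reduce to two lines of linear algebra. A small sketch (the $-\tfrac{1}{2}$ prefactor is the conventional Hopfield form, assumed here because the scanned formula's prefactor is illegible):

```python
import numpy as np

def total_input(T, v):
    """A_p(v) = sum_q T[p, q] * v[q]: total effect of the active values on neuron p."""
    return T @ v

def network_energy(T, v):
    """E(v) = -1/2 * sum_{p,q} T[p, q] * v[p] * v[q] over flattened (i, j, m) indices."""
    return -0.5 * float(v @ T @ v)
```

Dynamic evolution that only ever decreases this energy (with symmetric T) is what drives the network toward an attractor, i.e. toward the minimizing segmentation that seeds the Snake model.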
Described above are merely preferred embodiments of the present application, and the present application is not limited thereto. Various modifications and variations may be made to the present application by those skilled in the art. Any modification, equivalent substitution, improvement or the like made within the spirit and principle of the present application shall fall into the protection scope of the present application.
Although the specific embodiments of the present invention are described above in combination with the accompanying drawing, the protection scope of the present invention is not limited thereto. It should be understood that various modifications or variations could be made by those skilled in the art based on the technical solutions of the present invention without any creative effort, and these modifications or variations shall fall into the protection scope of the present invention.
Claims (9)
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910888197.XA CN110660063A (en) | 2019-09-19 | 2019-09-19 | Multi-image fused tumor three-dimensional position accurate positioning system |
Publications (2)
Publication Number | Publication Date |
---|---|
NL2025814A true NL2025814A (en) | 2021-05-17 |
NL2025814B1 NL2025814B1 (en) | 2021-12-14 |
Family
ID=69037327
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
NL2025814A NL2025814B1 (en) | 2019-09-19 | 2020-06-11 | Precise positioning system for three-dimensional position of tumor by multi-image fusion |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110660063A (en) |
NL (1) | NL2025814B1 (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109727672B (en) * | 2018-12-28 | 2023-04-07 | 江苏瑞尔医疗科技有限公司 | Prediction and tracking method for respiratory movement of patient thoracoabdominal tumor |
CN111228655A (en) * | 2020-01-14 | 2020-06-05 | 于金明 | Monitoring method and device based on virtual intelligent medical platform and storage medium |
CN111210911A (en) * | 2020-01-15 | 2020-05-29 | 于金明 | Radiotherapy external irradiation auxiliary diagnosis and treatment system based on virtual intelligent medical platform |
CN111477304A (en) * | 2020-04-03 | 2020-07-31 | 北京易康医疗科技有限公司 | Tumor irradiation imaging combination method for fusing PET (positron emission tomography) image and MRI (magnetic resonance imaging) image |
CN111667486B (en) * | 2020-04-29 | 2023-11-17 | 杭州深睿博联科技有限公司 | Multi-modal fusion pancreas segmentation method and system based on deep learning |
CN113450294A (en) * | 2021-06-07 | 2021-09-28 | 刘星宇 | Multi-modal medical image registration and fusion method and device and electronic equipment |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080292194A1 (en) * | 2005-04-27 | 2008-11-27 | Mark Schmidt | Method and System for Automatic Detection and Segmentation of Tumors and Associated Edema (Swelling) in Magnetic Resonance (Mri) Images |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107610162A (en) * | 2017-08-04 | 2018-01-19 | 浙江工业大学 | A kind of three-dimensional multimode state medical image autoegistration method based on mutual information and image segmentation |
CN109035160B (en) * | 2018-06-29 | 2022-06-21 | 哈尔滨商业大学 | Medical image fusion method and image detection method based on fusion medical image learning |
2019
- 2019-09-19 CN CN201910888197.XA patent/CN110660063A/en active Pending

2020
- 2020-06-11 NL NL2025814A patent/NL2025814B1/en not_active IP Right Cessation
Non-Patent Citations (2)
Title |
---|
ABDELSAMEA MOHAMMED M. ET AL: "A Survey of SOM-Based Active Contour Models for Image Segmentation", in: Advances in Self-Organizing Maps and Learning Vector Quantization, Proceedings of the 10th International Workshop, WSOM 2014, Mittweida, Germany, July 2-4, 2014, vol. 295, SPRINGER, Berlin, ISSN: 2194-5357, pages 293-302, XP055841344, DOI: 10.1007/978-3-319-07695-9_28 * |
VANI M ET AL: "Multi focus and multi modal image fusion using wavelet transform", 2015 3RD INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING, COMMUNICATION AND NETWORKING (ICSCN), IEEE, 26 March 2015 (2015-03-26), pages 1 - 6, XP033211230, DOI: 10.1109/ICSCN.2015.7219924 * |
Also Published As
Publication number | Publication date |
---|---|
NL2025814B1 (en) | 2021-12-14 |
CN110660063A (en) | 2020-01-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
NL2025814B1 (en) | Precise positioning system for three-dimensional position of tumor by multi-image fusion | |
Hou et al. | Brain CT and MRI medical image fusion using convolutional neural networks and a dual-channel spiking cortical model | |
Gurusamy et al. | A machine learning approach for MRI brain tumor classification | |
Singh et al. | Multimodal medical image sensor fusion model using sparse K-SVD dictionary learning in nonsubsampled shearlet domain | |
Miao et al. | Local segmentation of images using an improved fuzzy C-means clustering algorithm based on self-adaptive dictionary learning | |
Karthik et al. | A comprehensive framework for classification of brain tumour images using SVM and curvelet transform | |
CN113012173A (en) | Heart segmentation model and pathology classification model training, heart segmentation and pathology classification method and device based on cardiac MRI | |
Gan et al. | BM3D-based ultrasound image denoising via brushlet thresholding | |
CN108305279B (en) | A kind of super voxel generation method of the brain magnetic resonance image of iteration space fuzzy clustering | |
Ding et al. | M4fnet: Multimodal medical image fusion network via multi-receptive-field and multi-scale feature integration | |
CN116342444A (en) | Dual-channel multi-mode image fusion method and fusion imaging terminal | |
Kabir | Early stage brain tumor detection on MRI image using a hybrid technique | |
Qian et al. | Multi-scale context UNet-like network with redesigned skip connections for medical image segmentation | |
Yang et al. | Current advances in computational lung ultrasound imaging: a review | |
Beetz et al. | Point2Mesh-Net: Combining point cloud and mesh-based deep learning for cardiac shape reconstruction | |
CN109285176A (en) | A kind of cerebral tissue dividing method cut based on regularization figure | |
Gupta et al. | Ischemic stroke detection using image processing and ANN | |
Heller et al. | Computer aided diagnosis of skin lesions from morphological features | |
Namburete et al. | Multi-channel groupwise registration to construct an ultrasound-specific fetal brain atlas | |
Lecesne et al. | Segmentation of cardiac infarction in delayed-enhancement MRI using probability map and transformers-based neural networks | |
Belfilali et al. | Left ventricle analysis in echocardiographic images using transfer learning | |
El-Shafai et al. | Traditional and deep-learning-based denoising methods for medical images | |
Muthiah et al. | Fusion of MRI and PET images using deep learning neural networks | |
Fan et al. | DAGM-fusion: A dual-path CT-MRI image fusion model based multi-axial gated MLP | |
Valverde et al. | Multiple sclerosis lesion detection and segmentation using a convolutional neural network of 3D patches |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| | MM | Lapsed because of non-payment of the annual fee | Effective date: 20230701 |