NL2025814B1 - Precise positioning system for three-dimensional position of tumor by multi-image fusion - Google Patents

Precise positioning system for three-dimensional position of tumor by multi-image fusion Download PDF

Info

Publication number
NL2025814B1
Authority
NL
Netherlands
Prior art keywords
image
images
fusion
tumor
dimensional
Prior art date
Application number
NL2025814A
Other languages
Dutch (nl)
Other versions
NL2025814A (en)
Inventor
Yuan Shuanghu
Li Wei
Liu Wenju
Wang Suzhen
Dong Leilei
Li Li
Liu Ning
Wei Yuchun
Yu Jinming
Li Xiaoxiao
Original Assignee
Shandong Cancer Hospital And Inst
Univ Shandong
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Cancer Hospital And Inst, Univ Shandong filed Critical Shandong Cancer Hospital And Inst
Publication of NL2025814A publication Critical patent/NL2025814A/en
Application granted granted Critical
Publication of NL2025814B1 publication Critical patent/NL2025814B1/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/20 Image enhancement or restoration by the use of local operators
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T5/70
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G06T7/149 Segmentation; Edge detection involving deformable models, e.g. active contour models
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/35 Determination of transform parameters for the alignment of images, i.e. image registration using statistical methods
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G06T2207/10088 Magnetic resonance imaging [MRI]
    • G06T2207/10104 Positron emission tomography [PET]
    • G06T2207/10108 Single photon emission computed tomography [SPECT]
    • G06T2207/20 Special algorithmic details
    • G06T2207/20016 Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • G06T2207/20024 Filtering details
    • G06T2207/20032 Median filtering
    • G06T2207/20048 Transform domain processing
    • G06T2207/20064 Wavelet transform [DWT]
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/20112 Image segmentation details
    • G06T2207/20116 Active contour; Active surface; Snakes
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30096 Tumor; Lesion

Abstract

The present invention provides a precise positioning system for a three-dimensional position of a tumor by multi-image fusion. The system comprises: an image preprocessing module, configured to preprocess acquired multi-modal images; a registration and fusion module, configured to register and align the images based on mutual information and fuse the aligned images; and a segmentation and reconstruction module, configured to perform segmentation and three-dimensional reconstruction on the fused image to determine the position of a tumor. The present invention uses medical image fusion technology to properly fuse multi-modal images that provide different medical information, and performs segmentation and three-dimensional reconstruction on the fused image to precisely position a tumor.

Description

PRECISE POSITIONING SYSTEM FOR THREE-DIMENSIONAL POSITION OF TUMOR BY MULTI-IMAGE FUSION

Field of the Invention

The present invention belongs to the field of image processing, and particularly relates to a precise positioning system for a three-dimensional position of a tumor by multi-image fusion.
Background of the Invention

The statements in this section merely provide background information related to the present invention and do not necessarily constitute prior art.
With the development of medical imaging technology, modern medical treatment is closely related to medical imaging information.
The diagnosis and assessment of most diseases require evidence from medical images.
Different medical images provide different information about the organs concerned: CT and MR provide structural information, such as the anatomical structures of organs, with high spatial resolution, while SPECT and PET provide functional information such as organ blood perfusion.
If the information from structural images and functional images is organically fused and comprehensively processed to yield new information, new insights will be brought to clinical diagnosis and treatment.
Multi-image fusion imaging can improve the sensitivity and specificity of tumor diagnosis and provide more information for biopsy positioning, thereby making up for the deficiencies of morphological imaging.
However, how to register and fuse multi-modal images so as to achieve precise three-dimensional positioning of a tumor remains a crucial problem.
Summary of the Invention

In order to solve the above problems, the present invention proposes a precise positioning system for a three-dimensional position of a tumor by multi-image fusion.
The present invention uses medical image fusion technology to properly fuse multi-modal images that provide different medical information, and performs segmentation and three-dimensional reconstruction on the fused image to precisely position a tumor.
According to some embodiments, the present invention adopts the following technical solution: A precise positioning system for a three-dimensional position of a tumor by multi-image fusion, including: an image preprocessing module, configured to preprocess acquired multi-modal images; a registration and fusion module, configured to register and align the images based on mutual information and fuse the aligned images; and a segmentation and reconstruction module, configured to perform segmentation and three-dimensional reconstruction on the fused image to determine the position of a tumor.
The present invention uses medical image fusion technology to properly fuse multi-modal images that provide different medical information, and the fused image can provide a more intuitive, more comprehensive, and clearer image basis. Segmentation and three-dimensional reconstruction are performed on the fused image to precisely position a tumor.
As a further limitation, the image preprocessing module preprocesses the images by median filtering, wherein for an image, a rectangular sliding window is generated with each pixel in the image as its center, then all pixels in this window are sorted according to their gray values from small to large, a median of the sorted sequence is calculated, and this median is used to replace the pixel value of the center point in the window; a one-dimensional sequence $f_1, \ldots, f_n$ is assumed and the length of the window is $m$; the median filtering of the sequence extracts the $m$ numbers $f_{i-v}, \ldots, f_{i-1}, f_i, f_{i+1}, \ldots, f_{i+v}$ from the input sequence, wherein $i$ is the central position of the window and $v = (m-1)/2$; then the $m$ points are sorted according to their numerical values, and the number in the center is used as the output of the filtering.
As a further limitation, the image preprocessing module preprocesses the images by image edge enhancement based on wavelet transformation, wherein wavelet decomposition using a Mallat algorithm is performed on the images de-noised by median filtering, the decomposition spans three layers of scales, each layer of wavelet decomposition splits the image to be decomposed into a plurality of sub-band images and yields a wavelet coefficient at each scale, wavelet coefficients smaller than a set value are regarded as noise, the noise is filtered out by setting an appropriate threshold, and different enhancement coefficients are selected to enhance detail components of the image within different frequency ranges, thereby improving image quality and enhancing layering and visual effects.
As a further limitation, the registration and fusion module is configured to calculate the mutual information using one of two images as a reference image and the other as a floating image by: first performing coordinate transformation and registering the transformed pixels of the floating image F to the reference image R, wherein the coordinates of the pixels of the floating image F after the coordinate transformation are not necessarily integers; and obtaining gray values of corresponding points on the reference image R by interpolation, rotating each pixel of the floating image F and then registering the pixels to the reference image R, calculating a joint histogram and edge probability distribution through the transformed pixel point pairs, thus obtaining the mutual information; wherein when the mutual information is maximum, the two images are geometrically aligned.
As a further limitation, the registration and fusion module is configured to fuse the images by using wavelet pyramid fusion, which performs certain layers of orthogonal wavelet transformation on the reference image and the floating image that participate in the fusion to obtain four sub-images representing low-frequency information, horizontal information, vertical information and diagonal information, the low-frequency information is processed in the same way on each layer, and so on; the low-frequency portion of the last layer is fused by taking the maximum of the coefficients; and high-frequency wavelet coefficients of transformation on each layer in the other three directions are hierarchically and linearly weighted and fused.
As a further limitation, the segmentation and reconstruction module is configured to perform segmentation by Snake model image segmentation and positioning based on a competitive neural network, and after initial segmentation of the images, the results are used to initialize the state of neurons in a master network and the state is dynamically evolved until convergence.
Compared with the prior art, the beneficial effects of the present invention are: The present invention uses medical image fusion technology to properly fuse multi-modal images that provide structural and functional medical information, which provides a more comprehensive basis for judgment and makes up for the one-sided information provided by single-modality images.
The present invention uses a combination of median filtering and wavelet transformation edge enhancement in image preprocessing, which effectively removes noise, retains signals with better smoothness, can enhance image edges and has better visual effects.
The present invention uses a registration method based on mutual information and a wavelet pyramid fusion method in registration and fusion, which can be used for registration of images of almost any different modalities, further enhances image edge information, and avoids the dark images and new interference caused by inverse wavelet transformation.
The present invention uses a Snake model image segmentation and positioning method based on a competitive neural network in segmentation and positioning, wherein the competitive neural network performs the initial segmentation of the images, thereby realizing automatic segmentation, overcoming the Snake model's sensitivity to the initial contour, and remedying its unsatisfactory detection of concave contours or highly curved convex contours.
Brief Description of the Drawings

The accompanying drawings constituting a part of the present application are used for providing a further understanding of the present application, and the schematic embodiments of the present application and the description thereof are used for interpreting the present application, rather than constituting improper limitations to the present application.
Fig. 1 is a flowchart of precise tumor positioning by multi-image fusion according to the present invention; Fig. 2 is a flowchart of Snake model image segmentation and positioning based on a competitive neural network according to the present invention; Fig. 3 is a flowchart of registration based on mutual information according to the present invention.
Detailed Description of the Embodiments

The present invention will be further illustrated below in conjunction with the accompanying drawings and embodiments.
It should be pointed out that the following detailed descriptions are all exemplary and aim to further illustrate the present application. Unless otherwise specified, all technological and scientific terms used in the descriptions have the same meanings generally understood by those of ordinary skill in the art of the present application.
It should be noted that the terms used herein are merely for describing specific embodiments, and are not intended to limit exemplary embodiments according to the present application. As used herein, unless otherwise explicitly pointed out by the context, the singular form is also intended to include the plural form. In addition, it should also be understood that when the terms “include” and/or “comprise” are used in the specification, they indicate the presence of features, steps, operations, devices, components and/or combinations thereof.
In the present invention, the terms such as “upper”, “lower”, “left”, “right”, “front”, “rear”, “vertical”, “horizontal”, “side”, and “bottom” indicate the orientations or positional relationships based on the orientations or positional relationships shown in the drawings, are only relationship terms determined for the convenience of describing the structural relationships of various components or elements of the present invention, but do not specify any component or element in the present invention, and cannot be understood as limitations to the present invention.
In the present invention, the terms such as “fixed”, “coupled” and “connected” should be generally understood, for example, the “connected” may be fixedly connected, integrally connected, detachably connected, directly connected, or indirectly connected by a medium.
For researchers and technical personnel in this art, the specific meanings of the above terms in the present invention may be determined according to specific circumstances, and should not be understood as limitations to the present invention.
A precise positioning system for a three-dimensional position of a tumor by multi-image fusion includes: an image preprocessing module, configured to preprocess acquired multi-modal images; a registration and fusion module, configured to register and align the images based on mutual information and fuse the aligned images; and a segmentation and reconstruction module, configured to perform segmentation and three-dimensional reconstruction on the fused image to determine the position of a tumor.
The image preprocessing module preprocesses the images by median filtering, wherein for an image, a rectangular sliding window is generated with each pixel in the image as the center, then all pixels in this window are sorted according to the gray values from small to big, a median of the sorted sequence is calculated, and this median is used to replace the pixel value of a center point in the window.
The image preprocessing module preprocesses the images by image edge enhancement based on wavelet transformation, wherein wavelet transform decomposition is performed using a Mallat algorithm on the images de-noised by median filtering, the scales of decomposition are three layers, each layer of wavelet decomposition decomposes an image to be decomposed into a plurality of sub-band images and obtains a wavelet coefficient of each scale, the wavelet coefficients smaller than a set value are regarded as noise, the noise is filtered out by setting an appropriate threshold, and different enhancement coefficients are selected to enhance detail components of the image within different frequency ranges, thereby improving image quality and enhancing layering and visual effects.
The registration and fusion module is configured to calculate the mutual information using one of two images as a reference image and the other as a floating image by: first performing coordinate transformation and registering the transformed pixels of the floating image F to the reference image R, wherein the coordinates of the pixels of the floating image F after the coordinate transformation are not necessarily integers; and obtaining gray values of corresponding points on the reference image R by interpolation, rotating each pixel of the floating image F and then registering the pixels to the reference image R, calculating a joint histogram and edge probability distribution through the transformed pixel point pairs, thus obtaining the mutual information; wherein when the mutual information is maximum, the two images are geometrically aligned.
The registration and fusion module is configured to fuse the images by using wavelet pyramid fusion, which performs certain layers of orthogonal wavelet transformation on the reference image and the floating image that participate in the fusion to obtain four sub-images representing low-frequency information, horizontal information, vertical information and diagonal information, the low-frequency information is processed in the same way on each layer, and so on; the low-frequency portion of the last layer is fused by taking the maximum of the coefficients; and the high-frequency wavelet coefficients of transformation on each layer in the other three directions are hierarchically and linearly weighted and fused.
The segmentation and reconstruction module is configured to perform segmentation by Snake model image segmentation and positioning based on a competitive neural network, and after initial segmentation of the images, the results are used to initialize the state of neurons in a master network and the state is dynamically evolved until convergence.
As shown in Fig. 1, the working method of the above system mainly includes three aspects: image preprocessing, registration and fusion, and segmentation and positioning.
The image preprocessing mainly uses median filtering and wavelet transform edge enhancement technologies; the registration and fusion mainly use a registration method based on mutual information and a wavelet pyramid fusion method; and the segmentation and positioning mainly use a Snake model image segmentation and positioning method based on a competitive neural network.
The images are preprocessed by median filtering, wherein for an image, a rectangular sliding window (the window size is generally odd) is generated with each pixel in the image as its center, then all pixels in this window are sorted according to their gray values from small to large, a median of the sorted sequence is calculated, and this median is used to replace the pixel value of the center point in the window. A one-dimensional sequence $f_1, \ldots, f_n$ is assumed and the length of the window is $m$; the median filtering of the sequence extracts the $m$ numbers $f_{i-v}, \ldots, f_{i-1}, f_i, f_{i+1}, \ldots, f_{i+v}$ from the input sequence, wherein $i$ is the central position of the window and $v = (m-1)/2$; then the $m$ points are sorted according to their numerical values, and the number in the center is used as the filter output $y_i$, expressed mathematically as:

$$y_i = \operatorname{Med}\{f_{i-v}, \ldots, f_i, \ldots, f_{i+v}\}, \quad i \in \mathbb{Z}, \quad v = \frac{m-1}{2}$$

The median filtering of two-dimensional data can be expressed as:

$$y_{ij} = \operatorname{Med}\{x_{i+k,\, j+l} : (k, l) \in A\}$$

where $A$ is the filter window.
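As a rough illustration of this step, the sketch below implements the one-dimensional formula directly in NumPy; the edge padding and the use of scipy.ndimage for the two-dimensional case are assumptions, not details from the patent:

```python
import numpy as np
from scipy.ndimage import median_filter  # library route for the 2-D case

def median_filter_1d(f, m):
    """y_i = Med{f_{i-v}, ..., f_i, ..., f_{i+v}} with v = (m - 1) / 2."""
    assert m % 2 == 1, "window length m must be odd"
    v = (m - 1) // 2
    padded = np.pad(np.asarray(f, dtype=float), v, mode="edge")  # replicate borders
    windows = np.lib.stride_tricks.sliding_window_view(padded, m)
    return np.median(windows, axis=-1)

# Two-dimensional median filtering y_ij = Med{x_{i+k, j+l}} over a 3x3 window:
# denoised = median_filter(image, size=3)
```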
Further, the image edge enhancement technology based on wavelet transformation performs wavelet decomposition, using a Mallat algorithm, on the images de-noised by median filtering. The decomposition spans three layers of scales; each layer of wavelet decomposition splits the image to be decomposed into four sub-band images: LL (horizontal low frequency, vertical low frequency), LH (horizontal low frequency, vertical high frequency), HL (horizontal high frequency, vertical low frequency), and HH (horizontal high frequency, vertical high frequency), and a wavelet coefficient is obtained at each scale.

The smaller wavelet coefficients are regarded as noise, and the noise is filtered out by setting an appropriate threshold. The threshold $T_j^i$ is obtained according to the variance of each sub-band image, where $j$ is the current decomposition layer and $i = 1, 2, 3$ denotes the HH, HL and LH sub-band images respectively. The wavelet coefficients of each layer are thresholded as follows to obtain an estimate:

$$\hat{w}_j^i(x, y) = \begin{cases} w_j^i(x, y) - T_j^i, & w_j^i(x, y) \geq T_j^i \\ 0, & |w_j^i(x, y)| < T_j^i \\ w_j^i(x, y) + T_j^i, & w_j^i(x, y) \leq -T_j^i \end{cases}$$

Detail information in an image is usually contained in the high-frequency components, so after the noise is removed, different enhancement coefficients are required to enhance the detail components of the image within different frequency ranges, thereby improving image quality and enhancing layering and visual effects. An enhancement coefficient $K_j$ is set to enhance the wavelet coefficients after the threshold processing:

$$\tilde{w}_j^i(x, y) = K_j \cdot \hat{w}_j^i(x, y), \quad K_j = \sqrt{j} \cdot K$$

where $j$ is the current decomposition layer and $K$ is an empirical weight.
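A minimal sketch of this de-noise-then-enhance step, using the PyWavelets library; the db2 wavelet, the per-sub-band standard deviation standing in for the variance-derived threshold $T_j^i$, and the $K_j = \sqrt{j} \cdot K$ reading of the enhancement formula are assumptions rather than details fixed by the patent:

```python
import numpy as np
import pywt

def wavelet_edge_enhance(img, wavelet="db2", levels=3, K=1.5):
    # Mallat-style decomposition: [LL_n, (LH_n, HL_n, HH_n), ..., (LH_1, HL_1, HH_1)]
    coeffs = pywt.wavedec2(img, wavelet, level=levels)
    out = [coeffs[0]]  # keep the low-frequency LL band unchanged
    for j, bands in enumerate(coeffs[1:], start=1):
        enhanced = []
        for w in bands:
            T = np.std(w)                          # threshold from sub-band statistics (assumed)
            w = pywt.threshold(w, T, mode="soft")  # coefficients below T treated as noise
            enhanced.append(np.sqrt(j) * K * w)    # K_j = sqrt(j) * K (assumed reading)
        out.append(tuple(enhanced))
    return pywt.waverec2(out, wavelet)
```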
As for the registration process based on mutual information: mutual information is a similarity measure of the statistical correlation between two random variables. If two images are geometrically aligned, the mutual information of their corresponding voxel pairs is maximal. This method requires no assumption about the relationship between image intensities, no segmentation and no other preprocessing of the images, is not sensitive to missing data, and can be used for registration of images of almost any different modalities. The general process is shown in Fig. 3.

Two images are registered, one as a reference image R and the other as a floating image F. To calculate the mutual information, a coordinate transformation is performed first: the pixels of the floating image F are transformed and then registered to the reference image R; rigid transformation is used here. The coordinates of the pixels of the floating image F after the coordinate transformation are not necessarily integers, so the gray values of the corresponding points on the reference image R are obtained by linear interpolation. Each pixel of the floating image F is rotated and then registered to R, and a joint histogram $h(F, R)$ is calculated from the transformed pixel-point pairs.
The mutual information formula is:

$$I(F, R) = \sum_{f, r} p_{FR}(f, r) \log \frac{p_{FR}(f, r)}{p_F(f)\, p_R(r)}$$

where $p_{FR}(f, r)$ is the joint probability distribution, which can be obtained by normalizing the joint gray histogram $h(F, R)$ of the two images.
The edge probability distributions can be obtained directly from the joint probability distribution:

$$p_F(f) = \sum_{r \in R} p_{FR}(f, r), \qquad p_R(r) = \sum_{f \in F} p_{FR}(f, r)$$

When the mutual information is maximal, the two images are geometrically aligned.
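To make the measure concrete, here is a small NumPy sketch that computes $I(F, R)$ from the normalized joint gray histogram; the bin count is an arbitrary choice, and the search over rigid-transform parameters that maximizes this value is only indicated, not implemented:

```python
import numpy as np

def mutual_information(ref, flt, bins=64):
    """I(F,R) = sum_{f,r} p_FR(f,r) * log( p_FR(f,r) / (p_F(f) * p_R(r)) )."""
    hist, _, _ = np.histogram2d(ref.ravel(), flt.ravel(), bins=bins)
    p_fr = hist / hist.sum()               # joint distribution from the joint histogram
    p_f = p_fr.sum(axis=1, keepdims=True)  # marginal p_F(f)
    p_r = p_fr.sum(axis=0, keepdims=True)  # marginal p_R(r)
    nz = p_fr > 0                          # skip empty bins to avoid log(0)
    return float(np.sum(p_fr[nz] * np.log(p_fr[nz] / (p_f * p_r)[nz])))

# Registration then searches the rotation/translation of the floating image,
# e.g. with scipy.optimize.minimize on -mutual_information, until I(F, R) peaks.
```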
The wavelet pyramid fusion method performs several layers (three layers here) of orthogonal wavelet transformation on the images F and R that participate in the fusion to obtain four sub-images representing low-frequency information, horizontal information, vertical information and diagonal information, the low-frequency information being processed in the same way on each layer, and so on. On the one hand, the low-frequency portion of the last layer is fused by taking the maximum of the coefficients; on the other hand, the high-frequency wavelet coefficients of each layer in the other three directions are hierarchically and linearly weighted and fused, with the weighting function

$$w_n(x, y) = r_n \cdot w_F^n(x, y) + (1 - r_n) \cdot w_R^n(x, y), \qquad r_n = K_n (1 - 0.05\, n), \quad n = 1, 2, \ldots, N$$

where $N$ is the number of layers of the wavelet transformation and $K_n$ is an enhancement coefficient.
This algorithm uses all the high-frequency information, thus avoiding dark images caused by inverse wavelet transformation, also avoiding undesirable effects of unnecessary interference information due to irregular changes in high-frequency coefficients after wavelet transformation, and enhancing image edge information.
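The sketch below mirrors this fusion scheme with PyWavelets: maximum selection for the deepest low-frequency band and linear weighting of the detail bands. The wavelet choice and the $r_n = K_n(1 - 0.05\, n)$ layer weights follow the reconstruction above and should be read as assumptions:

```python
import numpy as np
import pywt

def wavelet_pyramid_fuse(F, R, wavelet="db2", levels=3, K=1.0):
    cF = pywt.wavedec2(F, wavelet, level=levels)
    cR = pywt.wavedec2(R, wavelet, level=levels)
    # low-frequency portion of the last (deepest) layer: take the coefficient maximum
    fused = [np.maximum(cF[0], cR[0])]
    for n, (bF, bR) in enumerate(zip(cF[1:], cR[1:]), start=1):
        r = K * (1 - 0.05 * n)  # layer weight r_n (assumed reading of the patent formula)
        fused.append(tuple(r * wf + (1 - r) * wr for wf, wr in zip(bF, bR)))
    return pywt.waverec2(fused, wavelet)
```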
As shown in Fig. 2, in the Snake model image segmentation and positioning method based on a competitive neural network, the competitive neural network is a master-slave network, and the slave network is a Kohonen network group. After initial segmentation of the images, the results are used to initialize the state of the neurons in the master network, and the state is dynamically evolved until the neurons converge to an attractor of the master network.
If an $L \times L$ image $f(i, j)$ ($i, j = 1, 2, \ldots, L$) has $M$ different gray levels, a network of $L \times L \times M$ neurons is established by setting $M$ neurons for each pixel. The $m$-th neuron at pixel $(i, j)$ is $N_{ijm}$, and its active value is $v_{ijm}$, representing the possibility that pixel $(i, j)$ has gray level $m$. Obviously, $0 \leq v_{ijm} \leq 1$ and $\sum_{m=1}^{M} v_{ijm} = 1$. The intensity of interconnection from neuron $N_{ijm}$ to $N_{kln}$ is $T_{ijm,kln}$, and the symmetry $T_{ijm,kln} = T_{kln,ijm}$ is assumed. Each neuron in the network receives the inputs of itself and the other neurons. The function

$$A_{ijm} = \sum_{k=1}^{L} \sum_{l=1}^{L} \sum_{n=1}^{M} T_{ijm,kln}\, v_{kln}$$

of the network state vector $v = (v_{111}, v_{112}, \ldots, v_{11M}, v_{121}, \ldots, v_{LLM})$ represents the total effect of the active values of the other neurons on $N_{ijm}$, and the energy function of the network at state vector $v$ is

$$E(v) = -\frac{1}{2} \sum_{i=1}^{L} \sum_{j=1}^{L} \sum_{m=1}^{M} \sum_{k=1}^{L} \sum_{l=1}^{L} \sum_{n=1}^{M} T_{ijm,kln}\, v_{ijm}\, v_{kln}$$

The Snake model that follows is a process of minimizing an energy function.
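As an illustration of the final refinement step only, the sketch below runs scikit-image's active-contour (Snake) solver from a circular initial contour. In the patent the initial contour comes from the competitive-network segmentation; the circle around a seed point here is a stand-in for that initialization, and all parameter values are arbitrary:

```python
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

def snake_refine(img, seed_rc, radius, n_points=200):
    """Evolve a Snake from a rough circular contour around a seed pixel."""
    s = np.linspace(0, 2 * np.pi, n_points)
    init = np.column_stack([seed_rc[0] + radius * np.sin(s),
                            seed_rc[1] + radius * np.cos(s)])  # (row, col) contour points
    smoothed = gaussian(img, sigma=2, preserve_range=True)     # smooth the image energy
    # alpha/beta control contour elasticity/rigidity; gamma is the time step
    return active_contour(smoothed, init, alpha=0.015, beta=10.0, gamma=0.001)
```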
Described above are merely preferred embodiments of the present application, and the present application is not limited thereto. Various modifications and variations may be made to the present application for those skilled in the art. Any modification, equivalent substitution, improvement or the like made within the spirit and principle of the present application shall fall into the protection scope of the present application.
Although the specific embodiments of the present invention are described above in combination with the accompanying drawings, the protection scope of the present invention is not limited thereto. It should be understood by those skilled in the art that various modifications or variations could be made based on the technical solutions of the present invention without any creative effort, and these modifications or variations shall fall into the protection scope of the present invention.

Claims (9)

1. A precise positioning system for a three-dimensional position of a tumor by multi-image fusion, comprising: an image preprocessing module, configured to preprocess acquired multi-modal images; a registration and fusion module, configured to register and align the images based on mutual information and to fuse the aligned images; and a segmentation and reconstruction module, configured to perform segmentation and three-dimensional reconstruction on the fused image to determine the position of a tumor.

2. The precise positioning system according to claim 1, wherein the image preprocessing module preprocesses the images by median filtering, wherein for an image a rectangular sliding window is generated with each pixel in the image as its center, all pixels in this window are sorted by gray value from small to large, the median of the sorted sequence is calculated, and this median is used to replace the pixel value of the center point in the window.

3. The precise positioning system according to claim 2, wherein a one-dimensional sequence $f_1, \ldots, f_n$ is assumed and the length of the window is $m$; the median filtering selects the $m$ numbers $f_{i-v}, \ldots, f_{i-1}, f_i, f_{i+1}, \ldots, f_{i+v}$ from the input sequence, where $i$ is the central position of the window and $v = (m - 1)/2$; the $m$ points are then sorted according to their numerical values, and the number in the center is used as the output of the filtering.

4. The precise positioning system according to claim 1, wherein the image preprocessing module preprocesses the images by image edge enhancement based on wavelet transformation, wherein wavelet decomposition using a Mallat algorithm is performed on the images de-noised by median filtering, the decomposition spans multiple layers of scales, each layer of wavelet decomposition splits the image to be decomposed into a plurality of sub-band images and yields a wavelet coefficient at each scale, wavelet coefficients smaller than a set value are regarded as noise, the noise is filtered out by setting an appropriate threshold, and different enhancement coefficients are selected to enhance detail components of the image within different frequency ranges, thereby improving image quality and enhancing layering and visual effects.

5. The precise positioning system according to claim 4, wherein the decomposition comprises three layers.

6. The precise positioning system according to claim 1, wherein the registration and fusion module is configured to calculate the mutual information using one of two images as a reference image and the other as a floating image by: first performing a coordinate transformation and registering the transformed pixels of the floating image F to the reference image R, wherein the coordinates of the pixels of the floating image F after the coordinate transformation are not necessarily integers; obtaining gray values of the corresponding points on the reference image R by interpolation; rotating each pixel of the floating image F and then registering the pixels to the reference image R; and calculating a joint histogram and the edge probability distributions from the transformed pixel-point pairs, thus obtaining the mutual information; wherein, when the mutual information is maximal, the two images are geometrically aligned.

7. The precise positioning system according to claim 1, wherein the registration and fusion module is configured to fuse the images by wavelet pyramid fusion, which performs several layers of orthogonal wavelet transformation on the reference image and the floating image that participate in the fusion to obtain four sub-images representing low-frequency information, horizontal information, vertical information and diagonal information, the low-frequency information being processed in the same way on each layer, and so on.

8. The precise positioning system according to claim 7, wherein the low-frequency portion of the last layer is fused by taking the maximum of the coefficients, and the high-frequency wavelet coefficients of each layer in the other three directions are hierarchically and linearly weighted and fused.

9. The precise positioning system according to claim 1, wherein the segmentation and reconstruction module is configured to perform segmentation by Snake-model image segmentation and positioning based on a competitive neural network, wherein after initial segmentation of the images, the results are used to initialize the state of the neurons in a master network, and the state is dynamically evolved until convergence.
NL2025814A 2019-09-19 2020-06-11 Precise positioning system for three-dimensional position of tumor by multi-image fusion NL2025814B1 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910888197.XA CN110660063A (en) 2019-09-19 2019-09-19 Multi-image fused tumor three-dimensional position accurate positioning system

Publications (2)

Publication Number Publication Date
NL2025814A NL2025814A (en) 2021-05-17
NL2025814B1 true NL2025814B1 (en) 2021-12-14

Family

ID=69037327

Family Applications (1)

Application Number Title Priority Date Filing Date
NL2025814A NL2025814B1 (en) 2019-09-19 2020-06-11 Precise positioning system for three-dimensional position of tumor by multi-image fusion

Country Status (2)

Country Link
CN (1) CN110660063A (en)
NL (1) NL2025814B1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109727672B (en) * 2018-12-28 2023-04-07 江苏瑞尔医疗科技有限公司 Prediction and tracking method for respiratory movement of patient thoracoabdominal tumor
CN111228655A (en) * 2020-01-14 2020-06-05 于金明 Monitoring method and device based on virtual intelligent medical platform and storage medium
CN111210911A (en) * 2020-01-15 2020-05-29 于金明 Radiotherapy external irradiation auxiliary diagnosis and treatment system based on virtual intelligent medical platform
CN111477304A (en) * 2020-04-03 2020-07-31 北京易康医疗科技有限公司 Tumor irradiation imaging combination method for fusing PET (positron emission tomography) image and MRI (magnetic resonance imaging) image
CN111667486B (en) * 2020-04-29 2023-11-17 杭州深睿博联科技有限公司 Multi-modal fusion pancreas segmentation method and system based on deep learning
CN113450294A (en) * 2021-06-07 2021-09-28 刘星宇 Multi-modal medical image registration and fusion method and device and electronic equipment

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006114003A1 (en) * 2005-04-27 2006-11-02 The Governors Of The University Of Alberta A method and system for automatic detection and segmentation of tumors and associated edema (swelling) in magnetic resonance (mri) images
CN107610162A (en) * 2017-08-04 2018-01-19 浙江工业大学 A kind of three-dimensional multimode state medical image autoegistration method based on mutual information and image segmentation
CN109035160B (en) * 2018-06-29 2022-06-21 哈尔滨商业大学 Medical image fusion method and image detection method based on fusion medical image learning

Also Published As

Publication number Publication date
NL2025814A (en) 2021-05-17
CN110660063A (en) 2020-01-07


Legal Events

Date Code Title Description
MM Lapsed because of non-payment of the annual fee

Effective date: 20230701