CN110211079A - Medical image fusion method and device - Google Patents
- Publication number
- CN110211079A CN110211079A CN201910430661.0A CN201910430661A CN110211079A CN 110211079 A CN110211079 A CN 110211079A CN 201910430661 A CN201910430661 A CN 201910430661A CN 110211079 A CN110211079 A CN 110211079A
- Authority
- CN
- China
- Prior art keywords
- image
- mode image
- mode
- network model
- indicate
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration; G06T5/50—by the use of more than one image, e.g. averaging, subtraction
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality; G06T2207/10072—Tomographic images; G06T2207/10081—Computed x-ray tomography [CT]; G06T2207/10088—Magnetic resonance imaging [MRI]
- G06T2207/20—Special algorithmic details; G06T2207/20081—Training; Learning; G06T2207/20084—Artificial neural networks [ANN]; G06T2207/20212—Image combination; G06T2207/20221—Image fusion; Image merging
- G06T2207/30—Subject of image; Context of image processing; G06T2207/30004—Biomedical image processing; G06T2207/30016—Brain
Abstract
The invention discloses a medical image fusion method and device. The method comprises: obtaining the pixel information of a first-modality image and a second-modality image to be fused, the two images being of different modalities; and inputting the pixel information of both images into a pre-trained image fusion network model, which outputs their fused image. The image fusion network model fuses images of different modalities on the basis of semantic information, where the semantic information characterizes the meaning of the pixel values in each modality. Because the fusion is driven by semantic information, the fused multi-modality image is easy to read, achieving the technical effect of genuinely providing clinicians with the medical images of different modalities needed to assist treatment.
Description
Technical field
The present invention relates to the field of image processing, and in particular to a medical image fusion method and device.
Background technique
This section is intended to provide background or context for the embodiments of the present invention recited in the claims. Nothing described here is admitted to be prior art merely by its inclusion in this section.
A medical image is an image of internal organs or diseased tissue of the human body, obtained with medical imaging equipment for medical practice or research. Each kind of imaging device can only acquire images of a single modality, and a single-modality image often fails to provide sufficient information. For example, an MR image acquired by magnetic resonance equipment clearly shows soft-tissue structures but is insensitive to calcification points, whereas a CT image acquired by computed tomography equipment has strong spatial resolution and geometric fidelity and clearly shows structures such as bone, but offers relatively low soft-tissue contrast. How to fuse the single-modality images obtained by different imaging devices into a multi-modality medical image carrying more complete information is therefore of great significance for medical research and diagnosis.
Currently, the mainstream approach to obtaining a multi-modality medical image is to fuse two single-modality medical images directly into one image with image fusion techniques, so that boundary and structural information become clearer in the fused image. Because the semantic information of the images is not taken into account, however, the fused image is difficult to interpret.
For example, Fig. 1a and Fig. 1b show a CT image and an MR image of a patient's head, used as the two source images to be fused. Fig. 1c shows the result of fusing them with the CNN-LP network structure, Fig. 1d the result with the NSCT-PCDC network structure, and Fig. 1e the result with the NSST-PAPCNN network structure.
Analysis shows that existing image fusion methods copy the intensities of the source images into the fused image, which leads to two problems: 1. local regions (for example, the regions marked by rectangles in Figs. 1a through 1e) are blurred away; 2. because some locations share the same intensity in the source images, they become hard to distinguish in the fused image (for example, cerebrospinal fluid and skull).
Summary of the invention
An embodiment of the present invention provides a medical image fusion method to solve the prior-art technical problem that semantic information is lost, or semantic conflicts arise, when medical images are fused. The method comprises: obtaining the pixel information of a first-modality image and a second-modality image to be fused, the two images being of different modalities; inputting the pixel information of both images into a pre-trained image fusion network model; and outputting their fused image, wherein the image fusion network model is a model that fuses images of different modalities on the basis of semantic information, and the semantic information characterizes the meaning of the pixel values in each modality.
An embodiment of the present invention also provides a medical image fusion device to solve the same technical problem. The device comprises: an image information obtaining unit for obtaining the pixel information of the first-modality and second-modality images to be fused, the two images being of different modalities; and an image information processing unit for inputting that pixel information into the pre-trained image fusion network model and outputting the fused image, wherein the image fusion network model fuses images of different modalities on the basis of semantic information characterizing the meaning of their pixel values.
An embodiment of the present invention also provides a computer device to solve the same technical problem, the device comprising a memory, a processor, and a computer program stored in the memory and executable by the processor, the processor implementing the above medical image fusion method when executing the program.
An embodiment of the present invention also provides a computer-readable storage medium to solve the same technical problem, the storage medium storing a computer program that executes the above medical image fusion method.
In the embodiments of the present invention, an image fusion network model that fuses images of different modalities on the basis of semantic information is first obtained through machine-learning training. To fuse images of different modalities, one then only needs to input their pixel information into this model to obtain the fused image. The embodiments thus fuse different modalities on the basis of semantic information, so that the fused multi-modality image is easy to read, achieving the technical effect of genuinely providing clinicians with the medical images of different modalities needed to assist treatment.
Detailed description of the invention
To explain the technical solutions of the embodiments of the present invention or of the prior art more clearly, the drawings needed in their description are briefly introduced below. The drawings described below show only some embodiments of the invention; those of ordinary skill in the art can derive other drawings from them without creative effort. In the drawings:
Fig. 1a is a schematic diagram of a head CT image provided in an embodiment of the present invention;
Fig. 1b is a schematic diagram of a head MR image provided in an embodiment of the present invention;
Fig. 1c is a schematic diagram of a multi-modality image obtained by fusing Fig. 1a and Fig. 1b with the CNN-LP network structure, provided in an embodiment of the present invention;
Fig. 1d is a schematic diagram of a multi-modality image obtained by fusing Fig. 1a and Fig. 1b with the NSCT-PCDC network structure, provided in an embodiment of the present invention;
Fig. 1e is a schematic diagram of a multi-modality image obtained by fusing Fig. 1a and Fig. 1b with the NSST-PAPCNN network structure, provided in an embodiment of the present invention;
Fig. 2 is a flow chart of a medical image fusion method provided in an embodiment of the present invention;
Fig. 3 is a schematic diagram of a semantic-information-based image fusion process provided in an embodiment of the present invention;
Fig. 4 is a schematic diagram of a multi-modality image fusion model based on the W-Net network structure, provided in an embodiment of the present invention;
Fig. 5a is a schematic diagram of an encoding network model using the U-Net network structure, provided in an embodiment of the present invention;
Fig. 5b is a schematic diagram of a decoding network model using the U-Net network structure, provided in an embodiment of the present invention;
Fig. 6a is a schematic diagram of a CT image provided in an embodiment of the present invention;
Fig. 6b is a schematic diagram of an MR image provided in an embodiment of the present invention;
Fig. 6c is a schematic diagram of a multi-modality image obtained by fusing Fig. 6a and Fig. 6b with the CNN-LP network structure, provided in an embodiment of the present invention;
Fig. 6d is a schematic diagram of a multi-modality image obtained by fusing Fig. 6a and Fig. 6b with the NSCT-PCDC network structure, provided in an embodiment of the present invention;
Fig. 6e is a schematic diagram of a multi-modality image obtained by fusing Fig. 6a and Fig. 6b with the NSST-PAPCNN network structure, provided in an embodiment of the present invention;
Fig. 6f is a schematic diagram of a multi-modality image obtained by fusing Fig. 6a and Fig. 6b with the GF network structure, provided in an embodiment of the present invention;
Fig. 6g is a schematic diagram of a multi-modality image obtained by fusing Fig. 6a and Fig. 6b with the NSCT-RPCNN network structure, provided in an embodiment of the present invention;
Fig. 6h is a schematic diagram of a multi-modality image obtained by fusing Fig. 6a and Fig. 6b with the W-Net network structure, provided in an embodiment of the present invention;
Fig. 7 is a schematic diagram of a medical image fusion device provided in an embodiment of the present invention.
Specific embodiment
To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the embodiments are described in further detail below with reference to the drawings. The exemplary embodiments and their descriptions are used to explain the present invention, not to limit it.
In this specification, the terms "comprising", "including", "having", "containing", and the like are open-ended, meaning including but not limited to. References to "one embodiment", "a specific embodiment", "some embodiments", "for example", and the like mean that a specific feature, structure, or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the application. These expressions do not necessarily refer to the same embodiment or example, and the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments or examples. The order of the steps in each embodiment is only illustrative and may be adjusted as needed.
An embodiment of the present invention provides a medical image fusion method. Fig. 2 is a flow chart of the method; as shown in Fig. 2, the method comprises the following steps:
S201: obtain the pixel information of the first-modality image and the second-modality image to be fused, the two images being of different modalities;
S202: input the pixel information of the first-modality and second-modality images into a pre-trained image fusion network model, and output their fused image, wherein the image fusion network model is a model that fuses images of different modalities on the basis of semantic information, and the semantic information characterizes the meaning of the pixel values in each modality.
It should be noted that, in the embodiments of the present invention, the first-modality image and the second-modality image denote images of two different modalities, which may be, but are not limited to, single-modality images obtained by any two or more of the following medical imaging devices: magnetic resonance equipment, CT equipment, ultrasound equipment, various X-ray machines, and similar imaging instruments.
As a preferred embodiment, the embodiments of the present invention are described in detail with the fusion of an MR image (an image obtained with magnetic resonance equipment) and a CT image (an image obtained with CT equipment) as an example. An MR image and a CT image look very similar (both express contrast through gray values ranging from black to white), but because their imaging principles are entirely different, the meanings of their pixel values are also entirely different. Fusing an MR image with a CT image directly therefore leads to semantic loss or semantic conflict. With the medical image fusion method provided by the embodiments of the present invention, the MR image and the CT image are fused on the basis of semantic information, so that the fused image clearly reflects structural information such as bone while also reflecting structures such as soft tissue.
As can be seen from the above, in the medical image fusion method provided by the embodiments of the present invention, an image fusion network model that fuses images of different modalities on the basis of semantic information is first obtained through machine-learning training; to fuse images of different modalities, one then only needs to input their pixel information into the model to obtain the fused image. The fused multi-modality image is thus easy to read, genuinely providing clinicians with the medical images of different modalities needed to assist treatment.
It should be noted that the above image fusion network model may be a machine-learning network model obtained through training in advance. As an optional embodiment, the medical image fusion method provided by the embodiments of the present invention may train the model through the following steps: train the image fusion network model with training data; during training, adjust the parameters of the image fusion network model until its loss function meets a preset convergence condition. The image fusion network model comprises an encoding network model and a decoding network model, wherein the input of the encoding network model is the first-modality and second-modality images to be fused and its output is their fused image, and the input of the decoding network model is the fused image and its output is the first-modality and second-modality images reconstructed from the fused image. The loss function includes at least the reconstruction errors of the first-modality and second-modality images.
Optionally, the loss function may further include a sparse penalty term and an L2 regularization term. As an optional embodiment, the loss function may be expressed as:
L(\theta,\varphi)=\|\hat{x}_{ct}-x_{ct}\|_2^2+\|\hat{x}_{mr}-x_{mr}\|_2^2+\alpha\sum_{i,j}\mathrm{KL}(\rho\,\|\,z_{ij})+\beta\|W\|_2^2
wherein
\mathrm{KL}(\rho\,\|\,z_{ij})=\rho\log\frac{\rho}{z_{ij}}+(1-\rho)\log\frac{1-\rho}{1-z_{ij}}
and: x_{ct} denotes the first-modality image to be fused; x_{mr} denotes the second-modality image to be fused; \hat{x}_{ct} denotes the reconstructed first-modality image; \hat{x}_{mr} denotes the reconstructed second-modality image; \|\hat{x}_{ct}-x_{ct}\|_2^2 denotes the Euclidean distance between the reconstructed first-modality image and the first-modality image to be fused; \|\hat{x}_{mr}-x_{mr}\|_2^2 denotes the Euclidean distance between the reconstructed second-modality image and the second-modality image to be fused; \sum_{i,j}\mathrm{KL}(\rho\,\|\,z_{ij}) denotes the sparse penalty term, characterizing the KL divergence between the fused image and a constant image; z_{ij} denotes the pixel value at coordinate (i, j) of the fused image; \rho denotes a constant; \|W\|_2^2 denotes the L2 regularization term over the network weights W; \alpha denotes the weight of the sparse penalty term; and \beta denotes the weight of the L2 regularization term.
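The terms of this loss can be computed directly. A numpy sketch under our reading of the patent's loss; the function names and the default values of ρ, α, and β are illustrative, not taken from the patent:

```python
import numpy as np

def kl_sparse(rho, z):
    """KL divergence between a constant rho and each fused pixel z_ij,
    summed over the image -- the sparse penalty term."""
    z = np.clip(z, 1e-7, 1 - 1e-7)        # keep the logs finite
    return np.sum(rho * np.log(rho / z)
                  + (1 - rho) * np.log((1 - rho) / (1 - z)))

def fusion_loss(x_ct, x_mr, x_ct_hat, x_mr_hat, z, w,
                rho=0.05, alpha=1e-3, beta=1e-4):
    """Reconstruction errors + alpha * sparse penalty + beta * L2 term."""
    recon = np.sum((x_ct_hat - x_ct) ** 2) + np.sum((x_mr_hat - x_mr) ** 2)
    return recon + alpha * kl_sparse(rho, z) + beta * np.sum(w ** 2)

# Perfect reconstruction, fused pixels exactly at rho, zero weights -> loss 0.
x = np.ones((4, 4))
z = np.full((4, 4), 0.05)
print(fusion_loss(x, x, x, x, z, np.zeros(3)))  # 0.0
```

The clipping inside `kl_sparse` is a numerical safeguard for pixels at exactly 0 or 1, not part of the loss definition.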
Because the first-modality image and the second-modality image are of different modalities, the meanings represented by their pixel values differ. To fuse images of different modalities on the basis of semantic information, the medical image fusion method provided by the embodiments of the present invention fuses the first-modality and second-modality images with the image fusion network model specifically through the following steps: extract the first semantic information of the first-modality image from its pixel information, and the second semantic information of the second-modality image from its pixel information; map the first semantic information and the second semantic information to a target image space; and fuse the first semantic information and the second semantic information in the target image space to obtain the fused image of the first-modality and second-modality images.
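These three stages can be wired together as a pipeline. In the sketch below every function is a hypothetical stand-in (in the embodiments, each stage is realized by the trained network rather than by these toy operations):

```python
import numpy as np

def extract_semantics(x, gain):          # stand-in per-modality extractor
    return gain * x

def to_target_space(s):                  # stand-in mapping into "modality 3"
    return (s - s.min()) / (s.max() - s.min() + 1e-9)

def fuse_in_target_space(s1, s2):        # stand-in fusion in the shared space
    return 0.5 * (s1 + s2)

x_ct = np.linspace(0, 1, 16).reshape(4, 4)   # toy first-modality image
x_mr = np.linspace(1, 0, 16).reshape(4, 4)   # toy second-modality image
z = fuse_in_target_space(to_target_space(extract_semantics(x_ct, 2.0)),
                         to_target_space(extract_semantics(x_mr, 0.5)))
print(z.shape)   # fused image in the target space, same size as the inputs
```

The point of the structure, as in the method above, is that fusion happens only after both modalities have been mapped into one shared space where pixel values carry a common meaning.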
Taking the first-modality image to be a CT image and the second-modality image to be an MR image, Fig. 3 is a schematic diagram of the semantic-information-based image fusion process provided in an embodiment of the present invention. As shown in Fig. 3, the first semantic information of the CT image is extracted in modality 1 and the second semantic information of the MR image in modality 2; an image space of a modality 3 is constructed, and the first and second semantic information are fused in modality 3 to obtain the fused image, in which every pixel value carries both the pixel-value meaning of modality 1 and the pixel-value meaning of modality 2.
As a preferred embodiment, the embodiments of the present invention build the semantic-information-based multi-modality image fusion model with the U-Net network structure, yielding the W-Net-based multi-modality image fusion model shown in Fig. 4. In Fig. 4, x = [x_ct, x_mr] denotes the two input images to be fused, where x_ct denotes the first-modality image (for example, a CT image) and x_mr denotes the second-modality image (for example, an MR image); E_θ denotes the encoding network model; D_φ denotes the decoding network model; Z denotes the fused image; \hat{x} = [\hat{x}_{ct}, \hat{x}_{mr}] denotes the two images the decoding network model D_φ reconstructs from the fused image Z, where \hat{x}_{ct} is the reconstructed first-modality image (for example, a CT image) and \hat{x}_{mr} is the reconstructed second-modality image (for example, an MR image); and L(θ, φ) denotes the loss function of the entire W-Net network model.
In the multi-modality image fusion model provided by the embodiments of the present invention, the encoding network model E_θ is a U-Net with multiple input channels and a single output channel (Fig. 5a shows two input channels), which fuses the first-modality image x_ct and the second-modality image x_mr into the fused image Z; the decoding network model D_φ is a U-Net with a single input channel and multiple output channels (Fig. 5b shows two output channels), which reconstructs the first-modality image \hat{x}_{ct} and the second-modality image \hat{x}_{mr} from the fused image.
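The wiring of Fig. 4 can be summarized at the level of tensor shapes. The two functions below are placeholders for the actual U-Net bodies; only the channel counts reflect the description above:

```python
import numpy as np

def encoder(x):            # E_theta: (2, H, W) -> fused image z: (H, W)
    assert x.shape[0] == 2                  # two input channels: [x_ct, x_mr]
    return x.mean(axis=0)                   # placeholder for the U-Net body

def decoder(z):            # D_phi: (H, W) -> reconstructions: (2, H, W)
    return np.stack([z, z])                 # placeholder for the U-Net body

x = np.zeros((2, 64, 64))                   # x = [x_ct, x_mr]
z = encoder(x)                              # single-channel fused image
x_hat = decoder(z)                          # [x_ct_hat, x_mr_hat]
print(z.shape, x_hat.shape)                 # (64, 64) (2, 64, 64)
```

The shape asymmetry is the whole point of the W shape: many channels in and one out for E_θ, one in and many out for D_φ, so the fused image is forced to retain enough of each modality for the decoder to reconstruct both.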
Suppose the images to be fused are the CT image shown in Fig. 6a and the MR image shown in Fig. 6b. Fig. 6c shows the multi-modality image obtained by fusing Fig. 6a and Fig. 6b with the CNN-LP network structure; Fig. 6d, with the NSCT-PCDC network structure; Fig. 6e, with the NSST-PAPCNN network structure; Fig. 6f, with the GF network structure; Fig. 6g, with the NSCT-RPCNN network structure; and Fig. 6h, with the W-Net network structure provided by the embodiments of the present invention. Comparing Figs. 6c through 6h shows that the multi-modality fused image obtained by the W-Net-based image fusion method of the embodiments of the present invention clearly reflects the information of each single-modality image.
Table 1 compares the evaluation indices obtained by fusing the images with the fusion methods of the different network structures. Q_MI denotes the mutual-information index, an entropy-based measure of how much source-image information the fused image retains; Q^{AB/F} denotes the edge-information index, a gradient-based measure of how well the fused image preserves edge information; SSIM denotes the structural-similarity index, measuring the structural similarity between the fused image and the originals; Q_D is a human-visual-system (HVS) index using the Daly filter, measuring the visual difference between the fused image and the source images; and SemanticLoss denotes the semantic-loss index, which measures the reconstruction error of the fused image so as to indicate semantic conflict in it. Higher is better for the Q_MI, Q^{AB/F}, CT, and MR-T2 index values; lower is better for the Q_D and SemanticLoss index values.
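As an illustration of the entropy-based quantity behind Q_MI, the sketch below estimates the mutual information between two images from a joint histogram. This is the underlying measure, not the exact Q_MI formula, and the bin count is an arbitrary choice:

```python
import numpy as np

def mutual_info(a, b, bins=32):
    """Histogram estimate of the mutual information (in nats) between
    two equal-sized images."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()                          # joint distribution
    px = p.sum(axis=1, keepdims=True)        # marginal of a
    py = p.sum(axis=0, keepdims=True)        # marginal of b
    nz = p > 0                               # avoid log(0)
    return np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz]))

rng = np.random.default_rng(1)
img = rng.random((64, 64))
mi_self = mutual_info(img, img)              # maximal shared information
mi_indep = mutual_info(img, rng.random((64, 64)))
print(mi_self > mi_indep)   # True: an image shares more information with itself
```

A Q_MI-style index would apply this quantity twice, fused-vs-CT and fused-vs-MR, and normalize by the image entropies.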
Table 1. Evaluation indices of the different image fusion methods
As can be seen from Table 1, because the medical image fusion method provided by the embodiments of the present invention takes semantic loss into account, it achieves the best fusion result on the semantic-loss index while remaining close to conventional image fusion methods on the visual evaluation indices.
An embodiment of the present invention also provides a medical image fusion device, described in the following embodiments. Because the principle by which the device solves the problem is similar to that of the medical image fusion method, its implementation can refer to the implementation of the method; repeated material is not restated.
Fig. 7 is a schematic diagram of the medical image fusion device provided in an embodiment of the present invention. As shown in Fig. 7, the device comprises:
an image information obtaining unit 71 for obtaining the pixel information of the first-modality and second-modality images to be fused, the two images being of different modalities; and
an image information processing unit 72 for inputting the pixel information of the first-modality and second-modality images into the pre-trained image fusion network model and outputting their fused image, wherein the image fusion network model is a model that fuses images of different modalities on the basis of semantic information, and the semantic information characterizes the meaning of the pixel values in each modality.
As can be seen from the above, with the medical image fusion device provided by the embodiments of the present invention, an image fusion network model that fuses images of different modalities on the basis of semantic information is first obtained through machine-learning training. To fuse images of different modalities, the image information obtaining unit 71 obtains the pixel information of the images to be fused, and the image information processing unit 72 inputs that pixel information into the pre-trained image fusion network model to obtain the fused image. The fused multi-modality image is thus easy to read, genuinely providing clinicians with the medical images of different modalities needed to assist treatment.
As an optional embodiment, in the medical image fusion device provided by the embodiments of the present invention, the image information processing unit 72 may specifically comprise: a semantic extraction module 721 for extracting the first semantic information of the first-modality image from its pixel information and the second semantic information of the second-modality image from its pixel information; an image space mapping module 722 for mapping the first semantic information and the second semantic information to a target image space; and an image fusion module 723 for fusing the first semantic information and the second semantic information in the target image space to obtain the fused image of the first-modality and second-modality images.
In an optional embodiment, the medical image fusion device provided by the embodiments of the present invention may further comprise: a model training unit 73 for training the image fusion network model with training data; and a model adjustment unit 74 for adjusting, during training, the parameters of the image fusion network model until its loss function meets a preset convergence condition. The image fusion network model comprises an encoding network model and a decoding network model, wherein the input of the encoding network model is the first-modality and second-modality images to be fused and its output is their fused image, and the input of the decoding network model is the fused image and its output is the first-modality and second-modality images reconstructed from it. The loss function includes at least a reconstruction error, determined by the error between the reconstructed first-modality image and the first-modality image to be fused and the error between the reconstructed second-modality image and the second-modality image to be fused.
Optionally, the loss function may further include a sparse penalty term and an L2 regularization term. As an optional implementation, the loss function may be expressed as:

J = ‖x̂_ct - x_ct‖² + ‖x̂_mr - x_mr‖² + α·Σ_(i,j) KL(ρ‖z_ij) + β·‖W‖²

wherein KL(ρ‖z_ij) = ρ·log(ρ/z_ij) + (1 - ρ)·log((1 - ρ)/(1 - z_ij)).

Here, x_ct denotes the first-modality image to be fused; x_mr denotes the second-modality image to be fused; x̂_ct denotes the reconstructed first-modality image; x̂_mr denotes the reconstructed second-modality image; ‖x̂_ct - x_ct‖² denotes the Euclidean distance between the reconstructed first-modality image and the first-modality image to be fused; ‖x̂_mr - x_mr‖² denotes the Euclidean distance between the reconstructed second-modality image and the second-modality image to be fused; Σ_(i,j) KL(ρ‖z_ij) denotes the sparse penalty term, which characterizes the KL divergence between the fused image and the constant ρ; z_ij denotes the pixel value at coordinate (i, j) of the fused image; ρ denotes a constant; ‖W‖² denotes the L2 regularization term; α denotes the weight of the sparse penalty term; and β denotes the weight of the L2 regularization term.
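Given the terms defined above, the full loss can be computed as sketched below. The helper names and the example values of α, β, and ρ are hypothetical, and the KL form used is the standard Bernoulli sparsity penalty implied by the described terms.

```python
import numpy as np

def kl_divergence(rho, z):
    # KL divergence between a Bernoulli with mean rho and one with mean z,
    # applied elementwise to the fused-image pixel values as a sparsity
    # penalty.
    return rho * np.log(rho / z) + (1 - rho) * np.log((1 - rho) / (1 - z))

def fusion_loss(x_ct, x_mr, x_ct_hat, x_mr_hat, z, weights,
                rho=0.05, alpha=1e-3, beta=1e-4):
    # Reconstruction error: squared Euclidean distances between each
    # reconstructed modality image and its to-be-fused input.
    recon = np.sum((x_ct_hat - x_ct) ** 2) + np.sum((x_mr_hat - x_mr) ** 2)
    # Sparse penalty: KL divergence between the constant rho and each
    # fused-image pixel value z_ij.
    sparse = np.sum(kl_divergence(rho, z))
    # L2 regularization over the model weights.
    l2 = sum(np.sum(w ** 2) for w in weights)
    return recon + alpha * sparse + beta * l2

rng = np.random.default_rng(0)
x_ct = rng.random((4, 4))
x_mr = rng.random((4, 4))
z = np.clip(rng.random((4, 4)), 0.01, 0.99)   # fused-image pixel values
weights = [rng.normal(size=(4, 4))]

# Perfect reconstruction: only the penalty terms contribute.
loss_perfect = fusion_loss(x_ct, x_mr, x_ct, x_mr, z, weights)
# Imperfect reconstruction adds a positive reconstruction error.
loss_noisy = fusion_loss(x_ct, x_mr, x_ct + 0.1, x_mr, z, weights)
```

Since the KL term vanishes when every z_ij equals ρ, and both penalty terms are non-negative, the loss is minimized by sparse fused activations and accurate reconstructions.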
An embodiment of the present invention further provides a computer device, to solve the technical problem in the prior art that semantic information is missing or confused when medical images are fused. The computer device includes a memory, a processor, and a computer program stored in the memory and executable on the processor; the processor, when executing the computer program, implements any one of the optional or preferred medical image fusion methods described above.
An embodiment of the present invention further provides a computer-readable storage medium, to solve the technical problem in the prior art that semantic information is missing or confused when medical images are fused. The computer-readable storage medium stores a computer program for executing any one of the optional or preferred medical image fusion methods described above.
In conclusion the embodiment of the invention provides a kind of Medical image fusion scheme based on image, semantic, to medicine
Carry out semantics extraction, semantic transforms and the semantic fusion of image can obtain the fusion figure for being easy to read of semantic congruence
Picture, so that Medical Image Fusion really can provide the doctor from different modalities for clinician in blending image
Image information is learned with assisting in diagnosis and treatment.
Those skilled in the art will appreciate that embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical memory) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus, which implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, causing a series of operational steps to be performed on the computer or other programmable device to produce a computer-implemented process, such that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
The specific embodiments described above further describe the objectives, technical solutions, and beneficial effects of the present invention in detail. It should be understood that the above are merely specific embodiments of the present invention and are not intended to limit the protection scope of the present invention; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.
Claims (10)
1. A medical image fusion method, comprising:
obtaining pixel information of a first-modality image and a second-modality image to be fused, wherein the first-modality image and the second-modality image are images of different modalities;
inputting the pixel information of the first-modality image and the second-modality image into a pre-trained image fusion network model, and outputting a fused image of the first-modality image and the second-modality image, wherein the image fusion network model is a model that fuses images of different modalities based on semantic information, and the semantic information characterizes the meaning of pixel values in images of different modalities.
2. The method according to claim 1, wherein inputting the pixel information of the first-modality image and the second-modality image into the pre-trained image fusion network model and outputting the fused image of the first-modality image and the second-modality image comprises:
extracting first semantic information of the first-modality image according to the pixel information of the first-modality image, and extracting second semantic information of the second-modality image according to the pixel information of the second-modality image;
mapping the first semantic information and the second semantic information to a target image space;
fusing the first semantic information and the second semantic information based on the target image space, to obtain the fused image of the first-modality image and the second-modality image.
3. The method according to claim 1, further comprising:
training the image fusion network model with training data;
adjusting, during training, the parameters of the image fusion network model until the loss function of the image fusion network model satisfies a preset convergence condition;
wherein the image fusion network model comprises an encoding network model and a decoding network model; the input data of the encoding network model are the first-modality image and the second-modality image to be fused, and the output data of the encoding network model is the fused image of the first-modality image and the second-modality image; the input data of the decoding network model is the fused image of the first-modality image and the second-modality image, and the output data of the decoding network model are the first-modality image and the second-modality image reconstructed from the fused image;
the loss function comprises at least the reconstruction errors of the first-modality image and the second-modality image.
4. The method according to claim 3, wherein the loss function further comprises a sparse penalty term and an L2 regularization term, and the loss function is expressed as:

J = ‖x̂_ct - x_ct‖² + ‖x̂_mr - x_mr‖² + α·Σ_(i,j) KL(ρ‖z_ij) + β·‖W‖²

wherein KL(ρ‖z_ij) = ρ·log(ρ/z_ij) + (1 - ρ)·log((1 - ρ)/(1 - z_ij));

wherein x_ct denotes the first-modality image to be fused; x_mr denotes the second-modality image to be fused; x̂_ct denotes the reconstructed first-modality image; x̂_mr denotes the reconstructed second-modality image; ‖x̂_ct - x_ct‖² denotes the Euclidean distance between the reconstructed first-modality image and the first-modality image to be fused; ‖x̂_mr - x_mr‖² denotes the Euclidean distance between the reconstructed second-modality image and the second-modality image to be fused; Σ_(i,j) KL(ρ‖z_ij) denotes the sparse penalty term, which characterizes the KL divergence between the fused image and the constant ρ; z_ij denotes the pixel value at coordinate (i, j) of the fused image; ρ denotes a constant; ‖W‖² denotes the L2 regularization term; α denotes the weight of the sparse penalty term; and β denotes the weight of the L2 regularization term.
5. A medical image fusion apparatus, comprising:
an image information obtaining unit, configured to obtain pixel information of a first-modality image and a second-modality image to be fused, wherein the first-modality image and the second-modality image are images of different modalities;
an image information processing unit, configured to input the pixel information of the first-modality image and the second-modality image into a pre-trained image fusion network model and output a fused image of the first-modality image and the second-modality image, wherein the image fusion network model is a model that fuses images of different modalities based on semantic information, and the semantic information characterizes the meaning of pixel values in images of different modalities.
6. The apparatus according to claim 5, wherein the image information processing unit comprises:
a semantic extraction module, configured to extract first semantic information of the first-modality image according to the pixel information of the first-modality image, and extract second semantic information of the second-modality image according to the pixel information of the second-modality image;
an image space mapping module, configured to map the first semantic information and the second semantic information to a target image space;
an image fusion module, configured to fuse the first semantic information and the second semantic information based on the target image space, to obtain the fused image of the first-modality image and the second-modality image.
7. The apparatus according to claim 5, further comprising:
a model training unit, configured to train the image fusion network model with training data;
a model adjustment unit, configured to adjust, during training, the parameters of the image fusion network model until the loss function of the image fusion network model satisfies a preset convergence condition;
wherein the image fusion network model comprises an encoding network model and a decoding network model; the input data of the encoding network model are the first-modality image and the second-modality image to be fused, and the output data of the encoding network model is the fused image of the first-modality image and the second-modality image; the input data of the decoding network model is the fused image of the first-modality image and the second-modality image, and the output data of the decoding network model are the first-modality image and the second-modality image reconstructed from the fused image;
the loss function comprises at least a reconstruction error, and the reconstruction error is determined by the error between the reconstructed first-modality image and the first-modality image to be fused, together with the error between the reconstructed second-modality image and the second-modality image to be fused.
8. The apparatus according to claim 7, wherein the loss function further comprises a sparse penalty term and an L2 regularization term, and the loss function is expressed as:

J = ‖x̂_ct - x_ct‖² + ‖x̂_mr - x_mr‖² + α·Σ_(i,j) KL(ρ‖z_ij) + β·‖W‖²

wherein KL(ρ‖z_ij) = ρ·log(ρ/z_ij) + (1 - ρ)·log((1 - ρ)/(1 - z_ij));

wherein x_ct denotes the first-modality image to be fused; x_mr denotes the second-modality image to be fused; x̂_ct denotes the reconstructed first-modality image; x̂_mr denotes the reconstructed second-modality image; ‖x̂_ct - x_ct‖² denotes the Euclidean distance between the reconstructed first-modality image and the first-modality image to be fused; ‖x̂_mr - x_mr‖² denotes the Euclidean distance between the reconstructed second-modality image and the second-modality image to be fused; Σ_(i,j) KL(ρ‖z_ij) denotes the sparse penalty term, which characterizes the KL divergence between the fused image and the constant ρ; z_ij denotes the pixel value at coordinate (i, j) of the fused image; ρ denotes a constant; ‖W‖² denotes the L2 regularization term, which characterizes the complexity of the model; α denotes the weight of the sparse penalty term; and β denotes the weight of the L2 regularization term.
9. A computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the medical image fusion method according to any one of claims 1 to 4.
10. A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program for executing the medical image fusion method according to any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910430661.0A CN110211079B (en) | 2019-05-22 | 2019-05-22 | Medical image fusion method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110211079A true CN110211079A (en) | 2019-09-06 |
CN110211079B CN110211079B (en) | 2021-07-13 |
Family
ID=67788139
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910430661.0A Expired - Fee Related CN110211079B (en) | 2019-05-22 | 2019-05-22 | Medical image fusion method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110211079B (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111832656A (en) * | 2020-07-17 | 2020-10-27 | 复旦大学 | Medical human-computer interaction assistance system and computer-readable storage medium containing the same |
CN113255756A (en) * | 2021-05-20 | 2021-08-13 | 联仁健康医疗大数据科技股份有限公司 | Image fusion method and device, electronic equipment and storage medium |
CN113888663A (en) * | 2021-10-15 | 2022-01-04 | 推想医疗科技股份有限公司 | Reconstruction model training method, anomaly detection method, device, equipment and medium |
WO2022001237A1 (en) * | 2020-06-28 | 2022-01-06 | 广州柏视医疗科技有限公司 | Method and system for automatically recognizing image of primary tumor of nasopharyngeal carcinoma |
CN116682565A (en) * | 2023-07-28 | 2023-09-01 | 济南蓝博电子技术有限公司 | Digital medical information on-line monitoring method, terminal and medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106203488A (en) * | 2016-07-01 | 2016-12-07 | 福州大学 | A kind of galactophore image Feature fusion based on limited Boltzmann machine |
CN108961196A (en) * | 2018-06-21 | 2018-12-07 | 华中科技大学 | A kind of 3D based on figure watches the conspicuousness fusion method of point prediction attentively |
CN108986115A (en) * | 2018-07-12 | 2018-12-11 | 佛山生物图腾科技有限公司 | Medical image cutting method, device and intelligent terminal |
CN109360633A (en) * | 2018-09-04 | 2019-02-19 | 北京市商汤科技开发有限公司 | Medical imaging processing method and processing device, processing equipment and storage medium |
CN109544554A (en) * | 2018-10-18 | 2019-03-29 | 中国科学院空间应用工程与技术中心 | A kind of segmentation of plant image and blade framework extracting method and system |
Non-Patent Citations (1)
Title |
---|
叶德荣 (Ye Derong): "Research on CT and MRI brain image fusion technology", Information of Medical Equipment (《医疗设备信息》) * |
Also Published As
Publication number | Publication date |
---|---|
CN110211079B (en) | 2021-07-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110211079A (en) | The fusion method and device of medical image | |
Beers et al. | High-resolution medical image synthesis using progressively grown generative adversarial networks | |
KR102507711B1 (en) | Medical image processing apparatus, medical image processing method, and computer readable medium | |
Wolterink et al. | Deep MR to CT synthesis using unpaired data | |
US9478009B2 (en) | Method and apparatus for acquiring overlapped medical image slices | |
AU2006205025B2 (en) | Method and system for displaying blood flow | |
US20200037962A1 (en) | Plane selection using localizer images | |
CN110246137A (en) | A kind of imaging method, device and storage medium | |
KR20150090117A (en) | Method and system for displaying to a user a transition between a first rendered projection and a second rendered projection | |
Eslami et al. | Automatic vocal tract landmark localization from midsagittal MRI data | |
Hans et al. | Comparison of three-dimensional visualization techniques for depicting the scala vestibuli and scala tympani of the cochlea by using high-resolution MR imaging | |
JP2007275595A (en) | View creating method for reproducing tomographic image data | |
Lasso et al. | SlicerHeart: An open-source computing platform for cardiac image analysis and modeling | |
US10964074B2 (en) | System for harmonizing medical image presentation | |
Wang et al. | A fast 3D brain extraction and visualization framework using active contour and modern OpenGL pipelines | |
CN106204623B (en) | More contrast image synchronizations are shown and the method and device of positioning and demarcating | |
Hewer et al. | A multilinear tongue model derived from speech related MRI data of the human vocal tract | |
Graf et al. | Denoising diffusion-based MRI to CT image translation enables automated spinal segmentation | |
CN115761134A (en) | Twin model generation method, system, nuclear magnetic resonance device, and medium | |
Zhou et al. | Clinical validation of an AI-based motion correction reconstruction algorithm in cerebral CT | |
Gao et al. | A dual adversarial calibration framework for automatic fetal brain biometry | |
Rohleder et al. | Cross-domain metal segmentation for CBCT metal artifact reduction | |
US20230410413A1 (en) | Systems and methods for volume rendering | |
US20240053421A1 (en) | Systems and methods for magnetic resonance imaging | |
Li et al. | Knee orientation detection in MR scout scans using 3D U-net |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20210713 |