CN118397059B - Model training method and registration method for multi-mode image enhancement and registration - Google Patents

Model training method and registration method for multi-mode image enhancement and registration

Info

Publication number
CN118397059B
CN118397059B
Authority
CN
China
Prior art keywords
image
processed
matrix
feature matrix
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410840678.4A
Other languages
Chinese (zh)
Other versions
CN118397059A (en)
Inventor
商睿哲
董家铭
王新怀
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN202410840678.4A priority Critical patent/CN118397059B/en
Publication of CN118397059A publication Critical patent/CN118397059A/en
Application granted granted Critical
Publication of CN118397059B publication Critical patent/CN118397059B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a model training method and a registration method for multi-mode image enhancement and registration. The training method comprises: acquiring an MR image sequence to be processed and a CT image sequence to be processed, obtaining M groups of matching image pairs according to the mutual information between the MR images to be processed and the CT images to be processed, and selecting several groups of matching image pairs as a training set; selecting two MR images to be processed and one CT image to be processed from the training set, and inputting the two MR images to be processed, the CT image to be processed and a Gaussian noise image together into a target image sequence enhancement model to be trained, so as to train the model and obtain a trained target image sequence enhancement model. The trained target image sequence enhancement model obtained by the method can generate MR images matched with CT images, which helps to improve the accuracy of three-dimensional image reconstruction.

Description

Model training method and registration method for multi-mode image enhancement and registration
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a model training method and a registration method for multi-mode image enhancement and registration.
Background
Computer-aided and medical imaging technologies have become an important component of limb-salvage surgery for bone tumors. They can help doctors complete tasks such as surgical route planning, personalized scheme formulation and preoperative simulation, and significantly improve the success rate of limb salvage. At present, images are generally analyzed by manual or semi-automatic tumor marking, which consumes a great deal of a surgeon's time and energy, so tumor segmentation research using the deep learning techniques that have developed rapidly in recent years is imperative.
In this process, there is a contradiction between the large amount of training data required by deep learning and the limited amount of available medical images. Especially when multi-mode images are used for analysis, the mismatch in data amount between different modalities further reduces the amount of usable data, which increases the difficulty of subsequent operations and affects the tumor segmentation effect. At the same time, image data of different modalities also need to be registered in order to provide effective information.
Therefore, how to synthesize reliable MR (Magnetic Resonance) images that correspond to a CT (Computed Tomography) image sequence is a problem in urgent need of a solution.
Disclosure of Invention
The invention aims to overcome the above defects in the prior art and provides a model training method and a registration method for multi-mode image enhancement and registration. The technical problem to be solved by the invention is achieved through the following technical solutions:
the invention provides a model training method for multi-mode image enhancement and registration, which comprises the following steps:
Acquiring an MR image sequence to be processed and a CT image sequence to be processed, wherein the MR image sequence to be processed comprises M MR images to be processed which are arranged in sequence, the CT image sequence to be processed comprises N CT images to be processed which are arranged in sequence, and M and N are integers larger than 0;
Obtaining M groups of matched image pairs according to mutual information of the MR image to be processed and the CT image to be processed, and selecting a plurality of groups of matched image pairs from the M groups of matched image pairs as a training set, wherein each group of matched image pairs comprises one MR image to be processed and one CT image to be processed which are matched with each other;
Selecting two MR images to be processed and one CT image to be processed from the training set, and inputting the two MR images to be processed, the CT image to be processed and a Gaussian noise image together into a target image sequence enhancement model to be trained, so as to train the target image sequence enhancement model to be trained and obtain a trained target image sequence enhancement model, wherein the trained target image sequence enhancement model is used for matching a CT image to be registered to obtain a registered MR image; wherein the two MR images to be processed selected from the training set are not matched with the selected CT image to be processed; and the target image sequence enhancement model is a model based on a generative adversarial network.
In one embodiment of the invention, acquiring an MR image sequence to be processed and a CT image sequence to be processed comprises:
acquiring an initial MR image sequence and an initial CT image sequence, wherein the initial MR image sequence comprises M initial MR images, and the initial CT image sequence comprises N initial CT images;
Converting the brightness values of the initial MR image and the initial CT image into gray values to obtain an MR gray image and a CT gray image;
Respectively reassigning the gray values of the MR gray scale image and the CT gray scale image by adopting a histogram equalization algorithm, so that the gray values of the MR gray scale image and the CT gray scale image are uniformly distributed within a preset range, and an MR uniform distribution image and a CT uniform distribution image are obtained;
And respectively homogenizing the gray values of the MR uniform distribution image and the CT uniform distribution image to the interval of [0, 255] to obtain the MR image to be processed and the CT image to be processed, wherein all the MR images to be processed form the MR image sequence to be processed, and all the CT images to be processed form the CT image sequence to be processed.
In one embodiment of the present invention, obtaining M sets of matching image pairs according to mutual information of the MR image to be processed and the CT image to be processed includes:
Acquiring a first MR image to be processed in the MR image sequence to be processed;
Selecting a CT image to be processed with the maximum mutual information between the CT image to be processed and the first MR image to be processed from the CT image sequence to be processed based on the mutual information between the first MR image to be processed and the CT image to be processed, and forming a first group of matched image pairs;
And based on a preset step length, selecting matched CT images to be processed from the CT image sequence to be processed for the second to the M-th MR images to be processed, so as to obtain M-1 groups of matching image pairs.
In one embodiment of the present invention, based on a preset step length, selecting matched CT images to be processed from the CT image sequence to be processed for the second to the M-th MR images to be processed, to obtain M-1 groups of matching image pairs, includes:
For the (m+1)-th MR image to be processed, judging whether n+lambda is an integer; if so, selecting the (n+lambda)-th CT image to be processed from the CT image sequence to be processed as the CT image to be matched; if not, selecting, from the two CT images to be processed adjacent to the (n+lambda)-th position, the one with larger mutual information with the (m+1)-th MR image to be processed as the CT image to be matched; wherein the m-th MR image to be processed and the n-th CT image to be processed form the m-th group of matching image pairs, lambda is the preset step length, lambda=Sm/Sc, Sm is the inter-layer interval of the MR image sequence to be processed, Sc is the inter-layer interval of the CT image sequence to be processed, m is more than or equal to 1 and less than M, and n is more than or equal to 1 and less than or equal to N;
Judging whether mutual information between the CT image to be matched and the (m+1) th MR image to be processed is larger than or equal to a preset threshold value, if so, forming the CT image to be matched and the (m+1) th MR image to be processed into an (m+1) th group matching image pair, and if not, selecting the CT image to be processed with the largest mutual information between the CT image to be matched and the (m+1) th MR image to be processed from the CT image sequence to be processed, and forming the (m+1) th group matching image pair.
In one embodiment of the present invention, the target image sequence enhancement model includes a generator and a discriminator, the generator including an encoder, a feature extraction unit, and a decoder;
Selecting two MR images to be processed and one CT image to be processed in the training set, inputting the two selected MR images to be processed, one CT image to be processed and one Gaussian noise image into a target image sequence enhancement model to be trained together so as to train the target image sequence enhancement model to be trained, and obtaining a trained target image sequence enhancement model, wherein the method comprises the following steps:
Selecting two MR images to be processed and one CT image to be processed from the training set, and inputting the two MR images to be processed, the CT image to be processed and a Gaussian noise image to the encoder, so as to obtain a first feature matrix corresponding to the Gaussian noise image, a second feature matrix and a third feature matrix respectively corresponding to the two MR images to be processed, and a fourth feature matrix corresponding to the CT image to be processed;
Inputting the first feature matrix, the second feature matrix, the third feature matrix and the fourth feature matrix to the feature extraction unit to obtain a feature matrix to be decoded;
inputting the feature matrix to be decoded to the decoder to obtain a first MR enhanced image;
Based on the dynamic differential loss function constructed by the first MR enhanced image and the MR image to be processed to be registered, adjusting parameters of the target image sequence enhanced model to be trained in a counter-propagation mode to obtain a primarily trained target image sequence enhanced model, and obtaining a second MR enhanced image output by the primarily trained target image sequence enhanced model; the to-be-registered to-be-processed MR images are selected from the training set and are positioned between two to-be-processed MR images input to the to-be-trained target image sequence enhancement model, and the to-be-processed CT images input to the to-be-trained target image sequence enhancement model and the to-be-registered to-be-processed MR images are positioned in the same group of matched image pairs;
And inputting the second MR enhanced image to the discriminator so as to obtain the trained target image sequence enhanced model according to the discrimination result of the discriminator.
In one embodiment of the invention, the feature extraction unit comprises a graph attention module, a channel attention module, and a residual structure;
Inputting the first feature matrix, the second feature matrix, the third feature matrix and the fourth feature matrix to the feature extraction unit to obtain a feature matrix to be decoded, including:
Inputting the first feature matrix, the second feature matrix, the third feature matrix and the fourth feature matrix into the graph attention module, wherein the graph attention module obtains, by capturing the similarity among features, a fifth feature matrix corresponding to the Gaussian noise image, a sixth feature matrix and a seventh feature matrix respectively corresponding to the two MR images to be processed, and an eighth feature matrix corresponding to the CT image to be processed;
Inputting the fifth feature matrix, the sixth feature matrix, the seventh feature matrix and the eighth feature matrix into the channel attention module, wherein the channel attention module obtains a first weight of the fifth feature matrix, a second weight of the sixth feature matrix, a third weight of the seventh feature matrix and a fourth weight of the eighth feature matrix through convolution operation, multiplies the first weight by the fifth feature matrix, the second weight by the sixth feature matrix, the third weight by the seventh feature matrix, the fourth weight by the eighth feature matrix, and adds all multiplication results to obtain a ninth feature matrix;
And inputting the ninth feature matrix into the residual structure, and obtaining the feature matrix to be decoded after convolution operation.
In one embodiment of the present invention, the first feature matrix, the second feature matrix, the third feature matrix and the fourth feature matrix are input to the graph attention module, and the graph attention module obtains, by capturing the similarity among features, a fifth feature matrix corresponding to the Gaussian noise image, a sixth feature matrix and a seventh feature matrix respectively corresponding to the two MR images to be processed, and an eighth feature matrix corresponding to the CT image to be processed, which includes:
step 3.211, inputting the first feature matrix, the second feature matrix, the third feature matrix and the fourth feature matrix to the graph attention module, and then dividing the first feature matrix into a plurality of first matrix blocks;
Step 3.212, sequentially selecting one first matrix block, and respectively selecting a second matrix block, a third matrix block and a fourth matrix block which are the same as the first matrix block in position from the second feature matrix, the third feature matrix and the fourth feature matrix;
Step 3.213, combining the first matrix block, the second matrix block, the third matrix block and the fourth matrix block into a combined matrix;
step 3.214, selecting a plurality of pixel points adjacent to the first matrix block to obtain a pixel matrix;
Step 3.215, combining the combination matrix and the pixel matrix into a splicing matrix;
Step 3.216, respectively performing weighted summation processing on the first matrix block, the second matrix block, the third matrix block and the fourth matrix block in the spliced matrix according to the connection relationship among the first matrix block, the second matrix block, the third matrix block and the fourth matrix block, so as to respectively and correspondingly obtain a fifth matrix block, a sixth matrix block, a seventh matrix block and an eighth matrix block;
Step 3.217, replacing the first matrix block of the first feature matrix, the second matrix block of the second feature matrix, the third matrix block of the third feature matrix, and the fourth matrix block of the fourth feature matrix with the fifth matrix block, the sixth matrix block, the seventh matrix block, and the eighth matrix block;
and step 3.218, repeating the steps 3.212 to 3.217 until all matrix blocks in the first feature matrix, the second feature matrix, the third feature matrix and the fourth feature matrix are replaced, and correspondingly obtaining the fifth feature matrix, the sixth feature matrix, the seventh feature matrix and the eighth feature matrix.
In one embodiment of the invention, the dynamic differential loss function is constructed from the following quantities: the MR image to be processed to be registered; the first MR enhanced image; the generator's mapping from the input images to the first MR enhanced image; the expectation taken over the pixels of the first MR enhanced image under its probability distribution; the number of pixels in the MR image to be processed to be registered; the per-pixel difference between the generator's mapping result for the MR image to be processed to be registered and the first MR enhanced image; and the average value of all such per-pixel differences.
In one embodiment of the present invention, inputting the second MR enhanced image to the discriminator to obtain the trained target image sequence enhancement model according to the discrimination result of the discriminator includes:
Inputting the second MR enhanced image to the discriminator to obtain a discrimination value;
Judging the relation between the discrimination value and the discrimination threshold; if the discrimination value is larger than the discrimination threshold and the discrimination result is false, the discrimination threshold is updated according to the loss value obtained from the kurtosis loss function, and training of the primarily trained target image sequence enhancement model is continued until the discrimination value is smaller than or equal to the latest discrimination threshold, so as to obtain the trained target image sequence enhancement model; if the discrimination value is smaller than or equal to the discrimination threshold and the discrimination result is true, the primarily trained target image sequence enhancement model is taken as the trained target image sequence enhancement model.
The invention also provides a registration method of the multi-mode image, which comprises the following steps:
Acquiring CT images to be registered;
inputting the CT image to be registered into the trained target image sequence enhancement model according to any one of the above embodiments, and obtaining a registered MR image.
The invention has the beneficial effects that:
According to the method of the invention, an MR image sequence to be processed and a CT image sequence to be processed are first acquired, M groups of matching image pairs are obtained according to the mutual information between the MR images to be processed and the CT images to be processed, and several groups of matching image pairs are selected from the M groups as a training set, each group comprising one MR image to be processed and one CT image to be processed that match each other. Two MR images to be processed and one CT image to be processed are then selected from the training set, and the two selected MR images to be processed, the CT image to be processed and a Gaussian noise image are input together into a target image sequence enhancement model to be trained, so that the model is trained and a trained target image sequence enhancement model is obtained. The trained target image sequence enhancement model can match a CT image to be registered so as to obtain a registered MR image. Therefore, the trained target image sequence enhancement model obtained by the invention can generate MR images matched with CT images, which helps to improve the accuracy of three-dimensional image reconstruction and provides basic technical support for the field of medical image processing.
Drawings
FIG. 1 is a flow chart of a training method for a model for multi-modality image enhancement and registration according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of a target image sequence enhancement model according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to specific examples, but embodiments of the present invention are not limited thereto.
Example 1
Referring to fig. 1, fig. 1 is a flowchart of a training method for a model for multi-mode image enhancement and registration according to an embodiment of the present invention, where the training method for a model for multi-mode image enhancement and registration includes:
Step 1, acquiring an MR image sequence to be processed and a CT image sequence to be processed, wherein the MR image sequence to be processed comprises M MR images to be processed which are arranged in sequence, the CT image sequence to be processed comprises N CT images to be processed which are arranged in sequence, and M and N are integers which are larger than 0.
Here, the MR image sequence to be processed and the CT image sequence to be processed are both acquired at the same part of the same person. For example, M is 30 and N is 300.
Optionally, the M MR images to be processed and the N CT images to be processed are arranged in spatial order.
In an alternative embodiment, step 1 may specifically include:
Step 1.1, acquiring an initial MR image sequence and an initial CT image sequence, wherein the initial MR image sequence comprises M initial MR images, and the initial CT image sequence comprises N initial CT images.
Here, both the initial MR image sequence and the initial CT image sequence are acquired at the same location of the same person. For example, from the root of the thigh to the mid-calf.
Optionally, the M initial MR images and the N initial CT images are all arranged in spatial order from front to back.
And 1.2, converting brightness values of the initial MR image and the initial CT image into gray values to obtain an MR gray image and a CT gray image.
And 1.3, respectively reassigning the gray values of the MR gray scale image and the CT gray scale image by adopting a histogram equalization algorithm, so that the gray values of the MR gray scale image and the CT gray scale image are uniformly distributed within a preset range, and an MR uniformly distributed image and a CT uniformly distributed image are obtained.
Specifically, the histogram equalization algorithm redistributes the gray values of the MR gray image and the CT gray image respectively: the preset range is uniformly divided into a plurality of intervals, and the pixels are redistributed so that the number of pixels falling in each interval is uniform, thereby obtaining the MR uniformly distributed image and the CT uniformly distributed image. For example, the preset range is divided into 5 intervals and each interval should contain 10 pixels. If only 7 pixels of the MR gray image fall into the 1st interval, 3 pixels with the smallest gray values are selected from the 2nd interval in ascending order, for example pixels with gray values 51, 55 and 60. The gray value of each selected pixel is multiplied by A1/A2, where A1 is the largest gray value in the 1st interval and A2 is the largest gray value among the pixels selected from the 2nd interval; with A1 = 50 and A2 = 60, the values 51, 55 and 60 are each multiplied by 50/60 so that the three pixels fall into the 1st interval. The remaining intervals are processed in the same manner.
Here, the preset range of the MR gray image is a range consisting of a minimum gray value and a maximum gray value of the MR gray image, and the preset range of the CT gray image is a range consisting of a minimum gray value and a maximum gray value of the CT gray image.
Step 1.4, respectively homogenizing the gray values of the MR uniform distribution image and the CT uniform distribution image to the [0, 255] interval to obtain an MR image to be processed and a CT image to be processed, wherein all the MR images to be processed form an MR image sequence to be processed, and all the CT images to be processed form a CT image sequence to be processed.
In this embodiment, the formula for homogenization is:

    g' = 255 * (g - g_min) / (g_max - g_min)

where g' is the gray value after homogenization, g is the gray value before homogenization, g_min is the minimum gray value of the MR uniformly distributed image or the CT uniformly distributed image, and g_max is the maximum gray value of the MR uniformly distributed image or the CT uniformly distributed image.
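As a rough illustration of this preprocessing (steps 1.2 to 1.4), a Python/NumPy sketch is given below; standard histogram equalization is used as a stand-in for the interval-balancing scheme described in step 1.3, and the function names are assumed.

    import numpy as np

    def minmax_to_255(img):
        # Step 1.4: spread the gray values of an equalized image onto [0, 255].
        g_min, g_max = float(img.min()), float(img.max())
        return (255.0 * (img - g_min) / (g_max - g_min)).astype(np.uint8)

    def preprocess(slice_gray):
        # Steps 1.3-1.4 on one gray-scale slice (2-D uint8 array). Standard
        # histogram equalization stands in for the interval-balancing scheme.
        hist, _ = np.histogram(slice_gray.flatten(), bins=256, range=(0, 256))
        cdf = hist.cumsum()
        cdf = 255.0 * cdf / cdf[-1]                  # equalization lookup table
        equalized = cdf[slice_gray].astype(np.uint8)
        return minmax_to_255(equalized)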
And 2, obtaining M groups of matched image pairs according to mutual information of the MR image to be processed and the CT image to be processed, and selecting a plurality of groups of matched image pairs from the M groups of matched image pairs as a training set, wherein each group of matched image pairs comprises one MR image to be processed and one CT image to be processed which are matched with each other.
Specifically, in this embodiment, matching processing is performed on the MR image to be processed and the CT image to be processed through mutual information of the MR image to be processed and the CT image to be processed, so as to obtain matching image pairs composed of M sets of mutually matched MR images to be processed and CT images to be processed, and then several sets of matching image pairs are selected from the matching image pairs as a training set, and the remaining matching image pairs are used as a test set.
For example, 80% of the matching image pairs are randomly selected as the training set, and the remaining 20% are selected as the test set.
In an alternative embodiment, step 2 may specifically include:
step 2.1, acquiring a first MR image to be processed in the MR image sequence to be processed.
Step 2.2, selecting the CT image to be processed with the maximum mutual information between the CT image to be processed and the first MR image to be processed from the CT image sequence to be processed based on the mutual information between the first MR image to be processed and the CT image to be processed, and forming a first group of matching image pairs.
In this embodiment, the mutual information of two images is calculated as:

    MI(A, B) = Σ_a Σ_b P_AB(a, b) * log[ P_AB(a, b) / (P_A(a) * P_B(b)) ]

where A is the MR image to be processed, B is the CT image to be processed, MI(A, B) is the mutual information between the MR image to be processed and the CT image to be processed, P_A(a) is the edge probability distribution of the MR image to be processed, equal to the number of pixels with gray value a in the MR image to be processed divided by the number of pixels in the MR image to be processed, P_B(b) is the edge probability distribution of the CT image to be processed, equal to the number of pixels with gray value b in the CT image to be processed divided by the number of pixels in the CT image to be processed, and P_AB(a, b) is the joint probability distribution of the gray values of the two images.
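A minimal Python/NumPy sketch of this mutual-information computation, with the function name and the number of histogram bins assumed for the example:

    import numpy as np

    def mutual_information(img_a, img_b, bins=256):
        # Mutual information of two equally sized gray-scale images,
        # estimated from their joint gray-value histogram.
        joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        p_ab = joint / joint.sum()          # joint probability P_AB(a, b)
        p_a = p_ab.sum(axis=1)              # edge probability P_A(a)
        p_b = p_ab.sum(axis=0)              # edge probability P_B(b)
        nz = p_ab > 0                       # avoid log(0)
        outer = p_a[:, None] * p_b[None, :]
        return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / outer[nz])))

The estimate is largest when the gray-value distributions of the two slices are most strongly dependent, which is why it serves as the matching criterion here.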
And 2.3, based on a preset step length, selecting matched CT images to be processed from the second MR image to be processed to the M th MR image to be processed in the CT image sequence to be processed, and obtaining M-1 group matching image pairs.
And 2.31, for the (m+1)-th MR image to be processed, judging whether n+lambda is an integer; if so, selecting the (n+lambda)-th CT image to be processed from the CT image sequence to be processed as the CT image to be matched; if not, selecting, from the two CT images to be processed adjacent to the (n+lambda)-th position, the one with larger mutual information with the (m+1)-th MR image to be processed as the CT image to be matched; wherein the m-th MR image to be processed and the n-th CT image to be processed form the m-th group of matching image pairs, lambda is the preset step length, lambda=Sm/Sc, Sm is the inter-layer interval of the MR image sequence to be processed, Sc is the inter-layer interval of the CT image sequence to be processed, m is more than or equal to 1 and less than M, and n is more than or equal to 1 and less than or equal to N.
Specifically, assuming that the m-th MR image to be processed and the n-th CT image to be processed match each other and form the m-th matching image pair, it is first determined whether n+lambda is an integer. If n+lambda is an integer, the (n+lambda)-th CT image to be processed is directly selected as the image to be matched with the (m+1)-th MR image to be processed. If n+lambda is not an integer, the two CT images to be processed closest to position n+lambda are taken (for example, if n+lambda is 6.5, the sixth and seventh CT images to be processed are taken), and the one with larger mutual information with the (m+1)-th MR image to be processed is selected as the image to be matched. Here, the inter-layer interval is the distance between two adjacent images.
Step 2.32, judging whether mutual information between the CT image to be matched and the (m+1) th MR image to be processed is larger than or equal to a preset threshold value, if so, forming the CT image to be matched and the (m+1) th MR image to be processed into an (m+1) th group of matched image pairs, and if not, selecting the CT image to be processed with the largest mutual information between the CT image to be matched and the (m+1) th MR image to be processed from the CT image sequence to be processed, and forming the (m+1) th group of matched image pairs.
In this embodiment, for other MR images to be processed, the CT images to be processed matching the other MR images are continuously searched according to the method of step 2.31 until all the matching image pairs are obtained.
It should be noted that, the specific value of the preset threshold is not limited in this embodiment, and those skilled in the art may set the preset threshold to 1.05 according to specific requirements.
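The selection rule of steps 2.31 and 2.32 can be sketched in Python as follows, reusing the mutual_information function above; the function name, the 0-based indexing and the global fallback search are assumptions made for the example.

    def match_ct_index(mr_imgs, ct_imgs, m, n, lam, mi_threshold):
        # Pick the CT index matched to MR image m+1, given that MR image m is
        # already matched to CT image n and lam = Sm / Sc (indices are 0-based).
        pos = n + lam
        if float(pos).is_integer():
            cand = int(pos)
        else:
            lo = int(pos)          # CT slice just before position n + lam
            hi = lo + 1            # CT slice just after
            cand = max((lo, hi),
                       key=lambda k: mutual_information(mr_imgs[m + 1], ct_imgs[k]))
        # step 2.32: fall back to a full search if the candidate is too weak
        if mutual_information(mr_imgs[m + 1], ct_imgs[cand]) < mi_threshold:
            cand = max(range(len(ct_imgs)),
                       key=lambda k: mutual_information(mr_imgs[m + 1], ct_imgs[k]))
        return cand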
Step 3, selecting two MR images to be processed and one CT image to be processed from the training set, and inputting the two MR images to be processed, the CT image to be processed and a Gaussian noise image together into a target image sequence enhancement model to be trained, so as to train the target image sequence enhancement model to be trained and obtain a trained target image sequence enhancement model, where the trained target image sequence enhancement model is used to match a CT image to be registered to obtain a registered MR image; wherein the two MR images to be processed selected from the training set are not matched with the selected CT image to be processed; and the target image sequence enhancement model is a model based on a generative adversarial network.
Specifically, three MR images to be processed and one CT image to be processed are selected from the training set, preferably three spatially consecutive MR images to be processed, denoted MR1, MR2 and MR3, with MR2 spatially located between MR1 and MR3; the selected CT image to be processed forms a matching image pair with MR2. A Gaussian noise image is then randomly generated, so that MR1, MR3, the CT image to be processed matched with MR2, and the Gaussian noise image can be input together into the target image sequence enhancement model to be trained, and the model is trained to obtain the trained target image sequence enhancement model.
In this embodiment, the target image sequence enhancement model is a model based on a generative adversarial network. Referring to fig. 2, it includes a generator and a discriminator, and the generator includes an encoder, a feature extraction unit and a decoder.
In an alternative embodiment, step 3 may specifically include:
Step 3.1, selecting two MR images to be processed and one CT image to be processed from the training set, and inputting the two MR images to be processed, the CT image to be processed and a Gaussian noise image to the encoder, so as to obtain a first feature matrix corresponding to the Gaussian noise image, a second feature matrix and a third feature matrix respectively corresponding to the two MR images to be processed, and a fourth feature matrix corresponding to the CT image to be processed.
Alternatively, before each image is input to the encoder, a ring of pixels with gray value 0 may be padded around it.
Optionally, the encoder has a plurality of sequentially connected image feature extraction layers with identical structure, each comprising a convolution layer, a pooling layer, a normalization layer and an activation layer connected in sequence. After the Gaussian noise image passes through the image feature extraction layers, the first feature matrix is obtained; after the two MR images to be processed pass through the image feature extraction layers respectively, the second feature matrix and the third feature matrix are correspondingly obtained; and after the CT image to be processed passes through the image feature extraction layers, the fourth feature matrix is obtained. For example, the encoder includes 4 image feature extraction layers connected in sequence.
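A minimal PyTorch sketch of such an encoder is given below; the channel widths and kernel sizes are assumptions for the example, since the patent does not specify them.

    import torch
    import torch.nn as nn

    class FeatureExtractionLayer(nn.Module):
        # One image feature extraction layer: conv -> pool -> norm -> activation.
        def __init__(self, in_ch, out_ch):
            super().__init__()
            self.block = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
                nn.MaxPool2d(2),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
        def forward(self, x):
            return self.block(x)

    class Encoder(nn.Module):
        # Four stacked feature extraction layers, applied separately to the
        # Gaussian noise image, the two MR images and the CT image.
        def __init__(self, in_ch=1, widths=(32, 64, 128, 256)):
            super().__init__()
            chans = (in_ch,) + widths
            self.layers = nn.Sequential(*[
                FeatureExtractionLayer(chans[i], chans[i + 1]) for i in range(4)
            ])
        def forward(self, x):
            return self.layers(x)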
And 3.2, inputting the first feature matrix, the second feature matrix, the third feature matrix and the fourth feature matrix into a feature extraction unit to obtain a feature matrix to be decoded.
In this embodiment, the feature extraction unit includes a graph attention module, a channel attention module, and a residual structure.
And 3.21, inputting the first feature matrix, the second feature matrix, the third feature matrix and the fourth feature matrix into the graph attention module, which, by capturing the similarity among features, obtains a fifth feature matrix corresponding to the Gaussian noise image, a sixth feature matrix and a seventh feature matrix respectively corresponding to the two MR images to be processed, and an eighth feature matrix corresponding to the CT image to be processed.
And 3.211, inputting the first feature matrix, the second feature matrix, the third feature matrix and the fourth feature matrix into the graph attention module, and dividing the first feature matrix into a plurality of first matrix blocks.
For example, the first matrix block has a size of I rows and 1 columns, I being for example 4.
And 3.212, sequentially selecting a first matrix block, and respectively selecting a second matrix block, a third matrix block and a fourth matrix block which are positioned in the same position as the first matrix block from the second feature matrix, the third feature matrix and the fourth feature matrix.
Specifically, the first matrix blocks may be sequentially selected in order from left to right and then from top to bottom, for the first matrix block currently selected, a second matrix block with the same position as the first matrix block is selected in the second feature matrix, a third matrix block with the same position as the first matrix block is selected in the third feature matrix, and a fourth matrix block with the same position as the first matrix block is selected in the fourth feature matrix.
Step 3.213, combining the first matrix block, the second matrix block, the third matrix block and the fourth matrix block into a combined matrix.
Specifically, the first matrix block, the second matrix block, the third matrix block, and the fourth matrix block are combined into a combined matrix in an order sequentially arranged from left to right. The size of the combining matrix is, for example, 4 rows and 4 columns.
And 3.214, selecting a plurality of pixel points adjacent to the first matrix block to obtain a pixel matrix.
Specifically, all the pixel points nearest to the first matrix block are selected from the first feature matrix, then a plurality of pixel points are selected from the pixel points randomly, and the pixel points are combined into a pixel matrix, for example, 8 pixel points are selected to form a pixel matrix with 4 rows and 2 columns.
Step 3.215, combining the combination matrix and the pixel matrix into a mosaic matrix.
Specifically, the combination matrix and the pixel matrix are spliced, so that a spliced matrix is obtained. The size of the splice matrix is, for example, 4 rows and 6 columns.
And 3.216, respectively carrying out weighting treatment on the first matrix block, the second matrix block, the third matrix block and the fourth matrix block in the spliced matrix according to the connection relation among the first matrix block, the second matrix block, the third matrix block and the fourth matrix block, and respectively correspondingly obtaining a fifth matrix block, a sixth matrix block, a seventh matrix block and an eighth matrix block.
Specifically, for a first matrix block in the spliced matrix, a matrix block with a connection relation with the first matrix block is selected from a second matrix block, a third matrix block and a fourth matrix block, then the first matrix block, the matrix block with a connection relation with the first matrix block and each column element in the pixel matrix are subjected to weighted summation processing, so as to obtain a fifth matrix block, for example, the second matrix block and the third matrix block are connected with the first matrix block, and the fourth matrix block is not connected with the first matrix block, so that the second matrix block and the third matrix block are multiplied by a coefficient respectively, each column element in the pixel matrix is multiplied by a coefficient respectively, then the multiplied results are added, and the added result is added with the first matrix block to obtain the fifth matrix block, wherein the value range of each coefficient is 0-1, and the sum of all coefficients is 1.
For a second matrix block in the spliced matrix, firstly selecting a matrix block with a connection relation with the second matrix block from the first matrix block, the third matrix block and the fourth matrix block, then carrying out weighted summation on the second matrix block and the matrix block with the connection relation with the second matrix block, namely multiplying the matrix block with the connection relation with the second matrix block by a coefficient respectively, then adding the multiplied results, and adding the added result to the second matrix block to obtain a sixth matrix block. For a third matrix block in the spliced matrix, firstly selecting a matrix block with a connection relation with the third matrix block from the first matrix block, the second matrix block and the fourth matrix block, then carrying out weighted summation on the third matrix block and the matrix block with the connection relation with the third matrix block, namely multiplying the matrix block with the connection relation with the third matrix block by a coefficient respectively, then adding the multiplied results, and adding the added result to the third matrix block to obtain a seventh matrix block. For a fourth matrix block in the spliced matrix, firstly selecting a matrix block with a connection relation with the fourth matrix block from the first matrix block, the second matrix block and the third matrix block, then carrying out weighted summation on the fourth matrix block and the matrix block with the connection relation with the fourth matrix block, namely multiplying the matrix block with the connection relation with the fourth matrix block by a coefficient respectively, then adding the multiplied results, and adding the added result with the fourth matrix block to obtain an eighth matrix block.
And 3.217, replacing the first matrix block of the first feature matrix, the second matrix block of the second feature matrix, the third matrix block of the third feature matrix and the fourth matrix block of the fourth feature matrix by using the fifth matrix block, the sixth matrix block, the seventh matrix block and the eighth matrix block.
Specifically, the first matrix block of the first feature matrix currently processed is replaced with a fifth matrix block, the second matrix block of the second feature matrix currently processed is replaced with a sixth matrix block, the third matrix block of the third feature matrix currently processed is replaced with a seventh matrix block, and the fourth matrix block of the fourth feature matrix currently processed is replaced with an eighth matrix block.
And 3.218, repeatedly executing the steps 3.212 to 3.217 until all matrix blocks in the first feature matrix, the second feature matrix, the third feature matrix and the fourth feature matrix are replaced, so as to obtain a fifth feature matrix, a sixth feature matrix, a seventh feature matrix and an eighth feature matrix.
Specifically, steps 3.212 to 3.217 are repeatedly executed, all first matrix blocks in the first feature matrix are replaced by fifth matrix blocks to obtain a fifth feature matrix, all second matrix blocks in the second feature matrix are replaced by sixth matrix blocks to obtain a sixth feature matrix, all third matrix blocks in the third feature matrix are replaced by seventh matrix blocks to obtain a seventh feature matrix, and all fourth matrix blocks in the fourth feature matrix are replaced by eighth matrix blocks to obtain an eighth feature matrix.
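As a rough illustration of one round of this block update (steps 3.212 to 3.217), consider the NumPy sketch below; the adjacency describing the "connection relationship" and the shared coefficient are assumptions made for the example.

    import numpy as np

    def update_blocks(blocks, neighbor_pixels, adjacency, w=0.25):
        # blocks          : list of four (I,) vectors taken at the same position from
        #                   the first to fourth feature matrices
        # neighbor_pixels : (I, K) array of K pixels adjacent to the first matrix block
        # adjacency[i]    : indices of the blocks connected to block i
        # w               : illustrative coefficient; the patent lets each term carry
        #                   its own coefficient in [0, 1], with all coefficients summing to 1
        new_blocks = []
        for i, blk in enumerate(blocks):
            agg = sum((w * blocks[j] for j in adjacency[i]), np.zeros_like(blk))
            if i == 0:  # only the first block also aggregates its neighboring pixels
                agg = agg + w * neighbor_pixels.sum(axis=1)
            new_blocks.append(blk + agg)  # the weighted sum is added back onto the block
        return new_blocks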
And 3.22, inputting the fifth feature matrix, the sixth feature matrix, the seventh feature matrix and the eighth feature matrix into a channel attention module, obtaining a first weight of the fifth feature matrix, a second weight of the sixth feature matrix, a third weight of the seventh feature matrix and a fourth weight of the eighth feature matrix by the channel attention module through convolution operation, multiplying the first weight by the fifth feature matrix, the second weight by the sixth feature matrix, the third weight by the seventh feature matrix, the fourth weight by the eighth feature matrix, and adding all multiplication results to obtain a ninth feature matrix.
Specifically, the channel attention module performs convolution operation on the fifth feature matrix, the sixth feature matrix, the seventh feature matrix and the eighth feature matrix through convolution kernels, so as to correspondingly obtain a first weight, a second weight, a third weight and a fourth weight, then multiplies the first weight by the fifth feature matrix, multiplies the second weight by the sixth feature matrix, multiplies the third weight by the seventh feature matrix, multiplies the fourth weight by the eighth feature matrix, and finally correspondingly adds all the multiplied results to obtain the ninth feature matrix. Therefore, the original characteristics can be properly saved while the characteristic matrixes are fused as much as possible.
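A minimal PyTorch sketch of such a channel attention fusion; deriving each scalar weight by a 1×1 convolution with global pooling, and normalizing the weights with a softmax, are assumptions made for the example.

    import torch
    import torch.nn as nn

    class ChannelAttentionFusion(nn.Module):
        # Derives one scalar weight per input feature matrix by convolution and
        # global pooling, then returns the weighted sum of the four matrices.
        def __init__(self, channels, n_inputs=4):
            super().__init__()
            self.score = nn.ModuleList([
                nn.Sequential(nn.Conv2d(channels, 1, kernel_size=1),
                              nn.AdaptiveAvgPool2d(1))
                for _ in range(n_inputs)
            ])
        def forward(self, feats):  # feats: list of four (B, C, H, W) tensors
            w = torch.softmax(torch.cat([s(f).flatten(1) for s, f in
                                         zip(self.score, feats)], dim=1), dim=1)
            return sum(w[:, i].view(-1, 1, 1, 1) * f for i, f in enumerate(feats))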
And 3.23, inputting the ninth feature matrix into a residual structure, and obtaining the feature matrix to be decoded after convolution operation.
Specifically, the ninth feature matrix is input into the residual structure, and a convolution operation with a convolution kernel restores the number of channels to the original number, thereby obtaining the feature matrix to be decoded, which is output to the decoder. The residual structure ensures the forward contribution of the original image features.
And 3.3, inputting the feature matrix to be decoded into a decoder to obtain a first MR enhanced image.
Optionally, the decoder has a plurality of image feature reconstruction layers connected in sequence, the number of the image feature reconstruction layers is the same as that of the image feature extraction layers, the structures of all the image feature reconstruction layers are the same, and the image feature reconstruction layers comprise a convolution layer, a deconvolution layer, a normalization layer and an activation layer which are connected in sequence. The perception and extraction of image features is enhanced by the decoder.
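A matching PyTorch sketch of the decoder, with channel widths again assumed for the example:

    import torch.nn as nn

    class ReconstructionLayer(nn.Module):
        # One image feature reconstruction layer: conv -> deconv -> norm -> activation.
        def __init__(self, in_ch, out_ch):
            super().__init__()
            self.block = nn.Sequential(
                nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1),
                nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
        def forward(self, x):
            return self.block(x)

    class Decoder(nn.Module):
        # Four reconstruction layers mirroring the encoder; outputs the MR enhanced image.
        def __init__(self, widths=(256, 128, 64, 32), out_ch=1):
            super().__init__()
            chans = widths + (out_ch,)
            self.layers = nn.Sequential(*[
                ReconstructionLayer(chans[i], chans[i + 1]) for i in range(4)
            ])
        def forward(self, x):
            return self.layers(x)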
Step 3.4, based on the dynamic differential loss function constructed by the first MR enhanced image and the MR image to be processed to be registered, adjusting parameters of a target image sequence enhanced model to be trained in a counter-propagation mode to obtain a primarily trained target image sequence enhanced model, and obtaining a second MR enhanced image output by the primarily trained target image sequence enhanced model; the to-be-registered to-be-processed MR images are selected from the training set and are positioned between two to-be-processed MR images input to the to-be-trained target image sequence enhancement model, and the to-be-processed CT images input to the to-be-trained target image sequence enhancement model and the to-be-registered to-be-processed MR images are positioned in the same group of matched image pairs.
Specifically, a dynamic differential loss function is constructed from the first MR enhanced image and the MR image to be processed to be registered, a loss value is obtained through this loss function, and the parameters of the target image sequence enhancement model to be trained are adjusted by back-propagation until the loss value reaches a minimum or the maximum number of training iterations is reached, thereby obtaining the primarily trained target image sequence enhancement model; at this time, the output of the primarily trained target image sequence enhancement model is the second MR enhanced image.
Here, the MR images to be registered to be processed are spatially located between two MR images to be processed which are input to the target image sequence enhancement model to be trained, and are matched with the CT images to be processed which are input to the target image sequence enhancement model to be trained.
Alternatively, the dynamic differential loss function is constructed from the following quantities: the MR image to be processed to be registered; the first MR enhanced image; the generator's mapping from the input images to the first MR enhanced image; the expectation taken over the pixels of the first MR enhanced image under its probability distribution; the number of pixels in the MR image to be processed to be registered; the per-pixel difference between the generator's mapping result for the MR image to be processed to be registered and the first MR enhanced image; and the average value of all such per-pixel differences.
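For orientation, one generator update of step 3.4 might be sketched as follows; dynamic_differential_loss is only a placeholder standing in for the loss described above, and the function and argument names are assumptions.

    import torch

    def dynamic_differential_loss(pred, target):
        # Placeholder only: the patent defines this loss from the per-pixel
        # differences between the generator output and the MR image to be
        # registered; a simple L1 term is used here as a stand-in.
        return (pred - target).abs().mean()

    def generator_step(encoder, feature_unit, decoder, optimizer,
                       mr1, mr3, ct, target_mr2):
        # One back-propagation update of step 3.4.
        noise = torch.randn_like(ct)
        feats = [encoder(x) for x in (noise, mr1, mr3, ct)]
        pred = decoder(feature_unit(feats))
        loss = dynamic_differential_loss(pred, target_mr2)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return pred.detach(), loss.item()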
And 3.5, inputting the second MR enhanced image into the discriminator so as to obtain a trained target image sequence enhanced model according to the discrimination result of the discriminator.
And 3.51, inputting the second MR enhanced image into a discriminator to obtain a discrimination value.
And 3.52, judging the relation between the discrimination value and the discrimination threshold. If the discrimination value is larger than the discrimination threshold and the discrimination result is false, the discrimination threshold is updated according to the loss value obtained from the kurtosis loss function, and the primarily trained target image sequence enhancement model continues to be trained according to the training steps above until the discrimination value is smaller than or equal to the latest discrimination threshold, so as to obtain the trained target image sequence enhancement model. If the discrimination value is smaller than or equal to the discrimination threshold and the discrimination result is true, the primarily trained target image sequence enhancement model is taken as the trained target image sequence enhancement model.
In this embodiment, the update method of the discrimination threshold is as follows: and multiplying the loss value obtained by using the kurtosis loss function by the non-updated discrimination threshold to obtain the updated discrimination threshold.
Alternatively, the kurtosis loss function is constructed from the following quantities: the second MR enhanced image; the generator's mapping to the second MR enhanced image; the expectation taken over the pixels of the second MR enhanced image under its probability distribution; the number of pixels in the second MR enhanced image; the pixels of the second MR enhanced image, together with their mean value and the standard deviation of the second MR enhanced image; and the pixels of the MR image to be processed to be registered, together with their mean value and the standard deviation of the MR image to be processed to be registered.
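The check of steps 3.51 and 3.52 can be sketched as follows; kurtosis_loss and train_one_round stand in for the kurtosis loss function and for a further pass of the training steps above, and are assumptions for the example.

    def adversarial_check(discriminator, enhanced_mr, target_mr, threshold,
                          kurtosis_loss, train_one_round):
        # Repeat training until the discrimination value no longer exceeds the
        # (dynamically updated) discrimination threshold.
        value = discriminator(enhanced_mr).item()
        while value > threshold:                                # result judged false
            threshold = kurtosis_loss(enhanced_mr, target_mr) * threshold  # update threshold
            enhanced_mr = train_one_round()                     # keep training the model
            value = discriminator(enhanced_mr).item()
        return enhanced_mr, threshold                           # result judged true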
Optionally, the discriminator comprises 4 sequentially connected convolution layers whose sizes decrease from 16×16 to 4×4, followed by a normalization layer, an activation function layer and a fully connected layer.
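A minimal PyTorch sketch of such a discriminator; reading the 16×16 to 4×4 sizes as kernel sizes, and the channel widths, are assumptions made for the example.

    import torch.nn as nn

    class Discriminator(nn.Module):
        # Four convolutions with shrinking kernels, then norm, activation and a
        # fully connected layer producing one discrimination value.
        def __init__(self, in_ch=1):
            super().__init__()
            kernels = (16, 12, 8, 4)          # assumed progression from 16x16 down to 4x4
            chans = (in_ch, 16, 32, 64, 64)
            convs = [nn.Conv2d(chans[i], chans[i + 1], kernel_size=kernels[i], stride=2)
                     for i in range(4)]
            self.features = nn.Sequential(*convs,
                                          nn.BatchNorm2d(chans[-1]),
                                          nn.LeakyReLU(0.2),
                                          nn.AdaptiveAvgPool2d(1))
            self.fc = nn.Linear(chans[-1], 1)
        def forward(self, x):                 # x must be large enough for the kernels
            return self.fc(self.features(x).flatten(1))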
In addition, since the MR image data set is small, gradient vanishing or over-fitting easily occurs; therefore, a norm regularization on the image gradient can be added after the discriminator to improve the training effect and obtain a more stable discrimination result.
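One common way to realize such a regularization, given here only as an assumed example, is a penalty on the norm of the discriminator's gradient with respect to its input image:

    import torch

    def gradient_norm_penalty(discriminator, image, weight=10.0):
        # Penalize the norm of the discriminator's gradient with respect to the
        # input image (an assumed reading of the "norm regularization for the
        # image gradient" mentioned above).
        image = image.clone().requires_grad_(True)
        score = discriminator(image).sum()
        (grad,) = torch.autograd.grad(score, image, create_graph=True)
        return weight * grad.flatten(1).norm(2, dim=1).pow(2).mean()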
In a specific embodiment, after the trained target image sequence enhancement model is obtained, the trained target image sequence enhancement model may be further tested by using a test set, and the remaining sets of matching image pairs that are not obtained as the training set may be used as the test set. And testing the trained target image sequence enhancement model through the test set to determine the effect of the trained target image sequence enhancement model.
The invention carries out sequence registration based on the combination of mutual information and interlayer interval information; by introducing a multichannel input and feature extraction unit into the generator, MR enhanced images which can be used for image fusion and automatic segmentation model training are generated, the problem of data deficiency in subsequent fusion segmentation model training is effectively relieved, the accuracy of three-dimensional image reconstruction is improved, and a basic technical support effect is provided for the field of medical image processing.
The trained target image sequence enhancement model obtained by the invention can generate MR enhanced images that can be used in the subsequent training of fusion segmentation models, which helps to alleviate the shortage of medical image data. The adopted graph attention module can capture similarity information among images and optimize the details of local features; the channel attention module can adjust the weight of each layer of input information during feature extraction. Under their combined action, the generated MR enhanced images are more realistic and can provide effective data for subsequent training and other applications.
Example two
The invention also provides a multi-mode image registration method based on the first embodiment, which comprises the following steps:
Acquiring CT images to be registered;
And inputting the CT image to be registered into the trained target image sequence enhancement model in the first embodiment to obtain a registered MR image.
Specifically, on the basis of obtaining the trained target image sequence enhancement model in the embodiment, a CT image (i.e., a CT image to be registered) actually required to be registered may be input to the trained target image sequence enhancement model, and the generator of the trained target image sequence enhancement model may output the registered MR image.
The multi-mode image registration method provided in the present embodiment has similar implementation principle and technical effects to those of the multi-mode image enhancement and registration method provided in the first embodiment, and will not be described herein.
In the description of the present invention, it should be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present invention, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, one skilled in the art can combine and combine the different embodiments or examples described in this specification.
The foregoing is a further detailed description of the invention in connection with the preferred embodiments, and it is not intended that the invention be limited to the specific embodiments described. It will be apparent to those skilled in the art that several simple deductions or substitutions may be made without departing from the spirit of the invention, and these should be considered to be within the scope of the invention.

Claims (8)

1. A training method for a model for multi-modal image enhancement and registration, the training method comprising:
Acquiring an MR image sequence to be processed and a CT image sequence to be processed, wherein the MR image sequence to be processed comprises M MR images to be processed which are arranged in sequence, the CT image sequence to be processed comprises N CT images to be processed which are arranged in sequence, and M and N are integers larger than 0;
Obtaining M groups of matched image pairs according to mutual information of the MR image to be processed and the CT image to be processed, and selecting a plurality of groups of matched image pairs from the M groups of matched image pairs as a training set, wherein each group of matched image pairs comprises one MR image to be processed and one CT image to be processed which are matched with each other;
Selecting two MR images to be processed and one CT image to be processed from the training set, and inputting the two MR images to be processed, the CT image to be processed and a Gaussian noise image together into a target image sequence enhancement model to be trained, so as to train the target image sequence enhancement model to be trained and obtain a trained target image sequence enhancement model, wherein the trained target image sequence enhancement model is used for matching a CT image to be registered to obtain a registered MR image; wherein the two MR images to be processed selected from the training set are not matched with the selected CT image to be processed; and the target image sequence enhancement model is a model based on a generative adversarial network, wherein,
The target image sequence enhancement model comprises a generator and a discriminator, wherein the generator comprises an encoder, a feature extraction unit and a decoder;
Selecting two MR images to be processed and one CT image to be processed from the training set, and inputting the two selected MR images to be processed, the one CT image to be processed and one Gaussian noise image together into the target image sequence enhancement model to be trained so as to train the target image sequence enhancement model to be trained and obtain the trained target image sequence enhancement model comprises the following steps:
Selecting two MR images to be processed and one CT image to be processed from the training set, and inputting the two MR images to be processed, the one CT image to be processed and the one Gaussian noise image to the encoder to obtain a first feature matrix corresponding to the Gaussian noise image, a second feature matrix and a third feature matrix respectively corresponding to the two MR images to be processed, and a fourth feature matrix corresponding to the CT image to be processed;
Inputting the first feature matrix, the second feature matrix, the third feature matrix and the fourth feature matrix to the feature extraction unit to obtain a feature matrix to be decoded;
inputting the feature matrix to be decoded to the decoder to obtain a first MR enhanced image;
Based on a dynamic differential loss function constructed from the first MR enhanced image and the to-be-registered MR image to be processed, adjusting parameters of the target image sequence enhancement model to be trained by back-propagation to obtain a primarily trained target image sequence enhancement model, and obtaining a second MR enhanced image output by the primarily trained target image sequence enhancement model; wherein the to-be-registered MR image to be processed is selected from the training set and is positioned between the two MR images to be processed that are input to the target image sequence enhancement model to be trained, and the CT image to be processed that is input to the target image sequence enhancement model to be trained and the to-be-registered MR image to be processed belong to the same group of matching image pairs;
Inputting the second MR enhanced image to the discriminator so as to obtain the trained target image sequence enhancement model according to the discrimination result of the discriminator;
wherein the dynamic differential loss function is expressed as:

(formula not reproduced in this text)

wherein L_dd denotes the dynamic differential loss function, x denotes the to-be-registered MR image to be processed, y denotes the first MR enhanced image, G denotes the mapping from x to y that the generator aims to learn, E denotes the expectation operation taken over the individual pixels of the first MR enhanced image, p(y) denotes the probability distribution of the first MR enhanced image, n denotes the number of pixels in the to-be-registered MR image to be processed, d_i denotes the difference between the result of the generator's mapping of the i-th pixel of the to-be-registered MR image to be processed and the first MR enhanced image, and d̄ denotes the average value of all d_i.
2. Training method according to claim 1, characterized in that acquiring an MR image sequence to be processed and a CT image sequence to be processed comprises:
acquiring an initial MR image sequence and an initial CT image sequence, wherein the initial MR image sequence comprises M initial MR images, and the initial CT image sequence comprises N initial CT images;
Converting the brightness values of the initial MR image and the initial CT image into gray values to obtain an MR gray image and a CT gray image;
Respectively reassigning the gray values of the MR gray scale image and the CT gray scale image by adopting a histogram equalization algorithm, so that the gray values of the MR gray scale image and the CT gray scale image are uniformly distributed within a preset range, and an MR uniform distribution image and a CT uniform distribution image are obtained;
And respectively normalizing the gray values of the MR uniform distribution image and the CT uniform distribution image to the interval [0, 255] to obtain the MR image to be processed and the CT image to be processed, wherein all the MR images to be processed form the MR image sequence to be processed, and all the CT images to be processed form the CT image sequence to be processed.
3. Training method according to claim 1, characterized in that obtaining M sets of matching image pairs from mutual information of the MR image to be processed and the CT image to be processed comprises:
Acquiring a first MR image to be processed in the MR image sequence to be processed;
Selecting a CT image to be processed with the maximum mutual information between the CT image to be processed and the first MR image to be processed from the CT image sequence to be processed based on the mutual information between the first MR image to be processed and the CT image to be processed, and forming a first group of matched image pairs;
And based on a preset step length, selecting, for the second MR image to be processed to the M-th MR image to be processed, matched CT images to be processed from the CT image sequence to be processed, to obtain M-1 groups of matching image pairs.
4. The training method according to claim 3, wherein selecting, based on the preset step length, matched CT images to be processed from the CT image sequence to be processed for the second MR image to be processed to the M-th MR image to be processed, to obtain the M-1 groups of matching image pairs, comprises:
For the (m+1)-th MR image to be processed, judging whether n+λ is an integer; if so, selecting the (n+λ)-th CT image to be processed from the CT image sequence to be processed as a CT image to be matched; if not, selecting, from the two CT images to be processed adjacent to the (n+λ)-th CT image to be processed, the one having larger mutual information with the (m+1)-th MR image to be processed as the CT image to be matched; wherein the m-th MR image to be processed and the n-th CT image to be processed form the m-th group of matching image pairs, λ is the preset step length, λ = Sm/Sc, Sm is the interlayer spacing of the MR image sequence to be processed, Sc is the interlayer spacing of the CT image sequence to be processed, 1 ≤ m ≤ M-1, and 1 ≤ n ≤ N;
Judging whether the mutual information between the CT image to be matched and the (m+1)-th MR image to be processed is larger than or equal to a preset threshold value; if so, forming the CT image to be matched and the (m+1)-th MR image to be processed into the (m+1)-th group of matching image pairs; and if not, selecting, from the CT image sequence to be processed, the CT image to be processed having the largest mutual information with the (m+1)-th MR image to be processed, to form the (m+1)-th group of matching image pairs.
5. The training method of claim 1, wherein the feature extraction unit comprises a graph attention module, a channel attention module, and a residual structure;
Inputting the first feature matrix, the second feature matrix, the third feature matrix and the fourth feature matrix to the feature extraction unit to obtain a feature matrix to be decoded, including:
Inputting the first feature matrix, the second feature matrix, the third feature matrix and the fourth feature matrix into the graph attention module, wherein the graph attention module obtains, by capturing the similarity among features, a fifth feature matrix corresponding to the Gaussian noise image, a sixth feature matrix and a seventh feature matrix respectively corresponding to the two MR images to be processed, and an eighth feature matrix corresponding to the CT image to be processed;
Inputting the fifth feature matrix, the sixth feature matrix, the seventh feature matrix and the eighth feature matrix into the channel attention module, wherein the channel attention module obtains a first weight of the fifth feature matrix, a second weight of the sixth feature matrix, a third weight of the seventh feature matrix and a fourth weight of the eighth feature matrix through convolution operation, multiplies the first weight by the fifth feature matrix, the second weight by the sixth feature matrix, the third weight by the seventh feature matrix, the fourth weight by the eighth feature matrix, and adds all multiplication results to obtain a ninth feature matrix;
And inputting the ninth feature matrix into the residual structure, and obtaining the feature matrix to be decoded after convolution operation.
6. The training method of claim 5, wherein inputting the first feature matrix, the second feature matrix, the third feature matrix and the fourth feature matrix to the graph attention module, so that the graph attention module obtains, by capturing the similarities between features, the fifth feature matrix corresponding to the Gaussian noise image, the sixth feature matrix and the seventh feature matrix respectively corresponding to the two MR images to be processed, and the eighth feature matrix corresponding to the CT image to be processed, comprises:
Step 3.211, inputting the first feature matrix, the second feature matrix, the third feature matrix and the fourth feature matrix to the graph attention module, and then dividing the first feature matrix into a plurality of first matrix blocks;
Step 3.212, sequentially selecting one first matrix block, and respectively selecting a second matrix block, a third matrix block and a fourth matrix block which are the same as the first matrix block in position from the second feature matrix, the third feature matrix and the fourth feature matrix;
Step 3.213, combining the first matrix block, the second matrix block, the third matrix block and the fourth matrix block into a combined matrix;
step 3.214, selecting a plurality of pixel points adjacent to the first matrix block to obtain a pixel matrix;
Step 3.215, combining the combination matrix and the pixel matrix into a splicing matrix;
Step 3.216, respectively performing weighted summation processing on the first matrix block, the second matrix block, the third matrix block and the fourth matrix block in the spliced matrix according to the connection relationship among the first matrix block, the second matrix block, the third matrix block and the fourth matrix block, so as to respectively and correspondingly obtain a fifth matrix block, a sixth matrix block, a seventh matrix block and an eighth matrix block;
Step 3.217, replacing the first matrix block of the first feature matrix, the second matrix block of the second feature matrix, the third matrix block of the third feature matrix, and the fourth matrix block of the fourth feature matrix with the fifth matrix block, the sixth matrix block, the seventh matrix block, and the eighth matrix block;
and step 3.218, repeating the steps 3.212 to 3.217 until all matrix blocks in the first feature matrix, the second feature matrix, the third feature matrix and the fourth feature matrix are replaced, and correspondingly obtaining the fifth feature matrix, the sixth feature matrix, the seventh feature matrix and the eighth feature matrix.
7. The training method of claim 1, wherein inputting the second MR enhanced image to the discriminator to obtain the trained target image sequence enhancement model based on the discrimination result of the discriminator comprises:
Inputting the second MR enhanced image to the discriminator to obtain a discrimination value;
Judging the relation between the discrimination value and a discrimination threshold; if the discrimination value is larger than the discrimination threshold and the discrimination result is false, updating the discrimination threshold according to a loss value obtained by a kurtosis loss function, and continuing to train the primarily trained target image sequence enhancement model until the discrimination value is smaller than or equal to the latest discrimination threshold, so as to obtain the trained target image sequence enhancement model; and if the discrimination value is smaller than or equal to the discrimination threshold and the discrimination result is true, taking the primarily trained target image sequence enhancement model as the trained target image sequence enhancement model.
8. A method for registration of multi-modal images, comprising:
Acquiring a CT image to be registered;
Inputting the CT image to be registered into the trained target image sequence enhancement model obtained by the training method according to any one of claims 1 to 7, to obtain a registered MR image.
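The dynamic differential loss of claim 1 is built from per-pixel differences between the generator output and the to-be-registered MR image, their average, and the pixel count. The formula itself is only given as an image in the original publication, so the sketch below computes just the quantities the claim names; how they are combined into the final loss is left open, and the function name is an assumption introduced for illustration only.

```python
import torch

def dynamic_differential_terms(generated: torch.Tensor, target: torch.Tensor):
    """Quantities named in claim 1: per-pixel differences d_i, pixel count n, mean d_bar.

    generated: first MR enhanced image produced by the generator
    target:    to-be-registered MR image to be processed (ground-truth slice)
    """
    d = (generated - target).abs().flatten()  # d_i for every pixel
    n = d.numel()                             # number of pixels in the target slice
    d_bar = d.mean()                          # average value of all d_i
    return d, n, d_bar
```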
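Claim 2 normalizes both modalities before matching: gray-value conversion, histogram equalization so the gray values spread over a preset range, and rescaling to [0, 255]. A minimal sketch of the equalize-and-rescale step for one already-grayscale slice, assuming a plain CDF-based equalization (the claim does not fix a particular equalization algorithm):

```python
import numpy as np

def equalize_and_rescale(gray: np.ndarray) -> np.ndarray:
    """Histogram-equalize a grayscale slice and map it onto [0, 255]."""
    flat = gray.astype(np.float64).ravel()
    hist, bin_edges = np.histogram(flat, bins=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1)  # normalized CDF in [0, 1]
    # Reassign every pixel to its CDF value, then stretch onto [0, 255].
    equalized = np.interp(flat, bin_edges[:-1], cdf).reshape(gray.shape)
    return (equalized * 255.0).astype(np.uint8)
```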
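Claims 3 and 4 pair slices by mutual information: the first MR slice is matched exhaustively against the CT sequence, and each later slice is matched near the position predicted by the step length λ = Sm/Sc. An illustrative sketch of the mutual-information score and both matching steps; the 32-bin histogram, the function names and the 1-based indexing convention are assumptions, not the patented implementation.

```python
import numpy as np

def mutual_information(img_a: np.ndarray, img_b: np.ndarray, bins: int = 32) -> float:
    """Mutual information of two equally sized grayscale slices (in nats)."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                # joint probability
    px = pxy.sum(axis=1, keepdims=True)      # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)      # marginal of img_b
    nz = pxy > 0                             # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def match_first_slice(mr_slice: np.ndarray, ct_slices: list) -> int:
    """Claim 3 style: 0-based index of the CT slice with maximal mutual information."""
    scores = [mutual_information(mr_slice, ct) for ct in ct_slices]
    return int(np.argmax(scores))

def match_next_slice(mr_slice: np.ndarray, ct_slices: list, n_prev: int, lam: float) -> int:
    """Claim 4 style: try position n_prev + lam, else the better of its two neighbours (1-based)."""
    pos = n_prev + lam
    if float(pos).is_integer():
        candidates = [int(pos)]
    else:
        candidates = [int(np.floor(pos)), int(np.ceil(pos))]
    candidates = [c for c in candidates if 1 <= c <= len(ct_slices)]
    return max(candidates, key=lambda c: mutual_information(mr_slice, ct_slices[c - 1]))
```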
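Claim 5 fuses the four attention-refined feature matrices with channel attention: a convolution produces one weight per feature matrix, each matrix is scaled by its weight, and the results are summed before the residual block. A minimal PyTorch sketch of that weighted fusion; the 1x1 convolutions, the sigmoid gating and the layer sizes are assumptions rather than the patented design.

```python
import torch
import torch.nn as nn

class ChannelAttentionFusion(nn.Module):
    def __init__(self, channels: int, n_inputs: int = 4):
        super().__init__()
        # One 1x1 convolution branch per input produces that input's weight.
        self.weight_convs = nn.ModuleList(
            [nn.Conv2d(channels, 1, kernel_size=1) for _ in range(n_inputs)]
        )

    def forward(self, feats: list) -> torch.Tensor:
        fused = 0
        for conv, f in zip(self.weight_convs, feats):
            w = torch.sigmoid(conv(f).mean(dim=(2, 3), keepdim=True))  # per-sample scalar weight
            fused = fused + w * f                                       # weighted sum of inputs
        return fused

# Usage sketch: fused = ChannelAttentionFusion(64)([f5, f6, f7, f8]) with 64-channel feature maps.
```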
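Claim 6 mixes corresponding blocks of the four feature matrices according to their mutual similarity and writes the weighted results back block by block. The sketch below only illustrates that block-wise weighted mixing with a plain softmax over dot-product similarities standing in for the unspecified "connection relationship" weights; the neighbouring-pixel splice of steps 3.214–3.215 is omitted, and every name here is an illustrative assumption.

```python
import numpy as np

def graph_attention_blocks(mats, block: int = 8):
    """Block-wise weighted mixing across four same-shape 2D feature matrices."""
    out = [m.copy() for m in mats]
    h, w = mats[0].shape
    for r in range(0, h, block):
        for c in range(0, w, block):
            # Corresponding blocks from all four feature matrices, flattened.
            stack = np.stack([m[r:r + block, c:c + block].ravel() for m in mats])
            sim = stack @ stack.T                                   # pairwise similarities
            att = np.exp(sim - sim.max(axis=1, keepdims=True))
            att /= att.sum(axis=1, keepdims=True)                   # row-wise softmax
            mixed = att @ stack                                     # weighted sums of the blocks
            for k, m in enumerate(out):                             # replace the original blocks
                m[r:r + block, c:c + block] = mixed[k].reshape(
                    m[r:r + block, c:c + block].shape)
    return out
```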
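Claim 7 keeps training the primarily trained model until the discriminator's output falls to or below a threshold that is itself updated from a kurtosis-based loss. Neither the kurtosis loss nor the update rule is spelled out in the claim, so everything below is a placeholder control-flow sketch: `kurtosis_loss`, `train_step`, the additive threshold update and the round limit are all assumptions.

```python
def train_until_accepted(generate, discriminate, train_step, kurtosis_loss,
                         threshold: float, max_rounds: int = 1000) -> float:
    """Acceptance loop of claim 7: stop when the discrimination value <= the latest threshold."""
    for _ in range(max_rounds):
        enhanced = generate()                  # second MR enhanced image
        score = discriminate(enhanced)         # discrimination value
        if score <= threshold:                 # judged "true": model is accepted as trained
            return score
        threshold += kurtosis_loss(enhanced)   # placeholder threshold update
        train_step()                           # keep training the primarily trained model
    raise RuntimeError("model not accepted within max_rounds")
```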
CN202410840678.4A 2024-06-27 2024-06-27 Model training method and registration method for multi-mode image enhancement and registration Active CN118397059B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410840678.4A CN118397059B (en) 2024-06-27 2024-06-27 Model training method and registration method for multi-mode image enhancement and registration

Publications (2)

Publication Number Publication Date
CN118397059A CN118397059A (en) 2024-07-26
CN118397059B true CN118397059B (en) 2024-09-17

Family

ID=91996007

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410840678.4A Active CN118397059B (en) 2024-06-27 2024-06-27 Model training method and registration method for multi-mode image enhancement and registration

Country Status (1)

Country Link
CN (1) CN118397059B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112906720A (en) * 2021-03-19 2021-06-04 河北工业大学 Multi-label image identification method based on graph attention network
CN114882051A (en) * 2022-04-25 2022-08-09 大连理工大学 Automatic segmentation and three-dimensional reconstruction method for pelvic bone tumor based on multi-modal image

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7583857B2 (en) * 2005-08-24 2009-09-01 Siemens Medical Solutions Usa, Inc. System and method for salient region feature based 3D multi modality registration of medical images
CN113506334B (en) * 2021-06-07 2023-12-15 刘星宇 Multi-mode medical image fusion method and system based on deep learning
CN116402865B (en) * 2023-06-06 2023-09-15 之江实验室 Multi-mode image registration method, device and medium using diffusion model
CN116823613A (en) * 2023-06-30 2023-09-29 复旦大学 Multi-mode MR image super-resolution method based on gradient enhanced attention

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant